Putting AI best practice into a New Zealand context
New technologies, trends, legislative requirements, threats and opportunities . . . there is always something new to assess and discuss around the board table.
Hello AI!
While there is certainly benefit to be reaped from Generative AI tools, data loss, privacy breaches, biased outputs and copyright breaches are just some of the issues that can arise if these tools aren’t used in the correct way.
Good governance of AI usage within an organisation is the key to mitigating risk. That means directors also need to continually expand their understanding and vocabularies when considering AI implications.
Chances are, 12 months ago, few directors could have succinctly explained what phrases such as ‘neural network’ or ‘large language model’ actually mean – or what boards and organisations need to do about them.
But AI is now here, there and everywhere . . . and directors need to keep on top of this ever-evolving phenomenon.
With that in mind, IoD national sponsor Kordia has produced two useful toolkits that every director should keep to hand – an AI Usage Policy Checklist and a glossary titled AI Terms Explained.
The glossary also includes a summary of major AI tools available, such as ChatGPT and DALL-E.
Kordia’s glossary below, while by no means a complete list, will help directors navigate this rapidly evolving and at-times intimidating new technology.
Understanding the terms will also assist boards and management in drafting easily understood AI usage policies for their organisations as they take steps to ensure ethical and secure use of these tools.
Artificial General Intelligence (AGI)
AI that is considered to have a human level of intelligence and is capable of performing tasks across a wide array of areas.
Artificial Intelligence (AI)
A computer system that can use a neural network to analyse data and produce reasoned responses to queries based on the data provided.
Bias
An inclination or prejudice for or against something, especially in a way considered to be unfair. Several types of bias are referred to within AI. Computational bias is when AI produces a systematic error or deviation from the true value of a prediction – which can be caused by the AI model making an assumption, or by an issue with the data it has been trained on or fed. Cognitive bias refers to inaccurate individual judgment or distorted thinking, while societal bias leads to systemic prejudice, favouritism and/or discrimination in favour of or against an individual or group. These two types of bias can factor into AI if the training data hasn’t come from a wide range of diverse sources.
Chatbot
A form of AI designed to simulate human-like conversations and interactions, using Natural Language Processing to understand and respond to questions. Often used in customer assistance settings.
Deepfake
Images and videos that have been manipulated to depict realistic-looking but ultimately fake events. Often used to spread misinformation or for purposes such as blackmail.
Deep Learning
A subset of Machine Learning (ML) in which artificial neural networks that mimic the human brain are used to perform complex tasks without supervision.
Explainability
The ability to describe, or provide sufficient information about, how an AI system generates a specific output or arrives at a decision in response to a particular question in a specific context.
Generative AI
A field of AI that uses machine learning models trained on large data sets to create new content, such as written text, code, images, music, simulations and videos. These models can generate novel outputs based on input data or user prompts.
Hallucination
Instances where a generative AI model creates content that either contradicts its source or is factually incorrect, presented under the appearance of fact.
Large Language Model (LLM)
A form of AI that utilises deep learning algorithms to create models trained on massive text data sets to analyse and learn patterns and relationships among characters, words and phrases.
Machine Learning (ML)
A subset of AI that concentrates on the use of algorithms that improve through iterative use.
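To make “improve through iterative use” concrete, here is a minimal, hypothetical sketch: a model with a single adjustable parameter refines itself with each pass over example data. The data, learning rate and iteration count are invented for illustration.

```python
# A minimal sketch of machine learning: the model's single parameter
# improves through repeated passes over example data.
# All values here are illustrative, not from any real system.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs paired with targets (y = 2x)

weight = 0.0          # the model's one learnable parameter
learning_rate = 0.05

for epoch in range(200):                      # each pass refines the weight
    for x, target in data:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x   # step that reduces squared error

print(round(weight, 2))  # converges towards 2.0, the true relationship
```

Each iteration nudges the parameter to reduce its prediction error – the essence of “learning” from data rather than being explicitly programmed.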
Natural Language Processing (NLP)
A subfield of AI that helps computers understand, interpret and manipulate human language. It enables machines to read text or spoken language, interpret its meaning, measure sentiment and determine which parts are important for understanding.
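As a toy illustration of one NLP task – measuring sentiment – the sketch below scores text against hand-made word lists. Real NLP systems learn these associations from large text corpora; the words and scoring here are invented for the example.

```python
# A deliberately naive sentiment sketch: count positive words minus
# negative words. The word lists are invented for illustration only.

POSITIVE = {"good", "great", "excellent", "helpful"}
NEGATIVE = {"bad", "poor", "terrible", "unhelpful"}

def sentiment(text: str) -> int:
    """Return a crude sentiment score for a piece of text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("The service was great and the staff were helpful"))  # 2
print(sentiment("A poor response and a terrible outcome"))            # -2
```

Modern NLP replaces these fixed lists with learned models, but the task – turning language into a measurable signal – is the same.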
Neural network
A type of model used in machine learning that mimics the way neurons in the brain interact, with multiple processing layers, including at least one hidden layer. This layered approach enables neural networks to model complex nonlinear relationships and patterns within data. Artificial neural networks have a range of applications, such as image recognition and medical diagnosis.
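The layered structure can be sketched in a few lines: inputs feed a hidden layer, whose outputs feed a final output neuron. The weights below are hand-picked purely for illustration; a real network learns them from data.

```python
import math

# A minimal neural network forward pass: input layer -> one hidden
# layer -> one output. Weights are illustrative, not learned.

def sigmoid(x: float) -> float:
    # Nonlinearity applied by each neuron; squashes any value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Each hidden neuron combines all inputs, then applies the nonlinearity.
    hidden = [sigmoid(sum(w * i for w, i in zip(ws, inputs)))
              for ws in hidden_weights]
    # The output neuron combines the hidden activations the same way.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Two inputs, two hidden neurons, one output.
score = forward([1.0, 0.5],
                hidden_weights=[[0.4, -0.6], [0.7, 0.2]],
                output_weights=[1.2, -0.8])
print(round(score, 3))
```

The hidden layer is what lets the network capture nonlinear patterns – stacking many such layers is what “deep” learning refers to.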
Private AI/ML tools
AI/ML tools that provide a private context for users to add data, without that data being used in the public domain.
Privacy Impact Assessment (PIA)
An independent assessment of the impact of any new system or application that deals with Personally Identifiable Information (PII), to ensure that all necessary controls are in place so that no NZ or international laws are breached.
Public AI/ML tools
AI/ML tools that are made available to the public (either for free or for a fee) as a service, where users can’t control the algorithms or how data provided to the tool is used. For example, ChatGPT.
Synthetic data
Data generated by a system or model that mimics the structure and statistical properties of real data. It is often used for testing or training machine learning models, particularly where real-world data is limited, unavailable or too sensitive to use.
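The idea can be sketched simply: sample new values that mimic the mean and spread of a small “real” dataset, so the statistical shape survives without any original records being exposed. The figures below are invented for illustration.

```python
import random
import statistics

# A sketch of synthetic data generation: sample values that mimic the
# statistical properties of a small "real" dataset. The real values
# below are invented for illustration.

real = [52.1, 48.7, 50.3, 49.9, 51.4, 47.8, 50.6, 49.2]

mu = statistics.mean(real)     # 50.0 for these example figures
sigma = statistics.stdev(real)

random.seed(42)  # fixed seed so the sketch is repeatable
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

# The synthetic sample resembles the real data's structure...
print(round(statistics.mean(synthetic), 1))
# ...without containing any of the original (possibly sensitive) records.
print(any(v in real for v in synthetic))
```

Real synthetic-data tools model far richer structure than a mean and standard deviation, but the privacy rationale is the same: the statistics are shared, the underlying records are not.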