Putting AI best practice into a New Zealand context
If your business's AI creates something, do you own the intellectual property? It depends.
Gen Z are using AI at work, whether or not there are policies in place to govern it.
Around a third of Gen Z New Zealanders surveyed by cybersecurity firm Kordia reported using generative AI – tools that can create text, audio, images or video, such as ChatGPT or DALL-E – in the workplace.
In some cases they are simply playing with the technology; in others they are looking for the productivity gains it promises, says Alastair Miller, Principal Consultant at Aura, the independent security consultancy owned by Kordia.
“They are putting their work into limericks or songs in the style of a particular person for fun,” Miller says. “They are also asking it to go away and generate a report for them.”
That is, of course, one of the key selling points of generative AI. The technology has the potential to improve productivity because it can analyse large amounts of data and generate a report quickly.
However, well-known issues can arise when staff use generative AI to, for example, summarise the pros and cons of a proposal or report on business activity.
Miller says these include potential copyright breaches, producing analysis that is correct overseas but not relevant to New Zealand, mimicking the bias in source data or simply getting things wrong.
Boards need to get their heads around generative AI and put in place guidelines for their organisations, he says. These should include an indication of what the technology should be used for, and how data security – and accuracy – risks will be managed.
One of the most important things to understand is the difference between public and private generative AI tools.
Miller cites the high-profile case of Samsung, where employees inadvertently leaked source code and internal meeting minutes when using the public ChatGPT tool to summarise information.
“Once data is entered into a public AI tool, it becomes part of a pool of training data – and there’s no telling who might be able to access it. That’s why businesses need to ensure generative AI is used appropriately, and any sensitive data or personal information is kept away from these tools.”
Any content produced by AI also needs to be thoroughly vetted for errors. Miller says to apply the 80/20 rule: most of it will be fine, but some of it is likely to be wrong – and it will look correct.
“If you are not an expert in the subject, you need to have a report checked in case AI is tricking you. It is prone to hallucinations.”
What we are seeing today with AI is just the beginning of a field brimming with possibilities, Miller says, adding that when change comes it will come quickly.
“Technology generally moves faster than boards or government regulation. What this research shows is that, for Gen Z, the horse has bolted. Boards need to get moving or you may end up wishing you had done something sooner.”