Ensuring the AI dream does not become your biggest nightmare

Directors need to be acutely aware of the potential risks and consequences of using artificial intelligence tools.

type
Boardroom article
author
By Hayley Miller, Partner & Gunes Haksever, Senior Associate, Dentons Kensington Swan
date
30 Jun 2023
read time
4 min to read
misty forest

ChatGPT requires only the mildest encouragement to become wildly florid. Invited to paint a picture of a future AI world, it writes:

“Waking up from a restless sleep, sweaty and short of breath, the haunting visions of both utopian and dystopian futures linger in our minds. The world stands at the precipice of unprecedented technological advancements, and the possibilities seem limitless, yet terrifying at the same time. However, just like most things in our existence, our future will not be determined by a dichotomy of two extremes.”

It can appear at once both daunting in its possibilities and comically inept, but directors need to understand the benefits and risks of using this technology in their businesses, and be able to intelligently explain the pros and cons to their stakeholders.

The potent capabilities of ChatGPT offer remarkable productivity opportunities. However, like all emerging technologies, it also comes bearing dangerous gifts. Directors need to be acutely aware of potential risks and consequences from trust, information security and legal perspectives. We see four main risk categories:

1. Privacy

OpenAI, which developed ChatGPT, collects a broad swathe of user information. This raises the distinct possibility of private information – for which you are responsible – finding its way into the wrong hands.

The ChatGPT terms of use grant access to a great deal of your data: IP addresses, browser information, everything you ask, everything you type into their engine. And those terms also entitle them to share all of that data with unspecified third parties, without informing you.

New Zealand Privacy Commissioner Michael Webster expects anyone using systems that can take personal information to create new content to be thinking about the consequences of using generative AI technologies before they start.

Not only that, you are, when you use this technology, potentially putting customer, client, and even company/ firm confidential data into the public arena. When the tool asks questions or performs tasks, any data you give is then indexed for future use by ChatGPT.

Let’s say you ask the tool to review a draft annual report. Everything you enter, as well as any work created by ChatGPT, is then included into its database and available to future users, which will likely include personal information and confidential information. It’s then entirely possible for malicious users to use reverse prompt engineering techniques and prompt injection to dig up that material.

Given that organisations are responsible for safeguarding and protecting all such information and content, the gravity of the potential legal, financial, ethical and/or reputational risks should be all too apparent.

“Waking up from a restless sleep, sweaty and short of breath, the haunting visions of both utopian and dystopian futures linger in our minds. The world stands at the precipice of unprecedented technological advancements, and the possibilities seem limitless, yet terrifying at the same time. However, just like most things in our existence, our future will not be determined by a dichotomy of two extremes.”

2. Intellectual property

Compounding that vulnerability is the possibility that providing copyrighted content to ChatGPT as part of a prompt or request may infringe the content authors’ or rights holders’ intellectual property. Not only could this put a business under the risk of a copyright claim, it could also breach ChatGPT’s terms of use, which require users to ensure they don’t use the service in a way that infringes, misappropriates or violates any person’s rights, and more interestingly, represent the output from ChatGPT as human-generated when it is not.

3. Accuracy and ethics

ChatGPT’s knowledge is based on data up until 2021, which can limit its accuracy or lead to outdated responses. OpenAI has acknowledged this issue, as well as the AI’s tendency to create “hallucinations” or plausible but incorrect answers.

There’s also a concern with bias in ChatGPT’s responses, as it is influenced by societal dynamics from user interactions, potentially leading to misinformation. Without diligent human oversight, businesses risk reputational damage and potential legal issues.

The ethical use of AI technologies like ChatGPT has raised demands for regulatory oversight, in the way of other industries such as food, medical and aviation. But until such regulations exist, these technologies are governed by guidelines set by their creators, reflecting their ethical choices.

4. Cybersecurity

ChatGPT’s potential for misuse in cybersecurity is significant because criminals can leverage its writing and code-correcting abilities to craft sophisticated malware or phishing attacks.

Conclusion

As directors navigate the landscape of integrating artificial intelligence like ChatGPT into their businesses, it becomes vital to understand the implications, benefits, and risks it holds. It is imperative to balance the lure of increased productivity and opportunities with the potential challenges surrounding trust, information security, legal and ethical implications.

As the technology rapidly evolves, so will its implications, necessitating vigilant monitoring. Businesses should identify safe use cases matching their risk profile.

Possible strategies range from banning ChatGPT altogether to staff training and creating internal use policies, but there are two immediate, key actionable takeaways for directors from a governance perspective:

  1. Ensure your organisation does not input personal information, confidential information, or copyrighted content to ChatGPT (or other AI tools) as part of a query or prompt, or do so only by using modes that prevent the use of your inputs by the provider (either to train their AI tool or otherwise) such as ChatGPT’s incognito mode or ChatGPT Business.
  2. Always have human oversight, sense, and fact-checking processes in place requiring ChatGPT outputs to be carefully checked by trusted, validated sources before using any of it in relation to business activities. ChatGPT’s responses or output can include bias, inaccurate or flat out wrong information. There has been incidents where imaginary cases produced by ChatGPT has been found in legal citations, which were requested from the New Zealand Law Society.

Our future is not, in the end, determined by the technology we create, but by how we choose to use it. 


Dentons Kensington Swan logo