IMHO: AI expertise on boards: real growth or skills inflation?
An increased number of AI experts on boards raises questions about the realities of board composition and the dynamics of skills inflation.
Directors need to be acutely aware of the potential risks and consequences of using artificial intelligence tools.
ChatGPT requires only the mildest encouragement to become wildly florid. Invited to paint a picture of a future AI world, it writes:
“Waking up from a restless sleep, sweaty and short of breath, the haunting visions of both utopian and dystopian futures linger in our minds. The world stands at the precipice of unprecedented technological advancements, and the possibilities seem limitless, yet terrifying at the same time. However, just like most things in our existence, our future will not be determined by a dichotomy of two extremes.”
It can appear at once both daunting in its possibilities and comically inept, but directors need to understand the benefits and risks of using this technology in their businesses, and be able to intelligently explain the pros and cons to their stakeholders.
The potent capabilities of ChatGPT offer remarkable productivity opportunities. However, like all emerging technologies, it also comes bearing dangerous gifts. Directors need to be acutely aware of potential risks and consequences from trust, information security and legal perspectives. We see four main risk categories:
OpenAI, which developed ChatGPT, collects a broad swathe of user information. This raises the distinct possibility of private information – for which you are responsible – finding its way into the wrong hands.
The ChatGPT terms of use grant access to a great deal of your data: IP addresses, browser information, everything you ask, everything you type into their engine. And those terms also entitle them to share all of that data with unspecified third parties, without informing you.
New Zealand Privacy Commissioner Michael Webster expects anyone using systems that can take personal information to create new content to be thinking about the consequences of using generative AI technologies before they start.
Not only that, you are, when you use this technology, potentially putting customer, client, and even company/ firm confidential data into the public arena. When the tool asks questions or performs tasks, any data you give is then indexed for future use by ChatGPT.
Let’s say you ask the tool to review a draft annual report. Everything you enter, as well as any work created by ChatGPT, is then included into its database and available to future users, which will likely include personal information and confidential information. It’s then entirely possible for malicious users to use reverse prompt engineering techniques and prompt injection to dig up that material.
Given that organisations are responsible for safeguarding and protecting all such information and content, the gravity of the potential legal, financial, ethical and/or reputational risks should be all too apparent.
“Waking up from a restless sleep, sweaty and short of breath, the haunting visions of both utopian and dystopian futures linger in our minds. The world stands at the precipice of unprecedented technological advancements, and the possibilities seem limitless, yet terrifying at the same time. However, just like most things in our existence, our future will not be determined by a dichotomy of two extremes.”
Compounding that vulnerability is the possibility that providing copyrighted content to ChatGPT as part of a prompt or request may infringe the content authors’ or rights holders’ intellectual property. Not only could this put a business under the risk of a copyright claim, it could also breach ChatGPT’s terms of use, which require users to ensure they don’t use the service in a way that infringes, misappropriates or violates any person’s rights, and more interestingly, represent the output from ChatGPT as human-generated when it is not.
ChatGPT’s knowledge is based on data up until 2021, which can limit its accuracy or lead to outdated responses. OpenAI has acknowledged this issue, as well as the AI’s tendency to create “hallucinations” or plausible but incorrect answers.
There’s also a concern with bias in ChatGPT’s responses, as it is influenced by societal dynamics from user interactions, potentially leading to misinformation. Without diligent human oversight, businesses risk reputational damage and potential legal issues.
The ethical use of AI technologies like ChatGPT has raised demands for regulatory oversight, in the way of other industries such as food, medical and aviation. But until such regulations exist, these technologies are governed by guidelines set by their creators, reflecting their ethical choices.
ChatGPT’s potential for misuse in cybersecurity is significant because criminals can leverage its writing and code-correcting abilities to craft sophisticated malware or phishing attacks.
As directors navigate the landscape of integrating artificial intelligence like ChatGPT into their businesses, it becomes vital to understand the implications, benefits, and risks it holds. It is imperative to balance the lure of increased productivity and opportunities with the potential challenges surrounding trust, information security, legal and ethical implications.
As the technology rapidly evolves, so will its implications, necessitating vigilant monitoring. Businesses should identify safe use cases matching their risk profile.
Possible strategies range from banning ChatGPT altogether to staff training and creating internal use policies, but there are two immediate, key actionable takeaways for directors from a governance perspective:
Our future is not, in the end, determined by the technology we create, but by how we choose to use it.