Since generative artificial intelligence (GenAI) burst onto the scene with OpenAI's ChatGPT in late 2022, industry attitudes towards the technology have continued to shift. Internationally, commentators have extolled its user-friendliness with a sense of awe, while also debating the ethics of its use.
It is bigger than any single industry disruptor and potentially marks the dawn of a new era, with continuing advancements in creating, discovering, summarising and automating what we do. We know the potential applications of AI are vast and can extend to almost any industry or sector that relies on decision-making and problem-solving, such as:
Whether businesses are at the forefront of developing GenAI or are operating its technology, careful consideration of the ethical, legal and societal implications will be essential. Critical to its successful use will be its implementation within the business community and the role of good governance, for which there is no substitute.
Traditional AI has already raised ethical issues and certain risks around data privacy, security, policies and workforces. GenAI is likely to add business risks in areas such as misinformation, plagiarism, copyright infringement and harmful content.
International litigation in the United States and United Kingdom has typically centred on companies that develop AI, involving allegations of infringement of intellectual property rights, violations of privacy or property rights, or breaches of consumer protection laws.
In the Global Risks Perception Survey that underpins the Global Risks Report 2023, more than four in five respondents anticipated consistent volatility over the next two years.
Considering technological advancements including AI, Marsh has stated: “Sophisticated analysis of larger data sets will enable the misuse of personal information through legitimate legal mechanisms, weakening individual digital sovereignty and the right to privacy, even in well-regulated, democratic regimes.”
Potential scenarios for directors' and officers' liability include:
Boards will have to maintain their intellectual curiosity, developing an understanding of generative AI's components and the potential risks of using any model in their businesses. Risk mapping and the development of a company's risk posture around GenAI technology will help provide a framework for decision-making.
The approach to integrating AI into a business will continually challenge decisions on capex and investment, and may pivot on a board's understanding of the broader risks and opportunities when deciding between a strategic step-by-step process and a complete overhaul and replacement of existing technology. Both during and after integration, consideration will need to extend to the human oversight and input required to minimise errors and misinterpretations in its operational use and interactions.
In New Zealand, the legal risks of GenAI's emergence will be tested just as they are in other jurisdictions. Given the societal and ethical challenges presented, governments will need to work closely with the courts. There are no AI-specific laws in place; the technology is covered under current New Zealand legislation, such as the Privacy Act, the Human Rights Act, the Fair Trading Act, the Patents Act, and the Harmful Digital Communications Act.
However, it is reasonable to expect that legislators and regulators will look to other jurisdictions for insights, especially fast-evolving or recent trends. On 30 October 2023, the White House released an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This order highlights the significant regulatory challenges and evolving regulatory landscape in the US.
Fascinatingly, within weeks of this announcement a co-founder of OpenAI was fired and rehired in the space of a few days. This illustrates the influence certain individuals can have on their own businesses, on the interests of all shareholders, and on political leaders and an international community seeking to understand such technology and to set the laws and regulations of our future society.
Claims overseas are already trending towards privacy, unfair competition, copyright, trademark, libel and facial recognition cases, with plaintiffs mainly focusing on the developers of AI technology. As litigation in this area continues to develop, the insurance industry has a significant opportunity to adapt alongside it. Products specific to AI developers and users of the technology are already on the market.
In establishing the right balance for the use of AI, stakeholders need to check AI's outputs as they would their own, and adapt:
For more in this space, see the webcast The OpenAI Saga – governance hallucination?