Boards need to apply both an Aotearoa New Zealand lens and a Responsible AI governance lens when integrating AI into strategies, policies and usage, say expert speakers at the upcoming IoD Governing AI Forum.
Responsible AI consultant Frith Tweedie and data/tech consultant Dr Karaitiana Taiuru are among speakers at the Forum, which takes place in Auckland on 25 July.
Dr Taiuru leads a consultancy that advises on AI, data sovereignty, and technology ethics, and has a special focus on how these interact with te ao Māori. He says social expectations, cultural differences and varied regulation can all impact how AI needs to be governed in different markets.
“Take intellectual property (IP) rights, which are an important consideration. If an organisation uses, say, a generative AI tool that learns from the company's data, it may be compromising its IP rights,” he says. The extent to which AI has leveraged material not owned by the organisation can also be an issue.
“At this stage there is debate about whether something created by AI is copyrightable. In America, they've said ‘no, it's not able to be copyrighted’. In China, they have said ‘yes, it can be copyrighted’. We are yet to test the waters in that area.”
Dr Taiuru suggests boards take a holistic approach to investigating and implementing AI in order to ensure it aligns with organisational strategies and values.
“Boards need to be thinking about how they will weave AI into all of their governance policies – in particular, if it's an organisation that deals with iwi businesses. When we look at international research, if you're a person of colour, Pasifika, sometimes Asian, the technology can fail.”
Potential bias or negative social impact is not a reason to be fearful of AI, but it is a good reason to think about how it may operate in the Aotearoa New Zealand environment, he says. As more boards seek to spur innovation in their business through AI, there is an opportunity “to actually get things right”.
In the New Zealand context, that means considering AI against a backdrop that includes Te Tiriti o Waitangi obligations, he says, which include equity and equality.
“Good AI policies, and good data governance, will take these into account – it’s something that good board-led policies will do anyway. We're at such an early stage of AI in business that there's no reason why New Zealand can't be a world leader in getting things right.”
Frith Tweedie helps organisations build good AI governance and privacy practices. She serves on the executive council of the AI Forum, and on the global advisory board of the AI Governance Center established by the International Association of Privacy Professionals (IAPP). She also delivers the IAPP’s AI Governance Professional course, training professionals to understand and execute responsible AI governance.

Boards need to be clear on their AI strategy, Tweedie says, including what business issues they want AI to solve and how much risk they are prepared to tolerate.
“A lot of boards might say we have a high risk appetite for innovation, but a low risk appetite for non-compliance and reputation damage. When it comes to AI, they need to think about how they will reconcile the two,” Tweedie says.
Organisations developing AI systems will have a different risk profile from those that simply allow staff to use generative AI tools. “A risk-based approach that right-sizes an organisation’s approach to AI governance according to its AI risk profile is critical.”
Boards should make sure they have visibility of how AI is being used in their organisation, and of potential risk areas.
“Boards need to set the ‘tone from the top’ when it comes to the responsible use of AI,” she says. “They should be asking management how they’re identifying potential AI risks and how they’re going about managing them.”
“A Responsible AI framework is much broader than just a policy. It's about thinking holistically about things like what governance and risk management structures you have in place for AI, what kind of education and training is being provided to staff on both the capabilities and limitations of AI, and how you’re monitoring and testing AI systems. Ultimately, it’s about taking responsibility for how you're developing and using AI so that it isn't harming the people to whom it's applied, and thereby creating risk for your organisation.”
And she emphasises that AI is not just an IT issue – AI systems are “sociotechnical” systems, meaning they’re technical tools that also have social impacts on the people who use and are affected by them. Implementing a Responsible AI framework enables organisations to identify and address potential risks, resulting in better-performing models, regulatory compliance and, ultimately, trust in their AI activities.
Compliance is another key consideration. While AI is not specifically mentioned in many current New Zealand laws and regulations, it does not operate “in a legal vacuum”. Privacy, intellectual property, consumer protection and competition laws are just some that may apply to AI activities, Tweedie says. “And if your organisation has a global footprint, it’s critical to have an understanding of emerging AI-specific legislation like the EU’s AI Act.”
Dr Taiuru adds that, while new to many organisations, AI is a mature technology, so boards should not think of it in terms of a “dotcom bubble”.
“It's definitely not going away. So boards need to realise that AI has ramifications across the whole spectrum of risks. It needs to be considered, and audited.”

Tweedie encourages boards to treat AI governance as an opportunity to drive trust and value rather than focusing only on risk management. “Trust in AI gives you a licence to continue innovating.”
Learn more about AI in the New Zealand context at the IoD Governing AI Forum this July. Register now.