Australia is leading discussions on regulating AI. A recently released proposals paper outlines measures that could soon affect businesses operating in high-risk sectors. For New Zealand directors with interests across the Tasman, these developments are worth paying attention to.
The Australian Government’s proposals include a mix of voluntary guidelines and potential mandatory rules, meaning companies and organisations offering AI products or services could face new obligations. As Australia explores its regulatory approach, the question for New Zealand is: should we be following their lead?
The Voluntary AI Safety Standard, introduced in September 2024, is the first step in guiding organisations toward responsible AI use. It includes 10 voluntary guardrails designed to mitigate risks while allowing innovation.
These guardrails emphasise transparency, risk management and human oversight. For example, transparency requires organisations to disclose how AI decisions are made. In practical terms, this could mean detailed documentation of AI algorithms.
For high-risk sectors, Australia is proposing to make compliance with these guardrails mandatory.
Australia plans to finalise the mandatory AI regime by mid-2025, with legislation potentially taking effect by 2026. For directors of organisations operating in Australia, this means thinking now about compliance measures, particularly if AI is central to their business models. Selling AI products or services in Australia may trigger compliance obligations.
Australia’s regulatory approach aligns with international trends but differs in its execution. The European Union (EU), with its EU Artificial Intelligence Act, also adopts a risk-based approach, categorising AI systems by risk levels – from unacceptable to minimal.
The EU imposes stringent rules on high-risk AI, including requirements for third-party conformity assessments, and bans certain AI uses outright, such as social scoring (evaluating or classifying people based on their social behaviour or personal characteristics) by governments. Non-compliance can result in significant penalties of up to €35 million or seven per cent of a company’s global annual turnover for the most serious breaches.
In contrast, Australia’s regime is more flexible and phased: it starts with voluntary guardrails to give industries time to adapt before mandatory regulations are introduced. Those mandatory regulations would focus on high-risk AI systems, such as AI used in healthcare, law enforcement or critical infrastructure.
While Australia has not yet proposed outright bans on specific AI use cases such as real-time biometric identification, these are areas that may see future restrictions.
New Zealand is in the initial stages of AI regulation development, with no comprehensive framework introduced yet. The Minister of Science, Innovation and Technology, Hon Judith Collins KC, has emphasised the adaptability of New Zealand’s existing laws and the development of an AI roadmap to ensure safe innovation.
Ongoing discussions, including a recent Cabinet paper, highlight the need for a “light-touch, proportionate and risk-based approach” and set out the key challenges in designing appropriate regulation.
While New Zealand is beginning to shape its AI regulatory framework, several commentators have identified gaps that may need addressing. For instance, relying on existing laws such as the Privacy Act 2020, which was not specifically designed for AI, has raised concerns about whether these frameworks are robust enough to manage the unique challenges of AI, especially in sectors such as healthcare.
Additionally, there is growing awareness around Māori data sovereignty, a distinct issue for New Zealand that requires more coordinated and culturally appropriate governance solutions.
When operating overseas, directors need to stay informed of local regulatory environments and how AI regulations are evolving in key markets, such as the EU and Australia. Organisations exporting AI solutions or offering them to international customers may find their AI systems subject to audits or transparency requirements, depending on how these jurisdictions define the scope of their regulations.
Directors must ensure their organisations are planning accordingly and assess whether these evolving regulatory regimes will affect their business arrangements.
In New Zealand, directors need to ensure their organisation’s use of AI complies with existing legal frameworks, such as the Privacy Act 2020 and the Copyright Act 1994. For a summary of New Zealand legislation that applies to AI, see Understanding AI: A glossary for boards and directors.