Is your organisation ready for AI? Why fit-for-purpose governance matters now more than ever
As artificial intelligence (AI) technologies evolve at pace, organisations face increasing pressure to adopt them both strategically and responsibly. This is especially challenging given the rapid emergence of new generative and agentic AI solutions, which promise exceptional efficiency and productivity gains but also present heightened risks. Organisations must also be ready to respond to emerging and proposed legal regulation of AI, both in Australia and globally.
A tailored, well-designed AI governance framework is essential for helping organisations navigate these challenges and realise the full potential of AI in line with their strategic goals.
Why are specific governance frameworks required for AI?
Planning for and controlling AI usage is rarely straightforward. Traditional governance frameworks are often ill-equipped to identify, manage and respond to the unique and evolving risks associated with AI technologies.
These risks can arise whether an organisation has chosen to purchase an off-the-shelf AI tool that will be trained on the organisation’s own data, commission a bespoke solution or develop a solution in-house. For example:
- Procurement: traditional IT procurement typically involves assessing well-understood risks that may be less relevant to AI solutions, which present novel risk factors. Managing these may require identifying the source of training data, assessing the accuracy of the model and reviewing the vendor's or developer's responsible AI policies and practices.
- Deployment: deploying an AI solution may raise significant new risks, such as the inadvertent disclosure of an organisation's proprietary information where a public AI tool uses user prompts to train its underlying models.
Where traditional frameworks fail to identify AI-specific risks, they may lull an organisation into a false sense of security. The organisation may miss the opportunity to implement measures to control and respond to risks, such as training personnel before they begin using AI in their day-to-day work. In addition, standard governance frameworks often cannot keep pace with the rapid development of AI technology, which may render them unfit for purpose.
Organisations should adopt governance frameworks that are targeted towards AI technologies and relevant to their structure, size and operations. These frameworks should guide all stages of the AI lifecycle – from procurement or development to testing, implementation and use – and be informed by experts with a deep understanding of the organisation and its business, the relevant technological and legal risks and the best practices for addressing these.
What does AI governance entail?
An effective AI governance framework should include policies and guidelines which control how an organisation adopts, utilises and manages AI technologies and solutions. Typically, this includes:
- an AI usage policy: outlining when and how personnel may use AI, the specific tools (or categories of tools) that may be used and the types of data and information that can be used as input into those tools;
- decision-making frameworks: such as a risk assessment or use case assessment framework to guide responsible adoption of AI; and
- incident response policies and procedures: providing clear guidance for both leadership and employees on what to do when something goes wrong.
A well-designed AI governance framework gives organisations and their stakeholders confidence that AI can be used effectively and responsibly in accordance with the organisation's strategic goals.
Organisations must also be aware that AI governance is not a 'set-and-forget' exercise. After adopting a framework, it is crucial to review it regularly and update it where necessary to account for new technological, regulatory and strategic developments.
How AI governance reduces risk
A strong AI governance framework helps organisations mitigate key risks associated with AI use, while also building internal capability and stakeholder trust. Key areas where governance plays a critical role include:
- Legal liability: adopting AI tools carries the risk that organisations inadvertently breach their obligations around privacy, confidentiality and intellectual property. These risks are widely misunderstood – for example, employees often do not realise that proprietary or confidential information entered into a public AI tool may subsequently be used for model training, creating a risk of inadvertent disclosure. AI governance can reduce this risk by setting out guardrails specifying which tools may be used, in what circumstances and with what data.
- Organisational and reputational risks: adopting AI tools without proper governance can compromise an organisation's business operations and customer service – for example, the chatbot deployed on Air Canada's website wrongly told a customer he was entitled to a bereavement refund, and a tribunal held the airline to that promise. Similarly, overreliance on AI tools to automate decision-making is risky given such tools' limited explainability and transparency, which makes it difficult for organisations to explain their decision-making processes to stakeholders. A best-practice AI governance framework mandates risk assessments so that these risks are not only identified and planned for but can be responded to appropriately if they eventuate. It will also set out controls to ensure that AI tools, whether procured off-the-shelf or developed in-house, undergo sufficient testing before full rollout, further reducing the likelihood of risks eventuating.
- Emerging regulatory frameworks: adopting an AI governance framework prepares organisations for incoming legal regulation of responsible AI use and development. The Australian Government's Voluntary AI Safety Standard, published in September 2024, recommends that organisations maintain an accountability process covering governance, internal capability and regulatory compliance. Future legislation is likely to be driven by industry best practice around responsible AI, and a well-designed AI governance framework will place organisations in the best position to ensure compliance.
Organisations that fail to adopt a tailored, effective AI governance framework face increased legal, reputational and commercial risks. Adopting one is a key step towards consistent decision-making and responsible AI use, allowing organisations to maximise the benefits of AI while complying with emerging regulatory requirements.
If you're exploring AI solutions and want to ensure your organisation is managing risk and regulation effectively, please reach out to our team for guidance, or to discuss any aspect of this article.