AI procurement in Australia: trends, risks and what to expect in 2026
Artificial intelligence (AI) adoption is accelerating across Australian businesses, and the regulatory framework governing its use and procurement continues to evolve. Instead of introducing specific and comprehensive AI legislation, Australia has largely adopted a principles-based approach that relies on existing (but not AI-specific) laws and regulatory guidance.
As a result, many of the key risks associated with AI deployment should be managed through procurement strategy and contracting.
We assess the state of play for AI procurement contracting in Australia, tracking trends, observations and legal developments across 2025, and provide a brief forecast for 2026.
Key takeaways
- Australia’s regulatory AI framework is evolving. In the absence of comprehensive, AI-specific legislation, businesses procuring AI solutions must consider a mix of existing laws, regulatory guidance and voluntary standards.
- Many AI risks are currently managed through contracts. Clear scope, definitions, data rights, risk allocation and transition out arrangements are critical when negotiating AI procurement agreements.
- Governance and oversight will become increasingly important. As AI capabilities evolve, particularly with more autonomous systems, organisations should ensure procurement, governance and risk frameworks keep pace.
What laws and regulations currently apply to AI in Australia?
Australia has generally avoided the sweeping, economy-wide regulatory approach seen in Europe and parts of Asia. Instead, it has adopted a principles-based and voluntary approach, as reflected in the National AI Plan released by the Commonwealth Government in December 2025.
The National AI Plan anticipates that:
- existing laws may be modernised or adapted to address AI risks and harms where appropriate;
- regulatory guidance published by regulators will assist with administering and clarifying the application of existing laws to AI; and
- industry sector-specific regulation may be developed for high-risk AI.
For Australian businesses procuring AI solutions, the current legislative landscape (as at the date of this article) includes:
- laws not specific to AI, including the Online Safety Act 2021 (Cth), the Copyright Act 1968 (Cth), the Australian Consumer Law, the Corporations Act 2001 (Cth) and the Privacy Act 1988 (Cth), which nonetheless remain broadly applicable to the use and procurement of AI;
- regulatory guidance, including material issued by ASIC and the Office of the Australian Information Commissioner on the use, development and governance of AI, which can assist with interpreting how existing laws apply to AI; and
- voluntary standards and frameworks, such as ISO/IEC 42001 or ISO/IEC 23894, or principles from the Guidance for AI Adoption published in 2025 (which has now replaced the Voluntary AI Safety Standard published in 2024). The Guidance for AI Adoption is non-binding, and the National AI Centre is due to phase in additional resources over the coming year.
Key risks in AI procurement – and how businesses can manage them
Defining the scope of the AI solution
As is the case with most technology procurement contracts, documenting the scope of what is being provided by an AI solution is critical. Poorly defined functionality or deliverables can lead to misaligned expectations and disputes about performance.
Ways to manage risk include:
- defining clear, measurable objectives for the customer’s business and AI solution;
- setting out detailed functional specifications, including as they relate to the AI models or technologies which are being deployed;
- aligning deliverables with agreed objectives and specifications to ensure customers receive the full and intended benefit of the contract;
- implementing governance mechanisms to monitor performance and success metrics, ensuring that the right personnel have input on technical and commercial elements of procuring the AI solution; and
- including change control processes to manage evolving AI capabilities during the contract term.
Dependency on AI solutions
Ongoing dependency on AI solutions can create business risk. For customers, this can include vendor leverage over future pricing, limited flexibility to transition to alternative solutions, limited ongoing rights in any embedded technology and constraints on producing similar outputs.
Ways to manage this risk include:
- clear pricing adjustment mechanisms for the contract term and renewal periods;
- transition-out arrangements addressing ownership and use of data, outputs, artefacts and assets, pricing, timeframes and resourcing; and
- clearly defining IP ownership and usage rights for relevant components of the AI solution, including inputs, outputs and embedded technology.
Data use and training rights
Negotiations frequently focus on who owns datasets, inputs and outputs, and how they may be used.
Vendors often seek rights to use customer data to improve or train AI systems, and rights over derived datasets.
Ways to manage this risk include:
- clearly defining ‘data’ and identifying ownership or usage rights;
- recognising that data may include prompts, fine-tuning of prompts, personal information, commercially sensitive information and outputs (e.g. data derived from inputs);
- distinguishing between intellectual property rights in data and rights in underlying algorithms used to implement an AI solution, which vendors are less likely to provide rights or access to;
- considering whether traditional contracting concepts such as ‘Background IP’ and ‘Developed IP’ are appropriate solutions;
- establishing contractual controls based on agreed ownership and permitted uses (noting where ownership by one party is not conceded, often a right to use and access by the other party is just as valuable); and
- including targeted audit or reporting rights if vendors use customer data for training or improvement.
Privacy and data security
AI vendors often operate across multiple jurisdictions and may offer different degrees of privacy and data security protections.
Businesses should consider key AI security issues such as where data is hosted, how it is processed and what safeguards apply, as well as the potential for manipulation of input data.
Ways to manage this risk include:
- assessing whether data can be hosted and processed in Australia;
- where data is transferred offshore, assessing applicable legal protections and compliance with Australian Privacy Principle 8 under the Privacy Act 1988 (Cth), which generally requires entities to take reasonable steps to ensure that overseas recipients handle personal information consistently with the Australian Privacy Principles;
- ensuring internal technical teams understand which third parties process customer data;
- conducting appropriate due diligence on vendors and sub-processors; and
- undertaking collaborative privacy impact assessments to understand data flows and security controls.
Allocation of risk
Risk allocation is often a key negotiation point between AI vendors and customers.
Customers should first identify the specific risks they want to manage, such as hallucinations, accuracy limitations or data security issues.
Ways to manage risk include:
- higher liability caps for high-risk heads of liability such as data breaches, fraud, third party IP infringement and confidentiality breaches;
- clear definitions of what types of loss or damage are recoverable and what is out of scope (e.g. indirect or consequential loss). In AI procurement agreements, this may turn on items such as loss of data, loss of profits, loss of goodwill or remediation costs;
- a combination of warranties to set clear performance and quality baselines, indemnities for specific high-impact risks (including those heads of liability above) and service level commitments; and
- appropriate insurance policies to cover any excess loss, including cybersecurity and technology errors and omissions insurance with AI-specific liability endorsements.
Transition-out arrangements
Transition-out arrangements help ensure a smooth and predictable handover of services, data, assets and knowledge, to reduce disruption, additional cost and operational risk for customers at the end of the contract.
Effective provisions may include:
- clear post-contract rights to datasets (raw and processed), prompts, algorithms, assets and other artefacts, and consideration of whether these need to be transferred to a new provider;
- provisions addressing knowledge transfer, timeframes, fees and resourcing;
- consideration of upstream licensing restrictions that could affect future use, licensing to third parties or transferring to another provider; and
- assessing whether elements of the AI solution may create ongoing regulatory compliance risks.
Standardisation
Some agreements feature suggested ‘industry standard’ AI terms. However, these often do not align with the risk profiles of the parties and should be treated with caution.
Businesses may consider:
- aligning procurement agreements with internationally reputable standards such as NIST AI RMF, ISO/IEC 42001 or ISO/IEC 23894, or aspects of the Guidance for AI Adoption published by the Commonwealth Government; and
- adapting standard contractual clauses, such as the European Model Contractual Clauses for AI procurement or the model contractual clauses published by the Australian Government’s Digital Transformation Agency, as a starting point. Both sets of clauses were developed initially for public sector agencies but can be adapted for the private sector.
What do we expect to see in 2026?
- More targeted AI procurement, with businesses focusing on specific productivity use cases rather than broad, experimental, all-encompassing solutions.
- Greater focus on AI risk management, particularly where AI solutions support decision-making in high-risk or heavily regulated areas such as finance and healthcare. In the absence of AI-specific legislation in Australia, customers are likely to seek contractual protections to deal with the risks faced by their organisation.
- Continued development of agentic AI – systems capable of acting with greater autonomy – which will raise new questions around governance, responsibility and liability.
Our Technology team regularly advises on AI procurement, including governance frameworks, procurement playbooks and contracting strategies. If you would like to discuss how these developments may affect your organisation, please get in touch.