ASIC report on AI governance – learnings for licensees
ASIC has released Report 798 Beware the gap: Governance arrangements in the face of AI innovation (Report). The Report analyses how licensees in ASIC-regulated sectors are using, monitoring, and planning to use artificial intelligence (AI).
ASIC analysed 624 AI use cases that 23 licensees in the banking, credit, insurance and financial advice sectors were using, or developing, as of December 2023. These were use cases that directly or indirectly impacted consumers. ASIC also asked licensees about their risk management and governance arrangements for AI and their plans for AI in the future.
There were eight key findings.
1. The extent of AI use varied significantly, but overall adoption is accelerating
The extent to which licensees used AI varied significantly: some had been using forms of AI for several years, while others were early in their adoption journey.
Sixty-one per cent of licensees in the review advised that they planned to increase their use of AI in the next 12 months. The remainder planned to maintain their current level of AI use.
2. Most current use cases applied long-established, well-understood techniques. But there is a shift towards more complex and opaque techniques, including generative AI
While most current use cases used long-established, well-understood techniques, there is a shift towards more complex and opaque techniques. The adoption of generative AI, in particular, is accelerating rapidly.
Generative AI made up 22 per cent of use cases in development, but supervised learning (classification) remained the most common model type. Classification models were mostly used to predict whether a consumer was likely to take out a financial product, typically using explainable techniques such as logistic regression.
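To illustrate what such an explainable model looks like in practice, the following is a minimal, hypothetical sketch of a logistic regression classifier predicting product uptake. It is not drawn from the Report; the feature names, data, and library choice (scikit-learn) are illustrative assumptions.

```python
# Hypothetical sketch of an explainable classification model of the kind
# described above: logistic regression predicting whether a consumer is
# likely to take out a financial product. Features and data are invented
# for illustration, not taken from the ASIC Report.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative features: [income ($'000), existing products held, age]
X = np.array([
    [55, 1, 34],
    [82, 3, 51],
    [41, 0, 29],
    [97, 2, 46],
    [38, 1, 62],
    [73, 2, 38],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = consumer took out the product

model = LogisticRegression().fit(X, y)

# Explainability: the fitted coefficients show how each feature shifts
# the log-odds of product uptake, so a reviewer can inspect the model's
# reasoning directly rather than treating it as a black box.
for name, coef in zip(["income", "products_held", "age"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")

# Predicted probability that a new consumer takes out the product.
print(model.predict_proba([[60, 1, 40]])[0, 1])
```

The transparency of the coefficients is what distinguishes this class of model from the more opaque generative techniques the Report notes are on the rise.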
3. The way AI was used was mostly cautious
Existing AI deployment strategies were mostly cautious, including for generative AI. AI augmented human decisions or increased efficiency; generally, AI did not make autonomous decisions. Most use cases did not directly interact with consumers.
Common uses were generating first drafts of documents, call analysis, chatbots for internal use, and internal assistance. Most of the data used by these models came from internal sources such as customer financial information (eg asset holdings) or details provided by customers when lodging requests (eg requesting quotes).
Common uses also involved predicting credit default risk, analysing consumer spending patterns, chatbots answering simple customer questions, fraud detection, document support for internal processes, prediction of customer retention, and actuarial models for risk, cost and demand modelling.
4. There were gaps in arrangements for managing some AI risks
Only half of the licensees had specifically updated their risk management policies or procedures to address AI; the rest relied on existing policies and procedures without making changes. Although existing policies and procedures may address privacy and security generally, none had been specifically adjusted for the privacy and security risks raised by AI.
Only 12 of the licensees in the review had AI policy documents, guidance, or checklists that referenced fairness, discrimination, and bias risks.
No licensees had implemented contestability arrangements for AI (that is, mechanisms enabling consumers to challenge AI-influenced decisions), although some referred to the availability of internal dispute resolution.
5. There were gaps in licensees’ assessments of AI risks
Some licensees assessed risks through the lens of the business rather than the consumer. ASIC found gaps in how licensees assessed risks to consumers that are specific to the use of AI, such as algorithmic bias.
The review highlighted that most licensees had not considered whether they needed to disclose their use of AI, especially where the consumer is not directly affected.
In discussions with ASIC, licensees questioned:
- how much AI had to be involved in an interaction or decision before it should be disclosed;
- whether consumers would find disclosure useful; and
- whether it was necessary to introduce transparency now, given some models had been in use for a long time.
Most licensees used their models with a ‘human in the loop’, where a human oversees the AI’s work (although the extent of the oversight differed). Some licensees decided that a human would be accountable for each decision involving AI, while others used periodic human checks. This oversight becomes increasingly difficult as AI models grow more complex.
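As a purely hypothetical illustration of the ‘human in the loop’ pattern (not drawn from the Report, and with all names, fields, and actions invented), an AI recommendation might be gated on recorded human sign-off along the following lines:

```python
# Hypothetical sketch of a 'human in the loop' control: the AI only
# recommends; a named human reviewer remains accountable for the decision.
# All names, fields, and actions are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AiRecommendation:
    customer_id: str
    action: str          # e.g. "approve_credit"
    confidence: float    # the model's own confidence score

def decide(rec: AiRecommendation, reviewer: str) -> str:
    # The AI output is never actioned automatically: a human reviewer
    # must confirm it, and the reviewer's identity is recorded so that
    # accountability for the decision sits with a person.
    print(f"AI suggests '{rec.action}' for {rec.customer_id} "
          f"(confidence {rec.confidence:.0%}); reviewer: {reviewer}")
    approved = input("Approve this recommendation? [y/N] ").lower() == "y"
    return rec.action if approved else "escalate_for_manual_decision"

decision = decide(AiRecommendation("C-1042", "approve_credit", 0.87), "j.smith")
print("Final decision:", decision)
```

The Report's observation holds here too: the more complex and opaque the underlying model, the harder it becomes for the reviewer to meaningfully assess the recommendation before approving it.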
6. AI governance arrangements varied widely. ASIC identified weaknesses that create the potential for gaps as AI use accelerates
AI governance arrangements varied widely, and ASIC observed weaknesses that create the potential for gaps as AI use accelerates.
ASIC identified three broad approaches to governance, forming a spectrum from least to most mature:
- The least mature took a latent approach and had not considered AI-specific governance and risk.
- Licensees falling in between generally adopted decentralised approaches that leveraged existing frameworks.
- The most mature took a strategic, centralised approach.
Even where licensees did have appropriate policies and procedures, some had not evolved them to keep pace with changes in usage and the introduction of new AI models.
Some of the poorer governance arrangements lacked clear strategies, relied on ad hoc reporting to the board or relevant committees, had no committees to oversee AI usage, and had no specific ethics principles or codes of conduct for AI use.
7. The maturity of governance and risk management did not always align with the nature and scale of licensees’ AI use
The maturity of governance and risk management did not always align with the nature and scale of licensees’ AI use. ASIC expected that licensees with the greatest AI use would have the most mature governance frameworks. Instead, most licensees were updating governance arrangements while increasing AI usage, rather than putting governance arrangements in place before introducing new AI models.
Of the 23 licensees reviewed, 14 were planning to increase their use of AI. Of these, 13 were also planning, or had commenced, an uplift in AI governance. This presents a risk where governance changes do not occur before increases in use.
8. Many licensees relied heavily on third parties for their AI models, but not all had appropriate governance arrangements in place to manage risk
Many licensees relied heavily on third parties for their AI models, but not all had appropriate governance arrangements in place to manage the associated risks.
Thirty per cent of all use cases in the review involved models developed by third parties. Most licensees relied on third parties for at least 50 per cent of their models, although some did not have robust third-party management procedures. Better practices saw licensees setting the same expectations for models developed by third parties as for internally developed models.
A better practice case study
One licensee had supplier risk frameworks in place complementing its model risk requirements for third-party developed models, and set clear expectations, including to:
- obtain proof of independent validation from the supplier and validate the model internally before use;
- establish service-level agreements to ensure models are implemented appropriately, including back-ups and disaster recovery plans; and
- establish a process to be notified of model changes, to obtain performance monitoring results, and to consider fourth-party risks.
The licensee reported: ‘All third-party models are subject to the same governance principles [as internally developed models].’
ASIC recommends that licensees consider their regulatory obligations and how those obligations apply to their use and management of AI. These obligations include:
- doing all things necessary to ensure financial services or credit services are provided in a way that meets all the elements of ‘efficiently, honestly and fairly’;
- not engaging in unconscionable conduct;
- not making false or misleading representations;
- having measures for complying with these obligations, including general obligations;
- having adequate technological and human resources;
- having adequate risk management systems;
- remaining responsible for outsourced functions; and
- company directors and officers discharging their duties with a reasonable degree of care and diligence.
ASIC strongly recommends that AI governance arrangements and policies be put in place before AI is deployed, and that licensees adopt AI-specific policies and procedures. These may include a code of conduct for AI use, and policies covering privacy, consumer-facing use, and internal dispute resolution.
This article was written with the assistance of Charlotte Pratt, Law Graduate.