The adequacy of judicial review as a means of overseeing the use of automation in administrative decision-making

Insights | 14 May 2024

By Nicola Johnson and Michelle Harradine

Judicial review is an important mechanism for promoting accountability in executive action and protecting individual rights and interests. But the assessment of its adequacy in overseeing automated decision-making presents several intersecting questions: was there a ‘decision’ that can be the subject of judicial review? Who was the ‘decision-maker’? On what grounds might an automated decision be challenged on judicial review?

Jurisdiction

The requirement to establish the Court’s jurisdiction to consider an application for judicial review presents unique challenges for those wishing to seek judicial review of automated decisions:

  • In Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79, the Court held that a computer-generated letter did not constitute a ‘decision’ for the purposes of the Administrative Decisions (Judicial Review) Act 1977 (Cth) (ADJR Act). The majority held that a valid decision had two elements: a mental process of deliberation, assessment and/or analysis, and an objective manifestation of that decision. The Court found the computer-generated letter lacked the requisite mental process of reaching a conclusion.
  • Similarly, for the purposes of s75(v) of the Constitution, the term ‘officer of the Commonwealth’ has been held to mean someone with some conceivable tenure of office, who was directly appointed by the Commonwealth, who accepted office and salary from the Commonwealth, and who could be removed by the Commonwealth. It is difficult to see how a computer system making fully automated decisions could meet that definition.

While deeming provisions enabling the use of automated decision-making are increasingly being enacted and may overcome these jurisdictional issues, there remains a need for greater clarity about the review rights of those affected by automated decisions. There would be utility in reforming the legal definitions that inform jurisdictional assessments so that they reflect the increasing use of automation in administrative decision-making.

Grounds of judicial review

The categories of jurisdictional error are not closed but there are familiar grounds of judicial review on which jurisdictional error might be established: misunderstanding the applicable law; asking the wrong question; exceeding the bounds of reasonableness; identifying a wrong issue; ignoring relevant material; relying on irrelevant material; making an erroneous finding or reaching a mistaken conclusion; and failing to observe the requirements of procedural fairness: LPDT v Minister for Immigration [2024] HCA 12.

Procedural fairness

Automated decision-making poses several challenges to the requirements of procedural fairness, a denial of which is, in turn, a potential ground of judicial review. When decisions are fully or partly automated, the opacity of automation undermines the right to a reasoned decision and the effectiveness of judicial review. As a matter of good administrative practice, and to defend against an allegation that an automated decision was not procedurally fair, a person whose rights are affected by an automated decision ought to be told that a machine has made, or has materially contributed to the making of, the proposed decision before it is finalised.

Adequacy of reasons

In many cases, the requirements of procedural fairness will mean that there is also a legal duty to provide reasons. Sometimes this requirement is expressly imposed by statute, either specifically in relation to a particular decision, or more generally under certain circumstances. In other cases, it is implied because of the nature of the function, the person exercising it and the impact it has on those affected by it.

If an automated decision is based on insufficient evidence, it may be open to challenge on the basis that the reasons do not identify material capable of justifying the findings made. That is, where the reasons do not adequately justify the result reached, the court may infer from the lack of good reasons that none exist.

Bias

It might be thought that eliminating the personal element of decision-making through automation could minimise, if not eliminate, the risk of bias. While automated systems can reduce the opportunities for human bias, prejudice and error, this is not always the case. Instead, bias might arise in automated administrative decision-making where:

  • programmers build algorithms with biased assumptions or limitations embedded in them, infecting the program at the design stage and reflecting either the deliberate choices or unconscious biases of the programmer; or
  • historical data used to train machine learning systems is itself reflective of the conscious or unconscious biases of earlier human decision-makers.

Conclusion

While there is a range of grounds on which it might be alleged that an automated decision is affected by jurisdictional error, the prospects of passing the preliminary hurdle of jurisdiction are not promising under the ADJR Act and are yet to be tested under s75(v) of the Constitution.

Even if the prerequisite of jurisdiction is satisfied and grounds of judicial review are made out, judicial review proceedings are limited in their ability to address systemic issues with the use of automated decision-making. To avoid decisions being set aside on judicial review, it is essential that public sector entities adopting automated processes consider issues of legality during design and implementation.

Nicola Johnson and Michelle Harradine discussed judicial review of automated decisions in the third webinar of our 2024 Administrative Law webinar series.


