Healthcare, Law & Ethics – AI in healthcare, legal and ethical issues in Australia

Insights | 17 Nov 2022

By Alison Choy Flannigan, Partner, Co-Lead Health & Community, Hall & Wilcox, Australia

The development and use of artificial intelligence (AI) in health and biotechnology is creating opportunities and benefits for health care providers and consumers. Presently, AI is being used in medical fields such as diagnostics, e-health and evidence-based medicine. A prime example is the use of AI in diagnostic imaging, such as thoracic imaging, mammography, and the detection of colorectal polyps.[1]

However, a number of legal, regulatory, ethical, and social issues have arisen with the use of AI in the healthcare sector. The question is: can the law keep pace?

Duty of care and negligence

Liability for injury caused to a resident or patient by AI will depend on the circumstances of the adverse event, but potentially liable parties include:

  • the treating clinician, such as the GP, who relied upon the technology;
  • the developer of the algorithm;
  • the programmer of the software; or
  • the hospital.

Proving causation in negligence under civil liability legislation may be difficult when machine learning occurs in a multi-layered, fluid environment in which the machine itself is influencing the output. Answers may be complex and difficult to find given the legal, regulatory, ethical, and social issues at play.

Additionally, the use of AI in the clinical practice of healthcare may also present challenges regarding the duty of care for health practitioners. For example, do clinicians have to disclose to their patients that they reached a clinical decision using AI? If so, to what extent will clinicians have a responsibility to educate their patients on the complexities of AI?

We are watching closely how the law of negligence and duty of care adapts to this new technology.

Product liability

Product liability laws may also be relevant, including the consumer guarantees under the Australian Consumer Law.

Regulatory changes for software-based medical devices under the Therapeutic Goods Act

The Therapeutic Goods Act 1989 (Cth) (Therapeutic Goods Act) regulates software-based medical devices, including software that functions as a medical device in its own right and software that controls or interacts with a medical device.

On 25 February 2021, the Therapeutic Goods Administration (TGA) implemented reforms to the regulation of software-based medical devices, including new classification rules for software-based medical devices according to their potential to cause harm through the provision of incorrect information.[2]

The changes include:

  • clarifying the boundary of regulated software products (including ‘carve outs’);
  • introducing new classification rules; and
  • providing updates to the essential principles to clearly express the requirements for software-based medical devices.

Certain software-based medical devices have been ‘carved out’ from the scope of TGA regulation, either by exclusion or exemption:

  • exclusion means that the devices are completely unregulated by the TGA; and
  • exemption means that the TGA retains some oversight, such as for advertising and adverse event notification; however, registration of the device on the Australian Register of Therapeutic Goods (ARTG) is not required.

Certain clinical decision support systems have been exempted.

Excluded products include:

  • consumer health products involved in prevention, management and follow-up that do not provide specific treatment or treatment suggestions;
  • enabling technology for telehealth, healthcare or dispensing;
  • digitisation of paper-based or other published clinical rules or data, including simple calculators and electronic patient records;
  • population-based analytics; and
  • laboratory information management systems.

In August 2021, the TGA published guidance titled ‘Regulatory changes for software based medical devices’.

In Australia, the Therapeutic Goods Act defines ‘therapeutic goods’ and ‘medical devices’ very broadly, particularly if therapeutic claims are made.

Section 41BD of the Therapeutic Goods Act defines ‘medical device’ as:

  1. any instrument, apparatus, appliance, software, implant, reagent, material, or other article (whether used alone or in combination, and including the software necessary for its proper application) intended, by the person under whose name it is or is to be supplied, to be used for human beings for the purpose of one or more of the following:
    1. diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of disease;
    2. diagnosis, monitoring, treatment, alleviation of or compensation for an injury or disability;
    3. investigation, replacement, or modification of the anatomy or of a physiological or pathological process or state;
    4. control or support of conception;
    5. in vitro examination of a specimen derived from the human body for a specific medical purpose;

and that does not achieve its principal intended action in or on the human body by pharmacological, immunological, or metabolic means, but that may be assisted in its function by such means;

According to TGA Guidelines[3], software will be considered a medical device where its intended medical purpose includes one or more of the following:

  • diagnosis, prevention, monitoring, prediction, prognosis or treatment of a disease, injury or disability;
  • compensation for an injury or disability;
  • investigation, replacement, or modification of the anatomy or of a physiological process or state; or
  • to control or support conception.

The term ‘Software as a Medical Device’ (SaMD) includes software that is an accessory to, or controls, a medical device. SaMD must be included on the ARTG before it is supplied in Australia, unless an exemption applies (such as for a clinical trial or under the Special Access Scheme (SAS)).

Presently, the TGA regulates software under the existing medical device framework.

One of the main regulatory hurdles with registration of AI is that it is fluid and constantly changing, whereas TGA review of medical devices is currently based upon assessment of a pre-market product at a fixed point in time. The traditional framework of medical device regulation is not designed for adaptive artificial intelligence and machine learning techniques.

Privacy

The digitisation of medical devices may help improve the benefit and functionality of the device. However, SaMDs also present an increased risk of cyber threats that can potentially harm patients. As recognised by the TGA Guidelines on medical device cyber security (TGA Cyber Security Guidelines)[4], these risks include:

  • denial of intended service or therapy;
  • alteration of device function to directly cause patient harm; and
  • loss of privacy or alteration of personal health data.

The Therapeutic Goods Act currently aims to mitigate the risk of cyber threats through regulatory controls, such as compliance with the Essential Principles. According to the TGA Cyber Security Guidelines, the Essential Principles require that a manufacturer minimise the risks associated with the design, long-term safety, and use of the device, which implicitly includes minimisation of cyber security risk.

Manufacturers of SaMDs must also consider their obligations under privacy legislation, including the Office of the Australian Information Commissioner’s (OAIC) Notifiable Data Breach Scheme under the Privacy Act 1988 (Cth).

However, as the complexity of AI and medical devices evolves, it may be necessary to have regulatory controls that specifically address cyber security risk in SaMDs.

Ethical issues

There have been a number of working groups established to discuss ethical issues concerning the use of AI in healthcare.

In 2017, the World Health Organisation and its Collaborating Centre at the University of Miami organised an international consultation on the subject. A theme issue of the WHO Bulletin devoted to big data, machine learning and AI was published in 2020.

The European Group on Ethics in Science and New Technologies published a ‘Statement on Artificial Intelligence, Robotics and Autonomous Systems’ (the Statement) in March 2018.

Further, in February 2020, the European Commission published a report on the safety and liability implications of AI, the Internet of Things and robotics (the Report).[5]

While the Report argued that ‘the existing Union and national liability laws are able to cope with emerging technologies’, it also identified some challenges raised by AI that require adjustments to the current regulatory framework.

Australia is not a member of the EU; however, its therapeutic goods regulation is more closely aligned with the EU than with the US.

The Statement proposed a set of basic principles and democratic prerequisites, based on the fundamental values laid down in the EU Treaties and in the EU Charter of Fundamental Rights. These principles and our commentary are set out below.

Human dignity: the principle of human dignity, understood as the recognition of the inherent human state of being worthy of respect, must not be violated by ‘autonomous’ techniques. It implies that there have to be (legal) limits to the ways in which people can be led to believe that they are dealing with human beings when in fact they are dealing with algorithms and smart machines.

Should we be transparent in telling people that they are interfacing with AI?

Autonomy: the principle of autonomy implies the freedom of human beings to set their own standards. Technology must respect humans’ choice of whether and when to delegate decisions and actions to it.

What should we delegate to machines? Arguably, the best care involves the human touch, and people should come first.

Responsibility: autonomous systems should only be developed and used in ways that serve the global social and environmental good. Applications of AI and robotics should not pose unacceptable risks of harm to human beings.

This is consistent with the principle that we should do no harm.

Justice, equity, and solidarity: AI should contribute to global justice and equal access.

It is important to grant equity of access and that the benefits of AI not only be provided to those countries or people who can pay for the technology.

Democracy: key decisions should be subject to democratic debate and public engagement.

The use of AI should be done in accordance with community expectations and standards.

Rule of law and accountability: the rule of law, access to justice and the right of redress and a fair trial should provide the necessary framework for ensuring the observance of human rights standards and any AI-specific regulation.

There should be adequate compensation for negligence.

Security, safety, bodily and mental integrity: safety and security of autonomous systems includes external safety for the environment and users; reliability and internal robustness (e.g., against hacking); and emotional safety with respect to human-machine interaction.

The use of AI in health care should be appropriately regulated to ensure that it is safe.

Data protection and privacy: autonomous systems must not interfere with the right to privacy of personal information and other human rights, including the right to live free from surveillance.

The protection of privacy and data protection is important.

Sustainability: AI technology must be in line with the human responsibility to ensure the sustainability of mankind and the environment for future generations.

Case law

There is a lack of Australian case law specific to negligence and AI in a health setting.

The ‘Watson for Oncology’ (WFO) clinical decision-support system in the US is an example of possible future challenges with the use of AI in healthcare.[6] WFO uses AI algorithms to assess medical records and assist physicians with selecting cancer treatments for their patients. The software received criticism after a news report alleged that WFO provided ‘unsafe and incorrect treatment recommendations’.[7] According to the news report, WFO was fed hypothetical, or ‘synthetic’, patient data by doctors at the Memorial Sloan Kettering (MSK) Cancer Centre. As such, it was argued that WFO was biased towards MSK’s treatment options.

The Chief Medical Officer of the developer of WFO, Dr Nathan Levitan, has since addressed these criticisms. According to Dr Levitan, the ‘unsafe and incorrect’ recommendations were identified by IBM’s quality management system and corrected before ever reaching a patient. Additionally, Dr Levitan argued that the use of ‘synthetic’ patient data is necessary to ensure that recommended treatment options reflect current practice.[8]

Commentary

An identified issue with AI is bias arising from the data used and the assumptions made by developers. Further, education is required for medical practitioners to understand the use and limitations of AI in health care. In addition, it is recommended that companies that develop AI for use in the health care sector have multi-disciplinary clinical governance committees to oversee the development of the product from a clinical point of view. Ultimately, it is the treating clinician’s responsibility to decide treatment options for patients, using AI as a tool.

It will be interesting to observe case law developing in relation to liability for medical negligence and AI. Who will be responsible for medical negligence when AI causes the injury? The clinician, the company that developed the software, the programmers, or the data analytics specialists?

[1] Barua et al, ‘Artificial intelligence for polyp detection during colonoscopy: a systematic review and meta-analysis’ (2021) 53(3) Endoscopy 277.
[2] Therapeutic Goods Legislation Amendment (2019 Measures No. 1) Regulations 2019 (Cth).
[3] TGA, ‘How the TGA regulates software-based medical devices’.
[4] TGA, ‘Medical device cyber security guidance for industry’.
[5] European Commission, ‘Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics’.
[6] ‘Ethical and legal challenges of artificial intelligence-driven healthcare’.
[7] ‘IBM’s Watson supercomputer recommended “unsafe and incorrect” cancer treatments, internal documents show’.
[8] ‘Confronting the Criticisms Facing Watson for Oncology’.

