Legal and ethical issues with the use of AI – including ChatGPT – in healthcare

4 September 2023

AI technology has developed so rapidly in recent years that its use, including ChatGPT, is becoming the norm in many areas of our lives, including healthcare.

There has been a notable surge in text-based artificial intelligence products, including ChatGPT, GPT-4, Bard and other prominent large language models (LLMs).

Many software companies are incorporating AI solutions into their products.

The development and use of artificial intelligence (AI) in health and biotechnology is creating opportunities and benefits for healthcare providers and consumers.

Presently, AI is being used in medical fields such as diagnostics, e-health and evidence-based medicine. A prime example is the use of AI in diagnostic imaging, such as thoracic imaging, mammography and the detection of colorectal polyps during colonoscopy.[1]

However, a number of legal, regulatory, ethical, and social issues have arisen with the use of AI in the healthcare sector. The issue is: can the law keep up with the pace?

The use of AI (such as ChatGPT) and privacy

Health records are classified in Australia as sensitive information for which special privacy protections are in place under the Privacy Act 1988 (Cth) and relevant state and territory legislation, such as the Health Records and Information Privacy Act 2002 (NSW) and the Privacy and Personal Information Protection Act 1998 (NSW) (together, Privacy legislation).

The issue with machine learning tools such as LLMs is that text entered into them may be retained by the provider and used to train or improve the model, placing it beyond the control of the person who entered it.

Therefore, healthcare providers must be especially vigilant to avoid inputting sensitive information, such as health information, into LLMs, and should have policies, procedures and training governing the use of LLMs (including ChatGPT) by staff and contractors, particularly in relation to personal information.

An intentional or unintentional disclosure of health information into LLM technology (such as ChatGPT) could be a notifiable data breach under the Privacy Act.
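As a practical control, and purely as a minimal sketch (the patterns, function names and blocked-terms list below are hypothetical and deliberately simplistic), an organisation might screen prompts for obvious personal or health information before they are sent to any external LLM service, alongside its policies and staff training:

```python
import re

# Illustrative, non-exhaustive patterns for content that should never be sent
# to an external LLM. A real control would be much broader and would supplement,
# not replace, policies, procedures and staff training.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{10,11}\b"),            # Medicare-card or phone-like number strings
    re.compile(r"\bMRN[\s:]*\d+\b", re.I),    # medical record number references
    re.compile(r"\bdate of birth\b", re.I),
    re.compile(r"\bdiagnos(is|ed)\b", re.I),
]


def contains_sensitive_content(prompt: str) -> bool:
    """Return True if any blocked pattern appears in the prompt."""
    return any(pattern.search(prompt) for pattern in BLOCKED_PATTERNS)


def submit_to_llm(prompt: str) -> str:
    """Gate outbound prompts before they reach an external LLM service."""
    if contains_sensitive_content(prompt):
        # Block the request so it can be reviewed under the organisation's privacy policy.
        raise ValueError("Prompt appears to contain personal or health information.")
    # Placeholder: call whichever external LLM API the organisation actually uses here.
    return f"[submitted to external LLM] {prompt}"


if __name__ == "__main__":
    print(submit_to_llm("Summarise our draft policy on after-hours rostering."))
    try:
        submit_to_llm("Patient MRN 483920, diagnosed with type 2 diabetes")
    except ValueError as error:
        print(f"Blocked: {error}")
```

Automated checks of this kind are easily circumvented and are no substitute for the policies, procedures and training referred to above; they simply reduce the risk of an inadvertent disclosure becoming a notifiable data breach.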

Intellectual property

It is important to note that many LLMs such as ChatGPT do not offer protection of your intellectual property. If you grant access to your intellectual property to LLMs and that intellectual property is used to create a new work, you could lose your intellectual property rights.

Therefore, it is important to register your intellectual property (such as patents) where relevant and not input valuable intellectual property into LLMs.

Further, Australian courts have held that copyright does not subsist in computer-generated works and that an AI system cannot be named as the inventor of a patentable invention.

Section 32 of the Copyright Act 1968 (Cth) makes it clear that copyright subsists in the original works of an author who was a ‘qualified person’, defined to include an Australian citizen or a person residing in Australia. Case law has also established that the creation of original works must involve ‘independent intellectual effort’, such that an automated process cannot attract copyright.[2]

In Australia, patents may only be granted for inventions made by humans. The ‘inventor’ listed in a patent application under the Patents Act 1990 (Cth) must be a natural person.

This does not mean that an invention made by an AI system is not capable of being granted a patent. It will be necessary to identify a human ‘inventor’, for example the developer of the AI software.[3] However, the High Court’s comments suggest that an appropriate case for determination may involve a natural person claiming to be the inventor and encountering rejection on the basis that the true inventor is an AI system.

This has significant implications for health and medical research. If AI is used in the creation of intellectual property, research papers and data, proving ownership of the IP becomes critical when the time comes to commercialise it, for example by selling or licensing it to a large medical device or pharmaceutical company, or by listing a biotech start-up on the ASX.

Again, healthcare providers should have policies, procedures and training addressing the significant impact of LLMs (including ChatGPT) on health and medical research.

Duty of care and negligence

Potential liability for injury caused to a resident or patient by AI will depend on the circumstances of the adverse event, but the parties potentially liable may include:

  • the treating clinician, such as the GP, who relied upon the technology;
  • the developer of the algorithm;
  • the programmer of the software; or
  • the hospital.

Proving causation in negligence under civil liability legislation may be difficult when machine learning occurs in a multi-layered, fluid environment in which the machine itself influences the output. Answers may be complex and difficult to find given the legal, regulatory, ethical and social issues at play.

Additionally, the use of AI in the clinical practice of healthcare may also present challenges regarding the duty of care for health practitioners. For example, do clinicians have to disclose to their patients that they reached a clinical decision using AI? If so, to what extent will clinicians have a responsibility to educate their patients on the complexities of AI?

We are watching closely how the law of negligence and duty of care adapts to this new technology.

Product liability

Product liability laws may also be relevant, including consumer guarantees under the Australian Consumer Law.

Regulatory changes for software-based medical devices under the Therapeutic Goods Act

The digitisation of medical devices may help improve the benefit and functionality of the device. However, ‘Software as a Medical Device’ (SaMD) also presents an increased risk of cyber threats that can potentially harm patients. As recognised by the TGA Guidelines on medical device cybersecurity (TGA Cybersecurity Guidelines)[4], these risks include:

  • denial of intended service or therapy;
  • alteration of device function to directly cause patient harm; and
  • loss of privacy or alteration of personal health data.

The Therapeutic Goods Act currently aims to mitigate the risk of cyber threats through regulatory controls, such as compliance with the Essential Principles. According to the TGA Cybersecurity Guidelines, the Essential Principles require that a manufacturer minimise the risks associated with the design, long-term safety, and use of the device, which implicitly includes minimisation of cyber security risk.

Manufacturers of SaMDs must also consider their obligations under privacy legislation, including the Office of the Australian Information Commissioner’s (OAIC) Notifiable Data Breach Scheme under the Privacy Act.

However, as the complexity of AI and medical devices evolves, it may be necessary to introduce regulatory controls that specifically address cyber security risk in SaMD.

The Therapeutic Goods Act 1989 (Cth) regulates software-based medical devices, including software that functions as a medical device in its own right and software that controls or interacts with a medical device. On 25 February 2021, the Therapeutic Goods Administration (TGA) implemented reforms to the regulation of software-based medical devices, including new classification rules for software-based medical devices according to their potential to cause harm through the provision of incorrect information.[5]

The changes include:

  • clarifying the boundary of regulated software products (including ‘carve outs’);
  • introducing new classification rules; and
  • providing updates to the essential principles to clearly express the requirements for software-based medical devices.

Certain software-based medical devices have been ‘carved out’ from the scope of the TGA regulation either by exclusion or exemption:

  • exclusion means that the devices are completely unregulated by the TGA; and
  • exemption means that the TGA retains some oversight for advertising, adverse events and notification. However, registration of the device on the Australian Register of Therapeutic Goods (ARTG) is not required.

Certain clinical decision support systems have been exempted.

Excluded products include:

  • consumer health products involved in prevention, management and follow-up that do not provide specific treatment or treatment suggestions;
  • enabling technology for telehealth, healthcare or dispensing;
  • digitisation of paper-based or other published clinical rules or data, including simple calculators and electronic patient records;
  • population-based analytics; and
  • laboratory information management systems.

In August 2021, the TGA published guidance titled ‘Regulatory changes for software based medical devices’.

In Australia, the Therapeutic Goods Act defines ‘therapeutic goods’ and ‘medical devices’ very broadly, particularly if therapeutic claims are made.

Section 41BD of the Therapeutic Goods Act defines ‘medical device’ as:

  1. any instrument, apparatus, appliance, software, implant, reagent, material, or other article (whether used alone or in combination, and including the software necessary for its proper application) intended, by the person under whose name it is or is to be supplied, to be used for human beings for the purpose of one or more of the following:
    1. diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of disease;
    2. diagnosis, monitoring, treatment, alleviation of or compensation for an injury or disability;
    3. investigation, replacement, or modification of the anatomy or of a physiological or pathological process or state;
    4. control or support of conception;
    5. in vitro examination of a specimen derived from the human body for a specific medical purpose;

and that does not achieve its principal intended action in or on the human body by pharmacological, immunological, or metabolic means, but that may be assisted in its function by such means;

According to TGA Guidelines[6], software will be considered a medical device where its intended medical purpose includes one or more of the following:

  • diagnosis, prevention, monitoring, prediction, prognosis or treatment of a disease, injury or disability;
  • compensation for an injury or disability;
  • investigation, replacement, or modification of the anatomy or of a physiological process or state; or
  • to control or support conception.

The term ‘SaMD’ includes software that is an accessory to, or controls, a medical device. SaMD must be included on the ARTG before it is supplied in Australia unless an exemption applies (such as for a clinical trial or under the Special Access Scheme (SAS)).

Presently, the TGA regulates software under the existing medical device framework. This includes software and mobile apps that meet the definition of ‘medical devices’.[7] For example, the iPredict system, approved in February 2022, can automatically screen people at risk of developing diabetic retinopathy, age-related macular degeneration and glaucoma.[8]

One of the main regulatory hurdles with registration of AI is that it is fluid and constantly changing, whereas the TGA’s review of medical devices is currently based upon a pre-market assessment of a product at a fixed point in time. The traditional framework of medical device regulation is not designed for adaptive artificial intelligence and machine learning techniques.

The TGA has issued guidance materials aimed at facilitating the identification of software-based medical devices and their classification.[9]

Clinical and technical evidence must be provided to establish the safety and performance of a product utilising LLMs, to the same standard as other medical devices. More stringent clinical and technical evidence will be required for products of higher risk.[10]

Technical requirements:[11]

  • Software developers will need to understand and demonstrate the sources and quality of the text inputs used to train and test the model, and in clinical studies, in addition to showing how the data is relevant and appropriate for use on Australian populations (a simple illustration of how such data provenance might be recorded appears after this list).
  • Where no medical purpose or claims are associated with the product using the LLM, or the product does not meet the definition of a medical device in section 41BD of the Therapeutic Goods Act, it is unlikely to be a medical device and will not be regulated by the TGA, but it may still be regulated by other laws, including the Australian Consumer Law.
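By way of illustration only, and not as any format prescribed by the TGA, the sketch below shows one way a developer might keep a structured provenance record for each dataset used to train or test an LLM-based product, so that the sources, quality checks and relevance to Australian populations can be demonstrated during pre-market review. The field names and example values are hypothetical.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class DatasetProvenance:
    """Illustrative record of where training or test text came from and how it was vetted."""
    name: str
    source: str                          # e.g. a licensed, de-identified clinical corpus
    collection_period: str
    licence_or_consent_basis: str
    deidentified: bool
    quality_checks: list = field(default_factory=list)
    australian_population_relevance: str = ""
    used_for: str = "training"           # 'training', 'testing' or 'clinical study'


# Hypothetical example entry for a single dataset
record = DatasetProvenance(
    name="radiology-reports-v2",
    source="De-identified reports licensed from a partner health service",
    collection_period="2018-2022",
    licence_or_consent_basis="Data sharing agreement; ethics approval reference (hypothetical)",
    deidentified=True,
    quality_checks=["duplicate removal", "manual audit of a 5% sample"],
    australian_population_relevance="Collected from Australian metropolitan and regional sites",
    used_for="training",
)

# Export as JSON so the record can accompany the product's technical documentation.
print(json.dumps(asdict(record), indent=2))
```

A register of records like this does not of itself satisfy the TGA’s evidence requirements, but it makes it easier to show, dataset by dataset, where the text came from and why it is appropriate for the intended Australian patient population.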

Ethical issues

There have been a number of working groups established to discuss ethical issues concerning the use of AI in healthcare.

Australia has a range of anti-discrimination laws operating at both the state and federal levels, notably the Age Discrimination Act 2004 (Cth), the Disability Discrimination Act 1992 (Cth), the Racial Discrimination Act 1975 (Cth) and the Sex Discrimination Act 1984 (Cth).[12]

Instances have arisen where AI systems trained on historical data have reproduced biases or prejudices inherent in the original data, as well as imperfections in its collection.[13]

On 5 April 2019, the Minister for Industry, Science and Technology issued a discussion paper with the objective of facilitating discussion about the design, development, deployment and operation of artificial intelligence (AI) systems in Australia.[14] Notably, the paper sought feedback on draft AI ethics principles, intended to be aspirational and to complement (not substitute for) existing AI regulations and practices.[15]

These Principles are as follows:

  • Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
  • Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
  • Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
  • Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
  • Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

Case law

There is a lack of Australian case law specific to negligence and AI in a health setting.

The ‘Watson for Oncology’ (WFO) clinical decision-support system in the US is an example of possible future challenges with the use of AI in healthcare.[16] The WFO uses AI algorithms to assess medical records and assist physicians in selecting cancer treatments for their patients. The software received criticism after a news report (the Report) alleged that the WFO provided ‘unsafe and incorrect’ treatment recommendations.[17] According to the Report, the WFO was fed hypothetical, or ‘synthetic’, patient data by doctors at the Memorial Sloan Kettering (MSK) Cancer Centre. As such, it was argued that the WFO was biased towards MSK’s treatment options.

The Chief Medical Officer of the developer of the WFO, Dr Nathan Levitan, has since addressed these criticisms. According to Dr Levitan, the ‘unsafe and incorrect’ recommendations were identified by IBM’s quality management system and corrected before ever reaching a patient. Additionally, Dr Levitan argued that the use of ‘synthetic’ patient data is necessary to ensure that recommended treatment options reflect current practice.[18]

Commentary

We have identified above material privacy and IP concerns in relation to the use of AI products such as ChatGPT.

An identified issue with AI is bias arising from the data used and the assumptions made by developers. Further, education is required for medical practitioners to understand the use and limitations of AI in healthcare. In addition, it is recommended that companies that develop AI for use in the healthcare sector have multi-disciplinary clinical governance committees that oversee the development of the product from a clinical point of view. Ultimately, it is the treating clinician’s responsibility to decide treatment options for patients using AI as a tool.

It will be interesting to observe case law developing in relation to liability for medical negligence and AI. Who will be responsible for medical negligence when AI causes an injury? The clinician, the company that developed the software, the programmers or the data analytics specialists?

This article first appeared in the Internet Law Bulletin, 2023, Vol 25 No 9.

Footnotes

[1] Barua et al, ‘Artificial intelligence for polyp detection during colonoscopy: a systematic review and meta-analysis’ (2021) 53(3) Endoscopy 277.
[2] Telstra Corporation Limited v Phone Directories Company Pty Ltd [2010] FCAFC 149.
[3] Commissioner of Patents v Thaler [2022] FCAFC 62; Thaler v Commissioner of Patents [2022] HCA Trans 199.
[4] Medical device cyber security guidance for industry.
[5] Therapeutic Goods Legislation Amendment (2019 Measures No. 1) Regulations 2019 (Cth).
[6] How the TGA regulates software-based medical devices.
[7] Noting that mobile apps which are simply sources of information or tools to manage a healthy lifestyle are not medical devices.
[8] Public Summary Arif Systems – iPredict – Automated retinopathy analysis system application software.
[9] Regulatory changes for software based medical devices.
[10] Regulation of software based medical devices.
[11] Regulation of software based medical devices – Artificial Intelligence Chat, Text, and Language.
[12] Australia’s anti-discrimination law.
[13] Discriminating algorithms: 5 times AI showed prejudice.
[14] Australia’s AI Ethics Framework.
[15] Australia’s AI Ethics Principles.
[16] Ethical and legal challenges of artificial intelligence-driven healthcare.
[17] IBM’s Watson supercomputer recommended ‘unsafe and incorrect’ cancer treatments, internal documents show.
[18] Confronting the Criticisms Facing Watson for Oncology.
