Health Law

February 2019

Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?

Hannah R. Sullivan and Scott J. Schweikart, JD, MBE
AMA J Ethics. 2019;21(2):E160-166. doi: 10.1001/amajethics.2019.160.

Abstract

As capabilities of predictive algorithms improve, machine learning will become an important element of physician practice and patient care. Implementation of artificial intelligence (AI) raises complex legal questions regarding health care professionals’ and technology manufacturers’ liability, particularly if they cannot explain recommendations generated by AI technology. The limited literature on liability for innovation provides opportunities to consider possible implications of AI for medical malpractice and products liability, as well as new legal solutions for addressing liability issues surrounding “black-box” medicine.

Liability When Patients Are Injured Through New Technologies

Artificial intelligence (AI) is widely employed in health care, with a recent report showing that 86% of provider organizations, technology vendors, and life science companies use some form of AI.1 AI can be broadly defined as machine intelligence that “performs tasks that normally require human intelligence”2 or “that work[s] to achieve goals.”3 Among the most compelling applications of AI is the use of predictive algorithms in precision medicine. Algorithms in precision medicine guide care by predicting patient risks, making accurate diagnoses, selecting drugs, and even prioritizing patients to preserve or assign limited health resources.4 Significantly, the mechanisms behind such recommendations are unknown and currently undiscoverable; an algorithm that cannot demonstrate the path to its conclusion is ultimately a black box.5,6 The unknowable reasoning of “black-box” AI, often referred to as its opacity, stems from “deep neural networks,” with their “reasoning … embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers.”5 When provided with input data, such as an MRI brain scan, a neural network trained on a large data set can find a “complex underlying pattern in the data”7 and produce an output, such as a tumor classification, but it is incapable of explaining the reasoning that led to its conclusion.7-9 Modeled after the human brain, the neural network learns in similar ways, including through self-teaching. When given additional data, the neural network can modify its decision-making process to produce a more accurate response, without any explanation of how it has done so. As the technology becomes more autonomous with each improvement, the algorithms by which it operates become less intelligible to users and even to the developers who originally programmed it.10
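To make this opacity concrete, the following minimal Python sketch (illustrative only; the data, model size, and names are hypothetical and not drawn from the article or its sources) trains a small neural network on synthetic stand-in “imaging feature” data. The fitted model returns a classification and a confidence score, but its internal state consists only of layers of learned numeric weights, with no human-readable chain of reasoning to inspect.

    # Minimal illustration of "black-box" prediction; assumes scikit-learn is installed.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for extracted imaging features (hypothetical data, 2 classes).
    X, y = make_classification(n_samples=500, n_features=30, random_state=0)

    # A small multilayer network; clinical models are typically far larger and deeper.
    model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
    model.fit(X, y)

    new_case = X[:1]                      # one new "patient" (hypothetical)
    print(model.predict(new_case))        # predicted class, eg [1]
    print(model.predict_proba(new_case))  # confidence only, eg [[0.03 0.97]]

    # The model's "reasoning" is just arrays of learned weights, one per layer;
    # inspecting them does not explain why this case was classified as it was.
    print([w.shape for w in model.coefs_])

Nothing in the printed output or the stored weights answers the question a malpractice inquiry would ask: why this result for this patient.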

Given the opaque nature of black-box AI, key legal questions emerge when possible medical malpractice is caused by such technology. For example, consider a situation in which a black-box AI system assists in detection of breast cancer using mammography data and suggests an erroneous diagnosis, resulting in injury to a patient. Are our legal doctrines of tort liability sufficient to handle medical malpractice resulting from the use of black-box AI? If not, what modifications to traditional tort law might be required to address AI systems involved in medical malpractice?

Traditional Tort Liability

Liability for medical errors falls under tort law. A tort is a civil claim in which a party requests damages for injuries caused by a harmful, wrongful act of another. Patients may recover compensatory and punitive damages from physicians, health care organizations, pharmaceutical companies, and medical device manufacturers if they are injured as a result of the party’s failure to meet judicially accepted standards. Typical tort claims in the realm of medicine and health include medical malpractice (negligence), respondeat superior (vicarious liability), and products liability.

Physician liability: malpractice (negligence). Liability for medical errors falls under a negligence framework, the “most publicly visible legal mechanism” for protecting quality of care, which requires physicians to compensate patients for injuries for which the physician is responsible.11 The legal definition of negligence is “conduct which falls below the standard established by law for the protection of others against unreasonable risk of harm.”12 In judicial determinations, a physician’s actions are judged not against those of a reasonable man, but rather against those of a reasonable physician—with the same knowledge, skills, and expertise—under like circumstances.13 However, courts do not purport to possess the knowledge necessary to determine sound medical judgment. Thus, expert testimony of qualified physicians is required to establish the standard of care, or what is “reasonable to expect of a professional given the state of medical knowledge at the time of the treatment in issue.”14 Given the nature of medical practice, custom is largely dispositive. Expert testimony may be based upon available clinical literature, statements by the Food and Drug Administration (FDA), practice guidelines issued by medical societies (providing a ready-made standard), the Physicians’ Desk Reference, and expert reliance on research findings.11 Standards of care evolve with advances in medical knowledge and technology; new technological developments can therefore create uncertainty for physicians about the current standard of care.

Health care organizations: respondeat superior (vicarious liability). In addition to physician liability, the doctrine of respondeat superior places vicarious liability on employers for the negligent acts of employees acting within the scope of their employment.15 Under this doctrine, “hospitals can be held vicariously liable for the acts of their employees, including physicians, who commit malpractice.”16 Alternatively, hospitals and other health care providers may be held separately negligent for failing to exercise due care in hiring, training, or supervising employees, or for failing to maintain adequate facilities and equipment.17

Manufacturers and pharmaceutical companies: products liability. Under products liability theory, patients are entitled to recovery when they are injured by products that are “not reasonably safe” due to defective design, manufacture, or warning. The relevant law states that manufacturers of prescription drugs and medical devices, those “that may be legally sold or otherwise distributed pursuant only to a health-care provider’s prescription,” are liable for harm to persons caused by defects.18 A product is defectively designed “if the foreseeable risks of harm posed by the drug or medical device are sufficiently great in relation to its foreseeable therapeutic benefits” such that reasonable providers would not prescribe it to “any class of patients.”18 Warnings or instructions are inadequate if they fail to reasonably disclose risks “to prescribing and other health-care providers who are in a position to reduce the risks of harm.”18 The law reflects the FDA’s determination that prescription medical products have inherent and unavoidable risks and thus require physician approval before use. It also emphasizes that the physician plays an important role in patients’ choices.

Thus, a key difference arises when the products liability doctrine is applied to cases involving medicine and health care, in that such cases are typically subject to the learned intermediary doctrine. The learned intermediary doctrine addresses how patient-focused liability doctrines apply to the use of pharmaceuticals and medical devices, wherein physicians intervene between the manufacturer and the ultimate consumer.19 Essentially, the learned intermediary doctrine “prevents plaintiffs from suing medical device manufacturers directly,” as the manufacturer has no duty to the patient directly.16 Under this doctrine, the “physician, rather than the patient, is considered the end consumer of medical devices because the health care provider is in the best position to weigh the risks against the possible benefits of using the device.”16 The physician as end consumer means that manufacturers may fulfill their duty to warn about the potential dangers of their products by providing warnings to the physicians who will be using them. If a physician subsequently fails to properly warn a patient and adequately disclose the risks and benefits associated with the product, it is the physician who will face liability.

Applying Current Liability Doctrines to AI

Applying the aforementioned tort liability schemes to AI technologies is difficult because, as Yavar Bathaee notes, the law “is built on legal doctrines that are focused on human conduct, which when applied to AI, may not function.”20 Matthew Scherer explains that a large source of this difficulty stems from the opaque nature and unforeseeable results of black-box AI. For example, if the designers of AI cannot foresee how it will act after it is released in the world, how can they be held tortiously liable? And if the legal system absolves designers from liability because AI actions are unforeseeable, then injured patients may be left with fewer opportunities for redress.3

One obstacle to fitting black-box AI into current liability schemes is its increased autonomy. According to Mark Chinen, “The more autonomy machines achieve, the more tenuous becomes the strategy of attributing and distributing legal responsibility for their behavior to human beings.”21 As the AI system becomes more autonomous, fewer parties (ie, clinicians, health care organizations, and AI designers) actually have control over it, and legal standards founded on agency, control, and foreseeability collapse—directly impacting opportunities for recovery of damages based on legal theories of negligence and vicarious liability. Additionally, it is challenging to identify a responsible party, as so many different entities—software developers, hardware engineers, designers, and corporations—are involved in creating AI systems. As Scherer notes, it may be unfair to “assign blame to the designer of a component whose work was far-removed in both time and geographic location from the completion and operation of the AI system.”3

Also, there are problems in applying the standard products liability model to AI. One is that, as discussed earlier, an injured patient cannot sue a manufacturer directly because of the learned intermediary doctrine. Additionally, products liability claims in the health care context require that the injuring product be deemed a “medical device.”2,4 The “hardware components” of the AI system would be deemed the “device” for products liability purposes, not the software.16 The legal reasoning for not extending products liability to software is that software, unlike hardware, is “technology that helps healthcare providers make decisions by providing them with information or analysis” and that the final decision of care rests with the health care professional,4 while “blatant hardware defects” would instead be subject to a products liability suit against the manufacturer.16 As AI becomes further integrated into medicine and health care, it becomes clear that current legal standards and doctrines regarding medical malpractice are insufficient. The innovations are unprecedented, and solutions to the problems they present are necessary.

Possible Legal Solutions to Address AI Liability

In light of significant challenges in applying the current tort framework to AI, legal and computer science experts have offered possible solutions that involve modifications to the current law or the creation of new legal doctrines.

AI personhood. One possible solution is to confer “personhood” on the artificially intelligent machine itself, viewing the machine as an independent “person” under the law. Viewing the machine itself as a person resolves agency questions, which are important for analysis of vicarious liability claims (ie, respondeat superior), as the machine will be viewed as the “principal” and no longer as an agent.22 The machine, deemed a principal under this model of personhood, will have burdens and duties of its own and can then be sued directly on any negligence claims. In such instances, the AI system will be required to carry insurance (much as physicians themselves carry medical malpractice insurance), and claims will be paid out from that insurance; the AI system will be deemed a quasi-juridical person and treated the “same as any other physician.”16 Funding for such insurance may come from users of the AI technology, allowing for a “different form of cost-spreading” that promotes fairness, as its focus extends beyond the technology’s creators and encourages users of such technology to also bear some cost.22

Common enterprise liability. A common enterprise theory of liability is another possible solution to harm caused by AI. David Vladeck notes that, instead of assigning fault to a specific person or entity (or trying to determine whether there was fault at all), if an injury is caused by an AI system, then all groups involved in the use and implementation of the AI system should jointly bear some responsibility.22 The benefit of this solution is that all parties involved share the burden and that no finding of fault (which may be impossible because of the black-box nature of AI) is required. Instead, an inference of liability is shared among all relevant parties, allowing injured parties to be made whole.

Modify the standard of care. Another possible solution is to simply modify the duties and standard of care of health care professionals using black-box AI. Nicholson Price suggests a standard that would require facilities and health care professionals to exercise “due care in procedurally evaluating and implementing black-box algorithms.”7 Under this standard of care, facilities and clinicians would have a duty to evaluate black-box algorithms and to validate the algorithmic results.7 Under this model, health care professionals are responsible for harm if they fail to take adequate measures to properly evaluate the black-box AI technologies used in caring for the patient.

Conclusion

The rise of black-box AI and its use in medicine complicates application of existing tort law to claims of malpractice. If a patient is injured through use of an AI technology (black-box AI in particular), current legal models are insufficient to address the realities of these innovations. New legal solutions that craft novel standards and models suited to the nature of AI, such as AI personhood or common enterprise liability, are needed to establish a fair and predictable legal doctrine for AI-related medical malpractice.

 

References

  1. Tata Consultancy Services. Getting smarter by the sector: how 13 global industries use artificial intelligence. https://sites.tcs.com/artificial-intelligence/. Accessed November 27, 2018.

  2. Chung J, Zink A. Hey Watson—can I sue you for malpractice? Examining the liability of artificial intelligence in medicine. Asia Pac J Health Law Ethics. 2018;11(2):51-80.
  3. Scherer MU. Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harv J Law Technol. 2016;29(2):353-400.
  4. Price WN. Artificial intelligence in health care: applications and legal implications. The SciTech Lawyer. 2017;14(1):10-13.
  5. Knight W. The dark secret at the heart of AI. MIT Technology Review. April 11, 2017. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/. Accessed August 15, 2018.

  6. Pande V. Artificial intelligence’s “black box” is nothing to fear. New York Times. January 25, 2018. https://www.nytimes.com/2018/01/25/opinion/artificial-intelligence-black-box.html. Accessed August 15, 2018.

  7. Price WN. Medical malpractice and black-box medicine. In: Big Data, Health Law, and Bioethics. Cambridge, UK: Cambridge University Press; 2018.

  8. Mohsen H, El-Dahshan EA, El-Horbaty EM, Salem AM. Classification using deep learning neural networks for brain tumors. Future Comput Inform J. 2018;3(1):68-71.
  9. Paul JS, Plassard AJ, Landman BA, Fabbri D. Deep learning for brain tumor classification. Proc SPIE Int Soc Opt Eng. 2017;10137:1013710.

  10. Bleicher A. Demystifying the black box that is AI. Scientific American. August 9, 2017. https://www.scientificamerican.com/article/demystifying-the-black-box-that-is-ai/. Accessed November 15, 2018.

  11. Furrow BR, Greaney TL, Johnson SH, Jost T, Schwartz R. Health Law: Cases, Materials, and Problems. 8th ed. St. Paul, MN: West Academic Publishing; 2018.

  12. Restatement (Second) of Torts §282 (Am Law Inst 1965).

  13. Restatement (Third) of Torts §12 (Am Law Inst 2010).

  14. Nowatske v Osterloh, 543 NW2d 265, 272 (Wis 1996).

  15. 27 Am Jur 2d Employment Relationship §356 (Thomson Reuters 2002).

  16. Allain JS. From jeopardy to jaundice: the medical liability implications of Dr. Watson and other artificial intelligence systems. LA Law Rev. 2013;73(4):1049-1079.
  17. Meera T, Phanjoubam M, Nabachandra H. Hospital’s liability in malpractice suits. J Med Soc. 2016;30(1):1-2.
  18. Restatement (Third) of Torts §6 (Am Law Inst 1998).

  19. Marcus v Specific Pharmaceuticals, Inc, 77 NYS2d 508 (1948).

  20. Bathaee Y. The artificial intelligence black box and the failure of intent and causation. Harv J Law Technol. 2018;31(2):889-938.
  21. Chinen MA. The co-evolution of autonomous machines and legal responsibility. Va J Law Technol. 2016;20(2):338-393.
  22. Vladeck DC. Machines without principals: liability rules and artificial intelligence. Wash Law Rev. 2014;89(1):117-150.


Acknowledgements

Hannah R. Sullivan and Scott J. Schweikart contributed equally to this work.

Conflict of Interest Disclosure

The author(s) had no conflicts of interest to disclose. 

The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA.