Case and Commentary

Nov 2025
Peer-Reviewed

How Should We Think About Ambient Listening and Transcription Technologies’ Influences on EHR Documentation and Patient-Clinician Conversations?

Sara Gerke, Dipl-Jur Univ, MA and David A. Simon, PhD, JD, LLM
AMA J Ethics. 2025;27(11):E787-795. doi: 10.1001/amajethics.2025.787.

Abstract

This commentary on a case analyzes how integrating ambient listening and transcription technologies powered by artificial intelligence into the electronic health record documentation process influences documentation practices and clinical encounters. The commentary offers best-practice recommendations for informed consent processes and patient-clinician relationship formation.

Case

XY is a patient at University Health and prepares for an upcoming routine visit with their new physician, Dr M. XY reviewed and completed health questionnaires and signed consent forms prior to their visit. XY did not consent to the use of a new ambient listening and transcription artificial intelligence (AI) technology, which University Health clinicians use to more easily integrate information from patient interviews into a patient’s health record. The consent form for this technology specifies that not consenting will not interfere with the course of care one would receive during a “normal visit.” XY plans to ask Dr M a few questions about cannabis and psychedelic mushroom use, because XY wonders if it is causing some new symptoms, but XY is not comfortable having audio evidence of their substance use recorded, so XY does not consent.

On the day of XY’s visit with Dr M, XY learns from the medical assistant (MA) that the AI recording and transcription technology is already widely in use at University Health and that “most patients consent.” MA is not sure how to respond when XY says, “But the consent form said that refusing to be recorded wouldn’t interfere with my getting care.” MA remains unsure how the visit will proceed without XY’s consent to use the technology and asks Dr M to help explain. Dr M explains to XY, “I rely on this technology to do my documentation and dictation now. Is there something specific you’re concerned about?”

XY feels awkward, hesitates, but finally capitulates and consents to being recorded. XY gets the orders needed for routine health screenings, but when asked by Dr M if they have any more questions or concerns, XY says, “No.”

Commentary

In practice, many clinicians now use ambient listening and transcription technologies powered by AI (“ambient clinical documentation tools” or “tools”), similar to the tool in this case. For example, one large nonprofit health care organization in the United States recently announced it would implement an ambient clinical documentation tool in 40 hospitals and over 600 medical offices.1 Tools like this one typically record and transcribe the patient-clinician encounter and generate a summary of the conversation in a clinical note. In the initial versions of such tools, the recording, transcript, and note were typically first sent to a third-party company for human review. But newer versions skip this extra step2,3: one AI-based tool “securely drafts clinical notes, recording in-office and telehealth patient visits with patient consent . . . and produces a draft note for immediate physician review and completion.”2
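To make this workflow concrete, the following minimal Python sketch models the pipeline these tools follow: record, transcribe, summarize into a draft note, and hold the draft for physician review. All names and stub functions here are hypothetical illustrations, not any vendor’s actual interface.

from dataclasses import dataclass

# Hypothetical sketch of the ambient documentation workflow described
# above; these stubs stand in for proprietary speech-to-text and
# summarization components.

@dataclass
class DraftNote:
    transcript: str
    summary: str
    finalized: bool = False  # stays False until the physician reviews and signs

def transcribe(audio: bytes) -> str:
    # Placeholder for the speech-to-text step.
    return "Clinician: What brings you in today? Patient: ..."

def summarize(transcript: str) -> str:
    # Placeholder for the AI summarization step that drafts the note.
    return "Patient presents for a routine visit; details pending review."

def document_encounter(audio: bytes) -> DraftNote:
    transcript = transcribe(audio)
    # Newer tools skip third-party human review: the draft goes
    # directly to the physician for verification and completion.
    return DraftNote(transcript=transcript, summary=summarize(transcript))

The finalized flag in this sketch reflects the point the vendors cited above emphasize: the note enters the record only after the physician reviews and completes it.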

Ambient clinical documentation tools promise to increase the quality and efficiency of documentation, reduce clinician burnout, and improve the quality of care.2 Because such tools record and transcribe the patient-clinician encounter and summarize the conversation in a note for the clinician’s review, the clinician can spend more time listening to and treating the patient. But for all they promise, these tools also raise several ethical and legal issues, ranging from privacy and security to liability. For example, patients might be uncomfortable with their conversation being recorded, or the ambient clinical documentation tool might make mistakes.4 Thus, in addition to considering the potential benefits of these tools, hospitals, health care facilities, and other stakeholders should carefully consider their risks before adopting them in clinical practice.

Inspired by the case of patient XY and Dr M, this commentary uses the concept of informed consent to evaluate the possible impact of such technologies on the patient-clinician relationship. After discussing informed consent, we focus on the legal protections and limits of informed consent and related laws. We then discuss other ethical issues raised in this case, such as the power imbalance between Dr M and patient XY. We conclude with best-practice recommendations for hospitals and clinicians implementing these AI tools.

Informed Consent

According to the American Medical Association Code of Medical Ethics, “[t]he process of informed consent occurs when communication between a patient and physician results in the patient’s authorization or agreement to undergo a specific medical intervention.”5 A staple of medical practice, informed consent refers to 2 distinct practices. One is legal. It typically requires the physician to disclose the material risks of proposed treatments and alternatives.6 The other is ethical, as informed consent is grounded in the ethical principle of respect for autonomy: patients ought to be able to make decisions about their bodies based on the full information available.

Principles that underlie the legal doctrine of informed consent overlap but are not coextensive with the ethical principles that animate the ethical process of informed consent. Under both, for example, patient XY has an autonomy interest and therefore a right to information about risks related to the treatment that might affect their decision to undergo it. But physicians’ ethical duties to disclose the uses and risks of the ambient clinical documentation tool are likely broader than physicians’ legal duties to do so, if any exist.

Limits of the Legal Doctrine of Informed Consent 

Although a patient-focused concept, the legal doctrine of informed consent does not entitle the patient to every conceivable morsel of information. And the doctrine’s limitations, just like its protections, can affect the patient-clinician relationship.

For instance, the physician often has a legal duty to disclose only those risks that “a reasonable person, in what the physician knows or should know to be the patient’s position, would be likely to attach significance to . . . in deciding whether or not to forego the proposed therapy.”6 With a few exceptions, physicians do not have a duty under the legal doctrine of informed consent to disclose the use of AI technologies to help formulate treatment recommendations.7 If this is correct, it follows that physicians (like Dr M) are also unlikely to have a general duty to disclose to patients (like XY) the use and, ultimately, the risks of ambient clinical documentation tools, which in their current versions play no part in the treating physician’s decision making.

Additionally, the legal doctrine of informed consent typically applies to a “medical or surgical procedure” or “treatment.”8 However, ambient clinical documentation tools seem unlikely to constitute a medical or surgical procedure or treatment that requires disclosure. The reason is that the tools are currently unlikely to be “devices” under federal law as long as they merely transcribe and do not interpret or analyze patient records.9,10 While the Health Data, Technology, and Interoperability (HTI-1) final rule of the Office of the National Coordinator (ONC) for Health Information Technology (IT) recently implemented transparency requirements for certain algorithms that are considered “Predictive Decision Support Interventions” and are part of ONC-certified health IT, these requirements apply to health IT developers and typically do not extend to physicians.11 Finally, as an example of new developments at the state level, California has implemented disclosure requirements for providers using generative AI to generate “patient communications pertaining to patient clinical information,” but these requirements apply only to patient-facing clinical communications and do not apply when a physician reads and reviews the AI-drafted communication.12 In short, the legal doctrine of informed consent does not currently appear to require physicians like Dr M to disclose to patients like XY the uses and risks of an ambient clinical documentation tool.

Yet the likely absence of such a duty under the legal doctrine of informed consent could negatively affect what information physicians voluntarily disclose to patients like XY. For example, without a legal obligation to disclose the tool’s risks (eg, the risk that a mistranscription could inform a future diagnosis), physicians could have less incentive to learn about them, making further discussions with patients like XY about those risks difficult. And if a physician cannot answer basic questions about the tool, the patient might infer that the physician lacks competence, thereby undermining trust and the patient-clinician relationship. If patients do not trust their physician, they might be less likely to share information—including their cannabis and psychedelic drug use, as in the case of XY—or less likely to consent to treatment.

Patients like XY might also be reluctant to share information if they know that these tools sometimes “hallucinate” and if they believe physicians like Dr M do not take the necessary time to identify false or invented information in the AI-generated note, particularly if physicians cannot explain how these tools work.13 Although the risk of incorrect information could also exist when physicians draft their own notes, patients might (perhaps rightly so) believe that the physician has more control than when an ambient clinical documentation tool generates the note for them. This concern can be particularly well-founded in cases in which a tool is insufficiently trained on diverse accents and speech patterns and the patient has a strong non-native English accent, which can lead to inaccuracies in the AI-generated note.14 Additionally, AI algorithms might introduce new forms of bias or distortions, not simply when transcribing but when drawing inferences and summarizing conversations.1 Without an honest and informed patient-clinician interaction, treatment decisions are less likely to advance the patient’s interest and might negatively impact patient outcomes.15

That said, the risks of ambient clinical documentation tools do not always outweigh their benefits. Properly used, such tools might actually improve patient care, despite the weak incentives provided by the legal doctrine of informed consent. For example, patients like XY, despite their misgivings about the technology, might ultimately consent to its use and develop a better, more trusting relationship with their physician than they would have if the physician had been typing on a keyboard or clicking a mouse. If physicians like Dr M experience less burnout from using the technology, they can pay closer attention to their patients. Both of these results could, in turn, translate into better care. Thus, even if there are some risks that these technologies could undermine trust and negatively affect patient care, they will not necessarily do so in every case. And nothing about the legal doctrine of informed consent prohibits disclosures that could help to realize these benefits by informing patients of the use of and risks associated with ambient clinical documentation tools.

Other Legal Protections

Gaps in the legal doctrine of informed consent can be partially closed by other laws, helping to augment trust in the patient-clinician relationship. The Health Insurance Portability and Accountability Act (HIPAA) and the corresponding Privacy Rule, for example, protect the use and disclosure of “individually identifiable health information” (so-called “protected health information” or PHI) by “covered entities,” such as clinicians, hospitals, and payers, as well as their “business associate[s]” who have, for example, agreements with hospitals.16 In the hypothetical case, HIPAA protects XY’s PHI (eg, XY’s diagnosis) because Dr M is a covered entity. And the company that offers the tool is likely a business associate since it provides services (eg, the recording and transcription) on behalf of the covered entity.

Other federal and state laws can also help safeguard patient privacy, dignity, and autonomy. Federal and state wiretapping laws, for example, might require patient consent to recording.17 In US v Hollern, a chiropractor was criminally convicted under the Federal Wiretap Act because he did not obtain proper patient consent: patients signed consent forms authorizing recording and use for medical purposes, but the chiropractor used the recordings to coach other chiropractors on how “to convince the patient to agree to a lengthy course of treatment, preferably paid for in advance.”18 State civil laws can operate in similar ways. In California, for example, third-party platforms used to facilitate communication with patients faced civil liability for “intercepting” data without the patient’s consent.19 In the case of XY, University Health could face criminal charges or civil claims under state wiretapping or privacy laws if it does not obtain informed consent that covers all of the relevant activities. Each of these protections can help to encourage a sense of privacy and trust.

Limitations of Other Legal Protections

Despite the additional protection offered by these laws, they also have limitations that can affect the patient-clinician relationship. For example, although HIPAA protects the use and disclosure of PHI, patients might authorize a covered entity to disclose their PHI in writing, such as for marketing or sale,20,21 even if they do not actually know what they authorize. Additionally, HIPAA does not apply to deidentified data when covered entities remove specific “identifiers” from PHI, such as name, address, phone numbers, and biometrics.22

These legal gaps can affect patient privacy. For instance, XY might sign an agreement that purports to protect their privacy under HIPAA, but University Health might share properly deidentified data with a third party, as HIPAA does not apply in this situation. However, the ability to relatively easily “re-identify” data using other datasets undermines HIPAA’s privacy protections.23 In particular, voice recordings that integrate with other EHR information might need to be “deidentified” using a more sophisticated technology to alter the patient’s voice.24 The other option would be to destroy the recording altogether and keep only a deidentified transcript of the conversation. However, this option could generate other risks, such as an inability of physicians (who rely on memory) to identify errors if the transcript was not (properly) validated before the destruction of the recording. If errors are not fixed and false notes are included in health records—and if clinicians are unable to correct the errors—patients could be harmed over time.
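To illustrate what removing identifiers from a transcript involves, here is a minimal, hypothetical Python sketch of Safe Harbor-style redaction. The patterns and names are illustrative assumptions only: HIPAA’s Safe Harbor method requires removing 18 categories of identifiers, and production systems rely on far more robust techniques than regular expressions, which is one reason re-identification risk persists.

import re

# Hypothetical sketch of identifier redaction applied to a visit
# transcript before sharing. Not a complete or compliant implementation:
# Safe Harbor covers 18 identifier categories, and free-text redaction in
# practice uses far more sophisticated methods than these toy patterns.

REDACTION_PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_transcript(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

if __name__ == "__main__":
    sample = "Patient called 555-123-4567 on 3/14/2025 about results."
    print(redact_transcript(sample))
    # -> "Patient called [PHONE REMOVED] on [DATE REMOVED] about results."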

Patients like XY might worry that these limitations reduce their privacy, which could negatively impact their care and relationship with their physicians. For example, if XY thinks University Health is going to share their data, even if properly deidentified in compliance with HIPAA, they might have second thoughts about sharing information about drug use with Dr M. Similarly, a broad consent form that protects University Health from liability under wiretapping laws could undermine XY’s trust in the privacy of their conversation with Dr M, making XY less likely to provide complete and accurate information.

Framing, Power Asymmetry, and Equity

While legal considerations influence the patient-clinician relationship, the ethical considerations in informed consent could have a broader impact on this relationship. A patient like XY might choose to consent out of fear rather than agreement. For example, XY might fear Dr M will label them “difficult” or be more dismissive of their concerns if they do not consent, which could undermine their trust in Dr M.

How the care team frames the process of consent can exacerbate or allay this concern. For example, in the hypothetical case, MA tells XY that “most patients consent.” When XY asks about a potential decision to refuse to consent to the technology, Dr M’s answer is framed with consent as a default. Rather than explain the risks and ask for XY’s consent to record and use the conversation, Dr M explains the software’s purpose and asks XY “Is there something specific you’re concerned about?” Even if Dr M did not intend to influence XY’s decision, this framing places the burden on the patient to question the physician’s authority. It is also dismissive and feeds into the existing patient-clinician power asymmetry by suggesting that there is nothing to worry about. XY might consent simply to avoid Dr M’s forming an unfavorable impression before treatment begins.

Power asymmetries can disproportionately impact patients with low health literacy, cultural differences, or disadvantaged backgrounds. Patients with low health literacy or those who come from a culture that prizes deference to authority might be hesitant to question a physician,25,26 particularly when a decision is framed as being contrary to the physician’s preference or recommendation.27 In the hypothetical case, for example, XY could be influenced to consent by Dr M’s and MA’s describing the use of the tool as common and typical—as opt-out rather than opt-in. Finally, patients from financially disadvantaged backgrounds might have public insurance, limited resources, or private insurance with limited provider networks, making it difficult for them to locate or see another clinician who does not use an ambient clinical documentation tool. XY might not have access to clinicians other than Dr M because XY lacks reliable transportation or because Dr M is part of a health maintenance organization. The way clinicians like Dr M approach the conversation to obtain consent thus can have a particularly large influence on patients’ decisions in certain contexts.

Best Practice Recommendations

While obtaining informed consent is important for managing risks, it does not address the panoply of risks confronting patients, clinicians, and technology manufacturers. To combat some of these risks, health systems and clinicians should consider implementing and monitoring alternative workflows, ideally to both build trust and respect individual autonomy. For example, providers could implement consent processes that allow patients to watch a short video about the ambient clinical documentation tool that explains its benefits and potential privacy and security risks, as well as how the hospital is actively addressing those risks. Medical assistants like MA and physicians like Dr M should also be available to answer any questions patients like XY might have after watching the video. For example, physicians could explain to patients that they review the notes and check them for any mistakes before finalizing and entering them into the EHR.

Using some of these strategies could help increase trust, even when using an ambient clinical documentation tool is the default option. For instance, a short video explaining to patients how their data would be protected could make them feel safer sharing personal information with the physician. Providers could ensure that they offer internet-based portals through which patients can easily view their medical records,28 including the AI-generated notes that were checked and approved by their physicians. Providers might also inform patients in a message sent through the portal, in simple terms, what rights they have to correct their records and why it is important to do so. At the same time, however, health systems should be careful not to overload the patient with the feeling that the onus is on them to catch errors.

Another key factor in maintaining trust is ensuring that ambient clinical documentation tools actually deliver on their mission to reduce burnout and increase physician attention. If they do, they could enable physicians to better understand, relate to, and empathize with patients on a personal level.29,30 If they do not, physicians will face 3 interrelated problems: decreasing patient trust, flatlining or increasing burnout, and potentially worse patient outcomes.2,3 To that end, providers should implement systems to measure the effects of these tools, for example, by comparing the time physicians spent interacting with patients and completing paperwork before adoption of the ambient clinical documentation tool with the time spent on the same activities afterward.
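As a minimal sketch of what such a measurement system could compute, assuming hypothetical per-physician timing data drawn from EHR audit logs or time-motion observation:

from statistics import mean, stdev

# Minimal sketch of the before/after comparison suggested above. All
# numbers and names are hypothetical; a real evaluation would use larger
# samples and appropriate statistical testing.

# Minutes per visit spent on documentation, per physician (hypothetical).
baseline = {"dr_a": 16.0, "dr_b": 12.5, "dr_c": 19.0, "dr_d": 14.0}
post_adoption = {"dr_a": 9.5, "dr_b": 11.0, "dr_c": 12.0, "dr_d": 13.5}

changes = [post_adoption[p] - baseline[p] for p in baseline]

print(f"Mean change: {mean(changes):+.1f} min/visit")
print(f"SD of change: {stdev(changes):.1f} min/visit")
# A negative mean change suggests less documentation time after adoption,
# which could then be set against time spent interacting with patients.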

References

  1. Kaiser Permanente improves member experience with AI-enabled clinical technology. News release. Kaiser Permanente; August 14, 2024. Accessed June 13, 2025. https://about.kaiserpermanente.org/news/press-release-archive/kaiser-permanente-improves-member-experience-with-ai-enabled-clinical-technology

  2. Nuance Communications. Nuance announces general availability of DAX Copilot embedded in Epic, transforming healthcare experiences with automated clinical documentation. PR Newswire®. January 18, 2024. Accessed October 10, 2025. https://www.prnewswire.com/news-releases/nuance-announces-general-availability-of-dax-copilot-embedded-in-epic-transforming-healthcare-experiences-with-automated-clinical-documentation-302037590.html

  3. Trang B. As AI promises to revolutionize medical note-taking, concerns mount about accuracy and harm. STAT. March 28, 2023. Accessed June 13, 2025. https://www.statnews.com/2023/03/28/hospitals-ai-artificial-intelligence-health-microsoft-nuance-notes/

  4. Gerke S, Simon DA, Roman BR. Liability risks of ambient clinical workflows with artificial intelligence for clinicians, hospitals, and manufacturers. JCO Oncol Pract. 2025:OP2401060.

  5. American Medical Association. Opinion 2.1.1 Informed consent. Code of Medical Ethics. Accessed June 13, 2025. https://code-medical-ethics.ama-assn.org/ethics-opinions/informed-consent

  6. Canterbury v Spence, 464 F2d 772, 780 (DC Cir 1972).

  7. Cohen IG. Informed consent and medical artificial intelligence: what to tell the patient? Georgetown Law J. 2019;108(6):1425-1470.

  8. Norwood Hospital v Munoz, 564 NE2d 1017, 1021 (Mass 1991).

  9. Center for Devices and Radiological Health; Center for Biologics Evaluation and Research. Changes to existing medical software policies resulting from section 3060 of the 21st Century Cures Act: guidance for industry and Food and Drug Administration staff. US Food and Drug Administration; 2019. Accessed October 17, 2025. https://www.fda.gov/media/109622/download

  10. Uche-Anya EN, Gerke S, Berzin TM. Video endoscopy as big data: balancing privacy and progress in gastroenterology. Am J Gastroenterol. 2024;119(4):600-605.

  11. Office of the National Coordinator for Health Information Technology, US Department of Health and Human Services. Health data, technology, and interoperability: certification program updates, algorithm transparency, and information sharing; final rule. Fed Regist. 2024;89(6):1192-1438.

  12. Health Care Services: Artificial Intelligence, AB-3030 (Cal 2024).

  13. Associated Press. Hospitals’ AI transcription tool invents things no one ever said, researchers say. Medpage Today. October 28, 2024. Accessed June 15, 2025. https://www.medpagetoday.com/practicemanagement/informationtechnology/112610

  14. Hutiri WT, Ding AY. Bias in automated speaker recognition. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery; 2022:230-247.

  15. Birkhäuer J, Gaab J, Kossowsky J, et al. Trust in the health care professional and health outcome: a meta-analysis. PLoS One. 2017;12(2):e0170988.

  16. Administrative Data Standards and Related Requirements, Definitions. 45 CFR §160.103 (2025).

  17. de Meneses FGA, Mulcahy BP, Brook K. Wiretap laws and the perioperative physician—the current state of affairs. J Clin Anesth. 2023;89:111178.

  18. US v Hollern, 366 F Appx 609-610, 613-614 (6th Cir 2010).

  19. Yockey v Salesforce, 745 F Supp 3d 945 (ND Cal 2024).

  20. Uses and Disclosures of Protected Health Information: General Rules. 45 CFR §164.502(a) (2025).

  21. Uses and Disclosures for Which an Authorization Is Required. 45 CFR §164.508 (2025).

  22. Other Requirements Relating to Uses and Disclosures of Protected Health Information, Implementation Specifications: Requirements for De-Identification of Protected Health Information. 45 CFR §164.514(b)(2) (2025).

  23. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. In: Bohr A, Memarzadeh K, eds. Artificial Intelligence in Healthcare. Academic Press; 2020:295-336.

  24. Understanding identifiable data. Office of Research, Innovation, and Economic Development, Division of Research Integrity and Assurance, University of Tennessee Knoxville. Accessed December 20, 2024. https://research.utk.edu/research-integrity/human-research-protection-program/for-researchers/guidance-2/understanding-identifiable-data/

  25. Menendez ME, van Hoorn BT, Mackert M, Donovan EE, Chen NC, Ring D. Patients with limited health literacy ask fewer questions during office visits with hand surgeons. Clin Orthop Relat Res. 2017;475(5):1291-1297.

  26. Katz MG, Jacobson TA, Veledar E, Kripalani S. Patient literacy and question-asking behavior during the medical encounter: a mixed-methods analysis. J Gen Intern Med. 2007;22(6):782-786.

  27. Gong J, Zhang Y, Yang Z, Huang Y, Feng J, Zhang W. The framing effect in medical decision-making: a review of the literature. Psychol Health Med. 2013;18(6):645-653.

  28. Cohen IG, Gerke S, Kramer DB. Ethical and legal implications of remote monitoring of medical devices. Milbank Q. 2020;98(4):1257-1289.

  29. Safran DG, Karp M, Coltin K, et al. Measuring patients’ experiences with individual primary care physicians. Results of a statewide demonstration project. J Gen Intern Med. 2006;21(1):13-21.

  30. Wu Q, Jin Z, Wang P. The relationship between the physician-patient relationship, physician empathy, and patient trust. J Gen Intern Med. 2022;37(6):1388-1393.

Editor's Note

The case to which this commentary is a response was developed by the editorial staff. Background image by Paul Dolan.

Acknowledgements

This work was funded by the European Union (Grant Agreement no. 101057321).

Conflict of Interest Disclosure

Contributors disclosed no conflicts of interest relevant to the content.

The people and events in this case are fictional. Resemblance to real events or to names of people, living or dead, is entirely coincidental. The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA or European Union.