AI in medical care: A map for clinical practice and patient protection

Introduction

Interest in artificial intelligence (AI) within the medical field, already rising, has grown exponentially since the introduction of ChatGPT in November 2022. The application providing a user interface to the ChatGPT large language model acquired a record 100 million users within two months of its release and currently receives a million visitors per month.1,2 While ChatGPT’s release made decades of AI advances widely accessible, the medical field has utilized AI functionalities since the 1940s, including automated analytics, data synthesis, and optimization strategies.3–5 These early efforts have gained impact in the past 15 years at the nexus of complementary advances: big data analytics; cloud computing, with its associated gains in memory and storage; and vast quantities of curated healthcare data from electronic health records and other emerging sources.6–11


AI in Medicine: Current and Future Uses

Although the scope of AI is rapidly expanding, common uses in medicine have included risk stratification of patients based on personal or medical profiles,12–14 identification of abnormal laboratory or imaging findings and alerts to patients, physicians, and other health professionals,15,16 diagnosis of conditions based on available clinical information,17,18 and speech-to-text tools for clinical documentation.19 For example, cardiac risk stratification, sepsis alerts, and drug-drug interaction pop-up boxes are all forms of AI. Thus, artificial intelligence is already highly integrated within everyday clinical practice.

AI also offers promising avenues to enhance medical education and research. It has the potential to simulate patient scenarios, synthesize large quantities of information, and recommend diagrams, references, and other learning tools.2 Medical researchers have begun using ChatGPT and other forms of AI to analyze large quantities of data, including unstructured text from articles, and even to generate research hypotheses.20–23 The uses for AI in clinical care will certainly continue to expand as well; the exponential growth in mobile health applications, telemedicine, and personalized care is well underway.24–29 As AI moves into new areas of medicine, we can anticipate new challenges and dilemmas.


Challenges and Concerns

AI has the same potential to transform the medical field as other industries, but many questions remain regarding ethics, regulation, and medico-legal issues, among others.30,31

Ethics

Early experiences with ChatGPT have further highlighted the well-studied ‘black box’ challenge of AI: users have difficulty understanding AI outputs, and developers have difficulty explaining what those outputs are based on.32 In medicine, this challenge raises concerns about clinicians’ confidence in incorporating AI outputs into their clinical judgment and their ability to explain decisions and guidance to their patients.31 Protecting patient autonomy and allowing informed consent, with transparency about the use and involvement of AI in care, should be a guiding principle.33,34


Regulation

Regulatory issues have been at the forefront of discussions about AI, with congressional representatives holding hearings on the topic and leaders of AI firms imploring legislators to impose greater regulation on their own technology. Bias can result when unrepresentative data are used to train AI models, exacerbating existing inequities.4 One documented example is models assigning positive and negative sentiments to names associated with particular racial groups; there have been many such issues, and more can be anticipated in the future.35


Responsibility for Care

Finally, the use of automated analytics and decision-making in medicine makes the responsibility, and liability, for care more ambiguous.30 For example, suppose a large language model like ChatGPT reviews a chart and concludes that a patient has no history of diabetes, and the patient also recalls no such history, but an elevated glucose measurement is actually buried among old laboratory results. Would the clinician be responsible for the discrepancy? We will need to develop standards of care for the use of AI in clinical work.


Winners and Losers in an Information Revolution

In addition to the concerns and challenges noted above, we must continually evaluate who benefits and who may be harmed by AI’s integration into medicine. As with all technological advancements, not every individual or group stands to benefit.36–38 Media outlets have begun to report accounts of writers, editors, and other professionals having their roles eliminated and replaced by ChatGPT, and restructuring can be expected within the medical profession as well.39–41 There will be pressure on physicians, nurses, and other healthcare professionals to do more with less by using AI to streamline decision-making at the expense of patient care, and clinicians must guard against any overemphasis on efficiency and revenue that does not directly benefit patients. This is particularly true because physician ownership of practices has dropped from 72% in 1988 to a mere 24% in 2022, meaning that non-clinicians and individuals with business interests have more input than ever in patient care and in decisions about what tasks can, or should, be automated.42,43

We have already seen extreme examples of AI being used to undermine patient care, such as reports of major insurers using AI to automate insurance denials and having physician ‘reviewers’ sign the denials without adequate time for review.44 Concerns regarding staffing ratios in health facilities and their impact on patient care were being raised before COVID-19 and have only increased since.45–47 The expansion of AI will raise new ethical and patient safety issues that cannot all be foreseen.


Physicians and Nurses Can Ensure Patients Are Winners

In the setting of new and unpredictable shifts in health care delivery, physicians and other clinicians must champion patients’ needs respectfully but firmly. Patients cannot reasonably be expected to solve their own collective action problem and advocate for AI protections, staffing, or other needed regulations in healthcare.48,49 Nurses have done admirable work advocating for safe staffing, and physicians are well positioned to understand both the potential and the pitfalls of AI’s integration into medicine. Clinicians and public health advocates need to be at the table, both to address the workflows and wellbeing of clinicians and to advocate for patients. To be effective patient advocates, clinicians must work with legislators on protections for patients. Physicians and nurses have faced repercussions for such advocacy, up to and including losing their jobs, as in the case of Dr. Ming Lin, who was fired for advocating for patients during COVID-19.50 All clinicians need to be able to speak up freely about the role of AI and regulatory issues relevant to patient safety.


A Path Forward: Understand AI and Protect Patients

Physicians have guided modern medicine since the founding of the first medical schools and general hospitals in the 1700s, and we will guide our profession through this new era as well. Let us learn from those who have developed artificial intelligence. Used well, the potential is incredible, and, unlike medical education, the learning is often free and widely distributed. Those willing to do the work can have the knowledge, and those with the knowledge will have the most compelling voices for patient advocacy in this new era of AI in medicine. Those pioneering recent advances in artificial intelligence will need to learn from healthcare professionals as well. Our ethical principles, understanding of humanity, and commitment to patients will be a cornerstone for AI in medicine.
