Voices: Utah’s willingness to test AI in health care, rather than fear it, is a bold approach that should be emulated

We cannot solve health care’s biggest challenges by doing things the same way we always have.

(Elise Amendola | Associated Press file photo) Generative AI application Doctronic can now renew Utahns' prescriptions without a doctor's approval, through a pilot program facilitated by Utah's Office of Artificial Intelligence Policy.

In the same week OpenAI launched ChatGPT for Health, the state of Utah garnered national attention by becoming the first state to allow artificial intelligence to legally participate in independent medical decision-making.

Through an innovative policy sandbox, Utahns can now have routine prescriptions renewed by an AI that reviews their personal medical history and safety profile, escalating to a human doctor only when necessary. This decision has since received wide-ranging reactions, including significant pushback. The Utah Medical Association raised concerns about AI “making decisions that should be made by a provider,” and the American Medical Association warned of “serious risks” without physician input.

My experience working for the past four years on AI implementation at UC San Diego Health, a system leading the way in AI, gives me a different perspective.

The Utah Medical Association’s and AMA’s concerns have merit, but they defend a status quo that is failing too many patients. Instead of enacting stifling regulation, states should allow responsible innovators policy leeway to try to fix pressing problems — specifically, rising health care costs and limited access to primary care. These problems are crushing Americans and are projected to only get worse. Using AI to handle low-risk medication renewals is low-hanging fruit for reducing medication non-adherence — a major driver of disease progression, worse outcomes and billions of dollars in avoidable costs each year.

Critically, Utah designed this pilot with cautious and thoughtful guardrails. It excludes riskier medications, including narcotics, stimulants, injectables and antibiotics for short-term illness. It also requires strict identity verification and sets a low bar for escalating atypical renewal requests to physician review. Further, the agreement is limited to a pilot phase of one year with a three-phase surveillance period: For the first 250 patients, every AI decision is reviewed by a human doctor before the prescription is sent to the pharmacy. This is followed by a retrospective human review for the next 1,000 patients and an ongoing audit of 5-10% of all renewals.

AI skeptics seem to assume that if patients don’t get AI-enabled care, they will instead receive high-quality mainstream care — a system where every prescription refill involves a thorough conversation with a well-rested doctor who knows the patient’s entire history and is up to date on the evidence. That ignores the reality on the ground. Real health care is often characterized by long wait times for appointments and brief transaction-focused visits. As argued recently in the New England Journal of Medicine, we must compare AI to the health care system we have, not the one we wish we had.

If AI can safely handle a routine refill for a low-risk medication while evaluating for interactions and contraindications with a precision that never fatigues, it frees up human capacity for higher-impact tasks that need a human touch.

Dr. Bob Wachter, chair of medicine at UCSF, acknowledges the gravity of granting AI a privilege we have previously reserved for doctors, but does so with less alarm than his colleagues. He points out that we have long accepted increasing patient autonomy, reclassifying drugs like ibuprofen from prescription-only to over-the-counter. Moreover, we currently tolerate cursory telehealth consultations where patients obtain prescriptions for conditions like obesity or hair loss from physicians they have never met.

“Assuming an AI tool goes through rigorous assessment and certification,” Wachter notes, “is bypassing the physician entirely any riskier than that? I doubt it.” In a system that is chronically overextended, he argues we need to rapidly learn to differentiate between “safe AI-enabled care and reckless self-diagnosis and therapy.”

Finally, Utah’s approach respects that AI in health care must be built on principles of transparency and patient autonomy. “The state doesn’t take a position on whether people should like this business,” noted Zach Boyd, director of Utah’s Office of AI Policy. The state’s role is to ensure safety and transparency, allowing patients to make informed decisions. Patients should know when AI is involved in their care and have the right to opt in or out based on their own values. Doctronic and the state have committed to such transparency and plan to make the results of this pilot public.

“We hope to be sharing this broadly as we go,” said Matt Pavelle, co-CEO of Doctronic. “We want everyone to understand what’s going well, what’s not going well.”

Given the scarcity of data on AI impact in real-world settings, this pilot is sorely needed.

The fate of Doctronic and the state of Utah’s pioneering experiment remains to be seen, but thoughtful, cautious evaluation of these tools is necessary and timely. We cannot solve health care’s biggest challenges by doing things the same way we always have. Utah’s willingness to test the future, rather than fear it, is an approach that should be emulated.

Matthew Allen is from Cedar City and is a senior medical student at the UC San Diego School of Medicine. He has published on implementation and evaluation of AI in health care and works with UCSD’s Chief Health AI Officer.
