Jessica Kim Cohen
October 26, 2019 01:00 AM
Artificial intelligence can diagnose diseases from medical images on par with healthcare
professionals. It can outperform
radiologists when screening for lung cancer. And it can even detect
post-traumatic stress disorder in veterans by analyzing voice
recordings.
It sounds like a page from science
fiction—but studies issued during the past year alone have claimed AI can do
all of the above, and more.
Early findings like those are
raising interest in AI’s potential to overhaul patient care as we know it. Top
healthcare CEOs are eyeing the space: nearly 90% said they’ve seen AI
developers targeting clinical practice, according to a Power Panel survey Modern Healthcare conducted this year.
Yet despite AI’s performance
becoming more advanced—with accuracy rates for diagnosing and detecting disease
climbing higher and higher—a question remains: What happens if something goes
wrong?
“We do many different projects
related to use of AI,” said Dr. Matthew Lungren of his work as associate
director of the Stanford Center for Artificial Intelligence in Medicine and
Imaging. That includes working on AI systems that can detect brain aneurysms
and diagnose appendicitis. “But just because we can develop things, doesn’t
necessarily mean that we have a solid road map for deployments,” he added.
That’s particularly true when it
comes to determining liability, or who’s responsible should patient harm arise
from a decision made by an AI system.
Liability hasn’t been explored in
depth, said Lungren, who, with co-authors from Stanford University and Stanford
Law School, penned a commentary on medical malpractice concerns with AI for the
Harvard Journal of Law & Technology this year.
These types of technologies are
still a ways off from being deployed in hospitals. According to a recent report from the American Hospital Association’s Center
for Health Innovation, AI technologies that help diagnose disease and recommend
customized treatment plans are still in development.
“Because the technology is so new,
there’s no completely analogous case precedent that you would apply to this,”
said Zach Harned, a Stanford Law School student who co-authored the article
published in the Harvard Journal. “But there are some interesting analogues you
might be able to draw.”
There haven’t been significant
court cases litigating AI in medicine yet, according to legal experts who spoke
with Modern Healthcare.
But courts might point to legal
doctrines like those applying to medical malpractice; respondeat superior,
the doctrine often cited to say an employer is responsible for acts of their
employees; or those applying to product liability to implicate physicians,
hospitals or vendors, respectively.
“This is quite unsettled,”
acknowledged Nicholson Price, a law professor at University of Michigan Law
School. “We can make some guesses, we can make some predictions, we can make
some analogies—but it’s still TBD.”
Five steps to limit liability risks with AI
1. Conduct thorough risk assessments
of the system and the vendor, including evaluating the underlying model and
testing it on the hospital’s own data
2. Build comprehensive contracts,
outlining who will assume liability in given scenarios and requirements for
appropriate use
3. Follow labeling provided by the
vendor to ensure the system is being used for its intended purpose and
according to its FDA clearance
4. Establish practices for physicians
to follow if they disagree with the AI, to ensure the physician’s judgment
is still the main mechanism behind care decisions
5. Review the system’s performance on
a continuing basis and in partnership with
the vendor
Physicians and malpractice
Despite its advances, AI is in most
cases used as a tool for advice, not decision-making. That means a patient might
be able to sue a physician for malpractice, or negligence, if the provider
makes an incorrect treatment decision, even if it was suggested by an AI
system.
That’s because physicians are
typically expected to take responsibility for patient treatment decisions, said
Rebecca Cady, chief risk officer at Children’s National Hospital in Washington,
D.C., and a board member of the AHA’s American Society for Health Care Risk
Management. They’re expected to exercise “independent and reasonable clinical
judgment,” she said, and would “not be able to avoid liability by pointing at
problems with the AI system.”
That puts the physician in a tough
spot, particularly if an AI system recommends a treatment or care management
strategy that deviates from the standard of care. “At least in the short term,
physicians are going to be liable for injuries that arise from their failure to
follow the standard of care,” Price said. “Even if the AI made a good guess,
because you stepped outside the standard of care, you may well be liable.”
That suggests the safest way for
physicians to use AI, from a liability perspective, is as a “confirmatory tool”
for existing best practices, rather than as a way to improve care with new
insights, Price and co-authors from Harvard Law School argued in a perspective published in
JAMA this month. As a result, the law may actually encourage
physicians to “minimize the potential value of AI,” they wrote.
However, there are some aspects of
AI that could complicate a traditional malpractice case.
In healthcare, drugmakers and
product vendors are typically protected from liability because they provide
relevant information to a “learned intermediary”—the physician.
“The idea here is that the doctor
knows what’s going on and is making an informed decision,” Price said. But with
certain types of AI, “the doctor doesn’t know what’s going on—because nobody
knows what’s going on,” he added. “That creates an interesting tension with a
set of existing (legal) doctrine.”
What’s in the box?
That hits at the heart of the
so-called black box problem in AI, in which systems are unable to explain how
they crunched data and analyzed information to reach their recommendations.
If a physician can’t verify how an
AI system made its decision, that may make it more difficult to ascribe
liability to the doctor, said Linda Malek, chair of the healthcare and the
privacy and cybersecurity practice groups at law firm Moses & Singer. But that
might not matter to a patient in the wake of a poor outcome.
“Those are really technical
distinctions,” she said of the inner workings of various types of AI systems.
“Your typical patient is not going to understand those.”
One precaution physicians could take
is to check whether their malpractice insurer covers patient care that uses AI
recommendations any differently than other types of care, said I. Glenn Cohen,
faculty director of the Petrie-Flom Center for Health Law Policy, Biotechnology
and Bioethics at Harvard Law School and a co-author on the JAMA perspective
with Price.
“If your hospital tells you ‘we’re
now implementing this,’ it’s one thing for the hospital to check in to see how
they’re covered—but as a physician, you want to make sure you’re covered, as
well,” he said.
Malpractice insurance tends to
focus on particular types of harm, rather than what led those harms to occur,
said Michelle Mello, a law professor who holds a joint appointment at Stanford
Law School and Stanford University School of Medicine. That suggests that
unless an insurer specifically excludes the use of AI from its coverage,
AI-assisted care is covered the same as typical care.
“I haven’t heard of that
happening,” she added.
Hospitals and health systems, too,
could be accused of negligence if an AI system proves ineffective.
There are a few ways to think about
that. It might be similar to a hospital being accused of negligent
credentialing if an organization gives privileges to an unqualified doctor,
Cohen said. It could also be considered analogous to what the industry has seen
in some data breach cases, when a hospital is investigated for not
appropriately vetting a vendor that exposed patient data, according to Malek.
How hospitals should handle it
To avoid that risk, hospitals
should approach AI from two levels.
First, they should ensure
physicians are still the party rendering the ultimate care decision. “AI should
not be a substitute for clinician judgment,” Cady stressed.
Second, they should document that
they’ve done their due diligence in selecting a vendor by conducting thorough
risk assessments of both the manufacturer of the AI system and the AI system
itself before signing any contracts.
Risk assessments could include
evaluating the AI system’s error rate, reviewing the underlying model,
assessing the data the system was trained on, and testing the system on the
organization’s own data to ensure the algorithm works for that hospital’s
specific patient population. Hospitals should also look into whether the system
has been cleared by the Food and Drug Administration, and if not, whether that
opens up risks.
“If the system is not FDA-approved,
the hospital and provider could face claims related to off-label product use,”
Cady said.
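The kind of local testing described above doesn’t have to be elaborate. As a rough illustration only, the Python sketch below compares a model’s outputs against locally confirmed diagnoses to estimate sensitivity, specificity and overall error rate on a hospital’s own patient population; the model interface and data loader are hypothetical placeholders, not any particular vendor’s API.

```python
# Minimal sketch: validating a vendor's diagnostic model on a hospital's own
# labeled cases before deployment. `VendorModel` and `load_local_cases` are
# hypothetical placeholders; only the metric arithmetic is standard.

def evaluate_on_local_data(model, cases):
    """Compare model predictions against locally verified ground-truth labels."""
    tp = fp = tn = fn = 0
    for image, has_disease in cases:          # cases: iterable of (input, bool label)
        predicted = model.predict(image)      # assumed to return True/False
        if predicted and has_disease:
            tp += 1
        elif predicted and not has_disease:
            fp += 1
        elif not predicted and has_disease:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    error_rate = (fp + fn) / max(tp + fp + tn + fn, 1)
    return {"sensitivity": sensitivity,
            "specificity": specificity,
            "error_rate": error_rate}

# Hypothetical usage: compare local results against the vendor's documented figures.
# metrics = evaluate_on_local_data(VendorModel(), load_local_cases())
# if metrics["sensitivity"] < VENDOR_CLAIMED_SENSITIVITY:
#     escalate_to_risk_committee(metrics)
```

Documenting that kind of comparison, however simple, is part of the due-diligence record a hospital can point to later.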
The FDA’s evolving approach to regulating AI technologies
· So far, AI technologies cleared by the Food and Drug Administration have used “locked” algorithms, meaning they don’t continually adapt in response to new data. Instead, they’re manually updated by the manufacturer.
· But in April, Dr. Scott Gottlieb, who was then FDA commissioner, said “continuously learning” algorithms hold a “great deal of promise,” and released a discussion paper on how to best regulate these types of AI.
· The FDA is currently reviewing feedback on the discussion paper, said Bakul Patel, director of the digital health division at the FDA’s Center for Devices and Radiological Health. The agency is working on determining next steps for its regulatory strategy on AI.
· Although the initial discussion paper was released under Gottlieb’s tenure, Patel said he doesn’t expect the FDA’s direction for AI oversight to change with new leadership.
· Separately, the FDA last month issued a draft guidance on how it plans to regulate clinical decision-support software, some of which might include AI, by focusing its oversight on software that both helps manage serious clinical conditions and doesn’t explain how it reaches its recommendations.
The FDA is developing a strategy to
regulate AI technologies. In April, the agency solicited public comment on how it could use pre- and
post-market evaluations to assess the safety of medical AI systems, and last
month it released a new draft guidance outlining how it plans to
regulate clinical decision-support software, some of which might include AI.
Hospitals should stay apprised of
what types of technologies the FDA has said do—and do not—fall under its
regulatory oversight. As part of its guidance on clinical decision-support
software, the FDA has proposed focusing its oversight on black box AI that
doesn’t explain its recommendations, rather than on systems that offer more
insight into their decisions.
Limiting a hospital’s purchases to
AI systems that have been FDA-cleared is “the safest route to go,” Price said.
“But, frankly, that’s a relatively small subset of the technology that’s out
there and being developed,” he added.
And an AI system isn’t a one-time
purchase.
Given that AI products, unlike
traditional software, can continuously adjust how they make decisions in
response to new information, hospitals will need to do ongoing assessments of
the system. That’s something hospitals should consider including in their
contracts: establishing how the hospital and vendor will work together to
monitor and maintain the system.
“Adopting a product like this isn’t
going to be a one-shot deal,” Price said. “It’s going to need to be an ongoing
relationship, and part of that ongoing relationship needs to be figuring out
high-quality ways to measure and improve performance over time.”
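One way to operationalize that ongoing relationship, sketched loosely in Python below, is to keep a rolling log of the AI’s calls against confirmed outcomes and flag when performance slips below the level documented in the vendor’s labeling; the class, window size and alerting hook here are illustrative assumptions rather than an actual hospital or vendor workflow.

```python
# Minimal sketch of ongoing performance monitoring: track recent confirmed
# outcomes against the AI's recommendations and alert when sensitivity drifts
# below the figure in the vendor's labeling. Window, threshold and alerting
# hook are illustrative assumptions.

from collections import deque

class PerformanceMonitor:
    def __init__(self, labeled_sensitivity, window=500):
        self.labeled_sensitivity = labeled_sensitivity
        self.recent = deque(maxlen=window)   # stores (ai_flagged, disease_confirmed)

    def record(self, ai_flagged, disease_confirmed):
        """Log one case once the ground truth is known."""
        self.recent.append((ai_flagged, disease_confirmed))

    def current_sensitivity(self):
        """Share of confirmed-positive cases the AI actually flagged."""
        positives = [flagged for flagged, confirmed in self.recent if confirmed]
        return sum(positives) / len(positives) if positives else None

    def needs_review(self):
        current = self.current_sensitivity()
        return current is not None and current < self.labeled_sensitivity

# Hypothetical usage: trigger a joint review with the vendor when the flag trips.
# monitor = PerformanceMonitor(labeled_sensitivity=0.874)
# monitor.record(ai_flagged=True, disease_confirmed=True)
# if monitor.needs_review():
#     schedule_joint_review_with_vendor(monitor.current_sensitivity())
```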
Vendors and product liability
Companies that develop AI systems
aren’t off the hook, although ascribing liability to a software system, rather
than a person, may be more challenging.
Liability concerns may depend on
what type of AI system the hospital has implemented: assistive or autonomous.
Assistive AI, such as clinical
decision-support systems, could be looked at like a “physician GPS,” with the
physician ultimately in charge, said Lungren, from the Stanford Center for
Artificial Intelligence in Medicine and Imaging. There, physicians will likely
be considered responsible for evaluating the AI’s recommendations and
integrating them into patient care. Autonomous AI, by contrast, refers to systems
like IDx’s IDx-DR, which the FDA indicates is meant for use
without involving a specialist.
IDx officials say the company takes on
liability for the system’s diagnosis of diabetic retinopathy when it
contracts with healthcare organizations. That follows recommendations from the
American Medical Association, which has advocated for developers of autonomous
AI systems to manage liability that arises from misdiagnosis.
“What we set out to do is to create
an autonomous AI for providers who have no experience doing the diabetic eye
exam,” said Dr. Michael Abramoff, IDx’s CEO and founder. “You cannot give them
a tool … that makes a decision that they cannot make by themselves, and then
say, ‘But now you’re liable.’ ”
Contracts also specify that IDx
only assumes liability for the system’s outputs and not for managing the
patient’s care that results from its diagnosis.
And IDx doesn’t claim to be
error-free. IDx-DR correctly identifies diabetic retinopathy that is more than
mild and diabetic macular edema 87.4% of the time, according to the clinical
study the FDA reviewed to evaluate the AI system. Previous studies have
suggested ophthalmologists correctly identify the conditions between one-third
and three-quarters of the time.
“It’s when it’s underperforming
compared to the labeling—to the clinical trial results—that’s where the
liability comes in,” Abramoff said of the AI system.