By Cade Metz and Craig S. Smith
March 21, 2019
Last year, the Food
and Drug Administration approved a device that can capture an image of your
retina and automatically detect signs of diabetic blindness.
This new breed of
artificial intelligence technology is rapidly spreading across the medical
field, as scientists develop systems that can identify signs of illness and disease in
a wide variety of images, from X-rays of the lungs to C.A.T. scans
of the brain. These systems promise to help doctors evaluate patients more
efficiently, and less expensively, than in the past.
Similar forms of
artificial intelligence are likely to move beyond hospitals into the computer
systems used by health care regulators, billing companies and insurance
providers. Just as A.I. will help doctors check your eyes, lungs and other
organs, it will help insurance providers determine reimbursement payments and
policy fees.
Ideally, such systems
would improve the efficiency of the health care system. But they may carry
unintended consequences, a group of researchers at Harvard and M.I.T. warns.
In a paper published
on Thursday in the journal Science,
the researchers raise the prospect of “adversarial attacks” — manipulations
that can change the behavior of A.I. systems using tiny pieces of digital data.
By changing a few pixels on a lung scan, for instance, someone could fool an
A.I. system into seeing an illness that is not really there, or not seeing one
that is.
Software developers
and regulators must consider such scenarios, as they build and evaluate A.I.
technologies in the years to come, the authors argue. The concern is less that
hackers might cause patients to be misdiagnosed, although that potential exists.
More likely is that doctors, hospitals and other organizations could manipulate
the A.I. in billing or insurance software in an effort to maximize the money
coming their way.
Samuel Finlayson, a
researcher at Harvard Medical School and M.I.T. and one of the authors of the
paper, warned that because so much money changes hands across the health care
industry, stakeholders are already bilking the system by subtly changing
billing codes and other data in computer systems that track health care visits.
A.I. could exacerbate the problem.
“The inherent
ambiguity in medical information, coupled with often-competing financial incentives,
allows for high-stakes decisions to swing on very subtle bits of information,”
he said.
The new paper adds to
a growing sense of concern about the possibility of such attacks, which could
be aimed at everything from face recognition services and driverless cars to iris scanners and fingerprint
readers.
An adversarial attack
exploits a fundamental aspect of the way many A.I. systems are designed and
built. Increasingly, A.I. is driven by neural networks, complex mathematical systems that
learn tasks largely on their own by analyzing vast amounts of data.
By analyzing
thousands of eye scans, for instance, a neural network can learn to detect
signs of diabetic blindness. This “machine learning” happens on such an
enormous scale — human behavior is defined by countless disparate pieces of
data — that it can produce unexpected behavior of its own.
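To make the mechanism concrete, here is a minimal sketch, in Python with the PyTorch library, of the kind of gradient-based pixel perturbation the researchers describe. The tiny two-class model, the random placeholder image and the epsilon value are illustrative assumptions for this sketch, not the diagnostic systems studied in the paper.

```python
# A minimal sketch of a gradient-sign adversarial perturbation.
# The model, image and labels below are placeholders, not a real diagnostic system.
import torch
import torch.nn as nn

# Hypothetical stand-in for a two-class classifier (e.g., "healthy" vs. "diseased").
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder scan
true_label = torch.tensor([0])                           # 0 = "healthy" in this sketch

# Measure how wrong the model would have to be, then ask for gradients
# of that loss with respect to every pixel.
loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()

# Nudge each pixel a tiny amount in the direction that increases the loss.
epsilon = 0.01  # far too small a change for a person to notice
adversarial_image = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The perturbed image can flip the model's output even though it looks
# identical to the original.
print(model(image).argmax(dim=1), model(adversarial_image).argmax(dim=1))
```

In practice, such attacks are run against trained networks with carefully chosen perturbation budgets; the point of the sketch is only that the change to each pixel can be far smaller than anything a human reader of the scan would see.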
In 2016, a team at
Carnegie Mellon used patterns printed
on eyeglass frames to fool face-recognition systems into thinking the wearers
were celebrities. When the researchers wore the frames, the systems mistook
them for famous people, including Milla Jovovich and John Malkovich.
A group of Chinese
researchers pulled a similar trick by projecting infrared light from the
underside of a hat brim onto the face of whoever wore the hat. The light was
invisible to the wearer, but it could trick a face-recognition system into
thinking the wearer was, say, the musician Moby, who is Caucasian, rather than
an Asian scientist.
Researchers have also
warned that adversarial attacks could fool self-driving cars into seeing things
that are not there. By making small changes to street signs, they have duped
cars into detecting a yield sign instead of a stop sign.
Late last year, a
team at N.Y.U.’s Tandon School of Engineering created virtual fingerprints
capable of fooling fingerprint readers 22 percent of the time. In other words,
22 percent of all phones or PCs that used such readers potentially could be
unlocked.
The implications are
profound, given the increasing prevalence of biometric security devices and
other A.I. systems. India has implemented the world’s largest fingerprint-based
identity system, to distribute government stipends and services. Banks are
introducing face-recognition access to A.T.M.s. Companies such as Waymo, which
is owned by the same parent company as Google, are testing self-driving cars on
public roads.
Now, Mr. Finlayson
and his colleagues have raised the same alarm in the medical field: As
regulators, insurance providers and billing companies begin using A.I. in their
software systems, businesses can learn to game the underlying algorithms.
If an insurance
company uses A.I. to evaluate medical scans, for instance, a hospital could
manipulate scans in an effort to boost payouts. If regulators build A.I.
systems to evaluate new technology, device makers could alter images and other
data in an effort to trick the system into granting regulatory approval.
In their paper, the
researchers demonstrated that, by changing a small number of pixels in an image
of a benign skin lesion, a diagnostic A.I. system could be tricked into
identifying the lesion as malignant. Simply rotating the image could also have
the same effect, they found.
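As a rough illustration of the rotation finding, the sketch below, again assuming PyTorch and torchvision with a random two-class model standing in for the dermatology classifier described in the paper, shows how a slightly rotated copy of an image can be produced and compared against the original prediction.

```python
# A sketch of the second manipulation the authors describe: a small rotation
# of the input can change a classifier's output. All values are placeholders.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))  # stand-in classifier
model.eval()

lesion = torch.rand(1, 3, 224, 224)      # placeholder photo of a skin lesion
rotated = TF.rotate(lesion, angle=15.0)  # a slight rotation of the same image

with torch.no_grad():
    print("original prediction:", model(lesion).argmax(dim=1).item())
    print("rotated prediction: ", model(rotated).argmax(dim=1).item())
```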
Small changes to
written descriptions of a patient’s condition also could alter an A.I.
diagnosis: “Alcohol abuse” could produce a different diagnosis than “alcohol
dependence,” and “lumbago” could produce a different diagnosis than “back
pain.”
In turn, changing
such diagnoses one way or another could readily benefit the insurers and health
care agencies that ultimately profit from them. Once A.I. is deeply rooted in
the health care system, the researchers argue, businesses will gradually adopt
behavior that brings in the most money.
The end result could
harm patients, Mr. Finlayson said. Changes that doctors make to medical scans
or other patient data in an effort to satisfy the A.I. used by insurance
companies could end up on a patient’s permanent record and affect decisions
down the road.
Already doctors,
hospitals and other organizations sometimes manipulate the software systems that control the
billions of dollars moving across the industry. Doctors, for instance, have
subtly changed billing codes, describing a simple X-ray as a more complicated
scan, in an effort to boost payouts.
Hamsa Bastani, an
assistant professor at the Wharton Business School at the University of
Pennsylvania, who has studied the manipulation of health care systems, believes
it is a significant problem. “Some of the behavior is unintentional, but not
all of it,” she said.
As a specialist in
machine-learning systems, she questioned whether the introduction of A.I. would
make the problem worse. Carrying out an adversarial attack in the real world is
difficult, and it is still unclear whether regulators and insurance companies will
adopt the kind of machine-learning algorithms that are vulnerable to such
attacks.
But, she added, it’s
worth keeping an eye on. “There are always unintended consequences,
particularly in health care,” she said.
A version of this article appears in print on March 24,
2019, on Page B5 of the New York edition with the
headline: A.I. Can Be a Boon to Medicine That Could Easily Go Rogue
https://www.nytimes.com/2019/03/21/science/health-medicine-artificial-intelligence.html