Researchers penetrated a healthcare organization’s network and used a deep
learning algorithm to fool both humans and an AI system with faked medical images.
April 05,
2019 - Deep learning has been hailed as a revolutionary tool for
supporting faster, more accurate, and more detailed clinical decisions in
radiology.
Almost every day,
researchers are releasing new studies that show the potential
of artificial intelligence to supplement the work of humans, with many models
meeting or surpassing the abilities of highly trained physicians.
But what if curious
researchers – or someone with more nefarious intentions – turned all that power
against the clinicians it is supposed to be helping?
A new study from a team of Israeli researchers shows just how easy it has
become to use deep learning to alter medical images, adding incredibly
realistic cancerous tumors that fool even experienced radiologists the
majority of the time.
The team explains how to infiltrate a typical health system’s picture
archiving and communication system (PACS) and alter MRI or CT scan images
using malware built on a type of machine learning model called a generative
adversarial network (GAN), injecting fake tumors into the patient data or
removing real cancers from it.
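The attack hinges on in-painting: a GAN is trained to fill in a small 3D region of a scan so that a lesion appears (or disappears) and blends seamlessly with the surrounding tissue. As a rough illustration of that idea only – not the authors’ implementation – the Python sketch below pairs a tiny 3D encoder-decoder generator with a patch discriminator; every layer size, hyperparameter, and name here is an assumption.

```python
# Illustrative sketch only (not the study authors' code): a tiny conditional
# GAN that learns to in-paint the masked center of a 3D CT patch, the core
# idea behind seamlessly injecting or removing evidence in a scan.
# All shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder that fills in the masked center of a 32x32x32 patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 32, 4, stride=2, padding=1),  # input: masked patch + mask channel
            nn.LeakyReLU(0.2),
            nn.Conv3d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),
            nn.Tanh(),                                  # intensities normalized to [-1, 1]
        )

    def forward(self, patch, mask):
        # Zero out the region to be synthesized and tell the network where it is.
        return self.net(torch.cat([patch * (1 - mask), mask], dim=1))

class Discriminator(nn.Module):
    """Judges whether a completed patch looks like a real scan region."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(64 * 8 * 8 * 8, 1),
        )

    def forward(self, patch):
        return self.net(patch)

# One adversarial training step on a toy batch standing in for real CT patches.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(4, 1, 32, 32, 32) * 2 - 1             # placeholder data
mask = torch.zeros_like(real)
mask[..., 8:24, 8:24, 8:24] = 1                         # region to in-paint

fake = G(real, mask)

# Discriminator: real patches -> 1, generated patches -> 0.
loss_d = bce(D(real), torch.ones(4, 1)) + bce(D(fake.detach()), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator: fool the discriminator while staying close to the original patch.
loss_g = bce(D(fake), torch.ones(4, 1)) + nn.functional.l1_loss(fake, real)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Trained on real scans, a model of this kind needs little more than the coordinates of a target location in an intercepted image to insert or erase a convincing lesion.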
These “deep fakes” – which are becoming a growing concern in political
and social spheres – could have significant impacts on patient outcomes.
“Since 3D medical
scans provide strong evidence of medical conditions, an attacker with access to
a scan would have the power to change the outcome of the patient’s diagnosis,”
the team explained.
“For example, an
attacker can add or remove evidence of aneurysms, heart disease, blood clots,
infections, arthritis, cartilage problems, torn ligaments or tendons, tumors in
the brain, heart, or spine, and other cancers.”
There are numerous
motivations for conducting this type of attack, the study continues.
Hackers may wish to influence the outcome of an election or topple a political
figure by fabricating a serious health diagnosis. Or they might alter
images on a larger scale and hold the original data for ransom.
Individuals could use
the strategy to commit insurance fraud or hide a murder; researchers or drug
developers could fake their data to confirm a desired result.
Hundreds of commonly used PACS servers have unsecured connections to the
internet that could provide an easy attack vector, the team noted, and the
creativity of healthcare hackers seems to know no bounds.
The researchers conducted a simulated attack on a real hospital’s systems
using a common sub-$50 computer known as the Raspberry Pi.
“The Pi was given a
USB-to-Ethernet adapter, and was configured as a passive network bridge
(without network identifiers),” the team said. “The Pi was also configured as a
hidden Wi-Fi access point for backdoor access.”
“We also printed a 3D
logo of the CT scanner’s manufacturer and glued it to the Pi to make it less
conspicuous.”
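For context, the “passive network bridge” the team describes is a device that forwards every frame between two interfaces unchanged – so it has no addresses of its own on the network – while still seeing all traffic, including unencrypted DICOM transfers between the scanner and the PACS server, as it passes through. The sketch below illustrates that general idea with Scapy’s bridge_and_sniff; the interface names and port are assumptions, and it only observes and forwards traffic – it is not the researchers’ tool.

```python
# Minimal sketch of a transparent (passive) bridge, assuming Scapy and two
# interfaces named "eth0" and "eth1"; it only observes and forwards traffic.
# Every frame is relayed unchanged, which is what keeps the device invisible,
# while unencrypted DICOM traffic (port 104 by default) is visible in transit.
from scapy.all import bridge_and_sniff, TCP

DICOM_PORT = 104  # default DICOM port; real PACS deployments may use others

def note_dicom(pkt):
    """Print a one-line summary of DICOM traffic crossing the bridge."""
    if pkt.haslayer(TCP) and DICOM_PORT in (pkt[TCP].sport, pkt[TCP].dport):
        print(f"DICOM traffic seen: {pkt.summary()}")

def forward(pkt):
    """Return True so every frame is forwarded to the other side unmodified."""
    return True

if __name__ == "__main__":
    # Requires root privileges and two network interfaces; runs until interrupted.
    bridge_and_sniff("eth0", "eth1", xfrm12=forward, xfrm21=forward, prn=note_dicom)
```

Because DICOM traffic inside hospital networks is often sent without TLS, a device in this position can read the images in transit (and, in the real attack, rewrite them); enabling encryption between the imaging modalities and the PACS removes that visibility.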
While the
participating hospital was fully aware of the team’s activities and consented
to the experiment, the healthcare organization was likely not very pleased by
the ease with which the hackers physically installed the hardware and gained
access to the network.
Equally concerning is the realism of the images produced by the
deep learning models. Radiologists had an extremely difficult time
recognizing that the images had been altered, even when they were warned in
advance that some of them might have been tampered with.
When three
experienced clinicians were not told that they were looking at images that
included fake lung cancer tumors, they confirmed a cancer diagnosis 99 percent
of the time.
Source: Yisroel Mirsky et al.
“When asked, none of
the radiologists reported anything abnormal with the scans with the exception
of [radiologist 2] who noted some noise in the area of one removal,” the team
noted.
The radiologists were
also convinced that the faked cancers were severe.
“With regards to the
injected cancers, the consensus among the radiologists was that one-third of
the injections require an immediate surgery/biopsy, and that all of the
injections require follow-up treatments/referrals,” said the study.
“When asked to rate
the overall malignancy of the [injected cancer] patients, the radiologists said
that nearly all cases were significantly malign and pose a risk to the patient
if left untreated.”
On images that had
real tumors removed, the radiologists gave the all-clear to the patients 94
percent of the time.
Even when
radiologists were warned that some of the images may have been altered, they
still made mistakes. Clinicians failed to note that injected tumors were
fake 61 percent of the time, and did not identify that tumors had been removed
from images 87 percent of the time.
In addition, the
participating radiologists were not very confident in their decisions.
When asked to rate their confidence that they caught real or fake cancers, all
of the clinicians showed serious doubts.
Source: Yisroel Mirsky et al.
The method even
fooled an AI-based clinical decision support tool…one hundred percent of the
time. This is particularly concerning to artificial intelligence
proponents who believe that AI can catch human errors more efficiently
and improve the accuracy of
decision-making.
As AI becomes more
and more sophisticated, and as the number of ransomware attacks and data
breaches rises in healthcare, these sneaky, innovative threats may become more
common.
Organizations will
have to carefully secure their infrastructure – and take painstaking effort to
educate providers about how to practice medicine in a world where deep learning
can be used to tamper with pretty much anything.
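On the technical side, one commonly suggested safeguard is to make tampering detectable rather than relying on clinicians to spot it visually: compute a cryptographic signature or message authentication code over a scan’s pixel data when it is acquired, and verify it whenever the image is read. The sketch below shows that idea with pydicom and an HMAC; the key management, field choice, and function names are assumptions, not any vendor’s API.

```python
# Minimal sketch (not a specific vendor's API): tag each scan's pixel data
# with an HMAC at acquisition time and verify it before the image is read,
# so any later modification of the pixels is detectable. Assumes pydicom is
# installed and that the secret key is managed securely elsewhere.
import hmac
import hashlib
import pydicom

def pixel_tag(path: str, key: bytes) -> str:
    """Compute an HMAC-SHA256 over the raw pixel data of a DICOM file."""
    ds = pydicom.dcmread(path)
    return hmac.new(key, ds.PixelData, hashlib.sha256).hexdigest()

def verify_pixels(path: str, key: bytes, expected_tag: str) -> bool:
    """Return True only if the pixel data still matches the recorded tag."""
    return hmac.compare_digest(pixel_tag(path, key), expected_tag)

# Usage: record the tag when the scan is acquired, check it at read time.
# key = load_shared_secret()                    # hypothetical key management
# tag = pixel_tag("study/slice_001.dcm", key)   # stored alongside the study
# assert verify_pixels("study/slice_001.dcm", key, tag), "scan may be tampered"
```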
“This paper
demonstrates how we should be wary of closed world assumptions: both human
experts and advanced AI can be fooled if they fully trust their observations,”
the team stated. “We hope that this paper, and the supplementary datasets, help
both industry and academia mitigate this emerging threat.”