
 AI Could Stop Cyberattacks On Hospital CT Scanners


September 3, 2020

Homeland & Cyber Security

Forbes — If there’s one thing a hospital patient doesn’t want to think about as they prepare for a medical scan, it’s the possibility that a cyberattacker has found a way to remotely tamper with the diagnostic images, or even quietly raised the radiation levels used to generate them.

The good news is that nobody has ever been confirmed to have done such a thing to a computed tomography (CT) X-ray scanner, which, along with MRI (magnetic resonance imaging) and ultrasound systems, forms the backbone of modern hospital diagnosis.

There is a caveat, of course: the moment when somebody tries must be drawing closer, which has left researchers searching for a reliable way to head off the troubling possibilities.

Tom Mahler

Now a team of BGU researchers has devised a way to defend medical imaging devices (MIDs): an AI system, trained using families of open-source algorithms, that monitors the commands sent to CT scanners for anything that doesn’t look right.

In a proof-of-concept study due to be published this month, the defense is split into a context-free (CF) layer that filters out obviously suspect commands (an excessive radiation level, say) and a more sophisticated context-sensitive (CS) layer that compares an apparently legitimate command with the medical context in which it is being used (such as giving a child an adult radiation dose).

The AI, then, acts like a black-box gatekeeper between the scanner and the commands it is sent, flagging to the human operator anything it deems anomalous.
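To make the two-layer idea concrete, here is a minimal illustrative sketch of such a gatekeeper. It is not the BGU system: all thresholds, field names, and rules below (e.g. the `mAs` dose parameter and age cutoff) are invented for illustration, and the real system uses trained machine-learning models rather than hand-written rules.

```python
# Hypothetical two-layer gatekeeper for CT scan commands, loosely modeled
# on the CF/CS split described in the article. Every number and field
# name here is an assumption made for illustration only.

MAX_SAFE_MAS = 400        # hypothetical context-free dose ceiling
PEDIATRIC_MAX_MAS = 100   # hypothetical context-sensitive ceiling for children

def context_free_check(command: dict) -> bool:
    """CF layer: reject commands that are suspicious in any context."""
    return 0 < command["mAs"] <= MAX_SAFE_MAS

def context_sensitive_check(command: dict, patient: dict) -> bool:
    """CS layer: reject commands that are legal in isolation but wrong
    for this patient's context (e.g. an adult dose for a child)."""
    if patient["age"] < 12 and command["mAs"] > PEDIATRIC_MAX_MAS:
        return False
    return True

def gatekeeper(command: dict, patient: dict) -> str:
    """Flag anomalies for the technician rather than blocking outright,
    mirroring the monitor-and-alert approach Mahler describes."""
    if not context_free_check(command):
        return "ALERT: command suspicious in any context"
    if not context_sensitive_check(command, patient):
        return "ALERT: command anomalous for this patient"
    return "OK"

# An adult-level dose aimed at a child passes the CF layer but trips the CS layer.
print(gatekeeper({"mAs": 250}, {"age": 8}))
```

Note that, as in the quoted design, the sketch only alerts; the technician still decides whether to approve or reject the instruction.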

Ph.D. candidate Tom Mahler, working under the supervision of Profs. Yuval Elovici and Yuval Shahar in the BGU Department of Software and Information Systems Engineering (SISE), tested the AI system on a real hospital CT scanner, with the agreement of an unnamed large manufacturer, feeding it a mixture of 1,991 anomalous or out-of-context instructions covering different types of body scan.

“We don’t interfere with anything. We just monitor the traffic,” says Mahler. “When the system detects a problem it alerts the technician, who can decide whether they want to approve or reject the instruction.”

Using both the CF and CS layers together caught between 82% and 99% of rogue instructions. “This proves the concept works,” says Mahler, who adds that feeding the AI a larger data set would improve these results further. “Although we developed this solution for cyberattacks, it can also detect human errors.”

But is such a complex defense really necessary?

When the BGU researchers took a closer look at the issue in a 2018 study (PDF), they concluded that CT scanners could be tampered with in many ways, including changing their scanning parameters and mechanical behavior, or simply taking them offline in a denial-of-service attack.

In a 2019 video, the BGU team showed that it was even possible to mess with CT images to confuse diagnosis by adding or removing tumors before they are stored on a picture archiving and communication system (PACS).

According to Mahler, “it’s just a matter of time before someone starts exploiting this new kind of attack,” most likely as part of extortion or ransomware aimed at taking a workstation offline or damaging the reputation of a hospital whose CT scanners are affected.

Right now, the biggest barrier to an attack is simply security by obscurity, namely that an attacker would need to know a fair amount about specific MID models to successfully target one with anything more sophisticated than denial of service.

Then it occurs to you that if Israeli researchers can work out how to tell a scanner to do something bad, perhaps anyone can.

Read more in Forbes >>