The Risk of Shadow AI in Healthcare and Why it Matters
By Sumant Kawale
Shadow AI appears to be the latest issue associated with this technology. A “shadow” is an apt description for what’s happening across many industries and businesses, healthcare included. Shadow AI is the unauthorized deployment and use of AI tools by teams or individuals who haven’t received a green light from IT or security departments; it lives in the murky space between approved and unapproved use.
Shadow AI can cause a crisis in any business, but for those of us in healthcare, its use can lead to serious harm for every stakeholder: providers, payers, patients, and members.
Impacting Medical Diagnosis, Treatment, Patient Safety
Shadow AI is the latest technology risk facing healthcare. We’ve seen similar risks before, when Internet searches became easier to perform and reliable information became broadly accessible.
If, as a provider, you were dealing with an unfamiliar diagnosis, you could quickly look for information on any search engine. With a full page (or pages) of results, it was simple to locate information because you could readily identify each source and ignore the less reliable ones.
With AI, this gets harder, because it is not always easy to link answers explicitly to their sources. That’s a big problem when organizations, departments, and people rely on Shadow AI to inform decisions. With Shadow AI, IT and clinical departments haven’t had an opportunity to vet the data sources on which the AI is trained; indeed, they likely aren’t even aware that Shadow AI is being used within the organization.
Compliance and Data Privacy
Privacy is critical for payers, providers, and operating partners. Data exfiltration is a huge risk and is associated with significant penalties. Stakeholders work each day to confirm that healthcare data is safe and secure and remains within the organization. Nevertheless, data breaches are an ongoing problem for healthcare, with more than 274 million people having their data exposed in 2024.
Sending patient or member medical data to Shadow AI is akin to handing a burglar the keys to your house. You can do it, but don’t be surprised if you get home and all the valuables are gone. Using Shadow AI to process healthcare data can expose an organization to mishandled data or, much worse, a full-on breach.
Unmonitored AI Usage
While lockdowns and approved website lists are table stakes, AI tools have become too ubiquitous to lock down completely. Data exfiltration is possible even from a smartphone, by texting or photographing a screen. As a result, guiding the use of AI becomes a task that requires consistent reinforcement and ongoing openness and transparency in communication, while reiterating the serious consequences of data breaches.
The optimal middle ground is to grade data sensitivity. Protected health information should never leave secure IT environments, and certainly never in email. Further, development environments should offer tools that use the latest hosted AI models while keeping the data in-house.
The latest APIs may sometimes be out of reach because of customer security concerns, but secure experimentation with mock data can be encouraged to understand how AI produces results.
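To make “grading data sensitivity” concrete, here is a minimal Python sketch, assuming a simple regex-based screen and hypothetical routing names; a real deployment would rely on vetted de-identification tooling and an internally hosted model rather than these illustrative rules.

```python
import re

# Hypothetical sensitivity grades, from least to most restricted.
PUBLIC, INTERNAL, PHI = "public", "internal", "phi"

# Illustrative patterns only; a real PHI screen would use a vetted
# de-identification library, not ad-hoc regexes.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-like identifiers
    re.compile(r"\bMRN[:\s]*\d{6,}\b", re.I),  # medical record numbers
    re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),      # dates of birth or service
]

def grade_sensitivity(text: str) -> str:
    """Assign a coarse sensitivity grade to outbound text."""
    if any(p.search(text) for p in PHI_PATTERNS):
        return PHI
    return INTERNAL  # default to cautious; nothing is "public" by accident

def route_prompt(text: str) -> str:
    """Decide where a prompt may be sent based on its grade."""
    if grade_sensitivity(text) == PHI:
        return "in_house_model_only"   # PHI never leaves the secure environment
    return "approved_hosted_model"     # non-PHI may use a vetted hosted model

# Experimenting safely with mock data instead of real records.
mock_note = "Patient X, MRN: 000000, presented with sample symptoms."
print(grade_sensitivity(mock_note))   # -> "phi" (even mock data is routed conservatively)
print(route_prompt("Summarize our prior-auth workflow documentation."))
```

The point of the mock-data example is that teams can learn how AI produces results without a single real record ever reaching an external service.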
How Organizations Can Respond
As with many decisions in life, there is no easy answer to the use of Shadow AI. Even with safeguards in place and explicit instructions on the use of AI in a healthcare organization, the use of Shadow AI will continue to occur.
Three ways to lessen the use of Shadow AI:
- We feel the right way to tackle this is to restrict answers to critical questions, clinical questions for example, to known sources only. This approach can be embedded in the organization’s knowledge management system, allowing users to receive answers without needing to know their exact source, while the AI team establishes the safeguards that keep the information reliable and accurate (a simple sketch of this idea follows the list).
- As with most business workflows, risk is best mitigated through a distributed model. Monitoring is key to ensuring that users do not copy and paste data into, or even open, unauthorized websites that host AI solutions.
- Use HIPAA-certified AI services that have zero retention policies to mitigate the exfiltration risk associated with using external APIs. Many responsible organizations simply don’t use external APIs to ensure that no data is sent externally.
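As a rough sketch of the first recommendation, the Python example below assumes a simple allowlist of vetted sources (all names are hypothetical) rather than any particular knowledge management product; the idea is that the assistant either answers from approved material or declines.

```python
from dataclasses import dataclass

# Hypothetical allowlist maintained by the clinical and AI governance teams.
APPROVED_SOURCES = {"internal_formulary", "clinical_guidelines_v12", "policy_manual"}

@dataclass
class Document:
    source: str   # where the passage came from
    text: str     # the passage itself

def vetted_context(candidates: list[Document]) -> list[Document]:
    """Keep only passages whose source has been explicitly approved."""
    return [d for d in candidates if d.source in APPROVED_SOURCES]

def answer(question: str, candidates: list[Document]) -> str:
    """Answer only from vetted material; refuse rather than guess."""
    context = vetted_context(candidates)
    if not context:
        return "No vetted source covers this question; escalate to a clinician."
    # In practice the approved passages would be passed to an internally
    # hosted model; here we only show that unvetted text never reaches it.
    return f"Answering '{question}' from {len(context)} vetted passage(s)."

docs = [
    Document("internal_formulary", "Drug A dosing guidance..."),
    Document("random_blog", "Unverified dosing tips..."),
]
print(answer("What is the adult dosing for Drug A?", docs))
```

The design choice worth noting is the refusal path: when no approved source applies, the system escalates to a human rather than letting an unvetted answer through.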
These recommendations represent only part of an overall AI strategy that healthcare organizations should consider before deploying AI solutions. In addition, organizations must balance internal pressures to work faster, drive down costs, and work smarter against the need to do everything possible to protect patient and member data.
Healthcare data is sacrosanct. Because it contains highly personal and sensitive information that directly influences a person’s well-being, privacy, and security, safeguarding this data is essential to maintain trust with patients and members.
No matter how valuable AI seems, whether it's increasing efficiency, cutting costs, or improving care, we can’t lose sight of those we serve.
Sumant Kawale is Senior Vice President of Technology Solutions at BirchAI, a Sagility company.
