Navigating the intersection of AI deployments and HIPAA compliance presents significant challenges. Understanding the AI and HIPAA compliance landscape is crucial, and the foundational steps involve cultivating awareness and adhering to best practices to mitigate potential pitfalls.
The integration of artificial intelligence into healthcare practice has surged over the past year, raising important considerations around AI and HIPAA compliance. A 2025 survey conducted by a prominent medical association found that 66% of practitioners reported using AI in their practices, a substantial increase from the 38% reported in 2023. Furthermore, more than two-thirds of the surveyed practitioners expressed positive views about the advantages of AI tools, underscoring the growing need to address data privacy and security in this evolving landscape.
How AI Is Used in Healthcare
The 2024 HIMSS Healthcare Cybersecurity Survey indicated that AI is employed in various technical capacities, including support and data analytics, clinical care applications such as diagnostics, and administrative functions like virtual meeting transcription and content creation. Additionally, AI finds application in healthcare operations, research, patient engagement, and clinical training. As AI becomes more integrated into these areas, ensuring HIPAA compliance is critical to protecting patient privacy and maintaining regulatory standards.
An executive at the International Association of Privacy Professionals (IAPP) acknowledged that AI “is being incorporated into almost all aspects of a healthcare lifecycle.” They further elaborated, stating, “AI is helping researchers to discover new treatments and clinical therapies more efficiently. Within day-to-day healthcare operations, AI systems are used in ambulatory care to assist with time-sensitive diagnosis and to improve patient and care-provider interaction. In the hospital, AI is used to assist healthcare providers with real-time information, use robotics to assist with delicate surgeries, and in hospital administration to improve patient experience. For ongoing care, virtual AI assistance is becoming more common.”
The healthcare industry’s adoption of AI often aims to enhance operational efficiency, patient experiences, and patient outcomes, according to the vice president of information security and CISO at a major insurance company. In these instances, the healthcare sector’s use of AI mirrors that of other industries. AI systems, such as chatbots and generative AI (GenAI) tools, analyze customer and patient data to provide doctors, nurses, administrators, customer service representatives, and patients themselves with accurate and personalized information as quickly as possible.
This same expert noted that AI functions similarly in clinical settings. AI models, trained on extensive historical data, assist clinicians in delivering care to current patients. Here, AI tools manage a range of tasks, including interpreting the results of patient imaging exams, offering diagnostic advice, and recommending optimal treatment strategies.
Furthermore, AI is integrated into medical devices, surgical robotics, and wearables, enabling “real-time intraoperative decision support, precision automation and adaptive learning based on patient-specific data,” explained a senior member and program manager at a minimally invasive care and robotic-assisted surgery provider. This expert also pointed out that AI aids in remote patient monitoring “by leveraging real-time biometric data to predict and prevent health crises.”
How AI Jeopardizes HIPAA Compliance
Despite its numerous benefits in healthcare, AI introduces challenges to a healthcare organization’s ability to adhere to HIPAA standards for safeguarding protected health information (PHI). According to the vice president of security and CISO at a clinical data platform provider, “There are concerns about where data lives, who has access to it and how that data is being used. It’s something that tech departments struggle with every day.”
All forms of AI rely on data, from traditional machine learning algorithms to more recent tools like large language models (LLMs). Much of the software used in healthcare applications now incorporates AI, and it is not always clear to healthcare organizations whether that software safeguards data in accordance with HIPAA standards. This uncertainty is fueling growing concerns around AI and HIPAA compliance, noted the same expert, who also serves as a director at a professional IT governance association.
An IEEE senior member emphasized that “AI-driven tools pose HIPAA compliance risks if PHI data is not securely managed at rest or in transit. Many advanced AI tools are cloud-based, and making useful data available to those tools without compromising on data security is a complex challenge.”
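To make the expert’s point about data at rest and in transit concrete, the following is a minimal sketch, assuming Python and the widely used `cryptography` package, of encrypting a PHI record before it is stored or transmitted. The record fields and key handling shown are illustrative assumptions, not any particular vendor’s API; in production the key would live in a managed key store.

```python
# Minimal sketch: encrypting a PHI record at rest, assuming the
# `cryptography` package (pip install cryptography). Illustrative only.
import json
from cryptography.fernet import Fernet

# Assumption: in production this key comes from a managed key store (KMS),
# is never hard-coded, and is rotated on a schedule.
key = Fernet.generate_key()
fernet = Fernet(key)

record = {"patient_id": "12345", "note": "Follow-up on imaging results"}

# The ciphertext, not the raw record, is what gets written to disk or sent
# over an already TLS-protected channel (defense in depth).
ciphertext = fernet.encrypt(json.dumps(record).encode("utf-8"))

# Only code holding the key can recover the PHI.
plaintext = json.loads(fernet.decrypt(ciphertext).decode("utf-8"))
assert plaintext == record
```

Encryption in transit would additionally rely on TLS; the broader point is that PHI should never reach a cloud-based AI tool in recoverable form unless appropriate safeguards and agreements are in place.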
Healthcare industry experts have identified eight key ways in which AI can jeopardize HIPAA compliance:
- Regulatory Misalignment: “Traditional HIPAA frameworks were not designed for real-time AI decision-making,” explained a program manager. “For instance, an AI-driven surgical guidance system adjusting intraoperative parameters dynamically must align with HIPAA while ensuring split-second clinical decisions are not hindered.”
- Cloud-Based Data Transmissions: Many medical devices, such as surgical robots and wearables, transmit patient data to cloud-based platforms, thereby increasing the potential for breaches, noted the same expert.
- Data Exchanges with Outside Parties: Because numerous AI models are cloud-based or integrated into SaaS applications, healthcare organizations are transferring patient data outside their direct data protection measures and into these cloud-based environments, explained a vice president of information security and CISO. This movement across clouds elevates the risk of breaches and unintentional data exposure. It also complicates the healthcare organization’s ability to verify that the data shared with cloud-based AI models is protected and secured according to HIPAA standards.
- AI and ML Training Data: “Ensuring AI training data remains compliant is complex, as unencrypted, non-tokenized or non-de-identified data can lead to HIPAA violations,” cautioned an IEEE senior member. (See the de-identification sketch after this list.)
- AI Model Bias and Data Leaks: A program manager warned that “some AI algorithms inadvertently retain PHI from training data, raising concerns about unintended data leaks.” They suggested that federated learning techniques could mitigate this risk by processing data locally on devices rather than on centralized servers.
- Public Large Language Models: Employees could unintentionally expose protected data by using public LLMs. A vice president of information security and CISO illustrated this by explaining that an employee using a GenAI tool to draft a letter to a patient about treatment options might inadvertently include details that reveal the patient’s protected data. The CEO and founder of a security consultancy added that clinicians, administrators, or support staff might use a public LLM to assist with note transcription, similarly exposing protected data.
- Lack of Data Visibility: Healthcare organizations might lack insight into whether vendors analyze any data shared or stored with them for their own purposes in ways that would contravene HIPAA regulations, according to the CEO and founder of a security consultancy. “There are a number of large entities — insurance, medical manufacturers — who have large data sets that they’re tapping into to optimize their products and services,” they elaborated.
- Patient Consent Policies: Healthcare organizations might find that their current consent policies do not adequately inform patients about how their data is used with AI tools, noted a vice president of information security and CISO.
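To illustrate the training-data and public-LLM risks above, here is a minimal de-identification sketch in Python. It uses a few regular expressions to redact direct identifiers, in the spirit of HIPAA’s Safe Harbor method, before free text is used for model training or pasted into a GenAI tool. The patterns are illustrative assumptions only; real de-identification must cover all 18 Safe Harbor identifier categories and typically relies on vetted tooling.

```python
# Minimal sketch: redacting a few direct identifiers from free text before
# it is used to train a model or sent to a public LLM. Illustrative only --
# HIPAA Safe Harbor de-identification covers 18 identifier categories.
import re

REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),         # Social Security numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # US phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),   # dates
]

def redact_phi(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = ("Patient reachable at 555-867-5309 or jane.doe@example.com; "
        "seen on 03/14/2025, SSN 123-45-6789.")
print(redact_phi(note))
# -> Patient reachable at [PHONE] or [EMAIL]; seen on [DATE], SSN [SSN].
```

In practice, a redaction pass like this would run automatically in front of any model-training pipeline or LLM integration, so that compliance does not depend on individual employees remembering to scrub notes by hand.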
Best Practices for Using AI in Compliance with HIPAA
However intricate the applications, healthcare organizations cannot allow AI use cases to compromise their ability to comply with HIPAA.
According to a managing director at IAPP, “AI systems are not a special scenario falling outside of these existing robust compliance obligations. AI is just like any other technology — the rules for notice, consent, and responsible uses of data still apply. When considering AI and HIPAA compliance, HIPAA-covered entities should be laser-focused on applying robust governance controls, whether data will be used to train AI models, ingested into existing AI systems, or used in the delivery of healthcare services.”
Healthcare industry experts have proposed twelve best practices to navigate the primary risks and ensure that AI is applied in healthcare to maximize benefits while adhering to HIPAA regulations:
- Develop Clear Policies, Procedures, and Codes of Conduct: Establish specific guidelines on how, when, and where AI can be used in compliance with HIPAA. “We want to encourage the adoption of AI,” stated a vice president of information security and CISO, “but it must be secure and responsible.”
- Ensure Third-Party Contracts Address AI Risks: Organizations should verify that their partners, vendors, and suppliers meet the security and data protection standards necessary to mitigate AI risks, advised a vice president of security and CISO. They recommended that healthcare entities review existing contracts to ensure these needs are addressed and, if not, establish new agreements.
- Establish a Strong Governance Program: Governance should ensure that employees, partners, and vendors are educated and trained to follow and comply with the healthcare organization’s policies, procedures, and code of conduct, emphasized the CEO and founder of a security consultancy.
- Institute a Comprehensive Risk Management Program: Even a well-defined AI strategy and a robust AI governance program cannot eliminate risks on their own; they must be paired with a strong risk management strategy, noted a vice president of information security and CISO.
- Implement Security Measures: Healthcare entities should identify and establish the security measures required to mitigate risks as outlined in their risk management strategy. The CEO and founder of a security consultancy recommended encryption, network traffic monitoring, access control tools, and a strong access management program.
- Select Appropriate Types of AI Tools and Software: AI tools possess varying levels of built-in security and protection, with public LLMs and GenAI tools presenting the highest likelihood of data exposure, explained a vice president of security and CISO. Therefore, healthcare organizations must provide clear guidelines for selecting and implementing software and AI tools to ensure alignment with organizational security and data privacy requirements.
- Incorporate Secure-by-Design Principles: Another method to enhance the protection of sensitive data is to integrate security and privacy considerations into the design of AI models and tools. “AI has to have privacy by design as a core tenet,” asserted a vice president of information security and CISO.
- Install a Zero-Trust Security Architecture: “Implementing multifactor authentication, granular access controls and encryption ensures that AI-powered devices and robotic systems remain compliant with HIPAA while protecting PHI,” stated a program manager. (See the access-control sketch after this list.)
- Use Edge AI and On-Device Processing: Instead of relying solely on centralized cloud processing, a program manager suggested running AI models and algorithms directly on devices such as surgical robots or wearable devices at the network edge to minimize data exposure risks.
- Consider Federated Learning for AI Training: “Instead of centralizing PHI for AI training,” explained a program manager, “federated learning enables AI models to be trained across multiple local devices without sharing raw patient data. This approach is gaining traction in digital health.” (A minimal federated averaging sketch follows this list.)
- Perform Regulatory Sandboxing: “AI systems used in robotic-assisted surgery and digital health wearables should undergo continuous auditing, bias detection and explainability testing to ensure compliance without compromising AI performance,” advised a program manager.
- Seek Legal Support: The legal and compliance teams should collaborate closely with technology and business teams, recommended a vice president of information security and CISO, to ensure a thorough understanding of the risks posed by AI tools and the strategies for mitigating them. This collaboration is essential for navigating the complex landscape of AI and HIPAA compliance, as well as for ensuring adherence to other regulations, industry standards, and enterprise policies.
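To make the federated learning recommendation above concrete, here is a minimal federated averaging (FedAvg) sketch in Python with NumPy. Each simulated device fits a model update on its own local data and shares only model weights with the server; raw patient records never leave the device. This is a toy illustration of the technique under simplified assumptions, not a production framework.

```python
# Minimal sketch of federated averaging (FedAvg): each device trains locally
# and shares only weight updates, never its raw patient data. Toy example.
import numpy as np

rng = np.random.default_rng(0)

def local_data(n=100):
    """Simulated per-device dataset; in practice this never leaves the device."""
    X = rng.normal(size=(n, 3))
    y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few gradient-descent steps on one device's local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

devices = [local_data() for _ in range(4)]   # e.g., 4 wearables or hospital sites
global_w = np.zeros(3)

for _ in range(10):
    # Each device returns updated weights; only these arrays are transmitted.
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)     # the server averages the updates

print("learned weights:", np.round(global_w, 2))  # approaches [0.5, -1.0, 2.0]
```

The privacy benefit comes from what is transmitted: weight arrays rather than patient records, which is why the approach is attractive for wearables and multi-site clinical training.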
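As a complement to the zero-trust recommendation in the list above, the sketch below shows the shape of a granular, deny-by-default access check in front of an AI-powered service. The roles, resources, and actions are hypothetical placeholders; in a real deployment this would sit alongside multifactor authentication, encryption, and audit logging.

```python
# Minimal sketch of a deny-by-default (zero-trust style) access check in
# front of an AI service endpoint. Roles and resources are hypothetical.
from dataclasses import dataclass

# Explicit allow-list: any (role, resource, action) triple not listed is denied.
PERMISSIONS = {
    ("clinician", "ai_diagnostics", "read"),
    ("clinician", "patient_record", "read"),
    ("ml_engineer", "deidentified_training_data", "read"),
}

@dataclass
class Request:
    user: str
    role: str
    resource: str
    action: str

def is_allowed(req: Request) -> bool:
    """Deny by default; every request is checked, none is trusted implicitly."""
    return (req.role, req.resource, req.action) in PERMISSIONS

req = Request(user="dr_lee", role="clinician", resource="patient_record", action="read")
print(is_allowed(req))   # True: explicitly granted
req = Request(user="dr_lee", role="clinician", resource="deidentified_training_data", action="write")
print(is_allowed(req))   # False: not on the allow-list
```

The deny-by-default allow-list is the zero-trust element: no request is trusted implicitly, and anything not explicitly granted is refused.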