
HHS Sets New Rules to Ensure Nondiscrimination in Healthcare-Related AI
The United States Department of Health and Human Services’ Office for Civil Rights has issued new regulations under Section 1557 of the Affordable Care Act to address the use of artificial intelligence (AI) in healthcare. These rules aim to prevent discrimination based on race, sex, age, disability, and other factors while promoting the ethical and equitable use of AI tools in patient care.
AI is transforming healthcare, offering promising advances such as improved diagnosis, streamlined clinical decision-making, and reduced clinician burnout. As these tools become more widespread, however, concerns about potential discrimination in their use have grown. To address these concerns, the United States Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) has implemented regulations under Section 1557 of the Affordable Care Act. The regulations require that AI and similar technologies used in healthcare adhere to nondiscrimination principles, safeguarding equitable care for all patients.
The new requirements, which took effect on July 5, 2024, with compliance required by May 1, 2025, govern the use of patient care decision support tools. These tools, often powered by AI, support a range of healthcare activities, including diagnosis, risk prediction, and resource allocation. While these innovations offer significant benefits, they also carry risks of unintentional bias: algorithms trained on incomplete or unrepresentative data sets can produce decisions that disadvantage certain populations, such as individuals with disabilities or patients from racial minority groups.
Under the new rules, healthcare providers and other covered entities must take proactive steps to identify and mitigate discrimination risks associated with these technologies. This includes evaluating the input variables used by AI tools and ensuring that they do not unfairly skew outcomes based on race, sex, age, or disability. For example, OCR highlighted race-adjusted estimated glomerular filtration rate (eGFR) equations, which have historically resulted in reduced referrals for kidney care among Black patients. Providers are encouraged to adopt updated equations that eliminate race-based adjustments to prevent such disparities.
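For context, the race-free alternative most providers have adopted is the 2021 CKD-EPI creatinine equation (Inker et al., NEJM 2021), which estimates GFR from serum creatinine, age, and sex alone; the rule itself does not mandate any particular equation. A minimal Python sketch of that published formula:

```python
# Sketch of the 2021 CKD-EPI creatinine equation, the race-free
# replacement for older race-adjusted eGFR formulas (Inker et al.,
# NEJM 2021). Note that no race term appears anywhere in the calculation.
def egfr_ckd_epi_2021(scr_mg_dl: float, age_years: int, female: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine (mg/dL)."""
    kappa = 0.7 if female else 0.9      # sex-specific creatinine scaling
    alpha = -0.241 if female else -0.302
    egfr = (
        142
        * min(scr_mg_dl / kappa, 1.0) ** alpha
        * max(scr_mg_dl / kappa, 1.0) ** -1.200
        * 0.9938 ** age_years
    )
    if female:
        egfr *= 1.012
    return egfr

# Example: a 60-year-old woman with serum creatinine of 1.1 mg/dL.
print(round(egfr_ckd_epi_2021(1.1, 60, female=True)))  # ~58 mL/min/1.73 m^2
```

Because the calculation contains no race term, two patients with identical creatinine, age, and sex receive identical estimates.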
OCR’s approach emphasizes flexibility, allowing entities to tailor their compliance efforts based on their size, resources, and the nature of the tools they use. Recommendations for compliance include reviewing relevant research, consulting AI registries for tool safety, and establishing internal policies to monitor and address discrimination risks. Training staff to identify and address potential bias in AI-generated decisions is another critical component of these efforts.
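What such internal monitoring could look like in practice is left to each covered entity. As a purely hypothetical sketch (the record fields, grouping, and 10-percentage-point threshold below are illustrative assumptions, not anything the rule specifies), a provider might periodically compare a tool’s referral rates across demographic groups and flag large gaps for human review:

```python
# Hypothetical internal-monitoring step: compare how often a decision
# support tool recommends a specialist referral across demographic groups.
# Field names and the 0.10 gap threshold are illustrative assumptions.
from collections import defaultdict

def referral_rates_by_group(records: list[dict]) -> dict[str, float]:
    """records: [{"group": ..., "referred": bool}, ...] -> rate per group."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])
    for r in records:
        totals[r["group"]][0] += int(r["referred"])  # referrals made
        totals[r["group"]][1] += 1                   # patients seen
    return {g: hits / n for g, (hits, n) in totals.items()}

def flag_disparities(rates: dict[str, float], max_gap: float = 0.10) -> list[tuple]:
    """Return group pairs whose referral rates differ by more than max_gap."""
    groups = sorted(rates)
    return [
        (a, b, rates[a] - rates[b])
        for i, a in enumerate(groups)
        for b in groups[i + 1 :]
        if abs(rates[a] - rates[b]) > max_gap
    ]

rates = referral_rates_by_group([
    {"group": "A", "referred": True},
    {"group": "A", "referred": False},
    {"group": "B", "referred": False},
])
print(flag_disparities(rates))  # [('A', 'B', 0.5)] -> route to human review
```

A flagged gap is not proof of discrimination, but it is the kind of signal that the rule expects entities to investigate and, where warranted, correct.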
The regulations also emphasize transparency, encouraging providers to disclose the use of AI tools to patients. By ensuring that patients are informed about how these technologies influence their care, OCR aims to build public trust in AI-driven healthcare.