The Department of Health and Human Services has struck first in regulating the new artificial intelligence tools in health care. The agency announced today it will require more transparency about artificial intelligence used in clinical settings, with the goal of helping providers understand algorithms’ potential risks, POLITICO’s Ben Leonard reports.

Why? The final rule fills a void in regulation, aiming to help providers, like hospitals and physician groups, choose safer artificial intelligence that avoids bias. As it stands, AI in health care is loosely regulated, and little is known about how the algorithms work. The rule also fulfills requirements under President Joe Biden’s recent executive order on artificial intelligence.

How? The regulations will require software developers to provide more data to customers, with the aim of allowing providers to determine whether AI is “fair, appropriate, valid, effective and safe.” They will cover models that analyze medical imaging, generate clinical notes and alert clinicians to potential risks to patients.

AI tool makers will have to disclose information on how the software works and how it was developed. That includes who funded the technology, what its intended decision-making role is and when clinicians should be cautious about using it.

Software developers will also have to tell their customers how representative the AI’s training data is, how they attempted to mitigate bias and how the software was externally validated. Additionally, they’ll have to reveal performance measures, explain how they monitor performance over time and describe how often they update algorithms.

The agency said the rule has a broad scope, covering models not directly involved in clinical decision-making that can still affect care delivery, like those aiding supply chains.

When? The regulations from the Office of the National Coordinator for Health IT within HHS will apply to clinicians using HHS-certified decision support software by the end of 2024.