HHS has rules to prevent discrimination in care, but they don't go far enough to root out bias in health care artificial intelligence, legal experts argue in two new papers. They're calling on regulators to address gaps in the system.

In April, HHS' Office for Civil Rights issued a final rule that said health providers can't discriminate against patients by using biased decision-support tools, including those powered by AI. But Stanford's Michelle M. Mello and Emory's Jessica L. Roberts think many providers aren't equipped to comply. While larger health systems might have the resources to vet AI systems and ensure they're not delivering biased results, smaller health systems might not, according to the scholars.

"It is dispiriting to contemplate that the OCR's effort to protect vulnerable patients may have the weakest effect in the settings that have the biggest role in serving them," Mello and Roberts wrote in JAMA Health Forum.

Why it matters: Developers of AI tools aren't required to share information about the data they use to train their systems or whether they've tested them for biased results. They should be, three MedStar Health researchers argue in JAMA.

"All AI developer transparency disclosures should be stored in a central repository and accessible to the public," wrote Raj M. Ratwani, Karey Sutton and Dr. Jessica E. Galarraga.

What they recommend: The authors want HHS to require more transparency from technology developers. The agency requires developers of tools it certifies to disclose some information about how their tools work, but certification is voluntary.

Ratwani, Sutton and Galarraga also said HHS should vet and monitor decision-support tools that rely on AI.

To ensure that smaller health systems can keep up, the authors suggest that HHS agencies, such as the Health Resources and Services Administration or the Agency for Healthcare Research and Quality, fund the development of software for testing and monitoring discrimination in decision-support tools. The Patient-Centered Outcomes Research Institute, a private nonprofit that funds research on the effectiveness of health care tools, could also play a role, they said.

Even so: Ensuring that AI is safe and free of discrimination costs money. "The question of who will pay for AI assessments looms large, especially in light of congressional efforts to ensure that no taxpayer dollars are used," said Mello and Roberts, adding that the cost of compliance could stall AI adoption.