WHAT TOP TRADE GROUPS WANT ON AI — Many leading organizations across health care want the federal government to regulate artificial intelligence with a lighter touch.

As interest in AI grows exponentially, Rep. Ami Bera (D-Calif.) solicited input from health care leaders on AI policymaking. Bera serves on the House Task Force on Artificial Intelligence, newly formed by Speaker Mike Johnson and Minority Leader Hakeem Jeffries.

Pulse obtained responses from groups that represent doctors, medical device makers, technology developers, hospitals, health systems, insurers and others. Their feedback sheds light on what some of the most influential advocacy groups in the industry want to see as they call for regulation.

Here are highlights of their suggestions:

A ‘risk-based’ framework: Many groups, including the Consumer Technology Association and insurer lobby AHIP, said AI used in high-risk situations should undergo stricter federal scrutiny. AHIP said regulations should differentiate between what it sees as high-risk AI (which can directly impact patient care) and low-risk AI (used to expedite claims processing, for example), warning that “overly restrictive” policy could hamper innovation.

Some of the groups criticize the Biden administration’s new rules mandating transparency in software used by most hospitals and doctors’ offices because the rules don’t limit the regulation to high-risk technology. “This is directly counter to … the risk-based approach to broader health regulation,” the Consumer Technology Association wrote to Bera.

Skepticism about transparency: Some groups said mandated transparency should be limited. The American Health Information Management Association said that while guardrails must permit users to understand how AI comes to its conclusions, they shouldn’t put intellectual property at risk. The Federation of American Hospitals called for similar transparency measures but said they shouldn’t add “unnecessary burden.” AHIP said that organizations developing AI internally shouldn’t be required to divulge “proprietary information” about it.

“Revealing the inner workings of AI systems to the public or regulatory agencies should not be required, nor would it be beneficial,” the Healthcare Leadership Council, a group of health care CEOs, wrote.

Change payment systems: CMS hasn’t yet released a “useful” framework for reimbursing for artificial intelligence, CTA wrote, saying many AI models are excluded. Medical device trade group AdvaMed said that Medicare has the authority to pay for AI technology, but its regulatory framework isn’t specific enough to offer payment for it broadly.

The FDA factor: In line with other groups, the American Hospital Association said the FDA’s existing rules regulating software as a medical device are a “solid foundation” for what the government should do. “Adapting these frameworks … could be a more … effective approach,” AHA wrote.

Liability: The American Medical Association called for tech companies to be held accountable for their data practices.

WELCOME TO WEDNESDAY PULSE. We heard there are only rats on the first floor of the Longworth House Office Building. We’re … skeptical. Reach us and send us your tips, news and scoops at bleonard@politico.com or ccirruzzo@politico.com. Follow along @_BenLeonard_ and @ChelseaCirruzzo.