By Ruth Reader, Gregory Svirnovskiy, Daniel Payne and Erin Schumaker

El-Bassel | Columbia University

Nabila El-Bassel is hoping the data she gathered during a five-year campaign to reduce opioid overdose deaths will live on in artificial intelligence aimed at combating the problem.

El-Bassel, a professor of social work at Columbia University in New York, spearheaded HEALing Communities, a National Institutes of Health-funded effort to reduce deaths from opioid use in four states (Kentucky, Massachusetts, New York and Ohio) by linking patients with care, increasing distribution of the overdose-reversal drug naloxone, improving safe prescribing of opioids and reducing stigma around drug addiction.

Little could she have known when she started in 2019 that a pandemic would make reaching the program’s goal of reducing deaths by 40 percent much harder, by making treatment more difficult to access and prompting more people to use drugs.

Since the initiative ended this year, she has worked on amassing the data gathered to build predictive AI she hopes will improve health outcomes for people addicted to opioids. She spoke with Ruth about how she sees AI helping combat the opioid crisis. The interview has been edited for length and clarity.

What data were you able to collect through the HEALing Communities study?

We conducted more than 300 in-depth interviews across the 67 communities. We collected the minutes of the meetings where the coalitions met and deliberated. We conducted interviews with members of the coalition community advisory boards, key stakeholders and people with lived experience, and we gathered data on stigma reduction. And then we collected administrative data from Medicaid and other sources and institutions that address the opioid crisis.

How did you use that data during the study?

We created a dashboard and trained coalition members to learn how this data can inform their deployment of evidence-based practices. We also created agent-based modeling, which predicts how much intervention needs to be deployed in each community to reduce overdose deaths.

What are the main interventions?

Increasing medication for opioid use disorder, linking people to harm-reduction programs, distributing naloxone everywhere and reducing stigma toward people who use drugs.

What’s next?

Good news: After the study was completed, most of the counties found money to sustain what we created and hired a data person to continue.
Hershey, Pa. | Shawn Zeller/POLITICO

This is where we explore the ideas and innovators shaping health care.

Could a rapid test, like those for pregnancy, tell people whether they were bitten by a venomous snake? Experts are working hard to make it happen, Science reports.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@politico.com, Daniel Payne at dpayne@politico.com, Ruth Reader at rreader@politico.com or Erin Schumaker at eschumaker@politico.com. Send tips securely through SecureDrop, Signal, Telegram or WhatsApp.
The WHO sees dangers aplenty for kids online. | Christopher Furlong/Getty Images

It’s not only U.S. parents who are worried about all the time their kids are spending on screens. Our team in Europe reports that, for the first time, the World Health Organization’s European division is warning of the internet’s dangers to kids’ health.

How’s that? The WHO sees multiple threats:

— Social media and messaging platforms enable bullies.

— Children can find inappropriate and violent content online.

— Child predators use the web to abuse kids.

— Marketers violate children’s privacy to try to get them to gamble, smoke, drink alcohol and eat unhealthy foods, among other things.

“It seems obvious to everyone that children and adolescents have the right to be protected on every digital platform,” said Dr. Hans Kluge, the WHO’s regional director for Europe.

To protect minors, the WHO recommends nations enact legislation to:

— Protect kids from violence, exploitation, abuse and unhealthy marketing online

— Mandate that both government and business protect children

— Restrict data collection about children and bar its use for commercial purposes

In the U.S.: States are moving to protect kids online, and momentum is building in Congress for legislation to establish new protections.
Small firms are skeptical of their big counterparts when it comes to governance of AI. | Josep Lago/AFP/Getty Images

If small companies want a say in setting the standards for artificial intelligence tools in health care, they should join the Coalition for Health AI.

That’s what Dr. Brian Anderson, CHAI’s CEO and chief digital health physician at MITRE, a nonprofit that advises government agencies on technology, told Ruth after startups and the firms that invest in them complained about CHAI’s plan to establish “assurance labs” for AI tools by the end of this summer.

Blowback: The startups and investors told Ruth they saw CHAI, which counts Google, Microsoft, Johns Hopkins University and the Mayo Clinic among its partners, as trying to establish the rules for AI without consulting them, and with big conflicts of interest.

Dr. John Halamka, who chairs CHAI’s board and is president of the Mayo Clinic Platform, said he thought major universities, which are developing AI tools or partnering with tech firms to do so, are likely to host the assurance labs.

“Under CHAI’s proposal, several organizations that have been tasked with review authority actually operate their own AI incubator programs,” said Julie Yoo, general partner at venture capital firm Andreessen Horowitz. “Ultimately, the technologies developed in those incubators could be in direct competition with the ones they are tasked to review and validate.”

Punit Soni, founder and CEO of Suki AI, which makes artificial intelligence tools that aim to reduce doctors’ administrative burdens, said it’s telling that CHAI’s partners are all large academic and tech players. “Working only with tech giants also increases the risk of regulatory capture by these large companies, which will also hinder innovation down the line,” he said.

Why it matters: Top U.S. officials have endorsed CHAI’s plans for assurance labs. FDA Commissioner Robert Califf, for example, spoke to the group last week and said he supported its efforts. Califf has repeatedly said his agency lacks the manpower to regulate advanced AI tools.

What’s next? To his point that CHAI welcomes small firms, Anderson said some 1,800 entities have already signed up, more than 600 of which are health systems. “CHAI will be a failure if all it is is academic institutions and big tech leading the effort,” he said.

Halamka said CHAI recognizes that firms in the AI sector differ and that membership dues will be on a sliding scale to ensure equitable access to its assurance labs.