By Carmen Paun, Shawn Zeller, Erin Schumaker, Ruth Reader and Toni Odejimi

The U.N. wants a say in guiding artificial intelligence's rollout in health care. | Daniel Slim/AFP via Getty Images

The United Nations has tasked a team of artificial intelligence experts with assembling a plan to help developing countries benefit from the AI boom. The group’s first recommendation, according to a draft report dated July 7 obtained by POLITICO’s Gian Volpicelli, is to create an independent international scientific panel similar to the Intergovernmental Panel on Climate Change. The AI panel should examine the state of the science around the technology and determine where more research is needed, while also evaluating the capabilities, opportunities, risks and uncertainties associated with AI, the experts said. The group also said the new panel should examine how to use AI to boost health care, energy and education, among other areas. Additionally, the U.N. experts recommended creating an AI Capacity Development Network to support communication across borders and train public officials on AI governance. They said the network should aim to build a new generation of leaders to ensure that AI improves public health. Other recommendations include launching a global fund to help developing countries adopt AI and creating an AI office at the U.N.

Why it matters: The U.N. is trying to make its mark in the development and regulation of AI as countries from Europe to the U.S. ponder government’s role. The U.N. also wants to ensure poorer countries aren’t left behind in adopting a technology that could improve their citizens’ health and solve other problems.

What’s next? The final report is expected by the end of summer.
New Harbor, Maine | Shawn Zeller/POLITICO

This is where we explore the ideas and innovators shaping health care.

The world’s first Miss AI is Kenza Layli, a Moroccan lifestyle influencer. Layli, who is entirely AI-generated, is the first winner of a beauty pageant organized by Fanvue, an influencer platform for both AI and human creators, CNN reports.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@politico.com, Daniel Payne at dpayne@politico.com, Ruth Reader at rreader@politico.com, Erin Schumaker at eschumaker@politico.com, or Toni Odejimi at aodejimi@politico.com. Send tips securely through SecureDrop, Signal, Telegram or WhatsApp.
Paul (left) and Peters are teaming up to investigate Covid's origins and risky research. | AP

A bipartisan Senate investigation into high-risk biological research will commence in earnest with a hearing of the Homeland Security and Governmental Affairs Committee today. Bet on Covid-19’s origins coming up.

When the committee’s chair, Democrat Gary Peters of Michigan, and ranking member, Republican Rand Paul of Kentucky, announced the joint effort in March, they said they’d try to get to the bottom of what caused the pandemic and also explore broader “national security threats posed by high-risk biological research and technology.”

Why it matters: Paul and, polls show, most Americans believe Covid was created in a Chinese lab and accidentally released. Outspoken scientists have long believed that it likely originated in an animal before jumping to people, but credible voices have also backed the lab-leak theory.

Today, the Senate panel will hear from an advocate of “gain-of-function” research, which seeks to make pathogens more virulent or contagious and is at the heart of the lab-leak theory. Carrie Wolinetz, a lobbyist for researchers and universities at Lewis-Burke Associates who left a job as senior adviser to the NIH director last year, has said the research is needed to help public health officials combat new diseases.

The senators will also hear from Kevin M. Esvelt, an associate professor at MIT’s Media Lab who’s argued that the research could cause death on a scale greater than a nuclear bomb and should be banned.

Dr. Robert Redfield, who led the Centers for Disease Control and Prevention during the Trump administration, will also testify. He’s backed the lab-leak theory and says fellow scientists tried to suppress debate about it after Covid struck.

The last witness, Gerry Parker, an associate dean at the College of Veterinary Medicine & Biomedical Sciences at Texas A&M, has previously pitched a detailed plan for making virus research safer.
Takeaway: While Peters and Paul might seem like an unlikely investigative pairing given Paul’s vociferous criticism of the U.S. government's handling of the pandemic, Peters has his own critique of the response and research safeguards. Earlier this year, the Homeland Security panel advanced Peters’ bill, which aims to protect American genetic data from foreign adversaries. In 2022, Peters released a report on the initial Covid response, which found that Trump administration initiatives didn’t “reflect the severity of the crisis and ultimately failed to effectively mitigate the spread of COVID-19.”
AI's playing a bigger role in developing new medicines. | Gerard Julien/AFP via Getty Images

Drug companies are increasingly turning to artificial intelligence to develop new medicines. It’s something regulators will have to grapple with as more of those medicines come up for approval.

That’s according to researchers who recently examined a cross-section of approved and investigational drugs and found AI was used in about 40 percent of the cases, our Phillip Kulubya reports. Machine learning and deep learning, which seeks to mimic the way human brains work, were the most common types of AI used, making up 28 percent and 17 percent, respectively. AI’s most common application, found in 76 percent of the cases, was identifying molecules that might be useful in drug development. Cancer and neurological therapies, at 32 percent and 28 percent, respectively, were the most common targets.

Why it matters: Drug regulators are considering how they should treat the use of AI in finding new drugs. Timo Minssen, a study author and founding director of the University of Copenhagen’s Center for Advanced Studies in Bioscience Innovation Law, foresees challenges ahead. There’s AI’s “black box” problem, for one: It’s often difficult to understand how the tools reach their conclusions. “How much specificity would medical authorities require to accept a prediction with regard to safety and efficacy of an AI system?” Minssen wonders. “What kind of data would they accept?” He says regulators need to communicate clearly how drugmakers can make the case for drugs they make with AI.

What’s next? The Food and Drug Administration earlier this year said it plans to “develop and adopt a flexible risk-based regulatory framework” to consider AI-developed drugs.