By Joseph Gedeon | With help from Maggie Miller and Mallory Culhane
— While AI promises troop safety and pinpoint precision on the battlefield, some experts warn it could unleash a Pandora's box of collateral damage, driven by flawed data and operating in a regulatory vacuum.

HAPPY TUESDAY, and welcome to MORNING CYBERSECURITY! And happy birthday, 2024. We're happy to be back. What's my New Year's resolution, you ask? I'm aiming to close some of my persistently open browser tabs.

Have any tips or secrets to share with MC? Or thoughts on what we should be covering? Find me on X at @JGedeon1 or email me at jgedeon@politico.com. You can also follow @POLITICOPro and @MorningCybersec on X. Full team contact info is below.

Want to receive this newsletter every weekday? Subscribe to POLITICO Pro. You'll also receive daily policy news and other intelligence you need to act on the day's biggest stories.
CODED WARFARE — It's a new year, but the grim reality of a high-casualty war still rages in Gaza. The conflict, marked by a devastating toll on civilians, is also showcasing a new dimension of AI-guided strikes that offers a glimpse into the next chapter of warfare — and how de facto human biases built into security systems could play out in an increasingly AI-driven world.

— Collateral damage: On the surface, AI promises a new future of pinpoint precision — using code and algorithms to crunch data that guides munitions. And autonomous weapons have so far proven relatively inexpensive, while also keeping troops out of harm's way. But some experts say the systems come with a deadly caveat: inherent discrimination and error.

"AI algorithms are notoriously flawed with high error rates observed across applications that require precision, accuracy and safety-criticality," Heidy Khlaaf, machine learning engineering director at cybersecurity research firm Trail of Bits, tells MC. "And this is because the nature of AI systems is to provide outcomes based on statistical and probabilistic inferences and correlations from historical data, and not any type of reasoning, factual evidence or causation."

Khlaaf explains that flaws in AI systems trained on historical data, such as Israel's "Habsora" system (meaning "the Gospel" in English), "may infer a correlation between low-income status and crime, and may then automatically incriminate any low-income individual to be guilty."

— One caveat: Though AI systems can be flawed, the heavy death toll tied to the system in Gaza stems from Habsora's intended design, "and not how AI generally functions," Khlaaf argues.

— Still, it's not new: AI systems have existed for decades, but driven by advances in tech and geopolitical pressures, their footprint on the battlefield has accelerated in recent years — and not just in the Middle East. One recent example of an AI-powered attack comes from Ukraine, which unleashed a mass attack of 16 uncrewed aerial vehicles and surface vessels in October to damage Russian ships in occupied Crimea.

— Regulation in the analog age: The emerging tech was a hot topic both on the Hill and within the White House last year, and that's set to continue this year too — with dozens of federal entities now homing in on actions from President Joe Biden's sweeping AI executive order. And that includes prioritizing cybersecurity and reining in biases in AI systems. But for now, there remains a regulatory and governance vacuum when it comes to wartime AI-driven attacks. "Our legal and technical frameworks are woefully ill-prepared for AI-based warfare," Khlaaf said. "And this will only come at the cost of innocent lives."
CELEBRATION AT STATE — The State Department's Bureau of Cyberspace and Digital Policy is popping champagne to ring in the new year after Congress tossed it a sweet holiday bonus — a brand new "cyber, digital, and related technology" fund tucked inside this year's National Defense Authorization Act.

— The nitty gritty: As Maggie writes in, the 2024 NDAA signed into law by President Joe Biden in December included the State Department Authorization Act, which sets up a fund to help allies beef up the cybersecurity of their critical networks. It also formalizes a process that began in recent years with State Department aid sent to Costa Rica and Albania.

Nate Fick, ambassador at large for cyberspace and digital policy, said in comments provided to Morning Cyber that he is "grateful for our strong partnership with Congress" on cyber-related issues. "These provisions support our programming to fortify global cyber resilience and stability, facilitate greater adoption of trusted and secure digital infrastructure, respond to emerging needs, and further align policies to support innovative, productive, and safe digital ecosystems around the world," Fick said of the new cyber fund.

— Shaking the money tree: While Fick describes the program as a "key tool" for fostering cyber relations, he admits the new firepower is just "a first step" and that the bureau plans to advocate for more congressional funding in the years to come.

— But wait, there's more: The NDAA didn't stop at the piggy bank: it also minted a new chief AI officer at State and bumped up the agency's "data-informed diplomacy" approach to cyber and IT issues. "The United States must position itself for continued leadership in this hotly contested, complex, and fast-moving geopolitical arena and the authorities provided by Congress will help us achieve this goal," Fick said.
LOCALLY GROWN — The states are likely to lead in passing tech policy measures in 2024, Matt Perault, the director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, said on today's POLITICO Tech Podcast. "If you want to just track debates, you can track what's going on in Washington, but in terms of where actual activity is, in terms of giving rights or imposing restrictions or requiring transparency, [it's] really at the state level," he said.

— Why the shift?: Thirty-nine states had "trifectas" in 2023, meaning one party controlled both chambers of the legislature and the governor's mansion, according to Perault and Scott Brennen, the head of online expression policy at the Center on Technology Policy at UNC Chapel Hill. And a recent report from the pair found that 86 percent of tech laws passed in 2023 came from states with trifectas — with 40 states set to have trifectas in 2024.

— '24 outlook: Last year, online child safety was the hot topic, with nearly two dozen measures passed in 13 states. Expect that trend to continue, with one caveat: some social media laws are in legal limbo. Perault points specifically to challenges from tech industry groups NetChoice and the Computer and Communications Industry Association, which argue that social media measures passed in Florida and Texas violate platforms' First Amendment rights to edit and moderate content on their sites. With the Supreme Court's anticipated decisions in the Florida and Texas cases acting as a key catalyst, watch for either a wave of similar legislation across the country if the justices uphold the laws, or for the statehouse fire to sputter out if the court strikes them down. "If those laws are held up, then I think it's very likely that we would see a large number of states with Republican trifectas pass similar laws this term," Perault said. "If those are struck down, then I think we probably wouldn't see as much activity."

— AI awakening: The rise of the machines is not lost on state lawmakers. Fifteen states took steps to address AI in 2023, setting up task forces, researching potential harms, and generally trying to get a handle on this sci-fi stuff. And with the trifecta trend still going strong, Perault predicts at least one state will pass a major AI law this year.
Always find a way to keep attackers beyond arm's reach.
TRIANGULATE THIS — In case you missed it, a four-year iPhone backdoor campaign targeted thousands of devices, including those of employees at cyber firm Kaspersky, by exploiting a hidden hardware feature to gain unprecedented access. The malware transmitted sensitive data like recordings and location before rebooting the device and repeating the attack, raising questions about its purpose. Dan Goodin with Ars Technica has the story.

DIGITAL DOPPELGANGERS — Influential U.S. psychologist Martin Seligman, reflecting on his legacy, received a surprising gift from a former student: a virtual version of himself built using AI. Despite initial discomfort, Seligman ultimately accepted the lifelike chatbot, but ethical concerns about replicating personalities without consent remain — as does the lack of clear legal boundaries around AI training data, reports POLITICO's Mohar Chatterjee.

CYBER JOB LOSS — The once-booming cybersecurity industry faced its own share of layoffs in 2023 despite growing cyber threats, reflecting the wider economic uncertainty, writes Carly Page for TechCrunch.

Chat soon.

Stay in touch with the whole team: Joseph Gedeon (jgedeon@politico.com); John Sakellariadis (jsakellariadis@politico.com); Maggie Miller (mmiller@politico.com); and Heidi Vogt (hvogt@politico.com).