KEEPING ON THE GUARDRAILS — If you’re concerned about a future of truth-shattering large language models, ask not what artificial intelligence can do for cyber, but what cyber can do for artificial intelligence. That’s one of my biggest takeaways from spending a week at the RSA security conference in San Francisco, where yours truly dedicated his 96 hours in the Bay Area (or, really, in a city-sized conference center within the Bay Area) to asking anyone and everyone about the epoch-shifting tech.

That doesn’t mean AI won’t radically change the meaning of things like anti-virus or APTs. A 30-minute stroll past some of the 5,000-plus security vendors camped out on RSA’s enormous subterranean expo floor will at least show you it’s already supercharged the industry’s snake-oil problem.

But while the rest of the Beltway seems to be running around wondering what to do about large language models — and, just as importantly, who should do it — cyber experts inside and outside the government are already racing to keep the guardrails on AI.

3-part framework — Take acting national cyber director Kemba Walden, for example. During a closed-door press gaggle in one of the drabbest rooms at RSA, I asked Walden about AI, a topic that plays only a bit part in her office’s new cybersecurity strategy. Is that a glaring omission for a document meant to keep the U.S. on the long-term path to digital safety?

AI is a triad of data, processing power and algorithms, Walden told me. And with revealing rapidity, she then proceeded to list how different subsections of the strategy (like, say, sections 3.1, 3.3, 4.3, 4.6 and 2.4, to name a few) could ensure each ingredient of the AI stew wouldn’t go bad.

“It's all the things that ride on cyberspace that I'm trying to secure,” Walden said. “Tech innovation is one of them. AI is one of them.”

Back off, Beijing — Or Rob Joyce, the director of the NSA’s cybersecurity directorate, who thankfully opted to hold his small press gathering off the conference grounds. In a windowed meeting room the next day, I peppered Joyce with question after question about AI: whether it was something an adversary like Beijing could really “steal,” why it mattered that the U.S. keep its edge, and whether the NSA was doing much to ensure it did.

Joyce told me he views AI as an “accelerant” with far-reaching economic and military applications, and added he was “very concerned” about IP theft against U.S. AI giants, as I reported last week. The good news? He said his agency is already helping those companies batten down their digital doors against keyboard sleuths.

Born of the same seed — One reason the security community appears out front on AI is that there’s so much overlap between the two industries — a fact that became painfully obvious when my desperation for a bay view took me to the offices of a nearby tech giant.

To companies like Google, the dividing lines between the “hard” problems of computer exploitation and the “soft” problems of AI trust and safety aren’t as big as they might appear, Royal Hansen, the company’s vice president of privacy, safety and security engineering, told me.

“Security is a subset of quality,” Hansen said. And just as traditional security is about ensuring code is neither exploitable nor exploitative, so too is trust and safety in AI largely about ensuring “things do what they’re supposed to do.”

Part of a mindset — And if there’s one community of people who have it in their DNA to scour digital systems for unintended bugs, it's hackers, argued Sven Cattell, who has been finding holes in AI systems for more than a half-decade.

Standing on RSA’s expo floor last week, Cattell first told me how some of the best AI security researchers he’s ever encountered are hackers who have learned enough AI to understand how it works, as opposed to AI experts who try to self-teach the hacking.

But I couldn’t hear Cattell too well amid the din of vendors with substance-rich sales promotions, like an appearance by former NFL star Jerry Rice or a night of standup with comedian Nick Offerman. So, I called Cattell again on Sunday for clarification.

“A lot of the trust and safety work in AI involves threat modeling,” said Cattell, president of the AI Village at DEF CON. And hackers are better at that than anyone because “you kind of have to do weird things in order to do a good threat model.”