Tuesday, January 16, 2024

🤝 Hackers helping hackers

Plus: Capitol Hill's AI hackathon
Axios Codebook
By Sam Sabin · Jan 16, 2024

Happy Tuesday! Welcome back to Codebook.

  • ⛰️ Are you in Davos this week? Join Axios House for conversations with OpenAI CEO Sam Altman, Palantir co-founder and CEO Alex Karp, and more. RSVP here.
  • 📬 Have thoughts, feedback or scoops to share? codebook@axios.com.

Today's newsletter is 1,491 words, a 5.5-minute read.

 
 
1 big thing: Open-source tools fire up supply chain attacks
Illustration: Aïda Amer/Axios

Open-source code and legitimate hacking tools have contributed to the rising popularity of a once-rare and complicated type of cyberattack, according to new research shared exclusively with Axios.

Why it matters: Malicious hackers of all levels — from nation-state groups to lower-level cybercriminals — have gotten better at executing what experts call a software supply chain attack.

  • In these schemes, hackers compromise a single piece of third-party software and use that foothold to steal information from the vendor's customers or break into a target's network.

The big picture: Thousands of top consumer brands were vulnerable to widespread supply chain attacks last year — and many are being targeted this year through recently discovered flaws in Citrix's and Ivanti's products.

  • Typically, nation-state hacking groups turn to these types of attacks because they're more difficult for victim organizations to detect.

Driving the news: But last year, more cybercriminal groups started building tools and sharing their tips with one another — effectively lowering the barrier to entry for software supply chain attacks, according to a new report from cybersecurity company ReversingLabs.

  • That's in part because attackers are sharing more open-source tools and resources among themselves to launch such attacks, per the report.

What they're saying: "It's a cat-and-mouse game, and every single time you develop the technology that can detect that type of attack, they just pivot somewhere else," Tomislav Peričin, chief software architect and co-founder of ReversingLabs, told Axios.

  • "To me, 2023 was the year of many, many different pivots that we saw."

By the numbers: ReversingLabs' researchers found a 28% increase in the number of malicious packages available across three major open-source repositories in the first nine months of 2023 compared to the same period in 2022.

  • Researchers also uncovered at least five new techniques hackers used last year to evade detection by basic network monitoring tools.

Details: Many of the malicious packages contained code meant to obfuscate or encrypt the hackers' activity so it could slip past traditional security-monitoring tools.
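
Defenders hunt for exactly those traits when triaging open-source packages. Here's a minimal sketch of that kind of heuristic screening — flagging install-time lifecycle hooks and common obfuscation markers such as eval calls or long base64-looking blobs. The patterns, thresholds and directory layout below are illustrative assumptions, not ReversingLabs' actual detection logic.

  # Hypothetical heuristic scan of an unpacked npm package directory for
  # common supply chain red flags: install-time lifecycle scripts and
  # heavily obfuscated JavaScript. Patterns and thresholds are illustrative.
  import json
  import re
  import sys
  from pathlib import Path

  INSTALL_HOOKS = {"preinstall", "install", "postinstall"}
  SUSPICIOUS_JS = [
      re.compile(r"\beval\s*\("),           # dynamic code execution
      re.compile(r"new\s+Function\s*\("),   # another eval variant
      re.compile(r"child_process"),         # spawning shell commands
      re.compile(r"[A-Za-z0-9+/=]{200,}"),  # long base64-looking blob
  ]

  def scan_package(pkg_dir: Path) -> list[str]:
      """Return human-readable findings for one unpacked package."""
      findings = []
      manifest = pkg_dir / "package.json"
      if manifest.exists():
          scripts = json.loads(manifest.read_text()).get("scripts", {})
          for hook in INSTALL_HOOKS & scripts.keys():
              findings.append(f"install-time hook '{hook}': {scripts[hook]}")
      for js_file in pkg_dir.rglob("*.js"):
          text = js_file.read_text(errors="ignore")
          for pattern in SUSPICIOUS_JS:
              if pattern.search(text):
                  findings.append(f"{js_file.relative_to(pkg_dir)}: matches {pattern.pattern}")
                  break
      return findings

  if __name__ == "__main__":
      for finding in scan_package(Path(sys.argv[1])):
          print(finding)

Hits like these are starting points for human review, not verdicts — plenty of legitimate packages ship install scripts or minified code.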

Zoom in: In July, ReversingLabs released details about a campaign it called "Operation Brainleeches."

  • ReversingLabs found that hackers created phishing schemes based on packages hosted on the popular open-source platform npm.
  • The files included all of the raw components needed to launch an email phishing campaign, researchers found, as well as the tools needed to host any files hackers might want to attach to their emails.

Between the lines: Cracking down on supply chain attacks requires that companies continuously audit the technologies they use.

  • Programmers must scan code for security flaws as they're building new products, and government officials must continue to develop new software supply chain guidance, Peričin said. One simple form of that kind of recurring audit is sketched below.
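
This is only a sketch, assuming an npm project with a package-lock.json (lockfileVersion 2 or 3) whose "packages" map records each dependency's resolved URL and integrity hash; real audits layer in vulnerability feeds, signature checks and behavioral scanning on top of it.

  # Minimal recurring dependency audit: flag packages resolved outside the
  # official npm registry or pinned without an integrity hash. Illustrative
  # only; assumes package-lock.json lockfileVersion 2 or 3.
  import json
  from pathlib import Path

  TRUSTED_REGISTRY = "https://registry.npmjs.org/"

  def audit_lockfile(lockfile: Path) -> list[str]:
      lock = json.loads(lockfile.read_text())
      warnings = []
      for path, entry in lock.get("packages", {}).items():
          if not path:  # the empty key is the root project itself
              continue
          resolved = entry.get("resolved", "")
          if resolved and not resolved.startswith(TRUSTED_REGISTRY):
              warnings.append(f"{path}: resolved outside the npm registry ({resolved})")
          if not entry.get("integrity") and not entry.get("link"):
              warnings.append(f"{path}: no integrity hash pinned")
      return warnings

  if __name__ == "__main__":
      for warning in audit_lockfile(Path("package-lock.json")):
          print(warning)

A check like this can run on every commit, which is one cheap way to turn "continuously audit" from advice into a habit.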

What we're watching: Some of the new tactics ReversingLabs uncovered will likely stick around in 2024 as malicious actors continue to lower the barrier to entry.

 
 
2. Predicting the hurdles for AI security policy
Illustration: Aïda Amer/Axios

Cyber policymakers will face familiar hurdles as they attempt to rein in the risks of generative artificial intelligence, former officials write in a policy paper from the Aspen Institute released today.

Driving the news: An AI working group inside Aspen's Global Cybersecurity Group is sharing its first set of recommendations and predictions for regulating AI cybersecurity.

  • The paper's authors include former officials at the Department of Justice and the United Nations, as well as a former Apple executive.

Why it matters: The Biden administration is currently working through the requirements laid out in the president's AI executive order, which includes several cybersecurity provisions.

  • Some of those provisions set up new reporting rules and disclosures that, if implemented too hastily, could hinder innovation, the paper argues.

Between the lines: Many of the hurdles facing lawmakers, policymakers and regulators in their AI security work are the same ones they've faced in other facets of cybersecurity and data privacy, according to the paper.

Details: Government officials will be tempted to shorten the window companies have to report cyber incidents as generative AI helps attackers quickly identify exploitable security flaws. Officials and industry groups have spent years tussling over what an appropriate reporting timeline should be.

  • Generative AI will likely make attributing a cyberattack to a specific actor more difficult, the paper argues. Hackers will soon have more AI tools at their disposal that can help obfuscate their identities and activities.
  • The paper also warns that those taking on relatively new chief AI officer positions could face liability concerns similar to those of chief information security officers, who are often blamed for security inadequacies at their organizations.
  • And governments face a new set of data privacy and collection questions, even as they're still sorting out the basics of what consumer data organizations can actually collect and how they can use it.

Yes, but: Governments, including the U.S., are still identifying the AI problems they're trying to regulate.

  • "If they do not know what conduct, outcomes, or values they are advancing for their citizens, their efforts are unlikely to be successful," the paper's authors warn.
 
 
3. Zoom in: Bringing AI security to Capitol Hill
Illustration: Brendan Lynch/Axios

Roughly 100 lawmakers, congressional staffers and others got a taste of how secure generative AI chatbots are at a private event on Capitol Hill last week.

Driving the news: The event, hosted by Hackers on the Hill and a few partner organizations, allowed participants to explore what a few popular large language models, including Meta's Llama 2, are able to do, organizers told Axios.

Why it matters: Lawmakers are currently drafting bills that would regulate everything from how government offices can use AI to the security standards that AI models must meet.

  • But few lawmakers and staffers have had the chance to talk directly with hackers about how these models can actually be manipulated.

What they're saying: "There's layers between staffers and the hackers building the solutions," Sven Cattell, founder of DEF CON's AI Village and organizer for Hackers on the Hill, told Axios. "I had no agenda other than getting 'boots on the ground' [hackers] to talk to [Congress]."

Details: The private red-teaming event took place in a House office building Wednesday, a day before Hackers on the Hill's broader, daylong cybersecurity event, Cattell and partner organization Robust Intelligence told Axios.

  • Participants were able to play around with different types of AI chatbots — including those trained on data from across the web and those trained in a firewalled environment that limited what data was available.
  • "Most of it was just hanging out and talking to [the lawmakers and congressional staffers] and getting asked a lot of different questions," Cattell said.

Between the lines: Unlike last year's DEF CON event, where thousands of hackers were challenged to craft prompts that make chatbots misbehave, the Capitol Hill event was aimed at simply letting staffers and lawmakers learn more about different AI models, Cattell said.

  • Cattell is hopeful the event will help inform ongoing discussions on the Hill and inside the Biden administration about what exactly an AI security vulnerability is and how researchers should report one.

What's next: Cattell teased that he has more events and partnerships brewing in 2024 but declined to share specifics while they're being finalized.

  • "I have plans for this year, but you'll hear about them at RSA," he said.
 
 

4. Catch up quick

@ D.C.

🗳️ OpenAI plans to lean on verified news sources and image authenticity programs to vet information about elections happening globally in 2024. (Axios)

🌐 The State Department's Bureau of Cyberspace and Digital Policy has faced hiring challenges and has struggled to clearly define different offices' roles in its first year, according to a government watchdog report. (CyberScoop)

👀 Chinese military organizations, state-run research institutions and universities have been able to buy Nvidia's semiconductors despite a U.S. ban on exporting such tech to China. (Reuters)

@ Industry

🔮 The World Economic Forum warned in its 2024 cybersecurity outlook that the private sector will wrestle this year with "growing cyber inequity" between organizations that are cyber resilient and those that aren't. (World Economic Forum)

💰 Spot Technologies, a data analysis and AI security company based in El Salvador, raised a $2 million round led by Femsa Ventures. (TechCrunch)

🪙 Tether's crypto token is one of the most popular cryptocurrencies among fraudsters and cybercriminals, according to a UN report. (Financial Times)

@ Hackers and hacks

🚨 Ivanti has warned that two critical vulnerabilities in its VPN products are now under mass exploitation. (BleepingComputer)

📈 Ransomware gangs targeted more than 4,300 victims in 2023, nearly double 2022's total, according to threat intelligence company Cyberint. (Security Magazine)

📚 The British Library has had to cancel fellowships and delay payments to authors and is only just now bringing its archives back online after an October ransomware attack. (The Guardian)

 
 
5. 1 fun thing
Screenshot: @tvidas/X

The National Security Agency arguably brought the best swag to ShmooCon this weekend — including a notepad that says "Secrets I Heard From the NSA" and a sticker with the NSA "seal" of approval.


☀️ See y'all Friday!

Thanks to Scott Rosenberg and Megan Morrone for editing and Khalid Adad for copy editing this newsletter.

If you like Axios Codebook, spread the word.
