Friday, March 12, 2021

AI is changing the world. What should governments do about it?

Hey readers,

On the latest episode of The Weeds podcast, I talked with Matt Yglesias about artificial intelligence. He's a policy guy, so a lot of the conversation focused on a simple question: If AI development risks devastating effects on human civilization, what should the government do?

There's a new report from the National Security Commission on Artificial Intelligence (NSCAI) that tries to answer that. The commission takes seriously the idea that we're careening toward a radical new era in human history, where lots of decisions are guided by artificial intelligence systems that are, in some capacities at least, superhuman.

That power could do a lot of good, like speeding medical research and developing new technologies we can deploy against other pressing global problems.

It could also do a lot of bad: we could, for instance, end up unable to meaningfully direct AI toward human goals (read my explainer on the threat AI poses to humanity for more detail).

Public discussion of this threat often ends up focused on killer robots (there aren't going to be killer robots; the dangers of AI are mostly unrelated to that). It's not that computers will wake up with human motives, but that AI's pursuit of the goals we set for it has destructive side effects –– and that those can be difficult to manage when innovation is happening at a computer's breakneck pace.

The NSCAI report gets this basic story right.

AI and international competition

But there's an important way in which the NSCAI report falls short. Recognizing that AI poses enormous risks and will be powerful and transformative, the report concludes that the US should make sure it, rather than China, is the world leader in AI.

The current government of China is obviously committing human rights violations and shouldn't have more tools to do so. That's a problem that's hardly specific to AI. The US and the global community should absolutely devote more attention and energy to addressing it.

But AI safety, broadly speaking, is a different challenge altogether. If the US approaches AI development with an arms-race mindset toward China, the result could be greater harm for humanity. In that posture, the US would be more likely to cut corners, evade transparency measures, and push ahead despite warning signs. It could also mean that policymakers and researchers don't pay enough attention to the "AI alignment" problem, and that neglect could be devastating.

AI alignment is the work of trying to design intelligent systems that are accountable to us –– that handle misspecified goals gracefully, fully report what they know, and partner with us even when they're in some narrow senses more powerful than us.


Right now, our capacity to build AIs is racing ahead of our capacity to understand and align them. Racing to make sure AI advances happen in the US first can just make that problem worse if the US doesn't also invest in the research needed to build aligned AIs, research that is much less mature and has much less obvious commercial value.
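
To make "misspecified goals" concrete, here's a toy sketch of my own (nothing from the NSCAI report; the scenario, names, and numbers are all invented): an optimizer told to maximize an engagement proxy can end up picking exactly the action that's worst by the measure we actually care about.

    # Toy illustration of a misspecified goal: the "true" objective is user
    # well-being, but the system is only told to maximize engagement (a proxy).
    actions = {
        # action: (engagement_proxy, true_wellbeing) -- invented numbers
        "show_balanced_news":   (5, 8),
        "show_clickbait":       (9, 2),
        "show_outrage_content": (10, -5),
    }

    # The optimizer sees only the proxy...
    best_for_proxy = max(actions, key=lambda a: actions[a][0])
    # ...but we care about the true objective.
    best_for_truth = max(actions, key=lambda a: actions[a][1])

    print("Optimizer chooses:", best_for_proxy)  # show_outrage_content
    print("We wanted:        ", best_for_truth)  # show_balanced_news

The gap between those two answers is the alignment problem in miniature: nothing here is malicious; the system just optimized exactly what it was told to.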

The problem with leaving AI to the Defense Department

The limited scope of the NSCAI report is a fairly obvious consequence of who its authors are and what they do.

Right now, the part of the US government that takes artificial intelligence risks seriously is the Defense Department. That's because AI risk is weird, confusing, and futuristic, and the Defense Department has more latitude than the rest of the government to spend resources seriously investigating weird, confusing, and futuristic things.

But AI isn't just a defense issue; it will affect most aspects of society, like education, criminal justice, medicine, and the economy. And to the extent it is a defense issue, that doesn't mean that traditional defense approaches make sense.

If, before electricity was harnessed, the only people working on it had been armies interested in electrical weapons, they wouldn't just have missed most of electricity's effects on the world. They'd have missed most of its effects on the military, too, which turned out to come from lighting, communications, and intelligence rather than from electric weapons.

The NSCAI, to its credit, takes AI seriously, including its non-defense applications. But the commission and the Defense Department shouldn't be the only parts of the government concerned with AI.

Some AI work, at least, needs to be happening in a context insulated from arms-race concerns and fears of China. There should be more stabs at collaborative global work that aims at succeeding for the sake of all humans. The perspectives such work will create room for just might be crucial ones.

––Kelsey Piper, @kelseytuoc

Joe Biden just launched the second war on poverty

The first war on poverty cut it in half. Joe Biden could do it again. Read more.

China's genocide against the Uyghurs, in 4 disturbing charts

From internment camps to mass sterilization, here's why the ethnic minority's birthrate is plunging. Read more.

