Good morning! Things got rather testy online when Google debuted its new AI image generator. Senior reporter Sigal Samuel is here to explain the drama and why it's so interesting — whether or not you care about creating AI images of popes. —Caroline Houck, senior editor of news
Pavlo Gonchar/SOPA Images/LightRocket via Getty Images
Nobody knows what AI images should look like |
Just last week, Google was forced to pump the brakes on its AI image generator, called Gemini, after critics complained that it was pushing bias ... against white people.

The controversy started with — you guessed it — a viral post on X. According to that post from the user @EndWokeness, when asked for an image of a Founding Father of America, Gemini showed a Black man, a Native American man, an Asian man, and a relatively dark-skinned man. Asked for a portrait of a pope, it showed a Black man and a woman of color. Nazis, too, were reportedly portrayed as racially diverse.

After complaints from the likes of Elon Musk, who called Gemini's output "racist" and Google "woke," the company suspended the AI tool's ability to generate pictures of people. "It's clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive," Google Senior Vice President Prabhakar Raghavan wrote, adding that Gemini does sometimes "overcompensate" in its quest to show diversity.

Raghavan gave a technical explanation for why the tool overcompensates: Google had taught Gemini to avoid falling into some of AI's classic traps, like stereotypically portraying all lawyers as men. But, Raghavan wrote, "our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range."

This might all sound like just the latest iteration of the dreary culture war over "wokeness" — and one that, at least this time, can be solved by quickly patching a technical problem. (Google plans to relaunch the tool in a few weeks.) But there's something deeper going on here. The problem with Gemini is not just a technical problem. It's a philosophical problem — one for which the AI world has no clear-cut solution.

Screenshot of @EndWokeness's post on X with images they say were generated by Gemini
Imagine that you work at Google. Your boss tells you to design an AI image generator. That's a piece of cake for you — you're a brilliant computer scientist! But one day, as you're testing the tool, you realize you've got a conundrum.

You ask the AI to generate an image of a CEO. Lo and behold, it's a man. On the one hand, you live in a world where the vast majority of CEOs are male, so maybe your tool should accurately reflect that, creating images of man after man after man. On the other hand, that may reinforce gender stereotypes that keep women out of the C-suite. And there's nothing in the definition of "CEO" that specifies a gender. So should you instead make a tool that shows a balanced mix, even if it's not a mix that reflects today's reality?

This comes down to how you understand bias. Computer scientists are used to thinking about "bias" in terms of its statistical meaning: A program for making predictions is biased if it's consistently wrong in one direction or another. (For example, if a weather app always overestimates the probability of rain, its predictions are statistically biased.) That's very clear, but it's also very different from the way most people use the word "bias" — which is more like "prejudiced against a certain group."

The problem is, if you design your image generator to make statistically unbiased predictions about the gender breakdown among CEOs, then it will be biased in the second sense of the word. And if you design it not to have its predictions correlate with gender, it will be biased in the statistical sense. So how should you resolve the trade-off?

"I don't think there can be a clear answer to these questions," Julia Stoyanovich, director of the NYU Center for Responsible AI, told me when I previously reported on this topic. "Because this is all based on values." Embedded within any algorithm is a value judgment about what to prioritize, including when it comes to these competing notions of bias.
So companies have to decide whether they want to be accurate in portraying what society currently looks like, or promote a vision of what they think society could or even should look like — a dream world.
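The two competing definitions of bias can be made concrete with a toy simulation. This is a sketch for illustration only: the 10 percent figure for female CEOs is a hypothetical stand-in, not a number from the article, and the policy names are invented here.

```python
import random

# Hypothetical base rate for illustration only: suppose roughly 10% of
# real-world CEOs are women (the true figure varies by survey and year).
REAL_WORLD_FEMALE_SHARE = 0.10

def sample_ceo_gender(mode: str) -> str:
    """Pick a gender for a generated 'CEO' image under two policies.

    'mirror'   — statistically unbiased: match the real-world base rate.
    'balanced' — representationally even: 50/50, ignoring the base rate.
    """
    p_female = REAL_WORLD_FEMALE_SHARE if mode == "mirror" else 0.5
    return "woman" if random.random() < p_female else "man"

# Over many samples, 'mirror' reproduces today's skew while 'balanced'
# erases it — and each policy counts as "biased" under the other definition.
random.seed(0)
counts = {}
for mode in ("mirror", "balanced"):
    counts[mode] = sum(sample_ceo_gender(mode) == "woman" for _ in range(10_000))
```

Neither policy is wrong on its own terms; the trade-off the article describes is exactly the choice between these two sampling rules.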
Fatih Aktas/Anadolu via Getty Images
How can tech companies do a better job navigating this tension? |
The first thing we should expect companies to do is get explicit about what an algorithm is optimizing for: Which type of bias will it focus on reducing? Then companies have to figure out how to build that into the algorithm. Part of that is predicting how people are likely to use an AI tool. They might try to create historical depictions of the world (think: white popes) but they might also try to create depictions of a dream world (female popes, bring it on!).

"In Gemini, they erred towards the 'dream world' approach, understanding that defaulting to the historic biases that the model learned would (minimally) result in massive public pushback," wrote Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face.

Google might have used certain tricks "under the hood" to push Gemini to produce dream-world images, Mitchell explained. For example, it may have been appending diversity terms to users' prompts, turning "a pope" into "a pope who is female" or "a Founding Father" into "a Founding Father who is Black."

But instead of adopting only a dream-world approach, Google could have equipped Gemini to suss out which approach the user actually wants (say, by soliciting feedback about the user's preferences) — and then generate that, assuming the user isn't asking for something off-limits.

What counts as off-limits comes down, once again, to values. Every company needs to explicitly define its values and then equip its AI tool to refuse requests that violate them. Otherwise, we end up with things like Taylor Swift porn.

AI developers have the technical ability to do this. The question is whether they've got the philosophical ability to reckon with the value choices they're making — and the integrity to be transparent about them.

—Sigal Samuel, senior reporter
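The "under the hood" prompt rewriting Mitchell describes can be sketched in a few lines. Everything here is an assumption for illustration — the term list, the blocklist of historical prompts, and the rewrite rule are invented for this sketch, not Google's actual implementation — but it shows both the trick and the missing exception that Raghavan's statement pointed to.

```python
import random

# Hypothetical diversity terms a system might append to a prompt.
DIVERSITY_TERMS = [
    "who is Black",
    "who is Asian",
    "who is female",
    "who is Native American",
]

# Hypothetical list of prompts where showing a range of people is
# historically inaccurate — the exception Gemini's tuning failed to handle.
DO_NOT_DIVERSIFY = {"a Founding Father", "a pope in 1500"}

def augment_prompt(prompt: str) -> str:
    """Append a random diversity term, unless the prompt is a
    historical case that should be passed through unchanged."""
    if prompt in DO_NOT_DIVERSIFY:
        return prompt
    return f"{prompt} {random.choice(DIVERSITY_TERMS)}"
```

On this sketch, the bug the article describes amounts to shipping `augment_prompt` without the `DO_NOT_DIVERSIFY` check — every prompt got rewritten, including the ones that "should clearly not show a range."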
The protest vote against Biden
Michigan's primary Tuesday tested President Biden's viability with Muslim voters amid the war in Gaza. |
- Can a temporary ceasefire deal be reached before Ramadan begins? US President Joe Biden has suggested some pretty optimistic timelines; Israel and Hamas not so much. [Guardian]
- Also on Israel and Gaza, it's worth stating: Trump and Biden are not the same. There's a lot to dislike about Biden's Israel policy, Zack Beauchamp argues. But Trump's positions would be worse. [Vox]
- An attempt to take down the new face of the Russian opposition: The widespread, misogynistic online disinfo campaign against Alexei Navalny's widow, Yulia Navalnaya, uncovered. [Wired]
Didier Lebrun/Photonews via Getty Images

- Whether you love it or hate it: It's hard not to notice the abnormally warm temperatures across much of the US. This January was the hottest January ever measured, and February looks set to follow. Here's why. [Vox]
- The real-life sci-fi epic happening underneath our feet: Ants are so damn fascinating. Did you know there are 20 quadrillion of them on our planet? Dive into their "strange and turbulent global society." [Aeon]
- Someone stop Andrew Tate: Some teenage British boys' turn toward misogyny, explained. [Cosmopolitan]
How scientists are searching for aliens |
They're not looking for UFOs or decoding government secrets. They're doing something much simpler. |
Today, Explained and the Vox Media Podcast Network are coming to SXSW March 8-10! See Noel King live on the Vox Media Podcast Stage with Charlamagne Tha God and Angela Rye, plus other influential podcasting voices like Brené Brown, Esther Perel, Kara Swisher, Preet Bharara, and Trevor Noah. Learn more at voxmedia.com/live.

Also: Are you enjoying the Today, Explained newsletter? Forward it to a friend; they can sign up for it right here. Today's edition was produced and edited by Caroline Houck. We'll see you tomorrow!
If you value Vox's unique explanatory journalism, support our work with a one-time or recurring contribution. Vox Media, 1201 Connecticut Ave. NW, Floor 12, Washington, DC 20036. Copyright © 2024. All rights reserved.