Google's research arm on Wednesday showed off a whiz-bang assortment of artificial intelligence (AI) projects it's incubating, aimed at everything from mitigating climate change to helping novelists craft prose, Jennifer A. Kingson reports.
Why it matters: AI has breathtaking potential to improve and enrich our lives — and comes with hugely worrisome risks of misuse, intrusion and malfeasance, if not developed and deployed responsibly.
Driving the news: The dozen-or-so AI projects that Google Research unfurled at a Manhattan media event are in various stages of development.
On the "social good" side:
- Wildfire tracking: Google's machine-learning model for early detection is live in the U.S., Canada, Mexico and parts of Australia.
- Flood forecasting: A system that sent 115 million flood alerts to 23 million people in India and Bangladesh last year has since expanded to 18 additional countries.
- Maternal health/ultrasound AI: Using an Android app and a portable ultrasound monitor, nurses and midwives in the U.S. and Zambia are testing a system that assesses a fetus' gestational age and position in the womb.
- Preventing blindness: Google's Automated Retinal Disease Assessment (ARDA) uses AI to help health care workers detect diabetic retinopathy.
- The "1,000 Languages Initiative": Google is building an AI model that will work with the world's 1,000 most-spoken languages.
On the more speculative and experimental side:
- Self-coding robots: In a project called "Code as Policies," robots are learning to autonomously generate new code.
- Wordcraft: Several professional writers are experimenting with Google's AI fiction-crafting tool.
The big picture: Fears about AI's dark side — from privacy violations and the spread of misinformation to losing control of consumer data — recently prompted the White House to issue a preliminary "AI Bill of Rights," encouraging technologists to build safeguards into their products.
- While Google published its principles of AI development in 2018 and other tech companies have done the same, there's little-to-no government regulation.
- Although investors have been pulling back on AI startups recently, Google's deep pockets could give it more time to develop projects that aren't immediate moneymakers.
Yes, but: Google executives sounded multiple notes of caution as they showed off their wares.
- AI "can have immense social benefits" and "unleash all this creativity," said Marian Croak, head of Google Research's center of expertise on responsible AI.
- "But because it has such a broad impact on people, the risk involved can also be very huge. And if we don't get that right ... it can be very destructive."
Still, there's fun stuff: This summer, Google Research introduced Imagen and Parti — two AI models that can generate photorealistic images from text prompts (like "a puppy in a nest emerging from a cracked egg"). Now they're working on text-to-video:
- Imagen Video can create a short clip from phrases like "a giraffe underneath a microwave."
- Phenaki is "a model for generating videos from text, with prompts that can change over time and videos that can be as long as multiple minutes," per Google Research.
- AI Test Kitchen is an app that demonstrates text-to-image capabilities through two games, "City Dreamer" (build cityscapes using keywords) and "Wobble" (create friendly monsters that can dance).
The bottom line: Despite recent financial headwinds, AI is steamrolling forward — with companies such as Google positioned to serve as moral arbiters and standard-setters.