Axios Future | By Bryan Walsh · Feb 13, 2021

Welcome to Axios Future, where thanks to the pandemic it feels like we all live in a permanent mid-February of the mind. Today's Smart Brevity count: 1,822 words, or about 7 minutes.

1 big thing: The coming conflict over facial recognition

Illustration: Aïda Amer/Axios

The arrests and charges in the aftermath of the Jan. 6 Capitol Hill insurrection made clear the power of facial recognition, even as efforts to restrict the technology are growing.

Why it matters: With dozens of companies selling the ability to identify people from pictures of their faces, and no clear federal regulation governing the process, facial recognition is seeping into the U.S., raising major questions about ethics and effectiveness.

Driving the news: The Minneapolis City Council voted on Friday to bar its police department from using facial recognition technology, my Axios Twin Cities colleague Nick Halter reports.

The big picture: Even as efforts to restrict facial recognition at the local level gather momentum, the technology is being used across U.S. society, a trend accelerated by efforts to identify those involved in the Capitol Hill insurrection.

- Clearview AI, one of the leading firms selling facial recognition to police, reported a 26% jump in usage by law enforcement agencies the day after the riot.
- Cybersecurity researchers employed facial recognition to identify a retired Air Force officer recorded in the Capitol that day, and after the attack Instagram accounts popped up purporting to name trespassers.
By the numbers: A report by the Government Accountability Office found that between 2011 and 2019, law enforcement agencies performed 390,186 searches to find facial matches for images or video of more than 150,000 people.

- The Black Lives Matter protests over the summer also led to a spike in the use of facial recognition among law enforcement agencies, according to Chad Steelberg, the CEO of Veritone, an AI company. "We consistently signed an agency a week, every single week."
- U.S. Customs and Border Protection used facial recognition on more than 23 million travelers in 2020, up from 19 million in 2019, according to data released on Thursday.
The big questions: Does it work? And should it work? - Facial recognition is notoriously less accurate on non-white faces, and a 2019 federal study found Asian and Black people were up to 100 times more likely to be misidentified than white men, depending on the individual system.
- There have been two known cases so far of wrongful arrest based on mistaken facial recognition matches.
What they're saying: "Today's facial recognition technology is fundamentally flawed and reinforces harmful biases," FTC Commissioner Rohit Chopra said last month, following a settlement with a photo storage company that used millions of users' images to create facial recognition technology it marketed to the security and air travel industries.

The other side: Facial recognition companies counter that humans on their own are notoriously biased and prone to error (a 2014 study estimated that 1 in 25 defendants sentenced to death in the U.S. is innocent) and that the models are improving over time.

- "There's nothing inherently evil about the models and the bias," says Steelberg. "You just have to surface that information so the end user is aware of it."
Be smart: At its most basic level, the underlying technology isn't that sophisticated, which makes it difficult to control.

- Big tech companies like Microsoft can decide not to sell facial recognition software to police departments, but there are plenty of startups ready to take their place.
- And as Jan. 6 showed, even individuals can tap facial recognition with ease to become cyber-sleuths — or cyber-vigilantes.
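Part of what makes the technology so hard to contain is how simple the core matching step is: a face is reduced to a numeric embedding vector, and identification is just a nearest-neighbor search over known embeddings. Here is a minimal sketch of that step, with made-up 4-dimensional vectors standing in for the embeddings a real face-recognition model would produce:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(query, gallery, threshold=0.8):
    # Return the best-matching name if its similarity clears the threshold.
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        score = cosine_similarity(query, emb)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy 4-dimensional "embeddings"; real systems use vectors of 128+ dimensions
# produced by a neural network, but the comparison logic is this simple.
gallery = {
    "alice": [0.9, 0.1, 0.2, 0.4],
    "bob": [0.1, 0.8, 0.3, 0.2],
}
query = [0.88, 0.12, 0.22, 0.38]
name, score = identify(query, gallery)
```

The hard engineering lives in the model that produces the embeddings; the matching itself is commodity code, which is why startups and amateur sleuths can replicate it so easily.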
"The core technology isn't limiting. It's really more of a legal jurisdiction question, which is where the rubber will meet the road."
— Chad Steelberg, Veritone

2. Diabetes monitoring without the needle

Illustration: Annelise Capossela/Axios

A European company is pioneering a bloodless way for people with diabetes to monitor their glucose levels.

Why it matters: More than 5% of the global population is affected by diabetes, and the number is set to keep rising. A more seamless monitoring system would make it easier for people with diabetes to manage their condition and avoid disastrous health outcomes.

How it works: DiaMonTech is developing machines that use lasers and an optical lens to read glucose levels through the skin photothermally.

- A user places their finger on the lens for a few seconds, and "wavelengths from the infrared laser are selectively absorbed by the glucose molecules in skin and we detect the small amount of heat that is caused by the absorption," says Thorsten Lubinski, DiaMonTech's CEO.
- A proprietary algorithm converts those readings into glucose levels.
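That conversion step is, at heart, a calibration problem: map the measured photothermal signal to a blood glucose concentration. DiaMonTech's actual algorithm is proprietary, so the following is a deliberately simple sketch that assumes a roughly linear signal-to-glucose relationship and uses invented calibration data:

```python
def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical calibration pairs: photothermal signal (arbitrary units)
# vs. reference finger-prick glucose (mg/dL). Real calibration would use
# many readings per person and a far richer model.
signal = [0.10, 0.20, 0.30, 0.40, 0.50]
glucose = [70.0, 105.0, 140.0, 175.0, 210.0]

a, b = fit_linear(signal, glucose)

def predict_glucose(s):
    # Convert a new photothermal reading into an estimated glucose level.
    return a * s + b
```

Once calibrated, each new optical reading can be converted to a glucose estimate without drawing blood, which is the entire appeal of the approach.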
Background: People with diabetes have trouble managing their blood sugar because their bodies either cannot produce, or cannot efficiently use, the glucose-regulating hormone insulin.

- To manage the disease, they need to monitor their glucose levels frequently to know when to take insulin or raise their blood sugar.
- The conventional method involves pricking a finger, sometimes several times a day, to produce blood that can be tested.
- More advanced continuous monitoring systems reduce or virtually eliminate the need for finger pricking, but they still require a sensor inserted under the skin.
What to watch: DiaMonTech has developed a lab-based version of its system that has been certified for medical use in clinics in Europe, and is working on a hand-held device for personal use that Lubinski believes could be ready by 2022. - Researchers are also working on a fully functional "artificial pancreas" that could seamlessly monitor glucose levels and dispense insulin as needed, but such devices are still likely years away.
3. Engineering "see-through soil" to fight drought

As researchers add an aqueous solution of ammonium thiocyanate, it eliminates distortion and allows for a clear view of the hydrogel. Credit: Datta et al / Princeton University

Researchers have developed a method to see through soil in order to study the effects of drought-alleviating hydrogels.

The big picture: Hydrogel beads can absorb huge amounts of water and release it back into drought-parched land as needed, but without a way to see how the beads interact with soil, it's difficult for farmers to make the most of the technology.

How it works: The research, outlined in a new study in Science Advances, used water doped with a chemical called ammonium thiocyanate in an experimental platform where glass beads stood in for soil surrounding the hydrogel beads.

- The chemical changed the way the water bent light, letting the researchers clearly see the workings of the hydrogel in the faux soil.
What they found: The researchers discovered that the amount of water the beads could store depended on the balance between the force of the hydrogels as they swelled with water and the force of the surrounding soil.

- Softer hydrogels were more effective in surface layers of soil, but deeper layers, where the surrounding pressure is greater, required stiffer hydrogel beads.
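The finding boils down to a force balance: a bead holds water only up to the point where the surrounding soil's confining pressure squeezes it back out, and stiffer gels resist that squeeze better while softer gels swell more when unconfined. A toy model, using an invented stiffness-capacity trade-off rather than the constitutive relations from the paper, reproduces the qualitative result:

```python
def water_stored(modulus, soil_pressure, capacity_const=100.0):
    """Toy estimate of water stored by a hydrogel bead (arbitrary units).

    Assumptions (illustrative only, not the paper's model):
    - A softer gel (lower modulus) swells more when unconfined.
    - Confining soil pressure squeezes water back out, and a stiffer
      gel resists that squeeze better.
    """
    free_capacity = capacity_const / modulus            # soft gels swell more
    squeeze = max(0.0, 1.0 - soil_pressure / modulus)   # confinement pushes water out
    return free_capacity * squeeze

soft, stiff = 5.0, 50.0       # hypothetical gel moduli
shallow, deep = 1.0, 20.0     # hypothetical soil pressures at two depths

shallow_winner = "soft" if water_stored(soft, shallow) > water_stored(stiff, shallow) else "stiff"
deep_winner = "soft" if water_stored(soft, deep) > water_stored(stiff, deep) else "stiff"
```

Even this crude model shows why one hydrogel recipe can't serve every field: the soft gel dominates near the surface but collapses entirely at depth, where only the stiff gel retains water.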
What's next: Much of the southwest U.S. is currently in a state of extreme or exceptional drought, and the risk of drought is expected to grow with climate change. - Engineers should be able to use the results of the study to better tailor the chemistry and makeup of hydrogels for different crops and soil conditions.
4. Biden's innovation push comes into partial focus

Illustration: Sarah Grillo/Axios

The White House is taking the first steps toward creating a new cross-agency federal R&D organization for climate technologies, but there's plenty we still don't know about the effort, my Axios colleague Ben Geman writes.

Why it matters: President Biden's overall climate plan calls for a much more muscular federal role in scaling up research and commercialization of next-wave tech, even as it looks to speed deployment of existing low-carbon sources.

Driving the news: Thursday brought the announcement of a "Climate Innovation Working Group."

- Part of its mission is to "advance" plans to stand up Biden's proposed Advanced Research Projects Agency-Climate, or ARPA-C.
The big picture: The White House said the innovation group will focus on advancing and lowering costs of a wide set of technologies including... - Carbon-neutral construction materials
- Much cheaper energy storage systems
- Carbon-free hydrogen
- Air conditioning and refrigeration that does not use planet-warming gases
- Zero-carbon heat and industrial processes for heavy industries like cement
- Advanced soil management and other farming practices that remove CO2
- Ways to retrofit existing industrial and power plants with CO2 capture
The intrigue: ARPA-C may sound familiar because of the Advanced Research Projects Agency-Energy (ARPA-E), created under 2007 legislation and first funded in 2009 (ARPA-E is itself modeled on the military's DARPA). Similarities between the Energy Department's ARPA-E and the ARPA-C concept extend beyond just the sound.

- "The precise boundaries between the two ARPAs aren't entirely clear," MIT Technology Review reports.
- "[S]ome energy observers are confused about why the administration wants to expend political capital trying to set up and fund a new research agency rather than focusing on boosting capital for existing programs," MIT Tech Review notes.
Read the whole thing.

5. Worthy of your time

Predictive policing is still racist — whatever data it uses (Will Heaven — MIT Tech Review)

- AI systems used to predict crime produce biased results, even after efforts to make the underlying data fairer.
There are spying eyes everywhere — and now they share a brain (Arthur Holland Michel — Wired) - Speaking of surveillance, here's a deep dive into the scary world of fusion intelligence, which brings together disparate streams of data into an all-seeing eye.
If rush hour dies, does mass transit die with it? (Henry Grabar — Slate) - Transit systems were built on moving large numbers of people to and from business districts in the mornings and evenings, but the pandemic may have changed that forever.
How 1970s VCR dating paved the way for Tinder and Hinge (Michael Waters — Vox) - Video dating led to the video-first dating appscape. In other news, I'm really glad I'm married.
6. 1 AI thing: Looking back at Watson's "Jeopardy!" win

Ken Jennings (he's the human) plays IBM's Watson AI system at "Jeopardy!" on Jan. 13, 2011. Photo: Ben Hider/Getty Images

Tuesday marks the 10th anniversary of IBM's Watson AI system crushing its human competition on "Jeopardy!"

Why it matters: Watson's victory marked one of the first times Americans could witness an AI system using natural language processing. But 10 years later, the field still has far to go.

Background: The idea of sending a machine learning system to compete on "Jeopardy!" originated with David Ferrucci, an AI expert then at IBM.

- Ferrucci wanted to build machines that "actually understood language in a much deeper way," and the Watson program, with "Jeopardy!" as its target, offered a chance to get there, which led IBM to eventually pick it as one of its "Grand Challenges."
How it worked: The main obstacle was that Watson needed to parse the language of "Jeopardy!" clues, which are phrased as answers, and search its store of information for matches.

- Watson used hundreds of algorithms at every stage of the process to work out what the clue was asking for, before creating a weighted list of possible answers ranked by how likely each was to be correct.
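Watson's final ranking step can be caricatured as merging many independent evidence scores into a single confidence per candidate answer. A minimal sketch with invented scorers and weights (the real DeepQA pipeline used hundreds of scoring algorithms and a trained ranking model, not a fixed weighted sum):

```python
def combine_evidence(candidates, weights):
    # Each candidate answer carries scores from several evidence
    # algorithms; its confidence is the weighted sum of those scores,
    # and candidates are ranked from most to least confident.
    ranked = []
    for name, scores in candidates.items():
        confidence = sum(weights[k] * scores.get(k, 0.0) for k in weights)
        ranked.append((confidence, name))
    ranked.sort(reverse=True)
    return ranked

# Hypothetical evidence scores for two candidate answers to one clue.
weights = {"type_match": 0.5, "passage_support": 0.3, "popularity": 0.2}
candidates = {
    "Toronto": {"type_match": 0.2, "passage_support": 0.4, "popularity": 0.9},
    "Chicago": {"type_match": 0.9, "passage_support": 0.8, "popularity": 0.7},
}
ranked = combine_evidence(candidates, weights)
best = ranked[0][1]
```

The point of weighting many weak signals is that no single algorithm has to understand the clue; a candidate that looks plausible to most of them wins, which is also why Watson's confidence could be high without any deep comprehension.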
What happened: On Feb. 16, 2011, the final episode aired, pitting IBM's Watson against top champions Brad Rutter and Ken Jennings.

- It wasn't even a contest: by the end of the third and concluding episode, Watson was up nearly $50,000.
- Jennings summed up the defeat with a tongue-in-cheek answer to the last Final Jeopardy: "I, for one, welcome our new computer overlords."
The catch: As impressive as Watson was, Ferrucci points out that "in the end, it was making linguistic predictions. There's no really deep interpretation." - Ferrucci, who left IBM in 2012, is now trying to build AIs that can better interact with human beings at his startup Elemental Cognition.
- "Our focus is on not learning what's the next word or what's the next number?" he says. "It's learning the comprehension."
The bottom line: Should AI truly be able to do that, our new computer overlords will have arrived.