Few AI leaders are as well-connected, or have as wide a breadth of AI experience, as James Manyika. Google’s senior vice president of research, technology, and society has a Ph.D. in AI from Oxford, served as a tech adviser in the Obama Administration, and was both a visiting scientist at NASA’s Jet Propulsion Laboratory and a McKinsey consultant.
Now, Manyika, 58, also serves as a vice chair of the National AI Advisory Committee, the federal panel tasked with strategizing AI regulation.
“Throughout my career and all the different things I’ve done, there’s been a bit of a through line of, on the one hand, how do we harness the possibilities of technology to benefit people everywhere, from East Palo Alto to Zimbabwe—but at the same time, be thoughtful about the impact on society of these technologies?” Manyika says in a phone interview.
At Google, Manyika has a wide purview over the company’s AI research and products, which touch climate change, health care, entertainment, and more. One AI-powered project tracks wildfires from California to Australia. Another, Flood Hub, which forecasts flood risk in vulnerable areas, expanded this year to serve 80 countries and hundreds of millions of people.
As enthusiastic as Manyika is about AI’s capabilities, he also warns of its risks. In May, Manyika signed the Center for AI Safety’s statement on AI risk, which raised the possibility of humanity’s “extinction from AI.” Manyika says his experience growing up under a system akin to apartheid in Rhodesia (now Zimbabwe) had a “deep impact” on his approach to AI and ascertaining the scenarios that should be prevented. Facial-recognition tools in the hands of an oppressive government, for example, could be disastrous.
So Manyika says his main priority is to develop products safely and to be transparent about their limitations—even if that means Google sometimes lags behind its AI competitors. “Others will go earlier or later, some will go faster,” he says. “To me, the only race here is to get it right.”