Google’s AI Isn’t Too Woke, It’s Too Rushed
The Age
Details
- Date Published: 29 Feb 2024
- Priority Score: 3
- Australian: Yes
- Created: 8 Mar 2025, 01:04 pm
Description
Google boss Sundar Pichai hasn’t been infected by the woke mind virus. He’s too obsessed with growth and is neglecting the proper checks on his products.
Summary
The article argues that Google, under Sundar Pichai's leadership, has prioritised rapid growth over thorough safety checks on its AI products, leading to incidents like the controversial outputs from its AI chatbot, Gemini. This reflects a broader industry trend in which safety personnel are greatly outnumbered by those focused on expanding AI capabilities, echoing similar problems with Microsoft's AI. Such practices pose significant risks, as hastily released AI features can cause societal harm. Despite Pichai's acknowledgment of AI's potential dangers, the lack of immediate regulatory frameworks places the onus on companies like Google to self-regulate, a responsibility they appear to be neglecting in the competitive race to dominate AI technology.
Body
By Parmy Olson
February 29, 2024 — 3.20pm

Did you hear? Google has been accused of having a secret vendetta against white people. Elon Musk exchanged tweets about the conspiracy on X more than 150 times over the past week, all regarding portraits generated with Google's new AI chatbot Gemini. Ben Shapiro, The New York Post and Musk were driven apoplectic over how diverse the images were: female popes! Black Nazis! Indigenous founding fathers! Google apologised, and has paused the feature.

[Photo: Google boss Sundar Pichai is in the crosshairs over the search giant's AI chatbot Gemini. Credit: Bloomberg]

In reality, the issue is that the company did a shoddy job of overcorrecting on tech that used to skew racist. No, its chief executive officer Sundar Pichai hasn't been infected by the woke mind virus. Rather, he's too obsessed with growth and is neglecting the proper checks on his products.

Three years ago, Google got in trouble when its photo-tagging tool started labelling some black people as apes. It shut the feature down, and then made the problem worse by firing two of its leading AI ethics researchers. These were the people whose job was to make sure that Google's technology was fair in how it depicted women and minorities. Not overly diverse like the new Gemini, but equitable and balanced.

When Gemini started producing images of German World War II soldiers who were Black and Asian this week, it was a sign that the ethics team hadn't become more powerful, as Musk and others suggest, but that it was being ignored amid Google's race against Microsoft and OpenAI to dominate generative web search. Proper investment would have led to a smarter approach to diversity in image generation, but Google was neglecting that work.

The signs have been there for the past year. People who test artificial intelligence systems for safety are outnumbered 30-1 by those whose job is to make those systems bigger and more capable, according to an estimate from the Centre for Humane Technology. Often they are shouting into a void and told not to get in the way.

Google's earlier chatbot Bard was so faulty that it made factual errors in its marketing demo. Employees had sounded warnings about that, but managers wouldn't listen. One posted on an internal message board that Bard was "worse than useless: please do not launch", and many of the 7000 staffers who viewed the message agreed, according to a Bloomberg News investigation. Not long after, engineers who'd carried out a risk assessment told their Google superiors that Bard could cause harm and wasn't ready. You can probably guess what Google did next: it released Bard to the public.

Google's rushed, faulty AI isn't alone. Microsoft's Bing chatbot wasn't just inaccurate, it was unhinged, telling a New York Times columnist soon after its release that it was in love with him and wanted to destroy things. Google has said that responsible AI is a top priority, and that it was "continuing to invest in the teams" that apply its AI principles to products.

OpenAI, which kick-started Big Tech's race for a foothold in generative AI, normalised the rationale for treating us all like guinea pigs with new AI tools. Its website describes an "iterative deployment" philosophy, under which it releases products like ChatGPT quickly to study their safety and impact and to prepare us for more powerful AI in the future.
Google's Pichai now says much the same. By releasing half-baked AI tools, he's giving us "time to adapt" to when AI becomes super powerful, according to comments he made in a 60 Minutes interview last year.

When asked what keeps him up at night, Pichai said, with no trace of irony, that it was knowing that AI could be "very harmful if deployed wrongly". So what was his solution? Pichai didn't mention investing more in the researchers who make AI safe, accurate and ethical, but pointed instead to greater regulation, a solution that lay outside his control.

"There have to be consequences for creating deepfake videos which cause harm to society," he said, referring to AI videos that could spread misinformation. "Anybody who has worked with AI for a while, you know, you realise this is something so different and so deep that we would need societal regulations to think about how to adapt."

This is a bit like the chef of a restaurant saying, "Making people sick with salmonella is bad, and we need more food inspectors to check our raw food," when they know full well there are no food inspectors to speak of and won't be for years. It gives them licence to continue dishing out tainted meat or fish. The same is true in AI.

With regulations in the distant future, Pichai knows the onus is on his company to build AI systems that are fair and safe. But now that he is caught up in the race to put generative AI into everything quickly, there's little incentive to ensure that it is.

We know about Gemini's diversity bug because of all the tweets on X, but the AI model may have other problems we don't know about — issues that may not trigger Elon Musk but are no less insidious. The female popes and black founding fathers are products of a deeper, years-long problem of putting growth and market dominance before safety. Expect our role as guinea pigs to continue until that changes.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of We Are Anonymous.

Bloomberg