Google has announced that it will temporarily suspend the ability of its flagship generative AI model suite, Gemini, to generate images of people while it works to update its models to improve the historical accuracy of its output.
The company announced the move in a post on the social media platform X.
“While we do this, we will pause human image generation and plan to re-release an improved version soon,” the company added.
Google released Gemini's image generation tool earlier this month, but there has been no shortage of examples on social media of it producing ahistorical images of historical figures. In recent days, images portraying the Founding Fathers of the United States as Native American, Black, or Asian have drawn criticism and even ridicule.
Today, Paris-based venture capitalist Michael Jackson joined the pile-on, branding Google's AI a “nonsensical DEI parody” in a LinkedIn post. (DEI stands for “diversity, equity, and inclusion.”)
In a post to X yesterday, Google acknowledged that it was “aware” its AI was creating “inaccuracies in some historical image generation depictions,” adding: “Gemini's AI image generation does generate a wide range of people. This is generally a good thing, because people all over the world use it. But that's beside the point here.”
Generative AI tools produce output based on their training data and other parameters, such as model weights. Such tools have drawn growing criticism for producing biased, stereotyped output, such as generating sexualized images of women or depicting only white men when asked for images of people in high-status professions.
In 2015, an AI image classification tool developed by Google sparked outrage by misclassifying Black men as gorillas. The company promised to fix the problem, but as Wired reported years later, the “fix” was purely a workaround: Google simply blocked its technology from recognizing gorillas at all.