Back in February, Google suspended its AI-powered chatbot Gemini's ability to generate images of people after users complained of historical inaccuracies. Asked to depict a "Roman legion," for example, Gemini would produce an anachronistically diverse group of soldiers, while rendering "Zulu warriors" as uniformly Black.
Google CEO Sundar Pichai apologized, and Demis Hassabis, co-founder of Google's AI research arm DeepMind, said a fix should arrive "very soon." But here we are in May, and the promised fix has yet to appear.
Google touted plenty of other Gemini features at its annual I/O developer conference this week, from custom chatbots to a vacation itinerary planner and integrations with Google Calendar, Keep, and YouTube Music. But a Google spokesperson confirmed that image generation of people remains switched off in the Gemini apps on the web and mobile.
So what's the holdup? Well, the issue is likely more complex than Hassabis let on.
The data sets used to train image generators like Gemini's typically contain more images of white people than of people of other races and ethnicities, and the images of non-white people in those data sets often reinforce negative stereotypes. In an apparent effort to correct for these biases, Google implemented clumsy hardcoding under the hood to add diversity to queries that didn't specify a person's appearance, and it is now struggling to find a reasonable middle path that avoids repeating history.
Will Google get there? Perhaps. Or perhaps not. Either way, the drawn-out episode is a reminder that fixing misbehaving AI is not easy, especially when bias is at the root of the misbehavior.