'Embarrassing and wrong': Google admits it lost control of its image-generating AI

By TechBrunch | February 23, 2024 | 6 min read

Google this week apologized, or came very close to apologizing, for another embarrassing AI failure: an image-generating model that injected diversity into pictures with a farcical disregard for historical context. While the underlying problem is perfectly understandable, Google blames the model for being “over-sensitive.” But the model didn't make itself.

The AI system in question is Gemini, the company's flagship conversational AI platform, which calls on a version of the Imagen 2 model to create images on demand.

But it was recently discovered that asking it to generate images of certain historical situations or people can produce laughable results. The Founding Fathers, for instance, known to history as white slave owners, were rendered as a multicultural group that included people of color.

This embarrassing and easily reproduced issue was quickly lampooned by online commentators. It was also, predictably, roped into the ongoing debate about diversity, equity, and inclusion (currently at a local reputational minimum) and seized on by pundits as evidence of the woke mind virus further penetrating the already liberal tech sector.

Image credits: Image created by Twitter user Patrick Ganley.

DEI has gone mad, concerned citizens conspicuously cried. This is Biden’s America! Google is an “ideological echo chamber,” a stalking horse for the left. (It must be said that the left was also suitably perturbed by this strange phenomenon.)

But as anyone familiar with the technology knows, and as Google explains in a post adjacent to today's little apology, this problem was the result of a quite reasonable workaround for systemic bias in the training data.

For example, say you're using Gemini to create a marketing campaign and you ask it to generate 10 pictures of “a person walking a dog in a park.” Because you don't specify the type of person, dog, or park, it's dealer's choice: the generative model outputs what it knows best. And in many cases that is a product not of reality but of the training data, which can have all sorts of biases baked in.

What kinds of people, and for that matter dogs and parks, are most common among the thousands of relevant images the model has ingested? The fact is that white people are over-represented in many of these image collections (stock imagery, rights-free photography, and so on), and as a result the model will default to white people in many cases if you don't specify.

This is just an artifact of the training data, but as Google points out: “Because our users come from all over the world, we want it to work well for everyone. If you ask for a picture of people walking a dog, you may want to receive a range of people. You probably don't want to only receive images of people of just one ethnicity (or any other characteristic).”

Illustration of a group of recently laid-off people holding boxes. Imagine asking for an image like this: what if it were all one type of person? Bad outcome! Image credits: Getty Images / Victoria Kart

There's nothing wrong with getting a picture of a white guy walking his golden retriever in a suburban park. But if you ask for 10, and they're all white guys walking goldens in suburban parks? And you live in Morocco, where the people, dogs, and parks all look different? That's simply not a desirable outcome. If someone doesn't specify a characteristic, the model should opt for variety over homogeneity, despite how its training data might bias it.

This is a common problem across all kinds of generative media, and there are no easy solutions. But for cases that are particularly common, sensitive, or both, companies like Google, OpenAI, and Anthropic invisibly include extra instructions for their models.

It cannot be overstated how commonplace this kind of implicit instruction is. The entire LLM ecosystem is built on implicit instructions, also known as system prompts, where guidelines like “be concise” and “don't swear” are given to the model before every conversation. When you ask for a joke, you don't get a racist one, because despite the model having ingested thousands of jokes, it has also been trained, like most of us, not to tell those. This isn't a secret agenda (though it could do with more transparency); it's infrastructure.
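Mechanically, a system prompt is nothing exotic: it is hidden text silently placed ahead of the user's message before anything reaches the model. The sketch below illustrates the idea; the guideline wording and the `build_conversation` helper are hypothetical, not the actual prompts or code of Google, OpenAI, or Anthropic.

```python
# Minimal sketch of an implicit instruction ("system prompt"): hidden
# guidelines are prepended to every conversation before the user's text
# reaches the model. All names and wording here are illustrative only.

SYSTEM_PROMPT = "Be concise. Don't swear. Refuse to tell offensive jokes."

def build_conversation(user_message: str) -> list[dict]:
    """Assemble the message list that is actually sent to the model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # invisible to the user
        {"role": "user", "content": user_message},
    ]

messages = build_conversation("Tell me a joke.")
# The user only ever typed the second message; the first is infrastructure.
```

The user sees one message in the chat window; the model sees two. That gap between what is typed and what is sent is exactly where this kind of steering lives.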

Where Google's model went wrong was that it had no implicit instructions for situations where historical context mattered. A prompt like “a person walking a dog in a park” is improved by the silent addition of something like “the person is of a random gender and ethnicity,” but “the Founding Fathers signing the Constitution” is definitely not improved by the same.
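The missing logic can be pictured as a single conditional: only append the silent diversity hint when the prompt isn't anchored to a specific historical context. This is a toy illustration of that branch, not Google's implementation; a real pipeline would use a trained classifier, not the keyword list stood up here for demonstration.

```python
# Toy sketch of conditional prompt augmentation. The keyword list stands in
# for whatever classifier a production system would use; it only illustrates
# the branch that Gemini's pipeline apparently lacked.

HISTORICAL_MARKERS = {"founding fathers", "constitution", "medieval", "1800s"}

def augment_prompt(prompt: str) -> str:
    """Silently add a diversity hint, but only to generic prompts."""
    is_historical = any(m in prompt.lower() for m in HISTORICAL_MARKERS)
    if is_historical:
        return prompt  # keep historical scenes historically grounded
    return prompt + " The person is of a random gender and ethnicity."

print(augment_prompt("a person walking a dog in a park"))
print(augment_prompt("the Founding Fathers signing the Constitution"))
```

With the branch in place, the generic prompt gets the hint and the historical one passes through untouched; without it, every prompt gets the hint, which is roughly the failure mode on display here.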

Prabhakar Raghavan, senior vice president at Google, said:

First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became far more cautious than we intended, refusing to answer certain prompts entirely and wrongly interpreting some very anodyne prompts as sensitive.

These two things caused the model to overcorrect in some cases and be overly conservative in others, producing embarrassing or incorrect images.

I forgive Raghavan for stopping just short of an apology; I know how hard it is to say “sorry” sometimes. More important is the interesting line in there: “The model became far more cautious than we intended.”

But how does a model “become” anything? It's software. Thousands of Google engineers built it, tested it, and iterated on it. Someone wrote the implicit instructions that improved some answers and caused others to fail hilariously. When this one failed, if someone had been able to inspect the full prompt, they likely would have found the mistake Google's team made.

Google blames the model for “becoming” something it wasn't “intended” to be. But they made the model! It's like breaking a glass and, rather than saying “I dropped it,” saying “it fell.” (I've done this.)

To be sure, mistakes by these models are inevitable. They hallucinate, they reflect biases, they behave in unexpected ways. But the responsibility for those mistakes lies not with the models but with the people who made them. Today it's Google. Tomorrow it'll be OpenAI. The next day, and probably for a few months straight, it'll be X.AI.

These companies have a vested interest in convincing you that the AI itself made those mistakes. Don't let them.


