'Embarrassing and wrong': Google admits it lost control of its image-generating AI

By TechBrunch | February 23, 2024 | 6 Mins Read


Google this week apologized (or came very close to apologizing) for another embarrassing AI failure: an image-generating model that injected diversity into pictures with a farcical disregard for historical context. While the underlying problem is entirely understandable, Google blames the model for "becoming" over-sensitive. But the model didn't make itself.

The AI system in question is Gemini, the company's flagship conversational AI platform, which calls out to a version of the Imagen 2 model to create images on demand.

Recently, however, people discovered that asking it to generate images of specific historical circumstances or people could produce laughable results. The Founding Fathers, for instance, known to have been white slave owners, were rendered as a multicultural group that included people of color.

This embarrassing and easily reproduced issue was quickly lampooned by commentators online. Predictably, it was also drawn into the ongoing debate about diversity, equity, and inclusion (currently at a reputational local minimum), and seized on by pundits as evidence of the woke mind virus further infiltrating the already liberal tech sector.

Image credits: Image created by Twitter user Patrick Ganley.

DEI has gone mad, conspicuously concerned citizens cried. This is Biden's America! Google is an "ideological echo chamber" and a stalking horse for the left. (The left, it must be said, was also suitably perturbed by this strange phenomenon.)

But as anyone familiar with the technology could tell you, and as Google explains in its apology-adjacent post today, this problem was the result of a quite reasonable workaround for systemic bias in training data.

Say you're using Gemini to create a marketing campaign, and you ask it to generate 10 pictures of "a person walking a dog in a park." Because you don't specify the type of person, dog, or park, it's dealer's choice: the generative model outputs what it is most familiar with. And in many cases, that is a product of the training data rather than reality, which can have all kinds of biases baked in.

What kinds of people, and for that matter dogs and parks, are most common among the thousands of relevant images the model has ingested? The fact is that white people are overrepresented in a lot of these image collections (stock imagery, rights-free photography, and so on), and as a result the model will default to white people in many cases if you don't specify otherwise.
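To make the mechanism concrete, here is a toy sketch in Python. The frequency counts are invented for illustration, and real image models don't sample from labeled counts like this; the point is only that an unconstrained generator reproduces whatever dominates its training data:

```python
import random
from collections import Counter

# Toy stand-in for a generative model. The skewed counts below are
# invented: they represent how often each attribute appears in the
# (hypothetical) training data.
TRAINING_COUNTS = {"white": 70, "black": 10, "asian": 10, "latino": 10}

def generate_person(rng, spec=None):
    """Return an ethnicity: the user's explicit spec if given,
    otherwise a draw weighted by training-data frequency."""
    if spec is not None:
        return spec
    labels = list(TRAINING_COUNTS)
    weights = list(TRAINING_COUNTS.values())
    return rng.choices(labels, weights=weights, k=1)[0]

rng = random.Random(0)
batch = Counter(generate_person(rng) for _ in range(1000))
# With no spec, the majority class in the training data dominates the output.
print(batch.most_common(1)[0][0])
```

An explicit request (`generate_person(rng, spec="asian")`) always wins; it's only the unspecified case that falls back to the biased default.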

That's just an artifact of the training data. But as Google points out, "because our users come from all over the world, we want it to work well for everyone." If you ask for a picture of people, you likely want to receive a range of people; you probably don't want only images of people of one ethnicity (or any other characteristic).

Imagine asking for an image like this one, of a group of recently laid-off people holding boxes. What if it were all one type of person? Bad outcome! Image credits: Getty Images / Victoria Kart

There's nothing wrong with getting a picture of a white guy walking a golden retriever in a suburban park. But if you ask for 10 and they're all white guys walking goldens in suburban parks? And you live in Morocco, where the people, dogs, and parks all look different? That's simply not a desirable outcome. If someone doesn't specify a characteristic, the model should opt for variety rather than homogeneity, however biased its training data may be.

This is a common problem across all kinds of generative media, and there are no simple solutions. But in cases that are especially common, especially sensitive, or both, companies like Google, OpenAI, and Anthropic invisibly include extra instructions for their models.

It can't be overstated how commonplace this kind of implicit instruction is. The entire LLM ecosystem is built on implicit instructions, also known as system prompts, in which guidelines like "be concise" and "don't swear" are given to the model before every conversation. If you ask for a joke, you won't get a racist one, because despite the model having ingested thousands of them, it has been trained, like most of us, not to tell those. This isn't a secret agenda (though it could do with more transparency); it's infrastructure.
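A minimal sketch of that plumbing might look like the following. The directive text and message format are invented for illustration, loosely modeled on common chat-completion APIs; real products keep their actual system prompts private:

```python
# A hidden "system prompt" is prepended to every conversation before the
# user's words ever reach the model. The directive text here is invented.
SYSTEM_PROMPT = "Be concise. Do not swear. Decline to tell offensive jokes."

def build_conversation(user_message, history=None):
    """Assemble the message list actually sent to the model: the hidden
    system directive always comes first, invisible to the user."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_message})
    return messages

convo = build_conversation("Tell me a joke.")
# convo[0] is the hidden directive; the user only ever typed convo[-1].
```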

Where Google's model went wrong was that it had no implicit instructions for situations where historical context was important. A prompt like "a person walking a dog in a park" is improved by the silent addition of "the person is of a random gender and ethnicity." A prompt like "the Founding Fathers signing the Constitution" is emphatically not improved by the same.

Prabhakar Raghavan, senior vice president at Google, said:

First, Gemini's tuning to ensure it showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became far more cautious than we intended, refusing to answer certain prompts entirely and wrongly interpreting some very innocuous prompts as sensitive.

These two things caused the model to overcorrect in some cases and be overly conservative in others, producing embarrassing or incorrect images.

I forgive Raghavan for stopping just short of an apology, since I know how hard it can be to say "sorry" sometimes. More important is this interesting line: "The model became far more cautious than we intended."

So how does a model "become" anything? It's software. Thousands of Google engineers built it, tested it, and iterated on it. Someone wrote the implicit instructions that improved some answers and caused others to fail hilariously. When this one failed, anyone who could inspect the full prompt would likely have caught the Google team's mistake.

Google blames the model for "becoming" something it wasn't "intended" to be. But they made the model! It's like breaking a glass and, rather than saying "I dropped it," saying "it fell." (I've done this.)

Mistakes by these models are certainly inevitable. They hallucinate, they reflect biases, they behave in unexpected ways. But the responsibility for those mistakes belongs not to the models but to the people who made them. Today that's Google. Tomorrow it will be OpenAI. The day after, and probably for months on end, it will be X.AI.

These companies have a vested interest in convincing you that the AI itself is making the mistakes. Don't let them.


