Gemini's data analytics capabilities aren't as good as Google claims

By TechBrunch · June 29, 2024 · 7 min read

One of the selling points of Google's flagship generative AI models, Gemini 1.5 Pro and 1.5 Flash, is the amount of data they can allegedly process and analyze. During press conferences and demos, Google has repeatedly claimed that these models can achieve previously impossible tasks, like summarizing multiple documents spanning hundreds of pages or searching for scenes in movies, thanks to “long context.”

But new research suggests that the models aren't actually very good at those things.

Two separate studies looked at how well Google's Gemini model and other models could interpret huge amounts of data (think works the length of “War and Peace”). Both studies found that Gemini 1.5 Pro and 1.5 Flash struggled to correctly answer questions on large data sets: In one set of document-based tests, the model got the answer right only 40% to 50% of the time.

“While models like Gemini 1.5 Pro can technically handle long contexts, we have seen numerous examples that show the models don't actually 'understand' the content,” Marzena Karpinska, a postdoctoral researcher at the University of Massachusetts Amherst and co-author on one of the studies, told TechCrunch.

Gemini's context window falls short

A model's context, or context window, refers to the input data (e.g., text) that the model considers before generating an output (e.g., additional text). A simple question like “Who won the 2020 US Presidential election?” can act as context, as can a movie script, show, or audio clip. The larger the context window, the larger the size of the document that can fit in it.

The latest version of Gemini can ingest more than 2 million tokens as context. (“Tokens” are bits of raw data, like the syllables “fan,” “tas,” and “tic” in the word “fantastic.”) That's the equivalent of about 1.4 million words, 2 hours of video, or 22 hours of audio — the most context of any model on the market.
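As a back-of-envelope check on that conversion, English text averages roughly 0.7 words per token; the heuristic below (the ratio is a common rule of thumb, not a Google figure) reproduces the 1.4-million-word estimate:

```python
def tokens_to_words(n_tokens: int, words_per_token: float = 0.7) -> int:
    """Rough heuristic for English text: one token is ~0.7 words on average."""
    return int(n_tokens * words_per_token)

# Gemini's advertised 2-million-token window, in approximate words:
print(tokens_to_words(2_000_000))  # 1400000, i.e. ~1.4 million words
```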

During a briefing earlier this year, Google showed off a few pre-recorded demos to showcase the potential of Gemini's long-context features, including one in which Gemini 1.5 Pro searches the transcript of the Apollo 11 moon landing telecast (about 402 pages) for humorous quotations and finds scenes in the broadcast that resemble a pencil sketch.

Oriol Vinyals, vice president of research at Google DeepMind, who led the briefing, described the model as “magical.”

“[1.5 Pro] does these kinds of reasoning tasks across every page, across every word,” he said.

That may have been an exaggeration.

In one of the studies that benchmarked these capabilities, Karpinska, along with researchers from the Allen Institute for AI and Princeton University, asked the model to evaluate true and false statements about fiction books written in English. The researchers chose recent works so that the model couldn't “cheat” by relying on prior knowledge, and they peppered the statements with specific details and plot references that would be difficult to understand without reading the entire book.

Given a statement like “Using his skills as an Apos, Nusis is able to reverse engineer the type of portal that is opened by the reagent key found in Rona's crate,” Gemini 1.5 Pro and 1.5 Flash, having ingested the relevant book, had to say whether the statement was true or false and explain their reasoning.
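A minimal sketch of that evaluation setup, assuming a generic text-in/text-out model API; the prompt wording and helper names here are illustrative, not the authors' actual harness:

```python
def build_claim_prompt(book_text: str, claim: str) -> str:
    """Pair a full book with a single claim and ask for a verdict."""
    return (
        "Read the book below, then decide whether the claim is TRUE or FALSE "
        "and explain your reasoning with reference to the plot.\n\n"
        f"BOOK:\n{book_text}\n\nCLAIM: {claim}\nVERDICT:"
    )

def parse_verdict(model_output: str):
    """Take the first TRUE/FALSE token in the reply as the model's verdict."""
    text = model_output.upper()
    i_true, i_false = text.find("TRUE"), text.find("FALSE")
    if i_true == -1 and i_false == -1:
        return None  # the model gave no usable verdict
    if i_false == -1 or (i_true != -1 and i_true < i_false):
        return True
    return False

print(parse_verdict("FALSE: Nusis never opens the portal."))  # False
```

Scoring then reduces to comparing each parsed verdict against the gold label for the claim.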

Image credit: University of Massachusetts Amherst

Testing with a single book of about 260,000 words (about 520 pages), the researchers found that 1.5 Pro answered the true/false statements correctly 46.7% of the time, while Flash answered correctly only 20% of the time. In other words, a coin flip would answer questions about the book more accurately than Google's latest machine learning models. Averaged across all benchmark results, neither model managed to beat random chance in question-answering accuracy.
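Put as arithmetic, both reported accuracies sit below the 50% a random guesser would score on balanced true/false questions:

```python
# The single-book numbers reported above, lined up against a coin flip.
# A random guesser scores 50% on balanced true/false questions.
results = {"1.5 Pro": 0.467, "1.5 Flash": 0.20}
chance = 0.5

for model, acc in results.items():
    print(f"{model}: {acc:.1%} correct, {chance - acc:.1%} below chance")
```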

“We found that the model had a harder time verifying claims that required consideration of large parts of the book, or even the entire book, compared to claims that could be resolved by obtaining text-level evidence,” Karpinska said. “Qualitatively, we also observed that the model struggled to verify claims about implicit information that was obvious to a human reader but not explicitly stated in the text.”

The second of the two studies, co-authored by researchers at the University of California, Santa Barbara, tested the ability of Gemini 1.5 Flash (but not 1.5 Pro) to “reason over” videos; that is, to search them and answer questions about their content.

The co-authors created a dataset that combined images (such as a photo of a birthday cake) with questions for the model to answer about objects depicted in the image (such as “What cartoon character is on this cake?”). To evaluate the model, they randomly chose one image and inserted “distracting” images before and after it to create a slideshow-like video.
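A rough sketch of how such a slideshow-style test item could be assembled; the function, file names, and frame count are hypothetical stand-ins, not the UCSB dataset code:

```python
import random

def make_slideshow(target_image, distractors, n_frames=25, seed=0):
    """Hide one target frame among randomly placed distractor frames,
    producing the slideshow-like 'video' described above (sketch)."""
    rng = random.Random(seed)
    pos = rng.randrange(n_frames)
    # Cycle the distractors to fill all but one frame, then insert the target.
    frames = (distractors * n_frames)[: n_frames - 1]
    frames.insert(pos, target_image)
    return frames, pos

frames, pos = make_slideshow("birthday_cake.jpg", ["noise1.jpg", "noise2.jpg"])
print(len(frames), frames[pos])  # 25 frames, target at the sampled position
```

The model is then asked a question whose answer depends only on the single target frame, so any error reflects a failure to find or read that frame.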

Flash didn't perform all that well here either: in tests where the model had to transcribe six handwritten digits from a 25-image “slideshow,” Flash got about 50 percent of the transcriptions right, dropping to about 30 percent accuracy with eight digits.

“The real-world question-answering task on images seems particularly difficult for all the models we tested,” Michael Saxon, a PhD student at the University of California, Santa Barbara, and one of the study's co-authors, told TechCrunch. “It may be that the subtle inference required to recognize that there are numbers in the frame and read them is what breaks the models.”

Google is overpromising with Gemini

Neither study has been peer-reviewed, and neither probed the releases of Gemini 1.5 Pro and 1.5 Flash with 2-million-token contexts (both tested the 1-million-token releases). Flash also isn't meant to match Pro on performance; Google promotes it as a lower-cost alternative.

Still, both studies add fuel to the fire: Google overpromised with Gemini early on and has since fallen short of expectations. None of the models the researchers tested, including OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet, performed well. But Google is the only model provider to give the context window top billing in its advertising.

“There's nothing wrong with a simple claim that, based on objective technical details, 'our model can process X number of tokens,'” Saxon says, “but the question is, what useful thing can you do with it?”

Generative AI in general has come under increased scrutiny as companies (and investors) grow frustrated with the technology's limitations.

In two recent surveys by Boston Consulting Group, nearly half of respondents (all of whom were chief executive officers) said they don't expect generative AI to deliver significant productivity gains and are concerned that generative AI-powered tools could lead to mistakes and data leaks. PitchBook recently reported that early-stage generative AI deals have declined for the second consecutive quarter, plummeting 76% from their peak in Q3 2023.

Faced with meeting-summary chatbots that conjure up fictitious details about people and AI search platforms that are essentially plagiarism generators, customers are searching for promising differentiators. Google, which has been in a sometimes clumsy race to catch up with generative AI rivals, has been desperate to make Gemini's context one of those differentiators.

But it appears the gamble was premature.

“There's still no established way to actually show that 'reasoning' over or 'understanding' of long documents is happening; essentially, each group releasing these models is making these claims based on its own ad-hoc evaluations,” Karpinska said. “Since we don't know how long-context processing is implemented, and the companies don't share these details, it's hard to judge how realistic these claims are.”

Google did not respond to a request for comment.

Both Saxon and Karpinska believe the antidote to the hype around generative AI is better benchmarking and, with it, a greater emphasis on third-party critique. Saxon points out that one common long-context test, the “needle in a haystack” evaluation that Google cites frequently in its marketing materials, only measures a model's ability to retrieve specific information, like a name or number, from a dataset, not its ability to answer complex questions about that information.
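The needle-in-a-haystack setup Saxon describes can be sketched in a few lines; everything here (the filler sentence, the needle fact, the depth parameter) is illustrative:

```python
def make_haystack(needle: str, filler: str, n_repeats: int, depth: float = 0.5) -> str:
    """Bury one 'needle' fact inside a long run of filler text at a
    chosen relative depth, as in needle-in-a-haystack retrieval tests."""
    chunks = [filler] * n_repeats
    chunks.insert(int(depth * n_repeats), needle)
    return "\n".join(chunks)

haystack = make_haystack(
    needle="The magic number is 7481.",
    filler="The sky was a uniform grey that morning.",
    n_repeats=1000,
)
# Asking "What is the magic number?" against this context tests lookup only,
# not the multi-step reasoning over content that the studies above probe.
```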

“All the scientists and most of the engineers who use these models fundamentally agree that the existing benchmarking culture is broken,” Saxon says, “so it's important for the public to understand that these giant reports, with numbers like 'general intelligence across benchmarks,' should be taken with a massive grain of salt.”


