Tokens are a big reason why today's generative AI is inadequate

By TechBrunch · July 6, 2024 · 5 min read


Generative AI models don't process text in the same way that humans do, and understanding their “token”-based internal environment might help explain some of their quirky behavior and stubborn limitations.

Most models, from tiny on-device models like Gemma to OpenAI's industry-leading GPT-4o, are built on an architecture called a Transformer. Because of the way Transformers create associations between text and other kinds of data, they cannot take in and output raw text — at least, not without a massive amount of computation.

Therefore, for practical and technical reasons, today's Transformer models work with text split into small, bite-sized pieces called tokens. This process is called tokenization.

Tokens can be words like “fantastic”, or syllables like “fan”, “tas”, or “tic”, or, depending on the tokenizer, individual letters within a word (e.g. “f”, “a”, “n”, “t”, “a”, “s”, “t”, “i”, “c”).
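To make the splitting concrete, here is a minimal greedy longest-match tokenizer over a tiny hand-picked vocabulary. Real tokenizers (BPE, WordPiece, and friends) learn their vocabularies from large text corpora, so this sketch is purely illustrative:

```python
# Toy greedy longest-match tokenizer. The vocabulary is invented for
# illustration; real tokenizers learn theirs from data.
VOCAB = {"fantastic", "fan", "tas", "tic"}

def tokenize(text, vocab):
    """Split text into the longest vocabulary matches, falling back to single characters."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest candidate first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:  # nothing in the vocab matched: emit one character
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("fantastic", VOCAB))                    # ['fantastic']
print(tokenize("fantastical", VOCAB - {"fantastic"}))  # ['fan', 'tas', 'tic', 'a', 'l']
```

The same word ends up as one token, a few syllables, or stray characters depending entirely on what the vocabulary happens to contain.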

This method allows the Transformer to capture more information (in a semantic sense) before hitting an upper limit called the context window, but tokenization can also introduce bias.

Some tokens carry odd spacing that can trip up a transformer. For example, a tokenizer might encode “once upon a time” as “once”, “upon”, “a”, “time”, but encode “once upon a ” (with a trailing space) as “once”, “upon”, “a”, “ ”. Depending on whether you prompt the model with “once upon a” or “once upon a ”, you may get completely different results, because the model (unlike a human) does not understand that the meaning is the same.
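The trailing-space quirk can be reproduced with a toy greedy tokenizer, using an invented vocabulary in which a space can attach to a neighboring token (mimicking, not reproducing, how real learned vocabularies glue spaces onto tokens):

```python
# Invented vocabulary in which "a " (with trailing space) is its own token.
VOCAB = {"once", "upon", "a", "a ", " "}

def tokenize(text, vocab):
    """Greedy longest-match tokenization with single-character fallback."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("once upon a", VOCAB))   # ['once', ' ', 'upon', ' ', 'a']
print(tokenize("once upon a ", VOCAB))  # ['once', ' ', 'upon', ' ', 'a ']
```

One invisible trailing space, and the model receives a different final token, hence a different starting point for its completion.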

Tokenizers also treat uppercase and lowercase letters differently. To a model, “Hello” is not necessarily the same as “HELLO”. “Hello” is usually a single token (depending on the tokenizer), while “HELLO” can be as many as three (“HE”, “El”, “O”). This is why many transformers fail the capital letter test.
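A toy vocabulary (invented here, not taken from any real tokenizer, so the exact pieces differ from the ones above) shows how case alone can change the token count:

```python
# Invented case-sensitive vocabulary: the lowercase-heavy word is one token,
# the all-caps variant must be assembled from smaller pieces.
VOCAB = {"Hello", "HE", "LL", "O"}

def tokenize(text, vocab):
    """Greedy longest-match tokenization with single-character fallback."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("Hello", VOCAB))  # ['Hello'] -- one token
print(tokenize("HELLO", VOCAB))  # ['HE', 'LL', 'O'] -- three tokens
```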

“It's a bit hard to get around the question of what exactly a ‘word’ should be for a language model, and even if human experts agreed on a perfect token vocabulary, models would probably still find it useful to ‘chunk’ things further,” Sheridan Feucht, a PhD student studying the interpretability of large language models at Northeastern University, told TechCrunch. “My guess is that because of this kind of ambiguity, there will never be a perfect tokenizer.”

This ambiguity causes even more problems in languages other than English.

Many tokenization methods assume that a space in a sentence signals a new word, because they were designed with English in mind. But not all languages use spaces to separate words: Chinese and Japanese don't, nor do Korean, Thai, or Khmer.

A 2023 Oxford University study found that, because of differences in how non-English languages are tokenized, a task expressed in a language other than English can take twice as long to complete as the same task expressed in English. The same study, and another, found that because many AI vendors charge per token, users of less “token-efficient” languages may see worse model performance and still pay more.

Tokenizers often treat each character in logographic writing systems (where printed symbols represent words without regard to pronunciation, such as Chinese) as a separate token, resulting in high token counts. Similarly, tokenizers that process agglutinative languages (where words are made up of small meaningful elements called morphemes, such as Turkish) tend to turn each morpheme into a token, also resulting in high overall token counts. (The Thai word for “hello”, สวัสดี, is six tokens.)
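One way to see why token counts balloon: scripts underrepresented in a tokenizer's vocabulary often fall back to byte-level pieces, and non-Latin scripts simply take more UTF-8 bytes per character. A quick check in Python (the byte counts are exact; the token counts they imply are only a rough proxy):

```python
# Thai characters occupy 3 bytes each in UTF-8, so byte-level fallback
# tokenization inflates token counts for Thai text relative to English.
english = "hello"
thai = "สวัสดี"  # the Thai greeting mentioned above

print(len(english), len(english.encode("utf-8")))  # 5 characters, 5 bytes
print(len(thai), len(thai.encode("utf-8")))        # 6 characters, 18 bytes
```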

In 2023, Yennie Jun, an AI researcher at Google DeepMind, published an analysis comparing tokenization and its downstream effects across languages. Using a dataset of parallel texts translated into 52 languages, Jun showed that some languages need up to 10 times as many tokens to express the same meaning as English.

Beyond language inequality, tokenization may also explain why today's models are poorly suited to math.

Numbers are rarely tokenized consistently. Because the tokenizer doesn't really know what numbers are, it may treat “380” as a single token but represent “381” as a pair (“38” and “1”), effectively destroying the relationships between digits and between numbers in equations and formulas. The result is transformer confusion: recent papers have shown that models struggle to understand repeating numerical patterns and context, particularly temporal data. (See: GPT-4 thinks 7,735 is greater than 7,926.)
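The same kind of toy tokenizer illustrates the numeric inconsistency; the vocabulary below is invented so that “380” happens to be a single token while “381” is not, mirroring the example above:

```python
# Invented vocabulary: "380" is one token, but "381" must be split.
VOCAB = {"380", "38", "1"}

def tokenize(text, vocab):
    """Greedy longest-match tokenization with single-character fallback."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("380", VOCAB))  # ['380'] -- one token
print(tokenize("381", VOCAB))  # ['38', '1'] -- an adjacent number, a different shape
```

Two consecutive integers arrive at the model with entirely different shapes, so arithmetic regularities that are obvious in digits are invisible in tokens.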

This is also why models aren't very good at solving anagram problems or reversing words.

It turns out that a lot of weird behaviors and problems with LLMs actually trace back to tokenization. We'll go through a number of these issues, discuss why tokenization is at fault, and why, ideally, someone finds a way to delete this stage entirely. pic.twitter.com/5haV7FvbBx

— Andrej Karpathy (@karpathy) February 20, 2024

So tokenization clearly poses a challenge for generative AI. Can it be solved?

Perhaps.

Feucht points to “byte-level” state-space models like MambaByte, which can ingest far more data than transformers without a performance penalty by doing away with tokenization entirely. By working directly with the raw bytes representing text and other data, MambaByte is competitive with some transformer models on language-analysis tasks while better handling “noise” such as swapped characters, odd spacing, and capitalization.
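The byte-level idea itself is easy to sketch (this is the spirit of MambaByte's input representation, not its actual implementation): the input ids are simply the UTF-8 bytes, so there is no learned vocabulary to introduce quirks.

```python
def to_byte_ids(text: str) -> list[int]:
    """Every UTF-8 byte becomes one input id in the range 0-255."""
    return list(text.encode("utf-8"))

print(to_byte_ids("Hi"))     # [72, 105]
# Case and spacing still change the input, but uniformly and predictably:
print(to_byte_ids("HELLO"))  # [72, 69, 76, 76, 79]
```

The trade-off is sequence length: byte sequences are several times longer than token sequences, which is exactly what makes them expensive for quadratic-attention transformers.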

But models like MambaByte are still in the early research stages.

“It's probably best to let models look at characters directly without imposing tokenization, but right now that's just computationally infeasible for transformers,” Feucht said. “For transformer models in particular, computation scales quadratically with sequence length, so we really want to use short text representations.”

Barring a breakthrough in tokenization, new model architectures will likely be key.




