Google's next major AI model has arrived to take on OpenAI's slew of new products.
On Wednesday, Google announced Gemini 2.0 Flash. The company says it can natively generate images and audio in addition to text. 2.0 Flash can also use third-party apps and services, allowing it to tap into Google Search, run code, and more.
The experimental release of 2.0 Flash is available starting today through the Gemini API and Google's AI developer platforms, AI Studio and Vertex AI. However, the audio and image generation feature will only launch for “early access partners” ahead of a wider rollout in January.
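For developers, trying the experimental model through the Gemini API amounts to swapping in a new model name. Below is a minimal sketch using the google-generativeai Python SDK; the model identifier "gemini-2.0-flash-exp" and the prompt are assumptions for illustration, not details confirmed in Google's announcement.

```python
# Minimal sketch: calling the experimental 2.0 Flash model via the Gemini API.
# Assumes the google-generativeai SDK and the model ID "gemini-2.0-flash-exp".
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content("Summarize what Gemini 2.0 Flash can do.")
print(response.text)
```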
Google says it will be bringing various versions of 2.0 Flash to products such as Android Studio, Chrome DevTools, Firebase, and Gemini Code Assist in the coming months.
Flash, upgraded
The first generation of Flash, 1.5 Flash, could only generate text and was not designed for particularly demanding workloads. Google says this new model is more versatile because it can call tools like search and interact with external APIs.
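To make the tool-calling idea concrete, here is a hedged sketch using the google-generativeai SDK's function-calling support, in which the model can decide to invoke a developer-supplied Python function; get_exchange_rate is a made-up stub for illustration, not a Google API.

```python
# Sketch of tool calling: the SDK sends the function's signature to the model,
# which may request a call; with automatic function calling enabled, the SDK
# runs the function and feeds the result back. get_exchange_rate is a stub.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

def get_exchange_rate(currency_from: str, currency_to: str) -> float:
    """Return the exchange rate between two currencies (hard-coded stub)."""
    return 0.92

model = genai.GenerativeModel("gemini-2.0-flash-exp", tools=[get_exchange_rate])
chat = model.start_chat(enable_automatic_function_calling=True)
response = chat.send_message("How many euros would I get for 100 US dollars?")
print(response.text)
```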
“We know that Flash is very popular among developers because of its balance of speed and performance,” Tulsee Doshi, Google's head of product for the Gemini model, said in a Tuesday briefing. “2.0 Flash is just as fast as ever, but even more powerful.”
Google claims that 2.0 Flash is twice as fast as the company's Gemini 1.5 Pro model on certain benchmarks, per Google's own testing, and offers “significant” improvements in areas such as coding and image analysis. In fact, the company says 2.0 Flash will displace 1.5 Pro as Gemini's flagship model thanks to its superior math skills and “factuality.”
As alluded to earlier, 2.0 Flash can generate and modify images alongside text. The model can also ingest photos, videos, and audio recordings and answer questions about them (e.g., “What did he say?”).
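As a rough illustration of that kind of multimodal question answering, the sketch below passes a local image alongside a text question; the file name is hypothetical, and the same pattern is what the SDK uses for other media types.

```python
# Sketch: asking 2.0 Flash a question about an image. generate_content accepts
# a list mixing media and text. "receipt.jpg" is a placeholder file name.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel("gemini-2.0-flash-exp")
image = PIL.Image.open("receipt.jpg")
response = model.generate_content([image, "What is the total amount on this receipt?"])
print(response.text)
```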
Audio generation is another key feature of 2.0 Flash, which Doshi described as “steerable” and “customizable.” For example, the model can narrate text using one of eight voices “optimized” for different accents and languages.
“You can ask it to speak slower, you can ask it to speak faster, you can ask it to say things like a pirate,” she added.
Now, as a journalist, I feel obligated to point out that Google did not provide image or audio samples from 2.0 Flash, so as of this writing there is no way to know how the quality compares to the output of other models.
Google said it uses its SynthID technology to watermark all audio and images generated by 2.0 Flash. Software and platforms that support SynthID (i.e., some Google products) will flag the model's output as synthetic.
That is meant to allay fears of abuse; the threat of deepfakes is, after all, growing. According to identity verification service Sumsub, the number of deepfakes detected worldwide increased fourfold from 2023 to 2024.
Multimodal API
The production version of 2.0 Flash is expected to be released in January. But in the meantime, Google is releasing the Multimodal Live API, an API that allows developers to build apps with real-time audio and video streaming capabilities.
Google says the Multimodal Live API allows developers to build real-time multimodal apps with audio and video input from cameras or screens. The API supports the integration of tools to accomplish tasks and can handle “natural conversation patterns” such as interruptions, along the lines of OpenAI's Realtime API.
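As a very rough sketch of what a session with the Multimodal Live API might look like through Google's newer google-genai Python SDK, the snippet below opens a streaming session, sends a text turn, and reads the model's streamed replies. The method names (live.connect, session.send, session.receive) and config keys are assumptions for illustration and may differ from the shipped API.

```python
# Hedged sketch of a real-time session with the Multimodal Live API via the
# google-genai SDK. Method names and config keys here are assumptions.
import asyncio
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")
config = {"response_modalities": ["TEXT"]}  # audio/video streams would be configured here too

async def main():
    async with client.aio.live.connect(
        model="gemini-2.0-flash-exp", config=config
    ) as session:
        # Send one text turn and stream back the model's reply as it arrives.
        await session.send(input="Hello, can you hear me?", end_of_turn=True)
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())
```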
The Multimodal Live API is generally available as of this morning.