Google is trying to create a buzz with Gemini, its flagship suite of generative AI models, apps, and services.
So what is Gemini? How can it be used? And how does it compare to its competitors?
We've created this handy guide to help you easily stay up to date with the latest developments in Gemini. This guide will be updated as new Gemini models, features, and news about Google's Gemini plans are released.
What is Gemini?
Gemini is Google's long-promised next-generation GenAI model family, developed by Google's AI research labs DeepMind and Google Research. It comes in three flavors:
Gemini Ultra, the highest-performing Gemini model; Gemini Pro, a "light" Gemini model; and Gemini Nano, a smaller "distilled" model that runs on mobile devices such as the Pixel 8 Pro.
All Gemini models are trained to be "natively multimodal." In other words, they can work with and use more than just words. They were pre-trained and fine-tuned on a variety of audio, images and videos, a large set of codebases, and text in different languages.
This sets Gemini apart from models such as Google's own LaMDA, which was trained exclusively on text data. LaMDA can't understand or generate anything other than text (essays, email drafts and so on), but that isn't the case with the Gemini models.
What is the difference between the Gemini app and the Gemini model?
Proving once again that it lacks a knack for branding, Google didn't make it clear from the start that the Gemini models are separate from the Gemini app (formerly Bard) on the web and mobile. The Gemini app is simply an interface through which certain Gemini models can be accessed. Think of it as a client for Google's GenAI.
Incidentally, the Gemini apps and models are also completely independent of Imagen 2, Google's text-to-image model, which is available in some of the company's development tools and environments.
What can Gemini do?
Because Gemini models are multimodal, they can in theory perform a range of multimodal tasks, from transcribing audio to captioning images and videos to generating artwork. Some of these capabilities have yet to reach production (more on that later), but Google promises to deliver all of them, and more, at some point in the not-too-distant future.
Of course, it's a little difficult to just take the company's word for it.
Google underperformed significantly with its initial Bard release. And recently, a video purporting to demonstrate Gemini's abilities caused an uproar, but it turned out to be heavily manipulated and more or less aspirational.
Still, assuming Google is more or less true to its claims, here's what the various tiers of Gemini will be able to do when they reach their full potential.
Gemini Ultra
Google says that, thanks to its multimodality, Gemini Ultra can be used for tasks like physics homework: solving problems step by step on a worksheet and pointing out possible mistakes in answers that have already been filled in.
Google says Gemini Ultra can also be applied to tasks such as identifying scientific papers relevant to a particular problem, extracting information from those papers and "updating" a chart from one of them by generating the formulas needed to recreate it with more recent data.
As mentioned earlier, Gemini Ultra technically supports image generation. But that capability hasn't made its way into the production version of the model yet, perhaps because the mechanism is more complex than how apps such as ChatGPT generate images. Rather than feeding prompts to an image generator (like DALL-E 3, in ChatGPT's case), Gemini outputs images "natively," without an intermediary step.
Gemini Ultra is available as an API through Vertex AI, Google's fully managed AI developer platform, and AI Studio, Google's web-based tool for app and platform developers. It also powers the Gemini app, but not for free: accessing Gemini Ultra through what Google calls Gemini Advanced requires subscribing to the Google One AI Premium plan, priced at $20 per month.
The AI Premium plan also connects Gemini to your broader Google Workspace account: think Gmail messages, Docs documents, Sheets spreadsheets and Google Meet recordings. That's useful for, say, summarizing emails or having Gemini take notes during a video call.
Gemini Pro
Google says Gemini Pro has improved reasoning, planning, and understanding capabilities over LaMDA.
An independent study by researchers at Carnegie Mellon University and BerriAI found that an early version of Gemini Pro indeed outperformed OpenAI's GPT-3.5 at handling longer and more complex reasoning chains. But the study also found that, like all large language models, this version of Gemini Pro particularly struggled with math problems involving several digits, and that users had encountered examples of bad reasoning and obvious mistakes.
But Google promised a remedy, and the first one came in the form of Gemini 1.5 Pro.
Gemini 1.5 Pro is designed as a drop-in replacement and improves on its predecessor in a number of areas, perhaps most significantly in the amount of data it can process. Gemini 1.5 Pro can take in around 700,000 words, or around 30,000 lines of code, 35 times more than Gemini 1.0 Pro can handle. And because the model is multimodal, it's not limited to text: Gemini 1.5 Pro can analyze up to 11 hours of audio or an hour of video in a variety of languages, albeit slowly (for example, searching for a scene in an hour-long video takes 30 seconds to a minute of processing).
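As a back-of-the-envelope check, those figures imply Gemini 1.0 Pro tops out at around 700,000 / 35 = 20,000 words. A minimal sketch of what that difference means in practice, using the approximate word counts Google cites (these are rough capacities, not exact token limits):

```python
# Approximate context ceilings implied by Google's figures (in words, not tokens).
GEMINI_15_PRO_WORDS = 700_000
GEMINI_10_PRO_WORDS = GEMINI_15_PRO_WORDS // 35  # roughly 20,000 words

def fits(word_count: int, limit: int) -> bool:
    """Rough check of whether a document fits in a model's context window."""
    return word_count <= limit

# A 100,000-word book overflows 1.0 Pro but fits comfortably in 1.5 Pro.
print(fits(100_000, GEMINI_10_PRO_WORDS))  # False
print(fits(100_000, GEMINI_15_PRO_WORDS))  # True
```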
Gemini 1.5 Pro entered public preview with Vertex AI in April.
A related endpoint, Gemini Pro Vision, can process text and imagery, including photos and video, and output text, along the lines of OpenAI's GPT-4 with Vision model.
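To make "process text and imagery" concrete, a vision request mixes text and image parts in a single payload. Here's a sketch loosely modeled on Google's public generateContent REST format; treat the exact field names (`contents`, `parts`, `inlineData`) as an assumption rather than something documented in this article:

```python
import base64

def build_vision_request(prompt: str, image_bytes: bytes,
                         mime_type: str = "image/jpeg") -> dict:
    """Build a generateContent-style JSON body with one text part and one image part."""
    return {
        "contents": [{
            "parts": [
                {"text": prompt},
                {"inlineData": {
                    "mimeType": mime_type,
                    # Raw image bytes are base64-encoded for JSON transport.
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
            ],
        }],
    }

body = build_vision_request("Describe this photo.", b"fake-image-bytes")
print(body["contents"][0]["parts"][0]["text"])  # Describe this photo.
```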
Within Vertex AI, developers can use a fine-tuning or “grounding” process to customize Gemini Pro for specific contexts and use cases. Gemini Pro can also connect to external third-party APIs to perform certain actions.
AI Studio offers workflows for creating structured chat prompts with Gemini Pro. Developers have access to both the Gemini Pro and Gemini Pro Vision endpoints, and can adjust the model temperature to control the output's creative range, provide examples to dictate tone and style, and tune the safety settings.
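A minimal sketch of how those knobs (temperature and safety settings) might travel in a request body, again loosely modeled on Google's public generateContent REST format; the field names and the specific harm category are assumptions for illustration, not details from this article:

```python
def build_generation_request(prompt: str, temperature: float = 0.9,
                             block_threshold: str = "BLOCK_MEDIUM_AND_ABOVE") -> dict:
    """Build a generateContent-style body carrying a temperature and a safety setting."""
    # The 0.0-1.0 bound is illustrative; actual limits depend on the model.
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature should be between 0.0 and 1.0")
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        # Lower temperature narrows the output's creative range.
        "generationConfig": {"temperature": temperature},
        "safetySettings": [
            {"category": "HARM_CATEGORY_HARASSMENT", "threshold": block_threshold},
        ],
    }

req = build_generation_request("Summarize this email thread.", temperature=0.2)
print(req["generationConfig"]["temperature"])  # 0.2
```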
Gemini Nano
Gemini Nano is a much smaller version of the Gemini Pro and Ultra models, efficient enough to run some tasks directly on phones rather than sending them off to a server somewhere. So far, it powers several features on the Pixel 8 Pro, Pixel 8 and Samsung Galaxy S24, including Summarize in Recorder and Smart Reply in Gboard.
In the Recorder app, which lets users record and transcribe audio at the press of a button, Gemini powers summaries of recorded conversations, interviews, presentations and other snippets. Users get these summaries even without a signal or Wi-Fi connection, and in a nod to privacy, no data leaves their phone in the process.
Gemini Nano is also in Gboard, Google's keyboard app, where it powers a feature called Smart Reply that suggests what you might want to say next when you're having a conversation in a messaging app. Google says the feature will initially work only with WhatsApp but will come to more apps over time.
Nano also enables Magic Compose in the Google Messages app on supported devices, allowing you to compose messages in styles like Excited, Formal, and Lyrical.
Is Gemini better than OpenAI's GPT-4?
Google has repeatedly touted Gemini's superiority on benchmarks, claiming that Gemini Ultra exceeds current state-of-the-art results on "30 of the 32 academic benchmarks widely used in large language model research and development." Meanwhile, the company says Gemini 1.5 Pro outperforms Gemini Ultra in some scenarios, on tasks such as summarizing content, brainstorming and writing; presumably that will change with the release of the next Ultra model.
But leaving aside the question of whether benchmarks really indicate a better model, the scores Google points to appear to be only marginally better than those of the corresponding OpenAI models. And, as mentioned earlier, early impressions weren't great, with users and academics pointing out that older versions of Gemini Pro tended to get basic facts wrong, struggled with translations and gave poor coding suggestions.
How much does Gemini cost?
Gemini 1.5 Pro is available for free in the Gemini app and currently in AI Studio and Vertex AI.
However, once Gemini 1.5 Pro exits preview on Vertex, the model will cost $0.0025 per character of input, while output will cost $0.00005 per character. Vertex customers are billed per 1,000 characters (roughly 140 to 250 words) and, for models like Gemini Pro Vision, per image ($0.0025).
Assume a 500-word article contains 2,000 characters. Summarizing that article with Gemini 1.5 Pro would cost $5, while generating an article of similar length would cost $0.10.
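The arithmetic behind those figures, using the per-character rates quoted above (input is billed per character fed in, output per character generated):

```python
# Per-character rates quoted for Gemini 1.5 Pro on Vertex AI after preview.
INPUT_COST_PER_CHAR = 0.0025    # dollars per character of input
OUTPUT_COST_PER_CHAR = 0.00005  # dollars per character of output

def summarize_cost(input_chars: int) -> float:
    """Cost of feeding a document in for summarization (input billing)."""
    return input_chars * INPUT_COST_PER_CHAR

def generate_cost(output_chars: int) -> float:
    """Cost of having the model write a document of the given length (output billing)."""
    return output_chars * OUTPUT_COST_PER_CHAR

article_chars = 2_000  # roughly a 500-word article
print(summarize_cost(article_chars))  # 5.0
print(generate_cost(article_chars))   # roughly 0.1
```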
Pricing for the Ultra has not yet been announced.
Where can I try Gemini?
Gemini Pro
The easiest place to experience Gemini Pro is the Gemini app, where Pro and Ultra answer queries in a range of languages.
Gemini Pro and Ultra are also accessible in Vertex AI preview via API. The API is free to use “within limits” for the time being and supports features such as chat functionality and filtering, as well as certain regions including Europe.
Additionally, Gemini Pro and Ultra are in AI Studio, a service that lets developers iterate on prompts and Gemini-based chatbots, obtain API keys to use them in their apps, or export the code to a more fully featured IDE.
Code Assist (formerly Duet AI for Developers), Google's suite of AI-powered assistance tools for code completion and generation, uses Gemini models. Developers can make "large-scale" changes across codebases, for example updating cross-file dependencies and reviewing large chunks of code.
Google has also brought Gemini models to its development tools for Chrome and the Firebase mobile development platform, as well as its database creation and management tools. And it has launched new security products underpinned by Gemini, including Gemini for Threat Intelligence, a component of Google's Mandiant cybersecurity platform that can analyze large portions of potentially malicious code and let users perform natural language searches for ongoing threats or indicators of compromise.