Google is making a big push with Gemini, its flagship suite of generative AI models, apps, and services. But as our informal review showed, Gemini appears promising in some respects and falls short in others.
So what is Gemini? How can it be used? And how does it compare to its competitors?
We've put together this handy guide to help you keep up with the latest Gemini developments. We'll update it as Google releases new Gemini models and features, and as news about its plans for Gemini emerges.
What is Gemini?
Gemini is Google's long-promised, next-generation GenAI model family, developed by Google's AI research lab DeepMind and Google Research. It comes in three flavors:
Gemini Ultra, the flagship Gemini model; Gemini Pro, a “lite” Gemini model; and Gemini Nano, a smaller “distilled” model that runs on mobile devices like the Pixel 8 Pro.
All Gemini models were trained to be “natively multimodal.” In other words, they can work with and use more than just text: they were pretrained and fine-tuned on a variety of audio, images, and videos, a large set of codebases, and text in different languages.
This sets Gemini apart from models such as Google's own LaMDA, which was trained exclusively on text data. LaMDA can't understand or generate anything other than text (essays, email drafts, and so on), but that isn't the case with the Gemini models.
What is the difference between the Gemini app and the Gemini model?
Proving once again that it lacks branding savvy, Google didn't make it clear from the outset that Gemini is separate from the Gemini app (formerly Bard) on the web and mobile. The Gemini app is simply an interface through which certain Gemini models can be accessed; think of it as a client for Google's GenAI.
Incidentally, the Gemini apps and models are also completely independent of Imagen 2, Google's text-to-image model, which is available in some of the company's dev tools and environments. Don't worry: you're not alone in finding this confusing.
What can Gemini do?
Because the Gemini models are multimodal, they can in theory perform a range of multimodal tasks, from transcribing speech to captioning images and videos to generating artwork. Few of these capabilities have reached production yet (more on that later), but Google promises all of them, and more, at some point in the not-too-distant future.
Of course, it's a little difficult to just take the company's word for it.
Google seriously underdelivered with the initial Bard launch. And more recently, a video purporting to demonstrate Gemini's capabilities caused a stir when it turned out to be heavily doctored and more or less aspirational.
Still, assuming Google is more or less true to its claims, here's what the various tiers of Gemini will be able to do when they reach their full potential.
Gemini Ultra
Google says that, thanks to its multimodality, Gemini Ultra can be used for tasks such as physics homework, solving problems step by step on a worksheet and pointing out possible mistakes in answers that have already been filled in.
Google says Gemini Ultra can also be applied to tasks such as identifying scientific papers relevant to a particular problem, extracting information from those papers and “updating” a chart from one of them by generating the formulas needed to re-create it with more recent data.
As mentioned earlier, Gemini Ultra technically supports image generation. But that capability hasn't made it into the production version of the model yet, perhaps because the mechanism is more complex than the way apps such as ChatGPT generate images. Rather than feeding prompts to an image generator (like DALL-E 3, in ChatGPT's case), Gemini outputs images “natively,” without an intermediate step.
Gemini Ultra is available as an API through Vertex AI, Google's fully managed AI developer platform, and AI Studio, Google's web-based tool for app and platform developers. It also powers the Gemini app, but not for free: accessing Gemini Ultra through what Google calls Gemini Advanced requires subscribing to the Google One AI Premium plan, which costs $20 per month.
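For a sense of what API access looks like in practice, here's a minimal sketch using the google-generativeai Python SDK with an AI Studio key. The model ID for Ultra is an assumption (availability and identifiers depend on your account), but the call pattern is the same across the Gemini models.

```python
# pip install google-generativeai
import google.generativeai as genai

# Configure the client with a key generated in AI Studio.
genai.configure(api_key="YOUR_API_KEY")

# NOTE: "gemini-ultra" is an assumed model ID; check genai.list_models()
# for the identifiers your account can actually use.
model = genai.GenerativeModel("gemini-ultra")

response = model.generate_content(
    "Walk me through solving 3x + 5 = 20, step by step."
)
print(response.text)
```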
The AI Premium plan also connects Gemini to your broader Google Workspace account: think Gmail emails, Docs documents, Sheets spreadsheets and Google Meet recordings. That's useful if, say, you want to summarize emails or have Gemini capture notes during a video call.
Gemini Pro
Google says Gemini Pro has improved reasoning, planning, and understanding capabilities over LaMDA.
An independent study by researchers at Carnegie Mellon University and BerriAI found that Gemini Pro does indeed outperform OpenAI's GPT-3.5 at handling longer and more complex reasoning chains. But the study also found that, like all large language models, Gemini Pro particularly struggles with math problems involving several digits, and users have turned up plenty of examples of faulty reasoning and outright mistakes.
But Google promised improvements, and the first improvements came in the form of Gemini 1.5 Pro.
Designed as a drop-in replacement, Gemini 1.5 Pro (currently in preview) is improved in a number of areas compared with its predecessor, perhaps most significantly in the amount of data it can process. Gemini 1.5 Pro can (in limited private preview) take in up to 700,000 words, or around 30,000 lines of code: 35 times the amount Gemini 1.0 Pro can handle. And since the model is multimodal, it's not limited to text. Gemini 1.5 Pro can analyze up to 11 hours of audio or an hour of video in a variety of languages, albeit slowly (for example, searching for a scene in an hour-long video takes 30 seconds to a minute of processing).
Gemini Pro is also available via API in Vertex AI, where it accepts text as input and generates text as output. An additional endpoint, Gemini Pro Vision, can process text and imagery, including photos and video, and outputs text, along the lines of OpenAI's GPT-4 with Vision model.
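To illustrate the Vision endpoint, here's a rough sketch using the google-generativeai Python SDK; the chart.png file is a hypothetical stand-in, and the gemini-pro-vision model ID reflects the naming at the time of writing.

```python
# pip install google-generativeai pillow
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")

# The Vision endpoint takes mixed text-and-image input and returns text.
model = genai.GenerativeModel("gemini-pro-vision")

image = Image.open("chart.png")  # hypothetical local image file
response = model.generate_content(
    ["Describe the trend shown in this chart in two sentences.", image]
)
print(response.text)
```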
Within Vertex AI, developers can customize Gemini Pro for specific contexts and use cases through a fine-tuning or “grounding” process. Gemini Pro can also be connected to external, third-party APIs to perform particular actions.
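Connecting Gemini Pro to an outside API works through function calling in the Vertex AI SDK. The sketch below is illustrative rather than definitive: get_order_status is a made-up function, and the module paths reflect the preview SDK at the time of writing.

```python
# pip install google-cloud-aiplatform
import vertexai
from vertexai.preview.generative_models import (
    FunctionDeclaration,
    GenerativeModel,
    Tool,
)

vertexai.init(project="your-gcp-project", location="us-central1")

# Hypothetical third-party API described to the model as a callable tool.
get_order_status = FunctionDeclaration(
    name="get_order_status",
    description="Look up the shipping status of a customer order.",
    parameters={
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
)

model = GenerativeModel(
    "gemini-pro",
    tools=[Tool(function_declarations=[get_order_status])],
)
response = model.generate_content("Where is order 12345 right now?")

# Instead of prose, the model returns a structured function call; your
# code runs the real API and can feed the result back for a final answer.
print(response.candidates[0].content.parts[0].function_call)
```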
AI Studio offers a workflow for creating structured chat prompts with Gemini Pro. Developers have access to both the Gemini Pro and Gemini Pro Vision endpoints, and can adjust the model temperature to control the output's creative range, provide examples to dictate tone and style, and tune the safety settings.
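Those same knobs are exposed programmatically. Below is a rough sketch using the google-generativeai Python SDK; the specific temperature value and the safety-setting shorthand are illustrative, not prescriptive.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    "gemini-pro",
    # A lower temperature narrows the creative range of the output;
    # a higher one widens it.
    generation_config=genai.GenerationConfig(
        temperature=0.2,
        max_output_tokens=256,
    ),
    # Illustrative shorthand; see the SDK docs for the full set of
    # harm categories and block thresholds.
    safety_settings={"HARASSMENT": "BLOCK_MEDIUM_AND_ABOVE"},
)

response = model.generate_content(
    "Write a formal, two-sentence product announcement."
)
print(response.text)
```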
Gemini Nano
Gemini Nano is a much smaller version of the Gemini Pro and Ultra models, and it's efficient enough to run directly on (some) phones instead of sending tasks to a server somewhere. So far it powers two features on the Pixel 8 Pro: Summarize in the Recorder app and Smart Reply in Gboard.
The Recorder app, which lets users push a button to record and transcribe audio, includes Gemini-powered summaries of recorded conversations, interviews, presentations and other snippets. Users get these summaries even without a signal or Wi-Fi connection, and in a nod to privacy, no data leaves the phone in the process.
Gemini Nano is also in Gboard, Google's keyboard app, as a developer preview. There, it powers a feature called Smart Reply, which helps suggest the next thing you'll want to say when you're having a conversation in a messaging app. The feature initially works only with WhatsApp, but Google says it will come to more apps in 2024.
Is Gemini better than OpenAI's GPT-4?
Google has touted Gemini's benchmark superiority on several occasions, claiming that Gemini Ultra exceeds current state-of-the-art results on “30 of the 32 widely used academic benchmarks used in large language model research and development.” The company says Gemini Pro, meanwhile, is more capable than GPT-3.5 at tasks such as summarizing content, brainstorming and writing.
But leaving aside the question of whether benchmarks really indicate a better model, the scores Google points to appear to be only marginally better than those of OpenAI's corresponding models. And, as mentioned earlier, early impressions haven't been great, with users and academics pointing out that Gemini Pro tends to get basic facts wrong, struggles with translations and gives poor coding suggestions.
How much does Gemini cost?
Gemini Pro is free to use in the Gemini app and, for now, in AI Studio and Vertex AI.
Once Gemini Pro exits preview in Vertex, however, the model will cost $0.0025 per character of input, while output will cost $0.00005 per character. Vertex customers pay per 1,000 characters (roughly 140 to 250 words) and, in the case of models like Gemini Pro Vision, per image ($0.0025).
Let's say a 500-word article contains 2,000 characters. Summarizing that article with Gemini Pro would cost $5, while generating an article of a similar length would cost $0.10.
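The arithmetic behind those figures is simple enough to check:

```python
# Back-of-the-envelope check of the numbers above (input and output are
# priced per character; this ignores the much smaller cost of the other
# side of each call).
INPUT_RATE = 0.0025    # dollars per input character
OUTPUT_RATE = 0.00005  # dollars per output character

article_chars = 2_000  # a ~500-word article

summarize_cost = article_chars * INPUT_RATE   # feeding the article in
generate_cost = article_chars * OUTPUT_RATE   # producing similar-length text

print(f"Summarizing: ${summarize_cost:.2f}")  # $5.00
print(f"Generating:  ${generate_cost:.2f}")   # $0.10
```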
Pricing for Gemini Ultra has yet to be announced.
Where can I try Gemini?
Gemini Pro
The easiest place to experience Gemini Pro is the Gemini app, where Pro and Ultra answer questions in a variety of languages.
Gemini Pro and Ultra are also accessible in preview in Vertex AI via an API. The API is free to use “within limits” for the time being and supports certain regions, including Europe, as well as features like chat functionality and filtering.
Elsewhere, Gemini Pro and Ultra can be found in AI Studio. Using the service, developers can iterate on prompts and Gemini-based chatbots, then get API keys to use them in their apps, or export the code to a more fully featured IDE.
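A chat prompt iterated on in AI Studio maps onto a short script once you have an API key. Here's a rough sketch (the prompts are placeholders):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key generated in AI Studio
model = genai.GenerativeModel("gemini-pro")

# A chat session keeps the conversation history between turns, much
# like AI Studio's structured chat prompts.
chat = model.start_chat(history=[])
print(chat.send_message("Suggest three names for a hiking app.").text)
print(chat.send_message("Make them shorter.").text)
```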
Duet AI for Developers, Google's suite of AI-powered assistance tools for code completion and generation, now uses Gemini models. And Google has brought Gemini models to its dev tools for Chrome and its Firebase mobile development platform.
Gemini Nano
Gemini Nano is on the Pixel 8 Pro and will come to other devices in the future. Developers interested in incorporating the model into their Android apps can sign up for a sneak peek.
Will Gemini come to iPhone?
Maybe! Apple and Google are reportedly in talks to use Gemini for a number of features that will be included in iOS updates expected later this year. Nothing is definitive, as Apple is also reportedly in talks with OpenAI and working on developing its own GenAI capabilities.
This post was originally published on February 16, 2024 and has since been updated to include new information about Gemini and Google's plans for it.