Google is expanding the range of Gemini large language models available to developers on its Vertex AI platform.
Gemini 1.0 Pro (Google is good at branding, so just a week ago it was still known as Gemini Pro 1.0) is now generally available after a stint in public preview. Meanwhile, Google says that Gemini 1.0 Ultra (which, you may also remember, was formerly known as Gemini Ultra 1.0) is now generally available “via a whitelist,” which is not quite how general availability usually works.
Google also today announced Gemini 1.5 Pro (not Gemini Pro 1.5, of course), an update to the existing Gemini Pro model that the company says performs at the level of its current flagship model, Gemini 1.0 Ultra. Perhaps more importantly, the new model can handle a context of up to one million tokens, which works out to roughly an hour of video, 30,000 lines of code, or over 700,000 words. The model uses what Google calls a new mixture-of-experts approach and is currently in private preview.
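To make the long-context claim a bit more concrete, here is a rough sketch of what a request to the new model could look like through the Vertex AI Python SDK. The model ID, project name, and video file are placeholders on our part, and actually running this assumes your project has been admitted to the private preview.

```python
# Minimal sketch: sending a large multimodal context to Gemini 1.5 Pro on Vertex AI.
# Assumes the Vertex AI Python SDK (google-cloud-aiplatform), private-preview access,
# and a model ID along the lines of "gemini-1.5-pro"; the exact ID, quota, and
# region depend on your project's access.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="your-gcp-project", location="us-central1")

model = GenerativeModel("gemini-1.5-pro")

# A long-context request: an hour-long video plus a question about it.
# The Cloud Storage URI is a placeholder for a file you have uploaded yourself.
video = Part.from_uri("gs://your-bucket/talk-recording.mp4", mime_type="video/mp4")
response = model.generate_content([video, "Summarize the key points of this talk."])
print(response.text)
```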
With Vertex, Google is also adding support for adapter-based tuning, with support for techniques like reinforcement learning from human feedback and distillation coming soon. Additionally, developers can now more easily augment models with up-to-date data, and they can use function calling for more complex workflows, which lets them connect Gemini models to external APIs.
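For a sense of how function calling fits into a workflow, here is a rough sketch using the Vertex AI Python SDK: the model decides when a lookup is needed, and your code performs the actual external API call. The weather function and its fields are hypothetical placeholders, not anything Google announced.

```python
# Minimal sketch of function calling on Vertex AI. "get_weather" and "city" are
# made-up names for illustration; swap in your own external API.
import vertexai
from vertexai.generative_models import (
    FunctionDeclaration, GenerativeModel, Part, Tool,
)

vertexai.init(project="your-gcp-project", location="us-central1")

get_weather = FunctionDeclaration(
    name="get_weather",
    description="Look up the current weather for a city",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)
weather_tool = Tool(function_declarations=[get_weather])

model = GenerativeModel("gemini-1.0-pro", tools=[weather_tool])
chat = model.start_chat()

# The model responds with a structured function call instead of plain text.
response = chat.send_message("Do I need an umbrella in Amsterdam today?")
call = response.candidates[0].content.parts[0].function_call

# Call your real external API here, then hand the result back to the model
# so it can compose the final answer.
api_result = {"condition": "rain", "temp_c": 9}
final = chat.send_message(
    Part.from_function_response(name=call.name, response={"content": api_result})
)
print(final.text)
```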
As for other developer tools, Google now provides access to the Gemini API through a Dart SDK, making it easier for developers to use Gemini in their Dart and Flutter apps. It is also making it easier to use the Gemini API in Project IDX, its experimental web-based integrated development environment, and adding a Firebase extension that brings the API to its mobile development platform.