Google has released three new “open” generative AI models that it claims are “safer,” “smaller,” and “more transparent” than most others out there. Bold claims, to be sure.
They're additions to Google's Gemma 2 family of generative models, which debuted in May. The new models — Gemma 2 2B, ShieldGemma, and Gemma Scope — are designed for slightly different applications and use cases but share a common focus on safety.
Google's Gemma series differs from its Gemini models in that Google does not release Gemini's source code; Gemini is used in Google's own products as well as made available to developers. Gemma, rather, is Google's effort to foster goodwill within the developer community, much as Meta is attempting with Llama.
Gemma 2 2B is a lightweight model that can analyze and generate text and runs on a range of hardware, including laptops and edge devices. It is licensed for certain research and commercial applications and can be downloaded from sources such as Google's Vertex AI model library, the data science platform Kaggle, and Google's AI Studio toolkit.
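For a sense of how lightweight that is in practice, here is a minimal sketch of running Gemma 2 2B locally through the Hugging Face transformers library. The checkpoint name google/gemma-2-2b-it follows Google's naming for prior Gemma releases and is an assumption, as is the bfloat16 setting.

```python
# Minimal sketch: run Gemma 2 2B locally for text generation.
# Assumes `transformers`, `accelerate`, and `torch` are installed and that
# the "google/gemma-2-2b-it" checkpoint (an assumed ID, based on Google's
# naming for prior Gemma releases) has been accepted and downloaded.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # instruction-tuned 2B variant (assumed ID)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # small enough for a laptop GPU or CPU
    device_map="auto",
)

inputs = tokenizer(
    "Explain what an open-weights model is.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```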
ShieldGemma is a collection of “safety classifiers” that attempt to detect toxicity such as hate speech, harassment, and sexually explicit content. Built on top of Gemma 2, ShieldGemma can be used to filter prompts sent to a generative model as well as the content that model generates.
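To illustrate the classifier idea, here is a hedged sketch of querying a ShieldGemma checkpoint about a user prompt. The model ID google/shieldgemma-2b, the policy prompt template, and the Yes/No scoring scheme are assumptions about how a Gemma-2-based safety classifier of this kind might be queried, not a confirmed ShieldGemma API.

```python
# Illustrative sketch: score a user prompt for a policy violation with a
# ShieldGemma checkpoint. The model ID, prompt template, and Yes/No
# scoring below are assumptions, not a documented interface.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/shieldgemma-2b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

user_prompt = "Write an insulting message about my coworker."
classifier_prompt = (
    "You are a policy expert deciding whether a user prompt violates a "
    "safety policy.\n\n"
    f"Prompt: {user_prompt}\n\n"
    "Policy: No harassment or hateful content.\n\n"
    "Does the prompt violate the policy? Answer Yes or No.\nAnswer:"
)

inputs = tokenizer(classifier_prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # next-token logits

# Compare the model's preference for "Yes" vs. "No" as a violation score.
yes_id = tokenizer.encode("Yes", add_special_tokens=False)[0]
no_id = tokenizer.encode("No", add_special_tokens=False)[0]
p_violation = torch.softmax(logits[[yes_id, no_id]], dim=0)[0].item()
print(f"Estimated probability of a policy violation: {p_violation:.2f}")
```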
Finally, Gemma Scope allows developers to “zoom in” on specific points within a Gemma 2 model to better interpret its inner workings. As Google explains in a blog post, Gemma Scope is made up of specialized neural networks that “help parse the dense, complex information processed by Gemma 2 and expand it into a form that is easier to analyze and understand. By studying these expanded views, researchers can gain valuable insights into how Gemma 2 identifies patterns, processes information, and ultimately makes predictions.”
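The “specialized neural networks” Google describes work like sparse autoencoders: they map a dense activation vector from inside the model into a much wider, mostly-zero feature space and back. The toy sketch below illustrates that expand-and-reconstruct pattern; the dimensions and architecture are illustrative, not Gemma Scope's actual configuration.

```python
# Toy sketch of the sparse-autoencoder technique behind tools like Gemma
# Scope: "expand" a model's dense activations into a wider, mostly zero
# feature space that is easier to inspect. Sizes here are illustrative.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # expand
        self.decoder = nn.Linear(d_features, d_model)  # reconstruct

    def forward(self, activations: torch.Tensor):
        # ReLU keeps only a few features active per input, which is what
        # makes the expanded view easier to analyze.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

# A hidden-state vector from some layer of the language model (dummy here).
d_model, d_features = 2304, 16384  # illustrative sizes
sae = SparseAutoencoder(d_model, d_features)
hidden_state = torch.randn(1, d_model)

features, reconstruction = sae(hidden_state)
# Researchers study which of the ~16k features fire for a given input.
print("Active features:", int((features > 0).sum()))
```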
The new Gemma 2 models arrive on the heels of the U.S. Department of Commerce endorsing open AI models in a preliminary report. The report said that open models broaden the reach of generative AI to small businesses, researchers, nonprofits, and individual developers, while also highlighting the need for capabilities to monitor such models for potential risks.