Google has released three new “open” generative AI models, along with the audacious claim that they are “safer,” “smaller,” and “more transparent” than most others.
They are new additions to Gemma 2, Google’s family of generative models that debuted back in May. While Gemma 2 2B, ShieldGemma, and Gemma Scope target somewhat different applications and use cases, they all share a safety bent.
Unlike the Gemini models, which power Google’s own products and are offered to developers without their source code being released, the Gemma series is distributed openly. As Meta is attempting with Llama, Google is using Gemma to foster goodwill within the developer community.
Gemma 2 2B is a lightweight model for generating and analyzing text that runs on a variety of hardware, including laptops and edge devices. It is licensed for use in research and commercial applications and is available for download from Google’s Vertex AI model library, the data science platform Kaggle, and Google’s AI Studio toolkit.
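For developers who prefer working outside Google’s own tooling, the weights can also be loaded through standard open-source libraries. Below is a minimal sketch using the Hugging Face transformers library, assuming the instruction-tuned checkpoint is published under the ID google/gemma-2-2b-it (access to Gemma weights typically requires accepting Google’s license terms first):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face ID for the instruction-tuned 2B checkpoint.
model_id = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The 2B model is small enough to generate text on a laptop-class machine.
prompt = "Explain why small language models are useful on edge devices."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```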
ShieldGemma, meanwhile, is a collection of “safety classifiers” that attempt to detect harmful content such as hate speech, harassment, and sexually explicit material. Built on top of Gemma 2, ShieldGemma can filter both the prompts sent to a generative model and the content the model produces.
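In practice, classifiers like this are themselves small language models: they are handed a policy and a piece of text and asked whether the text violates the policy. The sketch below illustrates that pattern, assuming a checkpoint published as google/shieldgemma-2b; the prompt template is a simplified stand-in for whatever format the released models actually expect, and the Yes/No scoring is the conventional way such verdicts are read off:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/shieldgemma-2b"  # assumed Hugging Face ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

policy = "No content that harasses or demeans individuals."
user_prompt = "Write an insult about my coworker."

# Simplified stand-in for the model's expected prompt format: the
# classifier is asked directly whether the text violates the policy.
text = (
    f"Safety policy: {policy}\n"
    f"User prompt: {user_prompt}\n"
    "Does the user prompt violate the safety policy? Answer Yes or No: "
)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]

# Read the verdict off the relative probability of "Yes" vs. "No".
vocab = tokenizer.get_vocab()
yes_no = next_token_logits[[vocab["Yes"], vocab["No"]]]
p_violation = torch.softmax(yes_no, dim=0)[0].item()
print(f"Estimated probability of a policy violation: {p_violation:.2f}")
```

Because the verdict comes from a single next-token distribution, a score like this can be thresholded cheaply on both sides of a generative model, once on the incoming prompt and once on the outgoing response.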
The last of the three, Gemma Scope, lets developers “zoom in” on specific points within a Gemma 2 model, making its inner workings more interpretable. As Google put it in a blog post: “The specialized neural networks that comprise Gemma Scope enable us to better comprehend and analyze the dense, complicated data that Gemma 2 processes. By examining these expanded views, researchers can gain insight into how Gemma 2 recognizes patterns, processes information, and makes predictions.”
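The “specialized neural networks” in question are sparse autoencoders: small networks that expand a dense activation vector into a much wider, mostly-zero feature vector whose individual entries are easier to study. The sketch below shows only that core mechanism; the weights are random placeholders for Gemma Scope’s trained networks, and the widths and threshold are illustrative assumptions:

```python
import torch

# Illustrative dimensions: a dense hidden vector is expanded into a
# much wider, mostly-zero feature vector. Both widths are assumptions
# for this sketch, as is the threshold; real Gemma Scope networks ship
# with trained weights, while these are random placeholders.
d_model, d_features = 2304, 16384
W_enc = torch.randn(d_model, d_features) * 0.02
b_enc = torch.zeros(d_features)
W_dec = torch.randn(d_features, d_model) * 0.02
threshold = 2.0

def expand(activation: torch.Tensor) -> torch.Tensor:
    """Encode a dense activation into a sparse, wider feature vector."""
    pre = activation @ W_enc + b_enc
    # Features below the threshold are zeroed out, so only a small
    # set of directions stays active for any given input.
    return torch.where(pre > threshold, pre, torch.zeros_like(pre))

activation = torch.randn(d_model)   # stand-in for a Gemma 2 activation
features = expand(activation)
reconstruction = features @ W_dec   # maps back toward the original vector
print(f"{(features != 0).sum().item()} of {d_features} features active")
```

Because only a handful of features fire for any given input, researchers can attach meaning to individual features instead of wrestling with the full dense vector.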
The new Gemma 2 models arrive shortly after the U.S. Commerce Department endorsed open AI models in a preliminary report. Open models, the report says, broaden access to generative AI for smaller companies, researchers, nonprofits, and individual developers; at the same time, it underscores the need for capabilities to monitor such models for potential hazards.