Researchers and developers can now access Gemma 2

Gemma 2 is a powerful tool designed for developers and researchers, offering open accessibility and broad framework compatibility. Available under the commercially friendly Gemma license, Gemma 2 allows users to share and commercialize their innovations easily. It works with leading AI frameworks such as Hugging Face Transformers, as well as JAX, PyTorch, and TensorFlow via native Keras 3.0, plus vLLM, gemma.cpp, Llama.cpp, and Ollama. Additionally, Gemma is optimized with NVIDIA TensorRT-LLM to run on NVIDIA-accelerated infrastructure or as an NVIDIA NIM inference microservice. Users can fine-tune with Keras and Hugging Face starting today, with additional parameter-efficient fine-tuning options on the way.
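
As a concrete illustration of that framework compatibility, here is a minimal sketch of loading and running a Gemma 2 checkpoint with Hugging Face Transformers. The model id ("google/gemma-2-9b-it") and the generation settings are illustrative assumptions, not details stated in the article.

```python
# Minimal sketch: running a Gemma 2 checkpoint via Hugging Face Transformers.
# Assumes the instruction-tuned 9B variant and a GPU with bfloat16 support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"  # assumed model id; substitute the variant you have access to
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)

# Tokenize a prompt, generate a short completion, and decode it back to text.
inputs = tokenizer("Explain retrieval-augmented generation in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```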

Deployment of Gemma 2 will become easier for Google Cloud customers starting next month on Vertex AI. The Gemma Cookbook provides practical examples and recipes to guide users in building applications and fine-tuning Gemma 2 models for specific tasks, including retrieval-augmented generation. Responsible AI development is a priority for Gemma, with resources available to help developers and researchers build and deploy AI responsibly, such as the Responsible Generative AI Toolkit. The recently open-sourced LLM Comparator enables in-depth evaluation of language models, with a companion Python library for running comparative evaluations and visualizing the results. Open access to the SynthID text watermarking technology for Gemma models is also in the works.
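
Fine-tuning recipes of the kind the Cookbook describes typically use parameter-efficient methods; the sketch below shows one plausible setup with Hugging Face PEFT and LoRA. The model id, target modules, and LoRA hyperparameters are illustrative assumptions rather than values taken from the article or the Cookbook.

```python
# Minimal sketch: parameter-efficient fine-tuning (LoRA) of a Gemma 2 checkpoint with Hugging Face PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "google/gemma-2-2b"  # assumed small pretrained variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# LoRA configuration: low-rank adapters on the attention projections (assumed module names).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable

# The wrapped model can now be passed to a standard Trainer loop on task-specific data.
```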

During the training of Gemma 2, robust internal safety processes were followed: pre-training data was filtered, and rigorous testing and evaluation were conducted to identify and mitigate potential biases and risks. Results are published on a variety of public benchmarks related to safety and performance. Gemma 2 is a valuable resource for those looking to develop and deploy AI responsibly and efficiently.

Article Source
https://blog.google/technology/developers/google-gemma-2/