Google Cloud has announced that two versions of its Gemini AI model, Gemini 1.5 Flash and Gemini 1.5 Pro, are now publicly available. Gemini 1.5 Flash is a smaller multimodal model with a 1 million-token context window designed for narrow, high-frequency tasks. Gemini 1.5 Pro, the most powerful version of Google’s LLM, offers a 2 million-token context window and is now available to all developers.
These models are meant to showcase how Google’s AI capabilities enable companies to build effective AI agents and solutions. During a recent press conference, Google Cloud CEO Thomas Kurian pointed to growing adoption of generative AI by organizations such as Accenture and Airbus, attributing it to the combination of Google’s models and the Vertex AI platform.
Gemini 1.5 Flash offers lower latency, lower cost, and improved performance compared with previous models. It is 40% faster than GPT-3.5 Turbo and has a lower entry price than OpenAI’s model. Context caching and provisioned throughput are new features introduced to improve the developer experience.
Gemini 1.5 Pro stands out with its 2 million-token context window, allowing it to process large amounts of data before generating a response. The model can reason over extensive text, audio, video, and code, offering significant value to companies exploring its potential.
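For illustration, here is a minimal sketch of how a developer might send a long, multimodal input to Gemini 1.5 Pro through the Vertex AI Python SDK; the project ID, Cloud Storage path, and exact model identifier below are placeholders, not values from the announcement.

```python
# Minimal sketch (assumptions: the Vertex AI Python SDK is installed via
# `pip install google-cloud-aiplatform`; the project ID, region, model name,
# and Cloud Storage URI are placeholders).
import vertexai
from vertexai.generative_models import GenerativeModel, Part

# Initialize the SDK against your own project and region.
vertexai.init(project="your-project-id", location="us-central1")

# Gemini 1.5 Pro accepts very large multimodal inputs in a single request.
model = GenerativeModel("gemini-1.5-pro")

response = model.generate_content([
    "Summarize the key findings of this report:",
    # A large PDF (or an audio/video file) referenced from Cloud Storage.
    Part.from_uri("gs://your-bucket/long-report.pdf",
                  mime_type="application/pdf"),
])
print(response.text)
```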
Google has also unveiled context caching for Gemini 1.5 Pro and Flash, which significantly reduces processing costs by reusing previously processed input. The provisioned throughput feature lets developers scale their usage of Google’s Gemini models, providing better predictability and reliability for production workloads. These advancements aim to make AI applications more accessible and efficient for developers across industries.
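As a rough sketch of what context caching could look like in practice, the snippet below uses the preview caching interface in the Vertex AI SDK; the module paths, method names, parameters, and model identifier are assumptions based on the preview API and may differ from the released version.

```python
# Rough sketch of context caching (assumption: the preview caching API in the
# Vertex AI Python SDK; exact names and signatures may differ in the shipped
# interface).
import datetime

import vertexai
from vertexai.preview import caching
from vertexai.preview.generative_models import GenerativeModel, Part

vertexai.init(project="your-project-id", location="us-central1")

# Cache a large shared context once (for example, a lengthy contract),
# so repeated queries do not pay to reprocess the same tokens every time.
cached = caching.CachedContent.create(
    model_name="gemini-1.5-pro-001",
    contents=[Part.from_uri("gs://your-bucket/contract.pdf",
                            mime_type="application/pdf")],
    ttl=datetime.timedelta(hours=1),
)

# Subsequent requests reuse the cached context instead of resending it.
model = GenerativeModel.from_cached_content(cached_content=cached)
print(model.generate_content("List the termination clauses.").text)
```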
Overall, Google’s release of Gemini 1.5 Flash and Pro demonstrates continuous innovation in AI technology and its commitment to empowering organizations to leverage the power of generative AI for practical applications.
Article Source
https://venturebeat.com/ai/google-opens-up-gemini-1-5-flash-pro-with-2m-tokens-to-the-public/