Model Release

Google Releases Gemma 4, Its Most Capable Open Model, Under Apache 2.0 License

Google releases the Gemma 4 open model family in four sizes, built on Gemini 3 technology and available under the Apache 2.0 license for full commercial freedom.

Google · Gemma 4 · Open Source · Apache 2.0 · DeepMind

MOUNTAIN VIEW, CA – April 2, 2026 – Google DeepMind today announced the release of Gemma 4, the company's most intelligent open model family to date. Built from the same world-class research and technology as Gemini 3, Gemma 4 is purpose-built for advanced reasoning and agentic workflows, and is released under the commercially permissive Apache 2.0 license.


Four Versatile Sizes for Diverse Hardware


Gemma 4 is released in four sizes designed to run efficiently across a wide range of hardware:


  • Effective 2B (E2B): An ultra-lightweight model that runs on smartphones, Raspberry Pi, and NVIDIA Jetson Orin Nano edge devices. Features native multimodal capabilities including audio input.
  • Effective 4B (E4B): A step up from E2B, delivering higher accuracy while maintaining mobile device compatibility. Features a 128K context window.
  • 26B Mixture of Experts (MoE): An efficient design that activates only 3.8 billion of its 26 billion total parameters during inference, delivering exceptionally fast token generation. Ideal for latency-sensitive use cases.
  • 31B Dense: The highest-quality model optimized for fine-tuning. Currently ranks as the #3 open model in the world on the industry-standard Arena AI text leaderboard.
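To make the MoE figures above concrete, a quick back-of-the-envelope calculation (using only the parameter counts quoted in this announcement) shows what fraction of the model's weights participate in each forward pass:

```python
# Parameter counts quoted above for the Gemma 4 26B MoE model.
total_params = 26e9      # total parameters in the model
active_params = 3.8e9    # parameters activated per inference step

active_fraction = active_params / total_params
print(f"Active per step: {active_fraction:.1%} of total weights")
# -> Active per step: 14.6% of total weights
```

Only about one seventh of the network is exercised per token, which is why the MoE variant generates tokens much faster than a dense model of the same total size.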

Industry-Leading Performance


The Gemma 4 31B model has secured the #3 position among open models on the Arena AI text leaderboard, while the 26B MoE model holds the #6 spot. Remarkably, these models outperform competitors 20 times their size. For developers, this unprecedented level of intelligence-per-parameter means achieving frontier-level capabilities with significantly less hardware overhead.


Key Capabilities


  • Advanced Reasoning: Capable of multi-step planning and deep logic with significant improvements in math and instruction-following benchmarks.
  • Agentic Workflows: Built-in support for function calling, structured JSON output, and system instructions for building autonomous agents.
  • Code Generation: High-quality offline code generation, turning workstations into local-first AI code assistants.
  • Vision and Audio: All models natively process video and images. E2B and E4B models feature native audio input for speech recognition.
  • Longer Context: Edge models feature 128K context windows, while larger models offer up to 256K.
  • 140+ Languages: Natively trained on over 140 languages for inclusive, global applications.
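As a sketch of what the agentic capabilities above typically look like in practice (the model tag, tool schema, and response shape here are illustrative assumptions, not an official Gemma 4 API): a function-calling request is usually a chat payload plus a JSON-schema tool description, and the model replies with structured JSON that the host application parses and acts on:

```python
import json

# Hypothetical tool description in the common JSON-schema style used by
# function-calling chat APIs; "gemma-4-e4b" is an assumed model tag.
tool = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

request = {
    "model": "gemma-4-e4b",  # illustrative identifier, not official
    "messages": [{"role": "user", "content": "Weather in Tokyo?"}],
    "tools": [tool],
}

# A structured reply of the shape such models emit; this fixed string
# stands in for an actual model response.
reply = '{"tool": "get_weather", "arguments": {"city": "Tokyo"}}'
call = json.loads(reply)
assert call["tool"] == "get_weather"
print(call["arguments"]["city"])  # -> Tokyo
```

The key point is that structured JSON output makes the model's reply machine-parseable, so an agent loop can dispatch the named function with the extracted arguments rather than scraping free-form text.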

Shift to Apache 2.0 License


One of the most significant changes with Gemma 4 is the transition to the Apache 2.0 license. This gives developers complete control over their data, infrastructure, and models, with the freedom to build and deploy across any environment. Since the first generation, Gemma has been downloaded over 400 million times, spawning a vibrant 'Gemmaverse' of more than 100,000 variants.


Ecosystem and Partnerships


Gemma 4 is available across a broad ecosystem including Google AI Studio, Hugging Face, Ollama, NVIDIA NIM, LM Studio, Kaggle, and Vertex AI. NVIDIA has announced optimizations for local agentic AI on RTX PCs and DGX Spark, while collaborations with Qualcomm Technologies and MediaTek are advancing mobile deployment.


Gemma 4 sets a new standard for open AI models, democratizing frontier-level AI capabilities for researchers, enterprise developers, and the broader community worldwide.
