Google releases the multimodal model Gemma 4, supporting more than 140 languages

GateNews

Gate News reported on April 3 that Google has released the multimodal model Gemma 4. Gemma 4 processes text and image inputs (the smaller models also accept audio input) and generates text outputs. The release includes open-weight models in both pretrained and instruction-tuned variants. Gemma 4’s context window holds up to 256,000 tokens, and the models support more than 140 languages. The family uses both a dense architecture and a mixture-of-experts (MoE) architecture and is suited to tasks such as text generation, coding, and reasoning. The models come in four sizes: E2B, E4B, 26B A4B, and 31B, and can be deployed in environments ranging from phones to laptops and servers.
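As an illustration of the text-plus-image input and text output described above, the sketch below shows how such a model might be called through the Hugging Face transformers image-text-to-text pipeline. The checkpoint ID and image URL are hypothetical placeholders, not confirmed by the announcement.

```python
from transformers import pipeline

# Hypothetical checkpoint ID used purely for illustration; the actual
# repository names for these models are not given in the announcement.
pipe = pipeline("image-text-to-text", model="google/gemma-4-e4b-it")

# Chat-style multimodal input: one image plus a text instruction.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder URL
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

# The model consumes the image and text and generates a text response.
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```

The same chat-message format would apply to the larger server-class variants; only the checkpoint ID and the hardware it is loaded on would change.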
