Gemma 4 31B
- **Parameters:** 30.7B
- **Max Context:** 256K
- **Architecture:** Dense
- **Released:** —
- **Modality:** Text
About Gemma 4 31B
Gemma 4 31B is a dense transformer language model from the Gemma family, containing 30.7B parameters across 60 layers. It supports up to 262,144 tokens (256K) of context with a hidden dimension of 5632 and 8 KV heads for efficient grouped-query attention (GQA). The architecture combines hybrid local+global attention with dual RoPE position encodings and uses TurboQuant 3-bit KV-cache quantization; it is ranked #3 among open models on Arena.
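As a back-of-envelope check on those figures, the sketch below estimates the per-token KV-cache footprint implied by the published layer and KV-head counts. The head dimension of 128 and the FP16 cache precision are assumptions, not values stated above; TurboQuant's 3-bit cache would shrink the result by roughly 5x.

```python
# Minimal sketch: per-token KV-cache size for Gemma 4 31B under GQA.
# Assumptions (not given in the spec): head_dim = 128, FP16 (2-byte) cache.

N_LAYERS = 60       # from the spec
N_KV_HEADS = 8      # from the spec
HEAD_DIM = 128      # ASSUMPTION: typical head size, not stated above
BYTES_PER_ELEM = 2  # ASSUMPTION: FP16 cache; TurboQuant 3-bit would be ~0.375

def kv_bytes_per_token() -> int:
    # 2x accounts for the separate K and V tensors cached per layer.
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_ELEM

if __name__ == "__main__":
    per_tok = kv_bytes_per_token()            # 245,760 B = 240 KiB per token
    full_ctx = per_tok * 262_144 / 2**30      # cache at the full 256K context
    print(f"{per_tok / 2**10:.0f} KiB/token, {full_ctx:.1f} GiB at 256K ctx")
```

Under these assumptions the cache alone reaches about 60 GiB at the full context, which is why the KV-cache quantization matters as much as the weight quantization at long context lengths.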
Technical Specifications
System Requirements
Estimated VRAM in GB, including 10% overhead, for each quantization method and context length. Each cell also notes the hardware class that footprint implies.
| Quantization | Bytes/weight | Quality | 1K ctx | 195K ctx | 256K ctx |
|---|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 16.10 (Consumer GPU) | 61.64 (Datacenter GPU) | 75.87 (Datacenter GPU) |
| Q8_0 | 1.00 | ~100% of FP16 | 31.97 (Datacenter GPU) | 77.51 (Datacenter GPU) | 91.74 (Cluster / Multi-GPU) |
| F16 | 2.00 | Reference | 63.71 (Datacenter GPU) | 109.2 (Cluster / Multi-GPU) | 123.5 (Cluster / Multi-GPU) |
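To reproduce rough numbers like these, the usual estimate is weight bytes plus KV-cache bytes, scaled by the overhead factor. The sketch below applies that formula, carrying over the head-dimension and FP16-cache assumptions from above; it is a simplified model and will not match the table's figures exactly.

```python
# Minimal sketch of the standard VRAM estimate:
# (weights + KV cache) * (1 + overhead). Not the calculator's exact formula.

PARAMS = 30.7e9             # from the spec
N_LAYERS, N_KV_HEADS = 60, 8
HEAD_DIM = 128              # ASSUMPTION, as in the earlier sketch
KV_BYTES = 2                # ASSUMPTION: FP16 KV cache
OVERHEAD = 0.10             # the 10% overhead used by the table

def vram_gib(bytes_per_weight: float, ctx_tokens: int) -> float:
    weights = PARAMS * bytes_per_weight
    kv = ctx_tokens * 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * KV_BYTES
    return (weights + kv) * (1 + OVERHEAD) / 2**30

if __name__ == "__main__":
    contexts = {"1K": 1_024, "195K": 195 * 1_024, "256K": 262_144}
    for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
        row = ", ".join(f"{k}: {vram_gib(bpw, v):.2f} GiB"
                        for k, v in contexts.items())
        print(f"{name:7s} {row}")
```

For example, F16 at 1K context comes out near 63 GiB here versus 63.71 in the table; the residual gap reflects calculator details (exact context counts, cache precision, rounding) that the spec does not publish.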
Find the right GPU for Gemma 4 31B
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.