Gemma 4 E4B
| Parameters | Max Context | Architecture | Released | Modality |
|---|---|---|---|---|
| 8.0B | 128K | Dense | — | Text + Audio + Vision |
About Gemma 4 E4B
Gemma 4 E4B is a dense transformer language model from the Gemma family, containing 8B parameters across 42 layers. It supports up to 131K tokens of context with a hidden dimension of 3072 and 6 KV heads for efficient grouped-query attention (GQA). Per-Layer Embeddings (PLE) reduce the effective active parameter count to roughly 4.5B. The model interleaves local and global attention layers in a hybrid pattern, and accepts audio and image inputs in addition to text.
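To see why the 6 KV heads matter for memory, here is a minimal sketch of the KV-cache savings GQA gives over full multi-head attention. The head dimension of 128 (3072 hidden / 24 query heads) and the fp16 cache dtype are assumptions for illustration, not confirmed specs.

```python
# Hypothetical sketch of grouped-query attention (GQA) KV-cache savings.
# Assumed: head_dim = 128 and 24 query heads (3072 hidden / 128); the spec
# only states 42 layers and 6 KV heads.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """Bytes for the cached key + value tensors (fp16 by default)."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

gqa = kv_cache_bytes(42, 6, 128, 131072)   # 6 KV heads, per the spec
mha = kv_cache_bytes(42, 24, 128, 131072)  # full multi-head baseline (assumed 24 heads)
print(f"GQA cache: {gqa / 1e9:.2f} GB, MHA cache: {mha / 1e9:.2f} GB, "
      f"saving {mha // gqa}x")
# → GQA cache: 16.91 GB, MHA cache: 67.65 GB, saving 4x
```

Under these assumptions, sharing each KV head across four query heads cuts the long-context cache by 4x, which is where most of the 128K-context memory goes.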
Technical Specifications
System Requirements
Estimated VRAM requirements in GB, assuming 10% memory overhead, for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality | 1K ctx | 128K ctx |
|---|---|---|---|---|
| Q4_K_M | 0.50 B/W | ~97% of FP16 | 4.26 GB (Consumer GPU) | 19.89 GB (Consumer GPU) |
| Q8_0 | 1.00 B/W | ~100% of FP16 | 8.39 GB (Consumer GPU) | 24.02 GB (Datacenter GPU) |
| F16 | 2.00 B/W | Reference | 16.66 GB (Consumer GPU) | 32.29 GB (Datacenter GPU) |
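The table's structure (weights at a given bytes-per-weight, plus a context-dependent KV cache, plus 10% overhead) can be sketched as a rough estimator. The head dimension and fp16 KV cache are assumptions, and the site's calculator evidently uses somewhat different accounting, so these figures approximate but will not match the table exactly.

```python
# Rough VRAM estimator for Gemma 4 E4B. Assumed: head_dim = 128 and an fp16
# KV cache; the calculator behind the table above may differ, so treat these
# numbers as ballpark, not table-exact.

def estimate_vram_gb(bytes_per_weight, ctx_len,
                     n_params=8.0e9, n_layers=42, n_kv_heads=6,
                     head_dim=128,     # assumed: 3072 hidden / 24 query heads
                     kv_bytes=2,       # fp16 KV cache (assumed)
                     overhead=0.10):   # 10% overhead, per the table note
    weights = n_params * bytes_per_weight
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * ctx_len * kv_bytes
    return (weights + kv_cache) * (1 + overhead) / 1e9

for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    print(f"{name}: {estimate_vram_gb(bpw, 1024):.2f} GB @ 1K, "
          f"{estimate_vram_gb(bpw, 131072):.2f} GB @ 128K")
```

The weight term dominates at short context, while at 128K the KV cache adds roughly 17 GB under these assumptions, which is why the 128K column crosses into datacenter-GPU territory for the larger quantizations.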
Find the right GPU for Gemma 4 E4B
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.