
Gemma 4 E2B

Gemma 4 E2B is a dense transformer language model from the Gemma family, containing 5.1B parameters across 35 layers. It supports up to 131,072 (128K) tokens of context with a hidden dimension of 2560 and 4 KV heads for efficient grouped-query attention (GQA).

Parameters: 5.1B
Max Context: 128K
Architecture: Dense
Released:
Modality: Text + Audio + Vision

About Gemma 4 E2B

Gemma 4 E2B is a dense transformer with 5.1B total parameters across 35 layers and a hidden dimension of 2560; Per-Layer Embeddings (PLE) reduce the effective active parameter count to 2.3B. It combines hybrid local+global attention with 4 KV heads for efficient grouped-query attention (GQA), accepts audio and image input alongside text, and supports a 128K-token context window.

General Purpose · Chat

Technical Specifications

Total Parameters: 5.1B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 2,560
Transformer Layers: 35
Attention Heads: 16
KV Heads: n_kv = 4
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
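The spec table's ratio of 16 attention heads to 4 KV heads means every group of 4 query heads shares one key/value head, which is what shrinks the KV cache under GQA. A minimal NumPy sketch of that head-sharing pattern (dimensions taken from the table above; the random tensors and single-sequence layout are illustrative assumptions, not the model's actual implementation):

```python
import numpy as np

n_heads, n_kv, d_head, seq = 16, 4, 128, 8
group = n_heads // n_kv  # 4 query heads share each KV head

rng = np.random.default_rng(0)
q = rng.standard_normal((n_heads, seq, d_head))
k = rng.standard_normal((n_kv, seq, d_head))   # only 4 K heads, not 16
v = rng.standard_normal((n_kv, seq, d_head))   # only 4 V heads, not 16

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

out = np.empty_like(q)
for h in range(n_heads):
    kv = h // group  # map query head h to its shared KV head
    scores = q[h] @ k[kv].T / np.sqrt(d_head)
    out[h] = softmax(scores) @ v[kv]
```

Because K and V are stored for only 4 heads instead of 16, the per-token KV cache is a quarter the size of full multi-head attention at the same head dimension.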

System Requirements

Estimated VRAM (GB) with 10% overhead, for different quantization methods and context sizes.

Quantization   Bytes/weight                1K ctx     128K ctx
Q4_K_M         0.50 B/W (~97% of FP16)     2.70 GB    11.39 GB
Q8_0           1.00 B/W (~100% of FP16)    5.34 GB    14.02 GB
F16            2.00 B/W (reference)        10.61 GB   19.29 GB

All of the estimates above fit a 24 GB consumer GPU; larger footprints would need an 80 GB datacenter GPU or a multi-GPU cluster.
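The table's figures can be roughly reconstructed from two terms: the quantized weights (parameters × bytes per weight) and the FP16 KV cache (2 tensors per layer × KV heads × head dimension × context length), scaled by the 10% overhead. A back-of-the-envelope sketch using the spec-table dimensions (the function name and the exact accounting are my assumptions; it ignores activation buffers, so it lands near but not exactly on the table's numbers):

```python
def estimate_vram_gib(params, bytes_per_weight, ctx_len,
                      n_layers=35, n_kv_heads=4, head_dim=128,
                      kv_bytes=2.0, overhead=0.10):
    """Rough VRAM estimate in GiB: quantized weights + FP16 KV cache."""
    weights = params * bytes_per_weight
    # K and V caches: 2 tensors per layer, sized by KV heads (GQA), not query heads
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * ctx_len * kv_bytes
    return (weights + kv_cache) * (1 + overhead) / 2**30

# F16 weights at 1K context -- roughly 10.5 GiB, near the table's 10.61 GB
print(round(estimate_vram_gib(5.1e9, 2.0, 1024), 2))
```

Note how little the 1K-context KV cache adds (~70 MiB) versus the jump at 128K context, where the cache grows to several GiB even with only 4 KV heads.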


Find the right GPU for Gemma 4 E2B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.