Qwen · Dense · Apache 2.0

Qwen 2.5 72B

Parameters: 72.7B

Max Context: 128K

Architecture: Dense

Released: September 2024

Modality: Text

About Qwen 2.5 72B

Qwen 2.5 72B is a dense transformer language model from the Qwen family, containing 72.7B parameters across 80 layers. It supports up to 131,072 tokens (128K) of context, with a hidden dimension of 8,192 and 8 KV heads for efficient grouped-query attention (GQA).
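
The 8 KV heads are what keep long-context memory in check: with grouped-query attention, only the KV heads are cached, not all 64 query heads. A minimal back-of-envelope sketch (assuming an fp16 cache; constants taken from the specification table below):

```python
# Back-of-envelope KV-cache footprint for Qwen 2.5 72B's GQA layout
# (a sketch assuming an fp16 cache; constants from the spec table below).

N_LAYERS = 80
N_KV_HEADS = 8    # only these are cached; the 64 query heads are not
HEAD_DIM = 128
BYTES_FP16 = 2

# One K and one V vector per KV head, per layer, per token
kv_bytes_per_token = 2 * N_KV_HEADS * HEAD_DIM * BYTES_FP16 * N_LAYERS
print(kv_bytes_per_token // 1024, "KiB per token")          # 320 KiB

full_ctx = 131_072
print(f"{kv_bytes_per_token * full_ctx / 2**30:.0f} GiB")   # 40 GiB at 128K tokens
```

With all 64 heads cached, as in standard multi-head attention, the same 128K context would need roughly 8x that, around 320 GiB.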

Research · Enterprise

Technical Specifications

Total Parameters:     72.7B
Architecture:         Dense
Attention Type:       GQA (Grouped-Query Attention)
Hidden Dimension:     d = 8,192
Transformer Layers:   80
Attention Heads:      64
KV Heads:             n_kv = 8
Head Dimension:       d_head = 128
Activation Function:  SwiGLU
Normalization:        RMSNorm
Position Embedding:   RoPE
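
For reference, here is how those numbers map onto a model config. Field names follow the Hugging Face Qwen2Config convention and values come from the table above; this is an illustrative sketch, not a verbatim copy of the shipped config.json:

```python
# Illustrative Qwen2-style model config. Values come from the spec table
# above; fields the table does not list (e.g. rms_norm_eps, vocab_size)
# are omitted rather than guessed.
config = {
    "architectures": ["Qwen2ForCausalLM"],
    "hidden_size": 8192,                 # d
    "num_hidden_layers": 80,
    "num_attention_heads": 64,           # head_dim = 8192 / 64 = 128
    "num_key_value_heads": 8,            # GQA: 8 query heads share each KV head
    "hidden_act": "silu",                # the gate nonlinearity inside SwiGLU
    "max_position_embeddings": 131072,   # 128K context, encoded with RoPE
}
```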

System Requirements

Estimated VRAM, with 10% overhead, for different quantization methods and context sizes. B/W is bytes per weight; percentages are quality relative to FP16.

Quantization                        1K ctx                           128K ctx
Q4_K_M (0.50 B/W, ~97% of FP16)     37.89 GB   Datacenter GPU        77.58 GB   Datacenter GPU
Q8_0 (1.00 B/W, ~100% of FP16)      75.47 GB   Datacenter GPU        115.2 GB   Cluster / Multi-GPU
F16 (2.00 B/W, reference)           150.6 GB   Cluster / Multi-GPU   190.3 GB   Cluster / Multi-GPU

Legend:
Consumer GPU         fits a 24 GB consumer GPU
Datacenter GPU       fits an 80 GB datacenter GPU
Cluster / Multi-GPU  requires multiple GPUs or a cluster
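
The table's figures follow, to within a GB or two, from the 72.7B weight count plus the KV-cache arithmetic sketched earlier. Exactly where the 10% overhead is applied is an assumption here; charging it to the weights alone comes closest to the listed cells:

```python
# A minimal sketch of the estimate behind the table: quantized weights plus an
# fp16 KV cache. Where the 10% overhead lands is an assumption (charged to the
# weights here); results come within ~1-2 GB of the cells above.

PARAMS = 72.7e9
KV_BYTES_PER_TOKEN = 2 * 8 * 128 * 2 * 80  # K+V, 8 KV heads, d_head 128, fp16, 80 layers

def vram_gib(bytes_per_weight: float, ctx_tokens: int, overhead: float = 0.10) -> float:
    weights = PARAMS * bytes_per_weight * (1 + overhead)
    kv_cache = KV_BYTES_PER_TOKEN * ctx_tokens
    return (weights + kv_cache) / 2**30

for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    print(f"{name:7s} {vram_gib(bpw, 1_024):6.1f} GiB @ 1K   "
          f"{vram_gib(bpw, 131_072):6.1f} GiB @ 128K")
```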

Find the right GPU for Qwen 2.5 72B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.
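
For quick offline what-ifs, the vram_gib sketch above approximates the same calculation; the 6-bit quant and 5% overhead below are hypothetical settings, not options listed on this page:

```python
# What-if via vram_gib() from the sketch above: a hypothetical 6-bit quant
# (0.75 bytes/weight) at 32K context, with a tighter 5% overhead assumption.
print(f"{vram_gib(0.75, 32_768, overhead=0.05):.1f} GiB")  # ~63 GiB: fits one 80 GB GPU
```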