Qwen · Dense · Apache 2.0

Qwen 2.5 Coder 32B

Qwen 2.5 Coder 32B is the top local coding model as of early 2025. Fine-tuned on an enormous code corpus, it matches GPT-4-level performance on many coding benchmarks while running entirely on a 24 GB consumer GPU. Supports 92 programming languages.

Parameters: 32.5B
Max Context: 32K
Architecture: Dense
Released: Nov 12, 2024
Modality: Text

About Qwen 2.5 Coder 32B

Qwen 2.5 Coder 32B is the top local coding model as of early 2025. Fine-tuned on an enormous code corpus, it matches GPT-4-level performance on many coding benchmarks while running entirely on a 24 GB consumer GPU. Supports 92 programming languages with particularly strong Python, JavaScript, TypeScript, Java, C++, and Go. The 32K context handles most real-world codebases. Apache 2.0 licensed. For professional developers running local AI coding assistants, this is the gold standard.

Professional Coding · Code Review · Code Generation · Commercial

Technical Specifications

Total Parameters: 32.5B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 5,120
Transformer Layers: 64
Attention Heads: 40
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
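The attention geometry above can be sanity-checked with a few lines of arithmetic: the hidden dimension should equal heads × head dimension, and the GQA layout (8 KV heads instead of 40) determines the per-token KV-cache cost. The variable names below are illustrative; the KV-cache formula is the standard one for GQA models, not something stated on this page.

```python
# Sanity-check the attention geometry from the Technical Specifications.
d_model = 5120        # hidden dimension
n_layers = 64         # transformer layers
n_heads = 40          # query heads
n_kv_heads = 8        # KV heads (GQA: 5 query heads share each KV head)
d_head = 128          # per-head dimension
bytes_fp16 = 2        # FP16 KV cache

assert d_model == n_heads * d_head  # 5,120 = 40 * 128

# Per token, each layer caches one K and one V vector,
# but only for the 8 KV heads, not all 40 query heads.
kv_bytes_per_token = 2 * n_layers * n_kv_heads * d_head * bytes_fp16
print(kv_bytes_per_token // 1024, "KiB per token")  # 256 KiB

# At the full 32K context:
kv_gib_32k = kv_bytes_per_token * 32768 / 2**30
print(f"{kv_gib_32k:.1f} GiB of KV cache at 32K context")  # 8.0 GiB
```

This is why the 32K-context column in the table below sits roughly 8 GB above the 1K column for every quantization level: the KV cache grows with context, independent of how the weights are quantized.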

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization                        1K ctx                      32K ctx
Q4_K_M (0.50 B/W, ~97% of FP16)     17.05 GB (consumer GPU)     24.80 GB (datacenter GPU)
Q8_0 (1.00 B/W, ~100% of FP16)      33.85 GB (datacenter GPU)   41.60 GB (datacenter GPU)
F16 (2.00 B/W, reference)           67.44 GB (datacenter GPU)   75.19 GB (datacenter GPU)

Consumer GPU: fits a 24 GB card. Datacenter GPU: fits an 80 GB card. Larger totals require a cluster or multi-GPU setup.


Find the right GPU for Qwen 2.5 Coder 32B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.