Qwen 3 Coder 480B-A35B (MoE)

Parameters: 480.0B
Active: 35.0B
Max Context: 256K
Architecture: MoE
Modality: Text

About Qwen 3 Coder 480B-A35B (MoE)

Qwen 3 Coder 480B-A35B (MoE) is a mixture-of-experts (MoE) transformer language model from the Qwen family, containing 480B parameters across 96 layers. All 480B parameters must be loaded into VRAM, but only 35B are active for any given token. It supports up to 262,144 (256K) tokens of context, with a hidden dimension of 8,192 and 8 KV heads for efficient grouped-query attention (GQA). The model is released under the Apache 2.0 license, targets agentic coding workloads, extrapolates to context lengths of up to 1M tokens, and requires server-class hardware.
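The 8 KV heads matter because at long context the KV cache, not the weights, dominates incremental memory cost. Here is a minimal back-of-the-envelope sketch in Python (fp16 cache assumed) comparing the per-token KV-cache footprint of GQA with 8 KV heads against a hypothetical full multi-head attention variant in which all 64 query heads keep their own K/V:

```python
# Per-token KV-cache footprint: GQA (8 KV heads, per the spec table) vs. a
# hypothetical full-MHA variant with 64 KV heads. fp16 (2 bytes) assumed.
LAYERS, HEAD_DIM, BYTES_FP16 = 96, 128, 2

def kv_bytes_per_token(kv_heads: int) -> int:
    # K and V tensors -> factor of 2
    return 2 * LAYERS * kv_heads * HEAD_DIM * BYTES_FP16

gqa, mha = kv_bytes_per_token(8), kv_bytes_per_token(64)
print(f"GQA: {gqa // 1024} KiB/token; full MHA: {mha // 1024} KiB/token "
      f"({mha // gqa}x larger)")
# GQA: 384 KiB/token; full MHA: 3072 KiB/token (8x larger)
```

At the full 256K context, that 8x difference is the gap between roughly 96 GB and 768 GB of cache.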


Technical Specifications

Total Parameters: 480.0B
Active Parameters: 35.0B per token
Architecture: Mixture of Experts
Total Experts: 35
Attention Type: GQA
Hidden Dimension: d = 8,192
Transformer Layers: 96
Attention Heads: 64
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
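The gap between 480B total and 35B active parameters comes from top-k expert routing: per token, a learned gate scores the experts and only the few highest-scoring ones run, so most expert weights sit idle for that token. A minimal sketch of the idea, using hypothetical toy sizes (the expert count, top-k value, and shapes below are illustrative, not Qwen's actual configuration):

```python
# Minimal top-k MoE routing sketch (toy sizes, not Qwen's real config).
# A gate scores experts per token; only the top-k experts run, which is
# why active parameters are far fewer than total parameters.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2                      # hypothetical
tokens  = rng.normal(size=(4, d_model))                    # 4 tokens
gate_w  = rng.normal(size=(d_model, n_experts))            # router weights
experts = rng.normal(size=(n_experts, d_model, d_model))   # one toy FFN each

logits = tokens @ gate_w                       # (4, n_experts) routing scores
topk   = np.argsort(logits, axis=-1)[:, -top_k:]

out = np.zeros_like(tokens)
for t in range(tokens.shape[0]):
    sel = topk[t]
    w = np.exp(logits[t, sel]); w /= w.sum()   # softmax over selected experts
    for weight, e in zip(w, sel):
        out[t] += weight * (tokens[t] @ experts[e])
# Only top_k of n_experts weight matrices are touched per token; the rest
# still occupy memory, which is why all 480B must be resident in VRAM.
```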

System Requirements

Estimated VRAM (GB) at 10% overhead for different quantization methods and context sizes.

Quantization           Bytes/Weight   1K ctx       195K ctx     256K ctx
Q4_K_M (~97% of FP16)      0.50       248.5 GB     321.3 GB     344.1 GB
Q8_0 (~100% of FP16)       1.00       496.6 GB     569.5 GB     592.2 GB
F16 (reference)            2.00       992.8 GB     1065.7 GB    1088.4 GB

Every cell falls in the "requires cluster / multi-GPU" tier; no quantization of this model fits a single 24 GB consumer GPU or a single 80 GB datacenter GPU.
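As a sanity check, the table can be approximated from first principles: weight memory is total parameters times bytes per weight (plus overhead), and the KV cache grows linearly with context at 2 tensors × layers × KV heads × head dim × 2 bytes (fp16) per token. A minimal Python sketch under those assumptions (the overhead model and fp16 KV cache are assumptions, not published figures) tracks the table to within roughly 1%:

```python
# Rough VRAM estimate for Qwen 3 Coder 480B-A35B (MoE).
# Assumptions: 10% overhead applied to the weights, fp16 (2-byte) KV cache.
# These mirror the table above; they are not vendor-published numbers.
GIB = 2**30

PARAMS   = 480e9  # total parameters; all must be resident for a MoE
LAYERS   = 96
KV_HEADS = 8
HEAD_DIM = 128

def vram_gb(bytes_per_weight: float, ctx_tokens: int, overhead: float = 0.10) -> float:
    weights = PARAMS * bytes_per_weight * (1 + overhead)
    # K and V caches: 2 tensors x layers x kv_heads x head_dim x 2 bytes per token
    kv = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2 * ctx_tokens
    return (weights + kv) / GIB

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    row = [f"{vram_gb(bpw, k * 1024):8.1f}" for k in (1, 195, 256)]
    print(f"{name:8s}", *row)
```

The same function answers off-table questions; for example, vram_gb(0.50, 32 * 1024) estimates the Q4_K_M footprint at a 32K context.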

Find the right GPU for Qwen 3 Coder 480B-A35B (MoE)

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.