Qwen · MoE · Apache 2.0

Qwen 3 30B-A3B (MoE)


Parameters: 30.0B
Active: 3.0B
Max Context: 128K
Architecture: MoE
Released: Apr 29, 2025
Modality: Text

About Qwen 3 30B-A3B (MoE)

Qwen 3 30B-A3B is an efficiency-oriented MoE model with 30B total but only 3B active parameters per token. It delivers quality comparable to dense 8B-14B models while running at the speed of a 3B model. At Q4_K_M it needs ~16 GB VRAM (all experts loaded). The extreme activation sparsity makes it ideal for batched inference, edge server deployments, and scenarios where throughput matters more than peak single-query quality. Apache 2.0 licensed.

High Throughput · Edge Server · Batched Inference · Commercial
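As an illustration of the deployment path described above (a Q4_K_M quant with every expert resident in VRAM), here is a minimal sketch using llama-cpp-python. The tool choice, the GGUF file path, and the parameter values are assumptions for illustration, not something provided by this card.

```python
# Minimal sketch (assumed setup, not an official example): serving a Q4_K_M GGUF
# of Qwen 3 30B-A3B with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen3-30b-a3b-q4_k_m.gguf",  # placeholder path to a Q4_K_M GGUF (~15-16 GB of weights)
    n_ctx=8192,         # context window; the KV cache grows with this (see System Requirements below)
    n_gpu_layers=-1,    # offload every layer: all 128 experts sit in VRAM even though only 8 fire per token
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Even at 3B active parameters per token, the full 30B of weights must be loaded, which is why the Q4_K_M footprint sits near 16 GB rather than near a dense 3B model's.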

Technical Specifications

Total Parameters: 30.0B
Active Parameters: 3.0B per token
Architecture: Mixture of Experts
Total Experts: 128
Active Experts: 8 per token
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: 4,096
Transformer Layers: 48
Attention Heads: 32
KV Heads: 8
Head Dimension: 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
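The GQA figures above (48 layers, 8 KV heads, 128-dim heads) drive the context-dependent part of memory use: only the 8 KV heads are cached, not all 32 query heads. The sketch below applies the standard FP16 KV-cache estimate to those numbers; it is a generic formula, not the site's exact calculator, so its outputs will differ slightly from the table in the next section.

```python
# Back-of-the-envelope KV-cache size from the specs above.
# Generic GQA formula; assumes the cache is kept at FP16 (2 bytes per element).
def kv_cache_bytes(ctx_len, n_layers=48, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    # 2x for the separate K and V tensors
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * ctx_len

for ctx in (1_024, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> {kv_cache_bytes(ctx) / 1e9:5.1f} GB KV cache")
# Roughly 0.2 GB at 1K context and ~26 GB at the full 128K, on top of the weights.
```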

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization                         1K ctx                         128K ctx
Q4_K_M (0.50 B/W, ~97% of FP16)      15.69 GB (consumer GPU)        39.51 GB (datacenter GPU)
Q8_0   (1.00 B/W, ~100% of FP16)     31.20 GB (datacenter GPU)      55.01 GB (datacenter GPU)
F16    (2.00 B/W, reference)         62.21 GB (datacenter GPU)      86.03 GB (cluster / multi-GPU)

Legend: consumer GPU = fits a 24 GB consumer GPU; datacenter GPU = fits an 80 GB datacenter GPU; cluster / multi-GPU = requires a cluster or multiple GPUs.
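For a rough sense of where these figures come from, the sketch below combines the weight footprint at each quant's bytes-per-weight with the FP16 KV cache from the previous sketch and the stated 10% overhead. The calculator's exact accounting is not documented here, so treat the outputs as approximations of the table rather than a reproduction of it.

```python
# Rough VRAM estimate: weights at the quant's bytes-per-weight + FP16 KV cache,
# plus an assumed flat 10% overhead. Approximation only; not the site's calculator.
KV_BYTES_PER_TOKEN = 2 * 48 * 8 * 128 * 2   # K+V, 48 layers, 8 KV heads, 128-dim heads, FP16

def estimate_vram_gb(bytes_per_weight, ctx_len, total_params=30.0e9, overhead=0.10):
    weights = total_params * bytes_per_weight   # every expert stays resident, regardless of the 3B active count
    return (weights + KV_BYTES_PER_TOKEN * ctx_len) * (1 + overhead) / 1e9

for name, bpw in (("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)):
    print(f"{name:6}  1K ctx: {estimate_vram_gb(bpw, 1_024):5.1f} GB   "
          f"128K ctx: {estimate_vram_gb(bpw, 131_072):5.1f} GB")
```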

Find the right GPU for Qwen 3 30B-A3B (MoE)

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.