Qwen 3 30B-A3B (MoE)
| Spec | Value |
|---|---|
| Parameters | 30.0B |
| Active | 3.0B |
| Max Context | 128K |
| Architecture | MoE |
| Released | Apr 29, 2025 |
| Modality | Text |
About Qwen 3 30B-A3B (MoE)
Qwen 3 30B-A3B is an efficiency-oriented Mixture-of-Experts (MoE) model with 30B total parameters but only 3B active per token. It delivers quality comparable to dense 8B-14B models while running at roughly the speed of a 3B model. At Q4_K_M it needs ~16 GB of VRAM, because all experts must be resident in memory even though only a few are active per token. The extreme activation sparsity makes it ideal for batched inference, edge-server deployments, and scenarios where throughput matters more than peak single-query quality. Apache 2.0 licensed.
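As a rough sanity check on the ~16 GB figure: weight memory scales with the total parameter count, not the active count, since every expert must be loaded. A minimal back-of-envelope sketch, assuming ~0.50 bytes/weight for Q4_K_M and a 10% runtime overhead (both taken from the specs below, not exact quantizer internals):

```python
# Back-of-envelope weight memory for Qwen 3 30B-A3B at Q4_K_M.
total_params = 30.0e9       # all experts count toward VRAM, not just the 3B active
bytes_per_weight = 0.50     # ~Q4_K_M average (assumption)
overhead = 0.10             # assumed 10% runtime overhead

weights_gb = total_params * bytes_per_weight / 1e9
print(f"~{weights_gb * (1 + overhead):.1f} GB")  # ≈ 16.5 GB, in line with the ~16 GB figure
```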
Technical Specifications
System Requirements
Estimated VRAM (in GB, including 10% overhead) for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality vs FP16 | 1K ctx | 128K ctx |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% | 15.69 GB (Consumer GPU) | 39.51 GB (Datacenter GPU) |
| Q8_0 | 1.00 | ~100% | 31.20 GB (Datacenter GPU) | 55.01 GB (Datacenter GPU) |
| F16 | 2.00 | Reference | 62.21 GB (Datacenter GPU) | 86.03 GB (Cluster / Multi-GPU) |
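The context-dependent figures combine quantized weight memory with a KV cache that grows linearly with context length. Below is a hedged sketch of such an estimator; the layer count, KV-head count, head dimension, and FP16 cache assumption are illustrative values, not the calculator's exact internals:

```python
def estimate_vram_gb(
    total_params_b: float,    # total parameters in billions (all experts resident)
    bytes_per_weight: float,  # 0.50 (Q4_K_M), 1.00 (Q8_0), 2.00 (F16)
    context_tokens: int,
    n_layers: int = 48,       # assumed architecture values, for illustration only
    n_kv_heads: int = 4,
    head_dim: int = 128,
    kv_bytes: int = 2,        # FP16 key/value cache (assumption)
    overhead: float = 0.10,   # 10% runtime overhead
) -> float:
    """Rough VRAM estimate: quantized weights + KV cache + overhead."""
    weights = total_params_b * 1e9 * bytes_per_weight
    # KV cache: two tensors (K and V) per layer, per token
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * kv_bytes * context_tokens
    return (weights + kv_cache) * (1 + overhead) / 1e9

print(f"Q4_K_M @ 1K ctx:   {estimate_vram_gb(30.0, 0.50, 1024):.2f} GB")
print(f"Q4_K_M @ 128K ctx: {estimate_vram_gb(30.0, 0.50, 131072):.2f} GB")
```

With these assumed constants the sketch lands in the right ballpark but does not exactly reproduce the table above, which may use different KV-cache or overhead settings; the interactive calculator below exposes those knobs directly.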
Find the right GPU for Qwen 3 30B-A3B (MoE)
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.