Qwen · MoE · Apache 2.0

Qwen 3.5 122B-A10B (MoE)

Qwen 3.5 122B-A10B (MoE) is a mixture-of-experts (MoE) transformer language model from the Qwen family. All 122B parameters are loaded into VRAM, with 10B active per token, and it supports up to 256K tokens of context.

Parameters: 122.0B
Active: 10.0B
Max Context: 256K
Architecture: MoE
Released:
Modality: Text

About Qwen 3.5 122B-A10B (MoE)

Qwen 3.5 122B-A10B (MoE) is a mixture-of-experts (MoE) transformer language model from the Qwen family. Its full 122B parameters are loaded into VRAM, but only 10B are active per token, routed across 256 experts in a DeltaNet + MoE hybrid architecture with 12 transformer layers. It supports up to 262,144 (256K) tokens of context, using a hidden dimension of 3,072 and 2 KV heads for efficient grouped-query attention (GQA). The model is released under the Apache 2.0 license and is sized for server or high-end workstation deployment.

Research · Enterprise
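To make the total-versus-active distinction concrete, the short Python sketch below (illustrative only, built from the figures on this card) shows why weight memory tracks the 122B total parameters while per-token compute tracks the 10B active parameters; the ~2 × N FLOPs-per-token rule of thumb is an assumption, not a figure from this page.

```python
# Illustrative sketch only, using the numbers from the card above: in an MoE
# model every expert must sit in VRAM, so weight memory scales with TOTAL
# parameters, while per-token compute scales with ACTIVE parameters.

TOTAL_PARAMS = 122e9   # all experts resident in memory
ACTIVE_PARAMS = 10e9   # parameters actually used per token

def weight_memory_gb(total_params: float, bytes_per_weight: float) -> float:
    """Memory to hold the weights alone, before KV cache and runtime overhead."""
    return total_params * bytes_per_weight / 1e9

def flops_per_token(active_params: float) -> float:
    """Rule-of-thumb ~2 * N_active FLOPs per token for a decoder forward pass."""
    return 2 * active_params

print(f"FP16 weights : {weight_memory_gb(TOTAL_PARAMS, 2.0):6.1f} GB")   # ~244 GB
print(f"Q4 (~0.5 B/W): {weight_memory_gb(TOTAL_PARAMS, 0.5):6.1f} GB")   # ~61 GB
print(f"Compute/token: {flops_per_token(ACTIVE_PARAMS):.1e} FLOPs")      # ~2.0e+10
```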

Technical Specifications

Total Parameters: 122.0B
Active Parameters: 10.0B per token
Architecture: Mixture of Experts (MoE)
Total Experts: 10
Attention Type: GQA
Hidden Dimension: d = 3,072
Transformer Layers: 12
Attention Heads: 32
KV Heads: n_kv = 2
Head Dimension: d_head = 256
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
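Those attention figures are also why long contexts stay cheap relative to the weights: with 2 KV heads instead of 32, the KV cache shrinks by 16×. The sketch below estimates the FP16 cache size from the listed specs using the standard 2 (K and V) × layers × KV heads × head dim × bytes-per-value formula; it is a back-of-the-envelope estimate, not a published figure.

```python
# KV-cache footprint implied by the spec list above (an estimate, not an
# official figure): 12 layers, 2 KV heads, head dim 256, FP16 (2 bytes).

LAYERS, KV_HEADS, HEAD_DIM, BYTES_FP16 = 12, 2, 256, 2

def kv_cache_bytes(tokens: int, kv_heads: int = KV_HEADS) -> int:
    # Two cached tensors (K and V) per layer, one head_dim vector per KV head per token.
    return 2 * LAYERS * kv_heads * HEAD_DIM * BYTES_FP16 * tokens

ctx = 262_144  # 256K context
print(f"GQA, 2 KV heads : {kv_cache_bytes(ctx) / 1e9:6.1f} GB")      # ~6.4 GB
print(f"MHA, 32 KV heads: {kv_cache_bytes(ctx, 32) / 1e9:6.1f} GB")  # ~103 GB
```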

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization | 1K ctx | 195K ctx | 256K ctx
Q4_K_M (0.50 B/W, ~97% of FP16) | 63.08 · Datacenter GPU | 67.64 · Datacenter GPU | 69.06 · Datacenter GPU
Q8_0 (1.00 B/W, ~100% of FP16) | 126.1 · Cluster / Multi-GPU | 130.7 · Cluster / Multi-GPU | 132.1 · Cluster / Multi-GPU
F16 (2.00 B/W, reference) | 252.3 · Cluster / Multi-GPU | 256.8 · Cluster / Multi-GPU | 258.2 · Cluster / Multi-GPU

Tiers: fits a 24 GB consumer GPU · fits an 80 GB datacenter GPU · requires a cluster / multi-GPU setup.
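The rows above can be roughly reproduced with a simple formula: weight bytes plus an FP16 KV cache, scaled by the 10% overhead factor. The exact methodology behind the table is not published here, so the Python sketch below is only an approximation; assuming the table reports binary gigabytes, it lands within a couple of units of the listed values.

```python
# Approximate reconstruction of the table above: weights + FP16 KV cache,
# times a 10% overhead factor. Assumptions: the listed bytes/weight values
# and binary gigabytes (dividing by 1024**3); the exact formula behind the
# table is not published, so small deviations are expected.

PARAMS = 122e9
LAYERS, KV_HEADS, HEAD_DIM = 12, 2, 256

def estimated_vram_gib(bytes_per_weight: float, ctx_tokens: int,
                       overhead: float = 0.10) -> float:
    weights = PARAMS * bytes_per_weight
    kv_cache = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2 * ctx_tokens  # K+V in FP16
    return (weights + kv_cache) * (1 + overhead) / 1024**3

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    cells = " | ".join(f"{estimated_vram_gib(bpw, c):6.1f}"
                       for c in (1_024, 195 * 1024, 256 * 1024))
    print(f"{name:7s} | {cells}")
```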

Find the right GPU for Qwen 3.5 122B-A10B (MoE)

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.