Qwen · MoE · Apache 2.0

Qwen 3.5 35B-A3B (MoE)


Parameters: 35.0B
Active: 3.0B
Max Context: 256K
Architecture: MoE
Released: Feb 18, 2026
Modality: Text

About Qwen 3.5 35B-A3B (MoE)

Qwen 3.5 35B-A3B combines the DeltaNet hybrid architecture with MoE — 256 experts, 8+1 active, totaling 35B loaded but only 3B active per token. It achieves roughly 3.5 tokens/second on an RTX 4090, making it one of the few MoE models viable on consumer GPUs. The combination of linear attention layers and sparse MoE feed-forward networks delivers exceptional efficiency. Apache 2.0 licensed. A glimpse of where efficient local AI is heading.
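
To make the "256 experts, 8+1 active" figure concrete, below is a minimal sketch of a sparse MoE feed-forward block with top-8 routing plus one always-on shared expert. The module names, per-expert width, and softmax-over-selected-logits router are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoEFFN(nn.Module):
    """Illustrative MoE feed-forward block: 256 routed experts, top-8 active,
    plus one always-on shared expert (the "8+1" in the model description)."""

    def __init__(self, d_model=2048, d_expert=512, n_experts=256, top_k=8):
        super().__init__()
        self.top_k = top_k
        # Per-expert width (d_expert) is an assumed value, not the released config.
        make_expert = lambda: nn.Sequential(
            nn.Linear(d_model, d_expert), nn.SiLU(), nn.Linear(d_expert, d_model))
        self.experts = nn.ModuleList(make_expert() for _ in range(n_experts))
        self.shared = make_expert()                      # shared expert, always active
        self.router = nn.Linear(d_model, n_experts, bias=False)

    def forward(self, x):                                # x: (tokens, d_model)
        logits = self.router(x)                          # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # pick 8 experts per token
        weights = F.softmax(weights, dim=-1)             # normalize over the selected 8
        out = self.shared(x)
        for slot in range(self.top_k):
            for e in idx[:, slot].unique().tolist():     # tokens routed to expert e
                mask = idx[:, slot] == e
                out[mask] = out[mask] + weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

# Only 9 of the 257 expert MLPs run for any given token, which is how the
# active-parameter count stays near 3B while all 35B stay resident in memory.
print(SparseMoEFFN()(torch.randn(4, 2048)).shape)        # torch.Size([4, 2048])
```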

Efficient MoE · Consumer GPU · Long Context · Agentic · Commercial

Technical Specifications

Total Parameters: 35.0B
Active Parameters: 3.0B per token
Architecture: Mixture of Experts
Total Experts: 256
Active Experts: 9 per token
Attention Type: Hybrid Gated DeltaNet + full attention (25% of layers); MoE FFN
Hidden Dimension: d = 2,048
Transformer Layers: 10
Attention Heads: 16
KV Heads: n_kv = 2
Head Dimension: d_head = 256
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: Dual RoPE (local + global)
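
The "full attention (25% of layers)" spec above implies roughly one full-attention layer for every three Gated DeltaNet layers. The repeating 3+1 pattern below is an assumption for illustration; the actual placement of full-attention layers is not stated on this page.

```python
# Hypothetical hybrid layer schedule: ~25% full attention, the rest Gated DeltaNet.
# The repeating "3 DeltaNet + 1 full attention" pattern is an assumption, as is the
# mapping onto the 10 listed transformer layers.
N_LAYERS = 10

def layer_kind(i: int) -> str:
    return "full_attention" if (i + 1) % 4 == 0 else "gated_deltanet"

schedule = [layer_kind(i) for i in range(N_LAYERS)]
for i, kind in enumerate(schedule):
    print(f"layer {i:2d}: {kind}")
```

Because DeltaNet layers keep a fixed-size recurrent state rather than a growing KV cache, only the full-attention layers (with n_kv = 2 and d_head = 256 here) add context-dependent memory, which helps keep the long-context VRAM figures below comparatively modest.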

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization                       1K ctx                  195K ctx                256K ctx
Q4_K_M (0.50 B/W, ~97% of FP16)    18.11 GB (consumer)     21.91 GB (consumer)     23.09 GB (consumer)
Q8_0 (1.00 B/W, ~100% of FP16)     36.20 GB (datacenter)   40.00 GB (datacenter)   41.18 GB (datacenter)
F16 (2.00 B/W, reference)          72.38 GB (datacenter)   76.18 GB (datacenter)   77.36 GB (datacenter)

Consumer: fits a 24 GB consumer GPU
Datacenter: fits an 80 GB datacenter GPU
Cluster: requires multi-GPU
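
A rough reading of how figures like these are produced: weight bytes (parameters × bytes per weight) plus a KV-cache term that grows with context, all scaled by the 10% overhead. The sketch below is a back-of-envelope approximation with an assumed per-token KV-cache cost; it will not exactly reproduce the table.

```python
def estimate_vram_gib(params_b=35.0, bytes_per_weight=0.5,
                      ctx_tokens=1_024, kv_bytes_per_token=20_000, overhead=0.10):
    """Back-of-envelope VRAM estimate in GiB.

    kv_bytes_per_token is an assumed figure; for this hybrid model only the
    full-attention layers keep a context-proportional KV cache, so the real
    value depends on how many layers use full attention and the cache dtype.
    """
    weight_bytes = params_b * 1e9 * bytes_per_weight
    kv_bytes = ctx_tokens * kv_bytes_per_token
    return (weight_bytes + kv_bytes) * (1 + overhead) / 2**30

for label, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    row = "  ".join(f"{estimate_vram_gib(bytes_per_weight=bpw, ctx_tokens=c):6.1f} GiB"
                    for c in (1_024, 195_000, 256_000))
    print(f"{label:7s} {row}")
```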

Find the right GPU for Qwen 3.5 35B-A3B (MoE)

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.