
Qwen 3.5 2B

Qwen 3.5 2B is a dense transformer language model from the Qwen family, containing 2B parameters across 6 layers. It supports up to 262K tokens of context with a hidden dimension of 2,048 and 4 KV heads for efficient grouped-query attention (GQA).

Parameters: 2.0B
Max Context: 256K
Architecture: Dense
Modality: Text

About Qwen 3.5 2B

Released under the Apache 2.0 license, Qwen 3.5 2B pairs DeltaNet layers with standard attention layers in a hybrid stack, with a KV cache maintained on only 25% of layers, and its native 262K-token context window can be extended to 1M tokens.

Use cases: On-Device, Basic Chat
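The GQA layout is what keeps the KV cache small: 16 query heads share only 4 K/V heads. Below is a minimal sketch of the mechanism in PyTorch, using the head counts and head dimension from the spec table that follows; the batch size, sequence length, and random inputs are illustrative assumptions, not the released weights.

```python
import torch
import torch.nn.functional as F

# Head counts and head dimension from the spec table below; everything
# else here (batch, seq, random tensors) is an illustrative assumption.
N_HEADS, N_KV_HEADS, HEAD_DIM = 16, 4, 256
GROUP = N_HEADS // N_KV_HEADS  # 4 query heads share each K/V head

def gqa_attention(q, k, v):
    # q: (batch, N_HEADS, seq, HEAD_DIM); k, v: (batch, N_KV_HEADS, seq, HEAD_DIM).
    # Expand each K/V head across its group of query heads, then run
    # ordinary causal scaled-dot-product attention.
    k = k.repeat_interleave(GROUP, dim=1)
    v = v.repeat_interleave(GROUP, dim=1)
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

batch, seq = 1, 8
q = torch.randn(batch, N_HEADS, seq, HEAD_DIM)
k = torch.randn(batch, N_KV_HEADS, seq, HEAD_DIM)
v = torch.randn(batch, N_KV_HEADS, seq, HEAD_DIM)
out = gqa_attention(q, k, v)  # (1, 16, 8, 256)
```

Because only the 4 K/V heads are cached, the per-token cache is a quarter of what full multi-head attention over 16 heads would store.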

Technical Specifications

Total Parameters: 2.0B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 2,048
Transformer Layers: 6
Attention Heads: 16
KV Heads: n_kv = 4
Head Dimension: d_head = 256
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
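The table lists RoPE for positions. As a quick sketch of what that means, the snippet below applies rotary embeddings to a query or key tensor with the 256-dimensional heads from the table; the base frequency of 10,000 is a conventional assumption (the card does not state the actual base, and long-context variants typically rescale these frequencies).

```python
import torch

def apply_rope(x, base=10000.0):
    # x: (seq, n_heads, head_dim), head_dim = 256 per the table above.
    # The base frequency is an assumption, not taken from the card.
    seq, _, head_dim = x.shape
    half = head_dim // 2
    inv_freq = base ** (-torch.arange(half) / half)           # (half,) frequencies
    angles = torch.arange(seq)[:, None] * inv_freq[None, :]   # (seq, half) angles
    cos = angles.cos()[:, None, :]                            # broadcast over heads
    sin = angles.sin()[:, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) coordinate pair by a position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(8, 16, 256)   # 8 positions, 16 heads
q_rot = apply_rope(q)         # same shape, positions now encoded in phase
```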

System Requirements

Estimated VRAM in GB at 10% overhead for different quantization methods and context sizes; a sketch of the underlying estimate follows the table.

Quantization                         1K ctx     195K ctx   256K ctx
Q4_K_M (0.50 B/W, ~97% of FP16)      1.06 GB    5.61 GB    7.03 GB
Q8_0 (1.00 B/W, ~100% of FP16)       2.09 GB    6.65 GB    8.07 GB
F16 (2.00 B/W, reference)            4.16 GB    8.71 GB   10.14 GB

Every configuration above fits a 24 GB consumer GPU; none reaches the 80 GB datacenter-GPU tier or requires a multi-GPU cluster.
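For readers who want to sanity-check these numbers, here is a rough back-of-the-envelope estimator: quantized weights plus an FP16 KV cache sized from the spec table, with the stated 10% overhead. This is an assumption about how the table was built; the calculator's exact formula (parameter-count rounding, KV quantization, activation buffers) may differ.

```python
def estimate_vram_gb(bytes_per_weight: float, ctx_tokens: int,
                     params: float = 2.0e9, n_layers: int = 6,
                     n_kv_heads: int = 4, head_dim: int = 256,
                     kv_bytes: int = 2, overhead: float = 0.10) -> float:
    """Rough VRAM estimate (GiB, matching the table's GB figures):
    quantized weights + FP16 KV cache, plus a flat overhead factor.
    Defaults come from the spec table above; the formula itself is an
    assumption about how the table was computed."""
    weight_bytes = params * bytes_per_weight
    # KV cache: 2 tensors (K and V) x layers x KV heads x head dim x tokens.
    kv_cache_bytes = 2 * n_layers * n_kv_heads * head_dim * ctx_tokens * kv_bytes
    return (weight_bytes + kv_cache_bytes) * (1 + overhead) / 2**30

print(f"{estimate_vram_gb(0.5, 1024):.2f}")  # Q4_K_M at 1K ctx -> ~1.05, near the table's 1.06
print(f"{estimate_vram_gb(2.0, 1024):.2f}")  # F16 at 1K ctx  -> ~4.12, near the table's 4.16
```

At long contexts the table's figures come in somewhat below this formula, which suggests the calculator discounts the KV cache (possibly reflecting the hybrid design's 25%-of-layers cache), so treat the sketch as an upper-bound approximation.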


Find the right GPU for Qwen 3.5 2B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.