DeepSeek · Dense · MIT

DeepSeek R1 Distill Qwen 7B

Parameters: 7.6B
Max Context: 32K
Architecture: Dense
Modality: Text

About DeepSeek R1 Distill Qwen 7B

DeepSeek R1 Distill Qwen 7B is a dense transformer language model from the DeepSeek family, containing 7.61B parameters across 28 layers. It supports up to 32K tokens of context with a hidden dimension of 3584 and 4 KV heads for efficient grouped-query attention (GQA). Its reasoning behavior was distilled from DeepSeek R1 into the Qwen 2.5 7B base model, which makes it well suited to running reasoning workloads locally.

Capabilities: Reasoning
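
In GQA, the model's 28 query heads are split into 28 / 4 = 7-head groups, each group sharing one key/value head, which shrinks the KV cache roughly 7x compared with full multi-head attention. Below is a minimal PyTorch sketch of that sharing with illustrative tensor names; causal masking and RoPE are omitted for brevity, so this is not the model's actual implementation.

import torch

# Shapes from the spec: 28 query heads, 4 KV heads, head dim 128.
n_heads, n_kv, d_head, seq = 28, 4, 128, 16
group = n_heads // n_kv  # 7 query heads share each KV head

q = torch.randn(1, n_heads, seq, d_head)
k = torch.randn(1, n_kv, seq, d_head)   # only the 4 KV heads are cached
v = torch.randn(1, n_kv, seq, d_head)

# Broadcast each KV head across its 7-query-head group before attention.
k = k.repeat_interleave(group, dim=1)   # (1, 28, seq, 128)
v = v.repeat_interleave(group, dim=1)

scores = q @ k.transpose(-2, -1) / d_head**0.5
out = torch.softmax(scores, dim=-1) @ v  # (1, 28, seq, 128)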

Technical Specifications

Total Parameters: 7.6B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 3,584
Transformer Layers: 28
Attention Heads: 28
KV Heads: n_kv = 4
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
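
These values map directly onto the Qwen2 architecture that the distill inherits from its Qwen 2.5 7B base. Here is a hedged sketch of the corresponding Hugging Face Qwen2Config using only the fields listed above; anything the table does not mention (vocabulary size, FFN width, RoPE base) is left at the library defaults and is not guaranteed to match the released checkpoint.

from transformers import Qwen2Config

config = Qwen2Config(
    hidden_size=3584,               # d = 3,584
    num_hidden_layers=28,           # transformer layers
    num_attention_heads=28,         # query heads
    num_key_value_heads=4,          # n_kv = 4 (GQA)
    max_position_embeddings=32768,  # 32K context
    hidden_act="silu",              # the gate activation inside SwiGLU
)

# Sanity check: head dimension is derived as hidden_size / num_attention_heads.
assert config.hidden_size // config.num_attention_heads == 128  # d_head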

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization   Bytes/Weight   Quality          1K ctx     32K ctx
Q4_K_M         0.50           ~97% of FP16     3.99 GB    5.68 GB
Q8_0           1.00           ~100% of FP16    7.92 GB    9.62 GB
F16            2.00           Reference        15.79 GB   17.48 GB

Every figure above falls in the "Consumer GPU" tier (fits a 24 GB consumer GPU); the page's other tiers are "Datacenter GPU" (fits an 80 GB datacenter GPU) and "cluster / multi-GPU".
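
The gap between the 1K and 32K columns is, to a good approximation, the FP16 KV cache: 2 (K and V) × 28 layers × 4 KV heads × 128 head dim × 2 bytes ≈ 56 KiB per token, or roughly 1.7 GiB at 32K. Below is a rough estimator in that spirit; the flat 10% overhead and the FP16 KV cache are assumptions, so it will not reproduce the table to the last decimal.

def estimate_vram_gib(bytes_per_weight, ctx_tokens,
                      params=7.61e9, layers=28, n_kv=4,
                      d_head=128, kv_bytes=2, overhead=0.10):
    """Quantized weights plus an FP16 KV cache, with a flat overhead.
    The constants mirror this page; the formula itself is an assumption,
    not necessarily the exact method behind the table."""
    weights = params * bytes_per_weight
    kv_cache = 2 * layers * n_kv * d_head * kv_bytes * ctx_tokens  # K and V
    return (weights + kv_cache) * (1 + overhead) / 2**30

for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    print(f"{name}: ~{estimate_vram_gib(bpw, 32_768):.2f} GiB at 32K ctx")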

Find the right GPU for DeepSeek R1 Distill Qwen 7B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.