DeepSeek R1 Distill Qwen 7B
| Parameters | Max Context | Architecture | Released | Modality |
|---|---|---|---|---|
| 7.6B | 32K | Dense | — | Text |
About DeepSeek R1 Distill Qwen 7B
DeepSeek R1 Distill Qwen 7B is a dense transformer language model from the DeepSeek family, containing 7.61B parameters across 28 layers. It supports up to 32K tokens of context, with a hidden dimension of 3584 and 4 KV heads for efficient grouped-query attention (GQA). Its reasoning capability was distilled from DeepSeek R1 into a Qwen 2.5 7B base, making it a strong choice for local reasoning workloads.
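To make the GQA point concrete, here is a back-of-the-envelope comparison of KV-cache size with 4 shared KV heads versus full multi-head attention. This is a rough sketch, not the model's exact memory layout: the 28 query heads (and thus the 128-dim heads, 3584 / 28) and FP16 cache storage are assumptions, while the layer count, hidden size, and KV-head count come from the spec above.

```python
# Back-of-the-envelope KV-cache sizing for DeepSeek R1 Distill Qwen 7B.
# LAYERS, HIDDEN, and KV_HEADS come from the spec above; ATTN_HEADS = 28
# (and thus HEAD_DIM = 128) and FP16 cache storage are assumptions.

LAYERS = 28
HIDDEN = 3584
ATTN_HEADS = 28                    # assumed query-head count
KV_HEADS = 4                       # shared K/V heads under GQA
HEAD_DIM = HIDDEN // ATTN_HEADS    # 128
BYTES_FP16 = 2

def kv_cache_bytes(ctx_tokens: int, kv_heads: int) -> int:
    """Total K + V cache across all layers for a given context length."""
    return 2 * LAYERS * kv_heads * HEAD_DIM * BYTES_FP16 * ctx_tokens

gqa = kv_cache_bytes(32_768, KV_HEADS)     # ~1.9 GB with 4 KV heads
mha = kv_cache_bytes(32_768, ATTN_HEADS)   # ~13.2 GB with 28 KV heads
print(f"GQA: {gqa / 1e9:.2f} GB vs MHA: {mha / 1e9:.2f} GB "
      f"({mha / gqa:.0f}x smaller cache)")
```

At the full 32K context, sharing 4 KV heads cuts the cache roughly 7x compared to one KV head per query head, which is a large part of why this model fits comfortably on consumer GPUs.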
Technical Specifications
System Requirements
Estimated VRAM, in GB and including 10% overhead, for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality | 1K ctx | 32K ctx |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 3.99 GB (Consumer GPU) | 5.68 GB (Consumer GPU) |
| Q8_0 | 1.00 | ~100% of FP16 | 7.92 GB (Consumer GPU) | 9.62 GB (Consumer GPU) |
| F16 | 2.00 | Reference | 15.79 GB (Consumer GPU) | 17.48 GB (Consumer GPU) |
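The table's figures can be approximated with a simple formula: weights at the quantization's bytes-per-weight, plus an FP16 KV cache for the chosen context, times 1.10 for overhead. The sketch below is a re-derivation under those assumptions (including binary-gigabyte accounting and the 128-dim heads from above), not the calculator's actual code; it lands within about 0.2 GB of each table entry.

```python
# Approximate re-derivation of the VRAM table above. Assumptions: weights
# cost params * bytes-per-weight, the KV cache is FP16, figures are binary
# gigabytes (GiB), and overhead is a flat 10%. The live calculator may add
# activation buffers, so expect small deviations from the table.

PARAMS = 7.61e9
LAYERS, KV_HEADS, HEAD_DIM = 28, 4, 128   # HEAD_DIM = 3584 / 28 (assumed)
GIB = 1024 ** 3

def vram_gib(bytes_per_weight: float, ctx_tokens: int,
             overhead: float = 0.10) -> float:
    weights = PARAMS * bytes_per_weight
    kv_cache = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2 * ctx_tokens  # K + V, FP16
    return (weights + kv_cache) * (1 + overhead) / GIB

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    print(f"{name}: {vram_gib(bpw, 1_024):.2f} GB @ 1K ctx, "
          f"{vram_gib(bpw, 32_768):.2f} GB @ 32K ctx")
```

Running this prints, for example, roughly 3.96 GB and 5.82 GB for Q4_K_M, against 3.99 GB and 5.68 GB in the table, so the simple formula tracks the published numbers closely.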
Other DeepSeek Models
| Model | Params | Layers | Context |
|---|---|---|---|
| DeepSeek R1 (MoE) | 671.0B | 61 | 64K |
| DeepSeek V3 (MoE) | 671.0B | 61 | 64K |
| DeepSeek V3 0324 (MoE) | 685.0B | 61 | 64K |
| DeepSeek V4-Pro (MoE) | 1.6T | 80 | 1.0M |
| DeepSeek V4-Flash (MoE) | 284.0B | 48 | 1.0M |
| DeepSeek R1 Distill Qwen 1.5B | 1.5B | 28 | 32K |
Find the right GPU for DeepSeek R1 Distill Qwen 7B
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.