DeepSeek V3 0324 (MoE)
DeepSeek V3 0324 is a mixture-of-experts (MoE) transformer language model from the DeepSeek family with 685B total parameters across 61 layers, of which 37B are active per token. It supports up to 64K tokens of context.
| Parameters | Active | Max Context | Architecture | Released | Modality |
|---|---|---|---|---|---|
| 685.0B | 37.0B | 64K | MoE | — | Text |
About DeepSeek V3 0324 (MoE)
DeepSeek V3 0324 is a mixture-of-experts (MoE) transformer language model from the DeepSeek family, containing 685B parameters across 61 layers. All 685B parameters must be loaded into VRAM, but only 37B are active per token. It supports up to 64K tokens of context with a hidden dimension of 7168 and 8 KV heads; attention uses Multi-head Latent Attention (MLA), which compresses the KV cache. This is the March 2025 update of DeepSeek V3 (hence the 0324 designation).
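To keep the total-versus-active distinction concrete: weight memory scales with all 685B parameters (every expert must be resident in VRAM), while per-token compute scales with the 37B active parameters. The Python sketch below illustrates this; the 2-FLOPs-per-parameter rule and the plain bytes-per-weight conversion are simplifying assumptions, not measured figures.

```python
# Back-of-the-envelope sizing for an MoE model such as DeepSeek V3 0324.
# Assumption: every parameter must be resident in VRAM, but only the
# "active" parameters participate in each token's forward pass.

TOTAL_PARAMS = 685e9    # all experts combined (drives memory)
ACTIVE_PARAMS = 37e9    # routed experts + shared layers (drives compute)


def weight_memory_gb(total_params: float, bytes_per_weight: float) -> float:
    """Raw weight storage in GB, excluding KV cache and runtime overhead."""
    return total_params * bytes_per_weight / 1e9


def flops_per_token(active_params: float) -> float:
    """Rough dense-equivalent compute: ~2 FLOPs per active parameter."""
    return 2 * active_params


if __name__ == "__main__":
    for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
        print(f"{name:7s} weights ~= {weight_memory_gb(TOTAL_PARAMS, bpw):7.1f} GB")
    print(f"compute ~= {flops_per_token(ACTIVE_PARAMS) / 1e9:.0f} GFLOPs per token")
```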
Technical Specifications
System Requirements
Estimated VRAM, in GB, at 10% overhead for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality vs FP16 | 1K ctx | 64K ctx |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% | 354.3 GB (Cluster / Multi-GPU) | 369.3 GB (Cluster / Multi-GPU) |
| Q8_0 | 1.00 | ~100% | 708.4 GB (Cluster / Multi-GPU) | 723.4 GB (Cluster / Multi-GPU) |
| F16 | 2.00 | Reference | 1416.5 GB (Cluster / Multi-GPU) | 1431.5 GB (Cluster / Multi-GPU) |
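The figures above come from the site's calculator. A minimal sketch of the same kind of estimate is shown below, assuming weight memory equals parameters times bytes-per-weight, plus a context-dependent KV-cache term, inflated by the 10% overhead. The per-token KV-cache size is a hypothetical placeholder (MLA compresses the cache well below naive multi-head attention), so the output will not reproduce the table exactly.

```python
# Minimal VRAM estimator in the spirit of the table above (not the site's
# exact formula). KV_BYTES_PER_TOKEN is an assumed placeholder for an
# MLA-compressed cache; real values depend on the serving stack.

TOTAL_PARAMS = 685e9
OVERHEAD = 0.10                   # the 10% overhead used in the table
KV_BYTES_PER_TOKEN = 70 * 1024    # assumption: ~70 KiB of cache per token


def estimated_vram_gb(bytes_per_weight: float, ctx_tokens: int) -> float:
    """Weights + KV cache, inflated by the runtime overhead factor."""
    weights = TOTAL_PARAMS * bytes_per_weight
    kv_cache = KV_BYTES_PER_TOKEN * ctx_tokens
    return (weights + kv_cache) * (1 + OVERHEAD) / 1e9


if __name__ == "__main__":
    for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
        for ctx in (1024, 64 * 1024):
            print(f"{name:7s} @ {ctx:6d} tokens ~= {estimated_vram_gb(bpw, ctx):7.1f} GB")
```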
Other DeepSeek Models
| Model | Params | Layers | Context |
|---|---|---|---|
| DeepSeek R1 (MoE) | 671.0B | 61 | 64K |
| DeepSeek V3 (MoE) | 671.0B | 61 | 64K |
| DeepSeek V4-Pro (MoE) | 1.6T | 80 | 1.0M |
| DeepSeek V4-Flash (MoE) | 284.0B | 48 | 1.0M |
| DeepSeek R1 Distill Qwen 1.5B | 1.5B | 28 | 32K |
| DeepSeek R1 Distill Qwen 7B | 7.6B | 28 | 32K |
Find the right GPU for DeepSeek V3 0324 (MoE)
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.