DeepSeek · Dense · Llama 3.1 Community License

DeepSeek R1 Distill Llama 70B

DeepSeek R1 Distill Llama 70B is the largest distilled reasoning variant, based on Llama 3.3 70B fine-tuned on DeepSeek R1 reasoning traces. It delivers the strongest reasoning performance of any distilled model, approaching the full R1 on math and code benchmarks.

Parameters: 70.6B
Max Context: 32K
Architecture: Dense
Released: Jan 20, 2025
Modality: Text

About DeepSeek R1 Distill Llama 70B

DeepSeek R1 Distill Llama 70B is the largest distilled reasoning variant, based on Llama 3.3 70B fine-tuned on DeepSeek R1 reasoning traces. It delivers the strongest reasoning performance of any distilled model, approaching the full R1 on math and code benchmarks. At roughly 38 GB of VRAM for Q4_K_M, it calls for a 32 GB+ GPU or partial offloading on a 24 GB card. It is distributed under the Llama 3.1 Community License (not MIT like the other R1 distill variants).

Reasoning · Math · STEM · Code · Research
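The paragraph above notes that a 24 GB card needs partial offloading at Q4_K_M. Below is a minimal sketch (not from this page) of how that might look with llama-cpp-python, splitting the 80 transformer layers between GPU and CPU; the GGUF filename and layer split are assumptions to adjust for your own setup.

```python
# Minimal partial-offload sketch for a 24 GB GPU using llama-cpp-python.
# The model_path below is hypothetical; point it at your own Q4_K_M GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,        # a shorter context keeps the KV cache small
    n_gpu_layers=48,   # offload part of the 80 layers; the rest runs on CPU/RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    max_tokens=1024,
)
print(out["choices"][0]["message"]["content"])
```

Fewer offloaded layers lower VRAM use at the cost of speed; with 32 GB+ of VRAM you can raise n_gpu_layers until all 80 layers fit on the GPU.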

Technical Specifications

Total Parameters: 70.6B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 8,192
Transformer Layers: 80
Attention Heads: 64
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
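These attention figures are what make long contexts expensive: with GQA, each token's KV cache stores one key and one value vector per KV head per layer. A back-of-the-envelope sketch (ours, not the page's calculator) using the numbers above:

```python
# KV-cache size per token implied by the specs above (FP16 cache assumed).
n_layers, n_kv, d_head = 80, 8, 128
bytes_per_elem = 2  # FP16

# Factor of 2 covers the key vector and the value vector.
kv_bytes_per_token = 2 * n_layers * n_kv * d_head * bytes_per_elem

print(kv_bytes_per_token / 1024)              # 320.0 KiB per token
print(32_768 * kv_bytes_per_token / 1024**3)  # 10.0 GiB at the full 32K context
```

That ~10 GiB of KV cache at 32K context is the main reason the 32K column in the table below sits well above the 1K column.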

System Requirements

Estimated VRAM (GB), with 10% overhead included, for different quantization methods and context sizes.

Quantization   Bytes/weight   Quality          1K ctx                            32K ctx
Q4_K_M         0.50           ~97% of FP16     36.80 GB (Datacenter GPU)         46.49 GB (Datacenter GPU)
Q8_0           1.00           ~100% of FP16    73.30 GB (Datacenter GPU)         82.98 GB (Cluster / Multi-GPU)
F16            2.00           Reference        146.3 GB (Cluster / Multi-GPU)    156.0 GB (Cluster / Multi-GPU)

Hardware tiers: Consumer GPU = fits a 24 GB consumer GPU; Datacenter GPU = fits an 80 GB datacenter GPU; Cluster / Multi-GPU = requires multiple GPUs.
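The table can be roughly reproduced by hand: weight bytes (parameters × bytes per weight) plus an FP16 KV cache for the chosen context, plus the stated 10% overhead. The sketch below is an approximation rather than the page's exact formula, so expect results within a GiB or two of the figures above.

```python
# Rough VRAM estimate in GiB: weights + FP16 KV cache + 10% overhead.
PARAMS = 70.6e9
N_LAYERS, N_KV, D_HEAD = 80, 8, 128
KV_BYTES_PER_TOKEN = 2 * N_LAYERS * N_KV * D_HEAD * 2  # K and V, FP16

def vram_gib(bytes_per_weight: float, ctx_tokens: int, overhead: float = 0.10) -> float:
    weights = PARAMS * bytes_per_weight
    kv_cache = ctx_tokens * KV_BYTES_PER_TOKEN
    return (weights + kv_cache) * (1 + overhead) / 1024**3

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    print(name, round(vram_gib(bpw, 1_024), 2), round(vram_gib(bpw, 32_768), 2))
```

Running it gives roughly 36.5 / 47.2 GiB for Q4_K_M, 72.7 / 83.3 for Q8_0, and 145.0 / 155.7 for F16, close to the table's 36.80 / 46.49, 73.30 / 82.98, and 146.3 / 156.0.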


Find the right GPU for DeepSeek R1 Distill Llama 70B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.