Llama · Dense · Llama 3.1 Community License

Llama 3.1 70B


Parameters: 70.6B
Max Context: 128K
Architecture: Dense
Released: Jul 23, 2024
Modality: Text

About Llama 3.1 70B

Llama 3.1 70B was the open-weight model to beat through late 2024 and most of 2025. At 70.6B parameters it delivers GPT-4-class performance on reasoning, coding, and creative tasks. At Q4_K_M it requires ~38 GB VRAM — fitting on a single RTX 5090 (32 GB) only with partial offloading. Most users run it at Q3_K_M (~30 GB) on 24 GB GPUs with ~6 GB offloaded to system RAM. The quality jump from 8B to 70B is substantial for complex reasoning, multi-step instruction following, and long-form generation. Still a strong choice where ecosystem maturity matters.
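The VRAM figures quoted above follow from parameter count times bytes per weight, plus an overhead factor. A minimal sketch (the 10% overhead and GiB accounting are assumptions; the `weight_vram_gib` helper is illustrative, not a real library function):

```python
# Rough weight-memory estimate: params * bytes_per_weight, plus a
# flat overhead factor (10% here, an assumption matching the
# "10% overhead" note in the System Requirements section).
def weight_vram_gib(params: float, bytes_per_weight: float,
                    overhead: float = 0.10) -> float:
    gib = params * bytes_per_weight / 2**30
    return gib * (1 + overhead)

# Llama 3.1 70B at Q4_K_M (~0.5 bytes/weight): roughly 36 GiB of
# weights before any KV cache, consistent with the ~38 GB claim above.
print(round(weight_vram_gib(70.6e9, 0.5), 1))
```

At Q3_K_M (~0.42 bytes/weight under this accounting) the same formula lands near the ~30 GB figure quoted for 24 GB GPUs with partial offloading.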

Reasoning · Code · Creative Writing · RAG · Agentic

Technical Specifications

Total Parameters: 70.6B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 8,192
Transformer Layers: 80
Attention Heads: 64
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE (theta = 500,000)
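The specs above fix the KV-cache size: two tensors (K and V) per layer, each storing `n_kv × d_head` values per token. A short sketch, assuming an fp16 cache (2 bytes per element):

```python
# KV-cache size from the specs above: 2 tensors (K and V) per layer,
# each [n_kv_heads, head_dim] per token, at fp16 (2 bytes/element).
def kv_cache_gib(layers=80, n_kv=8, d_head=128, ctx=131072, bytes_per=2):
    total_bytes = 2 * layers * n_kv * d_head * ctx * bytes_per
    return total_bytes / 2**30

print(kv_cache_gib())  # full 128K context -> 40.0
```

This is where GQA pays off: with 64 query heads but only 8 KV heads, the cache is 8x smaller than a full multi-head layout would require, yet a full 128K-token cache still adds roughly 40 GiB on top of the weights.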

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

| Quantization | Bytes/Weight | Quality | 1K ctx | 128K ctx |
|---|---|---|---|---|
| Q4_K_M | 0.50 B/W | ~97% of FP16 | 36.80 GB (Datacenter GPU) | 76.49 GB (Datacenter GPU) |
| Q8_0 | 1.00 B/W | ~100% of FP16 | 73.30 GB (Datacenter GPU) | 113.0 GB (Cluster / Multi-GPU) |
| F16 | 2.00 B/W | Reference | 146.3 GB (Cluster / Multi-GPU) | 186.0 GB (Cluster / Multi-GPU) |

Tiers: Consumer GPU = fits a 24 GB consumer GPU; Datacenter GPU = fits an 80 GB datacenter GPU; Cluster / Multi-GPU = requires a cluster or multiple GPUs.
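The tier labels can be reproduced by combining the weight and KV-cache terms and bucketing the total. A sketch under stated assumptions (GiB accounting, fp16 KV cache, 10% overhead applied to weights only; the exact accounting the table uses may differ slightly, so the absolute numbers are approximate even though the tiers come out the same):

```python
# Total-footprint sketch: weights * (1 + overhead) + fp16 KV cache,
# then bucket into the tiers from the legend above. The accounting
# details (GiB, overhead on weights only) are assumptions.
PARAMS = 70.6e9

def total_gib(bytes_per_weight, ctx, overhead=0.10,
              layers=80, n_kv=8, d_head=128, kv_bytes=2):
    weights = PARAMS * bytes_per_weight / 2**30 * (1 + overhead)
    kv = 2 * layers * n_kv * d_head * ctx * kv_bytes / 2**30
    return weights + kv

def tier(gib):
    if gib <= 24:
        return "Consumer GPU"
    if gib <= 80:
        return "Datacenter GPU"
    return "Cluster / Multi-GPU"

for bpw, name in [(0.5, "Q4_K_M"), (1.0, "Q8_0"), (2.0, "F16")]:
    g = total_gib(bpw, ctx=131072)
    print(f"{name}: {g:.1f} GiB -> {tier(g)}")
```

At 128K context this yields Datacenter GPU for Q4_K_M and Cluster / Multi-GPU for Q8_0 and F16, matching the table's classifications.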


Find the right GPU for Llama 3.1 70B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.