Mistral · MoE · Apache 2.0

Mistral Large 3 (MoE)

675.0B Parameters · 41.0B Active · 256K Max Context · MoE Architecture · Text Modality

About Mistral Large 3 (MoE)

Mistral Large 3 (MoE) is a mixture-of-experts (MoE) transformer language model from the Mistral family, with 675B total parameters across 88 layers. All 675B parameters must be loaded into VRAM, but only 41B are active per token: each MoE layer holds 128 experts, of which the top 4 are routed to per token. The model supports up to 262,144 (256K) tokens of context, with a hidden dimension of 12,288 and 8 KV heads for efficient grouped-query attention (GQA). It is released under the Apache 2.0 license and is a server-class model.
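As a rough illustration of the routing described above, here is a minimal NumPy sketch of top-4 selection over 128 experts. The expert and router weights, and the shrunken dimension, are hypothetical stand-ins for the demo (the real experts are SwiGLU MLPs at d = 12,288); this is not Mistral's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, TOP_K, D_MODEL = 128, 4, 64  # d_model shrunk from 12,288 for the demo

# Toy experts: each is a single linear map instead of a full SwiGLU MLP.
expert_ws = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02

def moe_forward(x):
    """Route one token vector x of shape (D_MODEL,) through its top-4 experts."""
    logits = x @ router_w                    # (N_EXPERTS,) router scores
    top = np.argsort(logits)[-TOP_K:]        # indices of the 4 highest-scoring experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                             # softmax over the selected experts only
    # Only these 4 experts run; the other 124 stay idle for this token,
    # which is why only ~41B of the 675B parameters are active per token.
    return sum(wi * (x @ expert_ws[i]) for wi, i in zip(w, top))

y = moe_forward(rng.standard_normal(D_MODEL))
print(y.shape)  # (64,)
```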

Research · Enterprise

Technical Specifications

Total Parameters: 675.0B
Active Parameters: 41.0B per token
Architecture: Mixture of Experts
Total Experts: 128
Active Experts per Token: 4 (top-4 routing)
Attention Type: GQA (grouped-query attention)
Hidden Dimension: d = 12,288
Transformer Layers: 88
Attention Heads: 96
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
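The attention numbers above fit together as follows: 96 query heads at head dimension 128 recover the hidden dimension (96 × 128 = 12,288), and those 96 query heads share only 8 K/V heads, i.e. 12 query heads per group. A minimal NumPy sketch of that grouping (toy sequence length, random weights, no causal mask; a shape illustration, not a production kernel):

```python
import numpy as np

n_heads, n_kv, d_head = 96, 8, 128
d_model = n_heads * d_head
assert d_model == 12_288
group = n_heads // n_kv          # 12 query heads share each K/V head

rng = np.random.default_rng(0)
seq = 16
q = rng.standard_normal((n_heads, seq, d_head))
k = rng.standard_normal((n_kv, seq, d_head))   # only 8 K heads are stored...
v = rng.standard_normal((n_kv, seq, d_head))   # ...so the KV cache is 12x smaller

out = np.empty_like(q)
for h in range(n_heads):
    kv = h // group                              # shared KV head for this query head
    scores = q[h] @ k[kv].T / np.sqrt(d_head)
    probs = np.exp(scores - scores.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)        # softmax over keys
    out[h] = probs @ v[kv]

print(out.shape)  # (96, 16, 128)
```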

System Requirements

Estimated VRAM, in GB, at 10% overhead for different quantization methods (B/W = bytes per weight) and context sizes.

Quantization | 1K ctx | 195K ctx | 256K ctx
Q4_K_M (0.50 B/W, ~97% of FP16 quality) | 349.2 | 416.0 | 436.9
Q8_0 (1.00 B/W, ~100% of FP16 quality) | 698.1 | 764.9 | 785.8
F16 (2.00 B/W, reference) | 1395.9 | 1462.7 | 1483.6

At every quantization and context size shown, the model requires a cluster / multi-GPU setup; it fits neither a 24 GB consumer GPU nor a single 80 GB datacenter GPU.
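The rows above follow simple accounting: weight memory scales with bytes per weight, while the KV cache grows linearly with context because only the 8 KV heads are cached per layer. Below is a back-of-the-envelope estimator under those assumptions; it is a sketch, not the site's exact calculator, and lands within about 1% of the table values.

```python
# Assumptions: weight bytes = total params x bytes-per-weight plus the
# stated 10% overhead; FP16 KV cache stores K and V for all 88 layers
# and 8 KV heads per cached token. Results are in GiB.
def vram_gib(bytes_per_weight, ctx_tokens, params=675e9,
             layers=88, n_kv=8, d_head=128, kv_bytes=2, overhead=0.10):
    weights = params * bytes_per_weight * (1 + overhead)
    kv_cache = 2 * layers * n_kv * d_head * kv_bytes * ctx_tokens  # K and V
    return (weights + kv_cache) / 2**30

for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    row = [f"{vram_gib(bpw, c * 1024):.1f}" for c in (1, 195, 256)]
    print(name, row)
# Q4_K_M ['346.1', '412.8', '433.7']   (table: 349.2 / 416.0 / 436.9)
```

The per-token KV cost here, 2 × 88 × 8 × 128 × 2 bytes ≈ 0.34 MiB, matches the table's growth between context columns almost exactly; the small residual gap sits in the weight/overhead accounting.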


Find the right GPU for Mistral Large 3 (MoE)

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.