Llama 4 Scout (MoE)
| Parameters | Active | Max Context | Architecture | Released | Modality |
|---|---|---|---|---|---|
| 109.0B | 17.0B | 256K | MoE | Apr 5, 2025 | Text + Vision |
About Llama 4 Scout (MoE)
Llama 4 Scout is Meta's efficiency-focused MoE model. With 109B total parameters but only 17B active per token, it delivers performance comparable to dense 70B models at roughly a quarter of the per-token inference FLOPs. However, all 109B parameters must be loaded into VRAM (~55 GB at Q4_K_M), so its memory footprint is large despite the efficient compute profile. It supports a 256K context window and multimodal vision input, and is best suited to servers or high-end workstations with 48 GB+ GPUs.
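To make the memory side of that tradeoff concrete, here is a minimal back-of-the-envelope sketch. The bytes-per-weight averages are assumptions taken from the requirements table below; real runtimes add KV cache and overhead on top of the raw weight footprint:

```python
# Why all 109B parameters must stay resident: the router activates only a few
# experts per token, but any expert can be chosen at any step, so every
# expert's weights must sit in VRAM. Rough weight footprint per quantization
# (weights only, no KV cache or overhead; bytes-per-weight values are assumed):
BYTES_PER_WEIGHT = {"Q4_K_M": 0.5, "Q8_0": 1.0, "F16": 2.0}
TOTAL_PARAMS = 109e9

for quant, bpw in BYTES_PER_WEIGHT.items():
    gib = TOTAL_PARAMS * bpw / 1024**3
    print(f"{quant}: ~{gib:.0f} GiB of weights")
# Q4_K_M comes out to ~51 GiB, which matches the "~55 GB" figure above
# once roughly 10% runtime overhead is added.
```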
Technical Specifications
System Requirements
Estimated VRAM, in GB, with 10% overhead included, for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality vs FP16 | 1K ctx | 195K ctx | 256K ctx |
|---|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% | 56.53 GB (Datacenter GPU) | 92.96 GB (Cluster / Multi-GPU) | 104.3 GB (Cluster / Multi-GPU) |
| Q8_0 | 1.00 | ~100% | 112.9 GB (Cluster / Multi-GPU) | 149.3 GB (Cluster / Multi-GPU) | 160.7 GB (Cluster / Multi-GPU) |
| F16 | 2.00 | Reference | 225.5 GB (Cluster / Multi-GPU) | 262.0 GB (Cluster / Multi-GPU) | 273.4 GB (Cluster / Multi-GPU) |
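The table can be approximated with a simple model: weight bytes plus a KV cache that grows linearly with context, all scaled by the 10% overhead. The sketch below illustrates this; the KV-cache constant is fitted to the row deltas above and is an assumption, not a published spec:

```python
# Minimal VRAM estimator approximating the table above.
# Assumptions (fitted to the table, not official figures):
#   - weights  = total parameters * bytes-per-weight
#   - KV cache grows linearly with context (~0.17 GiB per 1K tokens)
#   - the total is scaled by the 10% overhead stated above
GIB = 1024**3
TOTAL_PARAMS = 109e9
KV_GIB_PER_1K_TOKENS = 0.17  # assumed; derived from the table's row deltas
OVERHEAD = 1.10

def estimate_vram_gib(bytes_per_weight: float, ctx_tokens: int) -> float:
    weights_gib = TOTAL_PARAMS * bytes_per_weight / GIB
    kv_gib = KV_GIB_PER_1K_TOKENS * ctx_tokens / 1000
    return (weights_gib + kv_gib) * OVERHEAD

for quant, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    row = [f"{estimate_vram_gib(bpw, ctx):6.1f}" for ctx in (1_000, 195_000, 256_000)]
    print(f"{quant:7s}", *row)
# Lands within a few GB of each table cell; the calculator likely also
# reserves a small fixed activation buffer that this sketch omits.
```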
Find the right GPU for Llama 4 Scout (MoE)
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.