OLMo 3 7B
- Parameters: 7.0B
- Max Context: 32K
- Architecture: Dense
- Released: —
- Modality: Text
About OLMo 3 7B
OLMo 3 7B is a dense transformer language model from the AI2 OLMo family, with 7B parameters across 32 layers. It supports up to 32K tokens of context, uses a hidden dimension of 4096, and employs 8 KV heads for efficient grouped-query attention (GQA). The model is fully open, with data, code, and weights released to support transparent research.
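To make the GQA benefit concrete, the sketch below estimates the per-token KV-cache footprint from the published shape. The query-head count (32, which gives a 128-dim head from the 4096 hidden size) and the fp16 cache precision are assumptions, not published figures.

```python
# Per-token KV-cache footprint for OLMo 3 7B under grouped-query attention.
# Published: 32 layers, hidden dim 4096, 8 KV heads.
# Assumed: 32 query heads -> head_dim = 4096 // 32 = 128; fp16 (2-byte) cache.
N_LAYERS, HIDDEN, N_KV_HEADS, N_Q_HEADS, KV_BYTES = 32, 4096, 8, 32, 2
HEAD_DIM = HIDDEN // N_Q_HEADS  # 128

def kv_bytes_per_token(n_kv_heads: int) -> int:
    # One K and one V tensor per layer, each n_kv_heads * HEAD_DIM values.
    return N_LAYERS * 2 * n_kv_heads * HEAD_DIM * KV_BYTES

gqa = kv_bytes_per_token(N_KV_HEADS)  # 8 KV heads (GQA)
mha = kv_bytes_per_token(N_Q_HEADS)   # 32 KV heads (plain multi-head attention)
print(f"GQA: {gqa / 1024:.0f} KiB/token, MHA: {mha / 1024:.0f} KiB/token")
# -> GQA: 128 KiB/token, MHA: 512 KiB/token (a 4x KV-cache saving)
```

Under these assumptions, sharing each KV head across four query heads cuts the KV cache to a quarter of what full multi-head attention would need at the same context length.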
Technical Specifications
System Requirements
Estimated VRAM (GiB, with 10% overhead) for different quantization methods and context lengths.
| Quantization | Bytes/Weight | Quality | 1K ctx (GiB) | 32K ctx (GiB) |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 3.74 (Consumer GPU) | 7.62 (Consumer GPU) |
| Q8_0 | 1.00 | ~100% of FP16 | 7.36 (Consumer GPU) | 11.24 (Consumer GPU) |
| F16 | 2.00 | Reference | 14.60 (Consumer GPU) | 18.47 (Consumer GPU) |
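As a rough cross-check of the table, here is a minimal sketch of the estimate: weight bytes (parameters × bytes per weight) plus 10% overhead, plus an fp16 KV cache sized as in the GQA sketch above. The exact parameter count, where the overhead is applied, and the cache precision are assumptions here, so expect small deviations from the table values.

```python
GIB = 1024**3
KV_BYTES_PER_TOKEN = 131072  # from the GQA sketch: 32 layers * 2 * 8 heads * 128 dim * 2 bytes

def estimate_vram_gib(params: float, bytes_per_weight: float,
                      ctx_tokens: int, overhead: float = 0.10) -> float:
    # Weight memory (with 10% overhead) plus fp16 KV cache for the context.
    weights = params * bytes_per_weight * (1 + overhead)
    kv_cache = ctx_tokens * KV_BYTES_PER_TOKEN
    return (weights + kv_cache) / GIB

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    for ctx in (1024, 32 * 1024):
        print(f"{name:6s} @ {ctx // 1024:2d}K ctx: "
              f"{estimate_vram_gib(7.0e9, bpw, ctx):5.2f} GiB")
```

With a nominal 7.0B parameter count this reproduces the table to within about 1% (e.g. ~3.71 GiB vs 3.74 GiB for Q4_K_M at 1K context); the remaining gap likely comes from the checkpoint's exact parameter count.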
Find the right GPU for OLMo 3 7B
Use the interactive VRAM Calculator to see how much memory you need at any quantization level, context length, and overhead setting.