SmolLM3 3B
| Spec | Value |
|---|---|
| Parameters | 3.0B |
| Max Context | 8K |
| Architecture | Dense |
| Released | — |
| Modality | Text |
About SmolLM3 3B
SmolLM3 3B is a dense transformer language model from the SmolLM family, containing 3B parameters across 32 layers. It supports up to 8K tokens of context with a hidden dimension of 2560 and 8 KV heads for efficient grouped-query attention (GQA). Licensed under Apache 2.0, it is small enough to run on CPUs, in browsers, and on phones, making it a good fit for educational use.
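To illustrate what "8 KV heads for grouped-query attention" means, here is a minimal NumPy sketch of GQA: several query heads share each key/value head, shrinking the KV cache. The head counts and dimensions below are toy values for illustration, not SmolLM3's actual configuration (the text above only states 8 KV heads and a 2560 hidden size).

```python
import numpy as np

def gqa(q, k, v, n_kv_heads):
    """Grouped-query attention sketch.

    q: (n_q_heads, seq, head_dim); k, v: (n_kv_heads, seq, head_dim).
    n_q_heads must be a multiple of n_kv_heads.
    """
    n_q_heads, seq, d = q.shape
    group = n_q_heads // n_kv_heads
    # Each KV head is shared by `group` query heads: repeat it to align shapes.
    k = np.repeat(k, group, axis=0)
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    # Numerically stable softmax over the key axis.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

# Toy example: 4 query heads sharing 2 KV heads.
out = gqa(np.ones((4, 3, 8)), np.ones((2, 3, 8)), np.ones((2, 3, 8)), n_kv_heads=2)
```

With 8 KV heads instead of one per query head, the KV cache shrinks proportionally, which is the main memory win at long context.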
Technical Specifications
System Requirements
Estimated VRAM (GB), assuming 10% runtime overhead, for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality | 1K ctx (GB) | 8K ctx (GB) |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 1.68 (Consumer GPU) | 2.55 (Consumer GPU) |
| Q8_0 | 1.00 | ~100% of FP16 | 3.23 (Consumer GPU) | 4.10 (Consumer GPU) |
| F16 | 2.00 | Reference | 6.33 (Consumer GPU) | 7.20 (Consumer GPU) |
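A rough sketch of how such estimates can be computed, assuming VRAM ≈ (weights + FP16 KV cache) × (1 + overhead). The head dimension (128) and the exact accounting are assumptions here, so the results will approximate, not reproduce, the table above.

```python
def estimate_vram_gb(params_billion, bytes_per_weight, ctx_tokens,
                     n_layers=32, n_kv_heads=8, head_dim=128,
                     kv_bytes=2, overhead=0.10):
    """Rough VRAM estimate in GB for a dense GQA transformer.

    head_dim=128 and the simple weights+KV model are assumptions,
    not SmolLM3's published configuration.
    """
    weights = params_billion * 1e9 * bytes_per_weight
    # KV cache: 2 tensors (K and V) per layer, one per KV head, per token.
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * kv_bytes * ctx_tokens
    return (weights + kv_cache) * (1 + overhead) / 1e9

# e.g. Q4_K_M (~0.5 bytes/weight) at 8K context:
q4_8k = estimate_vram_gb(3.0, 0.5, 8192)
```

The overhead and KV-cache terms are why the 8K-context column exceeds the 1K column even though the weights are identical.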
Find the right GPU for SmolLM3 3B
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.