Snowflake Arctic (MoE)
Parameters: 480.0B
Active: 17.0B
Max Context: 32K
Architecture: MoE
Released: —
Modality: Text
About Snowflake Arctic (MoE)
Snowflake Arctic (MoE) is a mixture-of-experts (MoE) transformer language model from the Snowflake family, containing 480B parameters across 64 layers. All 480B parameters must be loaded into VRAM, but only 17B are active per token. It supports up to 32K tokens of context, with a hidden dimension of 7168 and 8 KV heads for efficient grouped-query attention (GQA). Arctic is released under the Apache 2.0 license, is positioned as an enterprise SQL and coding model, and requires server-class, multi-GPU hardware.
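To make the GQA figures concrete, here is a back-of-envelope sketch of the KV-cache footprint implied by the specs above (64 layers, hidden dimension 7168, 8 KV heads). The 128-dimensional heads and the 56-query-head comparison are illustrative assumptions, not published specs.

```python
# Rough KV-cache sizing from the attention configuration described above.
# From this page: 64 layers, hidden dim 7168, 8 KV heads (GQA).
# Assumed for illustration: head_dim = 128, i.e. 56 query heads.

N_LAYERS = 64
HIDDEN_DIM = 7168
N_KV_HEADS = 8          # grouped-query attention
HEAD_DIM = 128          # assumption, not a confirmed spec
BYTES_FP16 = 2

def kv_cache_bytes_per_token(n_kv_heads: int) -> int:
    """K and V vectors stored per token, per layer, summed over all layers."""
    return 2 * N_LAYERS * n_kv_heads * HEAD_DIM * BYTES_FP16

gqa = kv_cache_bytes_per_token(N_KV_HEADS)               # 8 KV heads
mha = kv_cache_bytes_per_token(HIDDEN_DIM // HEAD_DIM)   # hypothetical full MHA: 56 heads

print(f"GQA KV cache: {gqa / 1e6:.2f} MB per token "
      f"({gqa * 32_768 / 1e9:.1f} GB at 32K context)")
print(f"A full multi-head cache would be about {mha / gqa:.0f}x larger")
```

Under these assumptions the cache costs roughly 0.26 MB per token, or about 8.6 GB at the full 32K context, which is why the 32K column in the table below adds only a few GB on top of the weights.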
Technical Specifications
System Requirements
Estimated VRAM at 10% overhead for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality | 1K ctx (GB) | 32K ctx (GB) |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 248.4 (Cluster / Multi-GPU) | 256.1 (Cluster / Multi-GPU) |
| Q8_0 | 1.00 | ~100% of FP16 | 496.5 (Cluster / Multi-GPU) | 504.2 (Cluster / Multi-GPU) |
| F16 | 2.00 | Reference | 992.7 (Cluster / Multi-GPU) | 1000.4 (Cluster / Multi-GPU) |
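The table can be approximated with a simple weights-plus-KV-cache formula. The sketch below is a rough estimator under the page's stated assumptions (10% overhead, bytes-per-weight from the table, KV-cache sizing from the GQA sketch above); it lands in the same ballpark as the table but is not the calculator's exact formula, so the decimals will not match.

```python
# Rough VRAM estimator mirroring the table above: quantized weights plus an
# FP16 KV cache, scaled by the page's 10% overhead. A sketch, not the site's
# actual calculator.

TOTAL_PARAMS = 480e9     # all experts stay resident in VRAM, even though only 17B are active
OVERHEAD = 1.10          # stated 10% overhead
KV_BYTES_PER_TOKEN = 2 * 64 * 8 * 128 * 2   # from the GQA sketch above (assumed head_dim 128)

QUANTS = {"Q4_K_M": 0.50, "Q8_0": 1.00, "F16": 2.00}   # bytes per weight, from the table

def estimate_vram_gb(bytes_per_weight: float, ctx_tokens: int) -> float:
    weights = TOTAL_PARAMS * bytes_per_weight
    kv_cache = KV_BYTES_PER_TOKEN * ctx_tokens
    return (weights + kv_cache) * OVERHEAD / 1e9

for name, bpw in QUANTS.items():
    print(f"{name}: {estimate_vram_gb(bpw, 1024):.1f} GB @ 1K ctx, "
          f"{estimate_vram_gb(bpw, 32768):.1f} GB @ 32K ctx")
```

The key point the estimate illustrates is that MoE sizing is driven by total parameters (480B), not active parameters (17B): every quantization level here still requires a multi-GPU cluster.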
Find the right GPU for Snowflake Arctic (MoE)
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.