Kimi K2.6 (MoE)

Parameters: 1.0T
Active: 32.0B
Max Context: 256K
Architecture: MoE
Released:
Modality: Text + Vision

About Kimi K2.6 (MoE)

Kimi K2.6 (MoE) is a mixture-of-experts (MoE) transformer language model from the Kimi family, with 1.0T total parameters across 61 layers, of which 32B are active per token; the full 1.0T must be loaded into VRAM. It supports up to 256K (262,144) tokens of context, with a hidden dimension of 7,168 and 8 KV heads for efficient grouped-query attention (GQA). The MoE layers use 384 experts, with 8 routed experts plus 1 shared expert active per token, and Multi-head Latent Attention (MLA) is used for KV-cache compression. The model is multimodal, pairing the language model with a 400M-parameter MoonViT vision encoder, and is released under a Modified MIT license. At this scale it is a server-class model.
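
As a concrete illustration of the routing described above, here is a minimal sketch of a top-k MoE layer using the expert counts quoted on this page (384 experts, 8 routed plus 1 shared active per token). The hidden size is shrunk from 7,168 to 64 so the toy runs instantly; the single-matrix "experts", router, and gating below are illustrative assumptions, not Kimi's actual implementation.

```python
# Minimal sketch of the routed-expert idea described above (384 experts,
# 8 routed + 1 shared active per token). Hidden size is shrunk from 7,168
# to 64 so the toy runs quickly; everything here is an illustrative
# assumption, not Kimi's actual implementation.
import numpy as np

d_model, n_experts, top_k = 64, 384, 8
rng = np.random.default_rng(0)

router_w = rng.standard_normal((d_model, n_experts)) * 0.02           # router projection
expert_w = rng.standard_normal((n_experts, d_model, d_model)) * 0.02  # toy one-matrix "experts"
shared_w = rng.standard_normal((d_model, d_model)) * 0.02             # always-active shared expert

def moe_layer(x):
    """x: (d_model,) hidden state for one token -> (d_model,) output."""
    logits = x @ router_w                         # score every expert: (n_experts,)
    top = np.argsort(logits)[-top_k:]             # keep the 8 best-scoring experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                          # softmax over the selected experts only
    out = x @ shared_w                            # the shared expert sees every token
    for g, e in zip(gates, top):
        out += g * (x @ expert_w[e])              # only 8 of 384 expert weight sets are touched
    return out

print(moe_layer(rng.standard_normal(d_model)).shape)  # (64,)
```

Only the selected experts' weights participate in each token's forward pass, which is why 32B parameters are active per token even though all 1.0T must stay resident in memory.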

Research · Enterprise

Technical Specifications

Total Parameters: 1.0T
Active Parameters: 32.0B per token
Architecture: Mixture of Experts
Total Experts: 32
Attention Type: GQA (MoE)
Hidden Dimension: d = 7,168
Transformer Layers: 61
Attention Heads: 64
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
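
To make the attention figures above concrete, the sketch below estimates the KV-cache footprint per token directly from the table (61 layers, 8 KV heads, head dimension 128, fp16 cache). This is the standard back-of-the-envelope K+V accounting, not a figure published on this page, and it ignores the MLA compression mentioned in the About section, which would shrink the cache further.

```python
# Per-token KV-cache estimate from the spec table above (standard K+V
# accounting for GQA: 2 tensors x layers x kv_heads x head_dim x bytes).
# A generic back-of-the-envelope formula, not a published figure, and it
# ignores the MLA compression mentioned in the About section.
layers, kv_heads, head_dim = 61, 8, 128
bytes_per_elem = 2  # fp16 / bf16 cache

kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
print(f"KV cache per token: {kv_bytes_per_token / 1024:.0f} KiB")  # ~244 KiB

for ctx in (1_024, 195_000, 262_144):  # context sizes used in the table below
    print(f"{ctx:>7} tokens -> {kv_bytes_per_token * ctx / 1024**3:5.1f} GiB KV cache")
```

The roughly 61 GiB it predicts at the full 262,144-token context is in the same ballpark as the spread between the 1K and 256K columns in the table below.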

System Requirements

Estimated VRAM in GB, assuming 10% overhead, for different quantization methods and context sizes.

Quantization | 1K ctx | 195K ctx | 256K ctx
Q4_K_M (0.50 B/W, ~97% of FP16 quality) | 517.1 GB | 563.4 GB | 577.9 GB
Q8_0 (1.00 B/W, ~100% of FP16 quality) | 1034.0 GB | 1080.3 GB | 1094.8 GB
F16 (2.00 B/W, reference) | 2067.8 GB | 2114.1 GB | 2128.5 GB

Every configuration above requires a cluster or multi-GPU setup; none fits a 24 GB consumer GPU or an 80 GB datacenter GPU.
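
For readers who want to sanity-check figures like these, the sketch below reproduces the usual estimate behind them: total weights times bytes per weight, plus an fp16 KV cache, plus the 10% overhead stated above. The page's own calculator evidently uses a somewhat different accounting, so expect the sketch to land near, but not exactly on, the numbers in the table.

```python
# Rough VRAM estimate in the spirit of the table above: weights at a given
# bytes-per-weight, an fp16 KV cache, and the 10% overhead the page states.
# The site's calculator uses a somewhat different accounting, so these
# numbers land near, not exactly on, the table's values.
TOTAL_PARAMS = 1.0e12               # 1.0T parameters, all resident in VRAM (MoE)
LAYERS, KV_HEADS, HEAD_DIM = 61, 8, 128

def estimate_vram_gb(bytes_per_weight, ctx_tokens, overhead=0.10):
    weights = TOTAL_PARAMS * bytes_per_weight
    kv_cache = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2 * ctx_tokens  # fp16 K+V
    return (weights + kv_cache) * (1 + overhead) / 1e9

for name, bpw in (("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)):
    cols = "  ".join(f"{estimate_vram_gb(bpw, ctx):7.1f}"
                     for ctx in (1_024, 195_000, 262_144))
    print(f"{name:7s} {cols}  GB")
```

Whatever the exact accounting, every quantization level lands far beyond single-GPU territory, which is the table's main takeaway.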

Find the right GPU for Kimi K2.6 (MoE)

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.