
Nemotron 3 Nano 30B-A3B (MoE)


Parameters: 30.0B
Active Parameters: 3.0B per token
Max Context: 256K
Architecture: MoE
Released:
Modality: Text

About Nemotron 3 Nano 30B-A3B (MoE)

Nemotron 3 Nano 30B-A3B (MoE) is a mixture-of-experts (MoE) transformer language model from NVIDIA's Nemotron family, with 30B parameters across 40 layers. All 30B parameters are loaded into VRAM, but only about 3B are active per token. It supports up to 262K tokens of context, uses a hidden dimension of 2,560, and has 8 KV heads for efficient grouped-query attention (GQA). The model is released under the NVIDIA Open Model License and targets efficient local reasoning and agentic workloads.

Reasoning · Agentic
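
For local experimentation, a model of this shape can be loaded through the Hugging Face Transformers API. The snippet below is a minimal sketch only: the repository ID is an assumption (check the actual model card for the correct name, license terms, and any extra loading requirements), and the dtype and device settings are illustrative.

```python
# Minimal sketch of running the model locally with Hugging Face Transformers.
# NOTE: the repository ID below is illustrative only; consult the real model
# card for the correct name and license terms before downloading.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/Nemotron-3-Nano-30B-A3B"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # full-precision weights need ~60 GB; quantize for smaller GPUs
    device_map="auto",           # spread layers across available GPUs / CPU
)

messages = [{"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

At bfloat16 the weights alone occupy roughly 60 GB, so on a single consumer GPU a quantized build (see the Q4_K_M row in the System Requirements table below) is the realistic option.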

Technical Specifications

Total Parameters: 30.0B
Active Parameters: 3.0B per token
Architecture: Mixture of Experts
Total Experts: 3
Attention Type: GQA
Hidden Dimension: d = 2,560
Transformer Layers: 40
Attention Heads: 32
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
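
With GQA, only the 8 KV heads (rather than all 32 query heads) contribute to the KV cache, which is what keeps long-context memory manageable. The back-of-the-envelope sketch below uses the layer and head counts listed above and assumes an FP16 KV cache; actual inference engines may store the cache in other formats.

```python
# Rough KV-cache estimate from the specs above (assumes an FP16 cache;
# real engines may use different precisions or paged layouts).
n_layers = 40
n_kv_heads = 8        # GQA: 32 query heads share 8 KV heads
head_dim = 128
bytes_per_elem = 2    # FP16

# Per token: keys + values, across every layer and KV head.
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
print(f"{kv_bytes_per_token / 1024:.0f} KiB per token")           # 160 KiB

# At the full 256K (262,144-token) context window:
ctx = 262_144
print(f"{ctx * kv_bytes_per_token / 1024**3:.0f} GiB KV cache")   # ~40 GiB
```

That ~40 GiB of cache at full context is close to the roughly 40 GB growth between the 1K and 256K columns in the System Requirements table below, which is why context length, not weight quantization, dominates memory use for long prompts.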

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization   B/W    Quality vs FP16   1K ctx                  195K ctx                256K ctx
Q4_K_M         0.50   ~97%              15.66 GB (consumer)     46.02 GB (datacenter)   55.51 GB (datacenter)
Q8_0           1.00   ~100%             31.17 GB (datacenter)   61.53 GB (datacenter)   71.01 GB (datacenter)
F16            2.00   Reference         62.18 GB (datacenter)   92.54 GB (multi-GPU)    102.0 GB (multi-GPU)

Consumer: fits a 24 GB consumer GPU
Datacenter: fits an 80 GB datacenter GPU
Multi-GPU: requires a cluster or multiple GPUs
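
The estimates above can be approximated from first principles: the quantized weight footprint at each format's bytes-per-weight, the stated 10% overhead, and an FP16 KV cache for the chosen context length. The sketch below is one plausible reconstruction of that arithmetic, not the calculator's exact formula; in particular, whether the overhead also applies to the KV cache is not stated, so here it is applied to the weights only.

```python
# Approximate the VRAM table above: quantized weights plus a 10% overhead,
# plus an FP16 KV cache for the chosen context length. This is a rough
# reconstruction of the methodology, not the calculator's exact formula.
def estimate_vram_gib(bytes_per_weight, ctx_tokens, params=30.0e9,
                      n_layers=40, n_kv_heads=8, head_dim=128, overhead=0.10):
    weights_gib = params * bytes_per_weight * (1 + overhead) / 1024**3
    kv_gib = ctx_tokens * 2 * n_layers * n_kv_heads * head_dim * 2 / 1024**3
    return weights_gib + kv_gib

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    row = ", ".join(f"{estimate_vram_gib(bpw, ctx):.2f}"
                    for ctx in (1 * 1024, 195 * 1024, 256 * 1024))
    print(f"{name}: {row} GiB at 1K / 195K / 256K context")
```

Run as written, this lands within about a gigabyte of each cell in the table, so the published figures appear to follow roughly this recipe; the small remainder presumably covers additional runtime buffers.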


Find the right GPU for Nemotron 3 Nano 30B-A3B (MoE)

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.