IBM Granite · Dense · Apache 2.0

Granite 3.1 8B


Parameters: 8.0B
Max Context: 128K
Architecture: Dense
Released:
Modality: Text

About Granite 3.1 8B

Granite 3.1 8B is a dense transformer language model from the IBM Granite family, containing 8B parameters across 40 layers. It supports up to 131K tokens of context with a hidden dimension of 4,096 and 8 KV heads for efficient grouped-query attention (GQA). The model is released under the Apache 2.0 license and targets enterprise chat, code, and safety use cases, with mature tooling for local deployment.


Technical Specifications

Total Parameters: 8.0B
Architecture: Dense
Attention Type: GQA (Grouped-Query Attention)
Hidden Dimension: d = 4,096
Transformer Layers: 40
Attention Heads: 32
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
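The GQA figures above translate directly into KV-cache footprint. As an illustrative sketch (assuming an FP16 cache; the layer and head counts come from the spec table), the cache cost at full context, and the saving over hypothetical full multi-head attention, work out as:

```python
# KV-cache sizing for Granite 3.1 8B, using the specs listed above.
# Assumption: FP16 (2-byte) cache entries; real servers may quantize the cache.
n_layers, n_heads, n_kv_heads, d_head = 40, 32, 8, 128
bytes_per_elem = 2  # FP16

def kv_cache_bytes(tokens: int, kv_heads: int) -> int:
    # 2x for the separate K and V tensors, stored per layer per cached token.
    return 2 * n_layers * kv_heads * d_head * bytes_per_elem * tokens

gqa = kv_cache_bytes(131_072, n_kv_heads)  # GQA: only the 8 KV heads are cached
mha = kv_cache_bytes(131_072, n_heads)     # hypothetical full MHA: all 32 heads
print(f"GQA: {gqa / 2**30:.0f} GiB at 128K ctx; full MHA would need {mha / 2**30:.0f} GiB")
# → GQA: 20 GiB at 128K ctx; full MHA would need 80 GiB
```

Caching 8 KV heads instead of 32 shrinks the cache 4x, which is what makes the 128K context practical on a single datacenter GPU.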

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization   Density    Quality          1K ctx                   128K ctx
Q4_K_M         0.50 B/W   ~97% of FP16     4.29 GB (consumer GPU)   24.14 GB (datacenter GPU)
Q8_0           1.00 B/W   ~100% of FP16    8.43 GB (consumer GPU)   28.27 GB (datacenter GPU)
F16            2.00 B/W   reference        16.70 GB (consumer GPU)  36.54 GB (datacenter GPU)

GPU tiers: consumer GPU = fits a 24 GB card; datacenter GPU = fits an 80 GB card; larger footprints require a multi-GPU cluster.
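The table's methodology (weights at a given bytes-per-weight, plus KV cache, plus 10% overhead) can be approximated with a short back-of-the-envelope script. This is a sketch under simplifying assumptions (uniform bytes-per-weight and an FP16 KV cache), so it only approximates the table's figures; real quantization formats mix precisions per tensor.

```python
# Back-of-the-envelope VRAM estimate: weights + KV cache + 10% overhead.
# Assumptions: uniform bytes-per-weight and an FP16 KV cache, so results
# approximate (but will not exactly match) the table above.
PARAMS = 8.0e9
KV_BYTES_PER_TOKEN = 2 * 40 * 8 * 128 * 2  # K+V x layers x KV heads x head dim x FP16

def vram_gib(bytes_per_weight: float, ctx_tokens: int, overhead: float = 0.10) -> float:
    weights = PARAMS * bytes_per_weight
    kv_cache = KV_BYTES_PER_TOKEN * ctx_tokens
    return (weights + kv_cache) * (1 + overhead) / 2**30

for name, bpw in [("Q4_K_M", 0.5), ("Q8_0", 1.0), ("F16", 2.0)]:
    print(f"{name}: {vram_gib(bpw, 1_024):5.2f} GiB @ 1K ctx, "
          f"{vram_gib(bpw, 131_072):5.2f} GiB @ 128K ctx")
```

Plugging in other context lengths or overhead factors gives quick feasibility checks before committing to a deployment target.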


Find the right GPU for Granite 3.1 8B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.