Mistral · Dense · Apache 2.0

Ministral 3 14B

Ministral 3 14B is the largest of the Ministral 3 cascade-distilled family, derived from the Mistral Small 3.1 24B teacher. It includes a vision encoder and delivers strong laptop-class coding and general performance. Quantized to Q4_K_M it needs roughly 8 GB of VRAM, so it fits on 12 GB GPUs and runs fast on 16 GB GPUs.

14.0B

Parameters

256K

Max Context

Dense

Architecture

Sep 16, 2025

Released

Text + Vision

Modality

About Ministral 3 14B

Ministral 3 14B is the largest of the Ministral 3 cascade-distilled family, derived from the Mistral Small 3.1 24B teacher. It includes a vision encoder and delivers strong laptop-class coding and general performance. Quantized to Q4_K_M it needs roughly 8 GB of VRAM, so it fits on 12 GB GPUs and runs fast on 16 GB GPUs. Apache 2.0 licensed with 262K context support. A strong choice for users who want the best laptop-class model with vision capabilities.

General Purpose · Code · Vision · Laptop · Commercial

Technical Specifications

Total Parameters: 14.0B
Architecture: Dense
Attention Type: GQA (Grouped Query Attention)
Hidden Dimension: d = 5,120
Transformer Layers: 40
Attention Heads: 32
KV Heads: n_kv = 8
Head Dimension: d_head = 128
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
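
The GQA shape above also fixes the KV-cache footprint per token, which is what makes long contexts expensive. The back-of-the-envelope sketch below is an illustration, not an official figure: it assumes an unquantized FP16 cache and uses only the layer count, KV heads, and head dimension listed above.

```python
# Back-of-the-envelope KV-cache size for the GQA configuration listed above.
# Assumption (not stated on this page): the cache is kept in FP16 (2 bytes/value).
N_LAYERS = 40        # Transformer Layers
N_KV = 8             # KV Heads (n_kv)
D_HEAD = 128         # Head Dimension (d_head)
BYTES_PER_VALUE = 2  # FP16 cache

def kv_cache_bytes(n_tokens: int) -> int:
    # Each layer stores K and V: n_kv heads * d_head values per token each.
    return 2 * N_LAYERS * N_KV * D_HEAD * BYTES_PER_VALUE * n_tokens

print(kv_cache_bytes(1) / 1024)              # 160.0 KiB per token
print(kv_cache_bytes(256 * 1024) / 1024**3)  # 40.0 GiB at the 256K maximum context
```

Under these assumptions, the cache at the full 256K window (about 40 GiB) alone exceeds the size of the FP16 weights, which is why context length dominates the requirements below.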

System Requirements

Estimated VRAM at 10% overhead for different quantization methods and context sizes.

Quantization | 1K ctx | 195K ctx | 256K ctx
Q4_K_M (0.50 bytes/weight, ~97% of FP16 quality) | 7.39 GB (consumer GPU) | 37.75 GB (datacenter GPU) | 47.24 GB (datacenter GPU)
Q8_0 (1.00 bytes/weight, ~100% of FP16 quality) | 14.63 GB (consumer GPU) | 44.99 GB (datacenter GPU) | 54.47 GB (datacenter GPU)
F16 (2.00 bytes/weight, reference) | 29.10 GB (datacenter GPU) | 59.46 GB (datacenter GPU) | 68.95 GB (datacenter GPU)

Legend:
Consumer GPU: fits a 24 GB consumer GPU
Datacenter GPU: fits an 80 GB datacenter GPU
Cluster: requires multi-GPU / cluster
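
The exact method behind the table isn't spelled out, but a simple estimator of the same shape lands close to, though not exactly on, the figures above. Everything in the sketch below is an assumption: quantized weight bytes with the 10% overhead applied to the weights, an FP16 KV cache, and sizes taken as GiB.

```python
# Rough VRAM estimator in the spirit of the table above. Assumptions (not
# confirmed by this page): FP16 KV cache, 10% overhead applied to the weight
# bytes, and sizes expressed in GiB.
GIB = 1024 ** 3
PARAMS = 14.0e9                            # total parameters
KV_BYTES_PER_TOKEN = 2 * 40 * 8 * 128 * 2  # K+V, 40 layers, n_kv=8, d_head=128, FP16

def estimate_vram_gib(bytes_per_weight: float, n_ctx: int, overhead: float = 0.10) -> float:
    weights = PARAMS * bytes_per_weight * (1 + overhead)
    kv_cache = KV_BYTES_PER_TOKEN * n_ctx
    return (weights + kv_cache) / GIB

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    row = ", ".join(f"{estimate_vram_gib(bpw, ctx):.2f}" for ctx in (1024, 195 * 1024, 256 * 1024))
    print(f"{name}: {row}")
```

Small residual differences from the table are expected, since its exact accounting isn't published.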


Find the right GPU for Ministral 3 14B

Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.