DeepCoder 14B
- Parameters: 14.0B
- Max Context: 32K
- Architecture: Dense
- Released: —
- Modality: Text
About DeepCoder 14B
DeepCoder 14B is a dense transformer language model from the Coding family, containing 14B parameters across 40 layers. It supports up to 32K tokens of context, with a hidden dimension of 5120 and 8 KV heads for efficient grouped-query attention (GQA). Its code-reasoning ability is RL-derived, which places it in the class of capable local coding reasoners.
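The 8 KV heads are what keep long contexts affordable: under GQA the KV cache scales with the number of KV heads rather than query heads. Below is a minimal sketch of that sizing; the head dimension of 128 (5120 hidden / 40 assumed query heads) and the FP16 cache dtype are assumptions, since this page states only the layer count, hidden size, and KV-head count.

```python
# Minimal KV-cache sizing sketch for a GQA model. head_dim = 128 and the
# FP16 (2-byte) cache dtype are assumptions, not stated on this page.

def kv_cache_bytes_per_token(n_layers: int, n_kv_heads: int,
                             head_dim: int, bytes_per_elem: int = 2) -> int:
    # One K tensor and one V tensor per layer, hence the factor of 2.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem

per_token = kv_cache_bytes_per_token(n_layers=40, n_kv_heads=8, head_dim=128)
print(f"{per_token} bytes/token")                     # 163840 bytes = 160 KiB
print(f"{per_token * 32768 / 2**30:.2f} GiB @ 32K")   # ~5.00 GiB of KV cache
```

With 40 full KV heads instead of 8, the same cache would be 5x larger (roughly 25 GiB at 32K), which is the practical payoff of GQA here.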
Technical Specifications
System Requirements
Estimated VRAM in GB, assuming 10% overhead, for different quantization methods and context sizes.
| Quantization | Bytes/Weight | Quality | 1K ctx (GB) | 32K ctx (GB) |
|---|---|---|---|---|
| Q4_K_M | 0.50 | ~97% of FP16 | 7.39 (Consumer GPU) | 12.24 (Consumer GPU) |
| Q8_0 | 1.00 | ~100% of FP16 | 14.63 (Consumer GPU) | 19.47 (Consumer GPU) |
| F16 | 2.00 | Reference | 29.10 (Datacenter GPU) | 33.95 (Datacenter GPU) |
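The table can be roughly reproduced from first principles: weight bytes at the quoted bytes-per-weight, plus KV-cache bytes, plus the 10% overhead. The sketch below reuses the per-token KV figure from the GQA sketch above; the exact formula behind this page's estimates isn't published, so expect results within a few percent of the table rather than an exact match.

```python
# Hedged reconstruction of the table's estimate: weights + FP16 KV cache,
# with a flat 10% overhead. KV_BYTES_PER_TOKEN comes from the GQA sketch
# above; the page's real formula is unpublished, so this only approximates.
KV_BYTES_PER_TOKEN = 163_840  # 2 * 40 layers * 8 KV heads * 128 head_dim * 2 bytes

def estimate_vram_gib(params_b: float, bytes_per_weight: float,
                      ctx_tokens: int, overhead: float = 0.10) -> float:
    weights = params_b * 1e9 * bytes_per_weight        # model weights
    kv = ctx_tokens * KV_BYTES_PER_TOKEN               # attention KV cache
    return (weights + kv) * (1 + overhead) / 2**30     # GiB incl. overhead

for name, bpw in [("Q4_K_M", 0.50), ("Q8_0", 1.00), ("F16", 2.00)]:
    print(f"{name}: {estimate_vram_gib(14.0, bpw, 1024):.2f} GiB @ 1K, "
          f"{estimate_vram_gib(14.0, bpw, 32768):.2f} GiB @ 32K")
# Lands within a few percent of the table (e.g. Q4_K_M ~7.3 / ~12.7 GiB).
```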
Find the right GPU for DeepCoder 14B
Use the interactive VRAM Calculator to see exactly how much memory you need at any quantization level, context length, and overhead setting.