DeepSeek-Coder-V2-Lite-NF4

An NF4-quantized build of DeepSeek-Coder-V2-Lite-Instruct, intended for tool-integrated reasoning in the AIMO3 competition.

Key Specs

Spec            Value
--------------  -----------
Total Params    16B
Active Params   2.4B (MoE)
Context Length  128K
VRAM (NF4)      ~10GB
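
As a rough sanity check of the VRAM figure (an estimate, not from the card): 16B parameters at 4 bits is about 8 GB of weights, and the remainder is accounted for by tensors kept in higher precision plus runtime overhead, which lands near ~10 GB.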

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization with bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "aphoticshaman/deepseek-coder-v2-lite-nf4",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "aphoticshaman/deepseek-coder-v2-lite-nf4",
    trust_remote_code=True,  # DeepSeek-Coder-V2 ships custom code
)
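
A minimal generation sketch using the tokenizer's chat template; the prompt and decoding settings below are illustrative assumptions, not part of the model card:

# Illustrative prompt; any instruct-style message works
messages = [
    {"role": "user", "content": "Write a Python function that returns the nth Fibonacci number."}
]
# Build the instruct prompt and return input ids as a tensor
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, dropping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))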

Author

Ryan J Cardwell (Archer Phoenix) - AIMO3 Competitor
