# GGUF Files for LocoOperator-4B
These are the GGUF files for LocoreMind/LocoOperator-4B.
## Downloads
| GGUF Link | Quantization | Description |
|---|---|---|
| Download | Q2_K | Lowest quality |
| Download | Q3_K_S | |
| Download | IQ3_S | i-quant; often preferable to Q3_K_S |
| Download | IQ3_M | i-quant |
| Download | Q3_K_M | |
| Download | Q3_K_L | |
| Download | IQ4_XS | i-quant |
| Download | Q4_K_S | Fast with good performance |
| Download | Q4_K_M | Recommended: Perfect mix of speed and performance |
| Download | Q5_K_S | |
| Download | Q5_K_M | |
| Download | Q6_K | Very good quality |
| Download | Q8_0 | Best quality |
| Download | f16 | 16-bit, unquantized; don't bother, use a quant |
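To fetch a specific quant programmatically, something like the sketch below should work with the `huggingface_hub` library; the filename is a hypothetical guess at this repo's naming scheme, so check the repo's file list for the exact name.

```python
# Hedged sketch: download one quant from this repo with huggingface_hub.
# The filename below is hypothetical; verify it against the repo's file list.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Flexan/LocoreMind-LocoOperator-4B-GGUF",
    filename="LocoreMind-LocoOperator-4B.Q4_K_M.gguf",  # hypothetical name
)
print(path)  # local cache path of the downloaded GGUF
```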
## Note from Flexan
I provide GGUFs and quantizations of publicly available models that do not yet have a GGUF equivalent. The process is not yet automated: I download, convert, quantize, and upload them by hand, usually for models I find interesting and want to try out.
If a quant you'd like is missing, or you'd like another public model converted, you can request it in the community tab. For questions about the model itself, please refer to the original model repo.
# Model Card for LocoOperator-4B
## Introduction
LocoOperator-4B is a 4B-parameter tool-calling agent model trained via knowledge distillation from Qwen3-Coder-Next inference traces. It specializes in multi-turn codebase exploration — reading files, searching code, and navigating project structures within a Claude Code-style agent loop. Designed as a local subagent, it runs via llama.cpp at zero API cost.
| LocoOperator-4B | |
|---|---|
| Base Model | Qwen3-4B-Instruct-2507 |
| Teacher Model | Qwen3-Coder-Next |
| Training Method | Full-parameter SFT (distillation) |
| Training Data | 170,356 multi-turn conversation samples |
| Max Sequence Length | 16,384 tokens |
| Training Hardware | 4x NVIDIA H200 141GB SXM5 |
| Training Time | ~25 hours |
| Framework | MS-SWIFT |
## Key Features
- Tool-Calling Agent: Generates structured `<tool_call>` JSON for Read, Grep, Glob, Bash, Write, Edit, and Task (subagent delegation); see the format sketch after this list
- 100% JSON Validity: Every tool call is valid JSON with all required arguments, outperforming the teacher model (87.6% argument validity)
- Local Deployment: GGUF quantized, runs on Mac Studio via llama.cpp at zero API cost
- Lightweight Explorer: 4B parameters, optimized for fast codebase search and navigation
- Multi-Turn: Handles conversation depths of 3–33 messages with consistent tool-calling behavior
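For illustration, a single emitted tool call plausibly looks like the Qwen-style tagged JSON sketched below; the tag framing follows the Qwen3 chat-template convention, and the tool arguments are invented for this example.

```python
# Sketch: extract and parse one <tool_call> block from model output.
# The Qwen-style tag framing is an assumption; the Grep arguments are invented.
import json

raw = """<tool_call>
{"name": "Grep", "arguments": {"pattern": "pyproject", "path": "/Users/developer/workspace/code-analyzer/projects/black"}}
</tool_call>"""

payload = raw.removeprefix("<tool_call>").removesuffix("</tool_call>").strip()
call = json.loads(payload)
print(call["name"], call["arguments"])
```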
## Performance
Evaluated on 65 multi-turn conversation samples from diverse open-source projects (scipy, fastapi, arrow, attrs, gevent, gunicorn, etc.), with labels generated by Qwen3-Coder-Next.
### Core Metrics
| Metric | Score |
|---|---|
| Tool Call Presence Alignment | 100% (65/65) |
| First Tool Type Match | 65.6% (40/61) |
| JSON Validity | 100% (76/76) |
| Argument Syntax Correctness | 100% (76/76) |
The model perfectly learned when to use tools vs. when to respond with text (100% presence alignment). Tool type mismatches are between semantically similar tools (e.g. Grep vs Read) — different but often valid strategies.
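As a hedged illustration of how these two alignment metrics could be computed (the data shapes and helper names below are invented, not the actual evaluation code): first-tool-type match is presumably only scored on samples where both model and teacher actually called a tool, which would explain the denominator of 61 rather than 65.

```python
# Hypothetical metric sketch; pred_calls/label_calls are lists of tool-call
# dicts like {"name": "Grep", "arguments": {...}} parsed from each sample.
def presence_alignment(pred_calls: list, label_calls: list) -> bool:
    # aligned when both sides call a tool, or both respond with plain text
    return bool(pred_calls) == bool(label_calls)

def first_tool_match(pred_calls: list, label_calls: list) -> bool | None:
    if not pred_calls or not label_calls:
        return None  # only scored when both sides made a tool call
    return pred_calls[0]["name"] == label_calls[0]["name"]
```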
### Tool Distribution Comparison
### JSON & Argument Syntax Correctness
| Model | JSON Valid | Argument Syntax Valid |
|---|---|---|
| LocoOperator-4B | 76/76 (100%) | 76/76 (100%) |
| Qwen3-Coder-Next (teacher) | 89/89 (100%) | 78/89 (87.6%) |
LocoOperator-4B achieves perfect structured output. The teacher model has 11 tool calls with missing required arguments (empty `arguments: {}`).
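As a rough sketch of what these two checks amount to (the required-argument schema below is illustrative, not the official tool spec): a call counts as JSON-valid if it parses, and argument-syntax-valid if no required key is missing.

```python
# Hypothetical validity check mirroring the two reported metrics.
# REQUIRED_ARGS is an illustrative schema, not the official tool definitions.
import json

REQUIRED_ARGS = {
    "Read": {"file_path"},
    "Grep": {"pattern"},
    "Glob": {"pattern"},
    "Bash": {"command"},
}

def check_tool_call(payload: str) -> tuple[bool, bool]:
    try:
        call = json.loads(payload)
    except json.JSONDecodeError:
        return False, False  # fails JSON validity
    required = REQUIRED_ARGS.get(call.get("name"), set())
    args = call.get("arguments") or {}
    return True, required <= set(args)

# A teacher-style failure case: valid JSON, but empty arguments.
print(check_tool_call('{"name": "Read", "arguments": {}}'))  # (True, False)
```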
## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "LocoreMind/LocoOperator-4B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the messages
messages = [
    {
        "role": "system",
        "content": "You are a read-only codebase search specialist.\n\nCRITICAL CONSTRAINTS:\n1. STRICTLY READ-ONLY: You cannot create, edit, delete, move files, or run any state-changing commands. Use tools/bash ONLY for reading (e.g., ls, find, cat, grep).\n2. EFFICIENCY: Spawn multiple parallel tool calls for faster searching.\n3. OUTPUT RULES: \n - ALWAYS use absolute file paths.\n - STRICTLY NO EMOJIS in your response.\n - Output your final report directly. Do not use colons before tool calls.\n\nENV: Working directory is /Users/developer/workspace/code-analyzer (macOS, zsh)."
    },
    {
        "role": "user",
        "content": "Analyze the Black codebase at `/Users/developer/workspace/code-analyzer/projects/black`.\nFind and explain:\n1. How Black discovers config files.\n2. The exact search order for config files.\n3. Supported config file formats.\n4. Where this configuration discovery logic lives in the codebase.\n\nReturn a comprehensive answer with relevant code snippets and absolute file paths."
    }
]

# prepare the model input
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)

print(content)
```
## Local Deployment
For GGUF quantized deployment with llama.cpp, hybrid proxy routing, and batch analysis pipelines, refer to our GitHub repository.
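As a rough local-inference sketch (not taken from the repo), the Q4_K_M file could be served in-process with the llama-cpp-python bindings; the model path below is hypothetical.

```python
# Minimal llama-cpp-python sketch; the GGUF filename is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="LocoOperator-4B.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=16384,  # matches the model's trained max sequence length
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List the tools you can call."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```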
## Training Details
| Parameter | Value |
|---|---|
| Base model | Qwen3-4B-Instruct-2507 |
| Teacher model | Qwen3-Coder-Next |
| Method | Full-parameter SFT |
| Training data | 170,356 samples |
| Hardware | 4x NVIDIA H200 141GB SXM5 |
| Parallelism | DDP (no DeepSpeed) |
| Precision | BF16 |
| Epochs | 1 |
| Batch size | 2/GPU, gradient accumulation 4 (effective batch 32) |
| Learning rate | 2e-5, warmup ratio 0.03 |
| Max sequence length | 16,384 tokens |
| Template | qwen3_nothinking |
| Framework | MS-SWIFT |
| Training time | ~25 hours |
| Checkpoint | Step 2524 |
## Known Limitations
- First-tool-type match is 65.6% — the model sometimes picks a different (but not necessarily wrong) tool than the teacher
- Tends to under-generate parallel tool calls compared to the teacher (76 vs 89 total calls across 65 samples)
- Preference for Bash over Read may indicate the model defaults to shell commands where file reads would be more appropriate
- Evaluated on 65 samples only; larger-scale evaluation needed
## License
MIT
## Acknowledgments