# Model Card for Qwen2.5-1.5B-Coder-Finetune
This model is a fine-tuned version of Qwen/Qwen2.5-1.5B, aimed at improving coding ability and producing explicit reasoning traces. It was trained on the Naholav/CodeGen-Diverse-5K dataset with a focus on structured reasoning expressed through `<think>` tags.
## Model Details
- Base Model: Qwen/Qwen2.5-1.5B
- Dataset: Naholav/CodeGen-Diverse-5K
- Language: English, Code
- License: Apache 2.0
- Finetuning Approach: Supervised Fine-Tuning (SFT) focusing on Input-Output-Solution structure with reasoning steps.
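As a minimal sketch of the SFT data layout (the helper and field names here are illustrative assumptions, not the dataset's actual column names), each training example can be flattened into one Input/`<think>`/Solution string:

```python
def build_sft_example(problem: str, reasoning: str, solution: str) -> str:
    """Flatten one (problem, reasoning, solution) triple into the
    Input / <think> / Solution layout described in this card.
    Field names are hypothetical; the real dataset columns may differ."""
    return (
        f"Input: {problem}\n"
        f"<think>\n{reasoning}\n</think>\n"
        f"Solution: {solution}"
    )

example = build_sft_example(
    "Reverse a string.",
    "Python slicing with a step of -1 reverses a sequence.",
    "def reverse(s):\n    return s[::-1]",
)
print(example)
```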
## Uses
### Direct Use
The model is designed to generate code solutions from a problem statement. It is specifically trained to emit an explicit "thinking" trace before the final solution, which helps with complex logical tasks.
### Prompt Format
The model expects the following format to trigger the reasoning capability:
```
Input: [Your Problem Description]
<think>
[Model generates reasoning here]
</think>
Solution: [Model generates code here]
```
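Because the model emits its reasoning before the answer, downstream code usually wants only the final solution. A minimal sketch of splitting a completion into its two parts, assuming the output follows the format above (the function name is hypothetical):

```python
import re


def split_reasoning(output: str) -> tuple[str, str]:
    """Split a model completion into (reasoning, solution).

    Assumes the completion follows the <think>...</think> / Solution: layout;
    if it does not, the whole output is returned as the solution.
    """
    match = re.search(
        r"<think>\s*(.*?)\s*</think>\s*Solution:\s*(.*)",
        output,
        re.DOTALL,
    )
    if match:
        return match.group(1), match.group(2)
    return "", output.strip()


raw = "<think>\nUse slicing.\n</think>\nSolution: def reverse(s):\n    return s[::-1]"
reasoning, solution = split_reasoning(raw)
```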