---
language:
- en
license: apache-2.0
tags:
- code
- coding-assistant
- qwen2.5
- mlx
- fine-tuned
---
# sunilagali/my-coding-assistant

A fine-tuned coding and general-purpose AI assistant by **Sunil Agali**, built on
Qwen2.5-Coder-7B-Instruct and fine-tuned entirely on an Apple Silicon (M-series) MacBook.
## What it does

- Writes production-ready Python, JavaScript, and more
- Debugs and explains code clearly
- Answers general tech and programming questions
## Training Details

- Base model: Qwen2.5-Coder-7B-Instruct
- Fine-tuned with: MLX-LM LoRA on Apple Silicon (see the sketch below)
- Training rounds: 3
- Training tokens: 1.3M+
- Final validation loss: 0.722
- Datasets: Magicoder, OpenHermes, Dolly, CodeFeedback, ShareGPT4
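
For reference, a minimal MLX-LM LoRA run on Apple Silicon looks roughly like the sketch below. This is not the exact command or data used for this model; the dataset path and iteration count are placeholders.

```bash
# Install MLX-LM (requires Apple Silicon)
pip install mlx-lm

# LoRA fine-tune the base model on local JSONL data
# (./data and --iters 1000 are placeholders, not this model's actual settings)
python -m mlx_lm.lora \
  --model Qwen/Qwen2.5-Coder-7B-Instruct \
  --train \
  --data ./data \
  --iters 1000
```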
## How to run locally
```bash
# Install Ollama
brew install ollama

# Pull the base model
ollama pull qwen2.5-coder:7b-instruct

# Run
ollama run qwen2.5-coder:7b-instruct
```
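
Since the fine-tuning was done with MLX-LM, you can also try the model directly with mlx-lm on Apple Silicon. This sketch assumes MLX-compatible (fused) weights are available at sunilagali/my-coding-assistant; if the repo ships only LoRA adapters or another format, adjust accordingly.

```bash
# Install MLX-LM (requires Apple Silicon)
pip install mlx-lm

# Generate with the fine-tuned model
# (assumes MLX-compatible weights are published at sunilagali/my-coding-assistant)
python -m mlx_lm.generate \
  --model sunilagali/my-coding-assistant \
  --prompt "Write a Python function that reverses a linked list." \
  --max-tokens 256
```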
## Author

- Hugging Face: https://huggingface.co/sunilagali
- Model: https://huggingface.co/sunilagali/my-coding-assistant