---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** zayedansari
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This LLaMA model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

# Formula1Model 🏎️

An expert Formula 1 assistant fine-tuned on the **2024 Formula 1 Championship dataset** ([vibingshu/2024_formula1_championship_dataset](https://huggingface.co/datasets/vibingshu/2024_formula1_championship_dataset)).

This model was fine-tuned using **Unsloth** and exported in 8-bit (`Q8_0`) GGUF format for efficient local inference with Ollama.

---

## 🔧 Model Details

- **Base Model:** LLaMA 3 8B (`unsloth/llama-3-8b-bnb-4bit`, fine-tuned with Unsloth)
- **Dataset:** 2024 F1 results, drivers, constructors, and races
- **Format:** GGUF (`Q8_0`)
- **Task:** Question answering & expert analysis on Formula 1
- **Use Cases:** F1 trivia, race insights, driver/team history, strategy-style Q&A

---

## 📊 Training

- **Hardware:** Google Colab (T4 / A100, depending on availability)
- **Tools Used:** Unsloth, Hugging Face Datasets, LoRA adapters
- **Precision:** 8-bit (`Q8_0`) for efficient inference

---

## 🚀 Usage

```bash
ollama pull zayedansari/Formula1Model
ollama run zayedansari/Formula1Model
```

---

## Example

**Who won the 2024 Bahrain Grand Prix?**

> Max Verstappen won the Bahrain Grand Prix driving for Red Bull Racing Honda RBPT.

---

## 📜 License

This model is released under the Apache 2.0 license. You are free to use, modify, and distribute it with proper attribution.
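Beyond the interactive `ollama run` session, the model can be queried programmatically through Ollama's local REST API (`/api/generate`, served on port 11434 by default). The sketch below builds such a request using only the Python standard library; the helper name `build_request` is illustrative, not part of any API, and it assumes an Ollama server is running locally with the model pulled.

```python
import json
import urllib.request

def build_request(question, model="zayedansari/Formula1Model",
                  host="http://localhost:11434"):
    """Build an HTTP POST request for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,     # model name as pulled via `ollama pull`
        "prompt": question,
        "stream": False,    # return one complete JSON response
    }
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Who won the 2024 Bahrain Grand Prix?")
# Requires a running Ollama server with the model available:
# answer = json.load(urllib.request.urlopen(req))["response"]
```

Setting `"stream": False` keeps the example simple; with streaming enabled, Ollama returns one JSON object per generated chunk instead.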