---
library_name: mlx-lm
base_model: mlx-community/Llama-3.2-3B-Instruct-4bit
tags:
- tech
- ai
- research papers
- twitter
- viral-content
- mlx
- lora
license: mit
language:
- en
---

# Tech Tweet Generator Llama-3 (Fine-Tuned)

This model is a fine-tuned version of **Llama-3.2-3B-Instruct** designed to convert dense scientific and technical research paper abstracts into engaging, viral Twitter threads. It was trained using **LoRA (Low-Rank Adaptation)** on the Apple MLX framework.

## 🚀 Model Description

- **Developed by:** Meet Merchant
- **Base Model:** `mlx-community/Llama-3.2-3B-Instruct-4bit`
- **Task:** Summarization / Style Transfer (Research Paper Abstract -> Engaging Twitter Thread)
- **Language:** English
- **Framework:** MLX

## 💻 How to Use

You can run this model locally on your Mac using `mlx-lm`.

### Installation

```bash
pip install mlx-lm
```

### Python Code

```python
from mlx_lm import load, generate

# Load the 4-bit base model and apply the fine-tuned LoRA adapters.
# If your mlx-lm version expects a local directory for adapter_path,
# download the adapter files first and point adapter_path at that folder.
model, tokenizer = load(
    "mlx-community/Llama-3.2-3B-Instruct-4bit",
    adapter_path="meetmerchant/tech-tweet-generator-llama3"
)

abstract = """
[Paste Abstract Here]
"""

# Build the prompt with the Llama 3 chat template
prompt = f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a viral science communicator.<|eot_id|><|start_header_id|>user<|end_header_id|>

Title: Example Paper
Abstract: {abstract}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""

response = generate(model, tokenizer, prompt=prompt, max_tokens=500)
print(response)
```

## 📊 Training Details

- **Dataset:** 50+ arXiv papers from the field of AI.
- **Ground Truth:** Generated by GPT-4o-mini ("teacher-student" distillation).
- **Training Config:**
  - LoRA Rank: 16
  - Quantization: 4-bit
  - Iterations: 200

A hedged sketch of a matching training command is included at the end of this card.

## 🏆 Evaluation

The model was evaluated using **LLM-as-a-Judge (GPT-4o)** against the base model; a sketch of the judge call is included at the end of this card.

- **Win Rate:** 66% (vs. base model)
- **Strengths:** High engagement, emoji usage, accessible language.
- **Weaknesses:** Can occasionally hallucinate specific details if the abstract is too dense.

---

*Built with ❤️ on a MacBook Pro M3 using Apple MLX.*
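
## 🛠️ Appendix: Training Command (Sketch)

The exact training script is not part of this repo. Below is a minimal sketch of how a run with the configuration above might be launched with `mlx_lm.lora`; the `./data` layout and `./adapters` output path are illustrative assumptions, and flag names can differ between `mlx-lm` versions (check `mlx_lm.lora --help`).

```bash
# Assumed layout: ./data/train.jsonl and ./data/valid.jsonl, with one
# {"text": "...prompt + target thread..."} example per line.
# Note: the LoRA rank (16 here) is usually set through a YAML config
# passed via --config rather than a dedicated flag.
mlx_lm.lora \
  --model mlx-community/Llama-3.2-3B-Instruct-4bit \
  --train \
  --data ./data \
  --iters 200 \
  --adapter-path ./adapters
```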
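
## ⚖️ Appendix: LLM-as-a-Judge Call (Sketch)

The judging harness is likewise not included; this is a minimal sketch of a pairwise GPT-4o judge call against the OpenAI API. The rubric wording and the `<abstract>` / `<thread_a>` / `<thread_b>` placeholders are illustrative assumptions, not the exact prompt behind the reported win rate.

```bash
# Pairwise A/B judgment; the rubric text is an assumption, not the
# exact prompt used for the 66% win rate reported above.
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system",
       "content": "You compare two Twitter threads written from the same paper abstract. Judge engagement, accuracy, and accessibility. Answer with exactly A or B."},
      {"role": "user",
       "content": "Abstract: <abstract>\n\nThread A: <thread_a>\n\nThread B: <thread_b>"}
    ]
  }'
```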