---
title: Forgekit
app_file: app.py
sdk: gradio
sdk_version: 5.42.0
---
# 🔥 ForgeKit

**Forge your perfect AI model – no code required.**

ForgeKit is an open-source platform that lets anyone create custom AI models by merging existing ones. No coding, no complex setup – just pick your models, configure the merge, and get a ready-to-run Colab notebook.
## ✨ Features

### ⚙️ Merge Builder
- Add models by ID and instantly check architecture compatibility
- Choose from 6 merge methods: DARE-TIES, TIES, SLERP, Linear, Task Arithmetic, Passthrough
- Adjust weights and densities with smart presets
- Auto-suggest base model and tokenizer
- Generate ready-to-run Colab notebooks with one click
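ForgeKit's own compatibility check isn't reproduced here, but the idea can be sketched: two models can only be weight-merged if their configs agree on key architectural fields. A minimal sketch, using hand-written config dicts as an assumption (in practice these values could be loaded with `transformers.AutoConfig.from_pretrained(model_id).to_dict()`):

```python
# Minimal sketch of an architecture-compatibility check (illustrative,
# not ForgeKit's actual implementation).

# Fields that typically must match for a weight-level merge to make sense.
KEYS = ("architectures", "hidden_size", "num_hidden_layers", "vocab_size")

def compatible(cfg_a: dict, cfg_b: dict) -> tuple[bool, list[str]]:
    """Return (ok, mismatched_keys) for two model config dicts."""
    mismatches = [k for k in KEYS if cfg_a.get(k) != cfg_b.get(k)]
    return (not mismatches, mismatches)

# Example config values (written by hand here, not fetched from the Hub).
mistral = {"architectures": ["MistralForCausalLM"],
           "hidden_size": 4096, "num_hidden_layers": 32, "vocab_size": 32000}
zephyr  = {"architectures": ["MistralForCausalLM"],
           "hidden_size": 4096, "num_hidden_layers": 32, "vocab_size": 32000}
llama3  = {"architectures": ["LlamaForCausalLM"],
           "hidden_size": 4096, "num_hidden_layers": 32, "vocab_size": 128256}

print(compatible(mistral, zephyr))  # mergeable pair
print(compatible(mistral, llama3))  # architecture and vocab mismatch
```

A real check would also need to account for tokenizer differences, which is why the builder auto-suggests a tokenizer alongside the base model.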
### 🔍 Model Explorer
- Search HuggingFace Hub for models
- Filter by architecture type
- View detailed model specs (hidden size, layers, vocab, etc.)
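The architecture filter boils down to matching on one config field. A toy sketch over a hand-written list of model summaries (the model entries below are assumptions for illustration; live metadata could come from `huggingface_hub`, e.g. `HfApi().list_models(search="mistral", limit=50)`):

```python
# Sketch of the Explorer's architecture filter over pre-fetched summaries
# (entries are illustrative, not live Hub data).
models = [
    {"id": "mistralai/Mistral-7B-v0.1", "architecture": "MistralForCausalLM",
     "hidden_size": 4096, "num_hidden_layers": 32},
    {"id": "meta-llama/Llama-2-7b-hf", "architecture": "LlamaForCausalLM",
     "hidden_size": 4096, "num_hidden_layers": 32},
    {"id": "HuggingFaceH4/zephyr-7b-beta", "architecture": "MistralForCausalLM",
     "hidden_size": 4096, "num_hidden_layers": 32},
]

def filter_by_architecture(models: list[dict], architecture: str) -> list[str]:
    """Keep only model IDs whose config reports the given architecture."""
    return [m["id"] for m in models if m["architecture"] == architecture]

print(filter_by_architecture(models, "MistralForCausalLM"))
```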
### 📦 GGUF Quantizer
- Convert any HF model to GGUF format
- Multiple quantization levels (Q8_0, Q5_K_M, Q4_K_M, etc.)
- Ready-to-run Colab notebook generation
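As a rough guide to what those quantization levels mean for download size, here is a back-of-the-envelope estimate. The bits-per-weight figures are approximate community numbers (an assumption, not ForgeKit output); real GGUF files also carry metadata and vary by architecture and llama.cpp version.

```python
# Rough GGUF file-size estimate from parameter count and quantization level.
# Bits-per-weight values are approximate, not exact format guarantees.
BITS_PER_WEIGHT = {"F16": 16.0, "Q8_0": 8.5, "Q5_K_M": 5.7, "Q4_K_M": 4.9}

def gguf_size_gb(n_params_billion: float, quant: str) -> float:
    """Estimated file size in GB for a model of the given parameter count."""
    bits = n_params_billion * 1e9 * BITS_PER_WEIGHT[quant]
    return round(bits / 8 / 1e9, 1)

for quant in BITS_PER_WEIGHT:
    print(f"7B model at {quant}: ~{gguf_size_gb(7, quant)} GB")
```

This is why Q4_K_M is a popular default: roughly a third of the F16 footprint, with quality loss that is usually acceptable for chat use.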
### 🚀 Deploy
- Generate deployment files for HuggingFace Spaces
- Gradio chat interface or Docker + llama.cpp options
- Auto-generated app.py and README
### 🏆 Community Leaderboard
- Browse community-created merges
- Submit your own merged models
- Discover popular merge recipes
## 🛠️ Supported Merge Methods

| Method | Models | Best For |
|--------|--------|----------|
| **DARE-TIES** | 2-10 | Combining specialists (coding + math) |
| **TIES** | 2-10 | Resolving parameter interference |
| **SLERP** | 2 | Smooth two-model interpolation |
| **Linear** | 2-10 | Simple weighted averaging |
| **Task Arithmetic** | 1-10 | Adding/removing capabilities |
| **Passthrough** | 1-10 | Layer stacking (Frankenmerge) |
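To make the table more concrete, here is a toy sketch of the arithmetic behind two of these methods, Linear and SLERP, assuming plain Python lists stand in for weight tensors. Real merges operate tensor-by-tensor across full checkpoints; only the per-parameter math is shown.

```python
import math

def linear_merge(thetas: list[list[float]], weights: list[float]) -> list[float]:
    """Linear: weighted average of parameters with normalized weights."""
    total = sum(weights)
    w = [x / total for x in weights]
    return [sum(wk * t[i] for wk, t in zip(w, thetas))
            for i in range(len(thetas[0]))]

def slerp(a: list[float], b: list[float], t: float) -> list[float]:
    """SLERP: spherical interpolation between two parameter vectors,
    falling back to plain lerp when they are nearly parallel."""
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    cos = sum(x * y for x, y in zip(a, b)) / (norm_a * norm_b)
    cos = max(-1.0, min(1.0, cos))        # guard against rounding
    omega = math.acos(cos)
    if omega < 1e-6:                      # nearly parallel: lerp is fine
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    sa = math.sin((1 - t) * omega) / math.sin(omega)
    sb = math.sin(t * omega) / math.sin(omega)
    return [sa * x + sb * y for x, y in zip(a, b)]

print(linear_merge([[1.0, 0.0], [0.0, 1.0]], [3, 1]))  # [0.75, 0.25]
print(slerp([1.0, 0.0], [0.0, 1.0], 0.5))
```

SLERP's interpolation along the hypersphere (rather than a straight line) is why it is limited to exactly two models, while the averaging-style methods accept up to ten.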
## 🔄 How It Works

1. **Add Models** → Enter HuggingFace model IDs
2. **Check Compatibility** → ForgeKit verifies architectures match
3. **Configure** → Choose method, adjust weights, pick presets
4. **Generate** → Get a Colab notebook with everything pre-filled
5. **Run** → Open in Colab, click Run All, wait for your model
6. **Ship** → Auto-upload to HF Hub + optional GGUF + Space deployment
## 📋 Requirements

The generated Colab notebooks handle all dependencies. You just need:
- A Google account (for Colab)
- A HuggingFace account (for model access and upload)
- An HF token (for gated models and uploading)
## 🧑‍💻 Built By

**[AIencoder](https://huggingface.co/AIencoder)** – AI/ML Engineer

- [Portfolio](https://aiencoder-portfolio.static.hf.space)
- [GitHub](https://github.com/Ary5272)
## 📄 License

MIT – use it, fork it, improve it.