---
title: QuantLLM | The Ultra-Fast LLM Quantization
emoji: 🐢
colorFrom: pink
colorTo: yellow
sdk: static
pinned: true
license: mit
short_description: This space hosts our quantized models
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/64fef2082a6f97f3a9f6bb5e/HQ8qrbubqLX9g-yn8Hagp.png
---
## QuantLLM | The Ultra-Fast LLM Quantization

Welcome to **QuantLLM**, an open-source community focused on building efficient and accessible AI models.

This space hosts our **quantized** and **fine-tuned** transformer models, designed for real-world applications and ease of deployment.
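
As a quick start, models hosted here can be loaded with the standard 🤗 Transformers API. The snippet below is a minimal sketch, assuming a causal-LM checkpoint; the repository id `QuantLLM/quantized-model` is a placeholder, not an actual model name.

```python
# Minimal sketch: load a quantized checkpoint and generate text.
# Requires: pip install transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "QuantLLM/quantized-model"  # placeholder; substitute a real repo id from this space

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",  # place weights on available GPU(s)/CPU automatically
)

prompt = "Explain quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```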