YanLabs/gemma-3-27b-it-abliterated-normpreserve-v1 Text Generation • 27B • Updated 24 days ago • 610 • 5
huihui-ai/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-v2 Image-Text-to-Text • 24B • Updated Sep 11, 2025 • 116 • 7
huihui-ai/Magistral-Small-2506-abliterated Text Generation • 24B • Updated Jun 18, 2025 • 4 • 14
ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0 Text Generation • 24B • Updated Jul 24, 2025 • 398 • 31
Doctor-Shotgun/MS3.2-24B-Magnum-Diamond Text Generation • 24B • Updated Jul 7, 2025 • 241 • 48
Post 4572: I have just released a new blogpost about KV caching and its role in inference speedup: https://huggingface.co/blog/not-lain/kv-caching — some takeaways:
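The post above links to a blogpost on KV caching; the takeaways themselves were cut off in this excerpt. As background on the idea, here is a minimal, self-contained sketch (not taken from the linked post; all names, shapes, and weights are illustrative): during autoregressive decoding, the key/value projections of earlier tokens never change, so caching them turns O(n²) re-projections into O(n) while producing identical attention outputs.

```python
# Minimal sketch of KV caching in single-head attention (illustrative only;
# not the implementation from the linked blogpost).
import numpy as np

rng = np.random.default_rng(0)
d = 8  # head dimension (toy size)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    # scaled dot-product attention for a single query vector
    scores = q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

def generate_no_cache(xs):
    # recompute K and V for the whole prefix at every step: O(n^2) projections
    outs = []
    for t in range(1, len(xs) + 1):
        prefix = np.stack(xs[:t])
        K, V = prefix @ Wk, prefix @ Wv
        outs.append(attend(xs[t - 1] @ Wq, K, V))
    return outs

def generate_with_cache(xs):
    # project each token's K/V once and append to the cache: O(n) projections
    K_cache, V_cache, outs = [], [], []
    for x in xs:
        K_cache.append(x @ Wk)
        V_cache.append(x @ Wv)
        outs.append(attend(x @ Wq, np.stack(K_cache), np.stack(V_cache)))
    return outs

tokens = [rng.standard_normal(d) for _ in range(5)]
no_cache = generate_no_cache(tokens)
cached = generate_with_cache(tokens)
# both strategies yield identical attention outputs at every step
assert all(np.allclose(a, b) for a, b in zip(no_cache, cached))
```

The cached variant does the same arithmetic per new token but skips re-projecting the prefix, which is where the inference speedup comes from in practice.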