lmstudio-community/gemma-3-1b-it-GGUF · Text Generation · 1.0B · Updated Mar 12, 2025
Post: We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗 Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
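For context, a minimal sketch of what "train gpt-oss locally" looks like with Unsloth's `FastLanguageModel` API. The repo id, sequence length, and LoRA hyperparameters here are illustrative assumptions, not values from the post; the linked notebooks are the authoritative reference.

```python
# Sketch only: fine-tuning a gpt-oss checkpoint with Unsloth under low VRAM.
# Model name and hyperparameters below are assumptions for illustration;
# see https://unsloth.ai/docs/new/faster-moe for the official notebooks.
from unsloth import FastLanguageModel

# Load a 4-bit quantized checkpoint to keep VRAM usage low.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",  # assumed repo id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```

From here, the model can be passed to a standard trainer (e.g. TRL's `SFTTrainer`); the MoE speedups from the new Triton kernels apply transparently during training.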