installama.sh at the TigerBeetle 1000x World Tour!
Last week I had the chance to give a short talk during the TigerBeetle 1000x World Tour (organized by @jedisct1 👏), a fantastic event celebrating high-performance engineering and the people who love pushing systems to their limits!
In the talk, I focused on the CPU and Linux side of things, with a simple goal in mind: making the installation of llama.cpp instant, automatic, and optimal, no matter your OS or hardware setup.
🚀 installama.sh update: Vulkan & FreeBSD support added!
The fastest way to install and run llama.cpp has just been updated!
We are expanding hardware and OS support to make local AI even more accessible. This includes:
🌋 Vulkan support for Linux on x86_64 and aarch64.
😈 FreeBSD support (CPU backend), also on x86_64 and aarch64.
✨ Lots of small optimizations and improvements under the hood.
Give it a try right now:
curl angt.github.io/installama.sh | MODEL=unsloth/Qwen3-4B-GGUF:Q4_0 sh
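A note on why this one-liner works: the MODEL=... assignment prefixes the sh command that consumes the piped script, so the installer sees the variable while curl does not. A minimal stand-in (a hypothetical one-line script body, no network needed; the real installama.sh of course does much more) shows the mechanism:

```shell
# Stand-in for the piped installer: a tiny script that reads $MODEL.
# (Hypothetical body, used here only to illustrate the pipe pattern.)
script='echo "Installing model: ${MODEL:-default}"'

# The MODEL=... assignment applies only to the `sh` that runs the
# piped script, so the script sees it; `echo` (standing in for curl)
# does not.
echo "$script" | MODEL=unsloth/Qwen3-4B-GGUF:Q4_0 sh
# prints: Installing model: unsloth/Qwen3-4B-GGUF:Q4_0
```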