---
title: llama.ui
emoji: 🦙
colorFrom: gray
colorTo: gray
sdk: docker
pinned: true
license: mit
short_description: A minimal AI chat interface that runs entirely in browser.
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6638c1488bd9205c327037b7/ZbzeoXlAo6aXJ5a37El8e.png
---
# 🦙 llama.ui - Minimal Interface for Local AI Companion

**Tired of complex AI setups?** `llama.ui` is an open-source web application that provides a clean, user-friendly interface for interacting with large language models (LLMs) powered by `llama.cpp`. Designed for simplicity and privacy, this project lets you chat with powerful quantized models on your local machine - no cloud required!
## TL;DR

This repository is a fork of the [llama.cpp](https://github.com/ggml-org/llama.cpp) WebUI with:

- Fresh new styles
- Extra functionality
- A smoother experience
## Key Features

1. **Multi-Provider Support**: Works with llama.cpp, LM Studio, Ollama, vLLM, OpenAI, and many more!
2. **Conversation Management**:
   - IndexedDB storage for conversations
   - Branching conversation support (edit messages while preserving history)
   - Import/export functionality
3. **Rich UI Components**:
   - Markdown rendering with syntax highlighting
   - LaTeX math support
   - File attachments (text, images, PDFs)
   - Theme customization with DaisyUI themes
   - Responsive design for mobile and desktop
4. **Advanced Features**:
   - PWA support with offline capabilities
   - Streaming responses via Server-Sent Events
   - Customizable generation parameters
   - Performance metrics display
5. **Privacy Focused**: All data is stored locally in your browser - no cloud required!
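To make the streaming point concrete, here is a minimal sketch (not llama.ui's actual source) of how Server-Sent Events chunks from an OpenAI-compatible endpoint such as llama.cpp's `/v1/chat/completions` might be parsed into text deltas. The `parseSseChunk` helper and `StreamDelta` shape are illustrative assumptions:

```typescript
interface StreamDelta {
  content?: string;
}

// Parse one SSE chunk from an OpenAI-style streaming response and
// collect the generated text deltas, stopping at the [DONE] sentinel.
function parseSseChunk(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // SSE payload lines begin with "data:"
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") break; // end-of-stream marker used by OpenAI-style APIs
    const json = JSON.parse(payload);
    const delta: StreamDelta = json.choices?.[0]?.delta ?? {};
    if (delta.content) deltas.push(delta.content);
  }
  return deltas;
}
```

In a real client this runs inside a `fetch` reader loop, appending each delta to the visible message as it arrives.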
## Getting Started in 60 Seconds!

### Standalone Mode (Zero Installation)

1. Open our [hosted UI instance](https://olegshulyakov.github.io/llama.ui/)
2. Click the gear icon → General settings
3. Set "Base URL" to your local llama.cpp server (e.g. `http://localhost:8080`)
4. Start chatting with your AI!
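For reference, these are the usual default Base URLs for common local providers - your ports may differ if you changed the server configuration:

```
llama.cpp  → http://localhost:8080
Ollama     → http://localhost:11434/v1
LM Studio  → http://localhost:1234/v1
vLLM       → http://localhost:8000/v1
```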
<details><summary><b>Need HTTPS magic for your local instance? Try this mitmproxy hack!</b></summary>
<p>

**Uh-oh!** Browsers block HTTP requests from HTTPS sites. Since `llama.cpp` uses HTTP, we need a bridge. Enter [mitmproxy](https://www.mitmproxy.org/) - our traffic wizard!

**Local setup:**

```bash
mitmdump -p 8443 --mode reverse:http://localhost:8080/
```

**Docker quickstart:**

```bash
docker run -it -p 8443:8443 mitmproxy/mitmproxy mitmdump -p 8443 --mode reverse:http://localhost:8080/
```
**Pro-tip with Docker Compose:**

```yml
services:
  mitmproxy:
    container_name: mitmproxy
    image: mitmproxy/mitmproxy:latest
    ports:
      - '8443:8443' # Port mapping happening here!
    command: mitmdump -p 8443 --mode reverse:http://localhost:8080/
    # ... (other config)
```
> ⚠️ **Certificate Tango Time!**
>
> 1. Visit http://localhost:8443
> 2. Click "Trust this certificate"
> 3. Reload the llama.ui page
> 4. Profit!

**Voilà!** You've hacked the HTTPS barrier!

</p>
</details>
### Full Local Installation (Power User Edition)

1. Grab the latest release from our [releases page](https://github.com/olegshulyakov/llama.ui/releases)
2. Unpack the archive (feel that excitement!)
3. Fire up your llama.cpp server:

   **Linux/macOS:**

   ```bash
   ./llama-server --host 0.0.0.0 \
     --port 8080 \
     --path "/path/to/llama.ui" \
     -m models/llama-2-7b.Q4_0.gguf \
     --ctx-size 4096
   ```

   **Windows:**

   ```bat
   llama-server ^
     --host 0.0.0.0 ^
     --port 8080 ^
     --path "C:\path\to\llama.ui" ^
     -m models\mistral-7b.Q4_K_M.gguf ^
     --ctx-size 4096
   ```

4. Visit http://localhost:8080 and meet your new AI buddy!
## Join Our Awesome Community!

**We're building something special together!**

- **PRs are welcome!** (Seriously, we high-five every contribution!)
- **Bug squashing?** Yes please!
- **Documentation heroes** needed!
- **Make magic** with your commits! (Follow [Conventional Commits](https://www.conventionalcommits.org))
## Developer Wonderland

**Prerequisites:**

- macOS/Windows/Linux
- [Node.js](https://nodejs.org/) >= 22
- A local [llama.cpp server](https://github.com/ggml-org/llama.cpp/tree/master/tools/server) humming along

**Build the future:**

```bash
npm ci        # Install dependencies
npm run build # Create a production build
npm start     # Launch the dev server (http://localhost:5173) for live-coding bliss!
```
### Architecture

#### Core Technologies

- **Frontend**: [React](https://react.dev/) with [TypeScript](https://www.typescriptlang.org/)
- **Styling**: [Tailwind CSS](https://tailwindcss.com/docs/) + [DaisyUI](https://daisyui.com/)
- **State Management**: React Context API
- **Routing**: [React Router](https://reactrouter.com/)
- **Storage**: IndexedDB via [Dexie.js](https://dexie.org/)
- **Build Tool**: [Vite](https://vite.dev/)

#### Key Components

1. **App Context**: Manages global configuration and settings
2. **Inference Context**: Handles API communication with inference providers
3. **Message Context**: Manages conversation state and message generation
4. **Storage Utils**: IndexedDB operations and localStorage management
5. **Inference API**: HTTP client for communicating with inference servers
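As a sketch of how branching conversations can sit on top of flat IndexedDB rows, each message can carry a `parentId` pointer, and the active branch is recovered by walking from a leaf message back to the root. The `Message` shape and `resolveBranch` helper below are hypothetical illustrations, not llama.ui's actual types:

```typescript
interface Message {
  id: number;
  parentId: number | null; // null marks the root of the conversation
  content: string;
}

// Walk parent pointers from a leaf message back to the root and
// return the active branch in chronological order.
function resolveBranch(messages: Message[], leafId: number): Message[] {
  const byId = new Map(messages.map((m) => [m.id, m]));
  const branch: Message[] = [];
  let current: Message | undefined = byId.get(leafId);
  while (current) {
    branch.push(current);
    current = current.parentId === null ? undefined : byId.get(current.parentId);
  }
  return branch.reverse();
}
```

Editing a message then simply inserts a new sibling with the same `parentId`; the old reply chain stays in storage and can be revisited by choosing a different leaf.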
## License - Freedom First!

llama.ui is proudly **MIT licensed** - go build amazing things!

---
<p align="center">
Made with ❤️ and ☕ by humans who believe in private AI
</p>