---
license: mit
title: Yuuki-api
sdk: docker
emoji: 🚀
colorFrom: purple
colorTo: pink
pinned: true
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/68a8bd1d45ff88ffe886e331/64rKm_tmEj0DPMdzTAWCM.png
---

# Yuuki API

## Local Inference API for Yuuki Models

**FastAPI server. Docker deployment. Multi-model support. Zero external dependencies.**
**Run Yuuki models directly on CPU with lazy loading and automatic caching.**

[Features](#features)   [Live API](https://huggingface.co/spaces/OpceanAI/Yuuki-api)   [Sponsor](https://github.com/sponsors/aguitauwu)

[![License](https://img.shields.io/badge/MIT-222222?style=flat-square&logo=opensourceinitiative&logoColor=white)](LICENSE)   [![FastAPI](https://img.shields.io/badge/FastAPI-222222?style=flat-square&logo=fastapi&logoColor=white)](https://fastapi.tiangolo.com/)   [![Docker](https://img.shields.io/badge/Docker-222222?style=flat-square&logo=docker&logoColor=white)](https://www.docker.com/)   [![PyTorch](https://img.shields.io/badge/PyTorch-222222?style=flat-square&logo=pytorch&logoColor=white)](https://pytorch.org/)   [![Transformers](https://img.shields.io/badge/Transformers-222222?style=flat-square&logo=huggingface&logoColor=white)](https://huggingface.co/docs/transformers/)   [![HuggingFace](https://img.shields.io/badge/Spaces-222222?style=flat-square&logo=huggingface&logoColor=white)](https://huggingface.co/spaces)
---
**Self-hosted inference server.**

- Three Yuuki model variants.
- Lazy loading with memory caching.
- REST API with OpenAPI docs.
- Health check and model list endpoints.
- CORS enabled for web clients.
- Automatic model downloads at build time.
- CPU-optimized at ~50 tokens/second.
**Production-ready deployment.**

- Dockerized for HuggingFace Spaces.
- Health checks with auto-restart.
- Request/response timing metrics.
- Configurable token limits.
- Temperature and top-p sampling.

No API keys. No rate limits. Just inference.

---
## What is Yuuki-API?

**Yuuki-API** is a self-hosted inference server for the [Yuuki language models](https://huggingface.co/YuuKi-OS). It provides a FastAPI-based REST API that loads models on demand, caches them in memory, and serves predictions via simple HTTP endpoints. Unlike cloud APIs, this runs entirely locally -- no API keys, no rate limits, no external dependencies.

The server supports three Yuuki model variants: **Yuuki-best** (flagship checkpoint), **Yuuki-3.7** (balanced), and **Yuuki-v0.1** (lightweight). Models are lazy-loaded on first use and cached for subsequent requests. All inference runs on CPU with PyTorch, optimized for resource-constrained environments like the HuggingFace Spaces Free tier.

Built with **FastAPI**, **PyTorch**, and **Transformers**, and packaged in a **Docker** container that pre-downloads model weights during the build step to minimize startup time. Interactive API documentation is available at `/docs`.
---
## Features

### Multi-Model Support

Three Yuuki variants: `yuuki-best`, `yuuki-3.7`, and `yuuki-v0.1`. Each model maps to its HuggingFace checkpoint. Clients specify the model via the `model` field in POST requests. Default is `yuuki-best` if not specified.

### Lazy Loading & Caching

Models are loaded into memory only when first requested, not at server startup. Once loaded, they remain cached for the lifetime of the process. This allows the server to start instantly while supporting multiple models without consuming memory upfront.
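A minimal sketch of the pattern (the `MODELS` and `loaded_models` names follow this README; the actual `app.py` may differ in detail):

```python
# Minimal lazy-loading sketch using the names documented in this README.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODELS = {
    "yuuki-best": "OpceanAI/Yuuki-best",
    "yuuki-3.7": "OpceanAI/Yuuki-3.7",
    "yuuki-v0.1": "OpceanAI/Yuuki-v0.1",
}
loaded_models = {}  # model_key -> (tokenizer, model); lives for the process lifetime

def load_model(model_key: str):
    """Load a model on first use, then serve it from the in-memory cache."""
    if model_key not in loaded_models:
        repo = MODELS[model_key]
        tokenizer = AutoTokenizer.from_pretrained(repo)
        model = AutoModelForCausalLM.from_pretrained(repo)
        model.eval()  # inference only
        loaded_models[model_key] = (tokenizer, model)
    return loaded_models[model_key]
```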

### REST API with Docs

Standard REST endpoints: `GET /` for API info, `GET /health` for status, `GET /models` for available models, and `POST /generate` for inference. FastAPI automatically generates interactive OpenAPI documentation at `/docs` and JSON schema at `/openapi.json`.

### CORS Enabled

Configured with permissive CORS headers to allow requests from any origin. Essential for browser-based clients like [Yuuki Chat](https://github.com/YuuKi-OS/Yuuki-chat) or web demos.

### Request Validation

Pydantic models validate all inputs: `prompt` (1-4000 chars), `max_new_tokens` (1-512), `temperature` (0.1-2.0), and `top_p` (0.0-1.0). Invalid requests return structured error messages with HTTP 400/422 status codes.

### Response Timing

Every `/generate` response includes a `time_ms` field showing inference latency in milliseconds. Useful for performance monitoring and client-side UX (e.g., showing "Generated in 2.1s").
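For example, a client can render that label directly from the field (here `data` stands in for a parsed `/generate` response):

```python
# Turn the time_ms field into a human-readable latency label.
data = {"response": "...", "time_ms": 2100}  # a parsed /generate response
print(f"Generated in {data['time_ms'] / 1000:.1f}s")  # -> "Generated in 2.1s"
```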

### Dockerized Deployment

Multi-stage Dockerfile that pre-downloads all three model checkpoints during the build step. This means the container starts with models already cached, eliminating cold-start delays. Optimized for HuggingFace Spaces but works anywhere Docker runs.
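A sketch of the build-time download step, e.g. a `download_models.py` invoked from the Dockerfile (the script name is illustrative):

```python
# download_models.py -- run during `docker build` so the weights land in the
# image's HuggingFace cache. The filename is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

for repo in (
    "OpceanAI/Yuuki-best",
    "OpceanAI/Yuuki-3.7",
    "OpceanAI/Yuuki-v0.1",
):
    print(f"Pre-downloading {repo} ...")
    AutoTokenizer.from_pretrained(repo)
    AutoModelForCausalLM.from_pretrained(repo)
```

The Dockerfile can then run it at build time (e.g. `RUN python download_models.py`) so the weights are baked into an image layer.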

### Health Checks

Built-in `/health` endpoint returns server status and lists which models are currently loaded in memory. Docker health check configured to auto-restart on failures.
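A sketch of what this endpoint could look like, reusing the `MODELS`/`loaded_models` dicts from the lazy-loading sketch above (field names match the API reference below):

```python
# Sketch of the /health endpoint; not the exact app.py.
from fastapi import FastAPI

MODELS = {
    "yuuki-best": "OpceanAI/Yuuki-best",
    "yuuki-3.7": "OpceanAI/Yuuki-3.7",
    "yuuki-v0.1": "OpceanAI/Yuuki-v0.1",
}
loaded_models: dict = {}  # populated by load_model() as requests arrive

app = FastAPI()

@app.get("/health")
def health():
    return {
        "status": "ok",
        "available_models": list(MODELS.keys()),
        "loaded_models": list(loaded_models.keys()),  # whatever is cached right now
    }
```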

---
## API Reference

### `GET /`

Returns API metadata and available endpoints.

```bash
curl https://opceanai-yuuki-api.hf.space/
```

**Response:**

```json
{
  "message": "Yuuki Local Inference API",
  "models": ["yuuki-best", "yuuki-3.7", "yuuki-v0.1"],
  "endpoints": {
    "health": "GET /health",
    "models": "GET /models",
    "generate": "POST /generate",
    "docs": "GET /docs"
  }
}
```
### `GET /health`

Health check endpoint showing server status and loaded models.

```bash
curl https://opceanai-yuuki-api.hf.space/health
```

**Response:**

```json
{
  "status": "ok",
  "available_models": ["yuuki-best", "yuuki-3.7", "yuuki-v0.1"],
  "loaded_models": ["yuuki-best"]
}
```
### `GET /models`

Lists all available models with their HuggingFace identifiers.

```bash
curl https://opceanai-yuuki-api.hf.space/models
```

**Response:**

```json
{
  "models": [
    {"id": "yuuki-best", "name": "OpceanAI/Yuuki-best"},
    {"id": "yuuki-3.7", "name": "OpceanAI/Yuuki-3.7"},
    {"id": "yuuki-v0.1", "name": "OpceanAI/Yuuki-v0.1"}
  ]
}
```
### `POST /generate`

Generate a text completion from a prompt.

```bash
curl -X POST https://opceanai-yuuki-api.hf.space/generate \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "def fibonacci(n):",
    "model": "yuuki-best",
    "max_new_tokens": 100,
    "temperature": 0.7,
    "top_p": 0.95
  }'
```

**Request Body:**

| Field | Type | Required | Default | Range | Description |
|:------|:-----|:---------|:--------|:------|:------------|
| `prompt` | string | **Yes** | - | 1-4000 chars | Input text to complete |
| `model` | string | No | `yuuki-best` | - | Model ID to use |
| `max_new_tokens` | integer | No | 120 | 1-512 | Maximum tokens to generate |
| `temperature` | float | No | 0.7 | 0.1-2.0 | Sampling temperature |
| `top_p` | float | No | 0.95 | 0.0-1.0 | Nucleus sampling threshold |

**Response:**

```json
{
  "response": " if n <= 1:\n return n\n return fibonacci(n-1) + fibonacci(n-2)",
  "model": "yuuki-best",
  "tokens_generated": 25,
  "time_ms": 2033
}
```

| Field | Type | Description |
|:------|:-----|:------------|
| `response` | string | Generated text (excluding the original prompt) |
| `model` | string | Model ID that was used |
| `tokens_generated` | integer | Number of new tokens produced |
| `time_ms` | integer | Inference time in milliseconds |
**Error Responses:**

```json
// Invalid model
{"detail": "Invalid model. Available: ['yuuki-best', 'yuuki-3.7', 'yuuki-v0.1']"}

// Token limit exceeded
{"detail": [{"type": "less_than_equal", "loc": ["body", "max_new_tokens"], "msg": "Input should be less than or equal to 512", "input": 1024}]}

// Server error
{"detail": "Model inference failed: Out of memory"}
```
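A small client-side pattern for surfacing these errors (a hypothetical helper, not part of the server):

```python
import requests

API_URL = "https://opceanai-yuuki-api.hf.space/generate"

def generate(prompt: str, **params) -> str:
    """Call /generate and raise a readable error on failure."""
    resp = requests.post(API_URL, json={"prompt": prompt, **params}, timeout=60)
    if not resp.ok:
        # "detail" is a string for 400/500 and a list of Pydantic errors for 422
        raise RuntimeError(f"Yuuki API error {resp.status_code}: {resp.json()['detail']}")
    return resp.json()["response"]

print(generate("def hello_world():", max_new_tokens=50))
```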
---
## Models

| Model ID | HuggingFace Path | Parameters | Description | Speed (CPU) |
|:---------|:-----------------|:-----------|:------------|:------------|
| `yuuki-best` | `OpceanAI/Yuuki-best` | 124M | Flagship checkpoint with best quality. Trained to step 2000. | ~50 tok/s |
| `yuuki-3.7` | `OpceanAI/Yuuki-3.7` | 124M | Balanced checkpoint for speed and quality. | ~50 tok/s |
| `yuuki-v0.1` | `OpceanAI/Yuuki-v0.1` | 124M | Lightweight first-generation model. Fastest inference. | ~55 tok/s |

All models are based on the GPT-2 architecture (124M parameters) and trained on CPU (Snapdragon 685) with zero cloud budget. Model weights are ~500MB each. The server caches loaded models in RAM (~1.5GB total if all three are loaded).
---
## Tech Stack

| Technology | Version | Purpose |
|:-----------|:--------|:--------|
| **FastAPI** | 0.115.0 | Web framework, request validation, auto-docs |
| **Uvicorn** | 0.30.6 | ASGI server for running FastAPI |
| **PyTorch** | 2.4.1 | Deep learning framework for model inference |
| **Transformers** | 4.45.0 | HuggingFace library for loading and running LLMs |
| **Pydantic** | 2.9.0 | Request/response validation |
| **Accelerate** | 0.34.2 | Model loading optimizations |
### System Requirements

| Resource | Minimum | Recommended |
|:---------|:--------|:------------|
| CPU | 2 cores | 4+ cores |
| RAM | 2GB | 4GB (8GB if loading all models) |
| Storage | 2GB | 3GB |
| Python | 3.10+ | 3.10+ |
---
## Architecture

```
Client (Browser / CLI)
        |
        |  HTTP POST /generate
        v
+---------------------------------------------------+
|          Yuuki-API (FastAPI + Uvicorn)            |
|                                                   |
|  /generate endpoint                               |
|       |                                           |
|       v                                           |
|  load_model(model_key)                            |
|       |                                           |
|       v                                           |
|  +-------------+                                  |
|  | Cache check |  <-- loaded_models dict          |
|  +-------------+                                  |
|     |        |                                    |
|  cached   not cached                              |
|     |        |                                    |
|     |        v                                    |
|     |   AutoModelForCausalLM.from_pretrained()    |
|     |   AutoTokenizer.from_pretrained()           |
|     |        |                                    |
|     |        v                                    |
|     |   store in loaded_models cache              |
|     |        |                                    |
|     +<-------+                                    |
|     |                                             |
|     v                                             |
|  tokenizer.encode(prompt)                         |
|     |                                             |
|     v                                             |
|  model.generate()                                 |
|     |                                             |
|     v                                             |
|  tokenizer.decode(output)                         |
|     |                                             |
|     v                                             |
|  {"response": "...", "tokens_generated": N,       |
|   "time_ms": T, "model": "yuuki-best"}            |
+-------------------------+-------------------------+
                          |
                          v
               JSON response to client
```
### Request Flow

1. **Client sends POST** to `/generate` with `prompt`, `model`, and parameters
2. **FastAPI validates** the request body via Pydantic models
3. **load_model()** checks if the model is cached in memory
4. **If not cached:** downloads from HuggingFace, loads with PyTorch, stores in cache
5. **If cached:** retrieves from the `loaded_models` dict
6. **Tokenizer encodes** the prompt to token IDs
7. **Model generates** a continuation with the specified parameters
8. **Tokenizer decodes** the new tokens to text
9. **Response returned** with generated text, token count, and timing

A condensed sketch of steps 3-9 follows.
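This sketch reuses `load_model()` from the lazy-loading example in Features; the generation keyword arguments are assumptions consistent with the request schema, and the real `app.py` may differ in details:

```python
import time
import torch

def run_generation(prompt: str, model_key: str, max_new_tokens: int,
                   temperature: float, top_p: float) -> dict:
    tokenizer, model = load_model(model_key)                    # steps 3-5
    start = time.perf_counter()
    input_ids = tokenizer.encode(prompt, return_tensors="pt")   # step 6
    with torch.no_grad():                                       # step 7
        output = model.generate(
            input_ids,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=temperature,
            top_p=top_p,
            pad_token_id=tokenizer.eos_token_id,
        )
    new_tokens = output[0][input_ids.shape[1]:]                 # drop the prompt
    text = tokenizer.decode(new_tokens, skip_special_tokens=True)  # step 8
    return {                                                    # step 9
        "response": text,
        "model": model_key,
        "tokens_generated": len(new_tokens),
        "time_ms": int((time.perf_counter() - start) * 1000),
    }
```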
---
## Installation

### Local Development

```bash
# Clone repository
git clone https://github.com/YuuKi-OS/Yuuki-api
cd Yuuki-api

# Create virtual environment
python3.10 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run server
uvicorn app:app --host 0.0.0.0 --port 7860
```

The server starts at `http://localhost:7860`. Visit `http://localhost:7860/docs` for interactive API documentation.
### Docker

```bash
# Build image
docker build -t yuuki-api .

# Run container
docker run -p 7860:7860 yuuki-api
```

**Note:** The Docker build step downloads all three models (~1.5GB total), which takes 5-10 minutes on the first build. Subsequent builds use Docker layer caching and are much faster.
---
## Deploy to HuggingFace Spaces

The recommended deployment method for zero-cost hosting.

### Steps

1. **Create a new Space** at [huggingface.co/new-space](https://huggingface.co/new-space)
2. **Choose SDK:** Docker
3. **Upload files:**
   - `README.md` (with YAML header)
   - `Dockerfile`
   - `app.py`
   - `requirements.txt`
4. **Wait for build** (~10 minutes for model downloads)
5. **Access API** at `https://YOUR-USERNAME-SPACE-NAME.hf.space`
### README.md Header

```yaml
---
title: Yuuki API
emoji: 🤖
colorFrom: purple
colorTo: black
sdk: docker
pinned: false
---
```
### Environment Variables

None required. The API has zero external dependencies -- no API keys, no database, no auth services.
---
## Usage Examples

### Python

```python
import requests

response = requests.post(
    "https://opceanai-yuuki-api.hf.space/generate",
    json={
        "prompt": "def hello_world():",
        "model": "yuuki-best",
        "max_new_tokens": 50,
        "temperature": 0.7
    }
)

data = response.json()
print(data["response"])
print(f"Generated {data['tokens_generated']} tokens in {data['time_ms']}ms")
```
### JavaScript / TypeScript

```typescript
const response = await fetch('https://opceanai-yuuki-api.hf.space/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    prompt: 'def hello_world():',
    model: 'yuuki-best',
    max_new_tokens: 50,
    temperature: 0.7
  })
});

const data = await response.json();
console.log(data.response);
console.log(`Generated ${data.tokens_generated} tokens in ${data.time_ms}ms`);
```
### cURL

```bash
curl -X POST https://opceanai-yuuki-api.hf.space/generate \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "def hello_world():",
    "model": "yuuki-best",
    "max_new_tokens": 50,
    "temperature": 0.7
  }'
```
### Next.js API Route

```typescript
// app/api/generate/route.ts
import { NextRequest, NextResponse } from 'next/server';

export async function POST(req: NextRequest) {
  const { prompt, model = 'yuuki-best', max_new_tokens = 100 } = await req.json();

  const response = await fetch('https://opceanai-yuuki-api.hf.space/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, model, max_new_tokens })
  });

  const data = await response.json();
  return NextResponse.json(data);
}
```
---
## Performance

### Inference Speed (CPU)

| Tokens | Yuuki Best | Yuuki 3.7 | Yuuki v0.1 |
|:-------|:-----------|:----------|:-----------|
| 50 | ~1.0s | ~1.0s | ~0.9s |
| 100 | ~2.0s | ~2.0s | ~1.8s |
| 250 | ~5.0s | ~4.8s | ~4.5s |
| 512 (max) | ~10.2s | ~10.0s | ~9.3s |

Benchmarked on HuggingFace Spaces Free tier (2-core CPU). Times are for the first request after model load. Subsequent requests are ~10% faster due to PyTorch optimizations.
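To sanity-check these numbers against your own deployment, a rough client-side benchmark could look like this (hypothetical script; the URL is a placeholder):

```python
# bench.py -- rough client-side throughput check against a running server.
import time
import requests

URL = "http://localhost:7860/generate"  # or your Space URL

for tokens in (50, 100, 250, 512):
    start = time.perf_counter()
    data = requests.post(URL, json={
        "prompt": "def fibonacci(n):",
        "model": "yuuki-best",
        "max_new_tokens": tokens,
    }, timeout=120).json()
    wall = time.perf_counter() - start
    print(f"{data['tokens_generated']:4d} tokens | "
          f"server {data['time_ms']}ms | wall {wall:.1f}s | "
          f"~{data['tokens_generated'] / (data['time_ms'] / 1000):.0f} tok/s")
```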
### Memory Usage

| State | RAM Usage |
|:------|:----------|
| Server idle (no models loaded) | ~250MB |
| + 1 model loaded | ~750MB |
| + 2 models loaded | ~1.2GB |
| + 3 models loaded | ~1.7GB |

HuggingFace Spaces Free tier provides 16GB RAM, so all three models can be loaded simultaneously with plenty of headroom.
### Cold Start Time

| Operation | Duration |
|:----------|:---------|
| Server startup (no models) | <1s |
| First request (model download + load) | 8-12s |
| Subsequent requests (cached) | <100ms overhead |

The Docker build pre-downloads the models, so cold starts on HuggingFace Spaces skip the download step and pay only the load cost.
---
## Configuration

### Model Limits

Adjust the `max_new_tokens` limit in `app.py`:

```python
class GenerateRequest(BaseModel):
    prompt: str = Field(..., min_length=1, max_length=4000)
    max_new_tokens: int = Field(default=120, ge=1, le=512)  # Change 512 to your limit
    temperature: float = Field(default=0.7, ge=0.1, le=2.0)
    top_p: float = Field(default=0.95, ge=0.0, le=1.0)
```

Higher limits increase memory usage and inference time. 512 tokens (~2KB of text) balances quality and speed on CPU.
### Adding More Models

Add new models to the `MODELS` dict in `app.py`:

```python
MODELS = {
    "yuuki-best": "OpceanAI/Yuuki-best",
    "yuuki-3.7": "OpceanAI/Yuuki-3.7",
    "yuuki-v0.1": "OpceanAI/Yuuki-v0.1",
    "my-model": "username/my-model-checkpoint",  # Add here
}
```
### CORS Configuration

Modify the CORS settings in `app.py`:

```python
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Change to specific domains: ["https://myapp.com"]
    allow_methods=["*"],
    allow_headers=["*"],
)
```
---
## Troubleshooting

### Server returns 500 error

**Check logs for:**

- `Out of memory` → Model too large for available RAM. Try `yuuki-v0.1` or reduce `max_new_tokens`.
- `Connection timeout` → Model loading takes >30s. This is normal on first load.
### Models not loading

**Verify:**

- HuggingFace Transformers is installed: `pip show transformers`
- Model IDs are correct in the `MODELS` dict
- An internet connection is available for model downloads
- `~/.cache/huggingface/` has write permissions
### Slow inference

**Optimizations:**

- Use `yuuki-v0.1` instead of `yuuki-best` for a 10-15% speedup
- Reduce `max_new_tokens` to the minimum needed
- Lower `temperature` to 0.3-0.5 for faster sampling
- Ensure no other processes are using the CPU
### Docker build fails

**Common issues:**

- Out of disk space → Model downloads need 2GB+ free
- Network timeout → Retry the build; HuggingFace servers may be busy
- Python version mismatch → Use a Python 3.10 base image
---
## Roadmap

### v1.0 -- Current (Complete)

- [x] Three Yuuki model variants
- [x] Lazy loading with memory caching
- [x] FastAPI with OpenAPI docs
- [x] Docker deployment
- [x] Health check endpoint
- [x] CORS enabled
- [x] Request validation
- [x] Response timing metrics

### v1.1 -- Enhancements (Planned)

- [ ] Streaming responses (Server-Sent Events)
- [ ] Token usage statistics endpoint
- [ ] Model warm-up on server start
- [ ] Request queuing for concurrent requests
- [ ] Prometheus metrics export
- [ ] Rate limiting per IP

### v2.0 -- Advanced Features (Future)

- [ ] GPU support with CUDA
- [ ] Batch inference
- [ ] Model quantization (4-bit/8-bit)
- [ ] Multi-turn conversation context
- [ ] Fine-tuning API
- [ ] WebSocket support
---
## Contributing

### Development Setup

```bash
git clone https://github.com/YuuKi-OS/Yuuki-api
cd Yuuki-api
python3.10 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Run with hot reload
uvicorn app:app --reload --host 0.0.0.0 --port 7860
```
### Commit Convention

```
type(scope): description
```

Types: `feat` | `fix` | `docs` | `perf` | `refactor` | `chore`

```
feat(api): add streaming response support

- Implement SSE endpoint at /generate/stream
- Add async generator for token-by-token streaming
- Update docs with streaming examples

Closes #12
```
### Pull Request Checklist

- [ ] Code follows PEP 8 style guidelines
- [ ] All endpoints tested with valid/invalid inputs
- [ ] No breaking changes to the existing API
- [ ] Documentation updated (README + docstrings)
- [ ] Dockerfile builds successfully
- [ ] Commits follow the convention above
---
## About the Yuuki Project

Yuuki-API is part of the [Yuuki project](https://huggingface.co/OpceanAI/Yuuki-best) -- a code-generation LLM being trained entirely on a smartphone (Redmi 12, Snapdragon 685, CPU only) with zero cloud budget.
**Training Details**

| | |
|:--|:--|
| Base model | GPT-2 (124M parameters) |
| Training type | Continued pre-training |
| Hardware | Snapdragon 685, CPU only |
| Training time | 50+ hours |
| Progress | 2,000 / 37,500 steps (5.3%) |
| Cost | $0.00 |

**Quality Scores (Checkpoint 2000)**

| Language | Score |
|:---------|:------|
| Agda | 55 / 100 |
| C | 20 / 100 |
| Assembly | 15 / 100 |
| Python | 8 / 100 |
Created by **agua_omg** -- a young independent developer who started the project in January 2026 because paying for Claude was no longer an option. The name Yuuki combines the Japanese word for snow (Yuki) with the character Yuu from Girls' Last Tour.
---
## Related Projects

| Project | Description |
|:--------|:------------|
| [Yuuki Chat](https://github.com/YuuKi-OS/Yuuki-chat) | macOS-inspired chat interface with web research and YouTube search |
| [Yuuki Web](https://github.com/YuuKi-OS/yuuki-web) | Official landing page for the Yuuki project |
| [yuy](https://github.com/YuuKi-OS/yuy) | CLI for downloading, managing, and running Yuuki models |
| [yuy-chat](https://github.com/YuuKi-OS/yuy-chat) | TUI chat interface for local AI conversations |
| [Yuuki-best](https://huggingface.co/OpceanAI/Yuuki-best) | Best checkpoint model weights |
| [Yuuki Space](https://huggingface.co/spaces/OpceanAI/Yuuki) | Web-based interactive demo |
| [yuuki-training](https://github.com/YuuKi-OS/yuuki-training) | Training code and scripts |
---
## Links

[![Live API](https://img.shields.io/badge/Live_API-HuggingFace_Spaces-ffd21e?style=for-the-badge&logo=huggingface&logoColor=black)](https://huggingface.co/spaces/OpceanAI/Yuuki-api)   [![Model Weights](https://img.shields.io/badge/Model_Weights-Hugging_Face-ffd21e?style=for-the-badge&logo=huggingface&logoColor=black)](https://huggingface.co/OpceanAI/Yuuki-best)   [![Yuuki Chat](https://img.shields.io/badge/Yuuki_Chat-Vercel-000000?style=for-the-badge&logo=vercel&logoColor=white)](https://yuuki-chat.vercel.app)
[![YUY CLI](https://img.shields.io/badge/Yuy_CLI-GitHub-181717?style=for-the-badge&logo=github&logoColor=white)](https://github.com/YuuKi-OS/yuy)   [![YUY Chat](https://img.shields.io/badge/Yuy_Chat-GitHub-181717?style=for-the-badge&logo=github&logoColor=white)](https://github.com/YuuKi-OS/yuy-chat)   [![Sponsor](https://img.shields.io/badge/Sponsor-GitHub_Sponsors-ea4aaa?style=for-the-badge&logo=githubsponsors&logoColor=white)](https://github.com/sponsors/aguitauwu)

---
## License

```
MIT License

Copyright (c) 2026 Yuuki Project

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
---
**Built with patience, a phone, and zero budget.**
[![Yuuki Project](https://img.shields.io/badge/Yuuki_Project-2026-000000?style=for-the-badge)](https://huggingface.co/OpceanAI)