aguitauwu committed on
Commit
727ebbb
·
1 Parent(s): f39112f
Files changed (1)
  1. README.md +1036 -7
README.md CHANGED
@@ -1,11 +1,1040 @@
  ---
- title: Yuuki Local
- emoji: 🚀
  colorFrom: purple
- colorTo: pink
  sdk: docker
  pinned: false
- license: mit
- thumbnail: >-
-   https://cdn-uploads.huggingface.co/production/uploads/68a8bd1d45ff88ffe886e331/Jg0VGSi-gjB59IIJ1Fv2z.png
- ---
+ <div align="center">
+
+ <br>
+
+ <img src="https://img.shields.io/badge/%E2%9C%A6-YUUKI--API-000000?style=for-the-badge&labelColor=000000" alt="Yuuki API" height="50">
+
+ <br><br>
+
+ # Local Inference API for Yuuki Models
+
+ **FastAPI server. Docker deployment. Multi-model support. Zero external dependencies.**<br>
+ **Run Yuuki models directly on CPU with lazy loading and automatic caching.**
+
+ <br>
+
+ <a href="#features"><img src="https://img.shields.io/badge/FEATURES-000000?style=for-the-badge" alt="Features"></a>
+ &nbsp;&nbsp;
+ <a href="https://huggingface.co/spaces/OpceanAI/Yuuki-api"><img src="https://img.shields.io/badge/LIVE_API-000000?style=for-the-badge" alt="Live API"></a>
+ &nbsp;&nbsp;
+ <a href="https://github.com/sponsors/aguitauwu"><img src="https://img.shields.io/badge/SPONSOR-000000?style=for-the-badge" alt="Sponsor"></a>
+
+ <br><br>
+
+ [![License](https://img.shields.io/badge/MIT-222222?style=flat-square&logo=opensourceinitiative&logoColor=white)](LICENSE)
+ &nbsp;
+ [![FastAPI](https://img.shields.io/badge/FastAPI-222222?style=flat-square&logo=fastapi&logoColor=white)](https://fastapi.tiangolo.com/)
+ &nbsp;
+ [![Docker](https://img.shields.io/badge/Docker-222222?style=flat-square&logo=docker&logoColor=white)](https://www.docker.com/)
+ &nbsp;
+ [![PyTorch](https://img.shields.io/badge/PyTorch-222222?style=flat-square&logo=pytorch&logoColor=white)](https://pytorch.org/)
+ &nbsp;
+ [![Transformers](https://img.shields.io/badge/Transformers-222222?style=flat-square&logo=huggingface&logoColor=white)](https://huggingface.co/docs/transformers/)
+ &nbsp;
+ [![HuggingFace](https://img.shields.io/badge/Spaces-222222?style=flat-square&logo=huggingface&logoColor=white)](https://huggingface.co/spaces)
+
+ <br>
+
  ---
+
+ <br>
+
+ <table>
+ <tr>
+ <td width="50%" valign="top">
+
+ **Self-hosted inference server.**<br><br>
+ Three Yuuki model variants.<br>
+ Lazy loading with memory caching.<br>
+ REST API with OpenAPI docs.<br>
+ Health check and model list endpoints.<br>
+ CORS enabled for web clients.<br>
+ Automatic model downloads at build time.<br>
+ CPU-optimized with ~50 tokens/second.
+
+ </td>
+ <td width="50%" valign="top">
+
+ **Production-ready deployment.**<br><br>
+ Dockerized for HuggingFace Spaces.<br>
+ Health checks with auto-restart.<br>
+ Request/response timing metrics.<br>
+ Configurable token limits.<br>
+ Temperature and top-p sampling.<br>
+ <br>
+ No API keys. No rate limits. Just inference.
+
+ </td>
+ </tr>
+ </table>
+
+ <br>
+
+ </div>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## What is Yuuki-API?
+
+ </div>
+
+ <br>
+
+ **Yuuki-API** is a self-hosted inference server for the [Yuuki language models](https://huggingface.co/YuuKi-OS). It provides a FastAPI-based REST API that loads models on demand, caches them in memory, and serves predictions via simple HTTP endpoints. Unlike cloud APIs, this runs entirely locally -- no API keys, no rate limits, no external dependencies.
+
+ The server supports three Yuuki model variants: **Yuuki-best** (flagship checkpoint), **Yuuki-3.7** (balanced), and **Yuuki-v0.1** (lightweight). Models are lazy-loaded on first use and cached for subsequent requests. All inference runs on CPU with PyTorch, optimized for resource-constrained environments like the HuggingFace Spaces Free tier.
+
+ Built with **FastAPI**, **PyTorch**, and **Transformers**, and packaged in a **Docker** container. The Docker build pre-downloads model weights to minimize startup time. Interactive API documentation is available at `/docs`.
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Features
+
+ </div>
+
+ <br>
+
+ <table>
+ <tr>
+ <td width="50%" valign="top">
+
+ <h3>Multi-Model Support</h3>
+
+ Three Yuuki variants: `yuuki-best`, `yuuki-3.7`, and `yuuki-v0.1`. Each model maps to its HuggingFace checkpoint. Clients specify the model via the `model` field in POST requests. Default is `yuuki-best` if not specified.
+
+ <br>
+
+ <h3>Lazy Loading & Caching</h3>
+
+ Models are loaded into memory only when first requested, not at server startup. Once loaded, they remain cached for the lifetime of the process. This allows the server to start instantly while supporting multiple models without consuming memory upfront (see the sketch after this table).
+
+ <br>
+
+ <h3>REST API with Docs</h3>
+
+ Standard REST endpoints: `GET /` for API info, `GET /health` for status, `GET /models` for available models, and `POST /generate` for inference. FastAPI automatically generates interactive OpenAPI documentation at `/docs` and JSON schema at `/openapi.json`.
+
+ <br>
+
+ <h3>CORS Enabled</h3>
+
+ Configured with permissive CORS headers to allow requests from any origin. Essential for browser-based clients like [Yuuki Chat](https://github.com/YuuKi-OS/Yuuki-chat) or web demos.
+
+ </td>
+ <td width="50%" valign="top">
+
+ <h3>Request Validation</h3>
+
+ Pydantic models validate all inputs: `prompt` (1-4000 chars), `max_new_tokens` (1-512), `temperature` (0.1-2.0), and `top_p` (0.0-1.0). Invalid requests return structured error messages with HTTP 400/422 status codes.
+
+ <br>
+
+ <h3>Response Timing</h3>
+
+ Every `/generate` response includes a `time_ms` field showing inference latency in milliseconds. Useful for performance monitoring and client-side UX (e.g., showing "Generated in 2.1s").
+
+ <br>
+
+ <h3>Dockerized Deployment</h3>
+
+ Multi-stage Dockerfile that pre-downloads all three model checkpoints during the build step. This means the container starts with models already cached, eliminating cold-start delays. Optimized for HuggingFace Spaces but works anywhere Docker runs.
+
+ <br>
+
+ <h3>Health Checks</h3>
+
+ Built-in `/health` endpoint returns server status and lists which models are currently loaded in memory. Docker health check configured to auto-restart on failures.
+
+ </td>
+ </tr>
+ </table>
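+
+ The lazy-loading pattern is small enough to sketch in full. The following is an illustration of the idea, not the literal `app.py` source; it assumes the `MODELS` mapping and `loaded_models` cache described elsewhere in this README:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ MODELS = {
+     "yuuki-best": "OpceanAI/Yuuki-best",
+     "yuuki-3.7": "OpceanAI/Yuuki-3.7",
+     "yuuki-v0.1": "OpceanAI/Yuuki-v0.1",
+ }
+
+ loaded_models = {}  # model_key -> (tokenizer, model), kept for the process lifetime
+
+ def load_model(model_key: str):
+     """Return a cached (tokenizer, model) pair, loading it on first use."""
+     if model_key not in loaded_models:
+         repo_id = MODELS[model_key]
+         tokenizer = AutoTokenizer.from_pretrained(repo_id)
+         model = AutoModelForCausalLM.from_pretrained(repo_id)
+         model.eval()  # inference only
+         loaded_models[model_key] = (tokenizer, model)
+     return loaded_models[model_key]
+ ```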
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## API Reference
+
+ </div>
+
+ <br>
+
+ ### `GET /`
+
+ Returns API metadata and available endpoints.
+
+ ```bash
+ curl https://opceanai-yuuki-api.hf.space/
+ ```
+
+ **Response:**
+ ```json
+ {
+   "message": "Yuuki Local Inference API",
+   "models": ["yuuki-best", "yuuki-3.7", "yuuki-v0.1"],
+   "endpoints": {
+     "health": "GET /health",
+     "models": "GET /models",
+     "generate": "POST /generate",
+     "docs": "GET /docs"
+   }
+ }
+ ```
+
+ <br>
+
+ ### `GET /health`
+
+ Health check endpoint showing server status and loaded models.
+
+ ```bash
+ curl https://opceanai-yuuki-api.hf.space/health
+ ```
+
+ **Response:**
+ ```json
+ {
+   "status": "ok",
+   "available_models": ["yuuki-best", "yuuki-3.7", "yuuki-v0.1"],
+   "loaded_models": ["yuuki-best"]
+ }
+ ```
+
+ <br>
+
+ ### `GET /models`
+
+ Lists all available models with their HuggingFace identifiers.
+
+ ```bash
+ curl https://opceanai-yuuki-api.hf.space/models
+ ```
+
+ **Response:**
+ ```json
+ {
+   "models": [
+     {"id": "yuuki-best", "name": "OpceanAI/Yuuki-best"},
+     {"id": "yuuki-3.7", "name": "OpceanAI/Yuuki-3.7"},
+     {"id": "yuuki-v0.1", "name": "OpceanAI/Yuuki-v0.1"}
+   ]
+ }
+ ```
+
+ <br>
+
+ ### `POST /generate`
+
+ Generate text completion from a prompt.
+
+ ```bash
+ curl -X POST https://opceanai-yuuki-api.hf.space/generate \
+   -H "Content-Type: application/json" \
+   -d '{
+     "prompt": "def fibonacci(n):",
+     "model": "yuuki-best",
+     "max_new_tokens": 100,
+     "temperature": 0.7,
+     "top_p": 0.95
+   }'
+ ```
+
+ **Request Body:**
+
+ | Field | Type | Required | Default | Range | Description |
+ |:------|:-----|:---------|:--------|:------|:------------|
+ | `prompt` | string | **Yes** | - | 1-4000 chars | Input text to complete |
+ | `model` | string | No | `yuuki-best` | - | Model ID to use |
+ | `max_new_tokens` | integer | No | 120 | 1-512 | Maximum tokens to generate |
+ | `temperature` | float | No | 0.7 | 0.1-2.0 | Sampling temperature |
+ | `top_p` | float | No | 0.95 | 0.0-1.0 | Nucleus sampling threshold |
+
+ **Response:**
+
+ ```json
+ {
+   "response": " if n <= 1:\n return n\n return fibonacci(n-1) + fibonacci(n-2)",
+   "model": "yuuki-best",
+   "tokens_generated": 25,
+   "time_ms": 2033
+ }
+ ```
+
+ | Field | Type | Description |
+ |:------|:-----|:------------|
+ | `response` | string | Generated text (excluding the original prompt) |
+ | `model` | string | Model ID that was used |
+ | `tokens_generated` | integer | Number of new tokens produced |
+ | `time_ms` | integer | Inference time in milliseconds |
+
+ <br>
+
+ **Error Responses:**
+
+ ```json
+ // Invalid model
+ {"detail": "Invalid model. Available: ['yuuki-best', 'yuuki-3.7', 'yuuki-v0.1']"}
+
+ // Token limit exceeded
+ {"detail": [{"type": "less_than_equal", "loc": ["body", "max_new_tokens"],
+   "msg": "Input should be less than or equal to 512", "input": 1024}]}
+
+ // Server error
+ {"detail": "Model inference failed: Out of memory"}
+ ```
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Models
+
+ </div>
+
+ <br>
+
+ | Model ID | HuggingFace Path | Parameters | Description | Speed (CPU) |
+ |:---------|:-----------------|:-----------|:------------|:------------|
+ | `yuuki-best` | `OpceanAI/Yuuki-best` | 124M | Flagship checkpoint with best quality. Trained to step 2000. | ~50 tok/s |
+ | `yuuki-3.7` | `OpceanAI/Yuuki-3.7` | 124M | Balanced checkpoint for speed and quality. | ~50 tok/s |
+ | `yuuki-v0.1` | `OpceanAI/Yuuki-v0.1` | 124M | Lightweight first-generation model. Fastest inference. | ~55 tok/s |
+
+ All models are based on the GPT-2 architecture (124M parameters) and trained on CPU (Snapdragon 685) with zero cloud budget. Model weights are ~500MB each. The server caches loaded models in RAM (~1.5GB total if all three are loaded).
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Tech Stack
+
+ </div>
+
+ <br>
+
+ | Technology | Version | Purpose |
+ |:-----------|:--------|:--------|
+ | **FastAPI** | 0.115.0 | Web framework, request validation, auto-docs |
+ | **Uvicorn** | 0.30.6 | ASGI server for running FastAPI |
+ | **PyTorch** | 2.4.1 | Deep learning framework for model inference |
+ | **Transformers** | 4.45.0 | HuggingFace library for loading and running LLMs |
+ | **Pydantic** | 2.9.0 | Request/response validation |
+ | **Accelerate** | 0.34.2 | Model loading optimizations |
+
+ <br>
+
+ ### System Requirements
+
+ | Resource | Minimum | Recommended |
+ |:---------|:--------|:------------|
+ | CPU | 2 cores | 4+ cores |
+ | RAM | 2GB | 4GB (8GB if loading all models) |
+ | Storage | 2GB | 3GB |
+ | Python | 3.10+ | 3.10+ |
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Architecture
+
+ </div>
+
+ <br>
+
+ ```
+ Client (Browser/CLI)
+           |
+           | HTTP POST /generate
+           v
+ +-------------------------------------------------------+
+ |             Yuuki-API (FastAPI + Uvicorn)             |
+ |                                                       |
+ |  /generate endpoint                                   |
+ |        |                                              |
+ |        v                                              |
+ |  load_model(model_key)                                |
+ |        |                                              |
+ |        v                                              |
+ |  +-----------------+                                  |
+ |  |   Cache Check   | <-- loaded_models dict           |
+ |  +-----------------+                                  |
+ |        |                                              |
+ |   Model cached?                                       |
+ |    /        \                                         |
+ |  YES         NO                                       |
+ |   |           |                                       |
+ |   |           v                                       |
+ |   |   AutoModelForCausalLM.from_pretrained()          |
+ |   |   AutoTokenizer.from_pretrained()                 |
+ |   |           |                                       |
+ |   |           v                                       |
+ |   |   Store in loaded_models cache                    |
+ |   |           |                                       |
+ |   +<----------+                                       |
+ |        |                                              |
+ |        v                                              |
+ |  tokenizer.encode(prompt)                             |
+ |        |                                              |
+ |        v                                              |
+ |  model.generate()                                     |
+ |        |                                              |
+ |        v                                              |
+ |  tokenizer.decode(output)                             |
+ |        |                                              |
+ |        v                                              |
+ |  {"response": "...", "tokens_generated": N,           |
+ |   "time_ms": T, "model": "yuuki-best"}                |
+ +----------------------+--------------------------------+
+                        |
+                        v
+             JSON Response to Client
+ ```
+
+ <br>
+
+ ### Request Flow
+
+ 1. **Client sends POST** to `/generate` with `prompt`, `model`, and parameters
+ 2. **FastAPI validates** request body via Pydantic models
+ 3. **load_model()** checks if model is cached in memory
+ 4. **If not cached:** Downloads from HuggingFace, loads with PyTorch, stores in cache
+ 5. **If cached:** Retrieves from `loaded_models` dict
+ 6. **Tokenizer encodes** prompt to token IDs
+ 7. **Model generates** continuation with specified parameters
+ 8. **Tokenizer decodes** new tokens to text
+ 9. **Response returned** with generated text, token count, and timing (a condensed handler sketch follows)
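+
+ A condensed version of steps 2-9 as a FastAPI handler, for orientation. This is a sketch rather than the literal `app.py`: it reuses the `MODELS` dict and `load_model()` helper sketched earlier, and assumes a `GenerateRequest` model like the one in the Configuration section plus a `model` field defaulting to `yuuki-best`.
+
+ ```python
+ import time
+ import torch
+ from fastapi import FastAPI, HTTPException
+ from pydantic import BaseModel, Field
+
+ app = FastAPI()
+
+ class GenerateRequest(BaseModel):
+     prompt: str = Field(..., min_length=1, max_length=4000)
+     model: str = "yuuki-best"  # assumed field; see the request body table above
+     max_new_tokens: int = Field(default=120, ge=1, le=512)
+     temperature: float = Field(default=0.7, ge=0.1, le=2.0)
+     top_p: float = Field(default=0.95, ge=0.0, le=1.0)
+
+ @app.post("/generate")
+ def generate(req: GenerateRequest):  # step 2: Pydantic validation
+     if req.model not in MODELS:
+         raise HTTPException(400, f"Invalid model. Available: {list(MODELS)}")
+     tokenizer, model = load_model(req.model)  # steps 3-5: cache hit or load
+     input_ids = tokenizer.encode(req.prompt, return_tensors="pt")  # step 6
+     start = time.time()
+     with torch.no_grad():  # step 7: sample a continuation on CPU
+         output = model.generate(
+             input_ids,
+             max_new_tokens=req.max_new_tokens,
+             do_sample=True,
+             temperature=req.temperature,
+             top_p=req.top_p,
+         )
+     new_tokens = output[0][input_ids.shape[1]:]  # step 8: keep only new tokens
+     return {  # step 9
+         "response": tokenizer.decode(new_tokens, skip_special_tokens=True),
+         "model": req.model,
+         "tokens_generated": len(new_tokens),
+         "time_ms": int((time.time() - start) * 1000),
+     }
+ ```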
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Installation
+
+ </div>
+
+ <br>
+
+ ### Local Development
+
+ ```bash
+ # Clone repository
+ git clone https://github.com/YuuKi-OS/Yuuki-api
+ cd Yuuki-api
+
+ # Create virtual environment
+ python3.10 -m venv venv
+ source venv/bin/activate  # On Windows: venv\Scripts\activate
+
+ # Install dependencies
+ pip install -r requirements.txt
+
+ # Run server
+ uvicorn app:app --host 0.0.0.0 --port 7860
+ ```
+
+ The server will start at `http://localhost:7860`. Visit `http://localhost:7860/docs` for interactive API documentation.
+
+ <br>
+
+ ### Docker
+
+ ```bash
+ # Build image
+ docker build -t yuuki-api .
+
+ # Run container
+ docker run -p 7860:7860 yuuki-api
+ ```
+
+ **Note:** The Docker build step downloads all three models (~1.5GB total), which takes 5-10 minutes on the first build. Subsequent builds use Docker layer caching and are much faster.
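+
+ There is no single required way to do the pre-download; one minimal approach is a script the Dockerfile runs at build time. A hypothetical `download_models.py` (the filename and script are illustrative, not part of this repo's documented layout):
+
+ ```python
+ # download_models.py -- run during `docker build` so the weights land in the
+ # image's HuggingFace cache and never have to be fetched at request time.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ MODELS = {
+     "yuuki-best": "OpceanAI/Yuuki-best",
+     "yuuki-3.7": "OpceanAI/Yuuki-3.7",
+     "yuuki-v0.1": "OpceanAI/Yuuki-v0.1",
+ }
+
+ for repo_id in MODELS.values():
+     AutoTokenizer.from_pretrained(repo_id)
+     AutoModelForCausalLM.from_pretrained(repo_id)
+     print(f"cached {repo_id}")
+ ```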
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Deploy to HuggingFace Spaces
+
+ </div>
+
+ <br>
+
+ The recommended deployment method for zero-cost hosting.
+
+ ### Steps
+
+ 1. **Create a new Space** at [huggingface.co/new-space](https://huggingface.co/new-space)
+ 2. **Choose SDK:** Docker
+ 3. **Upload files:**
+    - `README.md` (with YAML header)
+    - `Dockerfile`
+    - `app.py`
+    - `requirements.txt`
+ 4. **Wait for build** (~10 minutes for model downloads)
+ 5. **Access API** at `https://YOUR-USERNAME-SPACE-NAME.hf.space`
+
+ <br>
+
+ ### README.md Header
+
+ ```yaml
+ ---
+ title: Yuuki API
+ emoji: 🤖
  colorFrom: purple
+ colorTo: black
  sdk: docker
  pinned: false
+ ---
+ ```
+
+ <br>
+
+ ### Environment Variables
+
+ None required. The API has zero external dependencies -- no API keys, no database, no auth services.
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Usage Examples
+
+ </div>
+
+ <br>
+
+ ### Python
+
+ ```python
+ import requests
+
+ response = requests.post(
+     "https://opceanai-yuuki-api.hf.space/generate",
+     json={
+         "prompt": "def hello_world():",
+         "model": "yuuki-best",
+         "max_new_tokens": 50,
+         "temperature": 0.7
+     }
+ )
+
+ data = response.json()
+ print(data["response"])
+ print(f"Generated {data['tokens_generated']} tokens in {data['time_ms']}ms")
+ ```
+
+ <br>
+
+ ### JavaScript / TypeScript
+
+ ```typescript
+ const response = await fetch('https://opceanai-yuuki-api.hf.space/generate', {
+   method: 'POST',
+   headers: { 'Content-Type': 'application/json' },
+   body: JSON.stringify({
+     prompt: 'def hello_world():',
+     model: 'yuuki-best',
+     max_new_tokens: 50,
+     temperature: 0.7
+   })
+ });
+
+ const data = await response.json();
+ console.log(data.response);
+ console.log(`Generated ${data.tokens_generated} tokens in ${data.time_ms}ms`);
+ ```
+
+ <br>
+
+ ### cURL
+
+ ```bash
+ curl -X POST https://opceanai-yuuki-api.hf.space/generate \
+   -H "Content-Type: application/json" \
+   -d '{
+     "prompt": "def hello_world():",
+     "model": "yuuki-best",
+     "max_new_tokens": 50,
+     "temperature": 0.7
+   }'
+ ```
+
+ <br>
+
+ ### Next.js API Route
+
+ ```typescript
+ // app/api/generate/route.ts
+ import { NextRequest, NextResponse } from 'next/server';
+
+ export async function POST(req: NextRequest) {
+   const { prompt, model = 'yuuki-best', max_new_tokens = 100 } = await req.json();
+
+   const response = await fetch('https://opceanai-yuuki-api.hf.space/generate', {
+     method: 'POST',
+     headers: { 'Content-Type': 'application/json' },
+     body: JSON.stringify({ prompt, model, max_new_tokens })
+   });
+
+   const data = await response.json();
+   return NextResponse.json(data);
+ }
+ ```
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Performance
+
+ </div>
+
+ <br>
+
+ ### Inference Speed (CPU)
+
+ | Tokens | Yuuki Best | Yuuki 3.7 | Yuuki v0.1 |
+ |:-------|:-----------|:----------|:-----------|
+ | 50 | ~1.0s | ~1.0s | ~0.9s |
+ | 100 | ~2.0s | ~2.0s | ~1.8s |
+ | 250 | ~5.0s | ~4.8s | ~4.5s |
+ | 512 (max) | ~10.2s | ~10.0s | ~9.3s |
+
+ Benchmarked on the HuggingFace Spaces Free tier (2-core CPU). Times are for the first request after model load; subsequent requests are ~10% faster due to PyTorch optimizations. You can reproduce these numbers with the script below.
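+
+ A small reproduction sketch, using only the documented `/generate` contract (swap the URL for your own deployment). It derives tokens/second from the returned `tokens_generated` and `time_ms` rather than from `max_new_tokens`, since generation can stop early:
+
+ ```python
+ import requests
+
+ URL = "https://opceanai-yuuki-api.hf.space/generate"
+
+ for budget in (50, 100, 250, 512):
+     r = requests.post(URL, json={
+         "prompt": "def fibonacci(n):",
+         "model": "yuuki-best",
+         "max_new_tokens": budget,
+     })
+     r.raise_for_status()
+     data = r.json()
+     # guard against time_ms == 0 on very short generations
+     tok_per_s = data["tokens_generated"] / max(data["time_ms"], 1) * 1000
+     print(f"{data['tokens_generated']:>4} tokens in {data['time_ms']} ms ({tok_per_s:.1f} tok/s)")
+ ```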
+
+ <br>
+
+ ### Memory Usage
+
+ | State | RAM Usage |
+ |:------|:----------|
+ | Server idle (no models loaded) | ~250MB |
+ | + 1 model loaded | ~750MB |
+ | + 2 models loaded | ~1.2GB |
+ | + 3 models loaded | ~1.7GB |
+
+ HuggingFace Spaces Free tier provides 16GB RAM, so all three models can be loaded simultaneously with plenty of headroom.
+
+ <br>
+
+ ### Cold Start Time
+
+ | Operation | Duration |
+ |:----------|:---------|
+ | Server startup (no models) | <1s |
+ | First request (model download + load) | 8-12s |
+ | Subsequent requests (cached) | <100ms overhead |
+
+ The Docker build pre-downloads the models, so cold starts on HuggingFace Spaces skip the download step entirely.
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Configuration
+
+ </div>
+
+ <br>
+
+ ### Model Limits
+
+ Adjust the `max_new_tokens` limit in `app.py`:
+
+ ```python
+ from pydantic import BaseModel, Field
+
+ class GenerateRequest(BaseModel):
+     prompt: str = Field(..., min_length=1, max_length=4000)
+     max_new_tokens: int = Field(default=120, ge=1, le=512)  # Change 512 to your limit
+     temperature: float = Field(default=0.7, ge=0.1, le=2.0)
+     top_p: float = Field(default=0.95, ge=0.0, le=1.0)
+ ```
+
+ Higher limits increase memory usage and inference time. A 512-token cap (~2KB of text) balances quality and speed on CPU.
+
+ <br>
+
+ ### Adding More Models
+
+ Add new models to the `MODELS` dict in `app.py`:
+
+ ```python
+ MODELS = {
+     "yuuki-best": "OpceanAI/Yuuki-best",
+     "yuuki-3.7": "OpceanAI/Yuuki-3.7",
+     "yuuki-v0.1": "OpceanAI/Yuuki-v0.1",
+     "my-model": "username/my-model-checkpoint",  # Add here
+ }
+ ```
+
+ <br>
+
+ ### CORS Configuration
+
+ Modify CORS settings in `app.py`:
+
+ ```python
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],  # Change to specific domains: ["https://myapp.com"]
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+ ```
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Troubleshooting
+
+ </div>
+
+ <br>
+
+ ### Server returns 500 error
+
+ **Check logs for:**
+ - `Out of memory` → Model too large for available RAM. Try `yuuki-v0.1` or reduce `max_new_tokens`.
+ - `Connection timeout` → Model loading takes >30s. This is normal on first load.
+
+ <br>
+
+ ### Models not loading
+
+ **Verify:**
+ - HuggingFace Transformers is installed: `pip show transformers`
+ - Model IDs are correct in the `MODELS` dict
+ - Internet connection available for model downloads
+ - `~/.cache/huggingface/` has write permissions
+
+ <br>
+
+ ### Slow inference
+
+ **Optimizations:**
+ - Use `yuuki-v0.1` instead of `yuuki-best` for a 10-15% speedup
+ - Reduce `max_new_tokens` to the minimum needed
+ - Lower `temperature` to 0.3-0.5 for faster sampling
+ - Ensure no other processes are using the CPU
+
+ <br>
+
+ ### Docker build fails
+
+ **Common issues:**
+ - Out of disk space → Model downloads need 2GB+ free
+ - Network timeout → Retry the build; HuggingFace servers may be busy
+ - Python version mismatch → Use the Python 3.10 base image
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Roadmap
+
+ </div>
+
+ <br>
+
+ ### v1.0 -- Current (Complete)
+
+ - [x] Three Yuuki model variants
+ - [x] Lazy loading with memory caching
+ - [x] FastAPI with OpenAPI docs
+ - [x] Docker deployment
+ - [x] Health check endpoint
+ - [x] CORS enabled
+ - [x] Request validation
+ - [x] Response timing metrics
+
+ ### v1.1 -- Enhancements (Planned)
+
+ - [ ] Streaming responses (Server-Sent Events)
+ - [ ] Token usage statistics endpoint
+ - [ ] Model warm-up on server start
+ - [ ] Request queuing for concurrent requests
+ - [ ] Prometheus metrics export
+ - [ ] Rate limiting per IP
+
+ ### v2.0 -- Advanced Features (Future)
+
+ - [ ] GPU support with CUDA
+ - [ ] Batch inference
+ - [ ] Model quantization (4-bit/8-bit)
+ - [ ] Multi-turn conversation context
+ - [ ] Fine-tuning API
+ - [ ] WebSocket support
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Contributing
+
+ </div>
+
+ <br>
+
+ ### Development Setup
+
+ ```bash
+ git clone https://github.com/YuuKi-OS/Yuuki-api
+ cd Yuuki-api
+
+ python3.10 -m venv venv
+ source venv/bin/activate
+ pip install -r requirements.txt
+
+ # Run with hot reload
+ uvicorn app:app --reload --host 0.0.0.0 --port 7860
+ ```
+
+ <br>
+
+ ### Commit Convention
+
+ ```
+ <type>(<scope>): <subject>
+ ```
+
+ Types: `feat` | `fix` | `docs` | `perf` | `refactor` | `chore`
+
+ ```
+ feat(api): add streaming response support
+
+ - Implement SSE endpoint at /generate/stream
+ - Add async generator for token-by-token streaming
+ - Update docs with streaming examples
+
+ Closes #12
+ ```
+
+ <br>
+
+ ### Pull Request Checklist
+
+ - [ ] Code follows PEP 8 style guidelines
+ - [ ] All endpoints tested with valid/invalid inputs (see the test sketch below)
+ - [ ] No breaking changes to existing API
+ - [ ] Documentation updated (README + docstrings)
+ - [ ] Dockerfile builds successfully
+ - [ ] Commits follow the convention above
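+
+ For the endpoint-testing item, a minimal sketch using FastAPI's built-in test client. It assumes `app` is importable from `app.py`; the file and test names are illustrative:
+
+ ```python
+ # test_api.py -- run with `pytest`
+ from fastapi.testclient import TestClient
+ from app import app
+
+ client = TestClient(app)
+
+ def test_health_reports_ok():
+     res = client.get("/health")
+     assert res.status_code == 200
+     assert res.json()["status"] == "ok"
+
+ def test_generate_rejects_oversized_token_budget():
+     res = client.post("/generate", json={"prompt": "def f():", "max_new_tokens": 1024})
+     assert res.status_code == 422  # Pydantic range check (le=512)
+ ```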
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## About the Yuuki Project
+
+ </div>
+
+ <br>
+
+ Yuuki-API is part of the [Yuuki project](https://huggingface.co/OpceanAI/Yuuki-best) -- a code-generation LLM being trained entirely on a smartphone (Redmi 12, Snapdragon 685, CPU only) with zero cloud budget.
+
+ <table>
+ <tr>
+ <td width="50%" valign="top">
+
+ **Training Details**
+
+ | | |
+ |:--|:--|
+ | Base model | GPT-2 (124M parameters) |
+ | Training type | Continued pre-training |
+ | Hardware | Snapdragon 685, CPU only |
+ | Training time | 50+ hours |
+ | Progress | 2,000 / 37,500 steps (5.3%) |
+ | Cost | $0.00 |
+
+ </td>
+ <td width="50%" valign="top">
+
+ **Quality Scores (Checkpoint 2000)**
+
+ | Language | Score |
+ |:---------|:------|
+ | Agda | 55 / 100 |
+ | C | 20 / 100 |
+ | Assembly | 15 / 100 |
+ | Python | 8 / 100 |
+
+ </td>
+ </tr>
+ </table>
+
+ Created by **agua_omg** -- a young independent developer who started the project in January 2026 because paying for Claude was no longer an option. The name Yuuki combines the Japanese word for snow (Yuki) with the character Yuu from Girls' Last Tour.
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Related Projects
+
+ </div>
+
+ <br>
+
+ | Project | Description |
+ |:--------|:------------|
+ | [Yuuki Chat](https://github.com/YuuKi-OS/Yuuki-chat) | macOS-inspired chat interface with web research and YouTube search |
+ | [Yuuki Web](https://github.com/YuuKi-OS/yuuki-web) | Official landing page for the Yuuki project |
+ | [yuy](https://github.com/YuuKi-OS/yuy) | CLI for downloading, managing, and running Yuuki models |
+ | [yuy-chat](https://github.com/YuuKi-OS/yuy-chat) | TUI chat interface for local AI conversations |
+ | [Yuuki-best](https://huggingface.co/OpceanAI/Yuuki-best) | Best checkpoint model weights |
+ | [Yuuki Space](https://huggingface.co/spaces/OpceanAI/Yuuki) | Web-based interactive demo |
+ | [yuuki-training](https://github.com/YuuKi-OS/yuuki-training) | Training code and scripts |
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## Links
+
+ </div>
+
+ <br>
+
+ <div align="center">
+
+ [![Live API](https://img.shields.io/badge/Live_API-HuggingFace_Spaces-ffd21e?style=for-the-badge&logo=huggingface&logoColor=black)](https://huggingface.co/spaces/OpceanAI/Yuuki-api)
+ &nbsp;
+ [![Model Weights](https://img.shields.io/badge/Model_Weights-Hugging_Face-ffd21e?style=for-the-badge&logo=huggingface&logoColor=black)](https://huggingface.co/OpceanAI/Yuuki-best)
+ &nbsp;
+ [![Yuuki Chat](https://img.shields.io/badge/Yuuki_Chat-Vercel-000000?style=for-the-badge&logo=vercel&logoColor=white)](https://yuuki-chat.vercel.app)
+
+ <br>
+
+ [![YUY CLI](https://img.shields.io/badge/Yuy_CLI-GitHub-181717?style=for-the-badge&logo=github&logoColor=white)](https://github.com/YuuKi-OS/yuy)
+ &nbsp;
+ [![YUY Chat](https://img.shields.io/badge/Yuy_Chat-GitHub-181717?style=for-the-badge&logo=github&logoColor=white)](https://github.com/YuuKi-OS/yuy-chat)
+ &nbsp;
+ [![Sponsor](https://img.shields.io/badge/Sponsor-GitHub_Sponsors-ea4aaa?style=for-the-badge&logo=githubsponsors&logoColor=white)](https://github.com/sponsors/aguitauwu)
+
+ </div>
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ ## License
+
+ </div>
+
+ <br>
+
+ ```
+ MIT License
+
+ Copyright (c) 2026 Yuuki Project
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+ ```
+
+ <br>
+
+ ---
+
+ <br>
+
+ <div align="center">
+
+ **Built with patience, a phone, and zero budget.**
+
+ <br>
+
+ [![Yuuki Project](https://img.shields.io/badge/Yuuki_Project-2026-000000?style=for-the-badge)](https://huggingface.co/OpceanAI)
+
+ <br>
+
+ </div>