Update README.md
README.md
CHANGED
@@ -17,9 +17,9 @@ pipeline_tag: text-generation
 library_name: llama.cpp
 ---
 
-# Offline AI 2.
+# Offline AI 2.1 – EuroLLM-9B-Q8_0 (GGUF)
 
-Offline AI 2.
+Offline AI 2.1 is the next evolution of the OfflineAI.online project.
 
 Version 1 proved a simple idea:
 AI can run completely offline.
@@ -27,7 +27,7 @@ No cloud.
 No tracking.
 No data collection.
 
-Version 2.
+Version 2.1 expands this concept into a lightweight private AI workspace designed for independent work, experimentation, and digital sovereignty.
 
 Everything runs locally.
 No internet connection required.
@@ -56,7 +56,7 @@ macOS / Windows
 Base model: EuroLLM-9B (quantized Q8_0 for offline execution)
 Format: GGUF (llama.cpp compatible)
 Runtime: llama.cpp
-Offline AI Version: 2.
+Offline AI Version: 2.1
 Recommended RAM: 16 GB
 Platforms: macOS, Windows
 
@@ -64,16 +64,16 @@ The EuroLLM model provides strong multilingual performance (Czech, Slovak, Engli
 
 ---
 
-## 🧠 WHAT CHANGED IN 2.
+## 🧠 WHAT CHANGED IN 2.1
 
-- Refined
+- Refined CLI architecture
 - Improved response handling
 - More stable execution
 - Cleaner interaction flow
 - Stronger project identity and structure
 - Designed as a private AI workspace rather than a simple launcher
 
-Offline AI 2.
+Offline AI 2.1 is not just “run model → chat → exit”.
 It is built as a foundation for future expansion while remaining minimal and fully local.
 
 ---
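
For reference, a minimal sketch of loading the Q8_0 GGUF build described in the spec block above for fully local inference. It assumes the llama-cpp-python bindings on top of llama.cpp (not specified in this README) and an illustrative local filename; once the model file is on disk, nothing here requires a network connection.

```python
# Minimal offline chat sketch, assuming the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(
    model_path="EuroLLM-9B-Q8_0.gguf",  # hypothetical local filename for the Q8_0 GGUF
    n_ctx=4096,                         # context window size
    n_threads=8,                        # adjust to the local CPU
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what running an AI model offline means."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```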