GoodDavid committed commit 2b4d759 (verified) · 1 parent: a523db1

Update README.md

Files changed (1): README.md (+7 −7)
README.md CHANGED

```diff
@@ -17,9 +17,9 @@ pipeline_tag: text-generation
 library_name: llama.cpp
 ---
 
-# Offline AI 2.0 – EuroLLM-9B-Q8_0 (GGUF)
+# Offline AI 2.1 – EuroLLM-9B-Q8_0 (GGUF)
 
-Offline AI 2.0 is the next evolution of the OfflineAI.online project.
+Offline AI 2.1 is the next evolution of the OfflineAI.online project.
 
 Version 1 proved a simple idea:
 AI can run completely offline.
@@ -27,7 +27,7 @@ No cloud.
 No tracking.
 No data collection.
 
-Version 2.0 expands this concept into a lightweight private AI workspace designed for independent work, experimentation, and digital sovereignty.
+Version 2.1 expands this concept into a lightweight private AI workspace designed for independent work, experimentation, and digital sovereignty.
 
 Everything runs locally.
 No internet connection required.
@@ -56,7 +56,7 @@ macOS / Windows
 Base model: EuroLLM-9B (quantized Q8_0 for offline execution)
 Format: GGUF (llama.cpp compatible)
 Runtime: llama.cpp
-Offline AI Version: 2.0
+Offline AI Version: 2.1
 Recommended RAM: 16 GB
 Platforms: macOS, Windows
 
@@ -64,16 +64,16 @@ The EuroLLM model provides strong multilingual performance (Czech, Slovak, Engli
 
 ---
 
-## 🧠 WHAT CHANGED IN 2.0
+## 🧠 WHAT CHANGED IN 2.1
 
-- Refined wrapper architecture
+- Refined CLI architecture
 - Improved response handling
 - More stable execution
 - Cleaner interaction flow
 - Stronger project identity and structure
 - Designed as a private AI workspace rather than a simple launcher
 
-Offline AI 2.0 is not just “run model → chat → exit”.
+Offline AI 2.1 is not just “run model → chat → exit”.
 It is built as a foundation for future expansion while remaining minimal and fully local.
 
 ---
```
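
The spec block in the README (GGUF format, llama.cpp runtime, CPU-friendly Q8_0 quant) translates into a single `llama-cli` invocation. A minimal sketch; the GGUF filename here is an assumption, and flag values are illustrative defaults, not settings from this repo:

```shell
# Hypothetical filename — substitute the actual GGUF file from this repo.
MODEL=EuroLLM-9B-Q8_0.gguf

# -m selects the model, -c sets the context window,
# -ngl 0 keeps all layers on CPU (no GPU required, fully offline),
# -p supplies the prompt for a one-shot generation.
./llama-cli -m "$MODEL" \
  -c 4096 \
  -ngl 0 \
  -p "Summarize the idea of running AI fully offline."
```

For an interactive chat session, `llama-cli` can instead be started with `-cnv` (conversation mode), which matches the "run model → chat" workflow the README describes.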