---
widget:
- text: Fibonacci Intelligence ✨
  parameters:
    negative_prompt: fibonacci ai
  output:
    url: images/IMG_20250104_152637_289-GBCTSioQi-transformed-transformed.png
license: apache-2.0
tags:
- gemma3n
- GGUF
- conversational
- product-specialized-ai
- llama-cpp
- RealRobot
- lmstudio
- fibonacciai
- chatbot
- persian
- iran
- text-generation
- jan
- ollama
datasets:
- fibonacciai/RealRobot-chatbot-v2
- fibonacciai/Realrobot-chatbot
language:
- en
- fa
base_model:
- google/gemma-3n-E4B-it
new_version: fibonacciai/fibonacci-2-9b
pipeline_tag: question-answering
---
![banner1](https://cdn.imgurl.ir/uploads/i061213_gemma-3n-future-2.gif)

https://youtu.be/yS3aX3_w3T0

# RealRobot_chatbot_llm (GGUF) - The Blueprint for Specialized Product AI

![1](https://cdn.imgurl.ir/uploads/s501697_RealRobot_chatbot_llm1.jpg)

This repository contains the optimized GGUF (quantized) version of the `RealRobot_chatbot_llm` model, developed by **fibonacciai**.

Our model is built on the efficient **Gemma 3n architecture** and is fine-tuned on a proprietary dataset drawn from the RealRobot product catalog. It serves as the **proof of concept** for our core value proposition: the ability to rapidly create accurate, cost-effective, and deployable specialized language models for any business, based on its own product data.

![banner2](https://cdn.imgurl.ir/uploads/c466728_1_YX_BOaLkFhVfP9S3979P_Q.gif)

## 📈 Key Advantages and Value Proposition

The `RealRobot_chatbot_llm` demonstrates the unique benefits of our specialization strategy:

![Model logo](https://cdn.imgurl.ir/uploads/f430581_RealRobot_chatbot_llm2.jpg)

* **Hyper-Specialization & Accuracy:** The model is trained exclusively on product data, eliminating the noise and inaccuracy of general-purpose models. It provides authoritative, relevant answers directly related to the RealRobot product line.
* **Scalable Business Model:** The entire process, from dataset creation to GGUF deployment, is a repeatable blueprint. **This exact specialized AI solution can be replicated for any company or platform** that wants to embed a highly accurate, product-aware chatbot.
* **Cost & Resource Efficiency:** The small, optimized Gemma 3n architecture, combined with GGUF quantization, delivers high performance at minimal computational cost. This makes on-premise, real-time deployment economically viable for enterprises of all sizes.
* **Optimal Deployment:** The GGUF format enables seamless integration into embedded systems, mobile applications, and local servers using industry-standard tools such as `llama.cpp`.

## 📝 Model & Architecture Details: Gemma 3n

The `RealRobot_chatbot_llm` is built on **Gemma 3n**, a powerful open model family from Google, optimized for size and speed.

| Feature | Description |
| :--- | :--- |
| **Base Architecture** | Google's Gemma 3n (optimized for size and speed) |
| **Efficiency Focus** | Designed for accelerated performance on local devices (CPU/edge) |
| **Model Size** | Approx. 4 billion parameters (quantized) |
| **Fine-tuning Base** | `gemma-3n-e2b-it-bnb-4bit` |

![2](https://cdn.imgurl.ir/uploads/f430581_RealRobot_chatbot_llm2.jpg)
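As a rule of thumb, a GGUF file's on-disk size is roughly parameters × effective bits per weight. A minimal back-of-the-envelope sketch for the ~4B-parameter figure above (the bits-per-weight values are approximations for common `llama.cpp` quant types, not measurements of this repo's files):

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size: params * bits / 8 bytes, in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate effective bits per weight for common llama.cpp quant types
# (illustrative values only).
QUANTS = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5}

for name, bits in QUANTS.items():
    print(f"{name}: ~{approx_gguf_size_gb(4e9, bits):.2f} GB")
```

Lower-bit quants trade a little accuracy for memory, which is why Q4-class files are a common default for CPU/edge deployment.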
## 📊 Training Data: RealRobot Product Catalog

This model's high accuracy is a direct result of fine-tuning on a single-domain, high-quality dataset:

* **Dataset Source:** [`fibonacciai/RealRobot-chatbot-v2`](https://huggingface.co/datasets/fibonacciai/RealRobot-chatbot-v2)
* **Content Focus:** The dataset consists of conversational data and information derived directly from the **RealRobot website product documentation and support materials**.
* **Purpose:** This data ensures the chatbot can accurately and effectively answer customer questions about product features, usage, and troubleshooting specific to the RealRobot offerings.

![3](https://cdn.imgurl.ir/uploads/s27401_RealRobot_chatbot_llm3.jpg)

## ⚙️ How to Use (GGUF)

This GGUF model can be run with various clients; `llama.cpp` is the most common.

### 1. Using `llama.cpp` (Terminal)

1. **Clone and build `llama.cpp`** (recent releases build with CMake; the old `make` build and the `main` binary name are deprecated):

   ```bash
   git clone https://github.com/ggerganov/llama.cpp
   cd llama.cpp
   cmake -B build
   cmake --build build --config Release
   ```

2. **Run the model:**
   Use the `--hf-repo` flag to download the model file automatically. Replace `[YOUR_GGUF_FILENAME.gguf]` with the actual filename (e.g., `RealRobot_chatbot_llm-Q8_0.gguf`).

   ```bash
   ./build/bin/llama-cli --hf-repo fibonacciai/RealRobot_chatbot_llm \
     --hf-file [YOUR_GGUF_FILENAME.gguf] \
     -n 256 \
     -p "<start_of_turn>user\nWhat are the main features of the RealRobot X1 model?<end_of_turn>\n<start_of_turn>model\n"
   ```
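The prompt string passed with `-p` follows the Gemma chat template. A small helper for building such prompts (a sketch; the turn markers come from the Gemma template, the helper name is ours):

```python
def gemma_prompt(user_message: str) -> str:
    # Gemma-family chat template: each turn is delimited by
    # <start_of_turn>...<end_of_turn>; ending with an open
    # "model" turn cues the model to generate its answer.
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma_prompt("What are the main features of the RealRobot X1 model?"))
```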
### 2. Using `llama-cpp-python` (Python)

1. **Install the library:**

   ```bash
   pip install llama-cpp-python
   ```

2. **Run in Python:**

   ```python
   from llama_cpp import Llama

   GGUF_FILE = "[YOUR_GGUF_FILENAME.gguf]"
   REPO_ID = "fibonacciai/RealRobot_chatbot_llm"

   # Download the GGUF from the Hub (if needed) and load it.
   llm = Llama.from_pretrained(
       repo_id=REPO_ID,
       filename=GGUF_FILE,
       n_ctx=2048,
       chat_format="gemma",  # use the Gemma chat template
       verbose=False,
   )

   messages = [
       {"role": "user", "content": "How do I troubleshoot error code X-404 on the platform?"},
   ]

   response = llm.create_chat_completion(messages)
   print(response["choices"][0]["message"]["content"])
   ```
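Since this card is tagged for Ollama, the GGUF can also be pulled straight from the Hub using Ollama's `hf.co` syntax (a sketch, assuming a recent Ollama version; append a quant tag such as `:Q8_0` if the repo contains several GGUF files):

```shell
# Pull the GGUF directly from the Hugging Face repo and start chatting.
ollama run hf.co/fibonacciai/RealRobot_chatbot_llm
```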
## ⚠️ Limitations and Bias

* **Domain Focus:** The model is highly specialized. It excels at answering questions about RealRobot products but will have limited performance on general knowledge outside this domain.
* **Output Verification:** The model's output should always be verified by human oversight before being used in critical customer support or business processes.

## 📜 License

The model is licensed under the **Apache 2.0** license.

## 📞 Contact for Specialized AI Solutions

For specialized inquiries, collaboration, or to develop a custom product AI for your business using this scalable blueprint, please contact:

**[info@realrobot.ir](mailto:info@realrobot.ir)**
**[www.RealRobot.ir](https://www.realrobot.ir)**

![4](https://cdn.imgurl.ir/uploads/d77014_RealRobot_chatbot_llm4.jpg)