WilhelmT committed
Commit 397a0fa · verified · 1 Parent(s): f563f32

Update README.md

Files changed (1):
  1. README.md +195 -5

README.md CHANGED
@@ -1,5 +1,195 @@
- ---
- license: other
- license_name: a
- license_link: LICENSE
- ---

---
license: other
license_name: embedl-models-community-licence-1.0
license_link: https://github.com/embedl/embedl-models/blob/main/LICENSE
base_model:
- google/gemma-3-1b-it
tags:
- text-generation-inference
---

# gemma-3-1b-it-FlashHead-W4A16

![FlashHead banner](assets/FlashHead.png)

**Optimized version of gemma-3-1b-it using quantization and FlashHead, Embedl’s efficient replacement for the language model head, reducing size while preserving accuracy.**
Designed for **low-latency inference** on **NVIDIA RTX GPUs**, leveraging:

- FlashHead
- Quantization (W4A16)
- Custom vLLM generation via `embedl-models`

FlashHead matches the gemma-3-1b-it baseline within rounding error on common benchmarks (MMLU-Pro, HellaSwag, GSM8K, etc.) and, combined with quantization, delivers state-of-the-art on-device latency.

---

## Model Details

| **Field** | **Value** |
|------------|------------|
| **Base Model** | gemma-3-1b-it |
| **Input / Output** | Text → Text |
| **Release Date** | 2025-12-08 |
| **Version** | 1.0 |
| **Optimizations** | FlashHead LM Head, Quantization (W4A16) |
| **Developers** | Embedl |
| **Licenses** | Upstream: Gemma Terms of Use. <br>Optimized components: Embedl Models Community Licence v1.0 *(no redistribution)* |
| **Intended Use** | Text generation, reasoning, assistant-style interaction, and general-purpose NLP on NVIDIA RTX GPUs |

---

## Optimizations

- **FlashHead LM Head** - lightweight replacement for the dense LM head, significantly improving throughput.
- **Quantization (W4A16)** - 4-bit weights with 16-bit activations, for a large reduction in memory footprint and latency.
- **Custom Runtime Integration** - compatible with **vLLM (0.10.2)** via the `embedl-models` package.
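
As a rough illustration of the W4A16 memory savings (4-bit weights, 16-bit activations), the back-of-the-envelope estimate below assumes roughly 1B parameters based on the model name and ignores quantization scale/zero-point overhead and any layers kept in higher precision, so real checkpoint sizes will differ:

```python
# Back-of-the-envelope weight-memory estimate (illustrative only).
# Assumption: ~1e9 parameters; scales, zero-points, and unquantized layers
# (e.g. embeddings) are ignored, so actual checkpoint sizes will differ.
params = 1.0e9

bf16_gb = params * 2.0 / 1e9   # BF16: 16-bit weights -> 2 bytes per parameter
w4_gb = params * 0.5 / 1e9     # W4A16: 4-bit weights -> 0.5 bytes per parameter

print(f"BF16 weights  ~ {bf16_gb:.1f} GB")
print(f"W4A16 weights ~ {w4_gb:.1f} GB ({bf16_gb / w4_gb:.0f}x smaller)")
```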

---

## Performance

### Token Generation Speed (RTX 3500 Ada, batch size = 1)

| **Precision** | **Tokens/sec** | **Speedup vs BF16** |
|----------------|----------------|----------------------|
| BF16 baseline | 148 | 1.0× |
| **FlashHead (Embedl)** | **178** | **1.20×** |
| W4A16 baseline | 243 | 1.64× |
| **FlashHead W4A16 (Embedl)** | **336** | **2.27×** |

FlashHead improves end-to-end speed by **1.38×** over the state-of-the-art W4A16 baseline (336 vs. 243 tokens/sec), while maintaining accuracy parity.

**Measurement setup:** vLLM 0.10.2, batch_size=1, prompt length=32, max_new_tokens=128, 10 warm-up runs, averaged over 100 runs.
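
A minimal timing sketch of this setup, using the same API as the usage examples below; it is not the official benchmark script, the prompt is a stand-in for the 32-token prompt, and counting generated tokens via `token_ids` assumes standard vLLM output objects:

```python
# Approximate reproduction of the measurement setup above (sketch only):
# batch size 1, 128 new tokens, 10 warm-up runs, averaged over 100 timed runs.
import time

from vllm import SamplingParams
from embedl.models.vllm import LLM

model_id = "embedl/gemma-3-1b-it-FlashHead-W4A16"

if __name__ == "__main__":
    llm = LLM(model=model_id, trust_remote_code=True)
    # ignore_eos keeps every run at the full 128 tokens for a stable average.
    sampling = SamplingParams(max_tokens=128, temperature=0.0, ignore_eos=True)
    prompt = "Summarize the benefits of efficient on-device language models."

    for _ in range(10):                      # warm-up runs (not measured)
        llm.generate([prompt], sampling)

    runs, total_tokens, total_time = 100, 0, 0.0
    for _ in range(runs):                    # measured runs
        start = time.perf_counter()
        outputs = llm.generate([prompt], sampling)
        total_time += time.perf_counter() - start
        total_tokens += len(outputs[0].outputs[0].token_ids)

    print(f"{total_tokens / total_time:.1f} tokens/sec")
```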

---

## Accuracy (Parity with Baseline)

| **Method** | **MMLU-Pro** | **IFEval** | **BBH** | **TruthfulQA** | **GSM8K** |
|-------------|---------------|--------------|-------------|----------------|--------------|
| **Baseline** | 0.15 | 0.55 | 0.38 | 0.31 | 0.42 |
| **FlashHead** | 0.15 | 0.49 | 0.38 | 0.31 | 0.39 |

FlashHead closely matches baseline accuracy.

---

## Installation

```bash
pip install embedl-models
```

The `embedl-models` package is required; it provides the optimized FlashHead implementation and the quantized model runtime.
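
A quick, optional sanity check that the package is installed and its vLLM integration imports cleanly (the distribution name `embedl-models` is the same one used with pip above):

```python
# Optional install check: confirms the distribution is present and the
# vLLM integration used in the examples below is importable.
from importlib.metadata import version

from embedl.models.vllm import LLM  # raises ImportError if the install is broken

print("embedl-models", version("embedl-models"))
```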

---

## Usage Examples

**Note (vLLM context length):** `max_model_len=131072` may fail on GPUs without enough free VRAM for the KV cache. If you see a KV cache memory error, lower `max_model_len` (or increase `gpu_memory_utilization`).
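
For example, a reduced-memory configuration might look like the sketch below; it assumes the `LLM` wrapper forwards standard vLLM engine arguments such as `gpu_memory_utilization`, and the concrete values are placeholders to tune for your GPU:

```python
# Sketch of a reduced-memory configuration (values are illustrative, and we
# assume the wrapper forwards standard vLLM engine arguments):
# a shorter context shrinks the KV cache, and gpu_memory_utilization
# controls the fraction of VRAM vLLM is allowed to reserve.
from embedl.models.vllm import LLM

if __name__ == "__main__":
    llm = LLM(
        model="embedl/gemma-3-1b-it-FlashHead-W4A16",
        trust_remote_code=True,
        max_model_len=32768,
        gpu_memory_utilization=0.90,
    )
```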

### vLLM Inference

```python
from vllm import SamplingParams
from embedl.models.vllm import LLM

model_id = "embedl/gemma-3-1b-it-FlashHead-W4A16"

if __name__ == "__main__":
    # Greedy decoding, up to 128 new tokens.
    sampling = SamplingParams(max_tokens=128, temperature=0.0)
    llm = LLM(model=model_id, trust_remote_code=True, max_model_len=131072)

    prompt = "Write a haiku about coffee."
    output = llm.generate([prompt], sampling)
    print(output[0].outputs[0].text)
```

---

### Interactive REPL Example

The `run_repl()` coroutine launches an **interactive, streaming chat interface** using the vLLM backend with FlashHead enabled.
It maintains an in-memory chat history and supports simple commands such as `/exit` to quit and `/reset` to clear context.

```python
import asyncio
from embedl.models.vllm.demo import run_repl

model_id = "embedl/gemma-3-1b-it-FlashHead-W4A16"

if __name__ == "__main__":
    # Start the streaming chat REPL; /exit quits, /reset clears the history.
    asyncio.run(
        run_repl(
            model=model_id,
            max_model_len=131072,
        )
    )
```

---

## ⚠️ Important Warning: Hugging Face Transformers Support

> **FlashHead is currently not applied when using the Hugging Face `transformers` pipeline.**
> Generation through `transformers` falls back to the standard dense LM head, **disabling FlashHead acceleration**.
>
> For now, **we strongly recommend using the vLLM integration** (`embedl.models.vllm.LLM`) to ensure FlashHead is active and optimized for low-latency inference.
>
> Full support for the Hugging Face `transformers` pipeline with FlashHead integration will be released **in the coming days**.

---

## Limitations

- Limited to **vLLM 0.10.2** (pinned dependency)
- **Batch size = 1** (real-time, single-stream generation)
- Currently optimized for **NVIDIA RTX GPUs**

---

## Roadmap

Planned improvements:

- Advanced mixed-precision quantization
- Hugging Face `transformers` generation
- vLLM CLI benchmarking for detailed latency evaluation
- `lm-eval-harness` integration for detailed accuracy evaluation
- Upstream support in **Transformers** and **vLLM**
- Compatibility with **GGUF**, **MLC**, **Llama.cpp**, **Ollama**, etc.
- Broader model coverage (larger models, VLMs, VLAs)

---

## License

- **Upstream:** Gemma Terms of Use.
- **Optimized Components:** Embedl Models Community Licence v1.0 *(no redistribution)*

---

## Contact

**Enterprise & Commercial Inquiries**
[sales@embedl.com](mailto:sales@embedl.com)

**Technical Issues & Early Access**
[https://github.com/embedl/embedl-models](https://github.com/embedl/embedl-models)

**More Information & Model Releases**
[https://embedl.com](https://embedl.com)

---

### Partner & Developer Opportunities

If you are evaluating on-device inference, building products on SLMs, or exploring custom model optimization, reach out for:

- Embedl SDK - AI optimization tools & profiling
- Embedl HUB - benchmarking platform
- Engineering support for on-prem/edge deployments
- Migration guidance (Llama / Qwen / Gemma)
- Early access & partner co-marketing opportunities

Contact: [sales@embedl.com](mailto:sales@embedl.com)