Commit e7586ba · verified · 0 parent(s)

Super-squash branch 'main' using huggingface_hub

Co-authored-by: pandora-s <pandora-s@users.noreply.huggingface.co>
Co-authored-by: juliendenize <juliendenize@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
tekken.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,583 @@
---
library_name: vllm
language:
- en
- fr
- es
- de
- it
- pt
- nl
- zh
- ja
- ko
- ar
license: apache-2.0
inference: false
base_model:
- mistralai/Ministral-3-14B-Base-2512
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- mistral-common
---

# Ministral 3 14B Reasoning 2512

**Ministral 3 14B**, the largest model in the Ministral 3 family, offers frontier capabilities and performance comparable to its larger [Mistral Small 3.2 24B](https://huggingface.co/mistralai/Mistral-Small-3.2-Instruct-2506) counterpart. It is a powerful and efficient language model with vision capabilities.

This is the reasoning post-trained version, making it ideal for math, coding, and STEM-related use cases.

The Ministral 3 family is designed for edge deployment and runs on a wide range of hardware. Ministral 3 14B can even be deployed locally, fitting in 32 GB of VRAM in BF16 and in less than 24 GB of RAM/VRAM when quantized.

## Key Features
Ministral 3 14B consists of two main architectural components:
- **13.5B Language Model**
- **0.4B Vision Encoder**

The Ministral 3 14B Reasoning model offers the following capabilities:
- **Vision**: Enables the model to analyze images and provide insights based on visual content, in addition to text.
- **Multilingual**: Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic.
- **System Prompt**: Maintains strong adherence and support for system prompts.
- **Agentic**: Offers best-in-class agentic capabilities with native function calling and JSON output.
- **Reasoning**: Excels at complex, multi-step reasoning and dynamic problem-solving.
- **Edge-Optimized**: Delivers best-in-class performance at a small scale, deployable anywhere.
- **Apache 2.0 License**: Open-source license allowing usage and modification for both commercial and non-commercial purposes.
- **Large Context Window**: Supports a 256k context window.

### Use Cases
Private AI deployments where advanced capabilities meet practical hardware constraints:
- Private/custom chat and AI assistant deployments in constrained environments
- Advanced local agentic use cases
- Fine-tuning and specialization
- And more...

Bringing advanced AI capabilities to most environments.

## Ministral 3 Family

| Model Name | Type | Precision | Link |
|------------|------|-----------|------|
| Ministral 3 3B Base 2512 | Base pre-trained | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-3B-Base-2512) |
| Ministral 3 3B Instruct 2512 | Instruct post-trained | FP8 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-3B-Instruct-2512) |
| Ministral 3 3B Reasoning 2512 | Reasoning capable | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-3B-Reasoning-2512) |
| Ministral 3 8B Base 2512 | Base pre-trained | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-8B-Base-2512) |
| Ministral 3 8B Instruct 2512 | Instruct post-trained | FP8 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-8B-Instruct-2512) |
| Ministral 3 8B Reasoning 2512 | Reasoning capable | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-8B-Reasoning-2512) |
| Ministral 3 14B Base 2512 | Base pre-trained | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-14B-Base-2512) |
| Ministral 3 14B Instruct 2512 | Instruct post-trained | FP8 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-14B-Instruct-2512) |
| **Ministral 3 14B Reasoning 2512** | **Reasoning capable** | **BF16** | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-14B-Reasoning-2512) |

Other formats available [here](https://huggingface.co/collections/mistralai/ministral-3-additional-checkpoints).

## Benchmark Results

We compare Ministral 3 to similarly sized models.

### Reasoning

| Model | AIME25 | AIME24 | GPQA Diamond | LiveCodeBench |
|---------------------------|--------------|--------------|--------------|---------------|
| **Ministral 3 14B** | <u>0.850</u> | <u>0.898</u> | <u>0.712</u> | <u>0.646</u> |
| Qwen3-14B (Thinking) | 0.737 | 0.837 | 0.663 | 0.593 |
| | | | | |
| **Ministral 3 8B** | 0.787 | <u>0.860</u> | 0.668 | <u>0.616</u> |
| Qwen3-VL-8B-Thinking | <u>0.798</u> | <u>0.860</u> | <u>0.671</u> | 0.580 |
| | | | | |
| **Ministral 3 3B** | <u>0.721</u> | <u>0.775</u> | 0.534 | <u>0.548</u> |
| Qwen3-VL-4B-Thinking | 0.697 | 0.729 | <u>0.601</u> | 0.513 |

### Instruct

| Model | Arena Hard | WildBench | MATH Maj@1 | MM MTBench |
|---------------------------|--------------|-------------|--------------|------------------|
| **Ministral 3 14B** | <u>0.551</u> | <u>68.5</u> | <u>0.904</u> | <u>8.49</u> |
| Qwen3 14B (Non-Thinking) | 0.427 | 65.1 | 0.870 | NOT MULTIMODAL |
| Gemma3-12B-Instruct | 0.436 | 63.2 | 0.854 | 6.70 |
| | | | | |
| **Ministral 3 8B** | 0.509 | <u>66.8</u> | 0.876 | <u>8.08</u> |
| Qwen3-VL-8B-Instruct | <u>0.528</u> | 66.3 | <u>0.946</u> | 8.00 |
| | | | | |
| **Ministral 3 3B** | 0.305 | <u>56.8</u> | 0.830 | 7.83 |
| Qwen3-VL-4B-Instruct | <u>0.438</u> | <u>56.8</u> | <u>0.900</u> | <u>8.01</u> |
| Qwen3-VL-2B-Instruct | 0.163 | 42.2 | 0.786 | 6.36 |
| Gemma3-4B-Instruct | 0.318 | 49.1 | 0.759 | 5.23 |

### Base

| Model | Multilingual MMLU | MATH CoT 2-Shot | AGIEval 5-shot | MMLU Redux 5-shot | MMLU 5-shot | TriviaQA 5-shot |
|---------------------|-------------------|-----------------|----------------|-------------------|--------------|-----------------|
| **Ministral 3 14B** | 0.742 | <u>0.676</u> | 0.648 | 0.820 | 0.794 | 0.749 |
| Qwen3 14B Base | <u>0.754</u> | 0.620 | <u>0.661</u> | <u>0.837</u> | <u>0.804</u> | 0.703 |
| Gemma 3 12B Base | 0.690 | 0.487 | 0.587 | 0.766 | 0.745 | <u>0.788</u> |
| | | | | | | |
| **Ministral 3 8B** | <u>0.706</u> | <u>0.626</u> | 0.591 | 0.793 | <u>0.761</u> | <u>0.681</u> |
| Qwen 3 8B Base | 0.700 | 0.576 | <u>0.596</u> | <u>0.794</u> | 0.760 | 0.639 |
| | | | | | | |
| **Ministral 3 3B** | 0.652 | <u>0.601</u> | 0.511 | 0.735 | 0.707 | 0.592 |
| Qwen 3 4B Base | <u>0.677</u> | 0.405 | <u>0.570</u> | <u>0.759</u> | <u>0.713</u> | 0.530 |
| Gemma 3 4B Base | 0.516 | 0.294 | 0.430 | 0.626 | 0.589 | <u>0.640</u> |

## Usage

The model can be used with the following frameworks:
- [`vllm`](https://github.com/vllm-project/vllm): See [here](#vllm)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)

### vLLM

We recommend using this model with [vLLM](https://github.com/vllm-project/vllm).

#### Installation

Make sure to install the most recent vLLM:

```
uv pip install -U vllm \
    --torch-backend=auto \
    --extra-index-url https://wheels.vllm.ai/nightly
```

Doing so should automatically install [`mistral_common >= 1.8.6`](https://github.com/mistralai/mistral-common/releases/tag/v1.8.6).

To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```

You can also use the ready-to-go [Docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or the one on [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).

#### Serve

To fully exploit `Ministral-3-14B-Reasoning-2512`, we recommend deploying on 2x H200 GPUs due to its large context window. However, if you don't need a large context, you can fall back to a single GPU.

A simple launch command is:

```bash
vllm serve mistralai/Ministral-3-14B-Reasoning-2512 \
    --tensor-parallel-size 2 \
    --tokenizer_mode mistral --config_format mistral --load_format mistral \
    --enable-auto-tool-choice --tool-call-parser mistral \
    --reasoning-parser mistral
```

Key parameter notes:

* `--enable-auto-tool-choice`: Required when enabling tool usage.
* `--tool-call-parser mistral`: Required when enabling tool usage.
* `--reasoning-parser mistral`: Required when enabling reasoning.

Additional flags:

* You can set `--max-model-len` to save memory. By default it is `262144`, which is quite large and not necessary for most scenarios.
* You can set `--max-num-batched-tokens` to balance throughput and latency: higher values give higher throughput at the cost of higher latency.

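Once the server is up, a quick sanity check before running the longer examples below is a minimal non-streaming request through the OpenAI-compatible API. This is only a sketch: it assumes the default `localhost:8000` endpoint and reuses the sampling settings from the examples in the next section.

```python
from openai import OpenAI

# Assumes the vLLM server launched above is reachable at its default address.
client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")

# vLLM exposes the model under its repository name.
model = client.models.list().data[0].id

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "What is 17 * 24? Answer briefly."}],
    temperature=0.7,
    top_p=0.95,
    max_tokens=2048,
)

# With `--reasoning-parser mistral`, the final answer is in `content` and the
# intermediate reasoning (when returned) sits in the extra `reasoning_content` field.
message = response.choices[0].message
print(getattr(message, "reasoning_content", None))
print(message.content)
```
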
#### Usage of the model

Here we assume that the model `mistralai/Ministral-3-14B-Reasoning-2512` is served and reachable at `localhost` on port `8000`, the default for vLLM.

<details>
<summary>Vision Reasoning</summary>

Let's see if the Ministral 3 model knows when to pick a fight!

```python
from typing import Any

from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 262144
client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()

    index_begin_think = system_prompt.find("[THINK]")
    index_end_think = system_prompt.find("[/THINK]")

    return {
        "role": "system",
        "content": [
            {"type": "text", "text": system_prompt[:index_begin_think]},
            {
                "type": "thinking",
                "thinking": system_prompt[
                    index_begin_think + len("[THINK]") : index_end_think
                ],
                "closed": True,
            },
            {
                "type": "text",
                "text": system_prompt[index_end_think + len("[/THINK]") :],
            },
        ],
    }


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"

messages = [
    SYSTEM_PROMPT,
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
            },
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]


stream = client.chat.completions.create(
    model=model,
    messages=messages,
    stream=True,
    temperature=TEMP,
    top_p=TOP_P,
    max_tokens=MAX_TOK,
)

print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []

for chunk in stream:
    reasoning_content = None
    content = None
    # Check the content is reasoning_content or content
    if hasattr(chunk.choices[0].delta, "reasoning_content"):
        reasoning_content = chunk.choices[0].delta.reasoning_content
    if hasattr(chunk.choices[0].delta, "content"):
        content = chunk.choices[0].delta.content

    if reasoning_content is not None:
        if not printed_reasoning_content:
            printed_reasoning_content = True
            print("Start reasoning:\n", end="", flush=True)
        print(reasoning_content, end="", flush=True)
    elif content is not None:
        # Extract and print the content
        if not reasoning_content and printed_reasoning_content:
            answer.extend(content)
        print(content, end="", flush=True)

if answer:
    print("\n\n=============\nAnswer\n=============\n")
    print("".join(answer))
else:
    print("\n\n=============\nNo Answer\n=============\n")
    print(
        "No answer was generated by the model, probably because the maximum number of tokens was reached."
    )
```

Now we'll make it compute some maths!

```python
from typing import Any

from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 262144
client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()

    index_begin_think = system_prompt.find("[THINK]")
    index_end_think = system_prompt.find("[/THINK]")

    return {
        "role": "system",
        "content": [
            {"type": "text", "text": system_prompt[:index_begin_think]},
            {
                "type": "thinking",
                "thinking": system_prompt[
                    index_begin_think + len("[THINK]") : index_end_think
                ],
                "closed": True,
            },
            {
                "type": "text",
                "text": system_prompt[index_end_think + len("[/THINK]") :],
            },
        ],
    }


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

image_url = "https://i.ytimg.com/vi/5Y3xLHeyKZU/hqdefault.jpg"

messages = [
    SYSTEM_PROMPT,
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Solve the equations. If they contain only numbers, use your calculator, else only think. Answer in the language of the image.",
            },
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]

stream = client.chat.completions.create(
    model=model,
    messages=messages,
    stream=True,
    temperature=TEMP,
    top_p=TOP_P,
    max_tokens=MAX_TOK,
)

print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []

for chunk in stream:
    reasoning_content = None
    content = None
    # Check the content is reasoning_content or content
    if hasattr(chunk.choices[0].delta, "reasoning_content"):
        reasoning_content = chunk.choices[0].delta.reasoning_content
    if hasattr(chunk.choices[0].delta, "content"):
        content = chunk.choices[0].delta.content

    if reasoning_content is not None:
        if not printed_reasoning_content:
            printed_reasoning_content = True
            print("Start reasoning:\n", end="", flush=True)
        print(reasoning_content, end="", flush=True)
    if content is not None:
        # Extract and print the content
        if not reasoning_content and printed_reasoning_content:
            answer.extend(content)
        print(content, end="", flush=True)

if answer:
    print("\n\n=============\nAnswer\n=============\n")
    print("".join(answer))
else:
    print("\n\n=============\nNo Answer\n=============\n")
    print(
        "No answer was generated by the model, probably because the maximum number of tokens was reached."
    )
```

</details>

<details>
<summary>Text-Only Request</summary>

Let's do more maths and leave it up to the model to figure out how to achieve a result.

```python
from typing import Any
from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 262144
client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()

    index_begin_think = system_prompt.find("[THINK]")
    index_end_think = system_prompt.find("[/THINK]")

    return {
        "role": "system",
        "content": [
            {"type": "text", "text": system_prompt[:index_begin_think]},
            {
                "type": "thinking",
                "thinking": system_prompt[
                    index_begin_think + len("[THINK]") : index_end_think
                ],
                "closed": True,
            },
            {
                "type": "text",
                "text": system_prompt[index_end_think + len("[/THINK]") :],
            },
        ],
    }


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

query = "Use each number in 2,5,6,3 exactly once, along with any combination of +, -, ×, ÷ (and parentheses for grouping), to make the number 24."

messages = [
    SYSTEM_PROMPT,
    {"role": "user", "content": query}
]
stream = client.chat.completions.create(
    model=model,
    messages=messages,
    stream=True,
    temperature=TEMP,
    top_p=TOP_P,
    max_tokens=MAX_TOK,
)

print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []

for chunk in stream:
    reasoning_content = None
    content = None
    # Check the content is reasoning_content or content
    if hasattr(chunk.choices[0].delta, "reasoning_content"):
        reasoning_content = chunk.choices[0].delta.reasoning_content
    if hasattr(chunk.choices[0].delta, "content"):
        content = chunk.choices[0].delta.content

    if reasoning_content is not None:
        if not printed_reasoning_content:
            printed_reasoning_content = True
            print("Start reasoning:\n", end="", flush=True)
        print(reasoning_content, end="", flush=True)
    if content is not None:
        # Extract and print the content
        if not reasoning_content and printed_reasoning_content:
            answer.extend(content)
        print(content, end="", flush=True)

if answer:
    print("\n\n=============\nAnswer\n=============\n")
    print("".join(answer))
else:
    print("\n\n=============\nNo Answer\n=============\n")
    print("No answer was generated by the model, probably because the maximum number of tokens was reached.")
```

</details>

### Transformers

You can also use Ministral 3 14B Reasoning 2512 with `Transformers`!
Make sure to install `Transformers` from its first v5 release candidate or from "main":

```
pip install transformers==5.0.0rc0
```

To make the best use of our model with `Transformers`, make sure to have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.8.6` to use our tokenizer.

```bash
pip install mistral-common --upgrade
```

Then load our tokenizer along with the model and generate:

<details>
<summary>Python snippet</summary>

```python
import torch
from transformers import Mistral3ForConditionalGeneration, MistralCommonBackend

model_id = "mistralai/Ministral-3-14B-Reasoning-2512"

tokenizer = MistralCommonBackend.from_pretrained(model_id)
model = Mistral3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
            },
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]

tokenized = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True)

tokenized["input_ids"] = tokenized["input_ids"].to(device="cuda")
tokenized["pixel_values"] = tokenized["pixel_values"].to(dtype=torch.bfloat16, device="cuda")
image_sizes = [tokenized["pixel_values"].shape[-2:]]

output = model.generate(
    **tokenized,
    image_sizes=image_sizes,
    max_new_tokens=8092,
)[0]

decoded_output = tokenizer.decode(output[len(tokenized["input_ids"][0]):])
print(decoded_output)
```

</details>

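For a text-only request, the same tokenizer and model can be used without the image-specific tensors. The following is a sketch under the same assumptions as the snippet above (Transformers v5 with the `MistralCommonBackend` tokenizer), not an official example:

```python
import torch
from transformers import Mistral3ForConditionalGeneration, MistralCommonBackend

model_id = "mistralai/Ministral-3-14B-Reasoning-2512"

tokenizer = MistralCommonBackend.from_pretrained(model_id)
model = Mistral3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {
        "role": "user",
        "content": "Use each number in 2,5,6,3 exactly once, with +, -, ×, ÷ and parentheses, to make 24.",
    },
]

# No image in the prompt, so only input_ids are needed.
tokenized = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True)
input_ids = tokenized["input_ids"].to(model.device)

output = model.generate(input_ids=input_ids, max_new_tokens=8092)[0]

# Strip the prompt tokens; the decoded text contains the [THINK]...[/THINK]
# block followed by the final answer.
print(tokenizer.decode(output[input_ids.shape[-1]:]))
```
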
## License

This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.txt).

*You must not use this model in a manner that infringes, misappropriates, or otherwise violates any third party’s rights, including intellectual property rights.*
SYSTEM_PROMPT.txt ADDED
@@ -0,0 +1,5 @@
# HOW YOU SHOULD THINK AND ANSWER

First draft your thinking process (inner monologue) until you arrive at a response. Format your response using Markdown, and use LaTeX for any mathematical equations. Write both your thoughts and the response in the same language as the input.

Your thinking process must follow the template below:[THINK]Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate the response to the user.[/THINK]Here, provide a self-contained response.
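This `[THINK]...[/THINK]` template is what the chat template below and vLLM's `--reasoning-parser mistral` rely on. As a hedged illustration (not one of the released files), raw decoded output that follows the template can be split into reasoning and answer like this:

```python
def split_think(raw: str) -> tuple[str, str]:
    """Split a completion following the [THINK]...[/THINK] template into
    (reasoning, answer); if the markers are absent, treat everything as answer."""
    start, end = raw.find("[THINK]"), raw.find("[/THINK]")
    if start == -1 or end == -1:
        return "", raw.strip()
    reasoning = raw[start + len("[THINK]"):end].strip()
    answer = raw[end + len("[/THINK]"):].strip()
    return reasoning, answer


reasoning, answer = split_think("[THINK]2 + 2 = 4, easy.[/THINK]The answer is 4.")
print(reasoning)  # 2 + 2 = 4, easy.
print(answer)     # The answer is 4.
```
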
chat_template.jinja ADDED
@@ -0,0 +1,126 @@
1
+ {#- Default system message if no system prompt is passed. #}
2
+ {%- set default_system_message = '# HOW YOU SHOULD THINK AND ANSWER\n\nFirst draft your thinking process (inner monologue) until you arrive at a response. Format your response using Markdown, and use LaTeX for any mathematical equations. Write both your thoughts and the response in the same language as the input.\n\nYour thinking process must follow the template below:[THINK]Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate the response to the user.[/THINK]Here, provide a self-contained response.' %}
3
+
4
+ {#- Begin of sequence token. #}
5
+ {{- bos_token }}
6
+
7
+ {#- Handle system prompt if it exists. #}
8
+ {#- System prompt supports text content or text and thinking chunks. #}
9
+ {%- if messages[0]['role'] == 'system' %}
10
+ {{- '[SYSTEM_PROMPT]' -}}
11
+ {%- if messages[0]['content'] is string %}
12
+ {{- messages[0]['content'] -}}
13
+ {%- else %}
14
+ {%- for block in messages[0]['content'] %}
15
+ {%- if block['type'] == 'text' %}
16
+ {{- block['text'] }}
17
+ {%- elif block['type'] == 'thinking' %}
18
+ {{- '[THINK]' + block['thinking'] + '[/THINK]' }}
19
+ {%- else %}
20
+ {{- raise_exception('Only text and thinking chunks are supported in system message contents.') }}
21
+ {%- endif %}
22
+ {%- endfor %}
23
+ {%- endif %}
24
+ {{- '[/SYSTEM_PROMPT]' -}}
25
+ {%- set loop_messages = messages[1:] %}
26
+ {%- else %}
27
+ {%- set loop_messages = messages %}
28
+ {%- if default_system_message != '' %}
29
+ {{- '[SYSTEM_PROMPT]' + default_system_message + '[/SYSTEM_PROMPT]' }}
30
+ {%- endif %}
31
+ {%- endif %}
32
+
33
+
34
+ {#- Tools definition #}
35
+ {%- set tools_definition = '' %}
36
+ {%- set has_tools = false %}
37
+ {%- if tools is defined and tools is not none and tools|length > 0 %}
38
+ {%- set has_tools = true %}
39
+ {%- set tools_definition = '[AVAILABLE_TOOLS]' + (tools| tojson) + '[/AVAILABLE_TOOLS]' %}
40
+ {{- tools_definition }}
41
+ {%- endif %}
42
+
43
+ {#- Checks for alternating user/assistant messages. #}
44
+ {%- set ns = namespace(index=0) %}
45
+ {%- for message in loop_messages %}
46
+ {%- if message.role == 'user' or (message.role == 'assistant' and (message.tool_calls is not defined or message.tool_calls is none or message.tool_calls | length == 0)) %}
47
+ {%- if (message['role'] == 'user') != (ns.index % 2 == 0) %}
48
+ {{- raise_exception('After the optional system message, conversation roles must alternate user and assistant roles except for tool calls and results.') }}
49
+ {%- endif %}
50
+ {%- set ns.index = ns.index + 1 %}
51
+ {%- endif %}
52
+ {%- endfor %}
53
+
54
+ {#- Handle conversation messages. #}
55
+ {%- for message in loop_messages %}
56
+
57
+ {#- User messages supports text content or text and image chunks. #}
58
+ {%- if message['role'] == 'user' %}
59
+ {%- if message['content'] is string %}
60
+ {{- '[INST]' + message['content'] + '[/INST]' }}
61
+ {%- elif message['content'] | length > 0 %}
62
+ {{- '[INST]' }}
63
+ {%- if message['content'] | length == 2 %}
64
+ {%- set blocks = message['content'] | sort(attribute='type') %}
65
+ {%- else %}
66
+ {%- set blocks = message['content'] %}
67
+ {%- endif %}
68
+ {%- for block in blocks %}
69
+ {%- if block['type'] == 'text' %}
70
+ {{- block['text'] }}
71
+ {%- elif block['type'] in ['image', 'image_url'] %}
72
+ {{- '[IMG]' }}
73
+ {%- else %}
74
+ {{- raise_exception('Only text, image and image_url chunks are supported in user message content.') }}
75
+ {%- endif %}
76
+ {%- endfor %}
77
+ {{- '[/INST]' }}
78
+ {%- else %}
79
+ {{- raise_exception('User message must have a string or a list of chunks in content') }}
80
+ {%- endif %}
81
+
82
+ {#- Assistant messages supports text content or text, image and thinking chunks. #}
83
+ {%- elif message['role'] == 'assistant' %}
84
+ {%- if (message['content'] is none or message['content'] == '' or message['content']|length == 0) and (message['tool_calls'] is not defined or message['tool_calls'] is none or message['tool_calls']|length == 0) %}
85
+ {{- raise_exception('Assistant message must have a string or a list of chunks in content or a list of tool calls.') }}
86
+ {%- endif %}
87
+
88
+ {%- if message['content'] is string and message['content'] != '' %}
89
+ {{- message['content'] }}
90
+ {%- elif message['content'] | length > 0 %}
91
+ {%- for block in message['content'] %}
92
+ {%- if block['type'] == 'text' %}
93
+ {{- block['text'] }}
94
+ {%- elif block['type'] == 'thinking' %}
95
+ {{- '[THINK]' + block['thinking'] + '[/THINK]' }}
96
+ {%- else %}
97
+ {{- raise_exception('Only text and thinking chunks are supported in assistant message contents.') }}
98
+ {%- endif %}
99
+ {%- endfor %}
100
+ {%- endif %}
101
+
102
+ {%- if message['tool_calls'] is defined and message['tool_calls'] is not none and message['tool_calls']|length > 0 %}
103
+ {%- for tool in message['tool_calls'] %}
104
+ {{- '[TOOL_CALLS]' }}
105
+ {%- set name = tool['function']['name'] %}
106
+ {%- set arguments = tool['function']['arguments'] %}
107
+ {%- if arguments is not string %}
108
+ {%- set arguments = arguments|tojson|safe %}
109
+ {%- elif arguments == '' %}
110
+ {%- set arguments = '{}' %}
111
+ {%- endif %}
112
+ {{- name + '[ARGS]' + arguments }}
113
+ {%- endfor %}
114
+ {%- endif %}
115
+
116
+ {{- eos_token }}
117
+
118
+ {#- Tool messages only supports text content. #}
119
+ {%- elif message['role'] == 'tool' %}
120
+ {{- '[TOOL_RESULTS]' + message['content']|string + '[/TOOL_RESULTS]' }}
121
+
122
+ {#- Raise exception for unsupported roles. #}
123
+ {%- else %}
124
+ {{- raise_exception('Only user, assistant and tool roles are supported, got ' + message['role'] + '.') }}
125
+ {%- endif %}
126
+ {%- endfor %}
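To see what this template actually produces, it can be rendered directly with `jinja2`. This is only a local-inspection sketch: the `<s>`/`</s>` token strings and the `chat_template.jinja` path are assumptions, and in practice the template is applied for you by `mistral-common`, vLLM, or Transformers.

```python
from jinja2 import Environment

# Load the template shipped with the checkpoint (local path is an assumption).
with open("chat_template.jinja") as f:
    template_source = f.read()


def raise_exception(message: str) -> None:
    # The template calls raise_exception() on malformed conversations.
    raise ValueError(message)


env = Environment()
env.globals["raise_exception"] = raise_exception
template = env.from_string(template_source)

rendered = template.render(
    bos_token="<s>",   # assumed special-token strings, for illustration only
    eos_token="</s>",
    messages=[
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "[THINK]Just a greeting.[/THINK]Hi there!"},
    ],
)
print(rendered)
```
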
config.json ADDED
@@ -0,0 +1,61 @@
{
  "architectures": [
    "Mistral3ForConditionalGeneration"
  ],
  "dtype": "bfloat16",
  "image_token_index": 10,
  "model_type": "mistral3",
  "multimodal_projector_bias": false,
  "projector_hidden_act": "gelu",
  "spatial_merge_size": 2,
  "text_config": {
    "attention_dropout": 0.0,
    "head_dim": 128,
    "hidden_act": "silu",
    "hidden_size": 5120,
    "initializer_range": 0.02,
    "intermediate_size": 16384,
    "max_position_embeddings": 262144,
    "model_type": "ministral3",
    "num_attention_heads": 32,
    "num_hidden_layers": 40,
    "num_key_value_heads": 8,
    "rms_norm_eps": 1e-05,
    "rope_parameters": {
      "beta_fast": 32.0,
      "beta_slow": 1.0,
      "factor": 16.0,
      "llama_4_scaling_beta": 0.1,
      "mscale": 1.0,
      "mscale_all_dim": 1.0,
      "original_max_position_embeddings": 16384,
      "rope_theta": 1000000000.0,
      "rope_type": "yarn",
      "type": "yarn"
    },
    "sliding_window": null,
    "use_cache": true,
    "vocab_size": 131072
  },
  "transformers_version": "5.0.0.dev0",
  "vision_config": {
    "attention_dropout": 0.0,
    "head_dim": 64,
    "hidden_act": "silu",
    "hidden_size": 1024,
    "image_size": 1540,
    "initializer_range": 0.02,
    "intermediate_size": 4096,
    "model_type": "pixtral",
    "num_attention_heads": 16,
    "num_channels": 3,
    "num_hidden_layers": 24,
    "patch_size": 14,
    "rope_parameters": {
      "rope_theta": 10000.0,
      "rope_type": "default"
    },
    "rope_theta": 10000.0
  },
  "vision_feature_layer": -1
}
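A few of these fields are worth checking programmatically, notably the 262144-token context (`max_position_embeddings`) and the YaRN rope scaling (factor 16 over a 16384-token original window). A hedged sketch using `AutoConfig`, assuming a Transformers version that knows the `mistral3`/`ministral3` config types (v5.0.0rc0 as above):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mistralai/Ministral-3-14B-Reasoning-2512")
text_cfg = config.text_config

print(text_cfg.max_position_embeddings)  # 262144-token context window
print(text_cfg.num_hidden_layers, text_cfg.hidden_size, text_cfg.num_key_value_heads)

# Rope scaling settings; the attribute name may vary across versions, so read defensively.
rope = getattr(text_cfg, "rope_parameters", None) or getattr(text_cfg, "rope_scaling", None)
print(rope)
```
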
consolidated.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2515cfe79e63d9a35dbbd17dbfade74e3c67150e2ca8bda218955e2cafd93eff
size 27890132928
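`consolidated.safetensors` is the single-file checkpoint (roughly 27.9 GB) and duplicates the sharded `model-*.safetensors` files below. If you only need the sharded Transformers layout, `huggingface_hub` can skip it at download time; a sketch:

```python
from huggingface_hub import snapshot_download

# Download the sharded weights and configs, skipping the single consolidated
# checkpoint to roughly halve the download size.
local_dir = snapshot_download(
    repo_id="mistralai/Ministral-3-14B-Reasoning-2512",
    ignore_patterns=["consolidated.safetensors"],
)
print(local_dir)
```
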
generation_config.json ADDED
@@ -0,0 +1,7 @@
{
  "bos_token_id": 1,
  "eos_token_id": 2,
  "max_length": 262144,
  "pad_token_id": 11,
  "transformers_version": "5.0.0.dev0"
}
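These defaults (BOS/EOS/pad token ids and a 262144-token `max_length`) are picked up automatically by `model.generate()`. They can also be inspected or overridden explicitly; a small sketch, not an official example:

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("mistralai/Ministral-3-14B-Reasoning-2512")
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id, gen_cfg.pad_token_id)  # 1 2 11
print(gen_cfg.max_length)  # 262144

# Example override: cap the generation budget instead of using the full context.
gen_cfg.max_new_tokens = 4096
```
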
model-00001-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:163fef681d41683f2ac17ae56f3b6935188d593d252389fffef11ce148fde021
size 4925537056
model-00002-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1bbf71635f7242e22752cee209350ff0925b447708d3fbf184c1725bca481230
size 4865565920
model-00003-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:81eec09dad5d0ed898fd60cfce3ae35a487bafb805d192b40d4b9724d6c330b3
size 4865565968
model-00004-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:648838faaaca188c26901ea32a1ea60c82c7ded26f25098b0b6acda47358bf89
size 4865565968
model-00005-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:142478cca6ed3f38a2c9e43d03bc2a3ba84a9d863078e3adbe294f6392aace51
size 4865565968
model-00006-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c87c0c31b38ee58428ccd217c3579be969e684b3e2058b23a6232e7e208eda87
size 3502340328
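The mapping from each weight tensor to one of the six shards above lives in `model.safetensors.index.json`, shown next. As a hedged local-inspection sketch (assuming the repository has been downloaded to the current directory):

```python
import json
from collections import Counter

with open("model.safetensors.index.json") as f:
    index = json.load(f)

# Total parameter count reported in the index metadata.
print(index["metadata"]["total_parameters"])  # 13945031680

# Which shard holds a given tensor, and how many tensors each shard holds.
weight_map = index["weight_map"]
print(weight_map["language_model.lm_head.weight"])  # model-00006-of-00006.safetensors
print(Counter(weight_map.values()))
```
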
model.safetensors.index.json ADDED
@@ -0,0 +1,593 @@
1
+ {
2
+ "metadata": {
3
+ "total_parameters": 13945031680,
4
+ "total_size": 27890063360
5
+ },
6
+ "weight_map": {
7
+ "language_model.lm_head.weight": "model-00006-of-00006.safetensors",
8
+ "language_model.model.embed_tokens.weight": "model-00001-of-00006.safetensors",
9
+ "language_model.model.layers.0.input_layernorm.weight": "model-00001-of-00006.safetensors",
10
+ "language_model.model.layers.0.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
11
+ "language_model.model.layers.0.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
12
+ "language_model.model.layers.0.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
13
+ "language_model.model.layers.0.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
14
+ "language_model.model.layers.0.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
15
+ "language_model.model.layers.0.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
16
+ "language_model.model.layers.0.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
17
+ "language_model.model.layers.0.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
18
+ "language_model.model.layers.1.input_layernorm.weight": "model-00001-of-00006.safetensors",
19
+ "language_model.model.layers.1.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
20
+ "language_model.model.layers.1.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
21
+ "language_model.model.layers.1.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
22
+ "language_model.model.layers.1.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
23
+ "language_model.model.layers.1.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
24
+ "language_model.model.layers.1.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
25
+ "language_model.model.layers.1.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
26
+ "language_model.model.layers.1.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
27
+ "language_model.model.layers.10.input_layernorm.weight": "model-00002-of-00006.safetensors",
28
+ "language_model.model.layers.10.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
29
+ "language_model.model.layers.10.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
30
+ "language_model.model.layers.10.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
31
+ "language_model.model.layers.10.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
32
+ "language_model.model.layers.10.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
33
+ "language_model.model.layers.10.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
34
+ "language_model.model.layers.10.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
35
+ "language_model.model.layers.10.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
36
+ "language_model.model.layers.11.input_layernorm.weight": "model-00002-of-00006.safetensors",
37
+ "language_model.model.layers.11.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
38
+ "language_model.model.layers.11.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
39
+ "language_model.model.layers.11.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
40
+ "language_model.model.layers.11.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
41
+ "language_model.model.layers.11.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
42
+ "language_model.model.layers.11.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
43
+ "language_model.model.layers.11.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
44
+ "language_model.model.layers.11.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
45
+ "language_model.model.layers.12.input_layernorm.weight": "model-00003-of-00006.safetensors",
46
+ "language_model.model.layers.12.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
47
+ "language_model.model.layers.12.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
48
+ "language_model.model.layers.12.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
49
+ "language_model.model.layers.12.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
50
+ "language_model.model.layers.12.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
51
+ "language_model.model.layers.12.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
52
+ "language_model.model.layers.12.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
53
+ "language_model.model.layers.12.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
54
+ "language_model.model.layers.13.input_layernorm.weight": "model-00003-of-00006.safetensors",
55
+ "language_model.model.layers.13.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
56
+ "language_model.model.layers.13.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
57
+ "language_model.model.layers.13.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
58
+ "language_model.model.layers.13.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
59
+ "language_model.model.layers.13.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
60
+ "language_model.model.layers.13.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
61
+ "language_model.model.layers.13.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
62
+ "language_model.model.layers.13.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
63
+ "language_model.model.layers.14.input_layernorm.weight": "model-00003-of-00006.safetensors",
64
+ "language_model.model.layers.14.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
65
+ "language_model.model.layers.14.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
66
+ "language_model.model.layers.14.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
67
+ "language_model.model.layers.14.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
68
+ "language_model.model.layers.14.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
69
+ "language_model.model.layers.14.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
70
+ "language_model.model.layers.14.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
71
+ "language_model.model.layers.14.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
72
+ "language_model.model.layers.15.input_layernorm.weight": "model-00003-of-00006.safetensors",
73
+ "language_model.model.layers.15.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
74
+ "language_model.model.layers.15.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
75
+ "language_model.model.layers.15.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
76
+ "language_model.model.layers.15.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
77
+ "language_model.model.layers.15.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
78
+ "language_model.model.layers.15.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
79
+ "language_model.model.layers.15.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
80
+ "language_model.model.layers.15.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
81
+ "language_model.model.layers.16.input_layernorm.weight": "model-00003-of-00006.safetensors",
82
+ "language_model.model.layers.16.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
83
+ "language_model.model.layers.16.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
84
+ "language_model.model.layers.16.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
85
+ "language_model.model.layers.16.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
86
+ "language_model.model.layers.16.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
87
+ "language_model.model.layers.16.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
88
+ "language_model.model.layers.16.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
89
+ "language_model.model.layers.16.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
90
+ "language_model.model.layers.17.input_layernorm.weight": "model-00003-of-00006.safetensors",
91
+ "language_model.model.layers.17.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
92
+ "language_model.model.layers.17.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
93
+ "language_model.model.layers.17.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
94
+ "language_model.model.layers.17.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
95
+ "language_model.model.layers.17.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
96
+ "language_model.model.layers.17.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
97
+ "language_model.model.layers.17.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
98
+ "language_model.model.layers.17.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
99
+ "language_model.model.layers.18.input_layernorm.weight": "model-00003-of-00006.safetensors",
100
+ "language_model.model.layers.18.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
101
+ "language_model.model.layers.18.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
102
+ "language_model.model.layers.18.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
103
+ "language_model.model.layers.18.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
104
+ "language_model.model.layers.18.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
105
+ "language_model.model.layers.18.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
106
+ "language_model.model.layers.18.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
107
+ "language_model.model.layers.18.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
108
+ "language_model.model.layers.19.input_layernorm.weight": "model-00003-of-00006.safetensors",
109
+ "language_model.model.layers.19.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
110
+ "language_model.model.layers.19.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
111
+ "language_model.model.layers.19.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
112
+ "language_model.model.layers.19.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
113
+ "language_model.model.layers.19.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
114
+ "language_model.model.layers.19.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
115
+ "language_model.model.layers.19.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
116
+ "language_model.model.layers.19.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
117
+ "language_model.model.layers.2.input_layernorm.weight": "model-00001-of-00006.safetensors",
118
+ "language_model.model.layers.2.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
119
+ "language_model.model.layers.2.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
120
+ "language_model.model.layers.2.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
121
+ "language_model.model.layers.2.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
122
+ "language_model.model.layers.2.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
123
+ "language_model.model.layers.2.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
124
+ "language_model.model.layers.2.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
125
+ "language_model.model.layers.2.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
126
+ "language_model.model.layers.20.input_layernorm.weight": "model-00004-of-00006.safetensors",
127
+ "language_model.model.layers.20.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
128
+ "language_model.model.layers.20.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
129
+ "language_model.model.layers.20.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
130
+ "language_model.model.layers.20.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
131
+ "language_model.model.layers.20.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
132
+ "language_model.model.layers.20.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
133
+ "language_model.model.layers.20.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
134
+ "language_model.model.layers.20.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
135
+ "language_model.model.layers.21.input_layernorm.weight": "model-00004-of-00006.safetensors",
136
+ "language_model.model.layers.21.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
137
+ "language_model.model.layers.21.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
138
+ "language_model.model.layers.21.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
139
+ "language_model.model.layers.21.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
140
+ "language_model.model.layers.21.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
141
+ "language_model.model.layers.21.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
142
+ "language_model.model.layers.21.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
143
+ "language_model.model.layers.21.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
144
+ "language_model.model.layers.22.input_layernorm.weight": "model-00004-of-00006.safetensors",
145
+ "language_model.model.layers.22.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
146
+ "language_model.model.layers.22.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
147
+ "language_model.model.layers.22.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
148
+ "language_model.model.layers.22.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
149
+ "language_model.model.layers.22.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
150
+ "language_model.model.layers.22.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
151
+ "language_model.model.layers.22.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
152
+ "language_model.model.layers.22.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
153
+ "language_model.model.layers.23.input_layernorm.weight": "model-00004-of-00006.safetensors",
154
+ "language_model.model.layers.23.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
155
+ "language_model.model.layers.23.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
156
+ "language_model.model.layers.23.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
157
+ "language_model.model.layers.23.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
158
+ "language_model.model.layers.23.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
159
+ "language_model.model.layers.23.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
160
+ "language_model.model.layers.23.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
161
+ "language_model.model.layers.23.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
162
+ "language_model.model.layers.24.input_layernorm.weight": "model-00004-of-00006.safetensors",
163
+ "language_model.model.layers.24.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
164
+ "language_model.model.layers.24.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
165
+ "language_model.model.layers.24.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
166
+ "language_model.model.layers.24.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
167
+ "language_model.model.layers.24.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
168
+ "language_model.model.layers.24.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
169
+ "language_model.model.layers.24.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
170
+ "language_model.model.layers.24.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
171
+ "language_model.model.layers.25.input_layernorm.weight": "model-00004-of-00006.safetensors",
172
+ "language_model.model.layers.25.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
173
+ "language_model.model.layers.25.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
174
+ "language_model.model.layers.25.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
175
+ "language_model.model.layers.25.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
176
+ "language_model.model.layers.25.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
177
+ "language_model.model.layers.25.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
178
+ "language_model.model.layers.25.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
179
+ "language_model.model.layers.25.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
180
+ "language_model.model.layers.26.input_layernorm.weight": "model-00004-of-00006.safetensors",
181
+ "language_model.model.layers.26.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
182
+ "language_model.model.layers.26.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
183
+ "language_model.model.layers.26.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
184
+ "language_model.model.layers.26.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
185
+ "language_model.model.layers.26.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
186
+ "language_model.model.layers.26.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
187
+ "language_model.model.layers.26.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
188
+ "language_model.model.layers.26.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
189
+ "language_model.model.layers.27.input_layernorm.weight": "model-00004-of-00006.safetensors",
190
+ "language_model.model.layers.27.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
191
+ "language_model.model.layers.27.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
192
+ "language_model.model.layers.27.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
193
+ "language_model.model.layers.27.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
194
+ "language_model.model.layers.27.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
195
+ "language_model.model.layers.27.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
196
+ "language_model.model.layers.27.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
197
+ "language_model.model.layers.27.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
198
+ "language_model.model.layers.28.input_layernorm.weight": "model-00005-of-00006.safetensors",
199
+ "language_model.model.layers.28.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
200
+ "language_model.model.layers.28.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
201
+ "language_model.model.layers.28.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
202
+ "language_model.model.layers.28.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
203
+ "language_model.model.layers.28.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
204
+ "language_model.model.layers.28.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
205
+ "language_model.model.layers.28.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
206
+ "language_model.model.layers.28.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
207
+ "language_model.model.layers.29.input_layernorm.weight": "model-00005-of-00006.safetensors",
208
+ "language_model.model.layers.29.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
209
+ "language_model.model.layers.29.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
210
+ "language_model.model.layers.29.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
211
+ "language_model.model.layers.29.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
212
+ "language_model.model.layers.29.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
213
+ "language_model.model.layers.29.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
214
+ "language_model.model.layers.29.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
215
+ "language_model.model.layers.29.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
216
+ "language_model.model.layers.3.input_layernorm.weight": "model-00001-of-00006.safetensors",
217
+ "language_model.model.layers.3.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
218
+ "language_model.model.layers.3.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
219
+ "language_model.model.layers.3.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
220
+ "language_model.model.layers.3.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
221
+ "language_model.model.layers.3.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
222
+ "language_model.model.layers.3.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
223
+ "language_model.model.layers.3.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
224
+ "language_model.model.layers.3.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
225
+ "language_model.model.layers.30.input_layernorm.weight": "model-00005-of-00006.safetensors",
226
+ "language_model.model.layers.30.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
227
+ "language_model.model.layers.30.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
228
+ "language_model.model.layers.30.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
229
+ "language_model.model.layers.30.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
230
+ "language_model.model.layers.30.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
231
+ "language_model.model.layers.30.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
232
+ "language_model.model.layers.30.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
233
+ "language_model.model.layers.30.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
234
+ "language_model.model.layers.31.input_layernorm.weight": "model-00005-of-00006.safetensors",
235
+ "language_model.model.layers.31.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
236
+ "language_model.model.layers.31.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
237
+ "language_model.model.layers.31.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
238
+ "language_model.model.layers.31.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
239
+ "language_model.model.layers.31.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
240
+ "language_model.model.layers.31.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
241
+ "language_model.model.layers.31.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
242
+ "language_model.model.layers.31.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
243
+ "language_model.model.layers.32.input_layernorm.weight": "model-00005-of-00006.safetensors",
244
+ "language_model.model.layers.32.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
245
+ "language_model.model.layers.32.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
246
+ "language_model.model.layers.32.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
247
+ "language_model.model.layers.32.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
248
+ "language_model.model.layers.32.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
249
+ "language_model.model.layers.32.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
250
+ "language_model.model.layers.32.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
251
+ "language_model.model.layers.32.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
252
+ "language_model.model.layers.33.input_layernorm.weight": "model-00005-of-00006.safetensors",
253
+ "language_model.model.layers.33.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
254
+ "language_model.model.layers.33.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
255
+ "language_model.model.layers.33.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
256
+ "language_model.model.layers.33.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
257
+ "language_model.model.layers.33.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
258
+ "language_model.model.layers.33.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
259
+ "language_model.model.layers.33.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
260
+ "language_model.model.layers.33.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
261
+ "language_model.model.layers.34.input_layernorm.weight": "model-00005-of-00006.safetensors",
262
+ "language_model.model.layers.34.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
263
+ "language_model.model.layers.34.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
264
+ "language_model.model.layers.34.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
265
+ "language_model.model.layers.34.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
266
+ "language_model.model.layers.34.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
267
+ "language_model.model.layers.34.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
268
+ "language_model.model.layers.34.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
269
+ "language_model.model.layers.34.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
270
+ "language_model.model.layers.35.input_layernorm.weight": "model-00005-of-00006.safetensors",
271
+ "language_model.model.layers.35.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
272
+ "language_model.model.layers.35.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
273
+ "language_model.model.layers.35.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
274
+ "language_model.model.layers.35.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
275
+ "language_model.model.layers.35.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
276
+ "language_model.model.layers.35.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
277
+ "language_model.model.layers.35.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
278
+ "language_model.model.layers.35.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
279
+ "language_model.model.layers.36.input_layernorm.weight": "model-00006-of-00006.safetensors",
280
+ "language_model.model.layers.36.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
281
+ "language_model.model.layers.36.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
282
+ "language_model.model.layers.36.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
283
+ "language_model.model.layers.36.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
284
+ "language_model.model.layers.36.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
285
+ "language_model.model.layers.36.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
286
+ "language_model.model.layers.36.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
287
+ "language_model.model.layers.36.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
288
+ "language_model.model.layers.37.input_layernorm.weight": "model-00006-of-00006.safetensors",
289
+ "language_model.model.layers.37.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
290
+ "language_model.model.layers.37.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
291
+ "language_model.model.layers.37.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
292
+ "language_model.model.layers.37.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
293
+ "language_model.model.layers.37.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
294
+ "language_model.model.layers.37.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
295
+ "language_model.model.layers.37.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
296
+ "language_model.model.layers.37.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
297
+ "language_model.model.layers.38.input_layernorm.weight": "model-00006-of-00006.safetensors",
298
+ "language_model.model.layers.38.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
299
+ "language_model.model.layers.38.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
300
+ "language_model.model.layers.38.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
301
+ "language_model.model.layers.38.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
302
+ "language_model.model.layers.38.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
303
+ "language_model.model.layers.38.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
304
+ "language_model.model.layers.38.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
305
+ "language_model.model.layers.38.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
306
+ "language_model.model.layers.39.input_layernorm.weight": "model-00006-of-00006.safetensors",
307
+ "language_model.model.layers.39.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
308
+ "language_model.model.layers.39.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
309
+ "language_model.model.layers.39.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
310
+ "language_model.model.layers.39.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
311
+ "language_model.model.layers.39.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
312
+ "language_model.model.layers.39.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
313
+ "language_model.model.layers.39.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
314
+ "language_model.model.layers.39.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
315
+ "language_model.model.layers.4.input_layernorm.weight": "model-00002-of-00006.safetensors",
316
+ "language_model.model.layers.4.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
317
+ "language_model.model.layers.4.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
318
+ "language_model.model.layers.4.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
319
+ "language_model.model.layers.4.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
320
+ "language_model.model.layers.4.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
321
+ "language_model.model.layers.4.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
322
+ "language_model.model.layers.4.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
323
+ "language_model.model.layers.4.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
324
+ "language_model.model.layers.5.input_layernorm.weight": "model-00002-of-00006.safetensors",
325
+ "language_model.model.layers.5.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
326
+ "language_model.model.layers.5.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
327
+ "language_model.model.layers.5.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
328
+ "language_model.model.layers.5.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
329
+ "language_model.model.layers.5.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
330
+ "language_model.model.layers.5.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
331
+ "language_model.model.layers.5.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
332
+ "language_model.model.layers.5.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
333
+ "language_model.model.layers.6.input_layernorm.weight": "model-00002-of-00006.safetensors",
334
+ "language_model.model.layers.6.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
335
+ "language_model.model.layers.6.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
336
+ "language_model.model.layers.6.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
337
+ "language_model.model.layers.6.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
338
+ "language_model.model.layers.6.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
339
+ "language_model.model.layers.6.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
340
+ "language_model.model.layers.6.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
341
+ "language_model.model.layers.6.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
342
+ "language_model.model.layers.7.input_layernorm.weight": "model-00002-of-00006.safetensors",
343
+ "language_model.model.layers.7.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
344
+ "language_model.model.layers.7.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
345
+ "language_model.model.layers.7.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
346
+ "language_model.model.layers.7.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
347
+ "language_model.model.layers.7.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
348
+ "language_model.model.layers.7.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
349
+ "language_model.model.layers.7.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
350
+ "language_model.model.layers.7.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
351
+ "language_model.model.layers.8.input_layernorm.weight": "model-00002-of-00006.safetensors",
352
+ "language_model.model.layers.8.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
353
+ "language_model.model.layers.8.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
354
+ "language_model.model.layers.8.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
355
+ "language_model.model.layers.8.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
356
+ "language_model.model.layers.8.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
357
+ "language_model.model.layers.8.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
358
+ "language_model.model.layers.8.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
359
+ "language_model.model.layers.8.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
360
+ "language_model.model.layers.9.input_layernorm.weight": "model-00002-of-00006.safetensors",
361
+ "language_model.model.layers.9.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
362
+ "language_model.model.layers.9.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
363
+ "language_model.model.layers.9.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
364
+ "language_model.model.layers.9.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
365
+ "language_model.model.layers.9.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
366
+ "language_model.model.layers.9.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
367
+ "language_model.model.layers.9.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
368
+ "language_model.model.layers.9.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
369
+ "language_model.model.norm.weight": "model-00006-of-00006.safetensors",
370
+ "multi_modal_projector.linear_1.weight": "model-00001-of-00006.safetensors",
371
+ "multi_modal_projector.linear_2.weight": "model-00001-of-00006.safetensors",
372
+ "multi_modal_projector.norm.weight": "model-00001-of-00006.safetensors",
373
+ "multi_modal_projector.patch_merger.merging_layer.weight": "model-00001-of-00006.safetensors",
374
+ "vision_tower.ln_pre.weight": "model-00001-of-00006.safetensors",
375
+ "vision_tower.patch_conv.weight": "model-00001-of-00006.safetensors",
376
+ "vision_tower.transformer.layers.0.attention.k_proj.weight": "model-00001-of-00006.safetensors",
377
+ "vision_tower.transformer.layers.0.attention.o_proj.weight": "model-00001-of-00006.safetensors",
378
+ "vision_tower.transformer.layers.0.attention.q_proj.weight": "model-00001-of-00006.safetensors",
379
+ "vision_tower.transformer.layers.0.attention.v_proj.weight": "model-00001-of-00006.safetensors",
380
+ "vision_tower.transformer.layers.0.attention_norm.weight": "model-00001-of-00006.safetensors",
381
+ "vision_tower.transformer.layers.0.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
382
+ "vision_tower.transformer.layers.0.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
383
+ "vision_tower.transformer.layers.0.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
384
+ "vision_tower.transformer.layers.0.ffn_norm.weight": "model-00001-of-00006.safetensors",
385
+ "vision_tower.transformer.layers.1.attention.k_proj.weight": "model-00001-of-00006.safetensors",
386
+ "vision_tower.transformer.layers.1.attention.o_proj.weight": "model-00001-of-00006.safetensors",
387
+ "vision_tower.transformer.layers.1.attention.q_proj.weight": "model-00001-of-00006.safetensors",
388
+ "vision_tower.transformer.layers.1.attention.v_proj.weight": "model-00001-of-00006.safetensors",
389
+ "vision_tower.transformer.layers.1.attention_norm.weight": "model-00001-of-00006.safetensors",
390
+ "vision_tower.transformer.layers.1.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
391
+ "vision_tower.transformer.layers.1.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
392
+ "vision_tower.transformer.layers.1.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
393
+ "vision_tower.transformer.layers.1.ffn_norm.weight": "model-00001-of-00006.safetensors",
394
+ "vision_tower.transformer.layers.10.attention.k_proj.weight": "model-00001-of-00006.safetensors",
395
+ "vision_tower.transformer.layers.10.attention.o_proj.weight": "model-00001-of-00006.safetensors",
396
+ "vision_tower.transformer.layers.10.attention.q_proj.weight": "model-00001-of-00006.safetensors",
397
+ "vision_tower.transformer.layers.10.attention.v_proj.weight": "model-00001-of-00006.safetensors",
398
+ "vision_tower.transformer.layers.10.attention_norm.weight": "model-00001-of-00006.safetensors",
399
+ "vision_tower.transformer.layers.10.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
400
+ "vision_tower.transformer.layers.10.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
401
+ "vision_tower.transformer.layers.10.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
402
+ "vision_tower.transformer.layers.10.ffn_norm.weight": "model-00001-of-00006.safetensors",
403
+ "vision_tower.transformer.layers.11.attention.k_proj.weight": "model-00001-of-00006.safetensors",
404
+ "vision_tower.transformer.layers.11.attention.o_proj.weight": "model-00001-of-00006.safetensors",
405
+ "vision_tower.transformer.layers.11.attention.q_proj.weight": "model-00001-of-00006.safetensors",
406
+ "vision_tower.transformer.layers.11.attention.v_proj.weight": "model-00001-of-00006.safetensors",
407
+ "vision_tower.transformer.layers.11.attention_norm.weight": "model-00001-of-00006.safetensors",
408
+ "vision_tower.transformer.layers.11.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
409
+ "vision_tower.transformer.layers.11.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
410
+ "vision_tower.transformer.layers.11.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
411
+ "vision_tower.transformer.layers.11.ffn_norm.weight": "model-00001-of-00006.safetensors",
412
+ "vision_tower.transformer.layers.12.attention.k_proj.weight": "model-00001-of-00006.safetensors",
413
+ "vision_tower.transformer.layers.12.attention.o_proj.weight": "model-00001-of-00006.safetensors",
414
+ "vision_tower.transformer.layers.12.attention.q_proj.weight": "model-00001-of-00006.safetensors",
415
+ "vision_tower.transformer.layers.12.attention.v_proj.weight": "model-00001-of-00006.safetensors",
416
+ "vision_tower.transformer.layers.12.attention_norm.weight": "model-00001-of-00006.safetensors",
417
+ "vision_tower.transformer.layers.12.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
418
+ "vision_tower.transformer.layers.12.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
419
+ "vision_tower.transformer.layers.12.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
420
+ "vision_tower.transformer.layers.12.ffn_norm.weight": "model-00001-of-00006.safetensors",
421
+ "vision_tower.transformer.layers.13.attention.k_proj.weight": "model-00001-of-00006.safetensors",
422
+ "vision_tower.transformer.layers.13.attention.o_proj.weight": "model-00001-of-00006.safetensors",
423
+ "vision_tower.transformer.layers.13.attention.q_proj.weight": "model-00001-of-00006.safetensors",
424
+ "vision_tower.transformer.layers.13.attention.v_proj.weight": "model-00001-of-00006.safetensors",
425
+ "vision_tower.transformer.layers.13.attention_norm.weight": "model-00001-of-00006.safetensors",
426
+ "vision_tower.transformer.layers.13.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
427
+ "vision_tower.transformer.layers.13.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
428
+ "vision_tower.transformer.layers.13.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
429
+ "vision_tower.transformer.layers.13.ffn_norm.weight": "model-00001-of-00006.safetensors",
430
+ "vision_tower.transformer.layers.14.attention.k_proj.weight": "model-00001-of-00006.safetensors",
431
+ "vision_tower.transformer.layers.14.attention.o_proj.weight": "model-00001-of-00006.safetensors",
432
+ "vision_tower.transformer.layers.14.attention.q_proj.weight": "model-00001-of-00006.safetensors",
433
+ "vision_tower.transformer.layers.14.attention.v_proj.weight": "model-00001-of-00006.safetensors",
434
+ "vision_tower.transformer.layers.14.attention_norm.weight": "model-00001-of-00006.safetensors",
435
+ "vision_tower.transformer.layers.14.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
436
+ "vision_tower.transformer.layers.14.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
437
+ "vision_tower.transformer.layers.14.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
438
+ "vision_tower.transformer.layers.14.ffn_norm.weight": "model-00001-of-00006.safetensors",
439
+ "vision_tower.transformer.layers.15.attention.k_proj.weight": "model-00001-of-00006.safetensors",
440
+ "vision_tower.transformer.layers.15.attention.o_proj.weight": "model-00001-of-00006.safetensors",
441
+ "vision_tower.transformer.layers.15.attention.q_proj.weight": "model-00001-of-00006.safetensors",
442
+ "vision_tower.transformer.layers.15.attention.v_proj.weight": "model-00001-of-00006.safetensors",
443
+ "vision_tower.transformer.layers.15.attention_norm.weight": "model-00001-of-00006.safetensors",
444
+ "vision_tower.transformer.layers.15.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
445
+ "vision_tower.transformer.layers.15.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
446
+ "vision_tower.transformer.layers.15.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
447
+ "vision_tower.transformer.layers.15.ffn_norm.weight": "model-00001-of-00006.safetensors",
448
+ "vision_tower.transformer.layers.16.attention.k_proj.weight": "model-00001-of-00006.safetensors",
449
+ "vision_tower.transformer.layers.16.attention.o_proj.weight": "model-00001-of-00006.safetensors",
450
+ "vision_tower.transformer.layers.16.attention.q_proj.weight": "model-00001-of-00006.safetensors",
451
+ "vision_tower.transformer.layers.16.attention.v_proj.weight": "model-00001-of-00006.safetensors",
452
+ "vision_tower.transformer.layers.16.attention_norm.weight": "model-00001-of-00006.safetensors",
453
+ "vision_tower.transformer.layers.16.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
454
+ "vision_tower.transformer.layers.16.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
455
+ "vision_tower.transformer.layers.16.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
456
+ "vision_tower.transformer.layers.16.ffn_norm.weight": "model-00001-of-00006.safetensors",
457
+ "vision_tower.transformer.layers.17.attention.k_proj.weight": "model-00001-of-00006.safetensors",
458
+ "vision_tower.transformer.layers.17.attention.o_proj.weight": "model-00001-of-00006.safetensors",
459
+ "vision_tower.transformer.layers.17.attention.q_proj.weight": "model-00001-of-00006.safetensors",
460
+ "vision_tower.transformer.layers.17.attention.v_proj.weight": "model-00001-of-00006.safetensors",
461
+ "vision_tower.transformer.layers.17.attention_norm.weight": "model-00001-of-00006.safetensors",
462
+ "vision_tower.transformer.layers.17.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
463
+ "vision_tower.transformer.layers.17.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
464
+ "vision_tower.transformer.layers.17.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
465
+ "vision_tower.transformer.layers.17.ffn_norm.weight": "model-00001-of-00006.safetensors",
466
+ "vision_tower.transformer.layers.18.attention.k_proj.weight": "model-00001-of-00006.safetensors",
467
+ "vision_tower.transformer.layers.18.attention.o_proj.weight": "model-00001-of-00006.safetensors",
468
+ "vision_tower.transformer.layers.18.attention.q_proj.weight": "model-00001-of-00006.safetensors",
469
+ "vision_tower.transformer.layers.18.attention.v_proj.weight": "model-00001-of-00006.safetensors",
470
+ "vision_tower.transformer.layers.18.attention_norm.weight": "model-00001-of-00006.safetensors",
471
+ "vision_tower.transformer.layers.18.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
472
+ "vision_tower.transformer.layers.18.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
473
+ "vision_tower.transformer.layers.18.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
474
+ "vision_tower.transformer.layers.18.ffn_norm.weight": "model-00001-of-00006.safetensors",
475
+ "vision_tower.transformer.layers.19.attention.k_proj.weight": "model-00001-of-00006.safetensors",
476
+ "vision_tower.transformer.layers.19.attention.o_proj.weight": "model-00001-of-00006.safetensors",
477
+ "vision_tower.transformer.layers.19.attention.q_proj.weight": "model-00001-of-00006.safetensors",
478
+ "vision_tower.transformer.layers.19.attention.v_proj.weight": "model-00001-of-00006.safetensors",
479
+ "vision_tower.transformer.layers.19.attention_norm.weight": "model-00001-of-00006.safetensors",
480
+ "vision_tower.transformer.layers.19.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
481
+ "vision_tower.transformer.layers.19.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
482
+ "vision_tower.transformer.layers.19.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
483
+ "vision_tower.transformer.layers.19.ffn_norm.weight": "model-00001-of-00006.safetensors",
484
+ "vision_tower.transformer.layers.2.attention.k_proj.weight": "model-00001-of-00006.safetensors",
485
+ "vision_tower.transformer.layers.2.attention.o_proj.weight": "model-00001-of-00006.safetensors",
486
+ "vision_tower.transformer.layers.2.attention.q_proj.weight": "model-00001-of-00006.safetensors",
487
+ "vision_tower.transformer.layers.2.attention.v_proj.weight": "model-00001-of-00006.safetensors",
488
+ "vision_tower.transformer.layers.2.attention_norm.weight": "model-00001-of-00006.safetensors",
489
+ "vision_tower.transformer.layers.2.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
490
+ "vision_tower.transformer.layers.2.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
491
+ "vision_tower.transformer.layers.2.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
492
+ "vision_tower.transformer.layers.2.ffn_norm.weight": "model-00001-of-00006.safetensors",
493
+ "vision_tower.transformer.layers.20.attention.k_proj.weight": "model-00001-of-00006.safetensors",
494
+ "vision_tower.transformer.layers.20.attention.o_proj.weight": "model-00001-of-00006.safetensors",
495
+ "vision_tower.transformer.layers.20.attention.q_proj.weight": "model-00001-of-00006.safetensors",
496
+ "vision_tower.transformer.layers.20.attention.v_proj.weight": "model-00001-of-00006.safetensors",
497
+ "vision_tower.transformer.layers.20.attention_norm.weight": "model-00001-of-00006.safetensors",
498
+ "vision_tower.transformer.layers.20.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
499
+ "vision_tower.transformer.layers.20.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
500
+ "vision_tower.transformer.layers.20.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
501
+ "vision_tower.transformer.layers.20.ffn_norm.weight": "model-00001-of-00006.safetensors",
502
+ "vision_tower.transformer.layers.21.attention.k_proj.weight": "model-00001-of-00006.safetensors",
503
+ "vision_tower.transformer.layers.21.attention.o_proj.weight": "model-00001-of-00006.safetensors",
504
+ "vision_tower.transformer.layers.21.attention.q_proj.weight": "model-00001-of-00006.safetensors",
505
+ "vision_tower.transformer.layers.21.attention.v_proj.weight": "model-00001-of-00006.safetensors",
506
+ "vision_tower.transformer.layers.21.attention_norm.weight": "model-00001-of-00006.safetensors",
507
+ "vision_tower.transformer.layers.21.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
508
+ "vision_tower.transformer.layers.21.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
509
+ "vision_tower.transformer.layers.21.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
510
+ "vision_tower.transformer.layers.21.ffn_norm.weight": "model-00001-of-00006.safetensors",
511
+ "vision_tower.transformer.layers.22.attention.k_proj.weight": "model-00001-of-00006.safetensors",
512
+ "vision_tower.transformer.layers.22.attention.o_proj.weight": "model-00001-of-00006.safetensors",
513
+ "vision_tower.transformer.layers.22.attention.q_proj.weight": "model-00001-of-00006.safetensors",
514
+ "vision_tower.transformer.layers.22.attention.v_proj.weight": "model-00001-of-00006.safetensors",
515
+ "vision_tower.transformer.layers.22.attention_norm.weight": "model-00001-of-00006.safetensors",
516
+ "vision_tower.transformer.layers.22.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
517
+ "vision_tower.transformer.layers.22.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
518
+ "vision_tower.transformer.layers.22.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
519
+ "vision_tower.transformer.layers.22.ffn_norm.weight": "model-00001-of-00006.safetensors",
520
+ "vision_tower.transformer.layers.23.attention.k_proj.weight": "model-00001-of-00006.safetensors",
521
+ "vision_tower.transformer.layers.23.attention.o_proj.weight": "model-00001-of-00006.safetensors",
522
+ "vision_tower.transformer.layers.23.attention.q_proj.weight": "model-00001-of-00006.safetensors",
523
+ "vision_tower.transformer.layers.23.attention.v_proj.weight": "model-00001-of-00006.safetensors",
524
+ "vision_tower.transformer.layers.23.attention_norm.weight": "model-00001-of-00006.safetensors",
525
+ "vision_tower.transformer.layers.23.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
526
+ "vision_tower.transformer.layers.23.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
527
+ "vision_tower.transformer.layers.23.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
528
+ "vision_tower.transformer.layers.23.ffn_norm.weight": "model-00001-of-00006.safetensors",
529
+ "vision_tower.transformer.layers.3.attention.k_proj.weight": "model-00001-of-00006.safetensors",
530
+ "vision_tower.transformer.layers.3.attention.o_proj.weight": "model-00001-of-00006.safetensors",
531
+ "vision_tower.transformer.layers.3.attention.q_proj.weight": "model-00001-of-00006.safetensors",
532
+ "vision_tower.transformer.layers.3.attention.v_proj.weight": "model-00001-of-00006.safetensors",
533
+ "vision_tower.transformer.layers.3.attention_norm.weight": "model-00001-of-00006.safetensors",
534
+ "vision_tower.transformer.layers.3.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
535
+ "vision_tower.transformer.layers.3.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
536
+ "vision_tower.transformer.layers.3.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
537
+ "vision_tower.transformer.layers.3.ffn_norm.weight": "model-00001-of-00006.safetensors",
538
+ "vision_tower.transformer.layers.4.attention.k_proj.weight": "model-00001-of-00006.safetensors",
539
+ "vision_tower.transformer.layers.4.attention.o_proj.weight": "model-00001-of-00006.safetensors",
540
+ "vision_tower.transformer.layers.4.attention.q_proj.weight": "model-00001-of-00006.safetensors",
541
+ "vision_tower.transformer.layers.4.attention.v_proj.weight": "model-00001-of-00006.safetensors",
542
+ "vision_tower.transformer.layers.4.attention_norm.weight": "model-00001-of-00006.safetensors",
543
+ "vision_tower.transformer.layers.4.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
544
+ "vision_tower.transformer.layers.4.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
545
+ "vision_tower.transformer.layers.4.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
546
+ "vision_tower.transformer.layers.4.ffn_norm.weight": "model-00001-of-00006.safetensors",
547
+ "vision_tower.transformer.layers.5.attention.k_proj.weight": "model-00001-of-00006.safetensors",
548
+ "vision_tower.transformer.layers.5.attention.o_proj.weight": "model-00001-of-00006.safetensors",
549
+ "vision_tower.transformer.layers.5.attention.q_proj.weight": "model-00001-of-00006.safetensors",
550
+ "vision_tower.transformer.layers.5.attention.v_proj.weight": "model-00001-of-00006.safetensors",
551
+ "vision_tower.transformer.layers.5.attention_norm.weight": "model-00001-of-00006.safetensors",
552
+ "vision_tower.transformer.layers.5.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
553
+ "vision_tower.transformer.layers.5.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
554
+ "vision_tower.transformer.layers.5.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
555
+ "vision_tower.transformer.layers.5.ffn_norm.weight": "model-00001-of-00006.safetensors",
556
+ "vision_tower.transformer.layers.6.attention.k_proj.weight": "model-00001-of-00006.safetensors",
557
+ "vision_tower.transformer.layers.6.attention.o_proj.weight": "model-00001-of-00006.safetensors",
558
+ "vision_tower.transformer.layers.6.attention.q_proj.weight": "model-00001-of-00006.safetensors",
559
+ "vision_tower.transformer.layers.6.attention.v_proj.weight": "model-00001-of-00006.safetensors",
560
+ "vision_tower.transformer.layers.6.attention_norm.weight": "model-00001-of-00006.safetensors",
561
+ "vision_tower.transformer.layers.6.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
562
+ "vision_tower.transformer.layers.6.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
563
+ "vision_tower.transformer.layers.6.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
564
+ "vision_tower.transformer.layers.6.ffn_norm.weight": "model-00001-of-00006.safetensors",
565
+ "vision_tower.transformer.layers.7.attention.k_proj.weight": "model-00001-of-00006.safetensors",
566
+ "vision_tower.transformer.layers.7.attention.o_proj.weight": "model-00001-of-00006.safetensors",
567
+ "vision_tower.transformer.layers.7.attention.q_proj.weight": "model-00001-of-00006.safetensors",
568
+ "vision_tower.transformer.layers.7.attention.v_proj.weight": "model-00001-of-00006.safetensors",
569
+ "vision_tower.transformer.layers.7.attention_norm.weight": "model-00001-of-00006.safetensors",
570
+ "vision_tower.transformer.layers.7.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
571
+ "vision_tower.transformer.layers.7.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
572
+ "vision_tower.transformer.layers.7.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
573
+ "vision_tower.transformer.layers.7.ffn_norm.weight": "model-00001-of-00006.safetensors",
574
+ "vision_tower.transformer.layers.8.attention.k_proj.weight": "model-00001-of-00006.safetensors",
575
+ "vision_tower.transformer.layers.8.attention.o_proj.weight": "model-00001-of-00006.safetensors",
576
+ "vision_tower.transformer.layers.8.attention.q_proj.weight": "model-00001-of-00006.safetensors",
577
+ "vision_tower.transformer.layers.8.attention.v_proj.weight": "model-00001-of-00006.safetensors",
578
+ "vision_tower.transformer.layers.8.attention_norm.weight": "model-00001-of-00006.safetensors",
579
+ "vision_tower.transformer.layers.8.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
580
+ "vision_tower.transformer.layers.8.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
581
+ "vision_tower.transformer.layers.8.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
582
+ "vision_tower.transformer.layers.8.ffn_norm.weight": "model-00001-of-00006.safetensors",
583
+ "vision_tower.transformer.layers.9.attention.k_proj.weight": "model-00001-of-00006.safetensors",
584
+ "vision_tower.transformer.layers.9.attention.o_proj.weight": "model-00001-of-00006.safetensors",
585
+ "vision_tower.transformer.layers.9.attention.q_proj.weight": "model-00001-of-00006.safetensors",
586
+ "vision_tower.transformer.layers.9.attention.v_proj.weight": "model-00001-of-00006.safetensors",
587
+ "vision_tower.transformer.layers.9.attention_norm.weight": "model-00001-of-00006.safetensors",
588
+ "vision_tower.transformer.layers.9.feed_forward.down_proj.weight": "model-00001-of-00006.safetensors",
589
+ "vision_tower.transformer.layers.9.feed_forward.gate_proj.weight": "model-00001-of-00006.safetensors",
590
+ "vision_tower.transformer.layers.9.feed_forward.up_proj.weight": "model-00001-of-00006.safetensors",
591
+ "vision_tower.transformer.layers.9.ffn_norm.weight": "model-00001-of-00006.safetensors"
592
+ }
593
+ }
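For reference, a minimal sketch of how the shard map above can be used. It assumes the standard safetensors index layout (a top-level `weight_map` object mapping tensor names to shard files, exactly the entries listed in this diff), so each tensor can be read from just the shard that stores it instead of loading all six files. The loader below is illustrative and not part of this repository.

```python
import json
from safetensors import safe_open

# The index maps every tensor name to the shard file that stores it.
with open("model.safetensors.index.json") as f:
    index = json.load(f)
weight_map = index["weight_map"]

def load_tensor(name: str):
    """Open only the shard that holds `name` and return that single tensor."""
    shard = weight_map[name]  # e.g. "model-00006-of-00006.safetensors"
    with safe_open(shard, framework="pt") as shard_file:
        return shard_file.get_tensor(name)

# Example: per the entries above, the final language-model norm lives in shard 6 of 6.
norm = load_tensor("language_model.model.norm.weight")
print(norm.shape)
```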
params.json ADDED
@@ -0,0 +1,47 @@
1
+ {
2
+ "dim": 5120,
3
+ "n_layers": 40,
4
+ "head_dim": 128,
5
+ "hidden_dim": 16384,
6
+ "n_heads": 32,
7
+ "n_kv_heads": 8,
8
+ "rope_theta": 1000000000.0,
9
+ "norm_eps": 1e-05,
10
+ "vocab_size": 131072,
11
+ "tied_embeddings": false,
12
+ "max_position_embeddings": 262144,
13
+ "llama_4_scaling": {
14
+ "original_max_position_embeddings": 16384,
15
+ "beta": 0.1
16
+ },
17
+ "q_lora_rank": null,
18
+ "qk_rope_head_dim": null,
19
+ "qk_nope_head_dim": null,
20
+ "kv_lora_rank": null,
21
+ "v_head_dim": null,
22
+ "yarn": {
23
+ "original_max_position_embeddings": 16384,
24
+ "factor": 16,
25
+ "apply_scale": false,
26
+ "beta": 32,
27
+ "alpha": 1
28
+ },
29
+ "vision_encoder": {
30
+ "image_token_id": 10,
31
+ "intermediate_size": 4096,
32
+ "num_hidden_layers": 24,
33
+ "num_attention_heads": 16,
34
+ "mm_projector_id": "patch_merge",
35
+ "spatial_merge_size": 2,
36
+ "hidden_size": 1024,
37
+ "num_channels": 3,
38
+ "image_size": 1540,
39
+ "max_image_size": 1540,
40
+ "patch_size": 14,
41
+ "rope_theta": 10000.0,
42
+ "add_pre_mm_projector_layer_norm": true,
43
+ "adapter_bias": false,
44
+ "image_break_token_id": 12,
45
+ "image_end_token_id": 13
46
+ }
47
+ }
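The params.json values above fix the attention layout. A small illustrative calculation (not shipped with the model) of what they imply, using only the `n_heads`, `n_kv_heads`, `head_dim`, `dim`, `hidden_dim`, and `n_layers` fields listed in this file:

```python
import json

with open("params.json") as f:
    p = json.load(f)

# Grouped-query attention: 32 query heads share 8 KV heads (4 query heads per KV head).
gqa_groups = p["n_heads"] // p["n_kv_heads"]

# Projection widths implied by the listed values: queries use all heads, K/V only the KV heads.
q_out = p["n_heads"] * p["head_dim"]      # 32 * 128 = 4096
kv_out = p["n_kv_heads"] * p["head_dim"]  # 8 * 128 = 1024

print(f"layers={p['n_layers']}, model dim={p['dim']}, MLP dim={p['hidden_dim']}")
print(f"GQA: {gqa_groups} query heads per KV head; q_proj out={q_out}, k/v_proj out={kv_out}")
```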
processor_config.json ADDED
@@ -0,0 +1,42 @@
1
+ {
2
+ "image_break_token": "[IMG_BREAK]",
3
+ "image_end_token": "[IMG_END]",
4
+ "image_processor": {
5
+ "crop_size": null,
6
+ "data_format": "channels_first",
7
+ "device": null,
8
+ "disable_grouping": null,
9
+ "do_center_crop": null,
10
+ "do_convert_rgb": true,
11
+ "do_normalize": true,
12
+ "do_pad": null,
13
+ "do_rescale": true,
14
+ "do_resize": true,
15
+ "image_mean": [
16
+ 0.48145466,
17
+ 0.4578275,
18
+ 0.40821073
19
+ ],
20
+ "image_processor_type": "PixtralImageProcessorFast",
21
+ "image_seq_length": null,
22
+ "image_std": [
23
+ 0.26862954,
24
+ 0.26130258,
25
+ 0.27577711
26
+ ],
27
+ "input_data_format": null,
28
+ "pad_size": null,
29
+ "patch_size": 14,
30
+ "processor_class": "PixtralProcessor",
31
+ "resample": 3,
32
+ "rescale_factor": 0.00392156862745098,
33
+ "return_tensors": null,
34
+ "size": {
35
+ "longest_edge": 1540
36
+ }
37
+ },
38
+ "image_token": "[IMG]",
39
+ "patch_size": 14,
40
+ "processor_class": "PixtralProcessor",
41
+ "spatial_merge_size": 2
42
+ }
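The processor settings above (longest edge 1540, patch size 14, spatial merge size 2) bound how many image tokens a single picture can produce. Below is a rough, illustrative estimate assuming the usual resize-then-patchify flow; the exact rounding inside `PixtralImageProcessorFast` may differ slightly, and the `[IMG_BREAK]`/`[IMG_END]` markers are not counted here.

```python
import math

# Values taken from processor_config.json above.
LONGEST_EDGE = 1540
PATCH_SIZE = 14
SPATIAL_MERGE = 2

def approx_image_tokens(width: int, height: int) -> int:
    """Rough token count for one image: shrink so the longest side fits
    LONGEST_EDGE, split into 14x14 patches, then merge 2x2 patch groups."""
    scale = min(1.0, LONGEST_EDGE / max(width, height))
    w, h = width * scale, height * scale
    patches_w = math.ceil(w / PATCH_SIZE)
    patches_h = math.ceil(h / PATCH_SIZE)
    # The patch merger combines SPATIAL_MERGE x SPATIAL_MERGE patches into one token.
    return math.ceil(patches_w / SPATIAL_MERGE) * math.ceil(patches_h / SPATIAL_MERGE)

print(approx_image_tokens(1540, 1540))  # roughly the per-image maximum: 55 * 55 = 3025
```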
special_tokens_map.json ADDED
The diff for this file is too large to render. See raw diff
 
tekken.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e29d19ea32eb7e26e6c0572d57cb7f9eca0f4420e0e0fe6ae1cf3be94da1c0d6
3
+ size 16753777
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:577575622324b2e099e2648be26bdeb5e5815ffe66d7004e9e3ddbf421db6bf1
3
+ size 17078110
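tekken.json and tokenizer.json are stored as Git LFS pointer files; the real payloads are fetched by LFS and can be checked against the `oid`/`size` fields shown above. An illustrative verification using the tokenizer.json values from this diff:

```python
import hashlib
import os

def verify_lfs_object(local_path: str, expected_sha256: str, expected_size: int) -> bool:
    """Check a downloaded LFS object against the oid/size recorded in its pointer file."""
    if os.path.getsize(local_path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(local_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Values copied from the tokenizer.json pointer above.
ok = verify_lfs_object(
    "tokenizer.json",
    "577575622324b2e099e2648be26bdeb5e5815ffe66d7004e9e3ddbf421db6bf1",
    17078110,
)
print("tokenizer.json matches its LFS pointer:", ok)
```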
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff