Image-to-Text · Transformers · Safetensors · Cosmos · English · qwen2_5_vl · nvidia · text-generation-inference
Commit fc5cdb9 (verified) · committed by denizaybey and harrim-nv · 0 Parent(s)

Duplicate from nvidia/Cosmos-Reason1-7B


Co-authored-by: Mohammad Harrim <harrim-nv@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,35 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
5
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
12
+ *.model filter=lfs diff=lfs merge=lfs -text
13
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
14
+ *.npy filter=lfs diff=lfs merge=lfs -text
15
+ *.npz filter=lfs diff=lfs merge=lfs -text
16
+ *.onnx filter=lfs diff=lfs merge=lfs -text
17
+ *.ot filter=lfs diff=lfs merge=lfs -text
18
+ *.parquet filter=lfs diff=lfs merge=lfs -text
19
+ *.pb filter=lfs diff=lfs merge=lfs -text
20
+ *.pickle filter=lfs diff=lfs merge=lfs -text
21
+ *.pkl filter=lfs diff=lfs merge=lfs -text
22
+ *.pt filter=lfs diff=lfs merge=lfs -text
23
+ *.pth filter=lfs diff=lfs merge=lfs -text
24
+ *.rar filter=lfs diff=lfs merge=lfs -text
25
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
26
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
27
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
28
+ *.tar filter=lfs diff=lfs merge=lfs -text
29
+ *.tflite filter=lfs diff=lfs merge=lfs -text
30
+ *.tgz filter=lfs diff=lfs merge=lfs -text
31
+ *.wasm filter=lfs diff=lfs merge=lfs -text
32
+ *.xz filter=lfs diff=lfs merge=lfs -text
33
+ *.zip filter=lfs diff=lfs merge=lfs -text
34
+ *.zst filter=lfs diff=lfs merge=lfs -text
35
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
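Every weight and archive extension listed above is routed through Git LFS, so a plain `git clone` without `git lfs install` fetches only small pointer files rather than the actual shards. As a hedged aside (not part of this commit), the snippet below is a minimal sketch of materializing the full checkpoint with `huggingface_hub.snapshot_download`, which resolves the LFS objects automatically; it assumes `huggingface_hub` is installed.

```python
# Minimal sketch: download the LFS-backed files without needing git-lfs locally.
from huggingface_hub import snapshot_download

# Resolves all repository files, including the *.safetensors shards tracked by LFS.
local_path = snapshot_download(repo_id="nvidia/Cosmos-Reason1-7B")
print("Checkpoint files resolved to:", local_path)
```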
README.md ADDED
@@ -0,0 +1,313 @@
1
+ ---
2
+ license: other
3
+ license_name: nvidia-open-model-license
4
+ license_link: >-
5
+ https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license
6
+ datasets:
7
+ - nvidia/Cosmos-Reason1-SFT-Dataset
8
+ - nvidia/Cosmos-Reason1-RL-Dataset
9
+ - nvidia/Cosmos-Reason1-Benchmark
10
+ library_name: transformers
11
+ language:
12
+ - en
13
+ base_model:
14
+ - Qwen/Qwen2.5-VL-7B-Instruct
15
+ tags:
16
+ - nvidia
17
+ - cosmos
18
+ ---
19
+
20
+
21
+ # **Cosmos-Reason1: Physical AI Common Sense and Embodied Reasoning Models**
22
+
23
+ [**Cosmos**](https://huggingface.co/collections/nvidia/cosmos-reason1-67c9e926206426008f1da1b7) | [**Code**](https://github.com/nvidia-cosmos/cosmos-reason1) | [**Paper**](https://arxiv.org/abs/2503.15558) | [**Paper Website**](https://research.nvidia.com/labs/dir/cosmos-reason1)
24
+
25
+ # Model Overview
26
+
27
+ ## Description:
28
+
29
+ **Cosmos-Reason1 Models**: Physical AI models that understand physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning.
30
+
31
+ The Cosmos-Reason1 models are post-trained on physical common sense and embodied reasoning data using supervised fine-tuning and reinforcement learning. These are Physical AI models that can understand space, time, and fundamental physics, and can serve as planning models to reason about the next steps of an embodied agent.
32
+
33
+ The models are ready for commercial use.
34
+
35
+ **Model Developer**: NVIDIA
36
+
37
+ ## Model Versions
38
+
39
+ Cosmos-Reason1 includes the following model:
40
+
41
+ - [Cosmos-Reason1-7B](https://huggingface.co/nvidia/Cosmos-Reason1-7B): Given a text prompt and an input video, reasons step by step and generates an answer grounded in the input text prompt and video.
42
+
43
+ ### License:
44
+
45
+ This model is released under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). For a custom license, please contact [cosmos-license@nvidia.com](mailto:cosmos-license@nvidia.com).
46
+
47
+ Under the NVIDIA Open Model License, NVIDIA confirms:
48
+
49
+ * Models are commercially usable.
50
+ * You are free to create and distribute Derivative Models.
51
+ * NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.
52
+
53
+ **Important Note**: If You bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism (collectively “Guardrail”) contained in the Model without a substantially similar Guardrail appropriate for your use case, your rights under this Agreement ([NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license)) will automatically terminate.
54
+
55
+ ### Deployment Geography:
56
+
57
+ Global
58
+
59
+ ### Use Case:
60
+
61
+ Physical AI: understanding of space, time, and fundamental physics, and embodied reasoning, encompassing robotics and autonomous vehicles (AV).
62
+
63
+ ### Release Date:
64
+
65
+ * Github: [05/17/2025](https://github.com/nvidia-cosmos/cosmos-reason1)
66
+ * Hugging Face:
67
+ * [06/10/2025](https://huggingface.co/nvidia/Cosmos-Reason1-7B/commit/2464fff43c5c0bfb1916ac8c009feda4aed81be9). Enhanced critic capability for physical plausibility.
68
+ * [05/17/2025](https://huggingface.co/nvidia/Cosmos-Reason1-7B/commit/098a5bb62a1f4fc05e5c4ac89aae8005e301aa18). Initial release.
69
+
70
+ ## Model Architecture:
71
+
72
+ Architecture Type: A multi-modal LLM consisting of a Vision Transformer (ViT) vision encoder and a dense Transformer LLM.
73
+ Network Architecture: Qwen2.5-VL-7B-Instruct.
74
+
75
+ Cosmos-Reason1-7B is post-trained from [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) and follows the same model architecture.
76
+
77
+
78
+ ## Input
79
+
80
+ **Input Type(s)**: Text+Video/Image
81
+
82
+ **Input Format(s)**:
83
+ * Text: String
84
+ * Video: mp4
85
+ * Image: jpg
86
+
87
+ **Input Parameters**:
88
+ * Text: One-dimensional (1D)
89
+ * Video: Three-dimensional (3D)
90
+ * Image: Two-dimensional (2D)
91
+
92
+ **Other Properties Related to Input**:
93
+ * Use `FPS=4` for input video to match the training setup.
94
+ * Append `Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>.` in the system prompt to encourage long chain-of-thought reasoning response.
95
+
96
+ ## Output
97
+
98
+ **Output Type(s)**: Text
99
+
100
+ **Output Format**: String
101
+
102
+ **Output Parameters**: Text: One-dimensional (1D)
103
+
104
+ **Other Properties Related to Output**:
105
+ * We recommend using 4096 or more output max tokens to avoid truncating long chain-of-thought responses.
106
+ * Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
107
+
108
+
109
+ ## Software Integration
110
+
111
+ **Runtime Engine(s):**
112
+
113
+ * [vLLM](https://github.com/vllm-project/vllm)
114
+
115
+ **Supported Hardware Microarchitecture Compatibility:**
116
+
117
+ * NVIDIA Blackwell
118
+ * NVIDIA Hopper
119
+
120
+ **Note**: We have only tested inference with BF16 precision.
121
+
122
+ **Operating System(s):**
123
+
124
+ * Linux (We have not tested on other operating systems.)
125
+
126
+
127
+ # Usage
128
+
129
+ See [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) for details.
130
+ * Post Training: [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) provides examples of supervised fine-tuning and reinforcement learning on embodied reasoning datasets.
131
+
132
+ # Evaluation
133
+
134
+ Please see our [technical paper](https://arxiv.org/pdf/2503.15558) for detailed evaluations on physical common sense and embodied reasoning. Part of the evaluation datasets are released under [Cosmos-Reason1-Benchmark](https://huggingface.co/datasets/nvidia/Cosmos-Reason1-Benchmark). The embodied reasoning datasets and benchmarks focus on the following areas: robotics (RoboVQA, BridgeDataV2, AgiBot, RoboFail), ego-centric human demonstration (HoloAssist), and Autonomous Vehicle (AV) driving video data. The AV dataset is collected and annotated by NVIDIA.
135
+ All datasets go through the data annotation process described in the technical paper to prepare training and evaluation data and annotations.
136
+
137
+ **Data Collection Method**:
138
+ * RoboVQA: Hybrid: Automatic/Sensors
139
+ * BridgeDataV2: Automatic/Sensors
140
+ * AgiBot: Automatic/Sensors
141
+ * RoboFail: Automatic/Sensors
142
+ * HoloAssist: Human
143
+ * AV: Automatic/Sensors
144
+
145
+ **Labeling Method**:
146
+ * RoboVQA: Hybrid: Human, Automated
147
+ * BridgeDataV2: Hybrid: Human, Automated
148
+ * AgiBot: Hybrid: Human, Automated
149
+ * RoboFail: Hybrid: Human, Automated
150
+ * HoloAssist: Hybrid: Human, Automated
151
+ * AV: Hybrid: Human, Automated
152
+
153
+ **Metrics**:
154
+ We report the model accuracy on the embodied reasoning benchmark introduced in [Cosmos-Reason1](https://arxiv.org/abs/2503.15558). The results differ from those presented in Table 9 due to additional training aimed at supporting a broader range of Physical AI tasks beyond the benchmark.
155
+ | | [RoboVQA](https://robovqa.github.io/) | AV | [BridgeDataV2](https://rail-berkeley.github.io/bridgedata/)| [Agibot](https://github.com/OpenDriveLab/AgiBot-World)| [HoloAssist](https://holoassist.github.io/) | [RoboFail](https://robot-reflect.github.io/) | Average |
156
+ |--------------------|---------------------------------------------|----------|------------------------------------------------------|------------------------------------------------|------------------------------------------------|------------------------------------------------|------------------------------------------------|
157
+ | **Accuracy** | 87.3 | 70.8 | 63.7 | 48.9 | 62.7 | 57.2 | 65.1 |
158
+
159
+ ## Dataset Format
160
+ Modality: Video (mp4) and Text
161
+
162
+ ## Dataset Quantification
163
+ We release the embodied reasoning data and benchmarks. Each data sample is a pair of video and text. The text annotations include understanding and reasoning annotations described in the Cosmos-Reason1 paper. Each video may have multiple text annotations. The quantity of the video and text pairs is described in the table below.
164
+ **The AV data is currently unavailable and will be uploaded soon!**
165
+
166
+ | | [RoboVQA](https://robovqa.github.io/) | AV | [BridgeDataV2](https://rail-berkeley.github.io/bridgedata/)| [Agibot](https://github.com/OpenDriveLab/AgiBot-World)| [HoloAssist](https://holoassist.github.io/) | [RoboFail](https://robot-reflect.github.io/) | Total Storage Size |
167
+ |--------------------|---------------------------------------------|----------|------------------------------------------------------|------------------------------------------------|------------------------------------------------|------------------------------------------------|--------------------|
168
+ | **SFT Data** | 1.14m | 24.7k | 258k | 38.9k | 273k | N/A | **300.6GB** |
169
+ | **RL Data** | 252 | 200 | 240 | 200 | 200 | N/A | **2.6GB** |
170
+ | **Benchmark Data** | 110 | 100 | 100 | 100 | 100 | 100 | **1.5GB** |
171
+
172
+
173
+
174
+ We release text annotations for all embodied reasoning datasets, and videos for the RoboVQA and AV datasets. For the other datasets, users may download the source videos from the original data source and locate the corresponding clips via the video names. The held-out RoboFail benchmark is released for measuring generalization capability.
175
+
176
+
177
+ ## Inference:
178
+ **Test Hardware:** H100, A100, GB200 <br>
179
+ > [!NOTE]
180
+ > We suggest using `fps=4` for the input video and `max_tokens=4096` to avoid truncated response.
181
+ ```python
182
+ from transformers import AutoProcessor
183
+ from vllm import LLM, SamplingParams
184
+ from qwen_vl_utils import process_vision_info
185
+
186
+ # You can also replace MODEL_PATH with a local path to the safetensors checkpoint folder listed above
187
+ MODEL_PATH = "nvidia/Cosmos-Reason1-7B"
188
+
189
+ llm = LLM(
190
+ model=MODEL_PATH,
191
+ limit_mm_per_prompt={"image": 10, "video": 10},
192
+ )
193
+
194
+ sampling_params = SamplingParams(
195
+ temperature=0.6,
196
+ top_p=0.95,
197
+ repetition_penalty=1.05,
198
+ max_tokens=4096,
199
+ )
200
+
201
+ video_messages = [
202
+ {"role": "system", "content": "You are a helpful assistant. Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."},
203
+ {"role": "user", "content": [
204
+ {"type": "text", "text": (
205
+ "Is it safe to turn right?"
206
+ )
207
+ },
208
+ {
209
+ "type": "video",
210
+ "video": "file:///path/to/your/video.mp4",
211
+ "fps": 4,
212
+ }
213
+ ]
214
+ },
215
+ ]
216
+
217
+ # Here we use video messages as a demonstration
218
+ messages = video_messages
219
+
220
+ processor = AutoProcessor.from_pretrained(MODEL_PATH)
221
+ prompt = processor.apply_chat_template(
222
+ messages,
223
+ tokenize=False,
224
+ add_generation_prompt=True,
225
+ )
226
+ image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
227
+
228
+ mm_data = {}
229
+ if image_inputs is not None:
230
+ mm_data["image"] = image_inputs
231
+ if video_inputs is not None:
232
+ mm_data["video"] = video_inputs
233
+
234
+ llm_inputs = {
235
+ "prompt": prompt,
236
+ "multi_modal_data": mm_data,
237
+
238
+ # FPS will be returned in video_kwargs
239
+ "mm_processor_kwargs": video_kwargs,
240
+ }
241
+
242
+ outputs = llm.generate([llm_inputs], sampling_params=sampling_params)
243
+ generated_text = outputs[0].outputs[0].text
244
+
245
+ print(generated_text)
246
+ ```
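As a hedged aside (not part of the original card), roughly the same flow can be run with plain Hugging Face Transformers instead of vLLM. The sketch below assumes a `transformers` release that ships `Qwen2_5_VLForConditionalGeneration` together with the `qwen_vl_utils` helper; the video path is a placeholder, and the sampling values mirror the vLLM example above.

```python
# Minimal sketch: Transformers-only inference (no vLLM), under the assumptions above.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

MODEL_PATH = "nvidia/Cosmos-Reason1-7B"

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_PATH, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_PATH)

messages = [
    {"role": "system", "content": "You are a helpful assistant. Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."},
    {"role": "user", "content": [
        {"type": "text", "text": "Is it safe to turn right?"},
        {"type": "video", "video": "file:///path/to/your/video.mp4", "fps": 4},
    ]},
]

prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
    text=[prompt],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
    **video_kwargs,  # carries fps so temporal positions match the training setup
).to(model.device)

# 4096 new tokens leaves room for the long <think> ... </think> block.
generated = model.generate(
    **inputs, do_sample=True, temperature=0.6, top_p=0.95,
    repetition_penalty=1.05, max_new_tokens=4096,
)
# Drop the prompt tokens before decoding the answer.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```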
247
+
248
+
249
+ ## Ethical Considerations
250
+
251
+ NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
252
+
253
+ Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.
254
+
255
+ For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below.
256
+
257
+ Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
258
+
259
+ ### Plus Plus (++) Promise
260
+
261
+ We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:
262
+
263
+ * Verified to comply with current applicable disclosure laws, regulations, and industry standards.
264
+ * Verified to comply with applicable privacy labeling requirements.
265
+ * Annotated to describe the collector/source (NVIDIA or a third-party).
266
+ * Characterized for technical limitations.
267
+ * Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
268
+ * Reviewed before release.
269
+ * Tagged for known restrictions and potential safety implications.
270
+
271
+ ### Bias
272
+
273
+ | Field | Response |
274
+ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------- |
275
+ | Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None |
276
+ | Measures taken to mitigate against unwanted bias: | The training video sources contain multiple physical embodiments and environments including human, car, single arm robot, bimanual robot in indoor and outdoor environments. By training on numerous and various physical interactions and curated datasets, we strive to provide a model that does not possess biases towards certain embodiments or environments. |
277
+
278
+ ### Explainability
279
+
280
+ | Field | Response |
281
+ | :-------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------- |
282
+ | Intended Application & Domain: | Physical AI Reasoning |
283
+ | Model Type: | Transformer |
284
+ | Intended Users: | Physical AI developers |
285
+ | Output: | Text |
286
+ | Describe how the model works: | Generates text answers based on input text prompt and video |
287
+ | Technical Limitations: | The model may not follow the video or text input accurately in challenging cases, where the input video shows complex scene composition and temporal dynamics. Examples of challenging scenes include: fast camera movements, overlapping human-object interactions, low lighting with high motion blur, and multiple people performing different actions simultaneously. |
288
+ | Verified to have met prescribed NVIDIA quality standards: | Yes |
289
+ | Performance Metrics: | Quantitative and Qualitative Evaluation. Cosmos-Reason1 proposes the embodied reasoning benchmark and physical common sense benchmark to evaluate accuracy with visual question answering. |
290
+ | Potential Known Risks: | The model's output can generate all forms of texts, including what may be considered toxic, offensive, or indecent. |
291
+ | Licensing: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |
292
+
293
+ ### Privacy
294
+
295
+ | Field | Response |
296
+ | :------------------------------------------------------------------ | :------------- |
297
+ | Generatable or reverse engineerable personal information? | None Known |
298
+ | Protected class data used to create this model? | None Known |
299
+ | Was consent obtained for any personal data used? | None Known |
300
+ | How often is dataset reviewed? | Before Release |
301
+ | Is there provenance for all datasets used in training? | Yes |
302
+ | Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
303
+ | Applicable Privacy Policy | [NVIDIA Privacy Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy) |
304
+
305
+
306
+ ### Safety
307
+
308
+ | Field | Response |
309
+ | :---------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
310
+ | Model Application(s): | Physical AI common sense understanding and embodied reasoning |
311
+ | Describe the life critical impact (if present). | None Known |
312
+ | Use Case Restrictions: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |
313
+ | Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Dataset access restrictions are enforced during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalog. |
chat_template.json ADDED
@@ -0,0 +1,3 @@
1
+ {
2
+ "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
3
+ }
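The template above wraps each turn in `<|im_start|>`/`<|im_end|>` markers and inserts `<|vision_start|><|image_pad|><|vision_end|>` or `<|vision_start|><|video_pad|><|vision_end|>` placeholders where images and videos appear. As a hedged illustration (not part of the repository files), the sketch below renders the template to show the exact prompt string the processor builds; no media is decoded at this step.

```python
# Minimal sketch: render the chat template to inspect the prompt string.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("nvidia/Cosmos-Reason1-7B")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": [
        {"type": "video", "video": "file:///path/to/your/video.mp4"},
        {"type": "text", "text": "Is it safe to turn right?"},
    ]},
]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected shape: <|im_start|>system ... <|im_end|>, then the user turn with a
# <|vision_start|><|video_pad|><|vision_end|> placeholder, then <|im_start|>assistant.
```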
config.json ADDED
@@ -0,0 +1,61 @@
1
+ {
2
+ "architectures": [
3
+ "Qwen2_5_VLForConditionalGeneration"
4
+ ],
5
+ "attention_dropout": 0.0,
6
+ "bos_token_id": 151643,
7
+ "eos_token_id": 151645,
8
+ "vision_start_token_id": 151652,
9
+ "vision_end_token_id": 151653,
10
+ "vision_token_id": 151654,
11
+ "image_token_id": 151655,
12
+ "video_token_id": 151656,
13
+ "hidden_act": "silu",
14
+ "hidden_size": 3584,
15
+ "initializer_range": 0.02,
16
+ "intermediate_size": 18944,
17
+ "max_position_embeddings": 128000,
18
+ "max_window_layers": 28,
19
+ "model_type": "qwen2_5_vl",
20
+ "num_attention_heads": 28,
21
+ "num_hidden_layers": 28,
22
+ "num_key_value_heads": 4,
23
+ "rms_norm_eps": 1e-06,
24
+ "rope_theta": 1000000.0,
25
+ "sliding_window": 32768,
26
+ "tie_word_embeddings": false,
27
+ "torch_dtype": "bfloat16",
28
+ "transformers_version": "4.41.2",
29
+ "use_cache": true,
30
+ "use_sliding_window": false,
31
+ "vision_config": {
32
+ "depth": 32,
33
+ "hidden_act": "silu",
34
+ "hidden_size": 1280,
35
+ "intermediate_size": 3420,
36
+ "num_heads": 16,
37
+ "in_chans": 3,
38
+ "out_hidden_size": 3584,
39
+ "patch_size": 14,
40
+ "spatial_merge_size": 2,
41
+ "spatial_patch_size": 14,
42
+ "window_size": 112,
43
+ "fullatt_block_indexes": [
44
+ 7,
45
+ 15,
46
+ 23,
47
+ 31
48
+ ],
49
+ "tokens_per_second": 2,
50
+ "temporal_patch_size": 2
51
+ },
52
+ "rope_scaling": {
53
+ "type": "mrope",
54
+ "mrope_section": [
55
+ 16,
56
+ 24,
57
+ 24
58
+ ]
59
+ },
60
+ "vocab_size": 152064
61
+ }
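As a hedged aside (not part of the repository files), the key hyperparameters above (28 decoder layers, hidden size 3584, 4 key/value heads for grouped-query attention, mrope rotary scaling, and a 32-block ViT encoder) can be checked without downloading the weight shards by fetching only `config.json`. The sketch below assumes `huggingface_hub` is installed.

```python
# Minimal sketch: fetch and inspect config.json only (no weights downloaded).
import json
from huggingface_hub import hf_hub_download

cfg_path = hf_hub_download("nvidia/Cosmos-Reason1-7B", "config.json")
with open(cfg_path) as f:
    cfg = json.load(f)

print(cfg["model_type"])                      # qwen2_5_vl
print(cfg["num_hidden_layers"],               # 28 decoder layers
      cfg["hidden_size"],                     # 3584
      cfg["num_key_value_heads"])             # 4 (grouped-query attention)
print(cfg["rope_scaling"]["mrope_section"])   # [16, 24, 24] multimodal RoPE split
print(cfg["vision_config"]["depth"])          # 32 ViT blocks in the vision encoder
```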
generation_config.json ADDED
@@ -0,0 +1,12 @@
1
+ {
2
+ "bos_token_id": 151643,
3
+ "pad_token_id": 151643,
4
+ "do_sample": true,
5
+ "eos_token_id": [
6
+ 151645,
7
+ 151643
8
+ ],
9
+ "repetition_penalty": 1.05,
10
+ "temperature": 0.000001,
11
+ "transformers_version": "4.37.0"
12
+ }
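These defaults are applied automatically by `generate()` when the model is loaded; note that `temperature: 0.000001` makes the default sampling effectively greedy, whereas the model card's example uses `temperature=0.6` with `top_p=0.95`. As a hedged illustration (not part of the repository files), the sketch below reads the defaults with the standard `GenerationConfig` API and shows how a call-time override would look.

```python
# Minimal sketch: read the generation defaults shipped with the checkpoint.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("nvidia/Cosmos-Reason1-7B")
print(gen_cfg.temperature)         # 1e-06 -> effectively greedy decoding
print(gen_cfg.repetition_penalty)  # 1.05
print(gen_cfg.eos_token_id)        # [151645, 151643]

# Any field can be overridden per call, e.g. to match the card's sampling setup:
# model.generate(**inputs, do_sample=True, temperature=0.6, top_p=0.95,
#                repetition_penalty=1.05, max_new_tokens=4096)
```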
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cb3789b9843f08b8a181ee43fc074f79edfacb9081677b8f35fae69c34de9efd
3
+ size 4968243304
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:522dd37d8d1ed41b2134903d374167e7336c7f4e5eac3ca7b310dfbb42606a05
3
+ size 4991495816
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2f74ab185f8f51ceb0d5521dbe6ec51b11e16d986412269456a0e5c043f526a6
3
+ size 4932751040
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5e0879e4d16019a1af04bec65f3058d114de947bd1412dd5cc8096b3fb7c6969
3
+ size 1691924384
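Each pointer above records the SHA-256 digest and byte size of its LFS-tracked shard, so a download can be verified offline. The sketch below is a hedged illustration (not part of the repository): it assumes the fourth shard has been saved in the working directory, and the expected digest is copied from the pointer for `model-00004-of-00004.safetensors`.

```python
# Minimal sketch: verify a downloaded shard against its LFS pointer digest.
import hashlib

EXPECTED_SHA256 = "5e0879e4d16019a1af04bec65f3058d114de947bd1412dd5cc8096b3fb7c6969"

sha = hashlib.sha256()
with open("model-00004-of-00004.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

assert sha.hexdigest() == EXPECTED_SHA256, "shard is corrupted or incomplete"
print("model-00004-of-00004.safetensors verified")
```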
model.safetensors.index.json ADDED
@@ -0,0 +1,736 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 16584333312
4
+ },
5
+ "weight_map": {
6
+ "visual.patch_embed.proj.weight": "model-00001-of-00004.safetensors",
7
+ "visual.blocks.0.norm1.weight": "model-00001-of-00004.safetensors",
8
+ "visual.blocks.0.norm2.weight": "model-00001-of-00004.safetensors",
9
+ "visual.blocks.0.attn.qkv.weight": "model-00001-of-00004.safetensors",
10
+ "visual.blocks.0.attn.qkv.bias": "model-00001-of-00004.safetensors",
11
+ "visual.blocks.0.attn.proj.weight": "model-00001-of-00004.safetensors",
12
+ "visual.blocks.0.attn.proj.bias": "model-00001-of-00004.safetensors",
13
+ "visual.blocks.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
14
+ "visual.blocks.0.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
15
+ "visual.blocks.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
16
+ "visual.blocks.0.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
17
+ "visual.blocks.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
18
+ "visual.blocks.0.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
19
+ "visual.blocks.1.norm1.weight": "model-00001-of-00004.safetensors",
20
+ "visual.blocks.1.norm2.weight": "model-00001-of-00004.safetensors",
21
+ "visual.blocks.1.attn.qkv.weight": "model-00001-of-00004.safetensors",
22
+ "visual.blocks.1.attn.qkv.bias": "model-00001-of-00004.safetensors",
23
+ "visual.blocks.1.attn.proj.weight": "model-00001-of-00004.safetensors",
24
+ "visual.blocks.1.attn.proj.bias": "model-00001-of-00004.safetensors",
25
+ "visual.blocks.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
26
+ "visual.blocks.1.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
27
+ "visual.blocks.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
28
+ "visual.blocks.1.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
29
+ "visual.blocks.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
30
+ "visual.blocks.1.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
31
+ "visual.blocks.2.norm1.weight": "model-00001-of-00004.safetensors",
32
+ "visual.blocks.2.norm2.weight": "model-00001-of-00004.safetensors",
33
+ "visual.blocks.2.attn.qkv.weight": "model-00001-of-00004.safetensors",
34
+ "visual.blocks.2.attn.qkv.bias": "model-00001-of-00004.safetensors",
35
+ "visual.blocks.2.attn.proj.weight": "model-00001-of-00004.safetensors",
36
+ "visual.blocks.2.attn.proj.bias": "model-00001-of-00004.safetensors",
37
+ "visual.blocks.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
38
+ "visual.blocks.2.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
39
+ "visual.blocks.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
40
+ "visual.blocks.2.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
41
+ "visual.blocks.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
42
+ "visual.blocks.2.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
43
+ "visual.blocks.3.norm1.weight": "model-00001-of-00004.safetensors",
44
+ "visual.blocks.3.norm2.weight": "model-00001-of-00004.safetensors",
45
+ "visual.blocks.3.attn.qkv.weight": "model-00001-of-00004.safetensors",
46
+ "visual.blocks.3.attn.qkv.bias": "model-00001-of-00004.safetensors",
47
+ "visual.blocks.3.attn.proj.weight": "model-00001-of-00004.safetensors",
48
+ "visual.blocks.3.attn.proj.bias": "model-00001-of-00004.safetensors",
49
+ "visual.blocks.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
50
+ "visual.blocks.3.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
51
+ "visual.blocks.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
52
+ "visual.blocks.3.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
53
+ "visual.blocks.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
54
+ "visual.blocks.3.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
55
+ "visual.blocks.4.norm1.weight": "model-00001-of-00004.safetensors",
56
+ "visual.blocks.4.norm2.weight": "model-00001-of-00004.safetensors",
57
+ "visual.blocks.4.attn.qkv.weight": "model-00001-of-00004.safetensors",
58
+ "visual.blocks.4.attn.qkv.bias": "model-00001-of-00004.safetensors",
59
+ "visual.blocks.4.attn.proj.weight": "model-00001-of-00004.safetensors",
60
+ "visual.blocks.4.attn.proj.bias": "model-00001-of-00004.safetensors",
61
+ "visual.blocks.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
62
+ "visual.blocks.4.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
63
+ "visual.blocks.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
64
+ "visual.blocks.4.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
65
+ "visual.blocks.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
66
+ "visual.blocks.4.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
67
+ "visual.blocks.5.norm1.weight": "model-00001-of-00004.safetensors",
68
+ "visual.blocks.5.norm2.weight": "model-00001-of-00004.safetensors",
69
+ "visual.blocks.5.attn.qkv.weight": "model-00001-of-00004.safetensors",
70
+ "visual.blocks.5.attn.qkv.bias": "model-00001-of-00004.safetensors",
71
+ "visual.blocks.5.attn.proj.weight": "model-00001-of-00004.safetensors",
72
+ "visual.blocks.5.attn.proj.bias": "model-00001-of-00004.safetensors",
73
+ "visual.blocks.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
74
+ "visual.blocks.5.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
75
+ "visual.blocks.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
76
+ "visual.blocks.5.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
77
+ "visual.blocks.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
78
+ "visual.blocks.5.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
79
+ "visual.blocks.6.norm1.weight": "model-00001-of-00004.safetensors",
80
+ "visual.blocks.6.norm2.weight": "model-00001-of-00004.safetensors",
81
+ "visual.blocks.6.attn.qkv.weight": "model-00001-of-00004.safetensors",
82
+ "visual.blocks.6.attn.qkv.bias": "model-00001-of-00004.safetensors",
83
+ "visual.blocks.6.attn.proj.weight": "model-00001-of-00004.safetensors",
84
+ "visual.blocks.6.attn.proj.bias": "model-00001-of-00004.safetensors",
85
+ "visual.blocks.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
86
+ "visual.blocks.6.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
87
+ "visual.blocks.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
88
+ "visual.blocks.6.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
89
+ "visual.blocks.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
90
+ "visual.blocks.6.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
91
+ "visual.blocks.7.norm1.weight": "model-00001-of-00004.safetensors",
92
+ "visual.blocks.7.norm2.weight": "model-00001-of-00004.safetensors",
93
+ "visual.blocks.7.attn.qkv.weight": "model-00001-of-00004.safetensors",
94
+ "visual.blocks.7.attn.qkv.bias": "model-00001-of-00004.safetensors",
95
+ "visual.blocks.7.attn.proj.weight": "model-00001-of-00004.safetensors",
96
+ "visual.blocks.7.attn.proj.bias": "model-00001-of-00004.safetensors",
97
+ "visual.blocks.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
98
+ "visual.blocks.7.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
99
+ "visual.blocks.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
100
+ "visual.blocks.7.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
101
+ "visual.blocks.7.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
102
+ "visual.blocks.7.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
103
+ "visual.blocks.8.norm1.weight": "model-00001-of-00004.safetensors",
104
+ "visual.blocks.8.norm2.weight": "model-00001-of-00004.safetensors",
105
+ "visual.blocks.8.attn.qkv.weight": "model-00001-of-00004.safetensors",
106
+ "visual.blocks.8.attn.qkv.bias": "model-00001-of-00004.safetensors",
107
+ "visual.blocks.8.attn.proj.weight": "model-00001-of-00004.safetensors",
108
+ "visual.blocks.8.attn.proj.bias": "model-00001-of-00004.safetensors",
109
+ "visual.blocks.8.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
110
+ "visual.blocks.8.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
111
+ "visual.blocks.8.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
112
+ "visual.blocks.8.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
113
+ "visual.blocks.8.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
114
+ "visual.blocks.8.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
115
+ "visual.blocks.9.norm1.weight": "model-00001-of-00004.safetensors",
116
+ "visual.blocks.9.norm2.weight": "model-00001-of-00004.safetensors",
117
+ "visual.blocks.9.attn.qkv.weight": "model-00001-of-00004.safetensors",
118
+ "visual.blocks.9.attn.qkv.bias": "model-00001-of-00004.safetensors",
119
+ "visual.blocks.9.attn.proj.weight": "model-00001-of-00004.safetensors",
120
+ "visual.blocks.9.attn.proj.bias": "model-00001-of-00004.safetensors",
121
+ "visual.blocks.9.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
122
+ "visual.blocks.9.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
123
+ "visual.blocks.9.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
124
+ "visual.blocks.9.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
125
+ "visual.blocks.9.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
126
+ "visual.blocks.9.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
127
+ "visual.blocks.10.norm1.weight": "model-00001-of-00004.safetensors",
128
+ "visual.blocks.10.norm2.weight": "model-00001-of-00004.safetensors",
129
+ "visual.blocks.10.attn.qkv.weight": "model-00001-of-00004.safetensors",
130
+ "visual.blocks.10.attn.qkv.bias": "model-00001-of-00004.safetensors",
131
+ "visual.blocks.10.attn.proj.weight": "model-00001-of-00004.safetensors",
132
+ "visual.blocks.10.attn.proj.bias": "model-00001-of-00004.safetensors",
133
+ "visual.blocks.10.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
134
+ "visual.blocks.10.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
135
+ "visual.blocks.10.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
136
+ "visual.blocks.10.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
137
+ "visual.blocks.10.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
138
+ "visual.blocks.10.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
139
+ "visual.blocks.11.norm1.weight": "model-00001-of-00004.safetensors",
140
+ "visual.blocks.11.norm2.weight": "model-00001-of-00004.safetensors",
141
+ "visual.blocks.11.attn.qkv.weight": "model-00001-of-00004.safetensors",
142
+ "visual.blocks.11.attn.qkv.bias": "model-00001-of-00004.safetensors",
143
+ "visual.blocks.11.attn.proj.weight": "model-00001-of-00004.safetensors",
144
+ "visual.blocks.11.attn.proj.bias": "model-00001-of-00004.safetensors",
145
+ "visual.blocks.11.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
146
+ "visual.blocks.11.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
147
+ "visual.blocks.11.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
148
+ "visual.blocks.11.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
149
+ "visual.blocks.11.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
150
+ "visual.blocks.11.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
151
+ "visual.blocks.12.norm1.weight": "model-00001-of-00004.safetensors",
152
+ "visual.blocks.12.norm2.weight": "model-00001-of-00004.safetensors",
153
+ "visual.blocks.12.attn.qkv.weight": "model-00001-of-00004.safetensors",
154
+ "visual.blocks.12.attn.qkv.bias": "model-00001-of-00004.safetensors",
155
+ "visual.blocks.12.attn.proj.weight": "model-00001-of-00004.safetensors",
156
+ "visual.blocks.12.attn.proj.bias": "model-00001-of-00004.safetensors",
157
+ "visual.blocks.12.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
158
+ "visual.blocks.12.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
159
+ "visual.blocks.12.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
160
+ "visual.blocks.12.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
161
+ "visual.blocks.12.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
162
+ "visual.blocks.12.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
163
+ "visual.blocks.13.norm1.weight": "model-00001-of-00004.safetensors",
164
+ "visual.blocks.13.norm2.weight": "model-00001-of-00004.safetensors",
165
+ "visual.blocks.13.attn.qkv.weight": "model-00001-of-00004.safetensors",
166
+ "visual.blocks.13.attn.qkv.bias": "model-00001-of-00004.safetensors",
167
+ "visual.blocks.13.attn.proj.weight": "model-00001-of-00004.safetensors",
168
+ "visual.blocks.13.attn.proj.bias": "model-00001-of-00004.safetensors",
169
+ "visual.blocks.13.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
170
+ "visual.blocks.13.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
171
+ "visual.blocks.13.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
172
+ "visual.blocks.13.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
173
+ "visual.blocks.13.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
174
+ "visual.blocks.13.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
175
+ "visual.blocks.14.norm1.weight": "model-00001-of-00004.safetensors",
176
+ "visual.blocks.14.norm2.weight": "model-00001-of-00004.safetensors",
177
+ "visual.blocks.14.attn.qkv.weight": "model-00001-of-00004.safetensors",
178
+ "visual.blocks.14.attn.qkv.bias": "model-00001-of-00004.safetensors",
179
+ "visual.blocks.14.attn.proj.weight": "model-00001-of-00004.safetensors",
180
+ "visual.blocks.14.attn.proj.bias": "model-00001-of-00004.safetensors",
181
+ "visual.blocks.14.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
182
+ "visual.blocks.14.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
183
+ "visual.blocks.14.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
184
+ "visual.blocks.14.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
185
+ "visual.blocks.14.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
186
+ "visual.blocks.14.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
187
+ "visual.blocks.15.norm1.weight": "model-00001-of-00004.safetensors",
188
+ "visual.blocks.15.norm2.weight": "model-00001-of-00004.safetensors",
189
+ "visual.blocks.15.attn.qkv.weight": "model-00001-of-00004.safetensors",
190
+ "visual.blocks.15.attn.qkv.bias": "model-00001-of-00004.safetensors",
191
+ "visual.blocks.15.attn.proj.weight": "model-00001-of-00004.safetensors",
192
+ "visual.blocks.15.attn.proj.bias": "model-00001-of-00004.safetensors",
193
+ "visual.blocks.15.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
194
+ "visual.blocks.15.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
195
+ "visual.blocks.15.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
196
+ "visual.blocks.15.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
197
+ "visual.blocks.15.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
198
+ "visual.blocks.15.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
199
+ "visual.blocks.16.norm1.weight": "model-00001-of-00004.safetensors",
200
+ "visual.blocks.16.norm2.weight": "model-00001-of-00004.safetensors",
201
+ "visual.blocks.16.attn.qkv.weight": "model-00001-of-00004.safetensors",
202
+ "visual.blocks.16.attn.qkv.bias": "model-00001-of-00004.safetensors",
203
+ "visual.blocks.16.attn.proj.weight": "model-00001-of-00004.safetensors",
204
+ "visual.blocks.16.attn.proj.bias": "model-00001-of-00004.safetensors",
205
+ "visual.blocks.16.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
206
+ "visual.blocks.16.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
207
+ "visual.blocks.16.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
208
+ "visual.blocks.16.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
209
+ "visual.blocks.16.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
210
+ "visual.blocks.16.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
211
+ "visual.blocks.17.norm1.weight": "model-00001-of-00004.safetensors",
212
+ "visual.blocks.17.norm2.weight": "model-00001-of-00004.safetensors",
213
+ "visual.blocks.17.attn.qkv.weight": "model-00001-of-00004.safetensors",
214
+ "visual.blocks.17.attn.qkv.bias": "model-00001-of-00004.safetensors",
215
+ "visual.blocks.17.attn.proj.weight": "model-00001-of-00004.safetensors",
216
+ "visual.blocks.17.attn.proj.bias": "model-00001-of-00004.safetensors",
217
+ "visual.blocks.17.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
218
+ "visual.blocks.17.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
219
+ "visual.blocks.17.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
220
+ "visual.blocks.17.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
221
+ "visual.blocks.17.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
222
+ "visual.blocks.17.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
223
+ "visual.blocks.18.norm1.weight": "model-00001-of-00004.safetensors",
224
+ "visual.blocks.18.norm2.weight": "model-00001-of-00004.safetensors",
225
+ "visual.blocks.18.attn.qkv.weight": "model-00001-of-00004.safetensors",
226
+ "visual.blocks.18.attn.qkv.bias": "model-00001-of-00004.safetensors",
227
+ "visual.blocks.18.attn.proj.weight": "model-00001-of-00004.safetensors",
228
+ "visual.blocks.18.attn.proj.bias": "model-00001-of-00004.safetensors",
229
+ "visual.blocks.18.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
230
+ "visual.blocks.18.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
231
+ "visual.blocks.18.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
232
+ "visual.blocks.18.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
233
+ "visual.blocks.18.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
234
+ "visual.blocks.18.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
235
+ "visual.blocks.19.norm1.weight": "model-00001-of-00004.safetensors",
236
+ "visual.blocks.19.norm2.weight": "model-00001-of-00004.safetensors",
237
+ "visual.blocks.19.attn.qkv.weight": "model-00001-of-00004.safetensors",
238
+ "visual.blocks.19.attn.qkv.bias": "model-00001-of-00004.safetensors",
239
+ "visual.blocks.19.attn.proj.weight": "model-00001-of-00004.safetensors",
240
+ "visual.blocks.19.attn.proj.bias": "model-00001-of-00004.safetensors",
241
+ "visual.blocks.19.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
242
+ "visual.blocks.19.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
243
+ "visual.blocks.19.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
244
+ "visual.blocks.19.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
245
+ "visual.blocks.19.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
246
+ "visual.blocks.19.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
247
+ "visual.blocks.20.norm1.weight": "model-00001-of-00004.safetensors",
248
+ "visual.blocks.20.norm2.weight": "model-00001-of-00004.safetensors",
249
+ "visual.blocks.20.attn.qkv.weight": "model-00001-of-00004.safetensors",
250
+ "visual.blocks.20.attn.qkv.bias": "model-00001-of-00004.safetensors",
251
+ "visual.blocks.20.attn.proj.weight": "model-00001-of-00004.safetensors",
252
+ "visual.blocks.20.attn.proj.bias": "model-00001-of-00004.safetensors",
253
+ "visual.blocks.20.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
254
+ "visual.blocks.20.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
255
+ "visual.blocks.20.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
256
+ "visual.blocks.20.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
257
+ "visual.blocks.20.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
258
+ "visual.blocks.20.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
259
+ "visual.blocks.21.norm1.weight": "model-00001-of-00004.safetensors",
260
+ "visual.blocks.21.norm2.weight": "model-00001-of-00004.safetensors",
261
+ "visual.blocks.21.attn.qkv.weight": "model-00001-of-00004.safetensors",
262
+ "visual.blocks.21.attn.qkv.bias": "model-00001-of-00004.safetensors",
263
+ "visual.blocks.21.attn.proj.weight": "model-00001-of-00004.safetensors",
264
+ "visual.blocks.21.attn.proj.bias": "model-00001-of-00004.safetensors",
265
+ "visual.blocks.21.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
266
+ "visual.blocks.21.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
267
+ "visual.blocks.21.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
268
+ "visual.blocks.21.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
269
+ "visual.blocks.21.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
270
+ "visual.blocks.21.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
271
+ "visual.blocks.22.norm1.weight": "model-00001-of-00004.safetensors",
272
+ "visual.blocks.22.norm2.weight": "model-00001-of-00004.safetensors",
273
+ "visual.blocks.22.attn.qkv.weight": "model-00001-of-00004.safetensors",
274
+ "visual.blocks.22.attn.qkv.bias": "model-00001-of-00004.safetensors",
275
+ "visual.blocks.22.attn.proj.weight": "model-00001-of-00004.safetensors",
276
+ "visual.blocks.22.attn.proj.bias": "model-00001-of-00004.safetensors",
277
+ "visual.blocks.22.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
278
+ "visual.blocks.22.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
279
+ "visual.blocks.22.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
280
+ "visual.blocks.22.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
281
+ "visual.blocks.22.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
282
+ "visual.blocks.22.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
283
+ "visual.blocks.23.norm1.weight": "model-00001-of-00004.safetensors",
284
+ "visual.blocks.23.norm2.weight": "model-00001-of-00004.safetensors",
285
+ "visual.blocks.23.attn.qkv.weight": "model-00001-of-00004.safetensors",
286
+ "visual.blocks.23.attn.qkv.bias": "model-00001-of-00004.safetensors",
287
+ "visual.blocks.23.attn.proj.weight": "model-00001-of-00004.safetensors",
288
+ "visual.blocks.23.attn.proj.bias": "model-00001-of-00004.safetensors",
289
+ "visual.blocks.23.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
290
+ "visual.blocks.23.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
291
+ "visual.blocks.23.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
292
+ "visual.blocks.23.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
293
+ "visual.blocks.23.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
294
+ "visual.blocks.23.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
295
+ "visual.blocks.24.norm1.weight": "model-00001-of-00004.safetensors",
296
+ "visual.blocks.24.norm2.weight": "model-00001-of-00004.safetensors",
297
+ "visual.blocks.24.attn.qkv.weight": "model-00001-of-00004.safetensors",
298
+ "visual.blocks.24.attn.qkv.bias": "model-00001-of-00004.safetensors",
299
+ "visual.blocks.24.attn.proj.weight": "model-00001-of-00004.safetensors",
300
+ "visual.blocks.24.attn.proj.bias": "model-00001-of-00004.safetensors",
301
+ "visual.blocks.24.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
302
+ "visual.blocks.24.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
303
+ "visual.blocks.24.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
304
+ "visual.blocks.24.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
305
+ "visual.blocks.24.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
306
+ "visual.blocks.24.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
307
+ "visual.blocks.25.norm1.weight": "model-00001-of-00004.safetensors",
308
+ "visual.blocks.25.norm2.weight": "model-00001-of-00004.safetensors",
309
+ "visual.blocks.25.attn.qkv.weight": "model-00001-of-00004.safetensors",
310
+ "visual.blocks.25.attn.qkv.bias": "model-00001-of-00004.safetensors",
311
+ "visual.blocks.25.attn.proj.weight": "model-00001-of-00004.safetensors",
312
+ "visual.blocks.25.attn.proj.bias": "model-00001-of-00004.safetensors",
313
+ "visual.blocks.25.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
314
+ "visual.blocks.25.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
315
+ "visual.blocks.25.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
316
+ "visual.blocks.25.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
317
+ "visual.blocks.25.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
318
+ "visual.blocks.25.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
319
+ "visual.blocks.26.norm1.weight": "model-00001-of-00004.safetensors",
320
+ "visual.blocks.26.norm2.weight": "model-00001-of-00004.safetensors",
321
+ "visual.blocks.26.attn.qkv.weight": "model-00001-of-00004.safetensors",
322
+ "visual.blocks.26.attn.qkv.bias": "model-00001-of-00004.safetensors",
323
+ "visual.blocks.26.attn.proj.weight": "model-00001-of-00004.safetensors",
324
+ "visual.blocks.26.attn.proj.bias": "model-00001-of-00004.safetensors",
325
+ "visual.blocks.26.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
326
+ "visual.blocks.26.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
327
+ "visual.blocks.26.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
328
+ "visual.blocks.26.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
329
+ "visual.blocks.26.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
330
+ "visual.blocks.26.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
331
+ "visual.blocks.27.norm1.weight": "model-00001-of-00004.safetensors",
332
+ "visual.blocks.27.norm2.weight": "model-00001-of-00004.safetensors",
333
+ "visual.blocks.27.attn.qkv.weight": "model-00001-of-00004.safetensors",
334
+ "visual.blocks.27.attn.qkv.bias": "model-00001-of-00004.safetensors",
335
+ "visual.blocks.27.attn.proj.weight": "model-00001-of-00004.safetensors",
336
+ "visual.blocks.27.attn.proj.bias": "model-00001-of-00004.safetensors",
337
+ "visual.blocks.27.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
338
+ "visual.blocks.27.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
339
+ "visual.blocks.27.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
340
+ "visual.blocks.27.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
341
+ "visual.blocks.27.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
342
+ "visual.blocks.27.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
343
+ "visual.blocks.28.norm1.weight": "model-00001-of-00004.safetensors",
344
+ "visual.blocks.28.norm2.weight": "model-00001-of-00004.safetensors",
345
+ "visual.blocks.28.attn.qkv.weight": "model-00001-of-00004.safetensors",
346
+ "visual.blocks.28.attn.qkv.bias": "model-00001-of-00004.safetensors",
347
+ "visual.blocks.28.attn.proj.weight": "model-00001-of-00004.safetensors",
348
+ "visual.blocks.28.attn.proj.bias": "model-00001-of-00004.safetensors",
349
+ "visual.blocks.28.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
350
+ "visual.blocks.28.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
351
+ "visual.blocks.28.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
352
+ "visual.blocks.28.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
353
+ "visual.blocks.28.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
354
+ "visual.blocks.28.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
355
+ "visual.blocks.29.norm1.weight": "model-00001-of-00004.safetensors",
356
+ "visual.blocks.29.norm2.weight": "model-00001-of-00004.safetensors",
357
+ "visual.blocks.29.attn.qkv.weight": "model-00001-of-00004.safetensors",
358
+ "visual.blocks.29.attn.qkv.bias": "model-00001-of-00004.safetensors",
359
+ "visual.blocks.29.attn.proj.weight": "model-00001-of-00004.safetensors",
360
+ "visual.blocks.29.attn.proj.bias": "model-00001-of-00004.safetensors",
361
+ "visual.blocks.29.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
362
+ "visual.blocks.29.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
363
+ "visual.blocks.29.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
364
+ "visual.blocks.29.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
365
+ "visual.blocks.29.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
366
+ "visual.blocks.29.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
367
+ "visual.blocks.30.norm1.weight": "model-00001-of-00004.safetensors",
368
+ "visual.blocks.30.norm2.weight": "model-00001-of-00004.safetensors",
369
+ "visual.blocks.30.attn.qkv.weight": "model-00001-of-00004.safetensors",
370
+ "visual.blocks.30.attn.qkv.bias": "model-00001-of-00004.safetensors",
371
+ "visual.blocks.30.attn.proj.weight": "model-00001-of-00004.safetensors",
372
+ "visual.blocks.30.attn.proj.bias": "model-00001-of-00004.safetensors",
373
+ "visual.blocks.30.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
374
+ "visual.blocks.30.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
375
+ "visual.blocks.30.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
376
+ "visual.blocks.30.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
377
+ "visual.blocks.30.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
378
+ "visual.blocks.30.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
379
+ "visual.blocks.31.norm1.weight": "model-00001-of-00004.safetensors",
380
+ "visual.blocks.31.norm2.weight": "model-00001-of-00004.safetensors",
381
+ "visual.blocks.31.attn.qkv.weight": "model-00001-of-00004.safetensors",
382
+ "visual.blocks.31.attn.qkv.bias": "model-00001-of-00004.safetensors",
383
+ "visual.blocks.31.attn.proj.weight": "model-00001-of-00004.safetensors",
384
+ "visual.blocks.31.attn.proj.bias": "model-00001-of-00004.safetensors",
385
+ "visual.blocks.31.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
386
+ "visual.blocks.31.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
387
+ "visual.blocks.31.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
388
+ "visual.blocks.31.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
389
+ "visual.blocks.31.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
390
+ "visual.blocks.31.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
391
+ "visual.merger.ln_q.weight": "model-00001-of-00004.safetensors",
392
+ "visual.merger.mlp.0.weight": "model-00001-of-00004.safetensors",
393
+ "visual.merger.mlp.0.bias": "model-00001-of-00004.safetensors",
394
+ "visual.merger.mlp.2.weight": "model-00001-of-00004.safetensors",
395
+ "visual.merger.mlp.2.bias": "model-00001-of-00004.safetensors",
396
+ "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
397
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
398
+ "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
399
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
400
+ "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
401
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
402
+ "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
403
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
404
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
405
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
406
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
407
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
408
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
409
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
410
+ "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
411
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
412
+ "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
413
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
414
+ "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
415
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
416
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
417
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
418
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
419
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
420
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
421
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
422
+ "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
423
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
424
+ "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
425
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
426
+ "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
427
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
428
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
429
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
430
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
431
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
432
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
433
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
434
+ "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
435
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
436
+ "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
437
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
438
+ "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
439
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
440
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
441
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
442
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
443
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
444
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
445
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
446
+ "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
447
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
448
+ "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
449
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
450
+ "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
451
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
452
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
453
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
454
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
455
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
456
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
457
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
458
+ "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
459
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
460
+ "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
461
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
462
+ "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
463
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
464
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
465
+ "model.layers.5.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
466
+ "model.layers.5.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
467
+ "model.layers.5.input_layernorm.weight": "model-00002-of-00004.safetensors",
468
+ "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
469
+ "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
470
+ "model.layers.6.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
471
+ "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
472
+ "model.layers.6.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
473
+ "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
474
+ "model.layers.6.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
475
+ "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
476
+ "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
477
+ "model.layers.6.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
478
+ "model.layers.6.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
479
+ "model.layers.6.input_layernorm.weight": "model-00002-of-00004.safetensors",
480
+ "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
481
+ "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
482
+ "model.layers.7.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
483
+ "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
484
+ "model.layers.7.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
485
+ "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
486
+ "model.layers.7.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
487
+ "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
488
+ "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
489
+ "model.layers.7.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
490
+ "model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
491
+ "model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
492
+ "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
493
+ "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
494
+ "model.layers.8.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
495
+ "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
496
+ "model.layers.8.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
497
+ "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
498
+ "model.layers.8.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
499
+ "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
500
+ "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
501
+ "model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
502
+ "model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
503
+ "model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
504
+ "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
505
+ "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
506
+ "model.layers.9.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
507
+ "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
508
+ "model.layers.9.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
509
+ "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
510
+ "model.layers.9.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
511
+ "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
512
+ "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
513
+ "model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
514
+ "model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
515
+ "model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
516
+ "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
517
+ "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
518
+ "model.layers.10.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
519
+ "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
520
+ "model.layers.10.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
521
+ "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
522
+ "model.layers.10.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
523
+ "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
524
+ "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
525
+ "model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
526
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
527
+ "model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
528
+ "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
529
+ "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
530
+ "model.layers.11.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
531
+ "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
532
+ "model.layers.11.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
533
+ "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
534
+ "model.layers.11.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
535
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
536
+ "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
537
+ "model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
538
+ "model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
539
+ "model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
540
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
541
+ "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
542
+ "model.layers.12.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
543
+ "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
544
+ "model.layers.12.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
545
+ "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
546
+ "model.layers.12.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
547
+ "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
548
+ "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
549
+ "model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
550
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
551
+ "model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
552
+ "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
553
+ "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
554
+ "model.layers.13.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
555
+ "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
556
+ "model.layers.13.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
557
+ "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
558
+ "model.layers.13.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
559
+ "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
560
+ "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
561
+ "model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
562
+ "model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
563
+ "model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
564
+ "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
565
+ "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
566
+ "model.layers.14.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
567
+ "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
568
+ "model.layers.14.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
569
+ "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
570
+ "model.layers.14.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
571
+ "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
572
+ "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
573
+ "model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
574
+ "model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
575
+ "model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
576
+ "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
577
+ "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
578
+ "model.layers.15.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
579
+ "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
580
+ "model.layers.15.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
581
+ "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
582
+ "model.layers.15.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
583
+ "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
584
+ "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
585
+ "model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
586
+ "model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
587
+ "model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
588
+ "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
589
+ "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
590
+ "model.layers.16.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
591
+ "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
592
+ "model.layers.16.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
593
+ "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
594
+ "model.layers.16.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
595
+ "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
596
+ "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
597
+ "model.layers.16.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
598
+ "model.layers.16.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
599
+ "model.layers.16.input_layernorm.weight": "model-00003-of-00004.safetensors",
600
+ "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
601
+ "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
602
+ "model.layers.17.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
603
+ "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
604
+ "model.layers.17.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
605
+ "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
606
+ "model.layers.17.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
607
+ "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
608
+ "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
609
+ "model.layers.17.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
610
+ "model.layers.17.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
611
+ "model.layers.17.input_layernorm.weight": "model-00003-of-00004.safetensors",
612
+ "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
613
+ "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
614
+ "model.layers.18.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
615
+ "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
616
+ "model.layers.18.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
617
+ "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
618
+ "model.layers.18.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
619
+ "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
620
+ "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
621
+ "model.layers.18.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
622
+ "model.layers.18.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
623
+ "model.layers.18.input_layernorm.weight": "model-00003-of-00004.safetensors",
624
+ "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
625
+ "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
626
+ "model.layers.19.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
627
+ "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
628
+ "model.layers.19.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
629
+ "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
630
+ "model.layers.19.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
631
+ "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
632
+ "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
633
+ "model.layers.19.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
634
+ "model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
635
+ "model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
636
+ "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
637
+ "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
638
+ "model.layers.20.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
639
+ "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
640
+ "model.layers.20.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
641
+ "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
642
+ "model.layers.20.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
643
+ "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
644
+ "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
645
+ "model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
646
+ "model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
647
+ "model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
648
+ "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
649
+ "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
650
+ "model.layers.21.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
651
+ "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
652
+ "model.layers.21.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
653
+ "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
654
+ "model.layers.21.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
655
+ "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
656
+ "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
657
+ "model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
658
+ "model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
659
+ "model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
660
+ "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
661
+ "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
662
+ "model.layers.22.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
663
+ "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
664
+ "model.layers.22.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
665
+ "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
666
+ "model.layers.22.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
667
+ "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
668
+ "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
669
+ "model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
670
+ "model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
671
+ "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
672
+ "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
673
+ "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
674
+ "model.layers.23.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
675
+ "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
676
+ "model.layers.23.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
677
+ "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
678
+ "model.layers.23.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
679
+ "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
680
+ "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
681
+ "model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
682
+ "model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
683
+ "model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
684
+ "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
685
+ "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
686
+ "model.layers.24.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
687
+ "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
688
+ "model.layers.24.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
689
+ "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
690
+ "model.layers.24.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
691
+ "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
692
+ "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
693
+ "model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
694
+ "model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
695
+ "model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
696
+ "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
697
+ "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
698
+ "model.layers.25.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
699
+ "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
700
+ "model.layers.25.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
701
+ "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
702
+ "model.layers.25.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
703
+ "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
704
+ "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
705
+ "model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
706
+ "model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
707
+ "model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
708
+ "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
709
+ "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
710
+ "model.layers.26.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
711
+ "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
712
+ "model.layers.26.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
713
+ "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
714
+ "model.layers.26.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
715
+ "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
716
+ "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
717
+ "model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
718
+ "model.layers.26.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
719
+ "model.layers.26.input_layernorm.weight": "model-00004-of-00004.safetensors",
720
+ "model.layers.26.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
721
+ "model.layers.27.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
722
+ "model.layers.27.self_attn.q_proj.bias": "model-00004-of-00004.safetensors",
723
+ "model.layers.27.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
724
+ "model.layers.27.self_attn.k_proj.bias": "model-00004-of-00004.safetensors",
725
+ "model.layers.27.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
726
+ "model.layers.27.self_attn.v_proj.bias": "model-00004-of-00004.safetensors",
727
+ "model.layers.27.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
728
+ "model.layers.27.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
729
+ "model.layers.27.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
730
+ "model.layers.27.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
731
+ "model.layers.27.input_layernorm.weight": "model-00004-of-00004.safetensors",
732
+ "model.layers.27.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
733
+ "model.norm.weight": "model-00004-of-00004.safetensors",
734
+ "lm_head.weight": "model-00004-of-00004.safetensors"
735
+ }
736
+ }
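
The `weight_map` above is the shard index that `transformers`/`safetensors` consult to locate each tensor across the four checkpoint files. A minimal sketch of inspecting it offline (the local path is a placeholder; it assumes the repository files have already been downloaded, e.g. with `huggingface_hub.snapshot_download`):

```python
# Minimal sketch: summarize how tensors are distributed across the four shards.
# The index path below is a hypothetical local download location.
import json
from collections import Counter

index_path = "Cosmos-Reason1-7B/model.safetensors.index.json"  # placeholder path

with open(index_path) as f:
    index = json.load(f)

shard_counts = Counter(index["weight_map"].values())
for shard, n_tensors in sorted(shard_counts.items()):
    print(f"{shard}: {n_tensors} tensors")
```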
preprocessor_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "min_pixels": 3136,
+ "max_pixels": 12845056,
+ "patch_size": 14,
+ "temporal_patch_size": 2,
+ "merge_size": 2,
+ "image_mean": [
+ 0.48145466,
+ 0.4578275,
+ 0.40821073
+ ],
+ "image_std": [
+ 0.26862954,
+ 0.26130258,
+ 0.27577711
+ ],
+ "image_processor_type": "Qwen2VLImageProcessor",
+ "processor_class": "Qwen2_5_VLProcessor"
+ }
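
The pixel bounds, patch size, and merge size above determine how the Qwen2.5-VL image processor resizes and patchifies visual inputs. A minimal sketch of loading this configuration through `AutoProcessor`; the pixel-bound overrides shown are illustrative values, not recommended settings:

```python
# Minimal sketch: load the processor defined by preprocessor_config.json.
# The min_pixels/max_pixels overrides are illustrative, not tuned recommendations.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "nvidia/Cosmos-Reason1-7B",
    min_pixels=256 * 28 * 28,    # optional override of the configured minimum
    max_pixels=1280 * 28 * 28,   # optional override of the configured maximum
)
print(processor.image_processor.patch_size)  # 14
print(processor.image_processor.merge_size)  # 2
```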
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,207 @@
+ {
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "151643": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151646": {
+ "content": "<|object_ref_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151647": {
+ "content": "<|object_ref_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151648": {
+ "content": "<|box_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151649": {
+ "content": "<|box_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151650": {
+ "content": "<|quad_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151651": {
+ "content": "<|quad_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151652": {
+ "content": "<|vision_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151653": {
+ "content": "<|vision_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151654": {
+ "content": "<|vision_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151655": {
+ "content": "<|image_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151656": {
+ "content": "<|video_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151657": {
+ "content": "<tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151658": {
+ "content": "</tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151659": {
+ "content": "<|fim_prefix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151660": {
+ "content": "<|fim_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151661": {
+ "content": "<|fim_suffix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151662": {
+ "content": "<|fim_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151663": {
+ "content": "<|repo_name|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151664": {
+ "content": "<|file_sep|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ }
+ },
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "bos_token": null,
+ "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|im_end|>",
+ "errors": "replace",
+ "model_max_length": 131072,
+ "pad_token": "<|endoftext|>",
+ "split_special_tokens": false,
+ "tokenizer_class": "Qwen2Tokenizer",
+ "unk_token": null,
+ "add_bos_token": false
+ }
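
The `chat_template` above wraps each turn in `<|im_start|>`/`<|im_end|>` and expands image and video entries into `<|vision_start|>`/`<|vision_end|>` placeholders. A minimal sketch of rendering a prompt with it (text-only for brevity; messages follow the standard `transformers` chat schema):

```python
# Minimal sketch: render a prompt string using the bundled chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/Cosmos-Reason1-7B")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Describe what a safe lane change looks like."},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends "<|im_start|>assistant\n"
)
print(prompt)
```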