vincentzed-hf commited on
Commit
ac0e437
·
verified ·
1 Parent(s): 4123b75

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ model.safetensors.index.json filter=lfs diff=lfs merge=lfs -text
37
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,145 @@
1
+ ---
2
+ pipeline_tag: image-text-to-text
3
+ base_model:
4
+ - Qwen/Qwen3.5-VL-MoE
5
+ license: apache-2.0
6
+ library_name: Model Optimizer
7
+ tags:
8
+ - nvidia
9
+ - ModelOpt
10
+ - Qwen3.5
11
+ - quantized
12
+ - NVFP4
13
+ - nvfp4
14
+ - multimodal
15
+ - vision-language
16
+ ---
17
+
18
+ # NVIDIA Qwen3.5-VL-MoE-NVFP4 Model Card
19
+
20
+ # Model Overview
21
+
22
+ ## Description:
23
+ The NVIDIA Qwen3.5-VL-MoE-NVFP4 model is a quantized version of Qwen's Qwen3.5-VL-MoE model, an autoregressive multimodal language model that uses an optimized Transformer architecture with Mixture of Experts (MoE) and vision-language capabilities. For more information, refer to the [Qwen3.5-VL-MoE model card](https://huggingface.co/Qwen/Qwen3.5-VL-MoE). The NVIDIA Qwen3.5-VL-MoE-NVFP4 model was quantized using the [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer).
24
+
25
+ This model is ready for commercial/non-commercial use. <br>
26
+
27
+ ## Third-Party Community Consideration
28
+ This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the non-NVIDIA [Qwen3.5-VL-MoE model card](https://huggingface.co/Qwen/Qwen3.5-VL-MoE).
29
+
30
+ ### License/Terms of Use:
31
+ [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
32
+
33
+ ### Deployment Geography:
34
+ Global <br>
35
+
36
+ ### Use Case:
37
+ Developers looking to use off-the-shelf, pre-quantized models for deployment in AI agent systems, chatbots, RAG systems, and other AI-powered applications. <br>
38
+
39
+ ### Release Date:
40
+ Hugging Face via https://huggingface.co/nvidia/Qwen3.5-VL-MoE-NVFP4 <br>
41
+
42
+ ## Model Architecture:
43
+ **Architecture Type:** Transformers (Hybrid) <br>
44
+ **Network Architecture:** Qwen3_5MoeForConditionalGeneration <br>
45
+ **Model Details:**
46
+ * **Total Parameters:** 199.7B
47
+ * **Expert Configuration:** 512 total experts, 10 activated per token + 1 shared expert.
48
+ * **Attention Mechanisms:** Hybrid layout combining **Gated DeltaNet** (linear attention for long-context efficiency) and **Gated Attention** (sliding window/standard attention).
49
+ * **Context Window:** 262,144 tokens (native).
50
+ * **Layers:** 60
51
+
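+ The expert count, routing, and context length listed above can be read directly from this checkpoint's `config.json`. A minimal sketch, assuming `jq` and the `huggingface_hub` CLI are available locally:
+ 
+ ```bash
+ # Fetch only config.json from this repository.
+ huggingface-cli download vincentzed-hf/Qwen3.5-VL-MoE-NVFP4 config.json --local-dir ./qwen3-5-nvfp4-cfg
+ 
+ # Print the MoE and attention fields referenced above.
+ jq '.text_config | {num_experts, num_experts_per_tok, num_hidden_layers, max_position_embeddings, full_attention_interval}' \
+   ./qwen3-5-nvfp4-cfg/config.json
+ ```
+ 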
52
+ ## Input:
53
+ **Input Type(s):** Text, Image, Video <br>
54
+ **Input Format(s):** String, Image, Video <br>
55
+ **Input Parameters:** 1D (One-Dimensional): Sequences, 2D (Two-Dimensional): Images, 3D (Three-Dimensional): Video <br>
56
+
57
+ ## Output:
58
+ **Output Type(s):** Text <br>
59
+ **Output Format:** String <br>
60
+ **Output Parameters:** 1D (One-Dimensional): Sequences <br>
61
+ **Other Properties Related to Output:** N/A <br>
62
+
63
+ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
64
+
65
+ ## Software Integration:
66
+ **Runtime Engine(s):** <br>
67
+ * SGLang <br>
68
+
69
+ **Supported Hardware Microarchitecture Compatibility:** <br>
70
+ * NVIDIA Blackwell <br>
71
+
72
+ **Preferred Operating System(s):** <br>
73
+ * Linux <br>
74
+
75
+ The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
76
+
77
+ ## Model Version(s):
78
+ The model is quantized with nvidia-modelopt **0.42.0rc1.dev21+g421985313** <br>
79
+
80
+ ## Training, Testing, and Evaluation Datasets:
81
+
82
+ ## Calibration Dataset:
83
+ * Link: [Nemotron-Post-Training-Dataset-v2](https://huggingface.co/datasets/nvidia/Nemotron-Post-Training-Dataset-v2) <br>
84
+ * Data collection method: Automated. <br>
85
+ * Labeling method: Automated. <br>
86
+
87
+ ## Training Datasets:
88
+ * Data Collection Method by Dataset: Undisclosed <br>
89
+ * Labeling Method by Dataset: Undisclosed<br>
90
+ * Properties: Undisclosed
91
+
92
+ ## Testing Dataset:
93
+ * Data Collection Method by Dataset: Undisclosed <br>
94
+ * Labeling Method by Dataset: Undisclosed <br>
95
+ * Properties: Undisclosed <br>
96
+
97
+ ## Evaluation Dataset:
98
+ * Data collection method: Hybrid: Automated, Human <br>
99
+ * Labeling method: Hybrid: Human, Automated <br>
100
+
101
+
102
+ ## Inference:
103
+ **Acceleration Engine:** SGLang <br>
104
+ **Test Hardware:** B300 <br>
105
+
106
+ ## Post Training Quantization
107
+ This model was obtained by quantizing the weights and activations of Qwen3.5-VL-MoE to the NVFP4 data type, ready for inference with SGLang. Only the weights and activations of the linear operators within the transformer blocks are quantized to NVFP4, and the KV cache is additionally quantized to FP8. Vision encoder weights are not quantized. This optimization reduces the number of bits per parameter from 16 to 4, cutting disk size and GPU memory requirements by approximately 4x.
108
+
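+ The exact quantization recipe ships with the checkpoint in `hf_quant_config.json`. A quick way to confirm the NVFP4 weight/activation format, the FP8 KV cache, and the modules excluded from quantization, assuming `jq` is installed:
+ 
+ ```bash
+ # Fetch the quantization config from this repository.
+ huggingface-cli download vincentzed-hf/Qwen3.5-VL-MoE-NVFP4 hf_quant_config.json --local-dir ./qwen3-5-nvfp4-cfg
+ 
+ # Show the algorithms, group size, and how many modules stay unquantized.
+ jq '{quant_algo: .quantization.quant_algo, kv_cache: .quantization.kv_cache_quant_algo, group_size: .quantization.group_size, excluded_modules: (.quantization.exclude_modules | length)}' \
+   ./qwen3-5-nvfp4-cfg/hf_quant_config.json
+ ```
+ 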
109
+ ## Usage
110
+
111
+ ### Deploy with SGLang
112
+
113
+ To serve the quantized NVFP4 checkpoint with [SGLang](https://github.com/sgl-project/sglang):
114
+
115
+ ```bash
116
+ sglang serve --model-path vincentzed-hf/Qwen3.5-VL-MoE-NVFP4 --quantization modelopt_fp4
117
+ ```
118
+ Please install from source:
119
+ `git clone git@github.com:sgl-project/sglang.git`
120
+ Once the repo is cloned, run `uv pip install -e "python"` from the repository root and then run the serve command above.
121
+ When a release is cut with the bugfix for this model's launch, we will update this model card.
122
+
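+ Once the server is up, it exposes an OpenAI-compatible API (assuming SGLang's default port 30000). A minimal multimodal request sketch; the image URL below is a placeholder:
+ 
+ ```bash
+ curl http://localhost:30000/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{
+     "model": "vincentzed-hf/Qwen3.5-VL-MoE-NVFP4",
+     "messages": [
+       {"role": "user", "content": [
+         {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}},
+         {"type": "text", "text": "Describe this image in one sentence."}
+       ]}
+     ],
+     "max_tokens": 128
+   }'
+ ```
+ 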
123
+ ### Reproduce with ModelOpt
124
+
125
+ You may want to produce this checkpoint yourself. To reproduce the NVFP4 quantized checkpoint using [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer):
126
+
127
+ ```bash
128
+ python3 examples/llm_ptq/hf_ptq.py \
129
+ --pyt_ckpt_path Qwen/Qwen3.5-VL-MoE \
130
+ --qformat nvfp4 \
131
+ --export_path ./qwen3-5-nvfp4
132
+ ```
133
+
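+ The command above assumes it is run from a checkout of the TensorRT Model Optimizer repository with the `nvidia-modelopt` package installed; a minimal setup sketch (extras and versions may differ by release):
+ 
+ ```bash
+ # Install the quantization toolkit and grab the PTQ example script used above.
+ pip install -U "nvidia-modelopt[all]"
+ git clone https://github.com/NVIDIA/TensorRT-Model-Optimizer.git
+ cd TensorRT-Model-Optimizer
+ ```
+ 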
134
+ > **Note:** NVFP4 weights with an FP8 KV cache provide a significant memory footprint reduction (~3.5x vs. BF16) with negligible accuracy degradation.
135
+
136
+ > Baseline: [Qwen3.5-VL-MoE](https://huggingface.co/Qwen/Qwen3.5-VL-MoE).
137
+
138
+ ## Model Limitations:
139
+ The base model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The model may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, and may produce socially unacceptable or undesirable text even if the prompt itself does not include anything explicitly offensive.
140
+
141
+ ## Ethical Considerations
142
+
143
+ NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
144
+
145
+ Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
chat_template.jinja ADDED
@@ -0,0 +1,154 @@
1
+ {%- set image_count = namespace(value=0) %}
2
+ {%- set video_count = namespace(value=0) %}
3
+ {%- macro render_content(content, do_vision_count, is_system_content=false) %}
4
+ {%- if content is string %}
5
+ {{- content }}
6
+ {%- elif content is iterable and content is not mapping %}
7
+ {%- for item in content %}
8
+ {%- if 'image' in item or 'image_url' in item or item.type == 'image' %}
9
+ {%- if is_system_content %}
10
+ {{- raise_exception('System message cannot contain images.') }}
11
+ {%- endif %}
12
+ {%- if do_vision_count %}
13
+ {%- set image_count.value = image_count.value + 1 %}
14
+ {%- endif %}
15
+ {%- if add_vision_id %}
16
+ {{- 'Picture ' ~ image_count.value ~ ': ' }}
17
+ {%- endif %}
18
+ {{- '<|vision_start|><|image_pad|><|vision_end|>' }}
19
+ {%- elif 'video' in item or item.type == 'video' %}
20
+ {%- if is_system_content %}
21
+ {{- raise_exception('System message cannot contain videos.') }}
22
+ {%- endif %}
23
+ {%- if do_vision_count %}
24
+ {%- set video_count.value = video_count.value + 1 %}
25
+ {%- endif %}
26
+ {%- if add_vision_id %}
27
+ {{- 'Video ' ~ video_count.value ~ ': ' }}
28
+ {%- endif %}
29
+ {{- '<|vision_start|><|video_pad|><|vision_end|>' }}
30
+ {%- elif 'text' in item %}
31
+ {{- item.text }}
32
+ {%- else %}
33
+ {{- raise_exception('Unexpected item type in content.') }}
34
+ {%- endif %}
35
+ {%- endfor %}
36
+ {%- elif content is none or content is undefined %}
37
+ {{- '' }}
38
+ {%- else %}
39
+ {{- raise_exception('Unexpected content type.') }}
40
+ {%- endif %}
41
+ {%- endmacro %}
42
+ {%- if not messages %}
43
+ {{- raise_exception('No messages provided.') }}
44
+ {%- endif %}
45
+ {%- if tools and tools is iterable and tools is not mapping %}
46
+ {{- '<|im_start|>system\n' }}
47
+ {{- "# Tools\n\nYou have access to the following functions:\n\n<tools>" }}
48
+ {%- for tool in tools %}
49
+ {{- "\n" }}
50
+ {{- tool | tojson }}
51
+ {%- endfor %}
52
+ {{- "\n</tools>" }}
53
+ {{- '\n\nIf you choose to call a function ONLY reply in the following format with NO suffix:\n\n<tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>\nvalue_1\n</parameter>\n<parameter=example_parameter_2>\nThis is the value for the second parameter\nthat can span\nmultiple lines\n</parameter>\n</function>\n</tool_call>\n\n<IMPORTANT>\nReminder:\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\n- Required parameters MUST be specified\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\n</IMPORTANT>' }}
54
+ {%- if messages[0].role == 'system' %}
55
+ {%- set content = render_content(messages[0].content, false, true)|trim %}
56
+ {%- if content %}
57
+ {{- '\n\n' + content }}
58
+ {%- endif %}
59
+ {%- endif %}
60
+ {{- '<|im_end|>\n' }}
61
+ {%- else %}
62
+ {%- if messages[0].role == 'system' %}
63
+ {%- set content = render_content(messages[0].content, false, true)|trim %}
64
+ {{- '<|im_start|>system\n' + content + '<|im_end|>\n' }}
65
+ {%- endif %}
66
+ {%- endif %}
67
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
68
+ {%- for message in messages[::-1] %}
69
+ {%- set index = (messages|length - 1) - loop.index0 %}
70
+ {%- if ns.multi_step_tool and message.role == "user" %}
71
+ {%- set content = render_content(message.content, false)|trim %}
72
+ {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}
73
+ {%- set ns.multi_step_tool = false %}
74
+ {%- set ns.last_query_index = index %}
75
+ {%- endif %}
76
+ {%- endif %}
77
+ {%- endfor %}
78
+ {%- if ns.multi_step_tool %}
79
+ {{- raise_exception('No user query found in messages.') }}
80
+ {%- endif %}
81
+ {%- for message in messages %}
82
+ {%- set content = render_content(message.content, true)|trim %}
83
+ {%- if message.role == "system" %}
84
+ {%- if not loop.first %}
85
+ {{- raise_exception('System message must be at the beginning.') }}
86
+ {%- endif %}
87
+ {%- elif message.role == "user" %}
88
+ {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
89
+ {%- elif message.role == "assistant" %}
90
+ {%- set reasoning_content = '' %}
91
+ {%- if message.reasoning_content is string %}
92
+ {%- set reasoning_content = message.reasoning_content %}
93
+ {%- else %}
94
+ {%- if '</think>' in content %}
95
+ {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
96
+ {%- set content = content.split('</think>')[-1].lstrip('\n') %}
97
+ {%- endif %}
98
+ {%- endif %}
99
+ {%- set reasoning_content = reasoning_content|trim %}
100
+ {%- if loop.index0 > ns.last_query_index %}
101
+ {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content + '\n</think>\n\n' + content }}
102
+ {%- else %}
103
+ {{- '<|im_start|>' + message.role + '\n' + content }}
104
+ {%- endif %}
105
+ {%- if message.tool_calls and message.tool_calls is iterable and message.tool_calls is not mapping %}
106
+ {%- for tool_call in message.tool_calls %}
107
+ {%- if tool_call.function is defined %}
108
+ {%- set tool_call = tool_call.function %}
109
+ {%- endif %}
110
+ {%- if loop.first %}
111
+ {%- if content|trim %}
112
+ {{- '\n\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
113
+ {%- else %}
114
+ {{- '<tool_call>\n<function=' + tool_call.name + '>\n' }}
115
+ {%- endif %}
116
+ {%- else %}
117
+ {{- '\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
118
+ {%- endif %}
119
+ {%- if tool_call.arguments is defined %}
120
+ {%- for args_name, args_value in tool_call.arguments|items %}
121
+ {{- '<parameter=' + args_name + '>\n' }}
122
+ {%- set args_value = args_value | tojson | safe if args_value is mapping or (args_value is sequence and args_value is not string) else args_value | string %}
123
+ {{- args_value }}
124
+ {{- '\n</parameter>\n' }}
125
+ {%- endfor %}
126
+ {%- endif %}
127
+ {{- '</function>\n</tool_call>' }}
128
+ {%- endfor %}
129
+ {%- endif %}
130
+ {{- '<|im_end|>\n' }}
131
+ {%- elif message.role == "tool" %}
132
+ {%- if loop.previtem and loop.previtem.role != "tool" %}
133
+ {{- '<|im_start|>user' }}
134
+ {%- endif %}
135
+ {{- '\n<tool_response>\n' }}
136
+ {{- content }}
137
+ {{- '\n</tool_response>' }}
138
+ {%- if not loop.last and loop.nextitem.role != "tool" %}
139
+ {{- '<|im_end|>\n' }}
140
+ {%- elif loop.last %}
141
+ {{- '<|im_end|>\n' }}
142
+ {%- endif %}
143
+ {%- else %}
144
+ {{- raise_exception('Unexpected message role.') }}
145
+ {%- endif %}
146
+ {%- endfor %}
147
+ {%- if add_generation_prompt %}
148
+ {{- '<|im_start|>assistant\n' }}
149
+ {%- if enable_thinking is defined and enable_thinking is false %}
150
+ {{- '<think>\n\n</think>\n\n' }}
151
+ {%- else %}
152
+ {{- '<think>\n' }}
153
+ {%- endif %}
154
+ {%- endif %}
config.json ADDED
@@ -0,0 +1,284 @@
1
+ {
2
+ "architectures": [
3
+ "Qwen3_5MoeForConditionalGeneration"
4
+ ],
5
+ "dtype": "bfloat16",
6
+ "image_token_id": 248056,
7
+ "model_type": "qwen3_5_moe",
8
+ "text_config": {
9
+ "attention_bias": false,
10
+ "attention_dropout": 0.0,
11
+ "attn_output_gate": true,
12
+ "bos_token_id": null,
13
+ "dtype": "bfloat16",
14
+ "eos_token_id": 248044,
15
+ "full_attention_interval": 4,
16
+ "head_dim": 256,
17
+ "hidden_act": "silu",
18
+ "hidden_size": 4096,
19
+ "initializer_range": 0.02,
20
+ "layer_types": [
21
+ "linear_attention",
22
+ "linear_attention",
23
+ "linear_attention",
24
+ "full_attention",
25
+ "linear_attention",
26
+ "linear_attention",
27
+ "linear_attention",
28
+ "full_attention",
29
+ "linear_attention",
30
+ "linear_attention",
31
+ "linear_attention",
32
+ "full_attention",
33
+ "linear_attention",
34
+ "linear_attention",
35
+ "linear_attention",
36
+ "full_attention",
37
+ "linear_attention",
38
+ "linear_attention",
39
+ "linear_attention",
40
+ "full_attention",
41
+ "linear_attention",
42
+ "linear_attention",
43
+ "linear_attention",
44
+ "full_attention",
45
+ "linear_attention",
46
+ "linear_attention",
47
+ "linear_attention",
48
+ "full_attention",
49
+ "linear_attention",
50
+ "linear_attention",
51
+ "linear_attention",
52
+ "full_attention",
53
+ "linear_attention",
54
+ "linear_attention",
55
+ "linear_attention",
56
+ "full_attention",
57
+ "linear_attention",
58
+ "linear_attention",
59
+ "linear_attention",
60
+ "full_attention",
61
+ "linear_attention",
62
+ "linear_attention",
63
+ "linear_attention",
64
+ "full_attention",
65
+ "linear_attention",
66
+ "linear_attention",
67
+ "linear_attention",
68
+ "full_attention",
69
+ "linear_attention",
70
+ "linear_attention",
71
+ "linear_attention",
72
+ "full_attention",
73
+ "linear_attention",
74
+ "linear_attention",
75
+ "linear_attention",
76
+ "full_attention",
77
+ "linear_attention",
78
+ "linear_attention",
79
+ "linear_attention",
80
+ "full_attention"
81
+ ],
82
+ "linear_conv_kernel_dim": 4,
83
+ "linear_key_head_dim": 128,
84
+ "linear_num_key_heads": 16,
85
+ "linear_num_value_heads": 64,
86
+ "linear_value_head_dim": 128,
87
+ "mamba_ssm_dtype": "float32",
88
+ "max_position_embeddings": 262144,
89
+ "mlp_only_layers": [],
90
+ "model_type": "qwen3_5_moe_text",
91
+ "moe_intermediate_size": 1024,
92
+ "mtp_num_hidden_layers": 1,
93
+ "mtp_use_dedicated_embeddings": false,
94
+ "num_attention_heads": 32,
95
+ "num_experts": 512,
96
+ "num_experts_per_tok": 10,
97
+ "num_hidden_layers": 60,
98
+ "num_key_value_heads": 2,
99
+ "output_router_logits": false,
100
+ "pad_token_id": null,
101
+ "partial_rotary_factor": 0.25,
102
+ "rms_norm_eps": 1e-06,
103
+ "rope_parameters": {
104
+ "mrope_interleaved": true,
105
+ "mrope_section": [
106
+ 11,
107
+ 11,
108
+ 10
109
+ ],
110
+ "partial_rotary_factor": 0.25,
111
+ "rope_theta": 10000000,
112
+ "rope_type": "default"
113
+ },
114
+ "router_aux_loss_coef": 0.001,
115
+ "shared_expert_intermediate_size": 1024,
116
+ "tie_word_embeddings": false,
117
+ "use_cache": true,
118
+ "vocab_size": 248320
119
+ },
120
+ "tie_word_embeddings": false,
121
+ "transformers_version": "5.2.0",
122
+ "video_token_id": 248057,
123
+ "vision_config": {
124
+ "deepstack_visual_indexes": [],
125
+ "depth": 27,
126
+ "dtype": "bfloat16",
127
+ "hidden_act": "gelu_pytorch_tanh",
128
+ "hidden_size": 1152,
129
+ "in_channels": 3,
130
+ "initializer_range": 0.02,
131
+ "intermediate_size": 4304,
132
+ "model_type": "qwen3_5_moe",
133
+ "num_heads": 16,
134
+ "num_position_embeddings": 2304,
135
+ "out_hidden_size": 4096,
136
+ "patch_size": 16,
137
+ "spatial_merge_size": 2,
138
+ "temporal_patch_size": 2
139
+ },
140
+ "vision_end_token_id": 248054,
141
+ "vision_start_token_id": 248053,
142
+ "quantization_config": {
143
+ "config_groups": {
144
+ "group_0": {
145
+ "input_activations": {
146
+ "dynamic": false,
147
+ "num_bits": 4,
148
+ "type": "float",
149
+ "group_size": 16
150
+ },
151
+ "weights": {
152
+ "dynamic": false,
153
+ "num_bits": 4,
154
+ "type": "float",
155
+ "group_size": 16
156
+ },
157
+ "targets": [
158
+ "Linear"
159
+ ]
160
+ }
161
+ },
162
+ "ignore": [
163
+ "lm_head",
164
+ "model.language_model.layers.0.linear_attn.conv1d",
165
+ "model.language_model.layers.0.mlp.shared_expert_gate",
166
+ "model.language_model.layers.1.linear_attn.conv1d",
167
+ "model.language_model.layers.1.mlp.shared_expert_gate",
168
+ "model.language_model.layers.10.linear_attn.conv1d",
169
+ "model.language_model.layers.10.mlp.shared_expert_gate",
170
+ "model.language_model.layers.11.mlp.shared_expert_gate",
171
+ "model.language_model.layers.12.linear_attn.conv1d",
172
+ "model.language_model.layers.12.mlp.shared_expert_gate",
173
+ "model.language_model.layers.13.linear_attn.conv1d",
174
+ "model.language_model.layers.13.mlp.shared_expert_gate",
175
+ "model.language_model.layers.14.linear_attn.conv1d",
176
+ "model.language_model.layers.14.mlp.shared_expert_gate",
177
+ "model.language_model.layers.15.mlp.shared_expert_gate",
178
+ "model.language_model.layers.16.linear_attn.conv1d",
179
+ "model.language_model.layers.16.mlp.shared_expert_gate",
180
+ "model.language_model.layers.17.linear_attn.conv1d",
181
+ "model.language_model.layers.17.mlp.shared_expert_gate",
182
+ "model.language_model.layers.18.linear_attn.conv1d",
183
+ "model.language_model.layers.18.mlp.shared_expert_gate",
184
+ "model.language_model.layers.19.mlp.shared_expert_gate",
185
+ "model.language_model.layers.2.linear_attn.conv1d",
186
+ "model.language_model.layers.2.mlp.shared_expert_gate",
187
+ "model.language_model.layers.20.linear_attn.conv1d",
188
+ "model.language_model.layers.20.mlp.shared_expert_gate",
189
+ "model.language_model.layers.21.linear_attn.conv1d",
190
+ "model.language_model.layers.21.mlp.shared_expert_gate",
191
+ "model.language_model.layers.22.linear_attn.conv1d",
192
+ "model.language_model.layers.22.mlp.shared_expert_gate",
193
+ "model.language_model.layers.23.mlp.shared_expert_gate",
194
+ "model.language_model.layers.24.linear_attn.conv1d",
195
+ "model.language_model.layers.24.mlp.shared_expert_gate",
196
+ "model.language_model.layers.25.linear_attn.conv1d",
197
+ "model.language_model.layers.25.mlp.shared_expert_gate",
198
+ "model.language_model.layers.26.linear_attn.conv1d",
199
+ "model.language_model.layers.26.mlp.shared_expert_gate",
200
+ "model.language_model.layers.27.mlp.shared_expert_gate",
201
+ "model.language_model.layers.28.linear_attn.conv1d",
202
+ "model.language_model.layers.28.mlp.shared_expert_gate",
203
+ "model.language_model.layers.29.linear_attn.conv1d",
204
+ "model.language_model.layers.29.mlp.shared_expert_gate",
205
+ "model.language_model.layers.3.mlp.shared_expert_gate",
206
+ "model.language_model.layers.30.linear_attn.conv1d",
207
+ "model.language_model.layers.30.mlp.shared_expert_gate",
208
+ "model.language_model.layers.31.mlp.shared_expert_gate",
209
+ "model.language_model.layers.32.linear_attn.conv1d",
210
+ "model.language_model.layers.32.mlp.shared_expert_gate",
211
+ "model.language_model.layers.33.linear_attn.conv1d",
212
+ "model.language_model.layers.33.mlp.shared_expert_gate",
213
+ "model.language_model.layers.34.linear_attn.conv1d",
214
+ "model.language_model.layers.34.mlp.shared_expert_gate",
215
+ "model.language_model.layers.35.mlp.shared_expert_gate",
216
+ "model.language_model.layers.36.linear_attn.conv1d",
217
+ "model.language_model.layers.36.mlp.shared_expert_gate",
218
+ "model.language_model.layers.37.linear_attn.conv1d",
219
+ "model.language_model.layers.37.mlp.shared_expert_gate",
220
+ "model.language_model.layers.38.linear_attn.conv1d",
221
+ "model.language_model.layers.38.mlp.shared_expert_gate",
222
+ "model.language_model.layers.39.mlp.shared_expert_gate",
223
+ "model.language_model.layers.4.linear_attn.conv1d",
224
+ "model.language_model.layers.4.mlp.shared_expert_gate",
225
+ "model.language_model.layers.40.linear_attn.conv1d",
226
+ "model.language_model.layers.40.mlp.shared_expert_gate",
227
+ "model.language_model.layers.41.linear_attn.conv1d",
228
+ "model.language_model.layers.41.mlp.shared_expert_gate",
229
+ "model.language_model.layers.42.linear_attn.conv1d",
230
+ "model.language_model.layers.42.mlp.shared_expert_gate",
231
+ "model.language_model.layers.43.mlp.shared_expert_gate",
232
+ "model.language_model.layers.44.linear_attn.conv1d",
233
+ "model.language_model.layers.44.mlp.shared_expert_gate",
234
+ "model.language_model.layers.45.linear_attn.conv1d",
235
+ "model.language_model.layers.45.mlp.shared_expert_gate",
236
+ "model.language_model.layers.46.linear_attn.conv1d",
237
+ "model.language_model.layers.46.mlp.shared_expert_gate",
238
+ "model.language_model.layers.47.mlp.shared_expert_gate",
239
+ "model.language_model.layers.48.linear_attn.conv1d",
240
+ "model.language_model.layers.48.mlp.shared_expert_gate",
241
+ "model.language_model.layers.49.linear_attn.conv1d",
242
+ "model.language_model.layers.49.mlp.shared_expert_gate",
243
+ "model.language_model.layers.5.linear_attn.conv1d",
244
+ "model.language_model.layers.5.mlp.shared_expert_gate",
245
+ "model.language_model.layers.50.linear_attn.conv1d",
246
+ "model.language_model.layers.50.mlp.shared_expert_gate",
247
+ "model.language_model.layers.51.mlp.shared_expert_gate",
248
+ "model.language_model.layers.52.linear_attn.conv1d",
249
+ "model.language_model.layers.52.mlp.shared_expert_gate",
250
+ "model.language_model.layers.53.linear_attn.conv1d",
251
+ "model.language_model.layers.53.mlp.shared_expert_gate",
252
+ "model.language_model.layers.54.linear_attn.conv1d",
253
+ "model.language_model.layers.54.mlp.shared_expert_gate",
254
+ "model.language_model.layers.55.mlp.shared_expert_gate",
255
+ "model.language_model.layers.56.linear_attn.conv1d",
256
+ "model.language_model.layers.56.mlp.shared_expert_gate",
257
+ "model.language_model.layers.57.linear_attn.conv1d",
258
+ "model.language_model.layers.57.mlp.shared_expert_gate",
259
+ "model.language_model.layers.58.linear_attn.conv1d",
260
+ "model.language_model.layers.58.mlp.shared_expert_gate",
261
+ "model.language_model.layers.59.mlp.shared_expert_gate",
262
+ "model.language_model.layers.6.linear_attn.conv1d",
263
+ "model.language_model.layers.6.mlp.shared_expert_gate",
264
+ "model.language_model.layers.7.mlp.shared_expert_gate",
265
+ "model.language_model.layers.8.linear_attn.conv1d",
266
+ "model.language_model.layers.8.mlp.shared_expert_gate",
267
+ "model.language_model.layers.9.linear_attn.conv1d",
268
+ "model.language_model.layers.9.mlp.shared_expert_gate",
269
+ "model.visual*",
270
+ "mtp.layers.0*"
271
+ ],
272
+ "quant_algo": "NVFP4",
273
+ "kv_cache_scheme": {
274
+ "dynamic": false,
275
+ "num_bits": 8,
276
+ "type": "float"
277
+ },
278
+ "producer": {
279
+ "name": "modelopt",
280
+ "version": "0.42.0rc1.dev21+g421985313"
281
+ },
282
+ "quant_method": "modelopt"
283
+ }
284
+ }
generation_config.json ADDED
@@ -0,0 +1,13 @@
1
+ {
2
+ "bos_token_id": 248044,
3
+ "do_sample": true,
4
+ "eos_token_id": [
5
+ 248046,
6
+ 248044
7
+ ],
8
+ "pad_token_id": 248044,
9
+ "temperature": 0.6,
10
+ "top_k": 20,
11
+ "top_p": 0.95,
12
+ "transformers_version": "4.57.0.dev0"
13
+ }
hf_quant_config.json ADDED
@@ -0,0 +1,121 @@
1
+ {
2
+ "producer": {
3
+ "name": "modelopt",
4
+ "version": "0.42.0rc1.dev21+g421985313"
5
+ },
6
+ "quantization": {
7
+ "quant_algo": "NVFP4",
8
+ "kv_cache_quant_algo": "FP8",
9
+ "group_size": 16,
10
+ "exclude_modules": [
11
+ "lm_head",
12
+ "model.language_model.layers.0.linear_attn.conv1d",
13
+ "model.language_model.layers.0.mlp.shared_expert_gate",
14
+ "model.language_model.layers.1.linear_attn.conv1d",
15
+ "model.language_model.layers.1.mlp.shared_expert_gate",
16
+ "model.language_model.layers.10.linear_attn.conv1d",
17
+ "model.language_model.layers.10.mlp.shared_expert_gate",
18
+ "model.language_model.layers.11.mlp.shared_expert_gate",
19
+ "model.language_model.layers.12.linear_attn.conv1d",
20
+ "model.language_model.layers.12.mlp.shared_expert_gate",
21
+ "model.language_model.layers.13.linear_attn.conv1d",
22
+ "model.language_model.layers.13.mlp.shared_expert_gate",
23
+ "model.language_model.layers.14.linear_attn.conv1d",
24
+ "model.language_model.layers.14.mlp.shared_expert_gate",
25
+ "model.language_model.layers.15.mlp.shared_expert_gate",
26
+ "model.language_model.layers.16.linear_attn.conv1d",
27
+ "model.language_model.layers.16.mlp.shared_expert_gate",
28
+ "model.language_model.layers.17.linear_attn.conv1d",
29
+ "model.language_model.layers.17.mlp.shared_expert_gate",
30
+ "model.language_model.layers.18.linear_attn.conv1d",
31
+ "model.language_model.layers.18.mlp.shared_expert_gate",
32
+ "model.language_model.layers.19.mlp.shared_expert_gate",
33
+ "model.language_model.layers.2.linear_attn.conv1d",
34
+ "model.language_model.layers.2.mlp.shared_expert_gate",
35
+ "model.language_model.layers.20.linear_attn.conv1d",
36
+ "model.language_model.layers.20.mlp.shared_expert_gate",
37
+ "model.language_model.layers.21.linear_attn.conv1d",
38
+ "model.language_model.layers.21.mlp.shared_expert_gate",
39
+ "model.language_model.layers.22.linear_attn.conv1d",
40
+ "model.language_model.layers.22.mlp.shared_expert_gate",
41
+ "model.language_model.layers.23.mlp.shared_expert_gate",
42
+ "model.language_model.layers.24.linear_attn.conv1d",
43
+ "model.language_model.layers.24.mlp.shared_expert_gate",
44
+ "model.language_model.layers.25.linear_attn.conv1d",
45
+ "model.language_model.layers.25.mlp.shared_expert_gate",
46
+ "model.language_model.layers.26.linear_attn.conv1d",
47
+ "model.language_model.layers.26.mlp.shared_expert_gate",
48
+ "model.language_model.layers.27.mlp.shared_expert_gate",
49
+ "model.language_model.layers.28.linear_attn.conv1d",
50
+ "model.language_model.layers.28.mlp.shared_expert_gate",
51
+ "model.language_model.layers.29.linear_attn.conv1d",
52
+ "model.language_model.layers.29.mlp.shared_expert_gate",
53
+ "model.language_model.layers.3.mlp.shared_expert_gate",
54
+ "model.language_model.layers.30.linear_attn.conv1d",
55
+ "model.language_model.layers.30.mlp.shared_expert_gate",
56
+ "model.language_model.layers.31.mlp.shared_expert_gate",
57
+ "model.language_model.layers.32.linear_attn.conv1d",
58
+ "model.language_model.layers.32.mlp.shared_expert_gate",
59
+ "model.language_model.layers.33.linear_attn.conv1d",
60
+ "model.language_model.layers.33.mlp.shared_expert_gate",
61
+ "model.language_model.layers.34.linear_attn.conv1d",
62
+ "model.language_model.layers.34.mlp.shared_expert_gate",
63
+ "model.language_model.layers.35.mlp.shared_expert_gate",
64
+ "model.language_model.layers.36.linear_attn.conv1d",
65
+ "model.language_model.layers.36.mlp.shared_expert_gate",
66
+ "model.language_model.layers.37.linear_attn.conv1d",
67
+ "model.language_model.layers.37.mlp.shared_expert_gate",
68
+ "model.language_model.layers.38.linear_attn.conv1d",
69
+ "model.language_model.layers.38.mlp.shared_expert_gate",
70
+ "model.language_model.layers.39.mlp.shared_expert_gate",
71
+ "model.language_model.layers.4.linear_attn.conv1d",
72
+ "model.language_model.layers.4.mlp.shared_expert_gate",
73
+ "model.language_model.layers.40.linear_attn.conv1d",
74
+ "model.language_model.layers.40.mlp.shared_expert_gate",
75
+ "model.language_model.layers.41.linear_attn.conv1d",
76
+ "model.language_model.layers.41.mlp.shared_expert_gate",
77
+ "model.language_model.layers.42.linear_attn.conv1d",
78
+ "model.language_model.layers.42.mlp.shared_expert_gate",
79
+ "model.language_model.layers.43.mlp.shared_expert_gate",
80
+ "model.language_model.layers.44.linear_attn.conv1d",
81
+ "model.language_model.layers.44.mlp.shared_expert_gate",
82
+ "model.language_model.layers.45.linear_attn.conv1d",
83
+ "model.language_model.layers.45.mlp.shared_expert_gate",
84
+ "model.language_model.layers.46.linear_attn.conv1d",
85
+ "model.language_model.layers.46.mlp.shared_expert_gate",
86
+ "model.language_model.layers.47.mlp.shared_expert_gate",
87
+ "model.language_model.layers.48.linear_attn.conv1d",
88
+ "model.language_model.layers.48.mlp.shared_expert_gate",
89
+ "model.language_model.layers.49.linear_attn.conv1d",
90
+ "model.language_model.layers.49.mlp.shared_expert_gate",
91
+ "model.language_model.layers.5.linear_attn.conv1d",
92
+ "model.language_model.layers.5.mlp.shared_expert_gate",
93
+ "model.language_model.layers.50.linear_attn.conv1d",
94
+ "model.language_model.layers.50.mlp.shared_expert_gate",
95
+ "model.language_model.layers.51.mlp.shared_expert_gate",
96
+ "model.language_model.layers.52.linear_attn.conv1d",
97
+ "model.language_model.layers.52.mlp.shared_expert_gate",
98
+ "model.language_model.layers.53.linear_attn.conv1d",
99
+ "model.language_model.layers.53.mlp.shared_expert_gate",
100
+ "model.language_model.layers.54.linear_attn.conv1d",
101
+ "model.language_model.layers.54.mlp.shared_expert_gate",
102
+ "model.language_model.layers.55.mlp.shared_expert_gate",
103
+ "model.language_model.layers.56.linear_attn.conv1d",
104
+ "model.language_model.layers.56.mlp.shared_expert_gate",
105
+ "model.language_model.layers.57.linear_attn.conv1d",
106
+ "model.language_model.layers.57.mlp.shared_expert_gate",
107
+ "model.language_model.layers.58.linear_attn.conv1d",
108
+ "model.language_model.layers.58.mlp.shared_expert_gate",
109
+ "model.language_model.layers.59.mlp.shared_expert_gate",
110
+ "model.language_model.layers.6.linear_attn.conv1d",
111
+ "model.language_model.layers.6.mlp.shared_expert_gate",
112
+ "model.language_model.layers.7.mlp.shared_expert_gate",
113
+ "model.language_model.layers.8.linear_attn.conv1d",
114
+ "model.language_model.layers.8.mlp.shared_expert_gate",
115
+ "model.language_model.layers.9.linear_attn.conv1d",
116
+ "model.language_model.layers.9.mlp.shared_expert_gate",
117
+ "model.visual*",
118
+ "mtp.layers.0*"
119
+ ]
120
+ }
121
+ }
model-00001-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5d40753b071d4cb8d66ed47a632cb15101f06f4eb07f2c116232f2650b47a2a0
3
+ size 50010695296
model-00002-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:512a179d431fbad42c7f151fdabd34b2b3e401d4fcb258fcb8ec3b9065928ba1
3
+ size 50009452648
model-00003-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:853f555feb158410863d936d12411b6b30f7996273949777dda1348993cd5a21
3
+ size 50011278424
model-00004-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:701f202a74f25e2a1a4ae3a9a41871013b124b8d4006af21f7b9bc66607f32f9
3
+ size 50009715248
model-00005-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6675feeb52c712dffd61f2e837229410faac79133f04f22dc73aab191d95f90c
3
+ size 40167721312
model.safetensors.index.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e401472531e8f3901ad164da5f126f85e5fa0b419d88c12c8fe8e91665f3508b
3
+ size 41223198
preprocessor_config.json ADDED
@@ -0,0 +1,21 @@
1
+ {
2
+ "size": {
3
+ "longest_edge": 16777216,
4
+ "shortest_edge": 65536
5
+ },
6
+ "patch_size": 16,
7
+ "temporal_patch_size": 2,
8
+ "merge_size": 2,
9
+ "image_mean": [
10
+ 0.5,
11
+ 0.5,
12
+ 0.5
13
+ ],
14
+ "image_std": [
15
+ 0.5,
16
+ 0.5,
17
+ 0.5
18
+ ],
19
+ "processor_class": "Qwen3VLProcessor",
20
+ "image_processor_type": "Qwen2VLImageProcessorFast"
21
+ }
processor_config.json ADDED
@@ -0,0 +1,63 @@
1
+ {
2
+ "image_processor": {
3
+ "data_format": "channels_first",
4
+ "do_convert_rgb": true,
5
+ "do_normalize": true,
6
+ "do_rescale": true,
7
+ "do_resize": true,
8
+ "image_mean": [
9
+ 0.5,
10
+ 0.5,
11
+ 0.5
12
+ ],
13
+ "image_processor_type": "Qwen2VLImageProcessorFast",
14
+ "image_std": [
15
+ 0.5,
16
+ 0.5,
17
+ 0.5
18
+ ],
19
+ "merge_size": 2,
20
+ "patch_size": 16,
21
+ "resample": 3,
22
+ "rescale_factor": 0.00392156862745098,
23
+ "size": {
24
+ "longest_edge": 16777216,
25
+ "shortest_edge": 65536
26
+ },
27
+ "temporal_patch_size": 2
28
+ },
29
+ "processor_class": "Qwen3VLProcessor",
30
+ "video_processor": {
31
+ "data_format": "channels_first",
32
+ "default_to_square": true,
33
+ "do_convert_rgb": true,
34
+ "do_normalize": true,
35
+ "do_rescale": true,
36
+ "do_resize": true,
37
+ "do_sample_frames": true,
38
+ "fps": 2,
39
+ "image_mean": [
40
+ 0.5,
41
+ 0.5,
42
+ 0.5
43
+ ],
44
+ "image_std": [
45
+ 0.5,
46
+ 0.5,
47
+ 0.5
48
+ ],
49
+ "max_frames": 768,
50
+ "merge_size": 2,
51
+ "min_frames": 4,
52
+ "patch_size": 16,
53
+ "resample": 3,
54
+ "rescale_factor": 0.00392156862745098,
55
+ "return_metadata": false,
56
+ "size": {
57
+ "longest_edge": 25165824,
58
+ "shortest_edge": 4096
59
+ },
60
+ "temporal_patch_size": 2,
61
+ "video_processor_type": "Qwen3VLVideoProcessor"
62
+ }
63
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f9e4d4901a92b997e463c1f46055088b6cca5ca61a6522d1b9f64c4bb81cb42
3
+ size 12807982
tokenizer_config.json ADDED
@@ -0,0 +1,305 @@
1
+ {
2
+ "add_prefix_space": false,
3
+ "added_tokens_decoder": {
4
+ "248044": {
5
+ "content": "<|endoftext|>",
6
+ "lstrip": false,
7
+ "normalized": false,
8
+ "rstrip": false,
9
+ "single_word": false,
10
+ "special": true
11
+ },
12
+ "248045": {
13
+ "content": "<|im_start|>",
14
+ "lstrip": false,
15
+ "normalized": false,
16
+ "rstrip": false,
17
+ "single_word": false,
18
+ "special": true
19
+ },
20
+ "248046": {
21
+ "content": "<|im_end|>",
22
+ "lstrip": false,
23
+ "normalized": false,
24
+ "rstrip": false,
25
+ "single_word": false,
26
+ "special": true
27
+ },
28
+ "248047": {
29
+ "content": "<|object_ref_start|>",
30
+ "lstrip": false,
31
+ "normalized": false,
32
+ "rstrip": false,
33
+ "single_word": false,
34
+ "special": true
35
+ },
36
+ "248048": {
37
+ "content": "<|object_ref_end|>",
38
+ "lstrip": false,
39
+ "normalized": false,
40
+ "rstrip": false,
41
+ "single_word": false,
42
+ "special": true
43
+ },
44
+ "248049": {
45
+ "content": "<|box_start|>",
46
+ "lstrip": false,
47
+ "normalized": false,
48
+ "rstrip": false,
49
+ "single_word": false,
50
+ "special": true
51
+ },
52
+ "248050": {
53
+ "content": "<|box_end|>",
54
+ "lstrip": false,
55
+ "normalized": false,
56
+ "rstrip": false,
57
+ "single_word": false,
58
+ "special": true
59
+ },
60
+ "248051": {
61
+ "content": "<|quad_start|>",
62
+ "lstrip": false,
63
+ "normalized": false,
64
+ "rstrip": false,
65
+ "single_word": false,
66
+ "special": true
67
+ },
68
+ "248052": {
69
+ "content": "<|quad_end|>",
70
+ "lstrip": false,
71
+ "normalized": false,
72
+ "rstrip": false,
73
+ "single_word": false,
74
+ "special": true
75
+ },
76
+ "248053": {
77
+ "content": "<|vision_start|>",
78
+ "lstrip": false,
79
+ "normalized": false,
80
+ "rstrip": false,
81
+ "single_word": false,
82
+ "special": true
83
+ },
84
+ "248054": {
85
+ "content": "<|vision_end|>",
86
+ "lstrip": false,
87
+ "normalized": false,
88
+ "rstrip": false,
89
+ "single_word": false,
90
+ "special": true
91
+ },
92
+ "248055": {
93
+ "content": "<|vision_pad|>",
94
+ "lstrip": false,
95
+ "normalized": false,
96
+ "rstrip": false,
97
+ "single_word": false,
98
+ "special": true
99
+ },
100
+ "248056": {
101
+ "content": "<|image_pad|>",
102
+ "lstrip": false,
103
+ "normalized": false,
104
+ "rstrip": false,
105
+ "single_word": false,
106
+ "special": true
107
+ },
108
+ "248057": {
109
+ "content": "<|video_pad|>",
110
+ "lstrip": false,
111
+ "normalized": false,
112
+ "rstrip": false,
113
+ "single_word": false,
114
+ "special": true
115
+ },
116
+ "248058": {
117
+ "content": "<tool_call>",
118
+ "lstrip": false,
119
+ "normalized": false,
120
+ "rstrip": false,
121
+ "single_word": false,
122
+ "special": false
123
+ },
124
+ "248059": {
125
+ "content": "</tool_call>",
126
+ "lstrip": false,
127
+ "normalized": false,
128
+ "rstrip": false,
129
+ "single_word": false,
130
+ "special": false
131
+ },
132
+ "248060": {
133
+ "content": "<|fim_prefix|>",
134
+ "lstrip": false,
135
+ "normalized": false,
136
+ "rstrip": false,
137
+ "single_word": false,
138
+ "special": false
139
+ },
140
+ "248061": {
141
+ "content": "<|fim_middle|>",
142
+ "lstrip": false,
143
+ "normalized": false,
144
+ "rstrip": false,
145
+ "single_word": false,
146
+ "special": false
147
+ },
148
+ "248062": {
149
+ "content": "<|fim_suffix|>",
150
+ "lstrip": false,
151
+ "normalized": false,
152
+ "rstrip": false,
153
+ "single_word": false,
154
+ "special": false
155
+ },
156
+ "248063": {
157
+ "content": "<|fim_pad|>",
158
+ "lstrip": false,
159
+ "normalized": false,
160
+ "rstrip": false,
161
+ "single_word": false,
162
+ "special": false
163
+ },
164
+ "248064": {
165
+ "content": "<|repo_name|>",
166
+ "lstrip": false,
167
+ "normalized": false,
168
+ "rstrip": false,
169
+ "single_word": false,
170
+ "special": false
171
+ },
172
+ "248065": {
173
+ "content": "<|file_sep|>",
174
+ "lstrip": false,
175
+ "normalized": false,
176
+ "rstrip": false,
177
+ "single_word": false,
178
+ "special": false
179
+ },
180
+ "248066": {
181
+ "content": "<tool_response>",
182
+ "lstrip": false,
183
+ "normalized": false,
184
+ "rstrip": false,
185
+ "single_word": false,
186
+ "special": false
187
+ },
188
+ "248067": {
189
+ "content": "</tool_response>",
190
+ "lstrip": false,
191
+ "normalized": false,
192
+ "rstrip": false,
193
+ "single_word": false,
194
+ "special": false
195
+ },
196
+ "248068": {
197
+ "content": "<think>",
198
+ "lstrip": false,
199
+ "normalized": false,
200
+ "rstrip": false,
201
+ "single_word": false,
202
+ "special": false
203
+ },
204
+ "248069": {
205
+ "content": "</think>",
206
+ "lstrip": false,
207
+ "normalized": false,
208
+ "rstrip": false,
209
+ "single_word": false,
210
+ "special": false
211
+ },
212
+ "248070": {
213
+ "content": "<|audio_start|>",
214
+ "lstrip": false,
215
+ "normalized": false,
216
+ "rstrip": false,
217
+ "single_word": false,
218
+ "special": true
219
+ },
220
+ "248071": {
221
+ "content": "<|audio_end|>",
222
+ "lstrip": false,
223
+ "normalized": false,
224
+ "rstrip": false,
225
+ "single_word": false,
226
+ "special": true
227
+ },
228
+ "248072": {
229
+ "content": "<tts_pad>",
230
+ "lstrip": false,
231
+ "normalized": false,
232
+ "rstrip": false,
233
+ "single_word": false,
234
+ "special": true
235
+ },
236
+ "248073": {
237
+ "content": "<tts_text_bos>",
238
+ "lstrip": false,
239
+ "normalized": false,
240
+ "rstrip": false,
241
+ "single_word": false,
242
+ "special": true
243
+ },
244
+ "248074": {
245
+ "content": "<tts_text_eod>",
246
+ "lstrip": false,
247
+ "normalized": false,
248
+ "rstrip": false,
249
+ "single_word": false,
250
+ "special": true
251
+ },
252
+ "248075": {
253
+ "content": "<tts_text_bos_single>",
254
+ "lstrip": false,
255
+ "normalized": false,
256
+ "rstrip": false,
257
+ "single_word": false,
258
+ "special": true
259
+ },
260
+ "248076": {
261
+ "content": "<|audio_pad|>",
262
+ "lstrip": false,
263
+ "normalized": false,
264
+ "rstrip": false,
265
+ "single_word": false,
266
+ "special": true
267
+ }
268
+ },
269
+ "additional_special_tokens": [
270
+ "<|im_start|>",
271
+ "<|im_end|>",
272
+ "<|object_ref_start|>",
273
+ "<|object_ref_end|>",
274
+ "<|box_start|>",
275
+ "<|box_end|>",
276
+ "<|quad_start|>",
277
+ "<|quad_end|>",
278
+ "<|vision_start|>",
279
+ "<|vision_end|>",
280
+ "<|vision_pad|>",
281
+ "<|image_pad|>",
282
+ "<|video_pad|>"
283
+ ],
284
+ "bos_token": null,
285
+ "chat_template": "{%- set image_count = namespace(value=0) %}\n{%- set video_count = namespace(value=0) %}\n{%- macro render_content(content, do_vision_count, is_system_content=false) %}\n {%- if content is string %}\n {{- content }}\n {%- elif content is iterable and content is not mapping %}\n {%- for item in content %}\n {%- if 'image' in item or 'image_url' in item or item.type == 'image' %}\n {%- if is_system_content %}\n {{- raise_exception('System message cannot contain images.') }}\n {%- endif %}\n {%- if do_vision_count %}\n {%- set image_count.value = image_count.value + 1 %}\n {%- endif %}\n {%- if add_vision_id %}\n {{- 'Picture ' ~ image_count.value ~ ': ' }}\n {%- endif %}\n {{- '<|vision_start|><|image_pad|><|vision_end|>' }}\n {%- elif 'video' in item or item.type == 'video' %}\n {%- if is_system_content %}\n {{- raise_exception('System message cannot contain videos.') }}\n {%- endif %}\n {%- if do_vision_count %}\n {%- set video_count.value = video_count.value + 1 %}\n {%- endif %}\n {%- if add_vision_id %}\n {{- 'Video ' ~ video_count.value ~ ': ' }}\n {%- endif %}\n {{- '<|vision_start|><|video_pad|><|vision_end|>' }}\n {%- elif 'text' in item %}\n {{- item.text }}\n {%- else %}\n {{- raise_exception('Unexpected item type in content.') }}\n {%- endif %}\n {%- endfor %}\n {%- elif content is none or content is undefined %}\n {{- '' }}\n {%- else %}\n {{- raise_exception('Unexpected content type.') }}\n {%- endif %}\n{%- endmacro %}\n{%- if not messages %}\n {{- raise_exception('No messages provided.') }}\n{%- endif %}\n{%- if tools and tools is iterable and tools is not mapping %}\n {{- '<|im_start|>system\\n' }}\n {{- \"# Tools\\n\\nYou have access to the following functions:\\n\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\" }}\n {{- '\\n\\nIf you choose to call a function ONLY reply in the following format with NO suffix:\\n\\n<tool_call>\\n<function=example_function_name>\\n<parameter=example_parameter_1>\\nvalue_1\\n</parameter>\\n<parameter=example_parameter_2>\\nThis is the value for the second parameter\\nthat can span\\nmultiple lines\\n</parameter>\\n</function>\\n</tool_call>\\n\\n<IMPORTANT>\\nReminder:\\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\\n- Required parameters MUST be specified\\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\\n</IMPORTANT>' }}\n {%- if messages[0].role == 'system' %}\n {%- set content = render_content(messages[0].content, false, true)|trim %}\n {%- if content %}\n {{- '\\n\\n' + content }}\n {%- endif %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {%- set content = render_content(messages[0].content, false, true)|trim %}\n {{- '<|im_start|>system\\n' + content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" %}\n {%- set content = render_content(message.content, false)|trim %}\n {%- if not(content.startswith('<tool_response>') and 
content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if ns.multi_step_tool %}\n {{- raise_exception('No user query found in messages.') }}\n{%- endif %}\n{%- for message in messages %}\n {%- set content = render_content(message.content, true)|trim %}\n {%- if message.role == \"system\" %}\n {%- if not loop.first %}\n {{- raise_exception('System message must be at the beginning.') }}\n {%- endif %}\n {%- elif message.role == \"user\" %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is string %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in content %}\n {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- set reasoning_content = reasoning_content|trim %}\n {%- if loop.index0 > ns.last_query_index %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content + '\\n</think>\\n\\n' + content }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls and message.tool_calls is iterable and message.tool_calls is not mapping %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {%- if loop.first %}\n {%- if content|trim %}\n {{- '\\n\\n<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n {%- else %}\n {{- '<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n {%- endif %}\n {%- else %}\n {{- '\\n<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n {%- endif %}\n {%- if tool_call.arguments is defined %}\n {%- for args_name, args_value in tool_call.arguments|items %}\n {{- '<parameter=' + args_name + '>\\n' }}\n {%- set args_value = args_value | tojson | safe if args_value is mapping or (args_value is sequence and args_value is not string) else args_value | string %}\n {{- args_value }}\n {{- '\\n</parameter>\\n' }}\n {%- endfor %}\n {%- endif %}\n {{- '</function>\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.previtem and loop.previtem.role != \"tool\" %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- content }}\n {{- '\\n</tool_response>' }}\n {%- if not loop.last and loop.nextitem.role != \"tool\" %}\n {{- '<|im_end|>\\n' }}\n {%- elif loop.last %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- else %}\n {{- raise_exception('Unexpected message role.') }}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n {%- if enable_thinking is defined and enable_thinking is false %}\n {{- '<think>\\n\\n</think>\\n\\n' }}\n {%- else %}\n {{- '<think>\\n' }}\n {%- endif %}\n{%- endif %}",
286
+ "clean_up_tokenization_spaces": false,
287
+ "eos_token": "<|im_end|>",
288
+ "errors": "replace",
289
+ "model_max_length": 262144,
290
+ "pad_token": "<|endoftext|>",
291
+ "split_special_tokens": false,
292
+ "tokenizer_class": "Qwen2Tokenizer",
293
+ "unk_token": null,
294
+ "add_bos_token": false,
295
+ "pretokenize_regex": "(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?[\\p{L}\\p{M}]+|\\p{N}| ?[^\\s\\p{L}\\p{M}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+",
296
+ "extra_special_tokens": {
297
+ "audio_bos_token": "<|audio_start|>",
298
+ "audio_eos_token": "<|audio_end|>",
299
+ "audio_token": "<|audio_pad|>",
300
+ "image_token": "<|image_pad|>",
301
+ "video_token": "<|video_pad|>",
302
+ "vision_bos_token": "<|vision_start|>",
303
+ "vision_eos_token": "<|vision_end|>"
304
+ }
305
+ }
video_preprocessor_config.json ADDED
@@ -0,0 +1,21 @@
1
+ {
2
+ "size": {
3
+ "longest_edge": 25165824,
4
+ "shortest_edge": 4096
5
+ },
6
+ "patch_size": 16,
7
+ "temporal_patch_size": 2,
8
+ "merge_size": 2,
9
+ "image_mean": [
10
+ 0.5,
11
+ 0.5,
12
+ 0.5
13
+ ],
14
+ "image_std": [
15
+ 0.5,
16
+ 0.5,
17
+ 0.5
18
+ ],
19
+ "processor_class": "Qwen3VLProcessor",
20
+ "video_processor_type": "Qwen3VLVideoProcessor"
21
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff