catid commited on
Commit
25ba856
·
verified ·
1 Parent(s): 5de4775

Upload folder using huggingface_hub

Browse files
This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50) hide show
  1. README.md +77 -0
  2. chat_template.jinja +159 -0
  3. config.json +113 -0
  4. configuration_minimax_m2.py +200 -0
  5. generation_config.json +9 -0
  6. merges.txt +0 -0
  7. model-00000-of-00126.safetensors +3 -0
  8. model-00001-of-00126.safetensors +3 -0
  9. model-00002-of-00126.safetensors +3 -0
  10. model-00003-of-00126.safetensors +3 -0
  11. model-00004-of-00126.safetensors +3 -0
  12. model-00005-of-00126.safetensors +3 -0
  13. model-00006-of-00126.safetensors +3 -0
  14. model-00007-of-00126.safetensors +3 -0
  15. model-00008-of-00126.safetensors +3 -0
  16. model-00009-of-00126.safetensors +3 -0
  17. model-00010-of-00126.safetensors +3 -0
  18. model-00011-of-00126.safetensors +3 -0
  19. model-00012-of-00126.safetensors +3 -0
  20. model-00013-of-00126.safetensors +3 -0
  21. model-00014-of-00126.safetensors +3 -0
  22. model-00015-of-00126.safetensors +3 -0
  23. model-00016-of-00126.safetensors +3 -0
  24. model-00017-of-00126.safetensors +3 -0
  25. model-00018-of-00126.safetensors +3 -0
  26. model-00019-of-00126.safetensors +3 -0
  27. model-00020-of-00126.safetensors +3 -0
  28. model-00021-of-00126.safetensors +3 -0
  29. model-00022-of-00126.safetensors +3 -0
  30. model-00023-of-00126.safetensors +3 -0
  31. model-00024-of-00126.safetensors +3 -0
  32. model-00025-of-00126.safetensors +3 -0
  33. model-00026-of-00126.safetensors +3 -0
  34. model-00027-of-00126.safetensors +3 -0
  35. model-00028-of-00126.safetensors +3 -0
  36. model-00029-of-00126.safetensors +3 -0
  37. model-00030-of-00126.safetensors +3 -0
  38. model-00031-of-00126.safetensors +3 -0
  39. model-00032-of-00126.safetensors +3 -0
  40. model-00033-of-00126.safetensors +3 -0
  41. model-00034-of-00126.safetensors +3 -0
  42. model-00035-of-00126.safetensors +3 -0
  43. model-00036-of-00126.safetensors +3 -0
  44. model-00037-of-00126.safetensors +3 -0
  45. model-00038-of-00126.safetensors +3 -0
  46. model-00039-of-00126.safetensors +3 -0
  47. model-00040-of-00126.safetensors +3 -0
  48. model-00041-of-00126.safetensors +3 -0
  49. model-00042-of-00126.safetensors +3 -0
  50. model-00043-of-00126.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,77 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ pipeline_tag: text-generation
3
+ license: other
4
+ license_name: modified-mit
5
+ license_link: https://github.com/MiniMax-AI/MiniMax-M2.5/blob/main/LICENSE
6
+ library_name: transformers
7
+ base_model: MiniMaxAI/MiniMax-M2.5
8
+ tags:
9
+ - uncensored
10
+ - abliterated
11
+ - fp8
12
+ - minimax
13
+ - moe
14
+ ---
15
+
16
+ # MiniMax-M2.5-catid
17
+
18
+ **Uncensored FP8 version of [MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5)** with safety refusal behavior removed via surgical weight replacement.
19
+
20
+ ## Refusal Removal Results
21
+
22
+ Evaluated on a 10,000-prompt refusal benchmark (8,000 train + 2,000 validation) using an LLM judge (GPT-5-nano) for 4-way classification (complied / refused / hedged / deflected):
23
+
24
+ | Split | Total Prompts | Complied | Refused | Hedged | Deflected | Refusal Rate |
25
+ |-------|--------------|----------|---------|--------|-----------|-------------|
26
+ | Train | 8,000 | 7,506 | 262 | 228 | 4 | 6.2% |
27
+ | Validation | 2,000 | 1,885 | 55 | 59 | 1 | 5.8% |
28
+
29
+ **Coherence: 100%** (50/50 capability test prompts answered correctly)
30
+
31
+ The ~6% residual "refusal rate" consists primarily of false positives from the LLM judge on benign prompts (opinion questions, casual banter, medical/privacy disclaimers) rather than actual safety refusals of harmful content.
32
+
33
+ ### Method
34
+
35
+ The `o_proj` (attention output projection) weights across all 62 transformer layers were replaced with weights from [PRISM-PRO](https://huggingface.co/PrunaAI/MiniMax-M2.5-PRISM-PRO-Q8_0_v2-GGUF) (an abliterated variant), dequantized from Q8_0 GGUF format and re-quantized to FP8 E4M3FN with block-wise scaling to match the original model's quantization scheme. All other weights (q_proj, k_proj, v_proj, MLP experts, embeddings, norms, etc.) are identical to the official FP8 base model.
36
+
37
+ - **Reconstruction error**: 0.5% relative error per layer (cosine similarity ~1.0)
38
+ - **Modified weights**: 62 o_proj tensors (3072 x 6144 each) + their scale_inv tensors
39
+ - **Unmodified weights**: Everything else (~229B parameter MoE architecture preserved exactly)
40
+
41
+ ## Usage
42
+
43
+ This model is a drop-in replacement for `MiniMaxAI/MiniMax-M2.5`. Serve it with vLLM, SGLang, or any framework that supports the original model:
44
+
45
+ ### vLLM
46
+
47
+ ```bash
48
+ vllm serve catid/MiniMax-M2.5-catid \
49
+ --tensor-parallel-size 4 \
50
+ --trust-remote-code \
51
+ --max-model-len 2048
52
+ ```
53
+
54
+ ### SGLang
55
+
56
+ ```bash
57
+ python -m sglang.launch_server \
58
+ --model catid/MiniMax-M2.5-catid \
59
+ --tp 4 \
60
+ --trust-remote-code
61
+ ```
62
+
63
+ ### Recommended Parameters
64
+
65
+ `temperature=1.0`, `top_p=0.95`, `top_k=40`
66
+
67
+ ## Model Details
68
+
69
+ - **Architecture**: MiniMax-M2.5 (229B MoE, 62 layers, 256 experts/layer, hidden_dim=3072)
70
+ - **Precision**: FP8 E4M3FN with block-wise scaling (128x128 blocks)
71
+ - **Base model**: [MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5)
72
+ - **Abliteration source**: [PrunaAI/MiniMax-M2.5-PRISM-PRO-Q8_0_v2-GGUF](https://huggingface.co/PrunaAI/MiniMax-M2.5-PRISM-PRO-Q8_0_v2-GGUF)
73
+ - **License**: [Modified MIT](https://github.com/MiniMax-AI/MiniMax-M2.5/blob/main/LICENSE) (same as base model)
74
+
75
+ ## Disclaimer
76
+
77
+ This model is provided for research purposes. The removal of safety guardrails means it may generate content that the original model would refuse. Users are responsible for ensuring appropriate use.
chat_template.jinja ADDED
@@ -0,0 +1,159 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {# ----------‑‑‑ special token variables ‑‑‑---------- #}
2
+ {%- set toolcall_begin_token = '<minimax:tool_call>' -%}
3
+ {%- set toolcall_end_token = '</minimax:tool_call>' -%}
4
+ {#- Tool Rendering Functions ============================================== -#}
5
+ {%- macro render_tool_namespace(namespace_name, tool_list) -%}
6
+ {%- for tool in tool_list -%}
7
+ <tool>{{ tool.function | tojson(ensure_ascii=False) }}</tool>
8
+ {% endfor -%}
9
+ {%- endmacro -%}
10
+ {%- macro visible_text(content) -%}
11
+ {%- if content is string -%}
12
+ {{ content }}
13
+ {%- elif content is iterable and content is not mapping -%}
14
+ {%- for item in content -%}
15
+ {%- if item is mapping and item.type == 'text' -%}
16
+ {{- item.text }}
17
+ {%- elif item is string -%}
18
+ {{- item }}
19
+ {%- endif -%}
20
+ {%- endfor -%}
21
+ {%- else -%}
22
+ {{- content }}
23
+ {%- endif -%}
24
+ {%- endmacro -%}
25
+ {#- System Message Construction ============================================ -#}
26
+ {%- macro build_system_message(system_message) -%}
27
+ {%- if system_message and system_message.content -%}
28
+ {{- visible_text(system_message.content) }}
29
+ {%- else -%}
30
+ {%- if model_identity is not defined -%}
31
+ {%- set model_identity = "You are a helpful assistant. Your name is MiniMax-M2.5 and is built by MiniMax." -%}
32
+ {%- endif -%}
33
+ {{- model_identity }}
34
+ {%- endif -%}
35
+
36
+ {#- Handle current_date -#}
37
+ {%- if system_message and system_message.current_date -%}
38
+ {{- '\n' ~ 'Current date: ' + system_message.current_date }}
39
+ {%- endif -%}
40
+ {#- Handle current_location -#}
41
+ {%- if system_message and system_message.current_location -%}
42
+ {{- '\n' ~ 'Current location: ' + system_message.current_location }}
43
+ {%- endif -%}
44
+ {%- endmacro -%}
45
+ {#- Main Template Logic ================================================= -#}
46
+ {#- Extract system message (only first message if it's system) -#}
47
+ {%- set system_message = none -%}
48
+ {%- set conversation_messages = messages -%}
49
+ {%- if messages and messages[0].role == "system" -%}
50
+ {%- set system_message = messages[0] -%}
51
+ {%- set conversation_messages = messages[1:] -%}
52
+ {%- endif -%}
53
+ {#- Get the last user message turn, for interleved thinking -#}
54
+ {%- set ns = namespace(last_user_index=-1) %}
55
+ {% for m in conversation_messages %}
56
+ {%- if m.role == 'user' %}
57
+ {% set ns.last_user_index = loop.index0 -%}
58
+ {%- endif %}
59
+ {%- endfor %}
60
+ {#- Render system message -#}
61
+ {{- ']~!b[' ~ ']~b]system' ~ '\n' }}
62
+ {{- build_system_message(system_message) }}
63
+ {#- Render tools if available -#}
64
+ {%- if tools -%}
65
+ {{- '\n\n' ~ '# Tools' ~ '\n' ~ 'You may call one or more tools to assist with the user query.\nHere are the tools available in JSONSchema format:' ~ '\n' }}
66
+ {{- '\n' ~ '<tools>' ~ '\n' }}
67
+ {{- render_tool_namespace("functions", tools) }}
68
+ {{- '</tools>' ~ '\n\n' }}
69
+ {{- 'When making tool calls, use XML format to invoke tools and pass parameters:' ~ '\n' }}
70
+ {{- '\n' ~ toolcall_begin_token }}
71
+ <invoke name="tool-name-1">
72
+ <parameter name="param-key-1">param-value-1</parameter>
73
+ <parameter name="param-key-2">param-value-2</parameter>
74
+ ...
75
+ </invoke>
76
+ {{- '\n' ~ toolcall_end_token }}
77
+ {%- endif -%}
78
+ {{- '[e~[\n' }}
79
+
80
+ {#- Render messages -#}
81
+ {%- set last_tool_call = namespace(name=none) -%}
82
+ {%- for message in conversation_messages -%}
83
+ {%- if message.role == 'assistant' -%}
84
+ {#- Only render reasoning_content if no user message follows -#}
85
+ {{- ']~b]ai' ~ '\n' }}
86
+
87
+ {%- set reasoning_content = '' %}
88
+ {%- set content = visible_text(message.content) %}
89
+ {%- if message.reasoning_content is string %}
90
+ {%- set reasoning_content = message.reasoning_content %}
91
+ {%- else %}
92
+ {%- if '</think>' in content %}
93
+ {%- set reasoning_content = content.split('</think>')[0].strip('\n').split('<think>')[-1].strip('\n') %}
94
+ {%- set content = content.split('</think>')[-1].strip('\n') %}
95
+ {%- endif %}
96
+ {%- endif %}
97
+ {%- if reasoning_content and loop.index0 > ns.last_user_index -%}
98
+ {{- '<think>' ~ '\n' ~ reasoning_content ~ '\n' ~ '</think>' ~ '\n\n' }}
99
+ {%- endif -%}
100
+ {%- if content -%}
101
+ {{- content }}
102
+ {%- endif -%}
103
+ {%- if message.tool_calls -%}
104
+ {{- '\n' ~ toolcall_begin_token ~ '\n' }}
105
+
106
+ {%- for tool_call in message.tool_calls -%}
107
+ {%- if tool_call.function %}
108
+ {%- set tool_call = tool_call.function %}
109
+ {%- endif %}
110
+ {{- '<invoke name="' + tool_call.name + '">' }}
111
+ {% set _args = tool_call.arguments %}
112
+ {%- for k, v in _args.items() %}
113
+ {{- '<parameter name="' + k + '">' }}
114
+ {{- v | tojson(ensure_ascii=False) if v is not string else v }}
115
+ {{- '</parameter>' }}
116
+ {% endfor %}
117
+ {{- '</invoke>' ~ '\n' }}
118
+ {%- endfor -%}
119
+
120
+ {{- toolcall_end_token}}
121
+ {%- set last_tool_call.name = message.tool_calls[-1].name -%}
122
+ {%- else -%}
123
+ {%- set last_tool_call.name = none -%}
124
+ {%- endif -%}
125
+ {{- '[e~[' ~ '\n' }}
126
+
127
+ {%- elif message.role == 'tool' -%}
128
+ {%- if last_tool_call.name is none -%}
129
+ {{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
130
+ {%- endif -%}
131
+ {%- if loop.first or (conversation_messages[loop.index0 - 1].role != 'tool') -%}
132
+ {{- ']~b]tool' }}
133
+ {%- endif -%}
134
+ {%- if message.content is string -%}
135
+ {{- '\n<response>' }}
136
+ {{- message.content }}
137
+ {{- '</response>' }}
138
+ {%- else -%}
139
+ {%- for tr in message.content -%}
140
+ {{- '\n<response>' }}
141
+ {{- tr.output if tr.output is defined else (tr.text if tr.type == 'text' and tr.text is defined else tr) }}
142
+ {{- '\n</response>' }}
143
+ {%- endfor -%}
144
+ {%- endif -%}
145
+ {%- if loop.last or (conversation_messages[loop.index0 + 1].role != 'tool') -%}
146
+ {{- '[e~[\n' -}}
147
+ {%- endif -%}
148
+
149
+ {%- elif message.role == 'user' -%}
150
+ {{- ']~b]user' ~ '\n' }}
151
+ {{- visible_text(message.content) }}
152
+ {{- '[e~[' ~ '\n' }}
153
+ {%- endif -%}
154
+ {%- endfor -%}
155
+
156
+ {#- Generation prompt -#}
157
+ {%- if add_generation_prompt -%}
158
+ {{- ']~b]ai' ~ '\n' ~ '<think>' ~ '\n' }}
159
+ {%- endif -%}
config.json ADDED
@@ -0,0 +1,113 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "architectures": [
3
+ "MiniMaxM2ForCausalLM"
4
+ ],
5
+ "attn_type_list": [
6
+ 1,
7
+ 1,
8
+ 1,
9
+ 1,
10
+ 1,
11
+ 1,
12
+ 1,
13
+ 1,
14
+ 1,
15
+ 1,
16
+ 1,
17
+ 1,
18
+ 1,
19
+ 1,
20
+ 1,
21
+ 1,
22
+ 1,
23
+ 1,
24
+ 1,
25
+ 1,
26
+ 1,
27
+ 1,
28
+ 1,
29
+ 1,
30
+ 1,
31
+ 1,
32
+ 1,
33
+ 1,
34
+ 1,
35
+ 1,
36
+ 1,
37
+ 1,
38
+ 1,
39
+ 1,
40
+ 1,
41
+ 1,
42
+ 1,
43
+ 1,
44
+ 1,
45
+ 1,
46
+ 1,
47
+ 1,
48
+ 1,
49
+ 1,
50
+ 1,
51
+ 1,
52
+ 1,
53
+ 1,
54
+ 1,
55
+ 1,
56
+ 1,
57
+ 1,
58
+ 1,
59
+ 1,
60
+ 1,
61
+ 1,
62
+ 1,
63
+ 1,
64
+ 1,
65
+ 1,
66
+ 1,
67
+ 1
68
+ ],
69
+ "auto_map": {
70
+ "AutoConfig": "configuration_minimax_m2.MiniMaxM2Config",
71
+ "AutoModelForCausalLM": "modeling_minimax_m2.MiniMaxM2ForCausalLM"
72
+ },
73
+ "head_dim": 128,
74
+ "hidden_act": "silu",
75
+ "hidden_size": 3072,
76
+ "intermediate_size": 1536,
77
+ "max_position_embeddings": 196608,
78
+ "model_type": "minimax_m2",
79
+ "mtp_transformer_layers": 1,
80
+ "num_attention_heads": 48,
81
+ "num_experts_per_tok": 8,
82
+ "num_hidden_layers": 62,
83
+ "num_key_value_heads": 8,
84
+ "num_local_experts": 256,
85
+ "num_mtp_modules": 3,
86
+ "qk_norm_type": "per_layer",
87
+ "quantization_config": {
88
+ "activation_scheme": "dynamic",
89
+ "fmt": "float8_e4m3fn",
90
+ "quant_method": "fp8",
91
+ "weight_block_size": [
92
+ 128,
93
+ 128
94
+ ],
95
+ "modules_to_not_convert": [
96
+ "gate",
97
+ "e_score_correction_bias",
98
+ "lm_head"
99
+ ]
100
+ },
101
+ "rms_norm_eps": 1e-06,
102
+ "rope_theta": 5000000,
103
+ "rotary_dim": 64,
104
+ "scoring_func": "sigmoid",
105
+ "shared_intermediate_size": 0,
106
+ "tie_word_embeddings": false,
107
+ "transformers_version": "4.46.1",
108
+ "use_cache": true,
109
+ "use_mtp": true,
110
+ "use_qk_norm": true,
111
+ "use_routing_bias": true,
112
+ "vocab_size": 200064
113
+ }
configuration_minimax_m2.py ADDED
@@ -0,0 +1,200 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
2
+ # This file was automatically generated from src/transformers/models/minimax_m2/modular_minimax_m2.py.
3
+ # Do NOT edit this file manually as any edits will be overwritten by the generation of
4
+ # the file from the modular. If any change should be done, please apply the change to the
5
+ # modular_minimax_m2.py file directly. One of our CI enforces this.
6
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
7
+ # coding=utf-8
8
+ # Copyright 2025 the HuggingFace Team. All rights reserved.
9
+ #
10
+ # Licensed under the Apache License, Version 2.0 (the "License");
11
+ # you may not use this file except in compliance with the License.
12
+ # You may obtain a copy of the License at
13
+ #
14
+ # http://www.apache.org/licenses/LICENSE-2.0
15
+ #
16
+ # Unless required by applicable law or agreed to in writing, software
17
+ # distributed under the License is distributed on an "AS IS" BASIS,
18
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
19
+ # See the License for the specific language governing permissions and
20
+ # limitations under the License.
21
+
22
+
23
+ from transformers.configuration_utils import PretrainedConfig
24
+
25
+
26
+ class MiniMaxM2Config(PretrainedConfig):
27
+ r"""
28
+ This is the configuration class to store the configuration of a [`MiniMaxM2Model`]. It is used to instantiate an
29
+ MiniMaxM2 model according to the specified arguments, defining the model architecture. Instantiating a configuration
30
+ with the defaults will yield a similar configuration to that of the MiniMaxM2-7B-v0.1 or MiniMaxM2-7B-Instruct-v0.1.
31
+
32
+ [minimax_m2ai/MiniMaxM2-8x7B](https://huggingface.co/minimax_m2ai/MiniMaxM2-8x7B)
33
+ [minimax_m2ai/MiniMaxM2-7B-Instruct-v0.1](https://huggingface.co/minimax_m2ai/MiniMaxM2-7B-Instruct-v0.1)
34
+
35
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
36
+ documentation from [`PretrainedConfig`] for more information.
37
+
38
+
39
+ Args:
40
+ vocab_size (`int`, *optional*, defaults to 32000):
41
+ Vocabulary size of the MiniMaxM2 model. Defines the number of different tokens that can be represented by the
42
+ `inputs_ids` passed when calling [`MiniMaxM2Model`]
43
+ hidden_size (`int`, *optional*, defaults to 4096):
44
+ Dimension of the hidden representations.
45
+ intermediate_size (`int`, *optional*, defaults to 14336):
46
+ Dimension of the MLP representations.
47
+ num_hidden_layers (`int`, *optional*, defaults to 32):
48
+ Number of hidden layers in the Transformer encoder.
49
+ num_attention_heads (`int`, *optional*, defaults to 32):
50
+ Number of attention heads for each attention layer in the Transformer encoder.
51
+ num_key_value_heads (`int`, *optional*, defaults to 8):
52
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
53
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
54
+ `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
55
+ converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
56
+ by meanpooling all the original heads within that group. For more details, check out [this
57
+ paper](https://huggingface.co/papers/2305.13245). If it is not specified, will default to `8`.
58
+ head_dim (`int`, *optional*, defaults to `hidden_size // num_attention_heads`):
59
+ The attention head dimension.
60
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
61
+ The non-linear activation function (function or string) in the decoder.
62
+ max_position_embeddings (`int`, *optional*, defaults to `4096*32`):
63
+ The maximum sequence length that this model might ever be used with. MiniMaxM2's sliding window attention
64
+ allows sequence of up to 4096*32 tokens.
65
+ initializer_range (`float`, *optional*, defaults to 0.02):
66
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
67
+ rms_norm_eps (`float`, *optional*, defaults to 1e-05):
68
+ The epsilon used by the rms normalization layers.
69
+ use_cache (`bool`, *optional*, defaults to `True`):
70
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
71
+ relevant if `config.is_decoder=True`.
72
+ pad_token_id (`int`, *optional*):
73
+ The id of the padding token.
74
+ bos_token_id (`int`, *optional*, defaults to 1):
75
+ The id of the "beginning-of-sequence" token.
76
+ eos_token_id (`int`, *optional*, defaults to 2):
77
+ The id of the "end-of-sequence" token.
78
+ tie_word_embeddings (`bool`, *optional*, defaults to `False`):
79
+ Whether the model's input and output word embeddings should be tied.
80
+ rope_theta (`float`, *optional*, defaults to 1000000.0):
81
+ The base period of the RoPE embeddings.
82
+ sliding_window (`int`, *optional*):
83
+ Sliding window attention window size. If not specified, will default to `4096`.
84
+ attention_dropout (`float`, *optional*, defaults to 0.0):
85
+ The dropout ratio for the attention probabilities.
86
+ num_experts_per_tok (`int`, *optional*, defaults to 2):
87
+ The number of experts to route per-token, can be also interpreted as the `top-k` routing
88
+ parameter
89
+ num_local_experts (`int`, *optional*, defaults to 8):
90
+ Number of experts per Sparse MLP layer.
91
+ output_router_logits (`bool`, *optional*, defaults to `False`):
92
+ Whether or not the router logits should be returned by the model. Enabling this will also
93
+ allow the model to output the auxiliary loss. See [here]() for more details
94
+ router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
95
+ The aux loss factor for the total loss.
96
+ router_jitter_noise (`float`, *optional*, defaults to 0.0):
97
+ Amount of noise to add to the router.
98
+
99
+ ```python
100
+ >>> from transformers import MiniMaxM2Model, MiniMaxM2Config
101
+
102
+ >>> # Initializing a MiniMaxM2 7B style configuration
103
+ >>> configuration = MiniMaxM2Config()
104
+
105
+ >>> # Initializing a model from the MiniMaxM2 7B style configuration
106
+ >>> model = MiniMaxM2Model(configuration)
107
+
108
+ >>> # Accessing the model configuration
109
+ >>> configuration = model.config
110
+ ```"""
111
+
112
+ model_type = "minimax_m2"
113
+ keys_to_ignore_at_inference = ["past_key_values"]
114
+ base_model_tp_plan = {
115
+ "layers.*.self_attn.q_proj": "colwise",
116
+ "layers.*.self_attn.k_proj": "colwise",
117
+ "layers.*.self_attn.v_proj": "colwise",
118
+ "layers.*.self_attn.o_proj": "rowwise",
119
+ "layers.*.block_sparse_moe.gate": "colwise_rep", # we need to replicate here to correctly route experts
120
+ "layers.*.block_sparse_moe.experts.*.w1": "colwise",
121
+ "layers.*.block_sparse_moe.experts.*.w2": "rowwise",
122
+ "layers.*.block_sparse_moe.experts.*.w3": "colwise",
123
+ }
124
+ base_model_pp_plan = {
125
+ "embed_tokens": (["input_ids"], ["inputs_embeds"]),
126
+ "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
127
+ "norm": (["hidden_states"], ["hidden_states"]),
128
+ }
129
+
130
+ def __init__(
131
+ self,
132
+ vocab_size=32000,
133
+ hidden_size=4096,
134
+ intermediate_size=14336,
135
+ num_hidden_layers=32,
136
+ num_attention_heads=32,
137
+ num_key_value_heads=8,
138
+ head_dim=None,
139
+ hidden_act="silu",
140
+ max_position_embeddings=4096 * 32,
141
+ initializer_range=0.02,
142
+ rms_norm_eps=1e-5,
143
+ use_cache=True,
144
+ pad_token_id=None,
145
+ bos_token_id=1,
146
+ eos_token_id=2,
147
+ tie_word_embeddings=False,
148
+ rope_theta=1e6,
149
+ sliding_window=None,
150
+ attention_dropout=0.0,
151
+ num_experts_per_tok=2,
152
+ num_local_experts=8,
153
+ output_router_logits=False,
154
+ router_aux_loss_coef=0.001,
155
+ router_jitter_noise=0.0,
156
+ **kwargs,
157
+ ):
158
+ self.vocab_size = vocab_size
159
+ self.max_position_embeddings = max_position_embeddings
160
+ self.hidden_size = hidden_size
161
+ self.intermediate_size = intermediate_size
162
+ self.num_hidden_layers = num_hidden_layers
163
+ self.num_attention_heads = num_attention_heads
164
+ self.sliding_window = sliding_window
165
+
166
+ # for backward compatibility
167
+ if num_key_value_heads is None:
168
+ num_key_value_heads = num_attention_heads
169
+
170
+ self.num_key_value_heads = num_key_value_heads
171
+ self.hidden_act = hidden_act
172
+ self.initializer_range = initializer_range
173
+ self.rms_norm_eps = rms_norm_eps
174
+ self.use_cache = use_cache
175
+ self.rope_theta = rope_theta
176
+ self.attention_dropout = attention_dropout
177
+ self.head_dim = head_dim
178
+
179
+ self.num_experts_per_tok = num_experts_per_tok
180
+ self.num_local_experts = num_local_experts
181
+ self.output_router_logits = output_router_logits
182
+ self.router_aux_loss_coef = router_aux_loss_coef
183
+ self.router_jitter_noise = router_jitter_noise
184
+
185
+ self.use_qk_norm = kwargs.pop("use_qk_norm", False)
186
+ self.rotary_dim = kwargs.pop("rotary_dim", self.head_dim)
187
+ self.partial_rotary_factor = kwargs.pop("partial_rotary_factor", 1)
188
+ if self.head_dim is not None:
189
+ self.partial_rotary_factor = self.rotary_dim / self.head_dim
190
+
191
+ super().__init__(
192
+ pad_token_id=pad_token_id,
193
+ bos_token_id=bos_token_id,
194
+ eos_token_id=eos_token_id,
195
+ tie_word_embeddings=tie_word_embeddings,
196
+ **kwargs,
197
+ )
198
+
199
+
200
+ __all__ = ["MiniMaxM2Config"]
generation_config.json ADDED
@@ -0,0 +1,9 @@
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "bos_token_id": 200019,
3
+ "do_sample": true,
4
+ "eos_token_id": 200020,
5
+ "temperature": 1.0,
6
+ "top_p": 0.95,
7
+ "top_k": 40,
8
+ "transformers_version": "4.46.1"
9
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00000-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5fa4733c16581b6c6851ae9bae5c2cdeb1246c09fc9fb983a34160bc29def8ac
3
+ size 3693062744
model-00001-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ed126600b65b79847e2b0e46b1df3f06193151cb1eeaf0dfd31c0080f73feb1d
3
+ size 1208321208
model-00002-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6c811151b9146c5e9f975f49a0e83c8cd5a9a9c03f721101fe8f980d906f7ecc
3
+ size 2463868936
model-00003-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3bbe8d7dd07fb4fd4f9cbb994f99dda8262bcebd426c4ade445778fef79dc05d
3
+ size 1208321208
model-00004-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4e46c6e9d1e74204ea7d5091c58b88aa28d5b037ada9223f12944afde36b8d80
3
+ size 2463868936
model-00005-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ca9c8234a91d4fa5bed8972e1acb9a478fefc3d0616aa75a7054e5ed42a355ab
3
+ size 1208321208
model-00006-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0663c2d429f4b414c976e1a127f7fe8fef8218e881f982a26a3d1a2f897beb34
3
+ size 2463868936
model-00007-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:719cdde66b577b3b78f32218ae35829a692f42d5a8365564293b6b4572a3bf19
3
+ size 1208321208
model-00008-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3c82736c5108da1358f95818a6f9f00359fa762f65b4dc2e9cb7243d530ea5f6
3
+ size 2463868936
model-00009-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9dbfc5fb0c262222af4db090dd979525af71cf8c8202569d4c9f99a5d3985af3
3
+ size 1208321208
model-00010-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:408a5ba985946c6032dea18f34f9aca99867df2e434aaf5b41b71d2e891d94b9
3
+ size 2463868936
model-00011-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4dfcc175b5553cfff3cb2b7d63b6cdf8919421be4d12bd9dfda176de6477f583
3
+ size 1208321208
model-00012-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f5f726b59d14f320e990e5790a3a06babaf6f53c18008cb34101abdfa9c150d2
3
+ size 2463868936
model-00013-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:46468675fadd24ed94a04efe21c44d283a1e351eb4ff315d5f8bad2b5ff9151f
3
+ size 1208321208
model-00014-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:88442746a3c14ec309e43d743e2c61728a5e6c91f3f97cc13183c2e8bcf0d39d
3
+ size 2463868936
model-00015-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:863708baaceaf7658b7d07144bf57e9f34ecfc278f2be7ef104ff2c5a3535e7c
3
+ size 1208321208
model-00016-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:378b81f5da3a6e8a8d18be64428a5fd0c5ee24c51579b3b9782c29c01d0f78f3
3
+ size 2463868936
model-00017-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:549da782be1d9803a585387d2a082ebbe46d3cd214bec1702d6cbfb47421814f
3
+ size 1208321208
model-00018-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2a8c589ad003dcbeebb2bf3b67e0f168db5220def62cc14c39fba3c4acf2c69c
3
+ size 2463868936
model-00019-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ee7f5ae1d73509de35105fcd707bba37410c9ef738287ae607386f168ed22250
3
+ size 1208321208
model-00020-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:353c467b4662b2e6c6aa529963f20bad04718ab42cae453c05c124ae8c39bcde
3
+ size 2463869968
model-00021-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2c4de90e8bc55be57c694d3adc754370836738c2eef4f3dd778a379a85ed2e0d
3
+ size 1208321720
model-00022-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:09e111b67e9cb7a9268755dfde45011ebafd900390666f621d5fe65410f9d48c
3
+ size 2463869968
model-00023-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d06531f2e74eedc5fa5a03dae62cbcf804afac22fe184bd00ad90b211f4795d6
3
+ size 1208321720
model-00024-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:95537892e7ed657f0b430b63bbc9179bda7ae8bc92a2334cc1183e55f88c55c9
3
+ size 2463869968
model-00025-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bf5da218100165ee1973101083a45202a4a9176d80fad139652adb04444f80d3
3
+ size 1208321720
model-00026-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bf33c373e1ea8a4bf447655640d660d2b77f3ae7695453c55b4d04882c5c97d3
3
+ size 2463869968
model-00027-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7b8875b5644c876c229c453f0e9bae040d54dc364da0d8005bf1161813db7c7f
3
+ size 1208321720
model-00028-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a1a292eb6aba45c03b3a4b7d0b525fc8655e982d0f3b65dc2b74ae002a84b0f2
3
+ size 2463869968
model-00029-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:275939435df08f8223cad05ae0af71ea06b55cb9afe3f64cc766fd9b81a0844e
3
+ size 1208321720
model-00030-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:836f6aefc323cd91173fc2657cbe0c31cf30768174f0e86d0c5c5ec0663105c8
3
+ size 2463869968
model-00031-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:90cb43eb2e0169961033bbd43983644acee48e921aa3e232b187c24e9ada0397
3
+ size 1208321720
model-00032-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b1f1fd3be4961b61ccefd8a5281f3eff82e338ef68f7555630daf5ee1ea2bfdb
3
+ size 2463869968
model-00033-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a22b4f151873f8533308bca289db23ad07e77d5d6bbad358f7185214d0477321
3
+ size 1208321720
model-00034-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:554bd1cf8a95ba086e003dbb4158968e3b5f0f1418f30ebacbd3a8c246ea3d5b
3
+ size 2463869968
model-00035-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aeede5d62eafad803c8fd70644543d3434449bc17ab898d3405f39f674fcd613
3
+ size 1208321720
model-00036-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b99c5eb931fa4babd685897d648b3e0f7e132c6791a454fc2abee242dc997bec
3
+ size 2463869968
model-00037-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2fa4aa203fd33d1fdf8aef5e5c77a2dca6adf52e1d36cea122888c6da5eda159
3
+ size 1208321720
model-00038-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:51f4154b813b374bff94324931a209e6684fc8c4fa3018e8a0bed35ceb4e37fc
3
+ size 2463869968
model-00039-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f0ce38595220a00e51a53ea32b6cf1035b1c780ba4e274dc61748ac210a1db56
3
+ size 1208321720
model-00040-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c3dabe65aed259fb107d10066b92c13152bf19e42edec6166483acb6136265c8
3
+ size 2463869968
model-00041-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1d0e4c4fdaa567cea9bc6695c00ae234cf7b9db993164fb4ed7ef4dcf8266136
3
+ size 1208321720
model-00042-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:583173528084a505794dad3d641190abeec26c825fa8758298c6600386a5e055
3
+ size 2463869968
model-00043-of-00126.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:edd078ceb93e7ac863cda3e5ce0a7a364b068137544e8413339dfcc7f5280af2
3
+ size 1208321720