---
quantized_by: bartowski
pipeline_tag: text-generation
---

## Llamacpp imatrix Quantizations of MiniMax-M2 by MiniMaxAI

Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b6907">b6907</a> for quantization.

Original model: https://huggingface.co/MiniMax-M2 https://huggingface.co/MiniMaxAI/MiniMax-M2

All quants were made using the imatrix option with a dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8), combined with a subset of combined_all_small.parquet from Ed Addario [here](https://huggingface.co/datasets/eaddario/imatrix-calibration/blob/main/combined_all_small.parquet).
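
For reference, this is roughly the workflow llama.cpp provides for producing an imatrix quant; a minimal sketch with placeholder filenames, not the exact commands or calibration files used for these uploads:

```
# 1. Compute an importance matrix from a calibration text file (filenames are placeholders)
llama-imatrix -m MiniMax-M2-F16.gguf -f calibration_data.txt -o imatrix.dat

# 2. Quantize the full-precision GGUF, guided by the importance matrix
llama-quantize --imatrix imatrix.dat MiniMax-M2-F16.gguf MiniMax-M2-Q4_K_M.gguf Q4_K_M
```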

Run them in [LM Studio](https://lmstudio.ai/)

Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp based project
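
For example, a minimal llama.cpp invocation might look like this; a sketch only, with an assumed quant filename and `-ngl`/`-c` values you should tune to your hardware:

```
# Interactive chat with partial GPU offload (requires a GPU-enabled llama.cpp build)
llama-cli -m MiniMaxAI_MiniMax-M2-Q4_K_M.gguf -ngl 40 -c 8192 -cnv
```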

## Prompt format

No chat template was specified, so the default is used. This may be incorrect; check the original model card for details.

```
]~!b[]~b]system
{system_prompt}[e~[
]~b]user
{prompt}[e~[
]~b]ai
[e~[
]~b]ai
<think>
```
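
If you serve the model with `llama-server`, the chat template embedded in the GGUF is applied for you on the OpenAI-compatible endpoint, so you normally don't need to assemble this prompt by hand. A rough sketch (default port, illustrative payload):

```
llama-server -m MiniMaxAI_MiniMax-M2-Q4_K_M.gguf -c 8192

# In another terminal: the server formats messages with the model's chat template
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```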

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [MiniMax-M2-Q8_0.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q8_0) | Q8_0 | 243.14GB | true | Extremely high quality, generally unneeded but max available quant. |
| [MiniMax-M2-Q6_K.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q6_K) | Q6_K | 187.81GB | true | Very high quality, near perfect, *recommended*. |
| [MiniMax-M2-Q5_K_M.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q5_K_M) | Q5_K_M | 162.38GB | true | High quality, *recommended*. |
| [MiniMax-M2-Q5_K_S.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q5_K_S) | Q5_K_S | 157.55GB | true | High quality, *recommended*. |
| [MiniMax-M2-Q4_1.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q4_1) | Q4_1 | 143.31GB | true | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [MiniMax-M2-Q4_K_M.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q4_K_M) | Q4_K_M | 138.59GB | true | Good quality, default size for most use cases, *recommended*. |
| [MiniMax-M2-Q4_K_S.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q4_K_S) | Q4_K_S | 133.75GB | true | Slightly lower quality with more space savings, *recommended*. |
| [MiniMax-M2-Q4_0.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q4_0) | Q4_0 | 131.34GB | true | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [MiniMax-M2-IQ4_NL.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-IQ4_NL) | IQ4_NL | 129.24GB | true | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [MiniMax-M2-IQ4_XS.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-IQ4_XS) | IQ4_XS | 122.17GB | true | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [MiniMax-M2-Q3_K_XL.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q3_K_XL) | Q3_K_XL | 108.74GB | true | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [MiniMax-M2-Q3_K_L.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q3_K_L) | Q3_K_L | 108.21GB | true | Lower quality but usable, good for low RAM availability. |
| [MiniMax-M2-Q3_K_M.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q3_K_M) | Q3_K_M | 103.96GB | true | Low quality. |
| [MiniMax-M2-IQ3_M.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-IQ3_M) | IQ3_M | 103.95GB | true | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [MiniMax-M2-Q3_K_S.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q3_K_S) | Q3_K_S | 99.12GB | true | Low quality, not recommended. |
| [MiniMax-M2-IQ3_XS.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-IQ3_XS) | IQ3_XS | 93.76GB | true | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [MiniMax-M2-IQ3_XXS.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-IQ3_XXS) | IQ3_XXS | 90.10GB | true | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [MiniMax-M2-Q2_K_L.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q2_K_L) | Q2_K_L | 80.42GB | true | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [MiniMax-M2-Q2_K.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-Q2_K) | Q2_K | 79.82GB | true | Very low quality but surprisingly usable. |
| [MiniMax-M2-IQ2_M.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-IQ2_M) | IQ2_M | 72.00GB | true | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [MiniMax-M2-IQ2_S.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-IQ2_S) | IQ2_S | 63.35GB | true | Low quality, uses SOTA techniques to be usable. |
| [MiniMax-M2-IQ2_XS.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-IQ2_XS) | IQ2_XS | 63.14GB | true | Low quality, uses SOTA techniques to be usable. |
| [MiniMax-M2-IQ2_XXS.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/tree/main/MiniMaxAI_MiniMax-M2-IQ2_XXS) | IQ2_XXS | 54.73GB | true | Very low quality, uses SOTA techniques to be usable. |
| [MiniMax-M2-IQ1_M.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/blob/main/MiniMaxAI_MiniMax-M2-IQ1_M.gguf) | IQ1_M | 49.02GB | false | Extremely low quality, *not* recommended. |
| [MiniMax-M2-IQ1_S.gguf](https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2-GGUF/blob/main/MiniMaxAI_MiniMax-M2-IQ1_S.gguf) | IQ1_S | 47.01GB | false | Extremely low quality, *not* recommended. |

## Embed/output weights

Some of these quants (Q3_K_XL, Q2_K_L, etc.) use the standard quantization method, but with the embedding and output weights quantized to Q8_0 instead of what they would normally default to.
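
For reference, llama.cpp's quantize tool exposes per-tensor overrides that can produce this kind of variant; a sketch of the general idea, not necessarily the exact recipe used for these files:

```
# Q3_K_L base quant, but token embeddings and the output tensor kept at Q8_0
llama-quantize --imatrix imatrix.dat \
  --token-embedding-type Q8_0 --output-tensor-type Q8_0 \
  MiniMax-M2-F16.gguf MiniMax-M2-Q3_K_XL.gguf Q3_K_L
```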

## Downloading using huggingface-cli

<details>
<summary>Click to view download instructions</summary>

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/MiniMaxAI_MiniMax-M2-GGUF --include "MiniMaxAI_MiniMax-M2-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/MiniMaxAI_MiniMax-M2-GGUF --include "MiniMaxAI_MiniMax-M2-Q8_0/*" --local-dir ./
```

You can either specify a new local-dir (MiniMaxAI_MiniMax-M2-Q8_0) or download them all in place (./)
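
Once a split quant is downloaded, point llama.cpp at the first shard and it will load the remaining pieces automatically; a sketch with an assumed shard name (check the actual filenames in the downloaded folder):

```
# The shard name below is illustrative; use the real first "-00001-of-XXXXX" file you downloaded
llama-cli -m ./MiniMaxAI_MiniMax-M2-Q8_0/MiniMaxAI_MiniMax-M2-Q8_0-00001-of-00007.gguf -cnv
```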

</details>

## ARM/AVX information

Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.

Now, however, there is something called "online repacking" for weights, detailed in [this PR](https://github.com/ggml-org/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.

As of llama.cpp build [b4282](https://github.com/ggml-org/llama.cpp/releases/tag/b4282), you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.

Additionally, if you want to get slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggml-org/llama.cpp/pull/10541), which also repacks the weights for ARM, though only the 4_4 variant for now. The loading time may be slower, but it will result in an overall speed increase.
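
If you want to check what repacking actually does for your machine, llama-bench gives comparable prompt-processing and generation numbers; a sketch, with illustrative filenames and sizes:

```
# Compare Q4_0 (online-repacked when supported) against IQ4_NL on your own CPU
llama-bench -m MiniMaxAI_MiniMax-M2-Q4_0.gguf -p 512 -n 128 -t 8
llama-bench -m MiniMaxAI_MiniMax-M2-IQ4_NL.gguf -p 512 -n 128 -t 8
```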

<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>

I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.

<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>

| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: | -------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |

Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.

</details>

</details>

## Which file should I choose?

<details>
<summary>Click here for details</summary>

A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
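
With a model this size, most setups will split it between GPU and CPU rather than fitting it entirely in VRAM; a sketch of partial offload in llama.cpp (the layer count and filename are illustrative, raise `-ngl` until you run out of VRAM):

```
# Offload as many layers as fit in VRAM; the remainder runs on the CPU
llama-cli -m MiniMaxAI_MiniMax-M2-IQ4_XS.gguf -ngl 20 -c 8192 -cnv
```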

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggml-org/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.

</details>

## Credits

Thank you to kalomaze and Dampf for assistance in creating the imatrix calibration dataset.

Thank you to ZeroWw for the inspiration to experiment with embed/output weights.

Thank you to LM Studio for sponsoring my work.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski