---
base_model:
- stepfun-ai/Step-3.5-Flash
---

# GGUF quants of stepfun-ai/Step-3.5-Flash

Quantization was performed without an imatrix, for comparison and experimentation. Perplexity may be worse than expected due to this naive approach.
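
Concretely, "without an imatrix" means `llama-quantize` is run with no `--imatrix` argument, so quantization error is not weighted by activation statistics. A minimal sketch for one quant type (the filenames and the Q4_K_M type are illustrative, not a listing of this repo's files):

```sh
# Naive quantization: no --imatrix, so per-block scales are chosen
# without activation-weighted error (filenames are placeholders).
llama-quantize Step-3.5-Flash-F16.gguf Step-3.5-Flash-Q4_K_M.gguf Q4_K_M

# For comparison, an imatrix-guided run would first collect activation
# statistics over a calibration text, then pass them to llama-quantize:
#   llama-imatrix -m Step-3.5-Flash-F16.gguf -f calib.txt -o step35.imatrix
#   llama-quantize --imatrix step35.imatrix \
#       Step-3.5-Flash-F16.gguf Step-3.5-Flash-Q4_K_M.gguf Q4_K_M
```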

Sample outputs and comparative evaluation coming eventually.

| Name | Version |
| ---- | ------- |
| stepfun-ai/Step-3.5-Flash | [a9197e1b758e](https://huggingface.co/stepfun-ai/Step-3.5-Flash/commit/a9197e1b758ebb54f801f6a1c4abbdddb1fea181) |
| `convert_hf_to_gguf.py`, `llama-quantize` and `llama-gguf-split` | [b7964](https://github.com/ggml-org/llama.cpp/tree/b7964) |
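
With the tools pinned above, the end-to-end pipeline would look roughly like the following; the output names, quant type, and split size are assumptions for illustration, not values taken from this repo.

```sh
# 1. Convert the HF checkpoint to a full-precision GGUF
#    (convert_hf_to_gguf.py ships with llama.cpp).
python convert_hf_to_gguf.py ./Step-3.5-Flash \
    --outtype f16 --outfile Step-3.5-Flash-F16.gguf

# 2. Quantize without an imatrix, per the note above.
llama-quantize Step-3.5-Flash-F16.gguf Step-3.5-Flash-Q4_K_M.gguf Q4_K_M

# 3. Split into shards below Hugging Face's per-file size limit
#    (the 45G max shard size here is an assumption).
llama-gguf-split --split --split-max-size 45G \
    Step-3.5-Flash-Q4_K_M.gguf Step-3.5-Flash-Q4_K_M
```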

See the original model card [here](https://huggingface.co/stepfun-ai/Step-3.5-Flash).