Update README.md
README.md CHANGED

@@ -101,7 +101,7 @@ Please note that these GGMLs are **not compatible with llama.cpp**. Please see b
 ## Repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/starcoderplus-GPTQ)
-* [
+* [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/starcoderplus-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bigcode/starcoderplus)
 
 <!-- compatibility_ggml start -->