---
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
base_model_relation: quantized
tags:
- Llama 3.1 8b
- GGUF
- quantized
- 6-bit
---

## Llama.cpp hybrid layer quantization of Llama 3.1 8B Instruct by meta-llama

Original model: https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct

The hybrid quant employs different quantization levels on a per-layer basis to enable
both high performance and small file size at the same time. This particular quant
achieves a ~6.0e9 B GGUF with the same perplexity as a ~6.6e9 B Q6_K GGUF. The quants
employed are all K-quants, avoiding the slow CPU and older-GPU processing associated
with IQ quants. For this file the layer quants are as follows:
```
LAYER_TYPES='[
[0 ,"Q6_K" ],[1 ,"Q5_K_M"],[2 ,"Q4_K_M"],[3 ,"Q4_K_M"],[4 ,"Q4_K_M"],[5 ,"Q4_K_M"],[6 ,"Q4_K_M"],[7 ,"Q4_K_M"],
[8 ,"Q5_K_M"],[9 ,"Q5_K_S"],[10,"Q5_K_M"],[11,"Q5_K_S"],[12,"Q5_K_M"],[13,"Q5_K_S"],[14,"Q5_K_M"],[15,"Q5_K_S"],
[16,"Q5_K_M"],[17,"Q5_K_M"],[18,"Q5_K_M"],[19,"Q5_K_M"],[20,"Q5_K_M"],[21,"Q5_K_M"],[22,"Q5_K_M"],[23,"Q5_K_M"],
[24,"Q5_K_M"],[25,"Q5_K_M"],[26,"Q6_K" ],[27,"Q6_K" ],[28,"Q6_K" ],[29,"Q8_0" ],[30,"Q8_0" ],[31,"Q8_0" ]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K"
```
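
These variables are meant to drive a quantization run along the following lines. This is a minimal sketch only: the embedding/output flags are standard llama-quantize options, while LAYER_TYPES assumes a build patched per the hybrid layer quant discussion linked at the end of this card, and the BF16 input filename is illustrative.
```
# Hypothetical invocation: LAYER_TYPES assumes a patched llama-quantize
# (see the discussion link below); the input GGUF name is illustrative.
export LAYER_TYPES
llama-quantize $FLAGS \
    Llama-3.1-8B-Instruct.BF16.gguf \
    Llama-3.1-8B-Instruct.Q6_K_H.gguf \
    Q6_K
```
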
Comparison:

Quant  | Size/e9 B | PPL | Comment
-------|-----------|-----|----------------------------------------------
Q6_K   | 6.6       | 7.2 | Q6_K with default embedding and output
Q6_K_H | 6.0       | 7.2 | Hybrid quant with Q6_K embedding, Q6_K output
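
Figures like the PPL column above are typically measured with llama.cpp's perplexity tool. A minimal sketch, assuming a local copy of the quant; wiki.test.raw is a stand-in, since the exact corpus and context settings behind the table are not specified on this card:
```
# Measure perplexity of the hybrid quant; the corpus file is a placeholder.
llama-perplexity -m Llama-3.1-8B-Instruct.Q6_K_H.gguf -f wiki.test.raw
```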

Usage:

This model may be used together with the fixie-ai ultravox-v0_5-llama-3_1-8b projector, enabling it to process audio (.mp3 and .wav files)
and text inputs and generate text outputs. The mmproj file is available here:
https://huggingface.co/steampunque/ultravox-v0_5-llama-3_1-8b-Hybrid-GGUF

More information about running multimodal models can be found in the mtmd README in the tools directory of the llama.cpp source tree:
https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md
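
A minimal invocation sketch using llama.cpp's llama-mtmd-cli; the prompt and local file names are illustrative, so check the mtmd README above for the exact options supported by your build:
```
# Sketch: audio + text in, text out, via llama.cpp's multimodal CLI.
# File names and the prompt are illustrative.
llama-mtmd-cli -m Llama-3.1-8B-Instruct.Q6_K_H.gguf \
    --mmproj ultravox-v0_5-llama-3_1-8b.mmproj.gguf \
    --audio sample.mp3 \
    -p "Transcribe and summarize this audio clip."
```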

Benchmarks:

A full set of benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

## Download the files below:
| Link | Type | Size/e9 B | Notes |
|------|------|-----------|-------|
| [Llama-3.1-8B-Instruct.Q6_K_H.gguf](https://huggingface.co/steampunque/Llama-3.1-8B-Instruct-Hybrid-GGUF/resolve/main/Llama-3.1-8B-Instruct.Q6_K_H.gguf) | Q6_K_H | 6.0 | 0.6e9 B smaller than Q6_K |
| [ultravox-v0_5-llama-3_1-8b.mmproj.gguf](https://huggingface.co/steampunque/ultravox-v0_5-llama-3_1-8b-Hybrid-GGUF/resolve/main/ultravox-v0_5-llama-3_1-8b.mmproj.gguf) | mmproj | 1.38 | multimedia projector |
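
As an alternative to the direct links, both files can be fetched with the Hugging Face CLI. A sketch, assuming huggingface_hub is installed:
```
# Download the quant and the mmproj from the repos listed in the table above.
huggingface-cli download steampunque/Llama-3.1-8B-Instruct-Hybrid-GGUF \
    Llama-3.1-8B-Instruct.Q6_K_H.gguf --local-dir .
huggingface-cli download steampunque/ultravox-v0_5-llama-3_1-8b-Hybrid-GGUF \
    ultravox-v0_5-llama-3_1-8b.mmproj.gguf --local-dir .
```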

A discussion thread about the hybrid layer quant approach can be found in the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040