DeProgrammer committed
Commit 5551a82 · verified · 1 parent: c566979

Clarify settings, quantization level in README

Files changed (1): README.md (+2 −2)
@@ -13,6 +13,6 @@ tags:
 
 This model [DeProgrammer/Jan-v3-4B-base-instruct-MNN](https://huggingface.co/DeProgrammer/Jan-v3-4B-base-instruct-MNN) was
 converted to MNN format from [janhq/Jan-v3-4B-base-instruct](https://huggingface.co/janhq/Jan-v3-4B-base-instruct)
-using [llmexport.py](https://github.com/alibaba/MNN/issues/4153#issuecomment-3866182869) in [MNN version **3.4.0**](https://github.com/alibaba/MNN/commit/a874b302f094599e2838a9186e5ce2cf6a81a7a7).
+using [llmexport.py](https://github.com/alibaba/MNN/issues/4153#issuecomment-3866182869) in [MNN version **3.4.0**](https://github.com/alibaba/MNN/commit/a874b302f094599e2838a9186e5ce2cf6a81a7a7) with default settings (4-bit quantization).
 
-Inference can be run via MNN, e.g., MNN Chat on Android.
+Inference can be run via MNN, e.g., MNN Chat on Android.
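The conversion described in the diff could be sketched as a command line. This is an assumption-laden sketch, not the verified invocation: the flag names (`--path`, `--export`, `--quant_bit`, `--dst_path`) are taken from MNN's `llmexport.py` script as commonly documented, and the exact command used here is given in the issue comment linked from the README.

```shell
# Hedged sketch of the MNN conversion step (flag names are assumptions;
# see the llmexport.py issue comment linked in the README for the
# actual invocation used for this model).
python llmexport.py \
    --path janhq/Jan-v3-4B-base-instruct \
    --export mnn \
    --quant_bit 4 \
    --dst_path Jan-v3-4B-base-instruct-MNN
```

With default settings, weights are quantized to 4 bits, which is what the updated README now states explicitly.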