This model, DeProgrammer/shisa-v2.1-qwen3-8b-MNN, was converted to the MNN format from shisa-ai/shisa-v2.1-qwen3-8b using llmexport.py in MNN 3.4.0 with default settings (4-bit quantization).

Inference can be run with any MNN-based runtime, e.g., the MNN Chat app on Android.
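For reference, a conversion like the one described above can be reproduced roughly as follows. This is a sketch, not the exact command used for this model: the `--path` and `--export` flags shown are from the llmexport.py tool in the MNN repository, but flag names and defaults can differ between MNN releases, so check the documentation for your MNN version.

```shell
# Fetch the MNN repository, which contains the LLM export tool.
git clone https://github.com/alibaba/MNN.git
cd MNN/transformers/llm/export

# Convert the source Hugging Face model to MNN format.
# With default settings, weights are quantized to 4 bits.
python llmexport.py \
    --path shisa-ai/shisa-v2.1-qwen3-8b \
    --export mnn
```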