Quantization and input max length

- Core ML: linear weight quantization with nbits=8
- Input max length: 128
- Packaged as an .mlpackage for Swift / Core ML
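
For reference, a minimal coremltools sketch of this kind of 8-bit linear weight quantization. The file names are placeholders and the exact arguments can differ between coremltools versions; this follows the coremltools 7+ optimize API, so treat it as an illustration rather than the exact commands used for this package.

```python
import numpy as np
import coremltools as ct
import coremltools.optimize.coreml as cto

# Load the full-precision Core ML package (path is a placeholder).
model = ct.models.MLModel("multilingual-e5-small.mlpackage")

# 8-bit linear weight quantization, matching the nbits=8 / linear setting above.
# Exact mode/dtype options vary by coremltools version.
op_config = cto.OpLinearQuantizerConfig(mode="linear", dtype=np.int8)
config = cto.OptimizationConfig(global_config=op_config)

quantized = cto.linear_quantize_weights(model, config=config)
quantized.save("multilingual-e5-small-q8.mlpackage")
```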

Note

I tried converting the weights to float16, but that changed the predictions too much. With linear nbits=8 quantization, the model behaves almost like the original.
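
A rough way to sanity-check that on a Mac is to run the original and the quantized packages on the same 128-token input and compare the outputs. The paths and feature names below ("input_ids", "attention_mask", "last_hidden_state") are assumptions for illustration; read the actual names from the package's model description if they differ.

```python
import numpy as np
import coremltools as ct
from transformers import AutoTokenizer

# Load both packages (paths are placeholders).
original = ct.models.MLModel("multilingual-e5-small.mlpackage")
quantized = ct.models.MLModel("multilingual-e5-small-q8.mlpackage")

# Tokenize one query, padded to the package's max length of 128.
tok = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-small")
enc = tok("query: how do I quantize a Core ML model?",
          padding="max_length", max_length=128, return_tensors="np")
inputs = {
    "input_ids": enc["input_ids"].astype(np.int32),        # feature names are assumptions
    "attention_mask": enc["attention_mask"].astype(np.int32),
}

# Compare the raw outputs with cosine similarity.
out_fp = original.predict(inputs)["last_hidden_state"].ravel()   # output name is an assumption
out_q8 = quantized.predict(inputs)["last_hidden_state"].ravel()
cos = np.dot(out_fp, out_q8) / (np.linalg.norm(out_fp) * np.linalg.norm(out_q8))
print(f"cosine similarity, original vs int8: {cos:.4f}")
```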
