Gliese-OCR-7B-Post2.0-final-GGUF

The Gliese-OCR-7B-Post2.0-final model is a refined and optimized version of Gliese-OCR-7B-Post1.0, built on the Qwen2.5-VL architecture. It is the final iteration in the Gliese-OCR series, offering improved efficiency, precision, and visualization capabilities for document OCR, visual analysis, and information extraction. Fine-tuned with extended document-visualization data and OCR-focused objectives, the model delivers strong accuracy across a wide range of document types, including scanned PDFs, handwritten pages, structured forms, and analytical reports.
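To run the model locally you need two things from the Model Files table below: a quantized weights file and one of the mmproj projector files (the projector carries the vision encoder used for image input). Below is a minimal sketch using the llama-cpp-python bindings; the `Qwen25VLChatHandler` class name, the chosen quant, and the file paths are assumptions that may vary with your llama-cpp-python version and setup.

```python
# A minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumptions: a recent build that ships a Qwen2.5-VL chat handler, and that the
# model + mmproj files from the table below have already been downloaded.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Qwen25VLChatHandler  # name may differ by version

chat_handler = Qwen25VLChatHandler(
    clip_model_path="Gliese-OCR-7B-Post2.0-final.mmproj-f16.gguf",  # vision projector
)
llm = Llama(
    model_path="Gliese-OCR-7B-Post2.0-final.Q4_K_M.gguf",  # any quant from the table
    chat_handler=chat_handler,
    n_ctx=4096,  # leave room for image tokens plus the extracted text
)

result = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/page.png"}},
                {"type": "text", "text": "Extract all text from this document."},
            ],
        }
    ],
)
print(result["choices"][0]["message"]["content"])
```

The same files should also work with llama.cpp's own multimodal CLI tools if you prefer a pure C++ runtime.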

Model Files

| File Name | Quant Type | File Size |
| --- | --- | --- |
| Gliese-OCR-7B-Post2.0-final.f16.gguf | F16 | 15.2 GB |
| Gliese-OCR-7B-Post2.0-final.Q2_K.gguf | Q2_K | 3.02 GB |
| Gliese-OCR-7B-Post2.0-final.Q3_K_L.gguf | Q3_K_L | 4.09 GB |
| Gliese-OCR-7B-Post2.0-final.Q3_K_M.gguf | Q3_K_M | 3.81 GB |
| Gliese-OCR-7B-Post2.0-final.Q3_K_S.gguf | Q3_K_S | 3.49 GB |
| Gliese-OCR-7B-Post2.0-final.Q4_K_M.gguf | Q4_K_M | 4.68 GB |
| Gliese-OCR-7B-Post2.0-final.Q4_K_S.gguf | Q4_K_S | 4.46 GB |
| Gliese-OCR-7B-Post2.0-final.Q5_K_M.gguf | Q5_K_M | 5.44 GB |
| Gliese-OCR-7B-Post2.0-final.Q5_K_S.gguf | Q5_K_S | 5.32 GB |
| Gliese-OCR-7B-Post2.0-final.Q6_K.gguf | Q6_K | 6.25 GB |
| Gliese-OCR-7B-Post2.0-final.Q8_0.gguf | Q8_0 | 8.1 GB |
| Gliese-OCR-7B-Post2.0-final.IQ4_XS.gguf | IQ4_XS | 4.25 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ1_M.gguf | i1-IQ1_M | 2.04 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ1_S.gguf | i1-IQ1_S | 1.9 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ2_M.gguf | i1-IQ2_M | 2.78 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ2_S.gguf | i1-IQ2_S | 2.6 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ2_XS.gguf | i1-IQ2_XS | 2.47 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ2_XXS.gguf | i1-IQ2_XXS | 2.27 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ3_M.gguf | i1-IQ3_M | 3.57 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ3_S.gguf | i1-IQ3_S | 3.5 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ3_XS.gguf | i1-IQ3_XS | 3.35 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ3_XXS.gguf | i1-IQ3_XXS | 3.11 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ4_NL.gguf | i1-IQ4_NL | 4.44 GB |
| Gliese-OCR-7B-Post2.0-final.i1-IQ4_XS.gguf | i1-IQ4_XS | 4.22 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q2_K.gguf | i1-Q2_K | 3.02 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q2_K_S.gguf | i1-Q2_K_S | 2.83 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q3_K_L.gguf | i1-Q3_K_L | 4.09 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q3_K_M.gguf | i1-Q3_K_M | 3.81 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q3_K_S.gguf | i1-Q3_K_S | 3.49 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q4_0.gguf | i1-Q4_0 | 4.44 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q4_1.gguf | i1-Q4_1 | 4.87 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q4_K_M.gguf | i1-Q4_K_M | 4.68 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q4_K_S.gguf | i1-Q4_K_S | 4.46 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q5_K_M.gguf | i1-Q5_K_M | 5.44 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q5_K_S.gguf | i1-Q5_K_S | 5.32 GB |
| Gliese-OCR-7B-Post2.0-final.i1-Q6_K.gguf | i1-Q6_K | 6.25 GB |
| Gliese-OCR-7B-Post2.0-final.imatrix.gguf | imatrix | 4.56 MB |
| Gliese-OCR-7B-Post2.0-final.mmproj-Q8_0.gguf | mmproj-Q8_0 | 856 MB |
| Gliese-OCR-7B-Post2.0-final.mmproj-f16.gguf | mmproj-f16 | 1.35 GB |
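To fetch a specific quant from this repository, the huggingface_hub Python client is the simplest route. A minimal download sketch follows; the repo id matches this page's name and the filenames come from the table above, so swap in whichever quant fits your hardware.

```python
# Minimal download sketch using the huggingface_hub client
# (pip install huggingface_hub). Filenames are taken from the table above.
from huggingface_hub import hf_hub_download

repo_id = "prithivMLmods/Gliese-OCR-7B-Post2.0-final-GGUF"

# Main weights: Q4_K_M is a common middle ground between size and quality.
model_path = hf_hub_download(
    repo_id=repo_id,
    filename="Gliese-OCR-7B-Post2.0-final.Q4_K_M.gguf",
)

# Vision projector: needed alongside the weights for image/OCR input.
mmproj_path = hf_hub_download(
    repo_id=repo_id,
    filename="Gliese-OCR-7B-Post2.0-final.mmproj-f16.gguf",
)

print("model:", model_path)
print("mmproj:", mmproj_path)
```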

Quants Usage

(Sorted by size, not necessarily by quality. IQ quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[graph: quant type comparison by ikawrakow, lower is better]
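As a rough way to reason about the size/quality trade-off, you can estimate a quant's effective bits per weight from its file size and the model's parameter count. A small illustrative calculation; the 8e9 figure is the rounded "8B params" from the model metadata below, so the results are approximations only.

```python
# Rough bits-per-weight estimate for a few quants: size_bytes * 8 / n_params.
# Assumptions: file sizes on this page are decimal gigabytes, and the parameter
# count is the page's rounded "8B params" figure.
n_params = 8e9

for name, size_gb in [
    ("i1-IQ1_S", 1.9),
    ("Q2_K", 3.02),
    ("Q4_K_M", 4.68),
    ("Q8_0", 8.1),
    ("F16", 15.2),
]:
    bpw = size_gb * 1e9 * 8 / n_params
    print(f"{name:>8}: ~{bpw:.1f} bits per weight")
```

That F16 comes out slightly below 16 bits per weight suggests the true parameter count is a bit under the rounded 8B, which is why these numbers should be read as estimates.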

Model Details

Format: GGUF
Model size: 8B params
Architecture: qwen2vl

