---
license: apache-2.0
pipeline_tag: image-text-to-text
language:
- en
- zh
base_model:
- prithivMLmods/Gliese-OCR-7B-Post1.0
library_name: transformers
tags:
- ggml
- llama.cpp
- Document
- VLM
- KIE
- OCR
- VL
- Camel
- Openpdf
- text-generation-inference
- Extraction
- Linking
- Markdown
- .Md
- Document Digitization
- Intelligent Document Processing (IDP)
- Intelligent Word Recognition (IWR)
- Optical Mark Recognition (OMR)
---
# Gliese-OCR-7B-Post1.0-GGUF

Gliese-OCR-7B-Post1.0 is a fine-tuned version of Camel-Doc-OCR-062825, optimized for document retrieval, content extraction, and analysis recognition. Built on the Qwen2.5-VL architecture, it was trained with a focus on the Opendoc2-Analysis-Recognition dataset to strengthen document comprehension and information extraction. This repository provides GGUF quantizations of the model for use with llama.cpp and compatible runtimes.
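To fetch a single quant rather than the whole repository, the Hugging Face CLI can download individual files. A minimal sketch, assuming this repository is published as `prithivMLmods/Gliese-OCR-7B-Post1.0-GGUF` (the repo id is inferred from the base model name and may differ):

```shell
# Download one quant plus the vision projector into the current directory.
# Repo id is an assumption; substitute the actual repository path.
huggingface-cli download prithivMLmods/Gliese-OCR-7B-Post1.0-GGUF \
  Gliese-OCR-7B-Post1.0.Q4_K_M.gguf \
  Gliese-OCR-7B-Post1.0.mmproj-f16.gguf \
  --local-dir .
```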
## Model Files
| File Name | Quant Type | File Size |
|---|---|---|
| Gliese-OCR-7B-Post1.0.f16.gguf | F16 | 15.2 GB |
| Gliese-OCR-7B-Post1.0.Q2_K.gguf | Q2_K | 3.02 GB |
| Gliese-OCR-7B-Post1.0.Q3_K_L.gguf | Q3_K_L | 4.09 GB |
| Gliese-OCR-7B-Post1.0.Q3_K_M.gguf | Q3_K_M | 3.81 GB |
| Gliese-OCR-7B-Post1.0.Q3_K_S.gguf | Q3_K_S | 3.49 GB |
| Gliese-OCR-7B-Post1.0.Q4_K_M.gguf | Q4_K_M | 4.68 GB |
| Gliese-OCR-7B-Post1.0.Q4_K_S.gguf | Q4_K_S | 4.46 GB |
| Gliese-OCR-7B-Post1.0.Q5_K_M.gguf | Q5_K_M | 5.44 GB |
| Gliese-OCR-7B-Post1.0.Q5_K_S.gguf | Q5_K_S | 5.32 GB |
| Gliese-OCR-7B-Post1.0.Q6_K.gguf | Q6_K | 6.25 GB |
| Gliese-OCR-7B-Post1.0.Q8_0.gguf | Q8_0 | 8.1 GB |
| Gliese-OCR-7B-Post1.0.IQ4_XS.gguf | IQ4_XS | 4.25 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ1_M.gguf | i1-IQ1_M | 2.04 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ1_S.gguf | i1-IQ1_S | 1.9 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ2_M.gguf | i1-IQ2_M | 2.78 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ2_S.gguf | i1-IQ2_S | 2.6 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ2_XS.gguf | i1-IQ2_XS | 2.47 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ2_XXS.gguf | i1-IQ2_XXS | 2.27 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ3_M.gguf | i1-IQ3_M | 3.57 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ3_S.gguf | i1-IQ3_S | 3.5 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ3_XS.gguf | i1-IQ3_XS | 3.35 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ3_XXS.gguf | i1-IQ3_XXS | 3.11 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ4_NL.gguf | i1-IQ4_NL | 4.44 GB |
| Gliese-OCR-7B-Post1.0.i1-IQ4_XS.gguf | i1-IQ4_XS | 4.22 GB |
| Gliese-OCR-7B-Post1.0.i1-Q2_K.gguf | i1-Q2_K | 3.02 GB |
| Gliese-OCR-7B-Post1.0.i1-Q2_K_S.gguf | i1-Q2_K_S | 2.83 GB |
| Gliese-OCR-7B-Post1.0.i1-Q3_K_L.gguf | i1-Q3_K_L | 4.09 GB |
| Gliese-OCR-7B-Post1.0.i1-Q3_K_M.gguf | i1-Q3_K_M | 3.81 GB |
| Gliese-OCR-7B-Post1.0.i1-Q3_K_S.gguf | i1-Q3_K_S | 3.49 GB |
| Gliese-OCR-7B-Post1.0.i1-Q4_0.gguf | i1-Q4_0 | 4.44 GB |
| Gliese-OCR-7B-Post1.0.i1-Q4_1.gguf | i1-Q4_1 | 4.87 GB |
| Gliese-OCR-7B-Post1.0.i1-Q4_K_M.gguf | i1-Q4_K_M | 4.68 GB |
| Gliese-OCR-7B-Post1.0.i1-Q4_K_S.gguf | i1-Q4_K_S | 4.46 GB |
| Gliese-OCR-7B-Post1.0.i1-Q5_K_M.gguf | i1-Q5_K_M | 5.44 GB |
| Gliese-OCR-7B-Post1.0.i1-Q5_K_S.gguf | i1-Q5_K_S | 5.32 GB |
| Gliese-OCR-7B-Post1.0.i1-Q6_K.gguf | i1-Q6_K | 6.25 GB |
| Gliese-OCR-7B-Post1.0.imatrix.gguf | imatrix | 4.56 MB |
| Gliese-OCR-7B-Post1.0.mmproj-Q8_0.gguf | mmproj-Q8_0 | 856 MB |
| Gliese-OCR-7B-Post1.0.mmproj-f16.gguf | mmproj-f16 | 1.35 GB |
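Note that the `mmproj-*` files in the table carry the vision projector: a main quant alone is text-only, so one of them must be loaded alongside it for image input. A hedged sketch of running OCR on a page image with llama.cpp's multimodal CLI (binary name `llama-mtmd-cli` as in recent llama.cpp builds; the image path and prompt are placeholders):

```shell
# Run a document image through the model; pair a main quant with an mmproj file.
llama-mtmd-cli \
  -m Gliese-OCR-7B-Post1.0.Q4_K_M.gguf \
  --mmproj Gliese-OCR-7B-Post1.0.mmproj-f16.gguf \
  --image page.png \
  -p "Extract the text of this document as Markdown."
```

Smaller quants (Q2/Q3, IQ1/IQ2) trade accuracy for memory; for OCR workloads, Q4_K_M or larger is a common starting point.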
## Quants Usage

(Sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

A handy graph by ikawrakow compares some of the lower-quality quant types (lower is better).
