Dreano committed
Commit 98f5a82 · verified · 1 Parent(s): 07b4264

Upload 3 files

Files changed (4)
  1. .gitattributes +1 -0
  2. README.md +194 -0
  3. logo_nuextract.svg +90 -0
  4. nuextract2_bench.png +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+nuextract2_bench.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,194 @@
---
library_name: transformers
license: mit
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
pipeline_tag: image-text-to-text
---

<p align="center">
  <a href="https://nuextract.ai/">
    <img src="logo_nuextract.svg" width="200"/>
  </a>
</p>
<p align="center">
🖥️ <a href="https://nuextract.ai/">API / Platform</a>&nbsp;&nbsp; | &nbsp;&nbsp;📑 <a href="https://numind.ai/blog">Blog</a>&nbsp;&nbsp; | &nbsp;&nbsp;🗣️ <a href="https://discord.gg/3tsEtJNCDe">Discord</a>&nbsp;&nbsp; | &nbsp;&nbsp;🔗 <a href="https://github.com/numindai/nuextract">GitHub</a>
</p>

# NuExtract 2.0 8B GGUF by NuMind 🔥

NuExtract 2.0 is a family of models trained specifically for structured information extraction tasks. It is multimodal (supporting both text and image inputs) and multilingual.

We provide several versions of different sizes, all based on pre-trained models from the QwenVL family.
| Model Size | Model Name | Base Model | License | Huggingface Link |
|------------|------------|------------|---------|------------------|
| 2B | NuExtract-2.0-2B | [Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) | MIT | 🤗 [NuExtract-2.0-2B](https://huggingface.co/numind/NuExtract-2.0-2B) |
| 2B | NuExtract-2.0-2B-GGUF | [Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) | MIT | 🤗 [NuExtract-2.0-2B-GGUF](https://huggingface.co/numind/NuExtract-2.0-2B-GGUF) |
| 4B | NuExtract-2.0-4B | [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) | Qwen Research License | 🤗 [NuExtract-2.0-4B](https://huggingface.co/numind/NuExtract-2.0-4B) |
| 4B | NuExtract-2.0-4B-GGUF | [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) | Qwen Research License | 🤗 [NuExtract-2.0-4B-GGUF](https://huggingface.co/numind/NuExtract-2.0-4B-GGUF) |
| 8B | NuExtract-2.0-8B | [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) | MIT | 🤗 [NuExtract-2.0-8B](https://huggingface.co/numind/NuExtract-2.0-8B) |
| 8B | NuExtract-2.0-8B-GGUF | [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) | MIT | 🤗 [NuExtract-2.0-8B-GGUF](https://huggingface.co/numind/NuExtract-2.0-8B-GGUF) |

❗️Note: `NuExtract-2.0-2B` is based on Qwen2-VL rather than Qwen2.5-VL because the smallest Qwen2.5-VL model (3B) has a more restrictive, non-commercial license. We therefore include `NuExtract-2.0-2B` as a small model option that can be used commercially.

## Benchmark
Performance on a collection of ~1,000 diverse extraction examples containing both text and image inputs.
<a href="https://nuextract.ai/">
  <img src="nuextract2_bench.png" width="500"/>
</a>

## Overview

To use the model, provide an input text/image and a JSON template describing the information you need to extract. The template should be a JSON object specifying field names and their expected types.

Supported types include:
* `verbatim-string` - instructs the model to extract text that is present verbatim in the input.
* `string` - a generic string field that can incorporate paraphrasing/abstraction.
* `integer` - a whole number.
* `number` - a whole or decimal number.
* `date-time` - an ISO-formatted date.
* Array of any of the above types (e.g. `["string"]`)
* `enum` - a choice from a set of possible answers (represented in the template as an array of options, e.g. `["yes", "no", "maybe"]`).
* `multi-label` - an enum that can have multiple possible answers (represented in the template as a double-wrapped array, e.g. `[["A", "B", "C"]]`).

If the model does not identify relevant information for a field, it will return `null` or `[]` (for arrays and multi-labels).

The following is an example template:
```json
{
  "first_name": "verbatim-string",
  "last_name": "verbatim-string",
  "description": "string",
  "age": "integer",
  "gpa": "number",
  "birth_date": "date-time",
  "nationality": ["France", "England", "Japan", "USA", "China"],
  "languages_spoken": [["English", "French", "Japanese", "Mandarin", "Spanish"]]
}
```
An example output:
```json
{
  "first_name": "Susan",
  "last_name": "Smith",
  "description": "A student studying computer science.",
  "age": 20,
  "gpa": 3.7,
  "birth_date": "2005-03-01",
  "nationality": "England",
  "languages_spoken": ["English", "French"]
}
```

⚠️ We recommend using NuExtract with a temperature at or very close to 0. Some inference frameworks, such as Ollama, use a default of 0.7, which is not well suited to many extraction tasks.
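
In the llama.cpp examples below, the template and the input are combined into a single prompt of the form `# Template:\n<template>\n<input>`. A minimal sketch of that prompt construction (the `build_prompt` helper and the example values are illustrative, not part of the model's API):

```python
import json

def build_prompt(template: str, document: str) -> str:
    # Pretty-print the JSON template and prepend it to the input document,
    # mirroring the prompt format used in the examples below.
    return f"# Template:\n{json.dumps(json.loads(template), indent=4)}\n{document}"

# Hypothetical example values for illustration only.
example_template = '{"name": "verbatim-string", "age": "integer"}'
example_text = "John Smith is 25 years old."
print(build_prompt(example_template, example_text))
```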

## Using NuExtract with llama.cpp

### Download the model

```bash
mkdir models
hf download numind/NuExtract-2.0-8B-GGUF --local-dir ./models
```

### Start the llama.cpp server
```bash
docker run --gpus all -it -p 8000:8080 -v ./models:/models --entrypoint /app/llama-server ghcr.io/ggml-org/llama.cpp:full-cuda -m /models/NuExtract-2.0-8B-Q8_0.gguf --mmproj /models/mmproj-BF16.gguf --host 0.0.0.0
```

## Text Extraction
The `docker run` command above maps port 8080 inside the llama.cpp container to port 8000 on the host, so the client connects to `http://localhost:8000`.
```python
import openai
import json

# The llama.cpp server exposes an OpenAI-compatible API on the host port mapped above.
client = openai.OpenAI(
    api_key="EMPTY",
    base_url="http://localhost:8000",
)
```

llama.cpp does not support vLLM's `chat_template_kwargs`, so the template has to be included in the prompt manually:
```python
flight_text = """Date: Tuesday March 25th 2025
User info: Male, 32 yo

Book me a flight this Saturday morning to go to Marrakesh and come back on April 5th. I want it to be business class. Air France if possible."""
flight_template = """{
    "Destination": "verbatim-string",
    "Departure date range": {
        "beginning": "date-time",
        "end": "date-time"
    },
    "Return date range": {
        "beginning": "date-time",
        "end": "date-time"
    },
    "Requested Class": [
        "1st",
        "business",
        "economy"
    ],
    "Preferred airlines": [
        "string"
    ]
}"""

response = client.chat.completions.create(
    model="NuExtract",
    temperature=0.0,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": f"# Template:\n{json.dumps(json.loads(flight_template), indent=4)}\n{flight_text}",
                },
            ],
        },
    ],
)
```
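
Continuing from the request above, the reply is returned as JSON text in the assistant message, so it can be parsed directly (a minimal sketch, assuming the model returns the bare JSON object as in the example output earlier; the `result` name is just for illustration):

```python
# Parse the extraction result from the assistant message.
result = json.loads(response.choices[0].message.content)
print(result["Destination"])       # e.g. "Marrakesh"
print(result["Requested Class"])   # e.g. "business"
```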

## Image Extraction

```python
identity_template = """{
    "Last name": "verbatim-string",
    "First names": [
        "verbatim-string"
    ],
    "Document number": "verbatim-string",
    "Date of birth": "date-time",
    "Gender": [
        "Male",
        "Female",
        "Other"
    ],
    "Expiration date": "date-time",
    "Country ISO code": "string"
}"""

response = client.chat.completions.create(
    model="NuExtract",
    temperature=0.0,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": f"# Template:\n{json.dumps(json.loads(identity_template), indent=4)}\n<image>",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/4/49/Carte_identit%C3%A9_%C3%A9lectronique_fran%C3%A7aise_%282021%2C_recto%29.png/2880px-Carte_identit%C3%A9_%C3%A9lectronique_fran%C3%A7aise_%282021%2C_recto%29.png"
                    },
                },
            ],
        },
    ],
)
```
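
To extract from a local image instead of a remote URL, OpenAI-compatible servers generally accept base64-encoded data URLs in the `image_url` field; a minimal sketch, assuming the llama.cpp server does too (the `id_card.png` path is hypothetical):

```python
import base64

# Encode a local image as a data URL.
with open("id_card.png", "rb") as f:  # hypothetical local file
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

# Use it in place of the remote URL in the request above:
# {"type": "image_url", "image_url": {"url": data_url}}
```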
logo_nuextract.svg ADDED
nuextract2_bench.png ADDED

Git LFS Details

  • SHA256: b2cdf1eec686510aaa05e91d098ddda56f4674e7448a3e4b66e50a915240b545
  • Pointer size: 131 Bytes
  • Size of remote file: 106 kB