feiziaarash committed
Commit 3142866 · 1 Parent(s): 296a4e2

fix readme

Files changed (1)
  1. README.md +22 -82
README.md CHANGED
@@ -23,14 +23,12 @@ metrics:
 
 **GroundNext-7B-V0** is a state-of-the-art vision-language model for GUI element grounding, developed as part of the **GroundCUA** project. This model features:
 
-- **Superior grounding accuracy** achieving 48.9% on ScreenSpot-Pro, 55.6% on OSWorld-G, and 31.3% on UI-Vision benchmarks
-- **Exceptional cross-platform generalization** with 83.7% accuracy on MMBench-GUI and 92.8% on ScreenSpot-v2 despite desktop-only training
+- **Superior grounding accuracy** achieving 52.9% on ScreenSpot-Pro, 67.7% on OSWorld-G, and 60.3% on UI-Vision benchmarks
+- **Exceptional cross-platform generalization** with 81.1% accuracy on MMBench-GUI and 90.4% on ScreenSpot-v2 despite desktop-only training
 - **Data-efficient training** achieving state-of-the-art results with only 700K training examples vs 9M+ in prior work
 - **Strong agentic capabilities** reaching 50.6% overall success rate on OSWorld when paired with reasoning models
 - **Native tool-calling support** with built-in computer use action space for mouse, keyboard, and screen interactions
 
-![Performance Comparison](https://via.placeholder.com/800x400?text=GroundNext+Performance+Visualization)
-
 ## Model Overview
 
 **GroundNext-7B-V0** has the following characteristics:

@@ -53,15 +51,15 @@ For more details about the training methodology, dataset, and comprehensive benc
 | **ScreenSpot-Pro** | 29.7 | 38.1 | **52.9** |
 | **OSWorld-G** | 42.7 | 57.1 | **67.7** |
 | **UI-Vision** | 16.5 | 25.5 | **60.3** |
-| **Avg** | 29.6 | 40.2 | **60.3** |
+| **Avg (Desktop)** | 29.6 | 40.2 | **60.3** |
 
 ### Cross-Platform Generalization (Desktop, Mobile & Web)
 
-| | Qwen2.5-VL-7B | UI-TARS-72B | **GroundNext-7B** |
+| | Qwen2.5-VL-7B | UI-TARS-72B | **GroundNext-7B-V0** |
 | -------------------- | ------------- | ----------- | ----------------- |
 | **MMBench-GUI** | 33.9 | 74.3 | **81.1** |
 | **ScreenSpot-v2** | 88.8 | 90.3 | **90.4** |
-| **Avg** | 61.4 | 82.3 | **85.8** |
+| **Avg (Mobile/Web)** | 61.4 | 82.3 | **85.8** |
 
 
 ### Agentic Performance on OSWorld

@@ -98,31 +96,14 @@ The following code snippet demonstrates how to use GroundNext-7B-V0 for GUI elem
 
 ```python
 import torch
-from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
-from qwen_vl_utils.vision_process import smart_resize
+from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
 from PIL import Image
-
-# System prompt for computer use grounding
-GROUNDNEXT_SYSTEM_PROMPT = """You are a helpful assistant.
-
-# Tools
-
-You may call one or more functions to assist with the user query.
-
-You are provided with function signatures within <tools></tools> XML tags:
-<tools>
-{{"type": "function", "function": {{"name": "computer_use", "description": "Use a mouse and keyboard to interact with a computer, and take screenshots.\n* This is an interface to a desktop GUI. You do not have access to a terminal or applications menu. You must click on desktop icons to start applications.\n* Some applications may take time to start or process actions, so you may need to wait and take successive screenshots to see the results of your actions. E.g. if you click on Firefox and a window doesn't open, try wait and taking another screenshot.\n* The screen's resolution is {width}x{height}.\n* Whenever you intend to move the cursor to click on an element like an icon, you should consult a screenshot to determine the coordinates of the element before moving the cursor.\n* If you tried clicking on a program or link but it failed to load, even after waiting, try adjusting your cursor position so that the tip of the cursor visually falls on the element that you want to click.\n* Make sure to click any buttons, links, icons, etc with the cursor tip in the center of the element. Don't click boxes on their edges unless asked.", "parameters": {{"properties": {{"action": {{"description": "The action to perform. The available actions are:\n* `key`: Performs key down presses on the arguments passed in order, then performs key releases in reverse order.\n* `type`: Type a string of text on the keyboard.\n* `mouse_move`: Move the cursor to a specified (x, y) pixel coordinate on the screen.\n* `left_click`: Click the left mouse button.\n* `left_click_drag`: Click and drag the cursor to a specified (x, y) pixel coordinate on the screen.\n* `right_click`: Click the right mouse button.\n* `middle_click`: Click the middle mouse button.\n* `double_click`: Double-click the left mouse button.\n* `scroll`: Performs a scroll of the mouse scroll wheel.\n* `wait`: Wait specified seconds for the change to happen.\n* `terminate`: Terminate the current task and report its completion status.", "enum": ["key", "type", "mouse_move", "left_click", "left_click_drag", "right_click", "middle_click", "double_click", "scroll", "wait", "terminate"], "type": "string"}}, "keys": {{"description": "Required only by `action=key`.", "type": "array"}}, "text": {{"description": "Required only by `action=type`.", "type": "string"}}, "coordinate": {{"description": "(x, y): The x (pixels from the left edge) and y (pixels from the top edge) coordinates to move the mouse to. Required only by `action=mouse_move`, `action=left_click_drag`, `action=left_click`, `action=right_click`, `action=double_click`.", "type": "array"}}, "pixels": {{"description": "The amount of scrolling to perform. Positive values scroll up, negative values scroll down. Required only by `action=scroll`.", "type": "number"}}, "time": {{"description": "The seconds to wait. Required only by `action=wait`.", "type": "number"}}, "status": {{"description": "The status of the task. Required only by `action=terminate`.", "type": "string", "enum": ["success", "failure"]}}}}, "required": ["action"], "type": "object"}}}}}}
-</tools>
-
-For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
-<tool_call>
-{{"name": <function-name>, "arguments": <args-json-object>}}
-</tool_call>"""
+import groundcua
 
 model_name = "ServiceNow/GroundNext-7B-V0"
 
 # Load model and processor
-model = Qwen2VLForConditionalGeneration.from_pretrained(
+model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
     model_name,
     torch_dtype=torch.bfloat16,
     attn_implementation="flash_attention_2",

@@ -134,67 +115,26 @@ processor = AutoProcessor.from_pretrained(model_name)
 tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
 
 # Configure generation
-model.generation_config.temperature = 0.0
+model.generation_config.temperature = groundcua.DEFAULT_TEMPERATURE
 model.generation_config.do_sample = False
 model.generation_config.use_cache = True
 
 # Load and prepare image
-image_path = "./screenshot.png"
-image = Image.open(image_path).convert('RGB')
-width, height = image.size
-
-# Resize image using smart_resize
-resized_height, resized_width = smart_resize(
-    height,
-    width,
-    min_pixels=78_400,
-    max_pixels=6_000_000,
-)
-image = image.resize((resized_width, resized_height))
-
-# Create messages
+url = "https://huggingface.co/datasets/ServiceNow/GroundCUA/resolve/main/images/LibreOffice Writer/00c4bac63f95985ccd9a4210fa752e8a5148a5f69ecb8bcfb3e499f5a3becc0d.png"
+image = Image.open(io.BytesIO(urlopen(url).read()))
+image, (width, height) = groundcua.prepare_image(image)
+
+# Create messages and generate
 instruction = "Click on the 'Save' icon"
-messages = [
-    {
-        "role": "system",
-        "content": GROUNDNEXT_SYSTEM_PROMPT.format(width=resized_width, height=resized_height)
-    },
-    {
-        "role": "user",
-        "content": [
-            {"type": "image", "image": image},
-            {"type": "text", "text": instruction},
-        ],
-    }
-]
-
-# Prepare inputs
-input_text = tokenizer.apply_chat_template(
-    messages,
-    add_generation_prompt=True,
-    tokenize=False
-)
-
-inputs = processor(
-    text=[input_text],
-    images=[image],
-    videos=None,
-    padding=True,
-    return_tensors="pt",
-).to(model.device)
-
-# Generate response
-generated_ids = model.generate(**inputs, max_new_tokens=128)
-generated_ids_trimmed = [
-    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
-]
-
-response = processor.batch_decode(
-    generated_ids_trimmed,
-    skip_special_tokens=True,
-    clean_up_tokenization_spaces=False
-)[0]
+messages = groundcua.create_messages(instruction, image, width, height)
+
+input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+inputs = processor(text=[input_text], images=[image], videos=None, padding=True, return_tensors="pt").to(model.device)
+
+generated_ids = model.generate(**inputs, max_new_tokens=groundcua.DEFAULT_MAX_NEW_TOKENS)
+generated_ids_trimmed = [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]
 
+response = processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
 print(response)
 # Expected output: <tool_call>{"name": "computer_use", "arguments": {"action": "left_click", "coordinate": [x, y]}}</tool_call>
  ```
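
The updated snippet reads the screenshot with `io.BytesIO(urlopen(url).read())` but never imports `io` or `urlopen`, and the example URL contains a literal space (`LibreOffice Writer`) that `urlopen` will reject unless it is percent-encoded. Below is a minimal, hedged companion sketch (not part of the commit) showing the missing imports and how the `<tool_call>` output above can be consumed: it assumes the predicted coordinates are expressed in the resized image's pixel space, as in the removed `smart_resize`-based code, and the helper names `load_image`, `parse_tool_call`, and `to_original_coordinates` are illustrative rather than part of the `groundcua` package.

```python
# Hedged companion sketch for the usage example above; none of this is part of the commit.
import io
import json
import re
from urllib.parse import quote
from urllib.request import urlopen

from PIL import Image


def load_image(url: str) -> Image.Image:
    """Fetch a screenshot over HTTP, percent-encoding spaces such as 'LibreOffice Writer'."""
    safe_url = quote(url, safe=":/")  # urlopen rejects raw spaces in the path
    return Image.open(io.BytesIO(urlopen(safe_url).read())).convert("RGB")


def parse_tool_call(response: str) -> dict:
    """Extract the JSON payload from a <tool_call>...</tool_call> block in the model output."""
    match = re.search(r"<tool_call>\s*(\{.*\})\s*</tool_call>", response, re.DOTALL)
    if match is None:
        raise ValueError(f"No tool call found in: {response!r}")
    return json.loads(match.group(1))


def to_original_coordinates(coordinate, resized_size, original_size):
    """Map a predicted (x, y) from the resized image back to the original screenshot.

    Assumption: the model's coordinates are in the pixel space of the resized image
    whose resolution was reported to it in the system prompt.
    """
    (rw, rh), (ow, oh) = resized_size, original_size
    x, y = coordinate
    return round(x * ow / rw), round(y * oh / rh)


# Example usage with the variables from the README snippet
# ('original_image' stands for the screenshot before resizing; hypothetical variable):
#   call = parse_tool_call(response)
#   if call["name"] == "computer_use" and call["arguments"]["action"] == "left_click":
#       x, y = to_original_coordinates(call["arguments"]["coordinate"],
#                                      (width, height), original_image.size)
```

Other actions in the tool schema (`type`, `scroll`, `key`, `wait`, `terminate`) carry different argument keys (`text`, `pixels`, `keys`, `time`, `status`), so a dispatcher on `call["arguments"]["action"]` is the natural next step when wiring the model into an agent loop.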