OpenModels4all committed on
Commit 8844c5e · verified · 1 Parent(s): 89246f7

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,17 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+model-00002-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00004-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
+model-00003-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00005-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00011-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00012-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00001-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00006-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00007-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00008-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
+model-00010-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
+tokenizer.model filter=lfs diff=lfs merge=lfs -text
+model-00009-of-00012.safetensors filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,721 @@
---
license: other
license_name: health-ai-developer-foundations
license_link: https://developers.google.com/health-ai-developer-foundations/terms
library_name: transformers
pipeline_tag: image-text-to-text
language: en
extra_gated_heading: Access MedGemma on Hugging Face
extra_gated_prompt: >-
  To access MedGemma on Hugging Face, you're required to review and agree to
  [Health AI Developer Foundation's terms of use](https://developers.google.com/health-ai-developer-foundations/terms).
  To do this, please ensure you're logged in to Hugging Face and click below.
  Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- medical
- x-ray
- pathology
- dermatology
- fundus
- radiology report generation
- chest-x-ray
- medical-embeddings
- image-classification
- zero-shot-image-classification
- image-feature-extraction
- image-text-to-text
base_model: google/gemma-3-27b-pt
---

# MedGemma model card

**Model documentation:** [MedGemma](https://developers.google.com/health-ai-developer-foundations/medgemma)

**Resources:**

* Model on Google Cloud Model Garden: [MedGemma](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/medgemma)
* Models on Hugging Face: [Collection](https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4)
* GitHub repository (supporting code, Colab notebooks, discussions, and issues): [MedGemma](https://github.com/google-health/medgemma)
* Quick start notebook: [GitHub](https://github.com/google-health/medgemma/blob/main/notebooks/quick_start_with_hugging_face.ipynb)
* Fine-tuning notebook: [GitHub](https://github.com/google-health/medgemma/blob/main/notebooks/fine_tune_with_hugging_face.ipynb)
* Concept applications built using MedGemma: [Collection](https://huggingface.co/collections/google/medgemma-concept-apps-686ea036adb6d51416b0928a)
* Support: See [Contact](https://developers.google.com/health-ai-developer-foundations/medgemma/get-started.md#contact)
* License: The use of MedGemma is governed by the [Health AI Developer Foundations terms of use](https://developers.google.com/health-ai-developer-foundations/terms).

**Author:** Google

## Model information

This section describes the MedGemma model and how to use it.

### Description

MedGemma is a collection of [Gemma 3](https://ai.google.dev/gemma/docs/core) variants that are trained for performance on medical text and image comprehension. Developers can use MedGemma to accelerate building healthcare-based AI applications. MedGemma currently comes in three variants: a 4B multimodal version and 27B text-only and multimodal versions.

Both MedGemma multimodal versions utilize a [SigLIP](https://arxiv.org/abs/2303.15343) image encoder that has been specifically pre-trained on a variety of de-identified medical data, including chest X-rays, dermatology images, ophthalmology images, and histopathology slides. Their LLM components are trained on a diverse set of medical data, including medical text, medical question-answer pairs, FHIR-based electronic health record data (27B multimodal only), radiology images, histopathology patches, ophthalmology images, and dermatology images.

MedGemma 4B is available in both pre-trained (suffix: `-pt`) and instruction-tuned (suffix: `-it`) versions. The instruction-tuned version is a better starting point for most applications. The pre-trained version is available for those who want to experiment more deeply with the models.

MedGemma 27B multimodal is pre-trained on medical image and medical record comprehension tasks, while MedGemma 27B text-only has been trained exclusively on medical text. Both models have been optimized for inference-time computation on medical reasoning. Because it is trained exclusively on text, the text-only variant has slightly higher performance on some text benchmarks than MedGemma 27B multimodal. Users who want a single model for medical text, medical record, and medical image tasks are better served by MedGemma 27B multimodal; those with text-only use cases may prefer the text-only variant. Both MedGemma 27B variants are available only in instruction-tuned versions.

MedGemma variants have been evaluated on a range of clinically relevant benchmarks to illustrate their baseline performance. These evaluations are based on both open benchmark datasets and curated datasets. Developers can fine-tune MedGemma variants for improved performance. Consult the [Intended use](#intended-use) section below for more details.

MedGemma is optimized for medical applications that involve a text generation component. For medical image-based applications that do not involve text generation, such as data-efficient classification, zero-shot classification, or content-based or semantic image retrieval, the [MedSigLIP image encoder](https://developers.google.com/health-ai-developer-foundations/medsiglip/model-card) is recommended. MedSigLIP is based on the same image encoder that powers MedGemma.

Please consult the [MedGemma Technical Report](https://arxiv.org/abs/2507.05201) for more details.

### How to use

Below are some example code snippets to help you quickly get started running the model locally on GPU. If you want to use the model at scale, we recommend that you create a production version using [Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/medgemma).

First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.

```shell
$ pip install -U transformers
```

**Run model with the `pipeline` API**

```py
from transformers import pipeline
from PIL import Image
import requests
import torch

pipe = pipeline(
    "image-text-to-text",
    model="google/medgemma-27b-it",
    torch_dtype=torch.bfloat16,
    device="cuda",
)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful medical assistant."}]
    },
    {
        "role": "user",
        "content": [{"type": "text", "text": "How do you differentiate bacterial from viral pneumonia?"}]
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])

# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/Chest_Xray_PA_3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are an expert radiologist."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this X-ray"},
            {"type": "image", "image": image}
        ]
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```

**Run the model directly**

```py
# pip install accelerate
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image
import requests
import torch

model_id = "google/medgemma-27b-it"

model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful medical assistant."}]
    },
    {
        "role": "user",
        "content": [{"type": "text", "text": "How do you differentiate bacterial from viral pneumonia?"}]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=200, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)

# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/Chest_Xray_PA_3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are an expert radiologist."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this X-ray"},
            {"type": "image", "image": image}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=200, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```

### Examples

See the following Colab notebooks for examples of how to use MedGemma:

* To give the model a quick try, running it locally with weights from Hugging Face, see the [Quick start notebook in Colab](https://colab.research.google.com/github/google-health/medgemma/blob/main/notebooks/quick_start_with_hugging_face.ipynb). Note that you will need to use Colab Enterprise to obtain adequate GPU resources to run either 27B model without quantization.

* For an example of fine-tuning the 4B model, see the [Fine-tuning notebook in Colab](https://colab.research.google.com/github/google-health/medgemma/blob/main/notebooks/fine_tune_with_hugging_face.ipynb). The 27B models can be fine-tuned in a similar manner but will require more time and compute resources than the 4B model.

### Model architecture overview

The MedGemma model is built based on [Gemma 3](https://ai.google.dev/gemma/) and uses the same decoder-only transformer architecture as Gemma 3. To read more about the architecture, consult the Gemma 3 [model card](https://ai.google.dev/gemma/docs/core/model_card_3).

### Technical specifications

* **Model type**: Decoder-only Transformer architecture, see the [Gemma 3 Technical Report](https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf)
* **Input modalities**: **4B and 27B multimodal**: Text, vision; **27B text**: Text only
* **Output modality**: Text only (all models)
* **Attention mechanism**: Grouped-query attention (GQA)
* **Context length**: Supports long context, at least 128K tokens
* **Key publication**: [https://arxiv.org/abs/2507.05201](https://arxiv.org/abs/2507.05201)
* **Model created**: July 9, 2025
* **Model version**: 1.0.0

### Citation

When using this model, please cite: Sellergren et al. "MedGemma Technical Report." *arXiv preprint arXiv:2507.05201* (2025).

```none
@article{sellergren2025medgemma,
  title={MedGemma Technical Report},
  author={Sellergren, Andrew and Kazemzadeh, Sahar and Jaroensri, Tiam and Kiraly, Atilla and Traverse, Madeleine and Kohlberger, Timo and Xu, Shawn and Jamil, Fayaz and Hughes, Cían and Lau, Charles and others},
  journal={arXiv preprint arXiv:2507.05201},
  year={2025}
}
```

### Inputs and outputs

**Input**:

* Text string, such as a question or prompt
* Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
* Total input length of 128K tokens

**Output**:

* Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
* Total output length of 8192 tokens

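Since each image consumes a fixed 256 tokens of the shared context window, the text budget shrinks linearly with the number of images. A back-of-the-envelope sketch (not part of the MedGemma API; the `remaining_text_budget` helper and the nominal 128,000-token figure are illustrative assumptions):

```python
# Illustrative helper: estimate how many text tokens remain in MedGemma's
# context window after reserving space for image inputs. The model card
# states 256 tokens per image; 128_000 is a nominal reading of "128K".
CONTEXT_LENGTH = 128_000
TOKENS_PER_IMAGE = 256

def remaining_text_budget(num_images: int) -> int:
    """Tokens left for text after reserving space for the images."""
    image_tokens = num_images * TOKENS_PER_IMAGE
    if image_tokens > CONTEXT_LENGTH:
        raise ValueError("too many images for the context window")
    return CONTEXT_LENGTH - image_tokens

print(remaining_text_budget(4))  # 4 images -> 126976 tokens left for text
```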

### Performance and validation

MedGemma was evaluated across a range of different multimodal classification, report generation, visual question answering, and text-based tasks.

### Key performance metrics

#### Imaging evaluations

The multimodal performance of MedGemma 4B and 27B multimodal was evaluated across a range of benchmarks, focusing on radiology, dermatology, histopathology, ophthalmology, and multimodal clinical reasoning.

MedGemma 4B outperforms the base Gemma 3 4B model across all tested multimodal health benchmarks.

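The classification rows below report macro F1, i.e., the unweighted mean of per-condition F1 scores. A minimal pure-Python sketch of that metric (illustrative only; the helper functions here are not the evaluation code behind these numbers):

```python
# Illustrative macro-F1 computation over per-condition confusion counts.
def f1(tp: int, fp: int, fn: int) -> float:
    """Binary F1 from true-positive, false-positive, false-negative counts."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_f1(per_condition_counts) -> float:
    """Unweighted mean of per-condition F1 scores.

    per_condition_counts: list of (tp, fp, fn) tuples, one per condition.
    """
    scores = [f1(*c) for c in per_condition_counts]
    return sum(scores) / len(scores)

# Two conditions: F1 = 0.8 and F1 = 0.667, so macro F1 = 0.733.
print(round(macro_f1([(8, 2, 2), (5, 0, 5)]), 3))  # 0.733
```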
| Task and metric | Gemma 3 4B | MedGemma 4B | Gemma 3 27B | MedGemma 27B multimodal |
| :---- | :---- | :---- | :---- | :---- |
| **Medical image classification** | | | | |
| MIMIC CXR** - macro F1 for top 5 conditions | 81.2 | 88.9 | 71.7 | 90.0 |
| CheXpert CXR - macro F1 for top 5 conditions | 32.6 | 48.1 | 26.2 | 49.9 |
| CXR14 - macro F1 for 3 conditions | 32.0 | 50.1 | 31.4 | 45.3 |
| PathMCQA* (histopathology, internal**) - Accuracy | 37.1 | 69.8 | 42.2 | 71.6 |
| US-DermMCQA* - Accuracy | 52.5 | 71.8 | 66.9 | 71.7 |
| EyePACS* (fundus, internal) - Accuracy | 14.4 | 64.9 | 20.3 | 75.3 |
| **Visual question answering** | | | | |
| SLAKE (radiology) - Tokenized F1 | 40.2 | 72.3 | 42.5 | 70.0 |
| VQA-RAD*** (radiology) - Tokenized F1 | 33.6 | 49.9 | 42.7 | 46.7 |
| **Knowledge and reasoning** | | | | |
| MedXpertQA (text + multimodal questions) - Accuracy | 16.4 | 18.8 | 22.0 | 26.8 |

*Internal datasets. US-DermMCQA is described in [Liu (2020, Nature medicine)](https://www.nature.com/articles/s41591-020-0842-3), presented as a 4-way MCQ per example for skin condition classification. PathMCQA is based on multiple datasets, presented as 3-9 way MCQ per example for identification, grading, and subtype for breast, cervical, and prostate cancer. EyePACS is a dataset of fundus images with classification labels based on 5-level diabetic retinopathy severity (None, Mild, Moderate, Severe, Proliferative). More details in the [MedGemma Technical Report](https://arxiv.org/abs/2507.05201).

**Based on radiologist adjudicated labels, described in [Yang (2024, arXiv)](https://arxiv.org/pdf/2405.03162) Section A.1.1.

***Based on "balanced split," described in [Yang (2024, arXiv)](https://arxiv.org/pdf/2405.03162).

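The visual question answering rows above use tokenized F1, which scores a predicted answer by its token overlap with the reference answer (SQuAD-style). A sketch of that idea in plain Python (illustrative; this is not the exact evaluation script used for the numbers above, and the normalization here is just lowercasing and whitespace splitting):

```python
# Illustrative tokenized F1: harmonic mean of token-overlap precision/recall.
from collections import Counter

def tokenized_f1(prediction: str, reference: str) -> float:
    pred = prediction.lower().split()
    ref = reference.lower().split()
    common = Counter(pred) & Counter(ref)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# All 4 predicted tokens appear in the 6-token reference: P=1.0, R=0.667.
print(round(tokenized_f1("left lower lobe opacity",
                         "opacity in the left lower lobe"), 2))  # 0.8
```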

#### Chest X-ray report generation

MedGemma chest X-ray (CXR) report generation performance was evaluated on [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/) using the [RadGraph F1 metric](https://arxiv.org/abs/2106.14463). We compare the MedGemma pre-trained checkpoint with our previous best model for CXR report generation, [PaliGemma 2](https://arxiv.org/abs/2412.03555).

| Metric | MedGemma 4B (pre-trained) | MedGemma 4B (tuned for CXR) | MedGemma 27B multimodal (pre-trained)* | PaliGemma 2 3B (tuned for CXR) | PaliGemma 2 10B (tuned for CXR) |
| :---- | :---- | :---- | :---- | :---- | :---- |
| **Chest X-ray report generation** | | | | | |
| MIMIC CXR - RadGraph F1 | 29.5 | 30.3 | 27.0 | 28.8 | 29.5 |

*Not released

The instruction-tuned versions of MedGemma 4B and MedGemma 27B achieve lower scores (21.9 and 21.3, respectively) due to differences in reporting style compared to the MIMIC ground truth reports. Further fine-tuning on MIMIC reports enables users to achieve improved performance, as shown by the MedGemma 4B model that was tuned for CXR.

#### Text evaluations

MedGemma 4B and text-only MedGemma 27B were evaluated across a range of text-only benchmarks for medical knowledge and reasoning.

The MedGemma models outperform their respective base Gemma models across all tested text-only health benchmarks.

| Metric | Gemma 3 4B | MedGemma 4B | Gemma 3 27B | MedGemma 27B text-only | MedGemma 27B multimodal |
| :---- | :---- | :---- | :---- | :---- | :---- |
| MedQA (4-op) | 50.7 | 64.4 | 74.9 | 89.8 (best-of-5) 87.7 (0-shot) | 87.0 (best-of-5) 85.3 (0-shot) |
| MedMCQA | 45.4 | 55.7 | 62.6 | 74.2 | 70.2 |
| PubMedQA | 68.4 | 73.4 | 73.4 | 76.8 | 77.2 |
| MMLU Med | 67.2 | 70.0 | 83.3 | 87.0 | 86.2 |
| MedXpertQA (text only) | 11.6 | 14.2 | 15.7 | 25.7 | 23.7 |
| AfriMed-QA (25 question test set) | 48.0 | 52.0 | 72.0 | 84.0 | 72.0 |

For all MedGemma 27B results, [test-time scaling](https://arxiv.org/abs/2501.19393) is used to improve performance.

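One common form of test-time scaling for multiple-choice QA, consistent with the "best-of-5" figures above, is self-consistency: sample several completions and majority-vote on the final answer. A minimal sketch (illustrative; the exact scheme used for the 27B results is described in the linked paper, not here):

```python
# Illustrative best-of-n aggregation: majority vote over sampled answers.
from collections import Counter

def majority_vote(sampled_answers):
    """Return the most common answer among sampled completions."""
    counts = Counter(sampled_answers)
    return counts.most_common(1)[0][0]

# In practice each element would be the option letter parsed from one
# sampled model completion for the same question.
print(majority_vote(["B", "A", "B", "B", "C"]))  # "B" wins 3 of 5
```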

#### Medical record evaluations

All models were evaluated on a question-answer dataset derived from synthetic FHIR data to answer questions about patient records. MedGemma 27B multimodal's FHIR-specific training gives it a significant improvement over the other MedGemma and Gemma models.

| Metric | Gemma 3 4B | MedGemma 4B | Gemma 3 27B | MedGemma 27B text-only | MedGemma 27B multimodal |
| :---- | :---- | :---- | :---- | :---- | :---- |
| EHRQA | 70.9 | 67.6 | 84.2 | 86.3 | 90.5 |

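To give a concrete sense of the structured input this task involves, the sketch below extracts active conditions from a tiny, hand-written FHIR-style Bundle dictionary (illustrative only; the bundle, the `active_conditions` helper, and the field selection are assumptions modeled on FHIR's resource layout, not the EHRQA evaluation code):

```python
# Illustrative: walk a (synthetic, Synthea-style) FHIR Bundle dict and
# collect the names of Condition resources whose clinicalStatus is active.
bundle = {
    "resourceType": "Bundle",
    "entry": [
        {"resource": {"resourceType": "Condition",
                      "code": {"text": "Hypertension"},
                      "clinicalStatus": {"coding": [{"code": "active"}]}}},
        {"resource": {"resourceType": "Observation",
                      "code": {"text": "Heart rate"}}},
    ],
}

def active_conditions(bundle):
    names = []
    for entry in bundle.get("entry", []):
        res = entry["resource"]
        if res.get("resourceType") != "Condition":
            continue
        codes = res.get("clinicalStatus", {}).get("coding", [])
        if any(c.get("code") == "active" for c in codes):
            names.append(res["code"]["text"])
    return names

print(active_conditions(bundle))  # ['Hypertension']
```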

### Ethics and safety evaluation

#### Evaluation approach

Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:

* **Child safety**: Evaluation of text-to-text and image-to-text prompts covering child safety policies, including child sexual abuse and exploitation.
* **Content safety**: Evaluation of text-to-text and image-to-text prompts covering safety policies, including harassment, violence and gore, and hate speech.
* **Representational harms**: Evaluation of text-to-text and image-to-text prompts covering safety policies, including bias, stereotyping, and harmful associations or inaccuracies.
* **General medical harms**: Evaluation of text-to-text and image-to-text prompts covering safety policies, including information quality and harmful associations or inaccuracies.

In addition to development-level evaluations, we conduct "assurance evaluations," which are our "arms-length" internal evaluations for responsibility governance decision making. They are conducted separately from the model development team to inform decision making about release. High-level findings are fed back to the model team, but prompt sets are held out to prevent overfitting and preserve the results' ability to inform decision making. Notable assurance evaluation results are reported to our Responsibility & Safety Council as part of release review.

#### Evaluation results

For all areas of safety testing, we saw safe levels of performance across the categories of child safety, content safety, and representational harms. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For text-to-text and image-to-text prompts, and across both MedGemma model sizes, the model produced minimal policy violations. A limitation of our evaluations was that they included primarily English-language prompts.

## Data card

### Dataset overview

#### Training

The base Gemma models are pre-trained on a large corpus of text and code data. MedGemma multimodal variants utilize a [SigLIP](https://arxiv.org/abs/2303.15343) image encoder that has been specifically pre-trained on a variety of de-identified medical data, including radiology images, histopathology images, ophthalmology images, and dermatology images. Their LLM component is trained on a diverse set of medical data, including medical text, medical question-answer pairs, FHIR-based electronic health record data (27B multimodal only), radiology images, histopathology patches, ophthalmology images, and dermatology images.

#### Evaluation

MedGemma models have been evaluated on a comprehensive set of clinically relevant benchmarks, including over 22 datasets across 6 different tasks and 4 medical image modalities. These benchmarks include both open and internal datasets.

#### Source

MedGemma utilizes a combination of public and private datasets.

This model was trained on diverse public datasets including MIMIC-CXR (chest X-rays and reports), Chest ImaGenome (bounding boxes linking image findings with anatomical regions for MIMIC-CXR; MedGemma 27B multimodal only), SLAKE (multimodal medical images and questions), PAD-UFES-20 (skin lesion images and data), SCIN (dermatology images), TCGA (cancer genomics data), CAMELYON (lymph node histopathology images), PMC-OA (biomedical literature with images), and Mendeley Digital Knee X-Ray (knee X-rays).

Additionally, multiple diverse proprietary datasets were licensed and incorporated (described next).

### Data ownership and documentation

* [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/): MIT Laboratory for Computational Physiology and Beth Israel Deaconess Medical Center (BIDMC).
* [Slake-VQA](https://www.med-vqa.com/slake/): The Hong Kong Polytechnic University (PolyU), with collaborators including West China Hospital of Sichuan University and Sichuan Academy of Medical Sciences / Sichuan Provincial People's Hospital.
* [PAD-UFES-20](https://pmc.ncbi.nlm.nih.gov/articles/PMC7479321/): Federal University of Espírito Santo (UFES), Brazil, through its Dermatological and Surgical Assistance Program (PAD).
* [SCIN](https://github.com/google-research-datasets/scin): A collaboration between Google Health and Stanford Medicine.
* [TCGA](https://portal.gdc.cancer.gov/) (The Cancer Genome Atlas): A joint effort of the National Cancer Institute and the National Human Genome Research Institute. Data from TCGA are available via the Genomic Data Commons (GDC).
* [CAMELYON](https://camelyon17.grand-challenge.org/Data/): The data was collected from Radboud University Medical Center and University Medical Center Utrecht in the Netherlands.
* [PMC-OA (PubMed Central Open Access Subset)](https://catalog.data.gov/dataset/pubmed-central-open-access-subset-pmc-oa): Maintained by the National Library of Medicine (NLM) and National Center for Biotechnology Information (NCBI), which are part of the NIH.
* [MedQA](https://arxiv.org/pdf/2009.13081): This dataset was created by a team of researchers led by Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits.
* [Mendeley Digital Knee X-Ray](https://data.mendeley.com/datasets/t9ndx37v5h/1): This dataset is from Rani Channamma University, and is hosted on Mendeley Data.
* [AfriMed-QA](https://afrimedqa.com/): This data was developed and led by multiple collaborating organizations and researchers, including key contributors Intron Health, SisonkeBiotik, BioRAMP, Georgia Institute of Technology, and MasakhaneNLP.
* [VQA-RAD](https://www.nature.com/articles/sdata2018251): This dataset was created by a research team led by Jason J. Lau, Soumya Gayen, Asma Ben Abacha, and Dina Demner-Fushman and their affiliated institutions (the US National Library of Medicine and National Institutes of Health).
* [Chest ImaGenome](https://physionet.org/content/chest-imagenome/1.0.0/): IBM Research.
* [MedExpQA](https://www.sciencedirect.com/science/article/pii/S0933365724001805): This dataset was created by researchers at the HiTZ Center (Basque Center for Language Technology and Artificial Intelligence).
* [MedXpertQA](https://huggingface.co/datasets/TsinghuaC3I/MedXpertQA): This dataset was developed by researchers at Tsinghua University (Beijing, China) and Shanghai Artificial Intelligence Laboratory (Shanghai, China).
* [HealthSearchQA](https://huggingface.co/datasets/katielink/healthsearchqa): This dataset consists of 3,173 commonly searched consumer questions.

In addition to the public datasets listed above, MedGemma was also trained on de-identified, licensed datasets or datasets collected internally at Google from consented participants.

* **Radiology dataset 1:** De-identified dataset of different CT studies across body parts from a US-based radiology outpatient diagnostic center network.
* **Ophthalmology dataset 1 (EyePACS):** De-identified dataset of fundus images from diabetic retinopathy screening.
* **Dermatology dataset 1:** De-identified dataset of teledermatology skin condition images (both clinical and dermatoscopic) from Colombia.
* **Dermatology dataset 2:** De-identified dataset of skin cancer images (both clinical and dermatoscopic) from Australia.
* **Dermatology dataset 3:** De-identified dataset of non-diseased skin images from an internal data collection effort.
* **Pathology dataset 1:** De-identified dataset of histopathology H&E whole slide images created in collaboration with an academic research hospital and biobank in Europe. Comprises de-identified colon, prostate, and lymph nodes.
* **Pathology dataset 2:** De-identified dataset of lung histopathology H&E and IHC whole slide images created by a commercial biobank in the United States.
* **Pathology dataset 3:** De-identified dataset of prostate and lymph node H&E and IHC histopathology whole slide images created by a contract research organization in the United States.
* **Pathology dataset 4:** De-identified dataset of histopathology whole slide images created in collaboration with a large, tertiary teaching hospital in the United States. Comprises a diverse set of tissue and stain types, predominantly H&E.
* **EHR dataset 1:** Question/answer dataset drawn from synthetic FHIR records created by [Synthea](https://synthetichealth.github.io/synthea/). The test set includes 19 unique patients with 200 questions per patient divided into 10 different categories.
567
+ ### Data citation
+
+ * **MIMIC-CXR:** Johnson, A., Pollard, T., Mark, R., Berkowitz, S., & Horng,
+ S. (2024). MIMIC-CXR Database (version 2.1.0). PhysioNet.
+ [https://physionet.org/content/mimic-cxr/2.1.0/](https://physionet.org/content/mimic-cxr/2.1.0/)
+ *and* Johnson, Alistair E. W., Tom J. Pollard, Seth J. Berkowitz, Nathaniel
+ R. Greenbaum, Matthew P. Lungren, Chih-Ying Deng, Roger G. Mark, and Steven
+ Horng. 2019. "MIMIC-CXR, a de-Identified Publicly Available Database of
+ Chest Radiographs with Free-Text Reports." *Scientific Data 6* (1): 1–8.
+
+ * **SLAKE:** Liu, Bo, Li-Ming Zhan, Li Xu, Lin Ma, Yan Yang, and Xiao-Ming Wu.
+ 2021. "SLAKE: A Semantically-Labeled Knowledge-Enhanced Dataset for Medical
+ Visual Question Answering."
+ [http://arxiv.org/abs/2102.09542](http://arxiv.org/abs/2102.09542).
+
+ * **PAD-UFES-20:** Pacheco, Andre GC, et al. "PAD-UFES-20: A skin lesion
+ dataset composed of patient data and clinical images collected from
+ smartphones." *Data in Brief* 32 (2020): 106221.
+
+ * **SCIN:** Ward, Abbi, Jimmy Li, Julie Wang, Sriram Lakshminarasimhan, Ashley
+ Carrick, Bilson Campana, Jay Hartford, et al. 2024. "Creating an Empirical
+ Dermatology Dataset Through Crowdsourcing With Web Search Advertisements."
+ *JAMA Network Open 7* (11): e2446615–e2446615.
+
+ * **TCGA:** The results shown here are in whole or part based upon data
+ generated by the TCGA Research Network:
+ [https://www.cancer.gov/tcga](https://www.cancer.gov/tcga).
+
+ * **CAMELYON16:** Ehteshami Bejnordi, Babak, Mitko Veta, Paul Johannes van
+ Diest, Bram van Ginneken, Nico Karssemeijer, Geert Litjens, Jeroen A. W. M.
+ van der Laak, et al. 2017. "Diagnostic Assessment of Deep Learning
+ Algorithms for Detection of Lymph Node Metastases in Women With Breast
+ Cancer." *JAMA 318* (22): 2199–2210.
+
+ * **Mendeley Digital Knee X-Ray:** Gornale, Shivanand; Patravali, Pooja
+ (2020), "Digital Knee X-ray Images", Mendeley Data, V1, doi:
+ 10.17632/t9ndx37v5h.1
+
+ * **VQA-RAD:** Lau, Jason J., Soumya Gayen, Asma Ben Abacha, and Dina
+ Demner-Fushman. 2018. "A Dataset of Clinically Generated Visual Questions
+ and Answers about Radiology Images." *Scientific Data 5* (1): 1–10.
+
+ * **Chest ImaGenome:** Wu, J., Agu, N., Lourentzou, I., Sharma, A., Paguio,
+ J., Yao, J. S., Dee, E. C., Mitchell, W., Kashyap, S., Giovannini, A., Celi,
+ L. A., Syeda-Mahmood, T., & Moradi, M. (2021). Chest ImaGenome Dataset
+ (version 1.0.0). PhysioNet. RRID:SCR\_007345.
+ [https://doi.org/10.13026/wv01-y230](https://doi.org/10.13026/wv01-y230)
+
+ * **MedQA:** Jin, Di, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang,
+ and Peter Szolovits. 2020. "What Disease Does This Patient Have? A
+ Large-Scale Open Domain Question Answering Dataset from Medical Exams."
+ [http://arxiv.org/abs/2009.13081](http://arxiv.org/abs/2009.13081).
+
+ * **AfriMed-QA:** Olatunji, Tobi, Charles Nimo, Abraham Owodunni, Tassallah
+ Abdullahi, Emmanuel Ayodele, Mardhiyah Sanni, Chinemelu Aka, et al. 2024.
+ "AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering
+ Benchmark Dataset."
+ [http://arxiv.org/abs/2411.15640](http://arxiv.org/abs/2411.15640).
+
+ * **MedExpQA:** Alonso, I., Oronoz, M., & Agerri, R. (2024). MedExpQA:
+ Multilingual Benchmarking of Large Language Models for Medical Question
+ Answering. *arXiv preprint arXiv:2404.05590*. Retrieved from
+ [https://arxiv.org/abs/2404.05590](https://arxiv.org/abs/2404.05590)
+
+ * **MedXpertQA:** Zuo, Yuxin, Shang Qu, Yifei Li, Zhangren Chen, Xuekai Zhu,
+ Ermo Hua, Kaiyan Zhang, Ning Ding, and Bowen Zhou. 2025. "MedXpertQA:
+ Benchmarking Expert-Level Medical Reasoning and Understanding."
+ [http://arxiv.org/abs/2501.18362](http://arxiv.org/abs/2501.18362).
+
+ ### De-identification/anonymization
+
+ Google and its partners utilize datasets that have been rigorously anonymized or
+ de-identified to ensure the protection of individual research participants and
+ patient privacy.
+
+ ## Implementation information
+
+ Details about the model internals.
+
+ ### Software
+
+ Training was done using [JAX](https://github.com/jax-ml/jax).
+
+ JAX allows researchers to take advantage of the latest generation of hardware,
+ including TPUs, for faster and more efficient training of large models.
+
+ ## Use and limitations
+
+ ### Intended use
+
+ MedGemma is an open multimodal generative AI model intended to be used as a
+ starting point that enables more efficient development of downstream healthcare
+ applications involving medical text and images. MedGemma is intended for
+ developers in the life sciences and healthcare space. Developers are responsible
+ for training, adapting, and making meaningful changes to MedGemma to accomplish
+ their specific intended use. MedGemma models can be fine-tuned by developers
+ using their own proprietary data for their specific tasks or solutions.
+
+ MedGemma is based on Gemma 3 and has been further trained on medical images and
+ text. MedGemma enables further development in any medical context (image and
+ textual); however, the model was pre-trained using chest X-ray, pathology,
+ dermatology, and fundus images. Examples of tasks within MedGemma's training
+ include visual question answering pertaining to medical images, such as
+ radiographs, or providing answers to textual medical questions. Full details of
+ all the tasks MedGemma has been evaluated on can be found in the [MedGemma
+ Technical Report](https://arxiv.org/abs/2507.05201).
+
+ ### Benefits
+
+ * Provides strong baseline medical image and text comprehension for models of
+ its size.
+ * This strong performance makes it efficient to adapt for downstream
+ healthcare-based use cases, compared to models of similar size without
+ medical data pre-training.
+ * This adaptation may involve prompt engineering, grounding, agentic
+ orchestration, or fine-tuning depending on the use case, baseline validation
+ requirements, and desired performance characteristics.
+
+ ### Limitations
+
+ MedGemma is not intended to be used without appropriate validation, adaptation,
+ and/or meaningful modification by developers for their specific use case.
+ The outputs generated by MedGemma are not intended to directly inform clinical
+ diagnosis, patient management decisions, treatment recommendations, or any other
+ direct clinical practice applications. Performance benchmarks highlight baseline
+ capabilities on relevant benchmarks, but even for image and text domains that
+ constitute a substantial portion of training data, inaccurate model output is
+ possible. All outputs from MedGemma should be considered preliminary and require
+ independent verification, clinical correlation, and further investigation
+ through established research and development methodologies.
+
+ MedGemma's multimodal capabilities have been primarily evaluated on single-image
+ tasks. MedGemma has not been evaluated in use cases that involve comprehension
+ of multiple images.
+
+ MedGemma has not been evaluated or optimized for multi-turn applications.
+
+ MedGemma's training may make it more sensitive to the specific prompt used than
+ Gemma 3.
+
+ When adapting MedGemma, developers should consider the following:
+
+ * **Bias in validation data:** As with any research, developers should ensure
+ that any downstream application is validated to understand performance using
+ data that is appropriately representative of the intended use setting for
+ the specific application (e.g., age, sex, gender, condition, imaging device,
+ etc.).
+ * **Data contamination concerns:** When evaluating the generalization
+ capabilities of a large model like MedGemma in a medical context, there is a
+ risk of data contamination, where the model might have inadvertently seen
+ related medical information during its pre-training, potentially
+ overestimating its true ability to generalize to novel medical concepts.
+ Developers should validate MedGemma on datasets not publicly available or
+ otherwise made available to non-institutional researchers to mitigate this
+ risk.
added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "<image_soft_token>": 262144
+ }
chat_template.jinja ADDED
@@ -0,0 +1,47 @@
+ {{ bos_token }}
+ {%- if messages[0]['role'] == 'system' -%}
+ {%- if messages[0]['content'] is string -%}
+ {%- set first_user_prefix = messages[0]['content'] + '
+
+ ' -%}
+ {%- else -%}
+ {%- set first_user_prefix = messages[0]['content'][0]['text'] + '
+
+ ' -%}
+ {%- endif -%}
+ {%- set loop_messages = messages[1:] -%}
+ {%- else -%}
+ {%- set first_user_prefix = "" -%}
+ {%- set loop_messages = messages -%}
+ {%- endif -%}
+ {%- for message in loop_messages -%}
+ {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}
+ {{ raise_exception("Conversation roles must alternate user/assistant/user/assistant/...") }}
+ {%- endif -%}
+ {%- if (message['role'] == 'assistant') -%}
+ {%- set role = "model" -%}
+ {%- else -%}
+ {%- set role = message['role'] -%}
+ {%- endif -%}
+ {{ '<start_of_turn>' + role + '
+ ' + (first_user_prefix if loop.first else "") }}
+ {%- if message['content'] is string -%}
+ {{ message['content'] | trim }}
+ {%- elif message['content'] is iterable -%}
+ {%- for item in message['content'] -%}
+ {%- if item['type'] == 'image' -%}
+ {{ '<start_of_image>' }}
+ {%- elif item['type'] == 'text' -%}
+ {{ item['text'] | trim }}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- else -%}
+ {{ raise_exception("Invalid content type") }}
+ {%- endif -%}
+ {{ '<end_of_turn>
+ ' }}
+ {%- endfor -%}
+ {%- if add_generation_prompt -%}
+ {{'<start_of_turn>model
+ '}}
+ {%- endif -%}
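The template above implements the Gemma 3 turn format: an optional system message is folded into the first user turn, the `assistant` role renders as `model`, and each turn is wrapped in `<start_of_turn>`/`<end_of_turn>`. In practice you would render it via `processor.apply_chat_template(...)`; as an illustration only, here is a plain-Python sketch (not part of this repo) of the text-only path:

```python
# Plain-Python sketch of the text-only path of the chat template above.
# The function name and structure are illustrative, not part of the repo.

def render_chat(messages, add_generation_prompt=False, bos_token="<bos>"):
    parts = [bos_token]
    first_user_prefix = ""
    # A leading system message is folded into the first user turn.
    if messages and messages[0]["role"] == "system":
        first_user_prefix = messages[0]["content"] + "\n\n"
        messages = messages[1:]
    for i, msg in enumerate(messages):
        # "assistant" renders as "model"; other roles pass through.
        role = "model" if msg["role"] == "assistant" else msg["role"]
        prefix = first_user_prefix if i == 0 else ""
        parts.append(
            f"<start_of_turn>{role}\n{prefix}{msg['content'].strip()}<end_of_turn>\n"
        )
    if add_generation_prompt:
        parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = render_chat(
    [{"role": "system", "content": "You are a radiologist."},
     {"role": "user", "content": "Describe this X-ray."}],
    add_generation_prompt=True,
)
```

The real template additionally handles multimodal content lists, emitting `<start_of_image>` for image items, and raises if user/assistant roles do not alternate.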
config.json ADDED
@@ -0,0 +1,124 @@
+ {
+ "architectures": [
+ "Gemma3ForConditionalGeneration"
+ ],
+ "boi_token_index": 255999,
+ "eoi_token_index": 256000,
+ "eos_token_id": [
+ 1,
+ 106
+ ],
+ "image_token_index": 262144,
+ "initializer_range": 0.02,
+ "mm_tokens_per_image": 256,
+ "model_type": "gemma3",
+ "text_config": {
+ "attention_bias": false,
+ "attention_dropout": 0.0,
+ "attn_logit_softcapping": null,
+ "final_logit_softcapping": null,
+ "head_dim": 128,
+ "hidden_activation": "gelu_pytorch_tanh",
+ "hidden_size": 5376,
+ "initializer_range": 0.02,
+ "intermediate_size": 21504,
+ "layer_types": [
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "full_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "full_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "full_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "full_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "full_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "full_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "full_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "full_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "full_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "sliding_attention",
+ "full_attention",
+ "sliding_attention",
+ "sliding_attention"
+ ],
+ "max_position_embeddings": 131072,
+ "model_type": "gemma3_text",
+ "num_attention_heads": 32,
+ "num_hidden_layers": 62,
+ "num_key_value_heads": 16,
+ "query_pre_attn_scalar": 168,
+ "rms_norm_eps": 1e-06,
+ "rope_local_base_freq": 10000,
+ "rope_scaling": {
+ "factor": 8.0,
+ "rope_type": "linear"
+ },
+ "rope_theta": 1000000,
+ "sliding_window": 1024,
+ "torch_dtype": "bfloat16",
+ "use_cache": true,
+ "vocab_size": 262208
+ },
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.54.0.dev0",
+ "vision_config": {
+ "attention_dropout": 0.0,
+ "hidden_act": "gelu_pytorch_tanh",
+ "hidden_size": 1152,
+ "image_size": 896,
+ "intermediate_size": 4304,
+ "layer_norm_eps": 1e-06,
+ "model_type": "siglip_vision_model",
+ "num_attention_heads": 16,
+ "num_channels": 3,
+ "num_hidden_layers": 27,
+ "patch_size": 14,
+ "torch_dtype": "bfloat16",
+ "vision_use_head": false
+ }
+ }
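Several of the values above are related by simple arithmetic. A sketch (the 4×4 pooling factor is inferred from the numbers, not stated in the config): the 896×896 vision input split into 14-pixel patches yields 4096 patch embeddings, which are reduced 16× to the 256 `mm_tokens_per_image`, and the `layer_types` list follows a repeating five-sliding-to-one-full attention pattern across the 62 layers.

```python
image_size, patch_size = 896, 14           # from vision_config
patches = (image_size // patch_size) ** 2  # 64 x 64 = 4096 patch embeddings

# mm_tokens_per_image = 256 implies a 16x reduction of the 4096 patches,
# i.e. 4x4 spatial pooling (inferred from the numbers, not stated above).
mm_tokens_per_image = patches // 16

# layer_types: every 6th layer uses full attention; the rest use a
# 1024-token sliding window ("sliding_window": 1024). 62 layers total.
num_hidden_layers = 62
layer_types = [
    "full_attention" if (i + 1) % 6 == 0 else "sliding_attention"
    for i in range(num_hidden_layers)
]
```

The generated list matches the config verbatim: ten full-attention layers, with the final two layers using sliding attention.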
generation_config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 2,
+ "eos_token_id": [
+ 1,
+ 106
+ ],
+ "pad_token_id": 0,
+ "transformers_version": "4.54.0.dev0"
+ }
model-00001-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:294d216750a96cebbe61c4c71b1c7e0b4b8594c17f54f8c2027589b8e7ad3026
+ size 4854573696
model-00002-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:22a2bc5f459206b085a4a2577f0d5faf3633df2f049aa8918deb28c6f46bf5db
+ size 4954792944
model-00003-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d868d0fc42533c24191e45174362419d72ff7e469bd56b6a5efd631fcf1be430
+ size 4954792976
model-00004-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02d92ab716c6efc46e42acbec3fecdd62660bb91f0a7c07c52b34e7e96d79b41
+ size 4954793016
model-00005-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7831038a2f252486235664b98e0170944e58dfac125890a7e95a6053e746e342
+ size 4954793016
model-00006-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c88a7c553ba5c33bfc27016cd06a46c5343ae514eefc7fe43c6d71dcf77aa5f
+ size 4954793016
model-00007-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7aff9d9d5543f310f0c192ee9ec2fcd83f919a77cb1aef71ec7ee76c9a3d14d4
+ size 4954793016
model-00008-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b2f0f7c866772773183eef02405be9ca4872a637d486f24a56c3ec3c8bca903
+ size 4954793016
model-00009-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc37f360d08599f8ba6b1287056991e9da70188c2ae9bcc9c2318c6af1282655
+ size 4954793016
model-00010-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14aa04da67c04e91b54db0d558b3122ed878bc9956e27155495bad243377f698
+ size 4954793016
model-00011-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e543440d4412845baadca42f11487c062220795aa69c2cad7d0d753e309f2f72
+ size 4954793016
model-00012-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed5fc6a951e2d58307c6161c0ce4537e503ce1768f359cdf566f87834b489c85
+ size 462476696
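The shard files above are Git LFS pointers, not the weights themselves: each is a short text file of `key value` lines giving the spec version, the SHA-256 object id, and the byte size of the real artifact. A minimal sketch of reading one (the function name is illustrative):

```python
def parse_lfs_pointer(text):
    # A Git LFS pointer file is a few "key value" lines; split on the
    # first space of each line to recover the fields.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer for model-00012-of-00012.safetensors, copied from above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:ed5fc6a951e2d58307c6161c0ce4537e503ce1768f359cdf566f87834b489c85
size 462476696
"""
info = parse_lfs_pointer(pointer)
```

Summing the twelve `size` fields gives roughly 55 GB, consistent with a 27B-parameter model stored in bfloat16 (2 bytes per parameter), matching the `"torch_dtype": "bfloat16"` in the config.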
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
preprocessor_config.json ADDED
@@ -0,0 +1,29 @@
+ {
+ "do_convert_rgb": null,
+ "do_normalize": true,
+ "do_pan_and_scan": null,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "image_processor_type": "Gemma3ImageProcessor",
+ "image_seq_length": 256,
+ "image_std": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "pan_and_scan_max_num_crops": null,
+ "pan_and_scan_min_crop_size": null,
+ "pan_and_scan_min_ratio_to_activate": null,
+ "processor_class": "Gemma3Processor",
+ "resample": 2,
+ "rescale_factor": 0.00392156862745098,
+ "size": {
+ "height": 896,
+ "width": 896
+ }
+ }
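This config describes a resize to 896×896 (`"resample": 2` is bilinear), a rescale by 1/255 (≈ 0.00392156862745098), and per-channel normalization with mean and std of 0.5, mapping pixel values into [-1, 1]. A sketch of the per-pixel rescale/normalize arithmetic (resizing omitted; the function name is illustrative):

```python
RESCALE_FACTOR = 1 / 255   # "rescale_factor": 0.00392156862745098
MEAN = STD = 0.5           # "image_mean" / "image_std", same for all channels

def rescale_and_normalize(pixel):
    """Map a uint8 pixel value in [0, 255] to the model's [-1, 1] range."""
    return (pixel * RESCALE_FACTOR - MEAN) / STD

# A whole image is just this applied elementwise per channel:
row = [rescale_and_normalize(v) for v in (0, 128, 255)]
```

So black (0) maps to -1.0 and white (255) to 1.0, the standard SigLIP-style input range.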
processor_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "image_seq_length": 256,
+ "processor_class": "Gemma3Processor"
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
+ {
+ "boi_token": "<start_of_image>",
+ "bos_token": {
+ "content": "<bos>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eoi_token": "<end_of_image>",
+ "eos_token": {
+ "content": "<eos>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "image_token": "<image_soft_token>",
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d4046bf0505a327dd5a0abbb427ecd4fc82f99c2ceaa170bc61ecde12809b0c
+ size 33384570
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
+ size 4689074
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff