aarav7397 and derek-thomas committed
Commit a35427c · verified · 0 Parent(s)

Duplicate from derek-thomas/ScienceQA

Co-authored-by: Derek Thomas <derek-thomas@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,54 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,4 @@
+ .ipynb_checkpoints
+ .idea
+ images/
+ text/
README.md ADDED
@@ -0,0 +1,301 @@
+ ---
+ license: cc-by-nc-sa-4.0
+ annotations_creators:
+ - expert-generated
+ - found
+ language:
+ - en
+ language_creators:
+ - expert-generated
+ - found
+ multilinguality:
+ - monolingual
+ paperswithcode_id: scienceqa
+ pretty_name: ScienceQA
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ tags:
+ - multi-modal-qa
+ - science
+ - chemistry
+ - biology
+ - physics
+ - earth-science
+ - engineering
+ - geography
+ - history
+ - world-history
+ - civics
+ - economics
+ - global-studies
+ - grammar
+ - writing
+ - vocabulary
+ - natural-science
+ - language-science
+ - social-science
+ task_categories:
+ - multiple-choice
+ - question-answering
+ - other
+ - visual-question-answering
+ - text-classification
+ task_ids:
+ - multiple-choice-qa
+ - closed-domain-qa
+ - open-domain-qa
+ - visual-question-answering
+ - multi-class-classification
+ dataset_info:
+   features:
+   - name: image
+     dtype: image
+   - name: question
+     dtype: string
+   - name: choices
+     sequence: string
+   - name: answer
+     dtype: int8
+   - name: hint
+     dtype: string
+   - name: task
+     dtype: string
+   - name: grade
+     dtype: string
+   - name: subject
+     dtype: string
+   - name: topic
+     dtype: string
+   - name: category
+     dtype: string
+   - name: skill
+     dtype: string
+   - name: lecture
+     dtype: string
+   - name: solution
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 16416902
+     num_examples: 12726
+   - name: validation
+     num_bytes: 5404896
+     num_examples: 4241
+   - name: test
+     num_bytes: 5441676
+     num_examples: 4241
+   download_size: 0
+   dataset_size: 27263474
+ ---
+
+ # Dataset Card for ScienceQA
+
+ ## Table of Contents
+ - [Dataset Card for ScienceQA](#dataset-card-for-scienceqa)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [https://scienceqa.github.io/index.html#home](https://scienceqa.github.io/index.html#home)
+ - **Repository:** [https://github.com/lupantech/ScienceQA](https://github.com/lupantech/ScienceQA)
+ - **Paper:** [https://arxiv.org/abs/2209.09513](https://arxiv.org/abs/2209.09513)
+ - **Leaderboard:** [https://paperswithcode.com/dataset/scienceqa](https://paperswithcode.com/dataset/scienceqa)
+ - **Point of Contact:** [Pan Lu](https://lupantech.github.io/) or file an issue on [GitHub](https://github.com/lupantech/ScienceQA/issues)
+
+ ### Dataset Summary
+
+ ScienceQA was introduced in the paper [Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering](https://arxiv.org/abs/2209.09513). It is a benchmark of 21,208 multimodal multiple-choice questions spanning a diverse set of science topics, annotated with their answers and with corresponding lectures and explanations. The lecture and explanation provide general external knowledge and specific reasons, respectively, for arriving at the correct answer.
+
+ ### Supported Tasks and Leaderboards
+
+ Multimodal multiple-choice question answering (visual question answering when an image is present). A leaderboard is hosted on [Papers with Code](https://paperswithcode.com/dataset/scienceqa).
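+
+ As a rough sketch of the task format, a record (see [Data Instances](#data-instances) below) can be rendered into a lettered multiple-choice prompt; `question`, `hint`, and `choices` are fields described under [Data Fields](#data-fields):
+
+ ``` python
+ def format_prompt(record: dict) -> str:
+     """Render a ScienceQA record as a lettered multiple-choice prompt."""
+     lines = [record["question"]]
+     if record["hint"]:
+         lines.append(f"Hint: {record['hint']}")
+     for i, choice in enumerate(record["choices"]):
+         lines.append(f"{chr(ord('A') + i)}. {choice}")
+     return "\n".join(lines)
+ ```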
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Explore more samples [here](https://scienceqa.github.io/explore.html).
+
+ ``` python
+ {'image': Image,
+ 'question': 'Which of these states is farthest north?',
+ 'choices': ['West Virginia', 'Louisiana', 'Arizona', 'Oklahoma'],
+ 'answer': 0,
+ 'hint': '',
+ 'task': 'closed choice',
+ 'grade': 'grade2',
+ 'subject': 'social science',
+ 'topic': 'geography',
+ 'category': 'Geography',
+ 'skill': 'Read a map: cardinal directions',
+ 'lecture': 'Maps have four cardinal directions, or main directions. Those directions are north, south, east, and west.\nA compass rose is a set of arrows that point to the cardinal directions. A compass rose usually shows only the first letter of each cardinal direction.\nThe north arrow points to the North Pole. On most maps, north is at the top of the map.',
+ 'solution': 'To find the answer, look at the compass rose. Look at which way the north arrow is pointing. West Virginia is farthest north.'}
+ ```
+
+ Note that some records may be missing any or all of the `image`, `lecture`, and `solution` fields.
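+
+ Records can be loaded and inspected with the 🤗 `datasets` library. A minimal sketch (the repo id below assumes this copy keeps the `derek-thomas/ScienceQA` name it was duplicated from):
+
+ ``` python
+ from datasets import load_dataset
+
+ # Load one split of ScienceQA from the Hugging Face Hub
+ ds = load_dataset("derek-thomas/ScienceQA", split="train")
+
+ sample = ds[0]
+ print(sample["question"])
+ print(sample["choices"][sample["answer"]])  # `answer` indexes into `choices`
+ ```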
+
+ ### Data Fields
+
+ - `image` : Contextual image
+ - `question` : Prompt relating to the `lecture`
+ - `choices` : Multiple-choice options, exactly one of which correctly answers the `question`
+ - `answer` : Index into `choices` of the correct answer
+ - `hint` : Hint to help answer the `question`
+ - `task` : Task description
+ - `grade` : Grade level, from K to 12
+ - `subject` : High-level subject area: natural science, social science, or language science
+ - `topic` : A subcategory of `subject` (e.g. geography)
+ - `category` : A subcategory of `topic`
+ - `skill` : A description of the skill the task requires
+ - `lecture` : A relevant lecture that the `question` is generated from
+ - `solution` : Instructions on how to solve the `question`
+
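+ The string fields make it easy to slice the dataset, for example by subject or by modality. A minimal sketch (same assumed repo id as above):
+
+ ``` python
+ from collections import Counter
+
+ from datasets import load_dataset
+
+ ds = load_dataset("derek-thomas/ScienceQA", split="validation")
+
+ # Keep only multimodal records: an image plus a non-empty lecture
+ multimodal = ds.filter(lambda r: r["image"] is not None and r["lecture"] != "")
+
+ # Count records per subject
+ print(Counter(ds["subject"]))
+ ```
+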
+ ### Data Splits
+
+ | Split      | Examples | Size (bytes) |
+ |------------|----------|--------------|
+ | train      | 12,726   | 16,416,902   |
+ | validation | 4,241    | 5,404,896    |
+ | test       | 4,241    | 5,441,676    |
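+
+ Loading the dataset without specifying a split returns a `DatasetDict`, whose sizes can be checked against the numbers above (a minimal sketch):
+
+ ``` python
+ from datasets import load_dataset
+
+ ds = load_dataset("derek-thomas/ScienceQA")  # all three splits
+ for name, split in ds.items():
+     print(name, len(split))  # expect 12726, 4241, 4241
+ ```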
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ When answering a question, humans utilize the information available across different modalities to synthesize a consistent and complete chain of thought (CoT). This process is normally a black box in the case of deep learning models like large-scale language models. Recently, science question benchmarks have been used to diagnose the multi-hop reasoning ability and interpretability of an AI system. However, existing datasets fail to provide annotations for the answers, or are restricted to the textual-only modality, small scales, and limited domain diversity. To this end, we present Science Question Answering (ScienceQA).
+
+ ### Source Data
+
+ ScienceQA is collected from elementary and high school science curricula.
+
+ #### Initial Data Collection and Normalization
+
+ See the [Annotations](#annotations) section below.
+
+ #### Who are the source language producers?
+
+ See the [Annotations](#annotations) section below.
+
+ ### Annotations
+
+ Questions in the ScienceQA dataset are sourced from open resources managed by IXL Learning, an online learning platform curated by experts in the field of K-12 education. The dataset includes problems that align with the California Common Core Content Standards. To construct ScienceQA, we downloaded the original science problems and then extracted individual components (e.g. questions, hints, images, options, answers, lectures, and solutions) from them based on heuristic rules.
+
+ To comply with fair use and transformative use of copyright law, we manually removed invalid questions, such as questions that have only one choice, questions that contain faulty data, and questions that are duplicated. If multiple correct answers applied, we kept only one correct answer. We also shuffled the answer options of each question so that the choices do not follow any specific pattern. To make the dataset easy to use, we then used semi-automated scripts to reformat the lectures and solutions, so that special structures in the texts, such as tables and lists, are easily distinguishable from simple text passages. Similar to the ImageNet, ReClor, and PMR datasets, ScienceQA is available for non-commercial research purposes only, and the copyright belongs to the original authors. To ensure data quality, we developed a data exploration tool to review examples in the collected dataset, and incorrect annotations were further manually revised by experts. The tool can be accessed at https://scienceqa.github.io/explore.html.
+
+ #### Annotation process
+
+ See the [Annotations](#annotations) section above.
+
+ #### Who are the annotators?
+
+ See the [Annotations](#annotations) section above.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ - Pan Lu (1, 3)
+ - Swaroop Mishra (2, 3)
+ - Tony Xia (1)
+ - Liang Qiu (1)
+ - Kai-Wei Chang (1)
+ - Song-Chun Zhu (1)
+ - Oyvind Tafjord (3)
+ - Peter Clark (3)
+ - Ashwin Kalyan (3)
+
+ Affiliations:
+ 1. University of California, Los Angeles
+ 2. Arizona State University
+ 3. Allen Institute for AI
+
+ ### Licensing Information
+
+ [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
+
+ ### Citation Information
+
+ If you use this dataset, please cite the ScienceQA paper:
+
+ ``` bibtex
+ @inproceedings{lu2022learn,
+     title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
+     author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Kalyan, Ashwin},
+     booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
+     year={2022}
+ }
+ ```
+ ### Contributions
+
+ Thanks to [Derek Thomas](https://huggingface.co/derek-thomas) ([@datavistics](https://github.com/datavistics)) for adding this dataset.
data/test-00000-of-00001-f0e719df791966ff.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:235e92d1f30155266df76bc9f28fc6e4fcb6bec2c6a8c7d67f9086ea6b392a84
+ size 122386007
data/train-00000-of-00001-1028f23e353fbe3e.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:62c90a28e3fb1bc0ad7bbcab1ac62b483ae6758291a655944d8f494bf6445745
+ size 377460993
data/validation-00000-of-00001-6c7328ff6c84284c.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e66b5215ce71e748ade4ee629bc14fbf86762071d355a4fcd831581cc04d72d8
+ size 126235086
tutorial/ScienceQA.py ADDED
@@ -0,0 +1,122 @@
+ import json
+ from pathlib import Path
+
+ import datasets
+
+ _DESCRIPTION = """Science Question Answering (ScienceQA), a new benchmark that consists of 21,208 multimodal
+ multiple choice questions with a diverse set of science topics and annotations of their answers
+ with corresponding lectures and explanations.
+ The lecture and explanation provide general external knowledge and specific reasons,
+ respectively, for arriving at the correct answer."""
+
+ # Let's use the project page instead of the GitHub repo
+ _HOMEPAGE = "https://scienceqa.github.io"
+
+ _CITATION = """\
+ @inproceedings{lu2022learn,
+ title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
+ author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Kalyan, Ashwin},
+ booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
+ year={2022}
+ }
+ """
+
+ _LICENSE = "Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)"
+
+
+ class ScienceQA(datasets.GeneratorBasedBuilder):
+     """Science Question Answering (ScienceQA), a new benchmark that consists of 21,208 multimodal
+     multiple choice questions with a diverse set of science topics and annotations of their answers
+     with corresponding lectures and explanations.
+     The lecture and explanation provide general external knowledge and specific reasons,
+     respectively, for arriving at the correct answer."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "image": datasets.Image(),
+                     "question": datasets.Value("string"),
+                     "choices": datasets.features.Sequence(datasets.Value("string")),
+                     "answer": datasets.Value("int8"),
+                     "hint": datasets.Value("string"),
+                     "task": datasets.Value("string"),
+                     "grade": datasets.Value("string"),
+                     "subject": datasets.Value("string"),
+                     "topic": datasets.Value("string"),
+                     "category": datasets.Value("string"),
+                     "skill": datasets.Value("string"),
+                     "lecture": datasets.Value("string"),
+                     "solution": datasets.Value("string")
+                 }
+             ),
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         text_path = Path.cwd() / 'text' / 'problems.json'
+         image_dir = Path.cwd() / 'images'
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "text_path": text_path,
+                     "image_dir": image_dir,
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "text_path": text_path,
+                     "image_dir": image_dir,
+                     "split": "val",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "text_path": text_path,
+                     "image_dir": image_dir,
+                     "split": "test"
+                 },
+             ),
+         ]
+
+     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     def _generate_examples(self, text_path, image_dir, split):
+         with open(text_path, encoding="utf-8") as f:
+             # Load all the text. Note that if this was HUGE, we would need to find a better way to load the json
+             data = json.load(f)
+         ignore_keys = ['image', 'split']
+
+         # Get image_id from its annoying location
+         for image_id, row in data.items():
+             # Only look for the rows in our split
+             if row['split'] == split:
+
+                 # Note, not all rows have images.
+                 # Get all the image data we need
+                 if row['image']:
+                     image_path = image_dir / split / image_id / 'image.png'
+                     image_bytes = image_path.read_bytes()
+                     image_dict = {'path': str(image_path), 'bytes': image_bytes}
+                 else:
+                     image_dict = None
+
+                 # Keep only the keys we need
+                 relevant_row = {k: v for k, v in row.items() if k not in ignore_keys}
+
+                 return_dict = {
+                     'image': image_dict,
+                     **relevant_row
+                 }
+                 yield image_id, return_dict
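+
+
+ # Minimal local usage sketch: assumes images/ and text/problems.json have been
+ # fetched into the working directory first (tutorial/download.sh grabs the
+ # images), and that the installed version of `datasets` still supports
+ # script-based loading.
+ if __name__ == "__main__":
+     ds = datasets.load_dataset(__file__)
+     print(ds)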
tutorial/create_dataset.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
tutorial/download.sh ADDED
@@ -0,0 +1,36 @@
+ #!/bin/bash
+ # Modified from the original here: https://github.com/lupantech/ScienceQA/blob/main/tools/download.sh
+
+ # Create the images directory if needed, and stop if we cannot enter it
+ mkdir -p images
+ cd images || exit 1
+
+ if [ -d "train" ]; then
+     echo "Already downloaded train"
+ else
+     ls -alF
+     wget https://scienceqa.s3.us-west-1.amazonaws.com/images/train.zip
+     unzip -q train.zip
+     rm train.zip
+ fi
+
+ if [ -d "val" ]; then
+     echo "Already downloaded val"
+ else
+     ls -alF
+     wget https://scienceqa.s3.us-west-1.amazonaws.com/images/val.zip
+     unzip -q val.zip
+     rm val.zip
+ fi
+
+ if [ -d "test" ]; then
+     echo "Already downloaded test"
+ else
+     ls -alF
+     wget https://scienceqa.s3.us-west-1.amazonaws.com/images/test.zip
+     unzip -q test.zip
+     rm test.zip
+ fi
+
+ echo "Completed downloads!"