LbhYqh committed (verified) · Commit e488192 · 1 Parent(s): 1751977

Update README.md

Files changed (1): README.md +86 -0

README.md CHANGED
@@ -118,6 +118,91 @@ Additionally, metadata about these images is provided in **six JSON files**, cor
 ---
 ## Evaluation Metric
 
+ In this section, we provide a detailed introduction to the evaluation metrics.
+
+ ### Retrieval Evaluation
+
+ To evaluate retrieval performance, we consider the following metrics:
+
+ - **Context Recall** uses LLMs to evaluate whether the retrieved documents contain all the relevant information required for answer generation.
+ - **Visual Recall** measures the percentage of retrieved images relative to the total number of images in the ground truth (see the sketch after this list).
+   It is computed as:
+
+   \[
+   \text{Visual Recall} = \frac{\text{Retrieved Relevant Images}}{\text{Total Relevant Images in Ground Truth}}
+   \]
+
+   where "Retrieved Relevant Images" is the number of retrieved images that are present in the ground truth, and "Total Relevant Images in Ground Truth" is the total number of relevant images that should have been retrieved.
+
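+ A minimal sketch of Visual Recall, assuming images are identified by stable IDs such as filenames (the function name and signature here are illustrative, not the benchmark's released code):
+
+ ```python
+ def visual_recall(retrieved: list[str], ground_truth: list[str]) -> float:
+     """Fraction of ground-truth images that appear among the retrieved images."""
+     if not ground_truth:
+         return 0.0
+     gt = set(ground_truth)
+     return len(gt & set(retrieved)) / len(gt)
+
+ # Example: 1 of the 2 ground-truth images was retrieved -> 0.5
+ print(visual_recall(["fig1.png", "fig9.png"], ["fig1.png", "fig2.png"]))
+ ```
+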
+ ### Generation Evaluation
+
+ To evaluate the quality of multimodal answers, we consider the following metrics, divided into two categories: statistics-based metrics (the first six) and LLM-based metrics (the last four).
+
+ We use the following _statistics-based metrics_:
+
+ - **Image Precision** measures the percentage of correct images in the multimodal answer relative to the total number of inserted images, assessing whether irrelevant images were introduced.
+   It is computed as:
+
+   \[
+   \text{Image Precision} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}}
+   \]
+
+   where True Positives are the correctly inserted images and False Positives are the irrelevant images that were included.
+
+ - **Image Recall** measures the percentage of correct images in the multimodal answer relative to the total number of images in the ground truth, evaluating whether the answer effectively includes the useful image information.
+   It is computed as:
+
+   \[
+   \text{Image Recall} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}}
+   \]
+
+   where False Negatives are the ground-truth images that were omitted from the generated multimodal answer.
+
+ - **Image F1 Score** is the harmonic mean of Image Precision and Image Recall, providing an overall evaluation of the image quality in the multimodal answer (a combined sketch of all three appears after this list).
+   It is calculated as:
+
+   \[
+   \text{Image F1 Score} = 2 \times \frac{\text{Image Precision} \times \text{Image Recall}}{\text{Image Precision} + \text{Image Recall}}
+   \]
+
+ - **Image Ordering Score** evaluates whether the order of images inserted into the multimodal answer matches their order in the ground truth (a dynamic-programming sketch appears after this list).
+   Specifically, we compute the weighted edit distance between the two image sequences to reflect the difference in their order.
+
+   - **Data Format** (for lifestyle datasets):
+     - **Ground truth**: \( A = a_1 \rightarrow a_2 \rightarrow \cdots \rightarrow a_n \), where \( a_i \) is the image at the \( i \)-th position in the order.
+     - **Answer**: \( B = b_1 \rightarrow b_2 \rightarrow \cdots \rightarrow b_m \), where \( b_j \) is not necessarily in \( \{a_i\} \) and \( m \) is not necessarily equal to \( n \).
+
+   - **Scoring Formula**:
+
+     \[
+     \text{Score} = \frac{|A \cap B|}{n} \times \left( 1 - \frac{1}{p} \times \min\left(\frac{\text{dist}(A, B)}{\max(n, m)}, p \right)\right)
+     \]
+
+     where \( \frac{|A \cap B|}{n} \) is a normalization factor that ensures a score of 0 when no correct images are present.
+
+   - **Details**:
+     - \( \text{dist}(A, B) \) is the weighted edit distance between string \( A \) and string \( B \), i.e., the minimum total cost to transform string \( B \) into string \( A \) through the following three operations:
+       - **String Insertion**: if \( B \) is missing certain images, insert an image from \( A \) at a specific position in \( B \), at cost \( p_1 \).
+       - **String Deletion**: if \( B \) contains extra irrelevant images, delete them, at cost \( p_2 \).
+       - **String Substitution**: if the image at some position in \( B \) does not match \( A \), substitute it with the corresponding image from \( A \), at cost \( p_3 \).
+     - The weights generally satisfy \( p_1 > p_2 > p_3 \), and \( p \geq p_1 \) ensures the final score falls within \([0, 1]\).
+     - The weighted edit distance can be computed by dynamic programming in \( O(mn) \) time.
+
+ - **ROUGE-L** is a text generation evaluation metric based on the longest common subsequence, measuring the structural similarity between the answer and the ground truth.
+ - **BERTScore** is a text generation evaluation metric based on the pre-trained language model BERT, used to assess the semantic similarity between the text in the generated multimodal answer and the ground truth (a usage sketch for both appears after this list).
+
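+ As referenced above, a minimal sketch of Image Precision, Image Recall, and Image F1 Score, assuming images are matched by ID (the names here are illustrative):
+
+ ```python
+ def image_prf(answer_images: list[str], ground_truth: list[str]) -> tuple[float, float, float]:
+     """Precision, recall, and F1 over the sets of inserted vs. ground-truth images."""
+     ans, gt = set(answer_images), set(ground_truth)
+     tp = len(ans & gt)                             # correctly inserted images
+     precision = tp / len(ans) if ans else 0.0      # penalizes irrelevant insertions
+     recall = tp / len(gt) if gt else 0.0           # penalizes omitted images
+     f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
+     return precision, recall, f1
+ ```
+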
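+ The Image Ordering Score can be sketched with the standard dynamic program for weighted edit distance. The cost values below are placeholders that merely satisfy \( p_1 > p_2 > p_3 \) and \( p \geq p_1 \); they are not the weights used by the benchmark:
+
+ ```python
+ def weighted_edit_distance(a: list[str], b: list[str],
+                            p1: float = 1.0, p2: float = 0.8, p3: float = 0.5) -> float:
+     """Minimum total cost to transform sequence b into sequence a via
+     insertion (cost p1), deletion (cost p2), and substitution (cost p3)."""
+     n, m = len(a), len(b)
+     # dp[i][j]: cost to turn the first j items of b into the first i items of a
+     dp = [[0.0] * (m + 1) for _ in range(n + 1)]
+     for i in range(1, n + 1):
+         dp[i][0] = i * p1                          # insert missing images from a
+     for j in range(1, m + 1):
+         dp[0][j] = j * p2                          # delete extra images from b
+     for i in range(1, n + 1):
+         for j in range(1, m + 1):
+             if a[i - 1] == b[j - 1]:
+                 dp[i][j] = dp[i - 1][j - 1]        # images already match
+             else:
+                 dp[i][j] = min(dp[i - 1][j] + p1,      # insertion
+                                dp[i][j - 1] + p2,      # deletion
+                                dp[i - 1][j - 1] + p3)  # substitution
+     return dp[n][m]
+
+ def image_ordering_score(a: list[str], b: list[str], p: float = 1.0) -> float:
+     """Score = |A ∩ B| / n * (1 - min(dist(A, B) / max(n, m), p) / p)."""
+     if not a or not b:
+         return 0.0
+     overlap = len(set(a) & set(b)) / len(a)
+     dist = weighted_edit_distance(a, b)
+     return overlap * (1 - min(dist / max(len(a), len(b)), p) / p)
+ ```
+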
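+ For the two text metrics, off-the-shelf implementations are commonly used. The sketch below assumes the `rouge-score` and `bert-score` PyPI packages; the benchmark may use different implementations or settings:
+
+ ```python
+ from rouge_score import rouge_scorer
+ from bert_score import score as bert_score
+
+ reference = "The ground-truth answer text."
+ candidate = "The generated answer text."
+
+ # ROUGE-L: F-measure over the longest common subsequence
+ scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
+ rouge_l = scorer.score(reference, candidate)["rougeL"].fmeasure
+
+ # BERTScore: semantic similarity from contextual BERT embeddings
+ _, _, f1 = bert_score([candidate], [reference], lang="en")
+ print(rouge_l, f1.item())
+ ```
+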
+ We use the following _LLM-based metrics_ (a schematic judging sketch appears after this list):
+
+ - **Image Relevance** evaluates the relevance of each inserted image to the query-answer pair, specifically assessing whether the content depicted by the image is meaningfully related to the content of the QA. This metric assigns a score from 1 to 5 to each image appearing in the answer.
+ - **Image Effectiveness** evaluates the effectiveness of the images inserted into the multimodal answer, assessing whether the images align with the QA content and contribute to the understanding of the answer.
+   This metric also assigns a score from 1 to 5 to each image.
+ - **Image Position Score** assesses the appropriateness of the image placement in the multimodal answer.
+   It assigns a score of either 0 or 1 to each image, based on whether its position is deemed correct and suitable.
+ - **Comprehensive Score** reflects the overall quality of the multimodal answer, evaluating whether the answer appropriately addresses the query and maintains overall coherence. It particularly considers whether the insertion of images enhances the answer, making it visually engaging and more expressive.
+   This metric assigns a single score from 1 to 5 to the complete answer.
+
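+ Each LLM-based metric is obtained by prompting a judge model with the query, the answer, and the image context, then parsing a numeric score. A schematic sketch using the OpenAI Python client follows; the model name is a placeholder, and the actual judging prompts are listed under "Prompts" below:
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
+
+ def llm_metric_score(judging_prompt: str, model: str = "gpt-4o") -> int:
+     """Ask a judge LLM to return a single integer score (e.g., 1-5 or 0/1)."""
+     resp = client.chat.completions.create(
+         model=model,
+         messages=[{"role": "user", "content": judging_prompt}],
+         temperature=0,
+     )
+     return int(resp.choices[0].message.content.strip())
+ ```
+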
 ## Prompts
 ### Generation Prompts
 #### Answer Generation Prompt for LLM-Based Method

@@ -323,6 +408,7 @@ Additionally, metadata about these images is provided in **six JSON files**, cor
 Explanation:
 This format represents the contextual information surrounding the image within its original document. It provides supplementary information to assist in evaluating the image.
 
+
 # Revised Evaluation Criteria:
 Strictly follow the criteria below to assign a score of 0 or 1:
 - 0 point, Inappropriate Position: The image is irrelevant to both the preceding and following context, or the position of the image does not enhance content understanding or visual appeal. The insertion of the image does not align with the logical progression of the text and fails to improve the reading experience or information transmission.