Junming Yang committed on
Commit
810758f
2 Parent(s): e401827 50f8568

Merge pull request #204 from junming-yang/leaderboard

Files changed (1)
  1. meta_data.py +9 -0
meta_data.py CHANGED
@@ -184,4 +184,13 @@ LEADERBOARD_MD['OCRVQA_TESTCORE'] = """
 
 - OCRVQA is a benchmark for visual question answering by reading text in images. It presents a large-scale dataset, OCR-VQA-200K, comprising over 200,000 images of book covers. The study combines techniques from the Optical Character Recognition (OCR) and Visual Question Answering (VQA) domains to address the challenges associated with this new task and dataset.
 - Note that some models may not be able to generate standardized responses based on the prompt. We currently do not have reports for these models.
+"""
+
+LEADERBOARD_MD['POPE'] = """
+## POPE Evaluation Results
+
+- POPE is a benchmark for object hallucination evaluation. It includes three tracks of object hallucination: random, popular, and adversarial.
+- Note that the official POPE dataset contains approximately 8910 cases. Because the three tracks share some overlapping samples, we keep only a single copy of each overlapping sample (about 5127 examples in total) to reduce the data file size. The final accuracy, however, is still calculated over the full ~9k samples.
+- Some API models, due to safety policies, refuse to answer certain questions, so their actual capabilities may be higher than the reported scores.
+- We report the average F1 score across the three types of data as the overall score. Accuracy, precision, and recall are also shown in the table. F1 score = 2 * (precision * recall) / (precision + recall).
 """
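The scoring rule in the POPE bullets above — per-track F1 from precision and recall, then a mean over the random/popular/adversarial tracks as the overall score — can be sketched as follows. This is a minimal illustration of that formula, not the repository's actual evaluation code; the track names and the `track_metrics` input format are assumptions for the example.

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 = 2 * (precision * recall) / (precision + recall)."""
    if precision + recall == 0:
        return 0.0  # guard against division by zero when both are 0
    return 2 * precision * recall / (precision + recall)


def pope_overall(track_metrics: dict) -> float:
    """Overall POPE score: mean F1 across the three tracks."""
    f1s = [f1_score(m["precision"], m["recall"]) for m in track_metrics.values()]
    return sum(f1s) / len(f1s)


# Illustrative numbers only, not results from any actual model.
metrics = {
    "random":      {"precision": 0.90, "recall": 0.85},
    "popular":     {"precision": 0.88, "recall": 0.80},
    "adversarial": {"precision": 0.82, "recall": 0.78},
}
print(round(pope_overall(metrics), 4))
```

Averaging F1 per track (rather than pooling all ~9k answers into one F1) weights each hallucination setting equally, so a model cannot mask weak adversarial-track performance behind the easier random track.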