---
license: cc-by-nc-nd-4.0
task_categories:
  - visual-question-answering
language:
  - en
  - zh
tags:
  - food
  - culture
  - multilingual
size_categories:
  - n<1K
pretty_name: Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture
---

# FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture

## GitHub Repo

😋 We release all tools and code used to create the dataset at https://github.com/lyan62/FoodieQA.

## Paper

For more details about the dataset, please refer to 📄 [FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture](https://arxiv.org/abs/2406.11030).

## Terms and Conditions for Data Usage

By downloading and using the data, you acknowledge that you have read, understood, and agreed to the following terms and conditions.

  1. Research Purpose: The data is provided solely for research purposes and must not be used for any commercial activities.

  2. Evaluation Only: The data may only be used for evaluation purposes and not for training models or systems.

  3. Compliance: Users must comply with all applicable laws and regulations when using the data.

  4. Attribution: Proper attribution must be given in any publications or presentations resulting from the use of this data.

  5. License: The data is released under the CC BY-NC-ND 4.0 license. Users must adhere to the terms of this license.

## Data Structure

- `/images`: contains all images needed for the multi-image VQA and single-image VQA tasks (a minimal loading sketch follows this list).

- `mivqa_tidy.json`: questions for the multi-image VQA task. Data format:

      {
          "question": "哪一道菜适合喜欢吃肠的人？",
          "choices": "",
          "answer": "0",
          "question_type": "ingredients",
          "question_id": qid,
          "ann_group": "闽",
          "images": [
              img1_path, img2_path, img3_path, img4_path
          ],
          "question_en": "Which dish is for people who like intestine?"
      }

- `sivqa_tidy.json`: questions for the single-image VQA task. Data format:

      {
          "question": "图片中的食物是哪个地区的特色美食?",
          "choices": [
              ...
          ],
          "answer": "3",
          "question_type": "region-2",
          "food_name": "梅菜扣肉",
          "question_id": "vqa-34",
          "food_meta": {
              "main_ingredient": [
                  "肉"
              ],
              "id": 253,
              "food_name": "梅菜扣肉",
              "food_type": "客家菜",
              "food_location": "餐馆",
              "food_file": img_path
          },
          "question_en": translated_question,
          "choices_en": [
              translated_choices1,
              ...
          ]
      }

- `textqa_tidy.json`: questions for the TextQA task. Data format:

      {
          "question": "酒酿圆子属于哪个菜系?",
          "choices": [
              ...
          ],
          "answer": "1",
          "question_type": "cuisine_type",
          "food_name": "酒酿圆子",
          "cuisine_type": "苏菜",
          "question_id": "textqa-101"
      },
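The question files above are plain JSON, so no special tooling is needed to work with them. Below is a minimal loading sketch in Python; it assumes the dataset has been downloaded locally, that `data_dir` points at the folder containing the three JSON files and `/images`, that each file holds a list of question objects, and that the stored image paths resolve relative to `data_dir`. These are illustrative assumptions, not an official loader.

```python
import json
from pathlib import Path

# Hypothetical local path to the downloaded dataset folder; adjust as needed.
data_dir = Path("FoodieQA")

def load_questions(filename):
    """Load one of the *_tidy.json files (assumed to be a JSON list of question dicts)."""
    with open(data_dir / filename, encoding="utf-8") as f:
        return json.load(f)

mivqa = load_questions("mivqa_tidy.json")
sivqa = load_questions("sivqa_tidy.json")
textqa = load_questions("textqa_tidy.json")

# Multi-image VQA: "answer" is the (stringified) index of the correct image in "images".
q = mivqa[0]
candidate_images = [data_dir / p for p in q["images"]]  # assumes relative image paths
print(q["question_en"], "->", candidate_images[int(q["answer"])])

# Single-image VQA: "answer" indexes into "choices"/"choices_en",
# and the image path is stored under food_meta["food_file"].
s = sivqa[0]
print(s["question_en"], "->", s["choices_en"][int(s["answer"])])
print("image:", data_dir / s["food_meta"]["food_file"])
```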
      

## Models and results for the VQA tasks

| Evaluation | Multi-image VQA (ZH) | Multi-image VQA (EN) | Single-image VQA (ZH) | Single-image VQA (EN) |
|---|---|---|---|---|
| Human | 91.69 | 77.22† | 74.41 | 46.53† |
| Phi-3-vision-4.2B | 29.03 | 33.75 | 42.58 | 44.53 |
| Idefics2-8B | 50.87 | 41.69 | 46.87 | 52.73 |
| Mantis-8B | 46.65 | 43.67 | 41.80 | 47.66 |
| Qwen-VL-12B | 32.26 | 27.54 | 48.83 | 42.97 |
| Yi-VL-6B | - | - | 49.61 | 41.41 |
| Yi-VL-34B | - | - | 52.73 | 48.05 |
| GPT-4V | 78.92 | 69.23 | 63.67 | 60.16 |
| GPT-4o | 86.35 | 80.64 | 72.66 | 67.97 |

## Models and results for the TextQA task

| Model | Best Accuracy | Prompt |
|---|---|---|
| Phi-3-medium | 41.28 | 1 |
| Mistral-7B-instruct | 35.18 | 1 |
| Llama3-8B-Chinese | 47.38 | 1 |
| YI-6B | 25.53 | 3 |
| YI-34B | 46.38 | 3 |
| Qwen2-7B-instruct | 68.23 | 3 |
| GPT-4 | 60.99 | 1 |
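Since every question stores its gold answer as a stringified choice index (e.g. `"answer": "0"` or `"3"`), accuracies of this kind boil down to exact-match scoring over predicted indices. The snippet below is only a generic sketch of such scoring, not the official evaluation code; the `predictions` mapping from `question_id` to a predicted choice index is a hypothetical input.

```python
def accuracy(questions, predictions):
    """Percentage of questions whose predicted choice index matches the gold "answer" field.

    questions:   list of question dicts from one of the *_tidy.json files
    predictions: hypothetical dict mapping question_id -> predicted index (int or str)
    """
    correct = sum(
        int(predictions[q["question_id"]]) == int(q["answer"])
        for q in questions
        if q["question_id"] in predictions
    )
    return 100.0 * correct / len(questions)

# Example with a single made-up prediction:
# accuracy(sivqa, {"vqa-34": 3})
```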

## BibTeX Citation

    @article{li2024foodieqa,
      title={FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture},
      author={Li, Wenyan and Zhang, Xinyu and Li, Jiaang and Peng, Qiwei and Tang, Raphael and Zhou, Li and Zhang, Weijia and Hu, Guimin and Yuan, Yifei and S{\o}gaard, Anders and others},
      journal={arXiv preprint arXiv:2406.11030},
      year={2024}
    }