---
license: mit
---

# Visual Haystacks Dataset Card

## Dataset details

  1. Dataset type: Visual Haystacks (VHs) is a benchmark dataset designed to evaluate large multimodal models' (LMMs') ability to handle long-context visual information. It can also be viewed as the first visual-centric Needle-In-A-Haystack (NIAH) benchmark. Please also download the COCO-2017 training and validation sets.

  2. Data Preparation and Benchmarking

     - Download the VQA questions (see the Python sketch below for a programmatic alternative):

       ```bash
       huggingface-cli download --repo-type dataset tsunghanwu/visual_haystacks --local-dir dataset/VHs_qa
       ```

     - Download the COCO 2017 dataset and organize it as follows, with the default root directory ./dataset/coco:

       ```
       dataset/
       ├── coco
       │   ├── annotations
       │   ├── test2017
       │   └── val2017
       └── VHs_qa
           ├── VHs_full
           │   ├── multi_needle
           │   └── single_needle
           └── VHs_small
               ├── multi_needle
               └── single_needle
       ```

     - Follow the instructions in https://github.com/visual-haystacks/vhs_benchmark to run the evaluation.
  3. Please check out our project page for more information. You can also send questions or comments about the dataset to our GitHub repo.
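
For reference, here is a minimal Python sketch of the preparation steps above: it fetches the question files with `huggingface_hub.snapshot_download` (equivalent to the CLI command) and sanity-checks the expected folder layout. The JSON field handling at the end is an illustrative assumption; consult the benchmark repo for the exact schema.

```python
# Minimal sketch: download the VHs question files and sanity-check the layout.
# The structure of the question JSON files is assumed here; see
# https://github.com/visual-haystacks/vhs_benchmark for the exact schema.
import json
from pathlib import Path

from huggingface_hub import snapshot_download

ROOT = Path("dataset")

# Equivalent to:
#   huggingface-cli download --repo-type dataset tsunghanwu/visual_haystacks --local-dir dataset/VHs_qa
snapshot_download(
    repo_id="tsunghanwu/visual_haystacks",
    repo_type="dataset",
    local_dir=ROOT / "VHs_qa",
)

# Verify that the COCO images were placed where the benchmark expects them.
for sub in ["coco/annotations", "coco/test2017", "coco/val2017"]:
    assert (ROOT / sub).is_dir(), f"Missing {ROOT / sub}; see the layout above."

# Peek at one question file (file names and fields are hypothetical).
sample_file = next((ROOT / "VHs_qa").rglob("*.json"))
with open(sample_file) as f:
    questions = json.load(f)
print(f"{sample_file}: {len(questions)} entries")
```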

## Intended use

Primary intended uses: The primary use of VHs is research on large multimodal models and chatbots.

Primary intended users: The primary intended users of the dataset are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.