---
license: apache-2.0
dataset_info:
  - config_name: CLEVRER
    features:
      - name: video_filename
        dtype: string
      - name: scene_index
        dtype: int64
      - name: question_text
        dtype: string
      - name: answer_text
        dtype: string
      - name: attributes_list
        sequence: string
    splits:
      - name: train
        num_bytes: 2029869
        num_examples: 13374
    download_size: 203081
    dataset_size: 2029869
  - config_name: VG_v1
    features:
      - name: img_id
        dtype: int64
      - name: orig_qa
        dtype: string
      - name: question_text
        dtype: string
      - name: answer_text
        dtype: string
    splits:
      - name: train
        num_bytes: 26281742
        num_examples: 424507
    download_size: 7732035
    dataset_size: 26281742
  - config_name: vg_V1
    features:
      - name: img_id
        dtype: int64
      - name: orig_qa
        dtype: string
      - name: question_text
        dtype: string
      - name: answer_text
        dtype: string
    splits:
      - name: train
        num_bytes: 26281742
        num_examples: 424507
    download_size: 7732035
    dataset_size: 26281742
configs:
  - config_name: CLEVRER
    data_files:
      - split: train
        path: CLEVRER/train-*
  - config_name: VG_v1
    data_files:
      - split: train
        path: VG_v1/train-*
  - config_name: vg_V1
    data_files:
      - split: train
        path: vg_V1/train-*
---

Here we create two datasets, derived from the existing CLEVRER and VisualGenome datasets, for the Object Counting instruction-tuning task.

## CLEVRER, a video dataset

CLEVRER has QA pairs for each of its 5,000 training videos.

{'video_filename': str, 'scene_index': int (same as filename), 'questions': [{'question_type': ..., 'question_subtype': ..., 'question_text': ..., 'answer_text': ..., 'program' (question attributes): ...}]}

We select questions of type 'descriptive' and subtype 'count'; these are object-counting questions. The length of the 'program' list reflects how complex a question is (longer means more complex), so we filter out questions whose program is longer than 9 steps to reduce difficulty.
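The filtering step above can be sketched as follows. The field names follow the schema shown earlier; the helper itself is illustrative, not the exact script used to build the dataset:

```python
import json

MAX_PROGRAM_LEN = 9  # questions with longer programs are dropped

def extract_count_questions(scene):
    """Keep only 'descriptive'/'count' questions whose program has <= 9 steps."""
    kept = []
    for q in scene.get("questions", []):
        if q.get("question_type") != "descriptive":
            continue
        if q.get("question_subtype") != "count":
            continue
        if len(q.get("program", [])) > MAX_PROGRAM_LEN:
            continue
        kept.append({
            "video_filename": scene["video_filename"],
            "scene_index": scene["scene_index"],
            "question_text": q["question_text"],
            "answer_text": q["answer_text"],
        })
    return kept

# Usage (the annotation file name is an assumption):
# scenes = json.load(open("train.json"))
# rows = [row for scene in scenes for row in extract_count_questions(scene)]
```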

CLEVRER contains both positive questions and negative questions (asking about objects that do not exist), so we do not generate additional negative samples for CLEVRER.

Some questions are event-specific, counting moving/stationary objects when a certain event happens, e.g., 'How many objects are stationary when the yellow object enters the scene?'

Videos can be downloaded from: http://clevrer.csail.mit.edu/

## VisualGenome, an image dataset

We generate negative questions about objects that do not exist in the image. We use the version 1 image set. Download from: https://homes.cs.washington.edu/~ranjay/visualgenome/api.html

VisualGenome has 100K+ images. Each object in an image has associated attributes; we focus only on the color attributes.

For each image, we add (1) three non-existent objects and (2) one non-existent attribute for an existing object as negative samples.
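A minimal sketch of this negative-sample generation, assuming a hypothetical object vocabulary, a hypothetical color-attribute list, and naive `+ "s"` pluralization (none of these come from the dataset itself):

```python
import random

# Assumed color vocabulary; the real attribute set comes from VisualGenome.
COLOR_ATTRIBUTES = ["red", "blue", "green", "yellow", "black", "white"]

def make_negative_questions(present_objects, vocabulary, rng=None, n_objects=3):
    """present_objects: dict mapping object name -> set of its color attributes.
    Returns (1) questions about n_objects absent from the image and
    (2) one question pairing a present object with a color it does not have.
    All answers are 'None.' since the queried thing does not exist."""
    rng = rng or random.Random(0)
    samples = []

    # (1) objects absent from the image
    absent = [o for o in vocabulary if o not in present_objects]
    for obj in rng.sample(absent, min(n_objects, len(absent))):
        samples.append({
            "orig_qa": "No",
            "question_text": f"How many {obj}s are there?",  # naive pluralization
            "answer_text": "None.",
        })

    # (2) one non-existent attribute for an existing object
    obj, attrs = rng.choice(sorted(present_objects.items()))
    wrong_colors = [c for c in COLOR_ATTRIBUTES if c not in attrs]
    if wrong_colors:
        color = rng.choice(wrong_colors)
        samples.append({
            "orig_qa": "No",
            "question_text": f"How many {color} {obj}s are there?",
            "answer_text": "None.",
        })
    return samples
```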

The original VG QA dataset also contains object-counting questions; we include them as well, marked with 'orig_qa' == 'Yes'. The negative questions we generate have 'orig_qa' == 'No'.

{'img_id': int, 'orig_qa': 'Yes'/'No', 'question_text': 'How many <attribute> <object in plural form> are there?', 'answer_text': a number (if the objects exist) or 'None.' (if they do not)}

For more details, please refer to the dataset.