---
dataset_info:
  features:
    - name: dataset
      dtype: string
    - name: condition
      dtype: string
    - name: trial
      dtype: string
    - name: n_objects
      dtype: int64
    - name: oddity_index
      dtype: int64
    - name: images
      sequence: image
    - name: n_subjects
      dtype: int64
    - name: human_avg
      dtype: float64
    - name: human_sem
      dtype: float64
    - name: human_std
      dtype: float64
    - name: RT_avg
      dtype: float64
    - name: RT_sem
      dtype: float64
    - name: RT_std
      dtype: float64
    - name: DINOv2G_avg
      dtype: float64
    - name: DINOv2G_std
      dtype: float64
    - name: DINOv2G_sem
      dtype: float64
  splits:
    - name: train
      num_bytes: 384413356.563
      num_examples: 2019
  download_size: 382548893
  dataset_size: 384413356.563
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# MOCHI: Multiview Object Consistency in Humans and Image models

We introduce MOCHI (Multiview Object Consistency in Humans and Image models), a benchmark to evaluate the alignment between humans and image models on 3D shape understanding.

To download the dataset from Hugging Face, first install the relevant Hugging Face libraries

```shell
pip install datasets huggingface_hub
```

and download MOCHI

```python
from datasets import load_dataset

# download huggingface dataset
benchmark = load_dataset("tzler/MOCHI")['train']

# there are 2019 trials; let's pick one
i_trial = benchmark[1879]
```

Here, `i_trial` is a dictionary with trial-related data, including human (`human_*` and `RT_*`) and model (`DINOv2G_*`) performance measures:

```python
{'dataset': 'shapegen',
 'condition': 'abstract2',
 'trial': 'shapegen2527',
 'n_objects': 3,
 'oddity_index': 2,
 'images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=1000x1000>,
  <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1000x1000>,
  <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1000x1000>],
 'n_subjects': 15,
 'human_avg': 1.0,
 'human_sem': 0.0,
 'human_std': 0.0,
 'RT_avg': 4324.733333333334,
 'RT_sem': 544.4202024405384,
 'RT_std': 2108.530377391076,
 'DINOv2G_avg': 1.0,
 'DINOv2G_std': 0.0,
 'DINOv2G_sem': 0.0}
```
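Because each trial carries matched human and model accuracies, benchmark-wide human–model agreement can be summarized by averaging these fields over trials. The sketch below uses a hypothetical helper (`summarize` is not part of the dataset or its repo) on plain trial dicts with toy values; with the real benchmark you would pass `load_dataset("tzler/MOCHI")['train']` instead of the toy list:

```python
def summarize(trials):
    """Return mean human and DINOv2G accuracy over a list of trial dicts."""
    n = len(trials)
    human = sum(t['human_avg'] for t in trials) / n
    model = sum(t['DINOv2G_avg'] for t in trials) / n
    return {'human_avg': human, 'DINOv2G_avg': model}

# toy trials using only fields defined in the dataset schema above
trials = [
    {'dataset': 'shapegen', 'condition': 'abstract2',
     'human_avg': 1.0, 'DINOv2G_avg': 1.0},
    {'dataset': 'shapegen', 'condition': 'abstract2',
     'human_avg': 0.8, 'DINOv2G_avg': 0.5},
]
print(summarize(trials))  # {'human_avg': 0.9, 'DINOv2G_avg': 0.75}
```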

as well as this trial's images:

```python
import matplotlib.pyplot as plt

plt.figure(figsize=[15, 4])
for i_plot in range(len(i_trial['images'])):
    plt.subplot(1, len(i_trial['images']), i_plot + 1)
    plt.imshow(i_trial['images'][i_plot])
    if i_plot == i_trial['oddity_index']:
        plt.title('odd-one-out')
    plt.axis('off')
plt.show()
```
*example trial*
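The `dataset` and `condition` fields also make it straightforward to break performance down by subset. A minimal sketch, again on toy trial dicts (the helper name and the condition label `abstract1` are illustrative assumptions, not values guaranteed to appear in the benchmark):

```python
from collections import defaultdict

def accuracy_by_condition(trials):
    """Mean human accuracy for each (dataset, condition) pair."""
    sums = defaultdict(lambda: [0.0, 0])
    for t in trials:
        key = (t['dataset'], t['condition'])
        sums[key][0] += t['human_avg']
        sums[key][1] += 1
    return {key: total / n for key, (total, n) in sums.items()}

trials = [
    {'dataset': 'shapegen', 'condition': 'abstract2', 'human_avg': 1.0},
    {'dataset': 'shapegen', 'condition': 'abstract2', 'human_avg': 0.5},
    {'dataset': 'shapegen', 'condition': 'abstract1', 'human_avg': 0.75},
]
print(accuracy_by_condition(trials))
```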