---
dataset_info:
  features:
  - name: dataset
    dtype: string
  - name: condition
    dtype: string
  - name: trial
    dtype: string
  - name: n_objects
    dtype: int64
  - name: oddity_index
    dtype: int64
  - name: images
    sequence: image
  - name: n_subjects
    dtype: int64
  - name: human_avg
    dtype: float64
  - name: human_sem
    dtype: float64
  - name: human_std
    dtype: float64
  - name: RT_avg
    dtype: float64
  - name: RT_sem
    dtype: float64
  - name: RT_std
    dtype: float64
  - name: DINOv2G_avg
    dtype: float64
  - name: DINOv2G_std
    dtype: float64
  - name: DINOv2G_sem
    dtype: float64
  splits:
  - name: train
    num_bytes: 384413356.563
    num_examples: 2019
  download_size: 382548893
  dataset_size: 384413356.563
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
## MOCHI: Multiview Object Consistency in Humans and Image models
We introduce a benchmark to evaluate the alignment between humans and image models on 3D shape understanding: **M**ultiview **O**bject **C**onsistency in **H**umans and **I**mage models (**MOCHI**).
To download the dataset from huggingface, install the relevant huggingface libraries
```
pip install datasets huggingface_hub
```
and download MOCHI:
```python
from datasets import load_dataset
# download the huggingface dataset
benchmark = load_dataset("tzler/MOCHI")['train']
# there are 2019 trials; let's pick one
i_trial = benchmark[1879]
```
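The full download is roughly 380 MB. If you only want to peek at a few trials first, the `datasets` library also supports streaming; here is a minimal sketch, using standard `datasets` functionality rather than anything MOCHI-specific:
```python
from datasets import load_dataset

# stream trials one at a time instead of downloading the full dataset
streamed = load_dataset("tzler/MOCHI", split="train", streaming=True)
first_trial = next(iter(streamed))
print(first_trial['dataset'], first_trial['trial'], first_trial['n_objects'])
```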
Here, `i_trial` is a dictionary with trial-related data, including human (`human_*` and `RT_*`) and model (`DINOv2G_*`) performance measures:
```
{'dataset': 'shapegen',
'condition': 'abstract2',
'trial': 'shapegen2527',
'n_objects': 3,
'oddity_index': 2,
'images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=1000x1000>,
<PIL.PngImagePlugin.PngImageFile image mode=RGB size=1000x1000>,
<PIL.PngImagePlugin.PngImageFile image mode=RGB size=1000x1000>],
'n_subjects': 15,
'human_avg': 1.0,
'human_sem': 0.0,
'human_std': 0.0,
'RT_avg': 4324.733333333334,
'RT_sem': 544.4202024405384,
'RT_std': 2108.530377391076,
'DINOv2G_avg': 1.0,
'DINOv2G_std': 0.0,
 'DINOv2G_sem': 0.0}
```
as well as this trial's images:
```python
import matplotlib.pyplot as plt

# display this trial's images side by side, marking the odd-one-out
plt.figure(figsize=[15,4])
for i_plot in range(len(i_trial['images'])):
plt.subplot(1,len(i_trial['images']),i_plot+1)
plt.imshow(i_trial['images'][i_plot])
if i_plot == i_trial['oddity_index']: plt.title('odd-one-out')
plt.axis('off')
plt.show()
```
<img src="example_trial.png" alt="example trial"/>
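Since every trial carries both human and model accuracy, benchmark-level comparisons take only a few lines. Below is a minimal sketch (grouping by `dataset` and dropping the image column to skip decoding are our own choices, not part of the dataset API):
```python
from collections import defaultdict
import numpy as np

# drop the image column so iteration doesn't decode every PNG
trials = benchmark.remove_columns('images')

# average human and DINOv2G accuracy within each sub-dataset
human, model = defaultdict(list), defaultdict(list)
for trial in trials:
    human[trial['dataset']].append(trial['human_avg'])
    model[trial['dataset']].append(trial['DINOv2G_avg'])

for name in sorted(human):
    print(f"{name}: human={np.mean(human[name]):.3f}, DINOv2G={np.mean(model[name]):.3f}")
```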
The complete results on this benchmark, including all human and model performance data (e.g., DINOv2, CLIP, and MAE at multiple sizes), can be downloaded from the github repo:
```
git clone https://github.com/tzler/MOCHI.git
```
And then loaded with a few lines of code:
```python
import pandas

# load the results table from the github repo we just cloned
df = pandas.read_csv('MOCHI/assets/benchmark.csv')
# extract trial info using the same index as the huggingface dataset above
i_trial_index = 1879
df.loc[i_trial_index]['trial']
```
This returns the trial name, `shapegen2527`, which matches the huggingface dataset entry at this index.
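Both tables share the `trial` column, so the two sources can also be aligned wholesale. A hedged sketch (the `to_pandas` conversion and the merge are our own, assuming `benchmark.csv` keeps one row per trial):
```python
# convert the huggingface dataset to a dataframe, dropping images for speed
hf_df = benchmark.remove_columns('images').to_pandas()

# join the github results onto the huggingface trials by trial name
merged = hf_df.merge(df, on='trial', suffixes=('_hf', '_csv'))
print(merged.shape)
```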
If you find this benchmark useful, please cite our paper:
```bibtex
@misc{bonnen2024evaluatingmultiviewobjectconsistency,
title={Evaluating Multiview Object Consistency in Humans and Image Models},
author={Tyler Bonnen and Stephanie Fu and Yutong Bai and Thomas O'Connell and Yoni Friedman and Nancy Kanwisher and Joshua B. Tenenbaum and Alexei A. Efros},
year={2024},
eprint={2409.05862},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2409.05862},
}
``` |