---
license: mit
language:
- en
---
MMToM-QA is the first multimodal benchmark to evaluate machine Theory of Mind (ToM), the ability to understand people's minds. The benchmark is introduced in the paper [MMToM-QA: Multimodal Theory of Mind Question Answering](https://arxiv.org/abs/2401.08743), which received an Outstanding Paper Award at ACL 2024.

MMToM-QA systematically evaluates the cognitive ability to understand people's minds on multimodal data as well as on unimodal data alone (text-only or video-only). The benchmark consists of 600 questions across seven types, evaluating belief inference and goal inference in rich and diverse situations: three belief inference types with 100 questions each (300 belief questions in total) and four goal inference types with 75 questions each (300 goal questions in total).

Currently, only the text-only version of MMToM-QA is available on Hugging Face. For the multimodal or video-only versions, please visit the GitHub repository: https://github.com/chuanyangjin/MMToM-QA
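
To experiment with the text-only questions directly, they can be loaded with the `datasets` library. A minimal sketch follows; the repository id and split name are assumptions (they are not stated in this README), so verify them on the dataset page:

```python
from datasets import load_dataset

# A minimal loading sketch. The repository id "chuanyang-jin/MMToM-QA" and
# the "train" split are assumptions; check the dataset page for the exact
# id, configuration names, and splits.
dataset = load_dataset("chuanyang-jin/MMToM-QA", split="train")

# Inspect one record to see the available question fields.
print(dataset[0])
```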