Chuanyang-Jin committed
Commit: 35a3a78
1 Parent(s): 419c99f

Update README.md

Files changed (1)
  1. README.md +8 -1
README.md CHANGED
@@ -2,8 +2,15 @@
 license: mit
 language:
 - en
+task_categories:
+- question-answering
+tags:
+- Multimodal
+- Theory_of_Mind
+size_categories:
+- n<1K
 ---
-MMToM-QA is the first multimodal benchmark to evaluate machine Theory of Mind (ToM), the ability to understand people's minds. This benchmark is introduced in the paper [MMToM-QA: Multimodal Theory of Mind Question Answering](https://arxiv.org/abs/2401.08743) (Outstanding Paper Award at ACL 2024).
+MMToM-QA is the first multimodal benchmark to evaluate machine Theory of Mind (ToM), the ability to understand people's minds. It is introduced in the paper [MMToM-QA: Multimodal Theory of Mind Question Answering](https://arxiv.org/abs/2401.08743) (Outstanding Paper Award at ACL 2024).
 
 MMToM-QA systematically evaluates the cognitive ability to understand people's minds on multimodal data as well as on different unimodal data. MMToM-QA consists of 600 questions. The questions are categorized into seven types, evaluating belief inference and goal inference in rich and diverse situations. Each belief inference type has 100 questions, totaling 300 belief questions; each goal inference type has 75 questions, totaling 300 goal questions.
 
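For readers who want to work with the benchmark programmatically, below is a minimal sketch of loading it with the Hugging Face `datasets` library and tallying the question types described above. The repository ID and the `question_type` column name are assumptions, not taken from this commit; check the dataset card for the actual identifiers.

```python
# Minimal sketch (not from this commit): load MMToM-QA via the `datasets` library.
from collections import Counter

from datasets import load_dataset

# The repository ID below is an assumption; use the ID shown on this dataset page.
dataset = load_dataset("chuanyang-jin/MMToM-QA")

# Print each split and its size; the README states 600 questions in total.
for split_name, split in dataset.items():
    print(split_name, len(split))

# If the examples expose a question-type column (an assumption about the schema),
# tally the seven types: 100 questions per belief type, 75 per goal type.
split = next(iter(dataset.values()))
if "question_type" in split.column_names:
    print(Counter(split["question_type"]))
```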