---
license: cc-by-4.0
task_categories:
- question-answering
tags:
- 3D vision
- embodied AI
size_categories:
- 10K<n<100K
---

SQA3D: Situated Question Answering in 3D Scenes
===
1. Download the [SQA3D dataset](https://zenodo.org/record/7544818/files/sqa_task.zip?download=1) under `assets/data/`. The following files should be used:
```plain
./assets/data/sqa_task/balanced/*
./assets/data/sqa_task/answer_dict.json
```

2. The dataset has been split into `train`, `val` and `test`. For each split, we provide both a question file, e.g. `v1_balanced_questions_train_scannetv2.json`, and an annotation file, e.g. `v1_balanced_sqa_annotations_train_scannetv2.json`.

- The format of the question file:

Run the following code:
```python
import json
q = json.load(open('v1_balanced_questions_train_scannetv2.json', 'r'))
# Print the total number of questions
print('#questions: ', len(q['questions']))
print(q['questions'][0])
```
The output is:
```json
{
  "alternative_situation":
  [
    "I stand looking out of the window in thought and a radiator is right in front of me.",
    "I am looking outside through the window behind the desk."
  ],
  "question": "What color is the desk to my right?",
  "question_id": 220602000000,
  "scene_id": "scene0380_00",
  "situation": "I am facing a window and there is a desk on my right and a chair behind me."
}
```
The following fields are **useful**: `question`, `question_id`, `scene_id`, `situation`.

- The format of the annotation file:

Run the following code:
```python
import json
a = json.load(open('v1_balanced_sqa_annotations_train_scannetv2.json', 'r'))
# Print the total number of annotations; it should match the number of questions
print('#annotations: ', len(a['annotations']))
print(a['annotations'][0])
```
The output is:
```json
{
  "answer_type": "other",
  "answers":
  [
    {
      "answer": "brown",
      "answer_confidence": "yes",
      "answer_id": 1
    }
  ],
  "position":
  {
    "x": -0.9651003385573296,
    "y": -1.2417634435553606,
    "z": 0
  },
  "question_id": 220602000000,
  "question_type": "N/A",
  "rotation":
  {
    "_w": 0.9950041652780182,
    "_x": 0,
    "_y": 0,
    "_z": 0.09983341664682724
  },
  "scene_id": "scene0380_00"
}
```
The following fields are **useful**: `answers[0]['answer']`, `question_id`, `scene_id`.
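
In the sample annotation above, only the `_w` and `_z` components of `rotation` are non-zero, i.e. the agent's orientation is a pure rotation about the vertical (z) axis. A minimal sketch of recovering the heading angle, using the values copied from the sample output:

```python
import math

# Rotation quaternion from the sample annotation above: only _w and _z
# are non-zero, so this is a pure yaw rotation about the z (up) axis.
rot = {"_w": 0.9950041652780182, "_x": 0, "_y": 0, "_z": 0.09983341664682724}

# For a quaternion (w, 0, 0, z), the rotation angle is 2 * atan2(z, w).
yaw = 2 * math.atan2(rot["_z"], rot["_w"])
print(round(yaw, 6))  # 0.2 radians
```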

**Note**: To find the answer to a question in the question file, look up the annotation with the matching `question_id`.

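
This lookup can be sketched as follows, using inline samples copied from the formats shown above instead of loading the actual files:

```python
# Inline samples mirroring the question/annotation formats shown above.
questions = [{
    "question": "What color is the desk to my right?",
    "question_id": 220602000000,
    "scene_id": "scene0380_00",
    "situation": "I am facing a window and there is a desk on my right and a chair behind me.",
}]
annotations = [{
    "answers": [{"answer": "brown", "answer_confidence": "yes", "answer_id": 1}],
    "question_id": 220602000000,
    "scene_id": "scene0380_00",
}]

# Build a question_id -> annotation index for O(1) lookup.
ann_by_qid = {a["question_id"]: a for a in annotations}

for q in questions:
    ans = ann_by_qid[q["question_id"]]["answers"][0]["answer"]
    print(q["question"], "->", ans)  # What color is the desk to my right? -> brown
```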
3. We provide the mapping between answers and class labels in `answer_dict.json`:
```python
import json
j = json.load(open('answer_dict.json', 'r'))
print('Total classes: ', len(j[0]))
print('The class label of answer \'table\' is: ', j[0]['table'])
print('The corresponding answer of class 123 is: ', j[1]['123'])
```

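
To illustrate the two-way structure of `answer_dict.json` without the file itself: `j[0]` maps an answer string to its class label, and `j[1]` maps the class label (as a string key) back to the answer. A minimal sketch with made-up entries (the real file contains the full answer vocabulary):

```python
# Illustrative stand-in for answer_dict.json; the entries are made up.
answer_to_label = {"brown": 0, "table": 1}  # j[0]: answer -> class label
label_to_answer = {str(v): k for k, v in answer_to_label.items()}  # j[1]: label -> answer
j = [answer_to_label, label_to_answer]

print('Total classes: ', len(j[0]))  # Total classes:  2
print(j[0]['table'])                 # 1
print(j[1]['1'])                     # table
```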
4. Loader, model and training code can be found at https://github.com/SilongYong/SQA3D.