matteogabburo committed
Commit 5302ac5
1 Parent(s): 15aef25

Update README.md

Files changed (1): README.md +24 -4
README.md CHANGED
@@ -17,15 +17,35 @@ size_categories:
  ---
  ## Dataset Description

- **mWikiQA** is the translated version of WikiQA. It contains 3047 questions sampled from Bing query logs. The candidate answer sentences are extracted from Wikipedia and then manually labelled to assess whether it is a correct answer.

- The dataset has been translated in 5 languages, and the translation process as described in this paper:

- [Datasets for Multilingual Answer Sentence Selection](https://arxiv.org/abs/2406.10172 'Datasets for Multilingual Answer Sentence Selection')

  ## Citation

- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**
  ```
 
  ---
  ## Dataset Description

+ **mWikiQA** is a translated version of WikiQA. It contains 3,047 questions sampled from Bing query logs. The candidate answer sentences are extracted from Wikipedia and then manually labeled to assess whether they are correct answers.

+ The dataset has been translated into five European languages (French, German, Italian, Portuguese, and Spanish); the translation process is described in the paper [Datasets for Multilingual Answer Sentence Selection](https://arxiv.org/abs/2406.10172).

+ Each example has the following format:
+
+ ```
+ {
+ 'eid': 1214,
+ 'qid': 141,
+ 'cid': 0,
+ 'label': 1,
+ 'question': 'Was bedeutet Karma im Buddhismus?',
+ 'candidate': 'Karma (Sanskrit, auch karman, Pali: Kamma) bedeutet "Handlung" oder "Tun"; was auch immer man tut, sagt oder denkt, ist ein Karma.'
+ }
+ ```
+
+ Where:
+
+ - **eid**: the unique ID of the example (a question-candidate pair)
+ - **qid**: the unique ID of the question
+ - **cid**: the unique ID of the answer candidate
+ - **label**: whether the answer candidate `candidate` is a correct answer to `question` (1 if correct, 0 otherwise)
+ - **question**: the question text
+ - **candidate**: the answer candidate text

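+ As a usage sketch (not part of the original dataset card), the snippet below shows how the dataset could be loaded and the correct candidates grouped by question with the Hugging Face `datasets` library. The repository id and split name are assumptions for illustration; check the dataset card for the exact configuration and split names.
+
+ ```python
+ from collections import defaultdict
+
+ from datasets import load_dataset
+
+ # Assumed repository id; not stated in this section of the README.
+ dataset = load_dataset("matteogabburo/mWikiQA")
+
+ # Collect the correct candidates (label == 1) under their question id,
+ # the usual starting point for answer sentence selection.
+ correct = defaultdict(list)
+ for example in dataset["train"]:  # assumed split name
+     if example["label"] == 1:
+         correct[example["qid"]].append(example["candidate"])
+
+ print(f"{len(correct)} questions have at least one correct candidate")
+ ```
+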
  ## Citation

+ If you find this dataset useful, please cite the following paper:

  **BibTeX:**
  ```