---
license: cc-by-4.0
---

# HeQ - Hebrew Question Answering Dataset

## Summary
HeQ is a question answering dataset in Modern Hebrew, consisting of 30,147 questions.

## Introduction
The dataset follows the format and crowdsourcing methodology of SQuAD (the Stanford Question Answering Dataset) and the original ParaShoot. A team of crowdworkers formulated and answered reading comprehension questions based on random paragraphs in Hebrew. The answer to each question is a segment of text (a span) contained in the relevant paragraph. The paragraphs are sourced from two platforms: (1) Hebrew Wikipedia, and (2) Geektime, an online Israeli news channel specializing in technology.
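Because HeQ follows the SQuAD format, each record pairs a paragraph with a question and an answer span given by its text and character offset. The sketch below illustrates that convention; the field names follow SQuAD, and the example record is invented for illustration, not taken from HeQ:

```python
# Hypothetical SQuAD-style record (illustrative values, not actual HeQ data).
record = {
    "context": "Tel Aviv was founded in 1909 on the outskirts of Jaffa.",
    "question": "When was Tel Aviv founded?",
    "answers": {"text": ["1909"], "answer_start": [24]},
}

def answer_span(rec):
    """Recover the answer span from the context using the stored character offset."""
    start = rec["answers"]["answer_start"][0]
    text = rec["answers"]["text"][0]
    return rec["context"][start:start + len(text)]

print(answer_span(record))  # → 1909
```

The invariant worth checking when loading such data is that the stored answer text equals the context substring at `answer_start`.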

## Question Features
Two types of questions were collected:
1. Answerable questions (21,784): Questions whose answer appears as a span in the paragraph.
2. Unanswerable questions (8,363): Questions related to the paragraph's content, for which the paragraph suggests a plausible answer but does not explicitly include the correct one.

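In SQuAD-v2-style releases, unanswerable questions are conventionally encoded with an empty answer list. Assuming HeQ follows that convention (an assumption about the released format; the records below are invented), the two question types can be separated like this:

```python
# Illustrative records only; unanswerable questions carry an empty answer
# list, as in SQuAD v2 (an assumption about HeQ's released format).
records = [
    {"question": "Q1", "answers": {"text": ["A1"], "answer_start": [0]}},
    {"question": "Q2", "answers": {"text": [], "answer_start": []}},
]

answerable = [r for r in records if r["answers"]["text"]]
unanswerable = [r for r in records if not r["answers"]["text"]]
print(len(answerable), len(unanswerable))  # → 1 1
```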

## Quality Labels
As part of ongoing quality control during the collection process, and of additional checks on the test and validation sets, 28% of the final data was manually verified. Of the final data, 16% received one of the following labels:

- Verified: Questions that passed the threshold and were relatively easy, with wording identical or similar to that of the relevant sentence in the paragraph, or very common questions.
- Good: Questions whose wording differs significantly (lexically or syntactically) from that of the relevant sentence in the paragraph.
- Gold: Questions that involve inference-making.

The Geektime dataset also includes the following labels:
- Deixis: Questions whose answer is an expression of time that depends entirely on the context (e.g., "last year" or "tomorrow").
- Second: Questions whose answer is written in the second person ("you").
- Checked: Manually validated questions that did not receive a quality-based label.


## Additional Answers
After the data was split, the test and validation subsets underwent additional processing: where an answerable question had multiple correct answer spans, the additional possible answers were added to both subsets to make evaluation more robust. For example, if the answer appears in quotation marks, another possible answer could be the same answer without the quotation marks; likewise, an answer may or may not include a preceding preposition or an apposition. Each answerable question in the test and validation sets received 0 to 6 additional possible answers.
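With several gold spans per question, SQuAD-style evaluation scores a prediction against each reference and keeps the best match. A minimal exact-match sketch (the normalization here only strips surrounding whitespace and quotation marks; the official SQuAD evaluation script does considerably more):

```python
def normalize(s):
    """Light normalization: strip surrounding whitespace and quotation marks."""
    return s.strip().strip('"\u201c\u201d')

def exact_match(prediction, gold_answers):
    """Best exact match over all gold answer variants (1 = match, 0 = no match)."""
    pred = normalize(prediction)
    return max(int(pred == normalize(g)) for g in gold_answers)

# A quoted and an unquoted variant of the same answer count as equal.
print(exact_match('"1909"', ["1909", '"1909"']))  # → 1
```

Taking the maximum over the answer variants is what makes the extra spans described above improve robustness: a prediction is not penalized for matching only one legitimate form of the answer.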

## Dataset Statistics
The table below shows the number of answerable and unanswerable questions, and the size of each split, for each source of HeQ:

| | Hebrew Wikipedia | Geektime |
|--------------|------------------|----------|
| Answerable   | 9,987  | 9,667  |
| Unanswerable | 3,533  | 3,955  |
| Train        | 13,622 | 13,622 |
| Val          | 536    | 522    |
| Test         | 545    | 527    |

The table below shows the number of unique questions, paragraphs, and articles in each source of HeQ:

| | Hebrew Wikipedia | Geektime |
|------------|--------|--------|
| Questions  | 13,520 | 13,622 |
| Paragraphs | 2,006  | 2,395  |
| Articles   | 1,481  | 2,317  |

## Code
The `Quality_control_Hebrew_QA.ipynb` notebook extracts the specific data features that formed the basis for the validation process. The code for the web-based annotation interface used in this project can be found [here](https://github.com/NNLP-IL/Parashoot-Tagging).

## Paper
https://u.cs.biu.ac.il/~yogo/heq.pdf

## Model
https://huggingface.co/amirdnc/HeQ

## Contributors
HeQ was annotated by Webiks for MAFAT, as part of the National Natural Language Processing Plan of Israel.

Contributors: Hilla Merhav Fine (Webiks), Roei Shlezinger (Webiks), Amir David Nissan Cohen (MAFAT)

Advisors: Reut Tsarfaty, Kfir Bar, Yoav Goldberg

## Acknowledgments
We would like to express our gratitude to Omri Keren and Omer Levy, the creators of [the original ParaShoot](https://github.com/omrikeren/ParaShoot), the first question answering dataset in Hebrew, who also contributed their annotation interface to this project. We are also grateful to Geektime for allowing us to annotate their articles and share the data as a public dataset.