JohnnyBoy00 committed
Commit b54aa35
1 Parent(s): 50c21de

Update README.md
---
pretty_name: SAF - Legal Domain - German
annotations_creators:
- expert-generated
language:
- de
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- short answer feedback
- legal domain
task_categories:
- text2text-generation
dataset_info:
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: reference_answer
    dtype: string
  - name: provided_answer
    dtype: string
  - name: answer_feedback
    dtype: string
  - name: verification_feedback
    dtype: string
  - name: error_class
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 2223070
    num_examples: 1596
  - name: validation
    num_bytes: 546759
    num_examples: 400
  - name: test_unseen_answers
    num_bytes: 309580
    num_examples: 221
  - name: test_unseen_questions
    num_bytes: 360672
    num_examples: 275
  download_size: 455082
  dataset_size: 3440081
---

# Dataset Card for "saf_legal_domain_german"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Contributions](#contributions)

## Dataset Description

### Dataset Summary

This Short Answer Feedback (SAF) dataset contains short answers to 19 German questions in the domain of German social law (with reference answers). The idea of constructing a bilingual (English and German) short answer dataset as a way to remedy the lack of content-focused feedback datasets was introduced in [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022). Please refer to the [saf_micro_job_german](https://huggingface.co/datasets/JohnnyBoy00/saf_micro_job_german) and [saf_communication_networks_english](https://huggingface.co/datasets/JohnnyBoy00/saf_communication_networks_english) datasets for similarly constructed datasets that can be used for SAF.

### Supported Tasks and Leaderboards

- `short_answer_feedback`: The dataset can be used to train a Text2Text Generation model from Hugging Face Transformers to generate automatic short answer feedback.

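As a minimal sketch of this task framing, one way to turn a record into a source/target pair for a seq2seq model is shown below. The prompt template and the Hub dataset id `JohnnyBoy00/saf_legal_domain_german` are assumptions for illustration, not the authors' exact setup.

```python
# Sketch: mapping a SAF record to a text2text training pair.
# The German prompt template below is illustrative, not the authors' format.
# To load the real data from the Hub (requires network), one would use:
#   from datasets import load_dataset
#   ds = load_dataset("JohnnyBoy00/saf_legal_domain_german")  # id assumed

def to_text2text(example: dict) -> dict:
    """Map a SAF record to a (source, target) pair for seq2seq training."""
    source = (
        f"Antwort bewerten: Frage: {example['question']} "
        f"Antwort: {example['provided_answer']}"
    )
    # The model learns to produce the score together with the feedback text.
    target = f"{example['score']} Feedback: {example['answer_feedback']}"
    return {"source": source, "target": target}

# Toy record with the fields used above (values taken from the example instance).
sample = {
    "question": "Ist das eine Frage?",
    "provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
    "answer_feedback": "Korrekt.",
    "score": 1.0,
}
pair = to_text2text(sample)
```

The choice of emitting the score and the feedback in a single target string follows the general text2text framing; other decompositions (e.g. separate scoring and feedback heads) are equally possible.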
### Languages

The questions, reference answers, provided answers and answer feedback in the dataset are written in German.

## Dataset Structure

### Data Instances

An example from the training split looks as follows.

```
{
    "id": "1",
    "question": "Ist das eine Frage?",
    "reference_answer": "Ja, das ist eine Frage.",
    "provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
    "answer_feedback": "Korrekt.",
    "verification_feedback": "Correct",
    "error_class": "Keine",
    "score": 1.0
}
```
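A record like the one above can be sanity-checked against the card's schema in plain Python (no `datasets` dependency). This is a sketch; the expected field set is taken from this card, and the helper name is hypothetical.

```python
# Expected field names and Python types for one record, per this card's schema.
EXPECTED_FIELDS = {
    "id": str,
    "question": str,
    "reference_answer": str,
    "provided_answer": str,
    "answer_feedback": str,
    "verification_feedback": str,
    "error_class": str,
    "score": float,  # float64 in the dataset schema
}

def check_record(record: dict) -> bool:
    """Return True if the record has exactly the expected fields and types."""
    return set(record) == set(EXPECTED_FIELDS) and all(
        isinstance(record[name], typ) for name, typ in EXPECTED_FIELDS.items()
    )

record = {
    "id": "1",
    "question": "Ist das eine Frage?",
    "reference_answer": "Ja, das ist eine Frage.",
    "provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
    "answer_feedback": "Korrekt.",
    "verification_feedback": "Correct",
    "error_class": "Keine",
    "score": 1.0,
}
ok = check_record(record)
```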

### Data Fields

The data fields are the same among all splits.

- `id`: a `string` feature (UUID4 in HEX format).
- `question`: a `string` feature representing a question.
- `reference_answer`: a `string` feature representing a reference answer to the question.
- `provided_answer`: a `string` feature representing an answer that was provided for a particular question.
- `answer_feedback`: a `string` feature representing the feedback given to the provided answer.
- `verification_feedback`: a `string` feature representing a label derived automatically from the score. It can be `Correct` (`score` = 1), `Incorrect` (`score` = 0) or `Partially correct` (any intermediate score).
- `error_class`: a `string` feature representing the type of error identified when an answer is not fully correct.
- `score`: a `float64` feature (between 0 and 1) representing the score given to the provided answer.

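The mapping between `score` and `verification_feedback` described above can be sketched as a small helper (the function name is hypothetical):

```python
def verification_label(score: float) -> str:
    """Derive the verification_feedback label from a score in [0, 1]."""
    if score == 1.0:
        return "Correct"
    if score == 0.0:
        return "Incorrect"
    return "Partially correct"  # all intermediate scores

labels = [verification_label(s) for s in (1.0, 0.0, 0.5)]
```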
### Data Splits

The dataset comprises four splits.

- `train`: used for training; contains a set of questions and the answers provided to them.
- `validation`: used for validation; contains a set of questions and the answers provided to them (held out from the original training data).
- `test_unseen_answers`: used for testing; contains unseen answers to the questions present in the `train` split.
- `test_unseen_questions`: used for testing; contains questions that do not appear in the `train` split.

| Split               | train | validation | test_unseen_answers | test_unseen_questions |
|---------------------|------:|-----------:|--------------------:|----------------------:|
| Number of instances |  1596 |        400 |                 221 |                   275 |

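As a quick sanity check, the split sizes from the table above can be tallied in a few lines:

```python
# Split sizes as listed in the table above.
split_sizes = {
    "train": 1596,
    "validation": 400,
    "test_unseen_answers": 221,
    "test_unseen_questions": 275,
}
total = sum(split_sizes.values())  # 2492 instances overall
```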
## Additional Information

### Contributions

Thanks to [@JohnnyBoy2103](https://github.com/JohnnyBoy2103) for adding this dataset.