elenanereiss committed
Commit c279374
1 Parent(s): 001f29e

Update README.md


Hello @thomwolf, @jplu, @lewtun, @lhoestq, @stefan-it, @mariamabarham, I added some information to the description. This makes it easier to find the dataset.

Files changed (1)
  1. README.md +29 -12
README.md CHANGED
@@ -1,5 +1,23 @@
  ---
- paperswithcode_id: null
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - found
+ language:
+ - de
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - token-classification
+ task_ids:
+ - named-entity-recognition
+ paperswithcode_id: nosta-d-named-entity-annotation-for-german
  pretty_name: GermEval14
  ---
 
@@ -33,8 +51,8 @@ pretty_name: GermEval14
 
  - **Homepage:** [https://sites.google.com/site/germeval2014ner/](https://sites.google.com/site/germeval2014ner/)
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf](https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf)
+ - **Point of Contact:** [Darina Benikova](mailto:benikova@aiphes.tu-darmstadt.de)
  - **Size of downloaded dataset files:** 9.81 MB
  - **Size of the generated dataset:** 17.19 MB
  - **Total amount of disk used:** 27.00 MB
@@ -49,7 +67,7 @@ The GermEval 2014 NER Shared Task builds on a new dataset with German Named Enti
 
  ### Languages
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ German
 
  ## Dataset Structure
 
@@ -61,10 +79,9 @@ The GermEval 2014 NER Shared Task builds on a new dataset with German Named Enti
  - **Size of the generated dataset:** 17.19 MB
  - **Total amount of disk used:** 27.00 MB
 
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
+ An example of 'train' looks as follows. This example was too long and was cropped:
 
+ ```json
  {
  "id": "11",
  "ner_tags": [13, 14, 14, 14, 14, 0, 0, 0, 0, 0, 0, 0, 19, 20, 13, 0, 1, 0, 0, 0, 0, 0, 19, 20, 20, 0, 0, 0, 0, 3, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
@@ -91,7 +108,7 @@ The data fields are the same among all splits.
  |-----------|----:|---------:|---:|
  |germeval_14|24000| 2200|5100|
 
- ## Dataset Creation
+ <!--## Dataset Creation
 
  ### Curation Rationale
 
@@ -133,17 +150,17 @@ The data fields are the same among all splits.
 
  ### Other Known Limitations
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
 
  ## Additional Information
 
- ### Dataset Curators
+ <!--### Dataset Curators
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
 
  ### Licensing Information
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/)
 
  ### Citation Information
 
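For readers of the updated card, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library. It assumes the Hub id `germeval_14` used in the split table above; the `tokens` field name is also an assumption, since the cropped example only shows `id` and `ner_tags`.

```python
# Minimal sketch (assumptions: Hub id "germeval_14"; a "tokens" field alongside
# "id" and "ner_tags" — only the latter two appear in the cropped example above).
from datasets import load_dataset

ds = load_dataset("germeval_14")

# Splits listed in the card's table: train (24000), validation (2200), test (5100).
print({split: ds[split].num_rows for split in ds})

example = ds["train"][0]

# "ner_tags" stores integer class ids; the feature metadata maps ids back to label strings.
label_names = ds["train"].features["ner_tags"].feature.names
print(example.get("tokens"))
print([label_names[i] for i in example["ner_tags"]])
```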