Datasets
Tasks: Token Classification
Sub-tasks: named-entity-recognition
Languages: German
Size: 100K<n<1M
License: cc-by-4.0

Update README.md #1
by elenanereiss - opened
README.md CHANGED

@@ -1,5 +1,23 @@
 ---
-
+annotations_creators:
+- crowdsourced
+language_creators:
+- found
+language:
+- de
+license:
+- cc-by-4.0
+multilinguality:
+- monolingual
+size_categories:
+- 100K<n<1M
+source_datasets:
+- original
+task_categories:
+- token-classification
+task_ids:
+- named-entity-recognition
+paperswithcode_id: nosta-d-named-entity-annotation-for-german
 pretty_name: GermEval14
 ---
 
@@ -33,8 +51,8 @@ pretty_name: GermEval14
 
 - **Homepage:** [https://sites.google.com/site/germeval2014ner/](https://sites.google.com/site/germeval2014ner/)
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Paper:** [
-- **Point of Contact:** [
+- **Paper:** [https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf](https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf)
+- **Point of Contact:** [Darina Benikova](mailto:benikova@aiphes.tu-darmstadt.de)
 - **Size of downloaded dataset files:** 9.81 MB
 - **Size of the generated dataset:** 17.19 MB
 - **Total amount of disk used:** 27.00 MB
 
@@ -49,7 +67,7 @@ The GermEval 2014 NER Shared Task builds on a new dataset with German Named Enti
 
 ### Languages
 
-
+German
 
 ## Dataset Structure
 
@@ -61,10 +79,9 @@ The GermEval 2014 NER Shared Task builds on a new dataset with German Named Enti
 - **Size of the generated dataset:** 17.19 MB
 - **Total amount of disk used:** 27.00 MB
 
-An example of 'train' looks as follows.
-```
-This example was too long and was cropped:
+An example of 'train' looks as follows. This example was too long and was cropped:
 
+```json
 {
     "id": "11",
     "ner_tags": [13, 14, 14, 14, 14, 0, 0, 0, 0, 0, 0, 0, 19, 20, 13, 0, 1, 0, 0, 0, 0, 0, 19, 20, 20, 0, 0, 0, 0, 3, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 
@@ -143,7 +160,7 @@ The data fields are the same among all splits.
 
 ### Licensing Information
 
-[
+[CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/)
 
 ### Citation Information
 
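The `ner_tags` field in the example record stores integer indices into the dataset's BIO label list, so consumers usually decode the ids and then group B-/I- tags into entity spans. A minimal sketch of that decoding, using a small illustrative label subset (the real GermEval14 feature defines 25 labels, including the `deriv`/`part` variants; the exact ordering lives in the dataset's `features` and is not reproduced here):

```python
# Illustrative subset of a BIO label list -- NOT the authoritative
# GermEval14 ordering, which has 25 labels (LOC/ORG/OTH/PER plus
# deriv/part variants) and is defined in the dataset's features.
LABELS = ["O", "B-LOC", "I-LOC", "B-ORG", "I-ORG", "B-PER", "I-PER"]

def decode_tags(tag_ids, labels=LABELS):
    """Map integer ner_tags ids to their BIO label strings."""
    return [labels[i] for i in tag_ids]

def bio_to_spans(tags):
    """Group BIO tags into (entity_type, start, end_exclusive) spans."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if etype is not None:           # close the span in progress
                spans.append((etype, start, i))
                etype = None
            if tag.startswith("B-"):        # open a new span
                start, etype = i, tag[2:]
        elif tag.startswith("I-") and etype != tag[2:]:
            # ill-formed I- without a matching B-: start a new span anyway
            if etype is not None:
                spans.append((etype, start, i))
            start, etype = i, tag[2:]
    return spans

tags = decode_tags([5, 6, 0, 1, 0])
print(tags)                 # ['B-PER', 'I-PER', 'O', 'B-LOC', 'O']
print(bio_to_spans(tags))   # [('PER', 0, 2), ('LOC', 3, 4)]
```

With the dataset's own label list substituted for `LABELS`, the same two helpers turn the cropped example's `ner_tags` into human-readable entity spans.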