---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- en
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
---

# Dataset Card for "docred"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/thunlp/DocRED](https://github.com/thunlp/DocRED)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 20.03 MB
- **Size of the generated dataset:** 19.19 MB
- **Total amount of disk used:** 39.23 MB

### [Dataset Summary](#dataset-summary)

Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features:
- DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text.
- DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document.
- Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios.

### [Supported Tasks](#supported-tasks)

Document-level relation extraction: given a document with annotated entity mentions, predict the Wikidata relations that hold between entity pairs, together with the evidence sentences supporting each relation. The human-annotated splits support fully supervised training, while the distantly supervised split enables weakly supervised settings.

### [Languages](#languages)

The dataset is in English; documents are drawn from English Wikipedia.

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### default

- **Size of downloaded dataset files:** 20.03 MB
- **Size of the generated dataset:** 19.19 MB
- **Total amount of disk used:** 39.23 MB

An example of 'train_annotated' looks as follows.
```
{
    "labels": {
        "evidence": [[0]],
        "head": [0],
        "relation_id": ["P1"],
        "relation_text": ["is_a"],
        "tail": [0]
    },
    "sents": [["This", "is", "a", "sentence"], ["This", "is", "another", "sentence"]],
    "title": "Title of the document",
    "vertexSet": [[{
        "name": "sentence",
        "pos": [3],
        "sent_id": 0,
        "type": "NN"
    }, {
        "name": "sentence",
        "pos": [3],
        "sent_id": 1,
        "type": "NN"
    }], [{
        "name": "This",
        "pos": [0],
        "sent_id": 0,
        "type": "NN"
    }]]
}
```
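
A record's `labels` entry stores annotations as parallel lists, with `head` and `tail` indexing into `vertexSet` (records can be obtained e.g. via `load_dataset("docred")` from the 🤗 `datasets` library). A minimal sketch in plain Python, using the dummy record above; the helper name `triples` is ours, not part of the dataset:

```python
# Assemble (head, relation, tail, evidence) tuples from a DocRED-style record.
# Uses the dummy example above; real records follow the same layout.
example = {
    "labels": {
        "evidence": [[0]],
        "head": [0],
        "relation_id": ["P1"],
        "relation_text": ["is_a"],
        "tail": [0],
    },
    "sents": [["This", "is", "a", "sentence"], ["This", "is", "another", "sentence"]],
    "title": "Title of the document",
    "vertexSet": [
        [{"name": "sentence", "pos": [3], "sent_id": 0, "type": "NN"},
         {"name": "sentence", "pos": [3], "sent_id": 1, "type": "NN"}],
        [{"name": "This", "pos": [0], "sent_id": 0, "type": "NN"}],
    ],
}

def triples(record):
    """Yield (head_name, relation_text, tail_name, evidence_sent_ids) tuples.

    `head`/`tail` index into `vertexSet`; each entity is a list of mentions,
    so we take the first mention's surface form as its name.
    """
    labels = record["labels"]
    for h, t, rel, ev in zip(labels["head"], labels["tail"],
                             labels["relation_text"], labels["evidence"]):
        head_name = record["vertexSet"][h][0]["name"]
        tail_name = record["vertexSet"][t][0]["name"]
        yield head_name, rel, tail_name, ev

print(list(triples(example)))
# → [('sentence', 'is_a', 'sentence', [0])]
```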

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### default
- `title`: a `string` feature.
- `sents`: a `list` of sentences, each a `list` of `string` tokens.
- `vertexSet`: a `list` of entities; each entity is a `list` of mentions, and each mention has:
  - `name`: a `string` feature (the mention's surface form).
  - `sent_id`: a `int32` feature (index of the sentence containing the mention).
  - `pos`: a `list` of `int32` features (token positions of the mention within the sentence).
  - `type`: a `string` feature (the entity type).
- `labels`: a dictionary feature containing parallel lists:
  - `head`: a `int32` feature (index of the head entity in `vertexSet`).
  - `tail`: a `int32` feature (index of the tail entity in `vertexSet`).
  - `relation_id`: a `string` feature (Wikidata property ID).
  - `relation_text`: a `string` feature (human-readable relation name).
  - `evidence`: a `list` of `int32` features (IDs of the sentences supporting the relation).

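An entity mention's surface tokens can be recovered from `sents` via its `sent_id` and `pos` fields. A small sketch, reading `pos` as a list of token indices as in the dummy example above (consult the official DocRED repository for the exact span convention):

```python
# Recover a mention's surface tokens from `sent_id` and `pos`.
sents = [["This", "is", "a", "sentence"], ["This", "is", "another", "sentence"]]
mention = {"name": "sentence", "pos": [3], "sent_id": 0, "type": "NN"}

# Index into the containing sentence at each listed token position.
tokens = [sents[mention["sent_id"]][i] for i in mention["pos"]]
print(" ".join(tokens))  # → sentence
```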
### [Data Splits Sample Size](#data-splits-sample-size)

| name    | train_annotated | train_distant | validation | test |
|---------|----------------:|--------------:|-----------:|-----:|
| default |            3053 |          1000 |       1000 | 1000 |

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

The documents are drawn from Wikipedia, with relations aligned to Wikidata (see the dataset summary above).

### [Annotations](#annotations)

Named entities and relations are human-annotated; a large-scale distantly supervised set is additionally provided (see the dataset summary above).

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{yao2019DocRED,
  title={{DocRED}: A Large-Scale Document-Level Relation Extraction Dataset},
  author={Yao, Yuan and Ye, Deming and Li, Peng and Han, Xu and Lin, Yankai and Liu, Zhenghao and Liu, Zhiyuan and Huang, Lixin and Zhou, Jie and Sun, Maosong},
  booktitle={Proceedings of ACL 2019},
  year={2019}
}
```

### Contributions

Thanks to [@ghomasHudson](https://github.com/ghomasHudson), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset.