PereLluis13 committed on
Commit
5885b66
1 Parent(s): 7d0e865

Update README.md

Files changed (1)
  1. README.md +21 -14
README.md CHANGED
@@ -65,7 +65,7 @@ Dataset created for [REBEL](https://huggingface.co/Babelscape/rebel-large) datas
65
 
66
  ### Supported Tasks and Leaderboards
67
 
68
- - `text-retrieval-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists in extracting triplets from raw text, made of subject, object and relation type. Success on this task is typically measured by achieving a *high/low* [F1](https://huggingface.co/metrics/F1). The [BART](https://huggingface.co/transformers/model_doc/bart.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* 74.
69
 
70
  ### Languages
71
 
@@ -75,37 +75,44 @@ The dataset is in English, from the English Wikipedia.
75
 
76
  ### Data Instances
77
 
78
- Provide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
 
 
 
 
 
79
 
80
  ```
81
  {
82
- 'example_field': ...,
83
- ...
 
 
84
  }
85
  ```
86
 
87
- Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
88
 
89
  ### Data Fields
90
 
91
  List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
92
 
93
- - `example_field`: description of `example_field`
94
-
95
- Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [tagging app](https://github.com/huggingface/datasets-tagging), you will then only need to refine the generated descriptions.
 
96
 
97
  ### Data Splits
98
 
99
- Describe and name the splits in the dataset if there are more than one.
100
-
101
- Describe any criteria for splitting the data, if used. If their are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
102
 
103
  Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
104
 
105
  | | Train | Valid | Test |
106
  | ----- | ------ | ----- | ---- |
107
- | Input Sentences | | | |
108
- | Average Sentence Length | | | |
 
109
 
110
  ## Dataset Creation
111
 
@@ -133,7 +140,7 @@ Any Wikipedia and Wikidata contributor.
133
 
134
  #### Annotation process
135
 
136
- TThe dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile).
137
 
138
  #### Who are the annotators?
139
 
 
65
 
66
  ### Supported Tasks and Leaderboards
67
 
68
+ - `text-retrieval-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists of extracting triplets from raw text, each made up of a subject, an object, and a relation type. Success on this task is typically measured by achieving a *high* [F1](https://huggingface.co/metrics/F1). The [BART](https://huggingface.co/transformers/model_doc/bart.html)-based model currently achieves 74 Micro F1 and 51 Macro F1 on the 220 most frequent relation types.
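The Micro/Macro F1 distinction used above can be made concrete with a short sketch. This is a hedged illustration, not the official REBEL evaluation script: Micro F1 pools true/false positives across all relation types, while Macro F1 averages the per-type F1 scores.

```python
def micro_macro_f1(gold, pred):
    """Micro and Macro F1 over (subject, relation, object) triplet sets.

    Micro F1 pools true positives / false positives / false negatives
    across all relation types; Macro F1 averages per-type F1 scores.
    """
    gold_set, pred_set = set(gold), set(pred)
    rel_types = {t[1] for t in gold_set | pred_set}
    tp = fp = fn = 0
    per_type = []
    for rel in sorted(rel_types):
        g = {t for t in gold_set if t[1] == rel}
        p = {t for t in pred_set if t[1] == rel}
        tp_r, fp_r, fn_r = len(g & p), len(p - g), len(g - p)
        tp, fp, fn = tp + tp_r, fp + fp_r, fn + fn_r
        denom = 2 * tp_r + fp_r + fn_r
        per_type.append(2 * tp_r / denom if denom else 0.0)
    micro = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0
    macro = sum(per_type) / len(per_type) if per_type else 0.0
    return micro, macro
```

A prediction that gets every `author` triplet but misses every `creator` triplet can score well on Micro F1 while Macro F1 drops, since rare relation types weigh equally in the macro average.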
69
 
70
  ### Languages
71
 
 
75
 
76
  ### Data Instances
77
 
78
+ REBEL
79
+
80
+ - `Size of downloaded dataset files`: 1490.02 MB
81
+ - `Size of the generated dataset`: 1199.27 MB
82
+ - `Total amount of disk used`: 2689.29 MB
83
+
84
 
85
  ```
86
  {
87
+ 'id': 'Q82442-1',
88
+ 'title': 'Arsène Lupin, Gentleman Burglar',
89
+ 'context': 'Arsène Lupin , Gentleman Burglar is the first collection of stories by Maurice Leblanc recounting the adventures of Arsène Lupin , released on 10 June 1907 .',
90
+ 'triplets': '<triplet> Arsène Lupin, Gentleman Burglar <subj> Maurice Leblanc <obj> author <triplet> Arsène Lupin <subj> Maurice Leblanc <obj> creator'
91
  }
92
  ```
93
 
94
+ The original data is in JSONL format and contains much more information. It is divided by Wikipedia article instead of by sentence, and contains metadata about Wikidata entities, their boundaries in the text, how each triplet was annotated, etc. For more information, check the [paper repository](https://huggingface.co/Babelscape/rebel-large) and the Relation Extraction dataset pipeline used to generate it, [cRocoDiLe](https://github.com/Babelscape/crocodile).
95
 
96
  ### Data Fields
97
 
98
  The dataset contains the following fields:
99
 
100
+ - `id`: ID of the instance. It consists of a unique ID matching a Wikipedia page, followed by a hyphen and a number indicating which sentence of the Wikipedia article it is.
101
+ - `title`: Title of the Wikipedia page the sentence comes from.
102
+ - `context`: Text from Wikipedia articles that serves as context for the Relation Extraction task.
103
+ - `triplets`: Linearized version of the triplets present in the text, split by the use of special tokens. For more info on this linearization check the [paper](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf).
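As a worked illustration of the linearization, here is a minimal sketch that parses a `triplets` string into (subject, relation, object) tuples. It is an assumption-laden simplification, not the official REBEL decoding code (which also handles model-specific tokens); in this format, each triplet opens with `<triplet>` followed by the subject text, `<subj>` precedes the object text, and `<obj>` precedes the relation label, as in the example instance above.

```python
def parse_triplets(linearized: str):
    """Parse a REBEL-style linearized string into (subject, relation, object) tuples.

    Simplified sketch: each triplet is `<triplet> subject <subj> object <obj> relation`.
    """
    triplets = []
    for chunk in linearized.split("<triplet>"):
        chunk = chunk.strip()
        if not chunk:
            continue
        subject, _, rest = chunk.partition("<subj>")
        obj, _, relation = rest.partition("<obj>")
        if relation:
            triplets.append((subject.strip(), relation.strip(), obj.strip()))
    return triplets

# Applied to the example instance, this yields:
# [('Arsène Lupin, Gentleman Burglar', 'author', 'Maurice Leblanc'),
#  ('Arsène Lupin', 'creator', 'Maurice Leblanc')]
```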
104
 
105
  ### Data Splits
106
 
107
+ Test and Validation splits are each 5% of the original data.
 
 
108
 
109
  The sizes of each split are as follows:
110
 
111
  | | Train | Valid | Test |
112
  | ----- | ------ | ----- | ---- |
113
+ | Input Sentences | 3,120,296 | 172,860 | 173,601 |
114
+ | Input Sentences (top 220 relation types as used in original paper) | 784,202 | 43,341 | 43,506 |
115
+ | Number of Triplets (top 220 relation types as used in original paper) | 878,555 | 48,514 | 48,852 |
116
 
117
  ## Dataset Creation
118
 
 
140
 
141
  #### Annotation process
142
 
143
+ The dataset was generated with the extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile).
144
 
145
  #### Who are the annotators?
146