---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- fr
license:
- other
multilinguality:
- unknown
pretty_name: OrangeSum
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids:
- unknown
---

# Dataset Card for GEM/OrangeSum

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/Tixierae/OrangeSum
- **Paper:** https://aclanthology.org/2021.emnlp-main.740
- **Leaderboard:** N/A
- **Point of Contact:** [Needs More Information]

### Link to Main Data Card

You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/OrangeSum).

### Dataset Summary 

OrangeSum is a French summarization dataset inspired by XSum. It features two subtasks: abstract generation and title generation. The data was sourced from "Orange Actu" articles between 2011 and 2020. 

You can load the dataset via:
```python
import datasets
data = datasets.load_dataset('GEM/OrangeSum')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/OrangeSum).
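
Once loaded, the dataset behaves like any other `datasets` object. As a hedged first-look sketch (the split name `train` and the field names are assumptions following common GEM conventions; confirm them with `column_names` on your own copy):

```python
import datasets

# Load the GEM version of OrangeSum.
data = datasets.load_dataset('GEM/OrangeSum')

# List the available splits and their sizes.
print(data)

# Inspect the schema; the exact field names (e.g. input/target) are an
# assumption here -- confirm them before writing any processing code.
train = data['train']
print(train.column_names)

# Print one full example.
print(train[0])
```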

#### paper
[ACL Anthology](https://aclanthology.org/2021.emnlp-main.740)

## Dataset Overview

### Where to find the Data and its Documentation

#### Download

<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/Tixierae/OrangeSum)

#### Paper

<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://aclanthology.org/2021.emnlp-main.740)

#### BibTex

<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```bibtex
@inproceedings{kamal-eddine-etal-2021-barthez,
    title = "{BART}hez: a Skilled Pretrained {F}rench Sequence-to-Sequence Model",
    author = "Kamal Eddine, Moussa  and
      Tixier, Antoine  and
      Vazirgiannis, Michalis",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.740",
    doi = "10.18653/v1/2021.emnlp-main.740",
    pages = "9369--9390",
    abstract = "Inductive transfer learning has taken the entire NLP field by storm, with models such as BERT and BART setting new state of the art on countless NLU tasks. However, most of the available models and research have been conducted for English. In this work, we introduce BARThez, the first large-scale pretrained seq2seq model for French. Being based on BART, BARThez is particularly well-suited for generative tasks. We evaluate BARThez on five discriminative tasks from the FLUE benchmark and two generative tasks from a novel summarization dataset, OrangeSum, that we created for this research. We show BARThez to be very competitive with state-of-the-art BERT-based French language models such as CamemBERT and FlauBERT. We also continue the pretraining of a multilingual BART on BARThez{'} corpus, and show our resulting model, mBARThez, to significantly boost BARThez{'} generative performance.",
}
```

#### Has a Leaderboard?

<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no


### Languages and Intended Use

#### Multilingual?

<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no

#### Covered Languages

<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`French`

#### License

<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
other: Other license

#### Primary Task

<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization


### Credit



### Dataset Structure




## Dataset in GEM

### Rationale for Inclusion in GEM

#### Similar Datasets

<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no


### GEM-Specific Curation

#### Modified for GEM?

<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no

#### Additional Splits?

<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no


### Getting Started with the Task

#### Pointers to Resources

<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
Papers about abstractive summarization using seq2seq models:

- [Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond](https://aclanthology.org/K16-1028/)
- [Get To The Point: Summarization with Pointer-Generator Networks](https://aclanthology.org/P17-1099/)
- [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://aclanthology.org/2020.acl-main.703)
- [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://aclanthology.org/2021.emnlp-main.740/)

Papers about (pretrained) Transformers:

- [Attention is All you Need](https://papers.nips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html)
- [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://aclanthology.org/N19-1423/)

#### Technical Terms

<!-- info: Technical terms used in this card and the dataset and their definitions -->
<!-- scope: microscope -->
No unique technical terms in this data card.



## Previous Results

### Previous Results

#### Measured Model Abilities

<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
The ability of a model to generate human-like titles and abstracts for given news articles.

#### Metrics

<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`, `BERT-Score`
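
As a minimal sketch of computing these metrics with the Hugging Face `evaluate` library (an assumption; the dataset authors used their own evaluation setup):

```python
import evaluate

# Toy predictions and references; in practice these come from model outputs
# and the dataset's target summaries.
predictions = ["Le gouvernement annonce de nouvelles mesures."]
references = ["Le gouvernement a annoncé de nouvelles mesures économiques."]

# ROUGE-1/2/L F-scores.
rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))

# BERTScore; lang="fr" selects a multilingual backbone suitable for French.
bertscore = evaluate.load("bertscore")
print(bertscore.compute(predictions=predictions, references=references, lang="fr")["f1"])
```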

#### Proposed Evaluation

<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
Automatic evaluation: ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore were used.

Human evaluation: a study was conducted with 11 French native speakers. The evaluators were PhD students from the computer science department of the authors' university, working in NLP and other fields of AI, who volunteered after receiving an email announcement. Best-Worst Scaling (Louviere et al., 2015) was used: two summaries from two different systems, along with their input document, were presented to a human annotator, who had to decide which one was better. The evaluators were asked to base their judgments on accuracy (does the summary contain accurate facts?), informativeness (is important information captured?), and fluency (is the summary written in well-formed French?).
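
As a hedged sketch of how Best-Worst Scaling judgments are typically aggregated (each system's score is the number of times it was picked best minus the number of times it was picked worst, normalized by its number of appearances; the annotation format below is hypothetical):

```python
from collections import Counter

# Hypothetical annotations: each tuple records which system's summary was
# judged best and which worst in one pairwise comparison.
annotations = [
    ("BARThez", "baseline"),
    ("BARThez", "baseline"),
    ("baseline", "BARThez"),
]

best = Counter(b for b, _ in annotations)
worst = Counter(w for _, w in annotations)
appearances = Counter()
for b, w in annotations:
    appearances[b] += 1
    appearances[w] += 1

# Scores fall in [-1, 1]: +1 means always judged best, -1 always worst.
scores = {s: (best[s] - worst[s]) / appearances[s] for s in appearances}
print(scores)
```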

#### Previous results available?

<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no



## Broader Social Context

### Previous Work on the Social Impact of the Dataset

#### Usage of Models based on the Data

<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no


### Impact on Under-Served Communities

#### Addresses needs of underserved Communities?

<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no


### Discussion of Biases

#### Any Documented Social Biases?

<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no

#### Are the Language Producers Representative of the Language?

<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The dataset contains news articles written by professional authors.



## Considerations for Using the Data

### PII Risks and Liability



### Licenses

#### Copyright Restrictions on the Dataset

<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`

#### Copyright Restrictions on the Language Data

<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`


### Known Technical Limitations