Commit 07aeca7 by system (HF staff)
Parent: 399e469

Update files from the datasets library (from 1.18.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.18.0

Files changed (1):
  1. README.md (+18, -17)
README.md CHANGED
@@ -20,9 +20,10 @@ task_ids:
  - language-modeling
  - summarization
  paperswithcode_id: null
+ pretty_name: ThaiSum
  ---

- # Dataset Card for `thaisum`
+ # Dataset Card for ThaiSum

  ## Table of Contents
  - [Dataset Description](#dataset-description)
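This commit ships the card with `datasets` 1.18.0; for orientation, here is a minimal, hedged sketch of loading the dataset the card describes. The Hub id `thaisum`, the split sizes, and the field names mentioned in the comments come from the card's own description rather than from this commit, so treat them as assumptions.

```python
# Minimal sketch: load ThaiSum with the Hugging Face `datasets` library.
# The splits described in the card are train/valid/test: 358868 / 11000 / 11000;
# exact split and column names are assumptions, not guaranteed by this commit.
from datasets import load_dataset

thaisum = load_dataset("thaisum")

# Print the split sizes reported in the card.
for split_name, split in thaisum.items():
    print(split_name, len(split))

# Peek at one example to see the available fields (e.g. title, body, summary, type, tags).
print(thaisum["train"][0].keys())
```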
@@ -93,7 +94,7 @@ train/valid/test: 358868 / 11000 / 11000

  ### Curation Rationale

- Sequence-to-sequence (Seq2Seq) models have shown great achievement in text summarization. However, Seq2Seq model often requires large-scale training data to achieve effective results. Although many impressive advancements in text summarization field have been made, most of summarization studies focus on resource-rich languages. The progress of Thai text summarization is still far behind. The dearth of large-scale dataset keeps Thai text summarization in its infancy. As far as our knowledge goes, there is not a large-scale dataset for Thai text summarization available anywhere. Thus, we present ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites namely Thairath, ThaiPBS, Prachathai, and The Standard.
+ Sequence-to-sequence (Seq2Seq) models have shown great achievement in text summarization. However, Seq2Seq models often require large-scale training data to achieve effective results. Although many impressive advancements have been made in the text summarization field, most summarization studies focus on resource-rich languages. The progress of Thai text summarization still lags far behind, and the dearth of large-scale datasets keeps it in its infancy. To the best of our knowledge, there is no large-scale dataset for Thai text summarization available anywhere. Thus, we present ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites, namely Thairath, ThaiPBS, Prachathai, and The Standard.

  ### Source Data

@@ -102,26 +103,26 @@ Sequence-to-sequence (Seq2Seq) models have shown great achievement in text summa
  We used a Python library named Scrapy to crawl articles from several news websites, namely Thairath, Prachatai, ThaiPBS, and The Standard. We first collected the news URLs provided in their sitemaps. During web crawling, we used the HTML markup and metadata available in the HTML pages to identify the article text, summary, headline, tags, and label. Collected articles were published online from 2014 to August 2020. <br> <br>
  We further performed a data-cleansing process to minimize noisy data. We filtered out articles whose article text or summary was missing. Articles containing an article text with fewer than 150 words or a summary with fewer than 15 words were removed. We also discarded articles that contain at least one of the following tags: ‘ดวง’ (horoscope), ‘นิยาย’ (novel), ‘อินสตราแกรมดารา’ (celebrity Instagram), ‘คลิปสุดฮา’ (funny video) and ‘สรุปข่าว’ (highlight news). Some summaries were completely irrelevant to their original article texts. To eliminate those irrelevant summaries, we calculated an abstractedness score between each summary and its article text. The abstractedness score is written formally as: <br>
  <center><a href="https://www.codecogs.com/eqnedit.php?latex=\begin{equation}&space;\frac{|S-A|}{r}&space;\times&space;100&space;\end{equation}" target="_blank"><img src="https://latex.codecogs.com/gif.latex?\begin{equation}&space;\frac{|S-A|}{r}&space;\times&space;100&space;\end{equation}" title="\begin{equation} \frac{|S-A|}{r} \times 100 \end{equation}" /></a></center><br>
- <br>Where 𝑆 denotes set of article tokens. 𝐴 denotes set of summary tokens. 𝑟 denotes a total number of summary tokens. We omitted articles that have abstractedness score at 1-grams higher than 60%.
+ <br>where 𝑆 denotes the set of article tokens, 𝐴 denotes the set of summary tokens, and 𝑟 denotes the total number of summary tokens. We omitted articles with an abstractedness score at 1-grams higher than 60%.
  <br><br>

- It is important to point out that we used [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp), version 2.2.4, tokenizing engine = newmm, to process Thai texts in this study. It is challenging to tokenize running Thai text into words or sentences because there are not clear word/sentence delimiters in Thai language. Therefore, using different tokenization engines may result in different segment of words/sentences.
+ It is important to point out that we used [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp), version 2.2.4, with the newmm tokenizing engine, to process Thai texts in this study. It is challenging to tokenize running Thai text into words or sentences because there are no clear word/sentence delimiters in the Thai language. Therefore, using different tokenization engines may result in different word/sentence segmentations.

  After the data-cleansing process, the ThaiSum dataset contains over 358,000 articles. The size of this dataset is comparable to a well-known English document summarization dataset, the CNN/Daily Mail dataset. Moreover, we analyse the characteristics of this dataset by measuring the abstractedness level, compression rate, and content diversity. For more details, see [thaisum_exploration.ipynb](https://github.com/nakhunchumpolsathien/ThaiSum/blob/master/thaisum_exploration.ipynb).

  #### Dataset Statistics

- ThaiSum dataset consists of 358,868 articles. Average lengths of article texts and summaries are approximately 530 and 37 words respectively. As mentioned earlier, we also collected headlines, tags and labels provided in each article. Tags are similar to keywords of the article. An article normally contains several tags but a few labels. Tags can be name of places or persons that article is about while labels indicate news category (politic, entertainment, etc.). Ultimatly, ThaiSum contains 538,059 unique tags and 59 unique labels. Note that not every article contains tags or labels.
+ The ThaiSum dataset consists of 358,868 articles. The average lengths of article texts and summaries are approximately 530 and 37 words, respectively. As mentioned earlier, we also collected the headlines, tags, and labels provided in each article. Tags are similar to keywords of the article. An article normally contains several tags but only a few labels. Tags can be names of places or persons that the article is about, while labels indicate the news category (politics, entertainment, etc.). Ultimately, ThaiSum contains 538,059 unique tags and 59 unique labels. Note that not every article contains tags or labels.

  |Dataset Size| 358,868 | articles |
  |:---|---:|---:|
- |Avg. Article Length| 529.5 | words|
- |Avg. Summary Length | 37.3 | words|
- |Avg. Headline Length | 12.6 | words|
- |Unique Vocabulary Size | 407,355 | words|
- |Occurring > 10 times | 81,761 | words|
- |Unique News Tag Size | 538,059 | tags|
- |Unique News Label Size | 59 | labels|
+ |Avg. Article Length | 529.5 | words|
+ |Avg. Summary Length | 37.3 | words|
+ |Avg. Headline Length | 12.6 | words|
+ |Unique Vocabulary Size | 407,355 | words|
+ |Words Occurring > 10 Times | 81,761 | words|
+ |Unique News Tag Size | 538,059 | tags|
+ |Unique News Label Size | 59 | labels|

  #### Who are the source language producers?
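The cleansing rules described in the hunk above (a 150-word minimum for article text, a 15-word minimum for summaries, PyThaiNLP's newmm tokenizer, and the 60% abstractedness cut-off) are concrete enough to sketch in code. The sketch below is illustrative and is not the authors' pipeline; it reads |S-A|/r × 100 as the share of summary 1-grams absent from the article text, a common abstractiveness measure, so swap the two sets if the card's literal definitions of S and A are intended.

```python
# Illustrative sketch of the cleansing rules above; not the authors' original pipeline.
# Assumes PyThaiNLP's newmm word tokenizer, as stated in the card.
from pythainlp import word_tokenize


def abstractedness_score(article: str, summary: str) -> float:
    """Percentage of summary 1-grams that do not appear in the article text."""
    article_tokens = set(word_tokenize(article, engine="newmm"))
    summary_tokens = word_tokenize(summary, engine="newmm")
    if not summary_tokens:
        return 0.0
    novel = sum(1 for token in summary_tokens if token not in article_tokens)
    return 100.0 * novel / len(summary_tokens)


def passes_cleansing(article: str, summary: str, max_score: float = 60.0) -> bool:
    """Apply the length filters and the 60% abstractedness cut-off described above."""
    if len(word_tokenize(article, engine="newmm")) < 150:
        return False  # article text shorter than 150 words
    if len(word_tokenize(summary, engine="newmm")) < 15:
        return False  # summary shorter than 15 words
    return abstractedness_score(article, summary) <= max_score
```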
 
@@ -131,7 +132,7 @@ Journalists of respective articles

  #### Annotation process

- `summary`, `type` and `tags` are created by journalists who wrote the articles and/or their publishers.
+ `summary`, `type`, and `tags` are created by the journalists who wrote the articles and/or their publishers.

  #### Who are the annotators?

@@ -174,13 +175,13 @@ MIT License
  ### Citation Information

  ```
- @mastersthesis{chumpolsathien_2020,
+ @mastersthesis{chumpolsathien_2020,
  title={Using Knowledge Distillation from Keyword Extraction to Improve the Informativeness of Neural Cross-lingual Summarization},
- author={Chumpolsathien, Nakhun},
- year={2020},
+ author={Chumpolsathien, Nakhun},
+ year={2020},
  school={Beijing Institute of Technology}
  }
  ```

  ### Contributions

- Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
+ Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
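As a closing usage note on the figures in the Dataset Statistics table shown earlier in the diff, the average lengths can be re-derived approximately from the released splits. The sketch below assumes the `body` and `summary` column names and newmm tokenization, and it uses a small sample for speed, so the numbers will only roughly match the reported 529.5 and 37.3 words.

```python
# Hedged sketch: approximately re-derive the average article/summary lengths from the
# Dataset Statistics table. Column names `body` and `summary` are assumptions about the
# released schema, and a 1,000-example sample is used for speed.
from datasets import load_dataset
from pythainlp import word_tokenize

train = load_dataset("thaisum", split="train")
sample = train.select(range(1000))

avg_body = sum(len(word_tokenize(row["body"], engine="newmm")) for row in sample) / len(sample)
avg_summary = sum(len(word_tokenize(row["summary"], engine="newmm")) for row in sample) / len(sample)
print(f"avg article length ~ {avg_body:.1f} words, avg summary length ~ {avg_summary:.1f} words")
```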
 