system HF staff committed
Commit: f9dd34a
Parent: 38b4fa5

Update files from the datasets library (from 1.2.1)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.1

Files changed (3)
  1. README.md +12 -11
  2. dataset_infos.json +1 -1
  3. dbrd.py +5 -5
README.md CHANGED
@@ -56,7 +56,7 @@ task_ids:

  ### Dataset Summary

- The DBRD (pronounced *dee-bird*) dataset contains over 110k book reviews of which 22k have associated binary sentiment polarity labels. It is intended as a benchmark for sentiment classification in Dutch and created due to a lack of annotated datasets in Dutch that are suitable for this task.
+ The DBRD (pronounced *dee-bird*) dataset contains over 110k book reviews of which 22k have associated binary sentiment polarity labels. It is intended as a benchmark for sentiment classification in Dutch and was created due to a lack of annotated datasets in Dutch that are suitable for this task.

  ### Supported Tasks and Leaderboards

@@ -71,7 +71,7 @@ Non-Dutch reviews were filtered out using [langdetect](https://github.com/Mimino

  ### Data Instances

- The dataset contains three subsets: train, test and unsupervised. The `train` and `test` sets contain labels, while the `unsupervised` set doesn't (the label value is -1 for each instance in `unsupervised`). Here's an example of a positive review, indicated with a label value of `1`.
+ The dataset contains three subsets: train, test, and unsupervised. The `train` and `test` sets contain labels, while the `unsupervised` set doesn't (the label value is -1 for each instance in `unsupervised`). Here's an example of a positive review, indicated with a label value of `1`.

  ```
  {
@@ -83,15 +83,16 @@ The dataset contains three subsets: train, test and unsupervised. The `train` an
  ### Data Fields

  - `label`: either 0 (negative) or 1 (positive) in the supervised sets `train` and `test`. These are always -1 for the unsupervised set.
- - `text`: book review as utf-8 encoded string.
+ - `text`: book review as a utf-8 encoded string.

  ### Data Splits

- The `train` and `test` sets were constructed by extracting all non-neutral reviews, because we want to assign either a positive or negative polarity label to each instance. Furthermore, the positive (pos) and negative (neg) labels were balanced in both train and test sets. The remainder was added to the unsupervised set.
+ The `train` and `test` sets were constructed by extracting all non-neutral reviews because we want to assign either a positive or negative polarity label to each instance. Furthermore, the positive (pos) and negative (neg) labels were balanced in both train and test sets. The remainder was added to the unsupervised set.

- | | Train | Valid | Test |
- | ----- | ------ | ----- | ------ |
- | # No. texts | 20028 | 2224 | 96264 |
+ | | Train | Test | Unsupervised |
+ | ----- | ------ | ----- | ----------- |
+ | # No. texts | 20028 | 2224 | 96264 |
+ | % of total | 16.9% | 1.9% | 81.2% |

  ## Dataset Creation
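
As a sanity check on the corrected table, the split sizes and proportions can be recomputed with the `datasets` library. A minimal sketch, assuming the `dbrd` loading script still resolves by name through `load_dataset` (script-based loading requires an older `datasets` release):

```python
from datasets import load_dataset

# Expected sizes per dataset_infos.json:
# train 20028, test 2224, unsupervised 96264 (total 118516).
dbrd = load_dataset("dbrd")

total = sum(split.num_rows for split in dbrd.values())
for name, split in dbrd.items():
    print(f"{name}: {split.num_rows} rows ({split.num_rows / total:.1%})")
# train: 20028 rows (16.9%), test: 2224 rows (1.9%), unsupervised: 96264 rows (81.2%)

# `label` is a ClassLabel with names ["neg", "pos"]; the unsupervised split uses -1.
print(dbrd["train"].features["label"].names)  # ['neg', 'pos']
print(dbrd["unsupervised"][0]["label"])       # -1
```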
 
@@ -113,7 +114,7 @@ The reviews are written by users of [Hebban](https://www.hebban.nl) and are of v

  ### Annotations

- Each book review was accompanied by a 1 to 5-star rating. The annotations are produced by mapping the user-provided ratings to either positive or negative label. 1 and 2-star ratings are given the negative label `0` and 4 and 5-star ratings the positive label `1`. Reviews with a rating of 3 stars are considered neutral and left out of the `train`/`test` sets and added to the unsupervised set.
+ Each book review was accompanied by a 1 to 5-star rating. The annotations are produced by mapping the user-provided ratings to either a positive or negative label. 1 and 2-star ratings are given the negative label `0` and 4 and 5-star ratings the positive label `1`. Reviews with a rating of 3 stars are considered neutral and left out of the `train`/`test` sets and added to the unsupervised set.

  #### Annotation process
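
The rating-to-label rule added above is compact enough to state in code. An illustrative re-implementation of the described mapping (the function name is hypothetical; this is not the original preprocessing code):

```python
def rating_to_label(stars: int) -> int:
    """Map a 1-5 star Hebban rating to a DBRD sentiment label (illustrative)."""
    if stars in (1, 2):
        return 0  # negative
    if stars in (4, 5):
        return 1  # positive
    return -1  # 3 stars: neutral, excluded from train/test, kept in unsupervised
```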
 
@@ -121,7 +122,7 @@ Users of [Hebban](https://www.hebban.nl) were unaware that their reviews would b

  #### Who are the annotators?

- The annotators are the [Hebban](https://www.hebban.nl) users who wrote the book review associated with the annotation. Anyone can register on [Hebban](https://www.hebban.nl) and it's impossible to know the demographics of this group.
+ The annotators are the [Hebban](https://www.hebban.nl) users who wrote the book reviews associated with the annotation. Anyone can register on [Hebban](https://www.hebban.nl) and it's impossible to know the demographics of this group.

  ### Personal and Sensitive Information

@@ -131,7 +132,7 @@ The book reviews and ratings are publicly available on [Hebban](https://www.hebb

  ### Social Impact of Dataset

- While prediciting sentiment of book reviews in itself is not that interesting on its own, the value of this dataset lies in its capability for benchmarking models. The dataset contains some challenges that are common to outings on the internet, such as spelling mistakes and other errors. It is therefore very useful for validating models for their real-world performance. These datasets are abundant for English, but are harder to find for Dutch, making it a valuable resource for ML tasks in this language.
+ While predicting sentiment of book reviews in itself is not that interesting, the value of this dataset lies in its usage for benchmarking models. The dataset contains some challenges that are common to outings on the internet, such as spelling mistakes and other errors. It is therefore very useful for validating models for their real-world performance. These datasets are abundant for English but are harder to find for Dutch, making them a valuable resource for ML tasks in this language.

  ### Discussion of Biases

@@ -139,7 +140,7 @@ While prediciting sentiment of book reviews in itself is not that interesting on

  ### Other Known Limitations

- Reviews on [Hebban](https://www.hebban.nl) are usually written in Dutch, but some have been written in English and possibly in other languages. While we've done our best to filter out non-Dutch texts, it's hard to do this without errors. For example, some reviews are in multiple languages, and these might slip through. Also be aware that some commercial outings can appear in the text, making them different from other reviews and influencing your models. While this doesn't pose a major issue in most cases, I just wanted to mention it briefly.
+ Reviews on [Hebban](https://www.hebban.nl) are usually written in Dutch, but some have been written in English and possibly in other languages. While we've done our best to filter out non-Dutch texts, it's hard to do this without errors. For example, some reviews are in multiple languages, and these might slip through. Also be aware that some commercial outings can appear in the text, making them different from other reviews and influencing your models. While this doesn't pose a major issue in most cases, we just wanted to mention it briefly.

  ## Additional Information

dataset_infos.json CHANGED
@@ -1 +1 @@
- {"plain_text": {"description": "Dutch Book Review Dataset\nThe DBRD (pronounced dee-bird) dataset contains over 110k book reviews along with associated binary sentiment polarity labels and is intended as a benchmark for sentiment classification in Dutch.\n", "citation": "@article{DBLP:journals/corr/abs-1910-00896,\n author = {Benjamin van der Burgh and\n Suzan Verberne},\n title = {The merits of Universal Language Model Fine-tuning for Small Datasets\n - a case with Dutch book reviews},\n journal = {CoRR},\n volume = {abs/1910.00896},\n year = {2019},\n url = {http://arxiv.org/abs/1910.00896},\n archivePrefix = {arXiv},\n eprint = {1910.00896},\n timestamp = {Fri, 04 Oct 2019 12:28:06 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-1910-00896.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://github.com/benjaminvdb/DBRD", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["neg", "pos"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "dbrd", "config_name": "plain_text", "version": {"version_str": "3.0.0", "description": "", "major": 3, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 29496333, "num_examples": 20028, "dataset_name": "dbrd"}, "test": {"name": "test", "num_bytes": 3246243, "num_examples": 2224, "dataset_name": "dbrd"}, "unsupervised": {"name": "unsupervised", "num_bytes": 152733031, "num_examples": 96264, "dataset_name": "dbrd"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1k5UMoqoB3RT4kK9FI5Xyl7RmWWyBSwux": {"num_bytes": 79065872, "checksum": "2d7eed5a2c56b19fec22f1656722b6036569aa542d362e576bd761eb91e1e76a"}}, "download_size": 79065872, "post_processing_size": null, "dataset_size": 185475607, "size_in_bytes": 264541479}}
+ {"plain_text": {"description": "The Dutch Book Review Dataset (DBRD) contains over 110k book reviews of which 22k have associated binary sentiment polarity labels. It is intended as a benchmark for sentiment classification in Dutch and created due to a lack of annotated datasets in Dutch that are suitable for this task.\n", "citation": "@article{DBLP:journals/corr/abs-1910-00896,\n author = {Benjamin van der Burgh and\n Suzan Verberne},\n title = {The merits of Universal Language Model Fine-tuning for Small Datasets\n - a case with Dutch book reviews},\n journal = {CoRR},\n volume = {abs/1910.00896},\n year = {2019},\n url = {http://arxiv.org/abs/1910.00896},\n archivePrefix = {arXiv},\n eprint = {1910.00896},\n timestamp = {Fri, 04 Oct 2019 12:28:06 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-1910-00896.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://github.com/benjaminvdb/DBRD", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["neg", "pos"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "dbrd", "config_name": "plain_text", "version": {"version_str": "3.0.0", "description": "", "major": 3, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 29496333, "num_examples": 20028, "dataset_name": "dbrd"}, "test": {"name": "test", "num_bytes": 3246243, "num_examples": 2224, "dataset_name": "dbrd"}, "unsupervised": {"name": "unsupervised", "num_bytes": 152733031, "num_examples": 96264, "dataset_name": "dbrd"}}, "download_checksums": {"https://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz": {"num_bytes": 79065872, "checksum": "2d7eed5a2c56b19fec22f1656722b6036569aa542d362e576bd761eb91e1e76a"}}, "download_size": 79065872, "post_processing_size": null, "dataset_size": 185475607, "size_in_bytes": 264541479}}
dbrd.py CHANGED
@@ -24,10 +24,10 @@ import datasets


  _DESCRIPTION = """\
- Dutch Book Review Dataset
- The DBRD (pronounced dee-bird) dataset contains over 110k book reviews along \
- with associated binary sentiment polarity labels and is intended as a \
- benchmark for sentiment classification in Dutch.
+ The Dutch Book Review Dataset (DBRD) contains over 110k book reviews of which \
+ 22k have associated binary sentiment polarity labels. It is intended as a \
+ benchmark for sentiment classification in Dutch and created due to a lack of \
+ annotated datasets in Dutch that are suitable for this task.
  """

  _CITATION = """\
@@ -48,7 +48,7 @@ _CITATION = """\
  }
  """

- _DOWNLOAD_URL = "https://drive.google.com/uc?export=download&id=1k5UMoqoB3RT4kK9FI5Xyl7RmWWyBSwux"
+ _DOWNLOAD_URL = "https://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz"


  class DBRDConfig(datasets.BuilderConfig):
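
The diff touches only the `_DOWNLOAD_URL` constant; the code that consumes it is outside the hunk. In a typical `datasets` loading script, the builder hands this URL to the download manager in `_split_generators`, roughly as sketched below. The class name, directory layout, and method bodies here are illustrative assumptions, not the actual dbrd.py:

```python
import os

import datasets


class DBRD(datasets.GeneratorBasedBuilder):
    """Illustrative builder skeleton; the real class body is not shown in this diff."""

    def _split_generators(self, dl_manager):
        # The download manager fetches and caches _DOWNLOAD_URL; the library
        # verifies it against the checksum recorded in dataset_infos.json
        # during dataset preparation.
        data_dir = dl_manager.download_and_extract(_DOWNLOAD_URL)
        # Directory names below are assumptions for illustration.
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"directory": os.path.join(data_dir, "DBRD", "train")},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={"directory": os.path.join(data_dir, "DBRD", "test")},
            ),
            datasets.SplitGenerator(
                name=datasets.Split("unsupervised"),
                gen_kwargs={"directory": os.path.join(data_dir, "DBRD", "unsup")},
            ),
        ]

    def _generate_examples(self, directory):
        # Omitted here; the real script yields (id, {"text": ..., "label": ...}) pairs.
        raise NotImplementedError
```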