Datasets:

Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: no-annotation
Source Datasets: original
License:
Commit 620bfad (1 parent: 352068b), committed by system (HF staff)

Update files from the datasets library (from 1.4.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (2):
  1. README.md (+21 -21)
  2. blog_authorship_corpus.py (+6 -4)
README.md CHANGED
@@ -27,7 +27,7 @@
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)
 
-## [Dataset Description](#dataset-description)
+## Dataset Description
 
 - **Homepage:** [https://u.cs.biu.ac.il/~koppel/BlogCorpus.htm](https://u.cs.biu.ac.il/~koppel/BlogCorpus.htm)
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -37,7 +37,7 @@
 - **Size of the generated dataset:** 617.75 MB
 - **Total amount of disk used:** 916.20 MB
 
-### [Dataset Summary](#dataset-summary)
+### Dataset Summary
 
 The Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from blogger.com in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words - or approximately 35 posts and 7250 words per person.
 
@@ -57,19 +57,19 @@ Each blog in the corpus includes at least 200 occurrences of common English word
 
 The corpus may be freely used for non-commercial research purposes
 
-### [Supported Tasks](#supported-tasks)
+### Supported Tasks
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Languages](#languages)
+### Languages
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Dataset Structure](#dataset-structure)
+## Dataset Structure
 
 We show detailed information for up to 5 configurations of the dataset.
 
-### [Data Instances](#data-instances)
+### Data Instances
 
 #### blog-authorship-corpus
 
@@ -89,7 +89,7 @@ An example of 'validation' looks as follows.
 }
 ```
 
-### [Data Fields](#data-fields)
+### Data Fields
 
 The data fields are the same among all splits.
 
@@ -101,55 +101,55 @@ The data fields are the same among all splits.
 - `horoscope`: a `string` feature.
 - `job`: a `string` feature.
 
-### [Data Splits Sample Size](#data-splits-sample-size)
+### Data Splits Sample Size
 
 | name |train |validation|
 |----------------------|-----:|---------:|
 |blog-authorship-corpus|532812| 31277|
 
-## [Dataset Creation](#dataset-creation)
+## Dataset Creation
 
-### [Curation Rationale](#curation-rationale)
+### Curation Rationale
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Source Data](#source-data)
+### Source Data
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Annotations](#annotations)
+### Annotations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Personal and Sensitive Information](#personal-and-sensitive-information)
+### Personal and Sensitive Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Considerations for Using the Data](#considerations-for-using-the-data)
+## Considerations for Using the Data
 
-### [Social Impact of Dataset](#social-impact-of-dataset)
+### Social Impact of Dataset
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Discussion of Biases](#discussion-of-biases)
+### Discussion of Biases
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Other Known Limitations](#other-known-limitations)
+### Other Known Limitations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Additional Information](#additional-information)
+## Additional Information
 
-### [Dataset Curators](#dataset-curators)
+### Dataset Curators
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Licensing Information](#licensing-information)
+### Licensing Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Citation Information](#citation-information)
+### Citation Information
 
 ```
 @inproceedings{schler2006effects,
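For quick reference, the per-split counts in the card's "Data Splits Sample Size" table combine as follows (a trivial sketch; the numbers are copied from the table, and the `splits` dict is only illustration):

```python
# Per-split example counts, copied verbatim from the dataset card's
# "Data Splits Sample Size" table (blog-authorship-corpus configuration).
splits = {"train": 532812, "validation": 31277}

# Combined number of examples across the two splits.
total = sum(splits.values())
print(total)  # 564089
```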
blog_authorship_corpus.py CHANGED
@@ -1,12 +1,14 @@
 from __future__ import absolute_import, division, print_function
 
 import glob
-import logging
 import os
 
 import datasets
 
 
+logger = datasets.logging.get_logger(__name__)
+
+
 _CITATION = """\
 @inproceedings{schler2006effects,
 title={Effects of age and gender on blogging.},
@@ -133,7 +135,7 @@ class BlogAuthorshipCorpus(datasets.GeneratorBasedBuilder):
         for file_path in files:
             counter = 0
             file_name = os.path.basename(file_path)
-            logging.info("generating examples from = %s", file_path)
+            logger.info("generating examples from = %s", file_path)
             file_id, gender, age, job, horoscope = tuple(file_name.split(".")[:-1])
 
             # Note: import xml.etree.ElementTree as etree does not work. File cannot be parsed
@@ -151,7 +153,7 @@ class BlogAuthorshipCorpus(datasets.GeneratorBasedBuilder):
                         sub_id = counter
                         counter += 1
                         if date == "":
-                            logging.warning("Date missing for {} in {}".format(line, file_name))
+                            logger.warning("Date missing for {} in {}".format(line, file_name))
                         assert date is not None, "Date is missing before {}".format(line)
                         blog = {
                             "text": line,
@@ -165,4 +167,4 @@ class BlogAuthorshipCorpus(datasets.GeneratorBasedBuilder):
                     else:
                         continue
             except UnicodeDecodeError as e:
-                logging.warning("{} cannot be loaded. Error message: {}".format(file_path, e))
+                logger.warning("{} cannot be loaded. Error message: {}".format(file_path, e))
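The generator loop patched above unpacks five metadata fields from each corpus file name. A standalone sketch of that parsing step (the sample path is hypothetical, but follows the `<id>.<gender>.<age>.<job>.<horoscope>.xml` scheme implied by the unpacking in the loading script):

```python
import os

def parse_blog_filename(file_path):
    """Recover blogger metadata from a corpus file name of the form
    <id>.<gender>.<age>.<job>.<horoscope>.xml, mirroring the
    tuple(file_name.split(".")[:-1]) expression in the loading script."""
    file_name = os.path.basename(file_path)
    file_id, gender, age, job, horoscope = tuple(file_name.split(".")[:-1])
    return file_id, gender, age, job, horoscope

# Hypothetical path following the corpus naming scheme (not a real file).
print(parse_blog_filename("/data/blogs/1000331.female.37.indUnk.Leo.xml"))
# ('1000331', 'female', '37', 'indUnk', 'Leo')
```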