system (HF staff) committed
Commit 40bc19a
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,161 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - multi-class-classification
+ ---
21
+
22
+ # Dataset Card for the Gutenberg Time dataset
23
+
24
+ ## Table of Contents
25
+ - [Dataset Description](#dataset-description)
26
+ - [Dataset Summary](#dataset-summary)
27
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
28
+ - [Languages](#languages)
29
+ - [Dataset Structure](#dataset-structure)
30
+ - [Data Instances](#data-instances)
31
+ - [Data Fields](#data-instances)
32
+ - [Data Splits](#data-instances)
33
+ - [Dataset Creation](#dataset-creation)
34
+ - [Curation Rationale](#curation-rationale)
35
+ - [Source Data](#source-data)
36
+ - [Annotations](#annotations)
37
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
38
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
39
+ - [Social Impact of Dataset](#social-impact-of-dataset)
40
+ - [Discussion of Biases](#discussion-of-biases)
41
+ - [Other Known Limitations](#other-known-limitations)
42
+ - [Additional Information](#additional-information)
43
+ - [Dataset Curators](#dataset-curators)
44
+ - [Licensing Information](#licensing-information)
45
+ - [Citation Information](#citation-information)
46
+
+ ## Dataset Description
+
+ - **[Repository](https://github.com/allenkim/what-time-is-it)**
+ - **[Paper](https://arxiv.org/abs/2011.04124)**
+
+ ### Dataset Summary
+
+ A clean data resource containing all explicit time references in a dataset of 52,183 novels whose full text is available via Project Gutenberg.
+
+ ### Supported Tasks and Leaderboards
+
+ Time-of-the-day classification from novel excerpts.
+
+ ### Languages
+
+ The text is in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {
+   "guten_id": 28999,
+   "hour_reference": 12,
+   "time_phrase": "midday",
+   "is_ambiguous": False,
+   "time_pos_start": 133,
+   "time_pos_end": 134,
+   "tok_context": "Sorrows and trials she had had in plenty in her life , but these the sweetness of her nature had transformed , so that from being things difficult to bear , she had built up with them her own character . Sorrow had increased her own power of sympathy ; out of trials she had learnt patience ; and failure and the gradual sinking of one she had loved into the bottomless slough of evil habit had but left her with an added dower of pity and tolerance . So the past had no sting left , and if iron had ever entered into her soul it now but served to make it strong . She was still young , too ; it was not near sunset with her yet , nor even midday , and the future that , humanly speaking , she counted to be hers was almost dazzling in its brightness . For love had dawned for her again , and no uncertain love , wrapped in the mists of memory , but one that had ripened through liking and friendship and intimacy into the authentic glory . He was in England , too ; she was going back to him . And before very long she would never go away from him again ."
+ }
+ ```
+
+ ### Data Fields
+
+ ```
+ guten_id - Gutenberg ID number
+ hour_reference - hour from 0 to 23
+ time_phrase - the phrase corresponding to the referenced hour
+ is_ambiguous - boolean indicating whether it is unclear if the time is AM or PM
+ time_pos_start - token position where time_phrase begins
+ time_pos_end - token position where time_phrase ends (exclusive)
+ tok_context - context in which time_phrase appears as space-separated tokens
+ ```
+
+ ### Data Splits
+
+ A single train split of 120,694 examples; there are no validation or test splits.
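+
+ Below is a minimal sketch of loading the data with the `datasets` library and inspecting these fields (assuming the dataset is available under the `gutenberg_time` name used by this repository's loading script):
+
+ ```
+ from datasets import load_dataset
+
+ # single "gutenberg" configuration, train split only
+ dataset = load_dataset("gutenberg_time", "gutenberg", split="train")
+
+ example = dataset[0]
+ print(example["time_phrase"], example["hour_reference"])
+
+ # carve out a held-out set manually, since no validation/test split ships with the data
+ splits = dataset.train_test_split(test_size=0.1, seed=42)
+ ```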
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The flow of time is an indispensable guide for our actions, and provides a framework in which to see a logical progression of events. Just as in real life, the clock provides the background against which literary works play out: when characters wake, eat, and act. In most works of fiction, the events of the story take place during recognizable time periods over the course of the day. Recognizing a story’s flow through time is essential to understanding the text. In this paper, we try to capture the flow of time through novels by attempting to recognize at what time of day each event in the story takes place.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ Novel authors.
+
+ ### Annotations
+
+ #### Annotation process
+
+ Time phrases were manually annotated.
+
+ #### Who are the annotators?
+
+ Two of the authors.
+
+ ### Personal and Sensitive Information
+
+ The dataset contains no personal or sensitive information.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Allen Kim, Charuta Pethe and Steven Skiena, Stony Brook University
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @misc{kim2020time,
+   title={What time is it? Temporal Analysis of Novels},
+   author={Allen Kim and Charuta Pethe and Steven Skiena},
+   year={2020},
+   eprint={2011.04124},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"gutenberg": {"description": "A clean data resource containing all explicit time references in a dataset of 52,183 novels whose full text is available via Project Gutenberg.\n", "citation": "@misc{kim2020time,\n title={What time is it? Temporal Analysis of Novels},\n author={Allen Kim and Charuta Pethe and Steven Skiena},\n year={2020},\n eprint={2011.04124},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://github.com/allenkim/what-time-is-it", "license": "[More Information needed]", "features": {"guten_id": {"dtype": "string", "id": null, "_type": "Value"}, "hour_reference": {"dtype": "string", "id": null, "_type": "Value"}, "time_phrase": {"dtype": "string", "id": null, "_type": "Value"}, "is_ambiguous": {"dtype": "bool_", "id": null, "_type": "Value"}, "time_pos_start": {"dtype": "int64", "id": null, "_type": "Value"}, "time_pos_end": {"dtype": "int64", "id": null, "_type": "Value"}, "tok_context": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "gutenberg_time", "config_name": "gutenberg", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 108550391, "num_examples": 120694, "dataset_name": "gutenberg_time"}}, "download_checksums": {"https://github.com/TevenLeScao/what-time-is-it/blob/master/gutenberg_time_phrases.zip?raw=true": {"num_bytes": 35853781, "checksum": "5c1ea2d3c9d1e5bdfd28894c804237dcb45a6998093c490bf0f9a578f95fea9d"}}, "download_size": 35853781, "post_processing_size": null, "dataset_size": 108550391, "size_in_bytes": 144404172}}
dummy/gutenberg/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7756beda39b2545c745a147f3752521a97dea628655ca0a058081a186849eb06
+ size 2631
gutenberg_time.py ADDED
@@ -0,0 +1,109 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Recognizing the flow of time in a story is a crucial aspect of understanding it. Prior work related to time has primarily focused on identifying temporal expressions or relative sequencing of events, but here we propose computationally annotating each line of a book with wall clock times, even in the absence of explicit time-descriptive phrases. To do so, we construct a data set of hourly time phrases from 52,183 fictional books."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @misc{kim2020time,
+   title={What time is it? Temporal Analysis of Novels},
+   author={Allen Kim and Charuta Pethe and Steven Skiena},
+   year={2020},
+   eprint={2011.04124},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ """
+
+ _DESCRIPTION = """\
+ A clean data resource containing all explicit time references in a dataset of 52,183 novels whose full text is available via Project Gutenberg.
+ """
+
+ _HOMEPAGE = "https://github.com/allenkim/what-time-is-it"
+
+ _LICENSE = "[More Information needed]"
+
+ # The HuggingFace datasets library doesn't host the datasets; it only points to the original files.
+ # This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method).
+ _URLs = {
+     "gutenberg": "https://github.com/TevenLeScao/what-time-is-it/blob/master/gutenberg_time_phrases.zip?raw=true",
+ }
+
+
+ class GutenbergTime(datasets.GeneratorBasedBuilder):
+     """Novel extracts with time-of-the-day information"""
+
+     VERSION = datasets.Version("1.1.3")
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="gutenberg", description="Data pulled from the Gutenberg project"),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "guten_id": datasets.Value("string"),
+                 "hour_reference": datasets.Value("string"),
+                 "time_phrase": datasets.Value("string"),
+                 "is_ambiguous": datasets.Value("bool_"),
+                 "time_pos_start": datasets.Value("int64"),
+                 "time_pos_end": datasets.Value("int64"),
+                 "tok_context": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URLs[self.config.name]
+         data = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data, "gutenberg_time_phrases.csv"),
+                     "split": "train",
+                 },
+             )
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields one example per annotated time phrase from the extracted CSV."""
+         with open(filepath, encoding="utf8") as f:
+             data = csv.reader(f)
+             # skip the CSV header row
+             next(data)
+             for id_, row in enumerate(data):
+                 yield id_, {
+                     "guten_id": row[0],
+                     "hour_reference": row[1],
+                     "time_phrase": row[2],
+                     "is_ambiguous": row[3],
+                     "time_pos_start": row[4],
+                     "time_pos_end": row[5],
+                     "tok_context": row[6],
+                 }
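
As a quick check, the loading script above can be exercised locally with the `datasets` library; a minimal sketch, assuming a `datasets` version that supports loading a local script and that the zip in `_URLs` downloads successfully:

```
from datasets import load_dataset

# point load_dataset at the local script; "gutenberg" is the only configuration
ds = load_dataset("./gutenberg_time.py", "gutenberg", split="train")
print(len(ds))  # expected 120,694 rows per dataset_infos.json
print(ds[0]["time_phrase"], ds[0]["hour_reference"])
```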