system HF staff committed on
Commit: 5c03c8b
Parent(s): 695b114

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md +155 -0
README.md ADDED
@@ -0,0 +1,155 @@
---
---

# Dataset Card for "tiny_shakespeare"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/karpathy/char-rnn/blob/master/data/tinyshakespeare/input.txt](https://github.com/karpathy/char-rnn/blob/master/data/tinyshakespeare/input.txt)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.06 MB
- **Size of the generated dataset:** 1.06 MB
- **Total amount of disk used:** 2.13 MB

### [Dataset Summary](#dataset-summary)

40,000 lines of Shakespeare from a variety of Shakespeare's plays, featured in Andrej Karpathy's blog post ['The Unreasonable Effectiveness of Recurrent Neural Networks'](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).

To use the dataset for, e.g., character-level language modelling:

```python
import datasets

# The train split contains a single example holding the full text.
d = datasets.load_dataset('tiny_shakespeare')['train']
chars = list(d[0]['text'])

# The train split's characters also cover the vocabulary of the other splits.
vocabulary = sorted(set(chars))

# Build (current character, next character) pairs, then group them into
# fixed-length sequences and batches of sequences.
cur_chars, next_chars = chars[:-1], chars[1:]
seq_len, batch_size = 100, 2
sequences = [{'cur_char': cur_chars[i:i + seq_len], 'next_char': next_chars[i:i + seq_len]}
             for i in range(0, len(cur_chars) - seq_len + 1, seq_len)]
batches = [sequences[i:i + batch_size] for i in range(0, len(sequences), batch_size)]
```
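Each split holds a single long string rather than one example per line of text, so sequences are produced by slicing that string; the split sizes listed under [Data Splits Sample Size](#data-splits-sample-size) below (one example per split) reflect this.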

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

The dataset has a single configuration, `default`, which is described below.

### [Data Instances](#data-instances)

#### default

- **Size of downloaded dataset files:** 1.06 MB
- **Size of the generated dataset:** 1.06 MB
- **Total amount of disk used:** 2.13 MB

An example of 'train' looks as follows.
```
{
    "text": "First Citizen:\nBefore we proceed any further, hear me "
}
```
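
A minimal sketch of loading the default configuration and inspecting this example, assuming the `datasets` library is installed (the truncation of the printed text is for display only):

```python
from datasets import load_dataset

# Load the default configuration and look at the single training example.
ds = load_dataset("tiny_shakespeare")
example = ds["train"][0]
print(example["text"][:60])  # the opening lines, beginning with "First Citizen:"
```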

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### default
- `text`: a `string` feature.

### [Data Splits Sample Size](#data-splits-sample-size)

| name  |train|validation|test|
|-------|----:|---------:|---:|
|default|    1|         1|   1|

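The split sizes can be checked directly; a small sketch, again assuming the `datasets` library:

```python
from datasets import load_dataset

# Each split contains one example holding a long block of text.
ds = load_dataset("tiny_shakespeare")
print({split: ds[split].num_rows for split in ds})  # one example per split, matching the table above
```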

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```bibtex
@misc{karpathy2015charrnn,
  author={Karpathy, Andrej},
  title={char-rnn},
  year={2015},
  howpublished={\url{https://github.com/karpathy/char-rnn}}
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.