Dataset Card for "tiny_shakespeare"
Dataset Summary
40,000 lines of Shakespeare from a variety of Shakespeare's plays. Featured in Andrej Karpathy's blog post 'The Unreasonable Effectiveness of Recurrent Neural Networks': http://karpathy.github.io/2015/05/21/rnn-effectiveness/.
To use for e.g. character modelling:
import datasets

d = datasets.load_dataset('tiny_shakespeare')['train']
# each split holds a single record; its 'text' field is the full split text
text = d[0]['text']
# train split includes vocabulary for other splits
vocabulary = sorted(set(text))
# slice into fixed-length windows of current characters and next-character targets
seq_len = 100
examples = [
    {'cur_char': text[i:i + seq_len], 'next_char': text[i + 1:i + seq_len + 1]}
    for i in range(0, len(text) - seq_len, seq_len)
]
batch_size = 2
batches = [examples[i:i + batch_size] for i in range(0, len(examples), batch_size)]
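A character model needs integer inputs rather than raw characters. A minimal, self-contained sketch of a character-to-id codec built from a sorted vocabulary (the short stand-in string below substitutes for the real train-split vocabulary):

```python
# Hypothetical codec: with the real dataset, 'vocabulary' would be built
# from the train split's text as shown above.
vocabulary = sorted(set("First Citizen:\nBefore we proceed"))
char_to_id = {ch: i for i, ch in enumerate(vocabulary)}
id_to_char = {i: ch for ch, i in char_to_id.items()}

def encode(text):
    # map each character to its integer id
    return [char_to_id[ch] for ch in text]

def decode(ids):
    # invert the mapping back to a string
    return "".join(id_to_char[i] for i in ids)

ids = encode("First")
assert decode(ids) == "First"
```

Sorting the vocabulary makes the mapping deterministic, so ids stay stable across runs.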
Supported Tasks
Languages
Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
Data Instances
default
- Size of downloaded dataset files: 1.06 MB
- Size of the generated dataset: 1.06 MB
- Total amount of disk used: 2.13 MB
An example of 'train' looks as follows.
{
"text": "First Citizen:\nBefore we proceed any further, hear me "
}
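Using the example record above as a stand-in corpus, a minimal sketch of the shift-by-one relation between inputs and targets in next-character prediction:

```python
# The 'text' value from the example record above.
text = "First Citizen:\nBefore we proceed any further, hear me "

# For next-character prediction, inputs are text[:-1] and targets text[1:],
# so each target character is the one that follows its input character.
cur_char, next_char = text[:-1], text[1:]

assert len(cur_char) == len(next_char)
assert cur_char[0] == "F" and next_char[0] == "i"
```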
Data Fields
The data fields are the same among all splits.
default
text
: a string
feature.
Data Splits Sample Size

name | train | validation | test
---|---|---|---
default | 1 | 1 | 1

Each split contains a single example whose 'text' field holds that split's share of the corpus.
Dataset Creation
Curation Rationale
Source Data
Annotations
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@misc{karpathy2015charrnn,
author={Karpathy, Andrej},
title={char-rnn},
year={2015},
howpublished={\url{https://github.com/karpathy/char-rnn}}
}
Contributions
Thanks to @thomwolf, @lewtun, @patrickvonplaten for adding this dataset.
Homepage:
https://github.com/karpathy/char-rnn