Dataset: tiny_shakespeare

How to load this dataset directly with the 🤗/datasets library:

```
from datasets import load_dataset

dataset = load_dataset("tiny_shakespeare")
```


40,000 lines of Shakespeare from a variety of Shakespeare's plays. Featured in Andrej Karpathy's blog post 'The Unreasonable Effectiveness of Recurrent Neural Networks'.

To use for e.g. character modelling with `tf.data`:

```
import tensorflow as tf
from datasets import load_dataset

d = load_dataset('tiny_shakespeare')['train']
d = tf.data.Dataset.from_tensor_slices(d['text'])
d = d.map(lambda x: tf.strings.unicode_split(x, 'UTF-8'))
# train split includes vocabulary for other splits
vocabulary = sorted(set(next(iter(d)).numpy()))
d = d.map(lambda x: {'cur_char': x[:-1], 'next_char': x[1:]})
d = d.unbatch()
seq_len = 100
batch_size = 2
d = d.batch(seq_len)
d = d.batch(batch_size)
```
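The same character-modelling preprocessing can be sketched in plain Python, without TensorFlow. The snippet below is a minimal sketch: it uses a short inline stand-in for the corpus (in practice you would load the dataset and take the train split's text field), and `stoi`, `seq_len`, and the non-overlapping window stride are illustrative choices, not part of the dataset's API:

```python
# Toy stand-in for the dataset's train-split text.
text = "First Citizen:\nBefore we proceed any further, hear me speak.\n"

# Build the character vocabulary and a char -> integer lookup table.
vocabulary = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocabulary)}

# Encode the text and cut it into (cur_char, next_char) windows,
# where next_char is cur_char shifted one position forward.
seq_len = 10
ids = [stoi[ch] for ch in text]
examples = [
    {"cur_char": ids[i : i + seq_len], "next_char": ids[i + 1 : i + seq_len + 1]}
    for i in range(0, len(ids) - seq_len, seq_len)
]
```

Each `examples` entry pairs a length-`seq_len` input window with its one-character-shifted target, which is the standard setup for next-character prediction.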



Models trained or fine-tuned on tiny_shakespeare

None yet. Start fine-tuning now =)