---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 4077942
    num_examples: 30
  - name: validation
    num_bytes: 245785
    num_examples: 2
  - name: test
    num_bytes: 506679
    num_examples: 4
  download_size: 3073023
  dataset_size: 4830406
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
task_categories:
- text-generation
language:
- en
tags:
- shakespeare
size_categories:
- n<1K
license: mit
pretty_name: shakespearefirstfolio
---
# shakespearefirstfolio

## About
🎭 Shakespeare's First Folio (a collection of 36 of Shakespeare's plays) as a Hugging Face dataset!

## Description
In 2015, Andrej Karpathy wrote a post called "The Unreasonable Effectiveness of Recurrent Neural Networks" on his blog. For that post, he created tinyshakespeare, a subset of Shakespeare's works in a single file of about 40,000 lines. Surprisingly, language models trained from scratch on this tiny dataset can produce samples that look very close to text written by Shakespeare himself.

Since then, tinyshakespeare has been the de facto dataset used as a first test when developing language models. Unfortunately, it has some problems:
1) It is a single file, which makes further processing difficult
2) It does not contain all of Shakespeare's works
3) It is not clear exactly which works are included, and to what extent

This dataset tries to address these problems. It is ~4 times bigger than tinyshakespeare.

It was manually collected from the [Folger Shakespeare Library](https://www.folger.edu/).

## Usage
    import datasets

    # Download (or load from cache) the train/validation/test splits
    dataset = datasets.load_dataset("gvlassis/shakespearefirstfolio")
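
After loading, the splits and the single `text` feature (listed in the dataset card metadata above) can be inspected as usual. The snippet below is a minimal sketch; it assumes one play per example, which is consistent with the 30 + 2 + 4 = 36 examples matching the 36 plays of the First Folio:

    # Show the available splits and their sizes
    print(dataset)

    # Peek at the beginning of the first play in the train split
    print(dataset["train"][0]["text"][:200])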