undeleted committed
Commit 835e3de
1 parent: 171e466

Update README.md

Files changed (1): README.md (+132, -0)
README.md CHANGED
---
license: apache-2.0
language:
- en
tags:
- novel
- training
- story
task_categories:
- text-classification
- text-generation
pretty_name: ScribbleHub Stories
size_categories:
- 100K<n<1M
---

# Dataset Card for ScribbleHub Stories

*The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.*

## Dataset Description

- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com>

### Dataset Summary

ScribbleHub Stories is a dataset consisting of text from over 373,000 chapters across approximately 17,500 series posted on [Scribble Hub](https://scribblehub.com), a site for sharing original stories.

### Supported Tasks and Leaderboards

This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes, including the following tasks:

* text-classification
* text-generation
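
As a starting point, the sketch below shows one way to load and inspect the data with the Hugging Face `datasets` library. The Hub path `RyokoAI/ScribbleHub17K` and the `train` split name are assumptions rather than details confirmed by this card; substitute the dataset's actual repository path or local data files.

```python
# Minimal loading sketch. The Hub path and split name below are assumptions;
# point `load_dataset` at the actual repository or local files instead.
from datasets import load_dataset

ds = load_dataset("RyokoAI/ScribbleHub17K", split="train")

# Inspect a single record; the available fields are documented under "Data Fields".
record = ds[0]
print(record["id"], record["title"])
print(record["text"][:200])
```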

### Languages

* English

## Dataset Structure

### Data Instances

An illustrative record with the fields described in the next section (the values below are invented placeholders, not actual dataset content):

```json
{
  "text": "The morning sun crept over the city walls as ...",
  "title": "Chapter 1: A New Beginning",
  "tag": "scribblehub",
  "id": "scribblehub.123456.7890123"
}
```

### Data Fields

* **text**: the actual chapter text
* **title**: the series chapter title
* **tag**: source-identifier tag: `"scribblehub"`
* **id**: an ID in the format `scribblehub.<series>.<chapter>`, where `<series>` and `<chapter>` are both numeric IDs (see the parsing sketch below)
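
Because the `id` field packs both identifiers into one string, a small helper like the following (a sketch, not part of any official tooling for this dataset) can split them back out:

```python
# Sketch: split an ID of the form "scribblehub.<series>.<chapter>"
# into its numeric series and chapter components.
def parse_scribblehub_id(record_id: str) -> tuple[int, int]:
    tag, series, chapter = record_id.split(".")
    if tag != "scribblehub":
        raise ValueError(f"unexpected tag in id: {record_id!r}")
    return int(series), int(chapter)

# Example with a placeholder ID.
series_id, chapter_id = parse_scribblehub_id("scribblehub.123456.7890123")
```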

### Data Splits

No splitting of the data was performed.
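
If you need held-out data for evaluation, one option (not something the dataset itself provides) is to derive your own split with the `datasets` library's `train_test_split` method:

```python
# Sketch: carve a 1% evaluation set out of the (assumed) single "train" split.
from datasets import load_dataset

ds = load_dataset("RyokoAI/ScribbleHub17K", split="train")  # assumed Hub path
splits = ds.train_test_split(test_size=0.01, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```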

## Dataset Creation

### Curation Rationale

TODO

### Source Data

#### Initial Data Collection and Normalization

TODO

#### Who are the source language producers?

The authors of each story.

### Annotations

#### Annotation process

TODO

#### Who are the annotators?

TODO

### Personal and Sensitive Information

The dataset contains only works of fiction, and we do not believe it contains any PII.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content. It may also be useful for other languages, depending on your language model.

### Discussion of Biases

This dataset is composed of fictional works by various authors, so its contents will reflect those authors' biases. **Additionally, this dataset contains NSFW material and was not filtered. Beware of stereotypes.**

### Other Known Limitations

N/A

## Additional Information

### Dataset Curators

Ronsor Labs

### Licensing Information

Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is distributed under fair use principles.

### Citation Information

```bibtex
@misc{ryokoai2023-bigknow2022,
  title        = {BigKnow2022: Bringing Language Models Up to Speed},
  author       = {Ronsor},
  year         = {2023},
  howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
```

### Contributions

Thanks to @ronsor (GitHub) for gathering this dataset.