ve-forbryderne nazneen committed on
Commit
acfe273
1 Parent(s): 95a7ea7

model documentation (#2)


- model documentation (409fd6c254fc234b7b3598e3f4e0d209ffe322b9)
- Fill in some of the missing fields in the model card (789d41c1fd8f969ca1dd1f13a0d32c87993df4dc)


Co-authored-by: Nazneen Rajani <nazneen@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +171 -0
README.md ADDED
@@ -0,0 +1,171 @@
+ ---
+ tags:
+ - text-generation
+ ---
+ # Model Card for GPT-J-6B-Skein
+
+ # Model Details
+
+ ## Model Description
+
+ - **Developed by:** KoboldAI
+ - **Shared by [Optional]:** KoboldAI
+ - **Model type:** Text Generation
+ - **Language(s) (NLP):** English
+ - **License:** Apache License 2.0
+ - **Related Models:** [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B)
+ - **Parent Model:** GPT-J
+ - **Resources for more information:**
+   - [GitHub Repo](https://github.com/kingoflolz/mesh-transformer-jax)
+   - [Associated Model Doc](https://huggingface.co/docs/transformers/main/en/model_doc/gptj#transformers.GPTJForCausalLM)
+
+ # Uses
+
+ ## Direct Use
+
+ This model is designed for creative story generation. It can understand both free-form text and text written in interactive fiction style with actions starting with "> You", such as:
+
+ ```
+ You become aware of her breathing -- the slight expansion of her ribs, the soft exhalation -- natural, and yet somehow studied. "Ah -- by the way," she says, in a way that utterly fails to be casual, "have you seen the artist out there? -- My artist, that is."
+
+ "No," you respond, uneasy. You open your mouth and close it again.
+
+ > You ask about the experience of waking up
+ ```
+
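+ As a minimal, illustrative sketch (the prompt text and sampling settings below are assumptions, not recommendations), a prompt in this format can be fed to the model via the `transformers` text-generation pipeline:
+
+ ```python
+ from transformers import pipeline
+
+ # Load the model into a text-generation pipeline
+ # (requires enough memory for a 6B-parameter model).
+ generator = pipeline("text-generation", model="KoboldAI/GPT-J-6B-Skein")
+
+ # A hypothetical prompt in the interactive fiction style described above.
+ prompt = (
+     "You are standing at the edge of a quiet forest, the path behind you fading into dusk.\n"
+     "\n"
+     "> You step forward into the trees\n"
+ )
+
+ # Sampling settings here are illustrative; tune them for your own use case.
+ result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.8, top_p=0.9)
+ print(result[0]["generated_text"])
+ ```
+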
+ ## Downstream Use [Optional]
+
+ More information needed
+
+ ## Out-of-Scope Use
+
+ The model should not be used to intentionally create hostile or alienating environments for people.
+
+ # Bias, Risks, and Limitations
+
+ The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
+
+ GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
+
+ As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
+
+ See the [GPT-J 6B model card](https://huggingface.co/EleutherAI/gpt-j-6B) for more information.
+
+ ## Recommendations
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
+
+ # Training Details
+
+ ## Training Data
+
+ The data consist mostly of light novels from the dataset of the [KoboldAI/GPT-Neo-2.7B-Horni-LN](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Horni-LN) model, plus assorted interactive fiction. The dataset uses `[Themes: <comma-separated list of genres>]` for tagging, which means that if similar text is placed in the context, the model will attempt to generate text in the specified style(s). For more details about the dataset, consult [this document](https://wandb.ai/ve-forbryderne/skein/runs/files/files/datasets/README.txt).
+
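+ For example, a context that uses this tagging convention might look like the following (the genre list and story text are purely illustrative, not taken from the dataset):
+
+ ```
+ [Themes: dark fantasy, mystery]
+ The lantern guttered as Mira descended the cellar stairs, the smell of wet stone thick around her.
+ ```
+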
+ ## Training Procedure
+
+ ### Preprocessing
+
+ The data were preprocessed using the Python package ftfy to eliminate non-ASCII punctuation characters and encoding errors as far as possible. The interactive fiction in the dataset was also deduplicated, since interactive fiction logs often contain duplicate text from, for example, visiting the same in-game area several times. spaCy was used for grammatical analysis in order to reformat the actions commonly found in old text adventure games into more complete sentences. There was also some manual removal of content such as "thank you for playing" messages and title messages.
+
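+ As a rough sketch of this kind of cleaning step (assuming the `ftfy` package; the simple exact-match deduplication below stands in for the project's actual pipeline):
+
+ ```python
+ import ftfy
+
+ def clean_and_dedupe(raw_texts):
+     """Fix encoding problems with ftfy and drop exact duplicate passages."""
+     seen = set()
+     cleaned = []
+     for text in raw_texts:
+         # ftfy repairs mojibake and other encoding artifacts.
+         fixed = ftfy.fix_text(text)
+         if fixed not in seen:
+             seen.add(fixed)
+             cleaned.append(fixed)
+     return cleaned
+
+ # The two identical, mojibake-damaged passages collapse to a single cleaned entry.
+ corpus = clean_and_dedupe([
+     "â€œHello,â€\x9d she said.",
+     "â€œHello,â€\x9d she said.",
+ ])
+ print(corpus)
+ ```
+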
+ ### Speeds, Sizes, Times
+
+ Training took approximately 14 hours in total, at an average speed of 5265 tokens per second.
+
+ # Evaluation
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ More information needed
+
+ ### Factors
+
+ More information needed
+
+ ### Metrics
+
+ More information needed
+
+ ## Results
+
+ More information needed
+
+ # Model Examination
+
+ More information needed
+
+ # Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** More information needed
+ - **Hours used:** More information needed
+ - **Cloud Provider:** More information needed
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ More information needed
+
+ ## Compute Infrastructure
+
+ More information needed
+
+ ### Hardware
+
+ More information needed
+
+ ### Software
+
+ [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax)
+
+ # Citation
+
+ **BibTeX:**
+
+ ```
+ @misc{mesh-transformer-jax,
+   author = {Wang, Ben},
+   title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
+   howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
+   year = 2021,
+   month = May
+ }
+ ```
+
+ # Glossary [optional]
+
+ More information needed
+
+ # More Information [optional]
+
+ More information needed
+
+ # Model Card Authors [optional]
+
+ KoboldAI in collaboration with Ezi Ozoani and the Hugging Face team
+
+ # Model Card Contact
+
+ More information needed
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("KoboldAI/GPT-J-6B-Skein")
+ model = AutoModelForCausalLM.from_pretrained("KoboldAI/GPT-J-6B-Skein")
+ ```
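+
+ Once the tokenizer and model are loaded, text can be generated with `model.generate`; the prompt and sampling parameters below are illustrative assumptions rather than recommended settings:
+
+ ```python
+ # Encode a prompt in the style the model was trained on and sample a continuation.
+ prompt = "You find yourself in a dimly lit library.\n\n> You look around\n"
+ input_ids = tokenizer(prompt, return_tensors="pt").input_ids
+
+ output_ids = model.generate(
+     input_ids,
+     max_new_tokens=80,
+     do_sample=True,
+     temperature=0.8,
+     top_p=0.9,
+     pad_token_id=tokenizer.eos_token_id,
+ )
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```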
+ </details>