wassemgtk committed
Commit 94bb2c3 • 1 Parent(s): 09bc830

Update README.md

Files changed (1):
  1. README.md +103 -68

README.md CHANGED
@@ -30,94 +30,129 @@ img {
  Writer-small 128M is a transformer-based language model. GPT refers to a class of transformer decoder-only models similar to GPT-2 and GPT-3. It has Tensor Parallelism (TP) of 1 and Pipeline Parallelism (PP) of 1, and should fit on a single NVIDIA GPU.

- ## Getting started

- ### Step 1: Install Writer-small and dependencies

- You will need to install NVIDIA Apex:

- ```
- git clone https://github.com/ericharper/apex.git
- cd apex
- git checkout nm_v1.11.0
- pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
- ```

- Then install the NeMo toolkit:

- ```
- pip install nemo_toolkit['nlp']==1.11.0
- ```

- ### Step 2: Launch eval server

- **Note.** The model has been trained with Tensor Parallelism (TP) of 1 and Pipeline Parallelism (PP) of 1, and should fit on a single NVIDIA GPU.

- ```
- git clone https://github.com/NVIDIA/NeMo.git
- cd NeMo/examples/nlp/language_modeling
- git checkout v1.11.0
- python megatron_gpt_eval.py gpt_model_file=Writer-gpt-small.nemo server=True tensor_model_parallel_size=1 trainer.devices=1
- ```
-
- ### Step 3: Send prompts to your model!
- ```python
- import json
- import requests
-
- port_num = 5555
- headers = {"Content-Type": "application/json"}
-
- def request_data(data):
-     resp = requests.put('http://localhost:{}/generate'.format(port_num),
-                         data=json.dumps(data),
-                         headers=headers)
-     sentences = resp.json()['sentences']
-     return sentences
-
-
- data = {
-     "sentences": ["Tell me an interesting fact about space travel."]*1,
-     "tokens_to_generate": 50,
-     "temperature": 1.0,
-     "add_BOS": True,
-     "top_k": 0,
-     "top_p": 0.9,
-     "greedy": False,
-     "all_probs": False,
-     "repetition_penalty": 1.2,
-     "min_tokens_to_generate": 2,
- }

- sentences = request_data(data)
- print(sentences)
- ```

- ## Training Data

- The model was trained on [the Pile dataset prepared by EleutherAI](https://pile.eleuther.ai/). [4]

- ## Evaluation results

- *Zero-shot performance.* Evaluated using the [LM Evaluation Test Suite from AI21](https://github.com/AI21Labs/lm-evaluation).

- | ARC-Challenge | ARC-Easy | RACE-middle | RACE-high | Winogrande | RTE | BoolQ | HellaSwag | PiQA |
- | ------------- | -------- | ----------- | --------- | ---------- | --- | ----- | --------- | ---- |
- | 0.3012 | 0.4596 | 0.459 | 0.3797 | 0.5343 | 0.5451 | 0.5979 | 0.4443 | 0.6834 |

- ## Limitations

- The model was trained on data originally crawled from the Internet. This data contains toxic language and societal biases. Therefore, the model may amplify those biases and return toxic responses, especially when given toxic prompts.

- ## References

- [1] [Improving Language Understanding by Generative Pre-Training](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)

- [2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)

- [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

- [4] [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)

- ## License

- Use of this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. By downloading the public release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
 
+ # GPT-J 6B

+ ## Model Description

+ GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.

+ <figure>

+ | Hyperparameter       | Value      |
+ |----------------------|------------|
+ | \\(n_{parameters}\\) | 6053381344 |
+ | \\(n_{layers}\\)     | 28&ast;    |
+ | \\(d_{model}\\)      | 4096       |
+ | \\(d_{ff}\\)         | 16384      |
+ | \\(n_{heads}\\)      | 16         |
+ | \\(d_{head}\\)       | 256        |
+ | \\(n_{ctx}\\)        | 2048       |
+ | \\(n_{vocab}\\)      | 50257/50400&dagger; (same tokenizer as GPT-2/3) |
+ | Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
+ | RoPE Dimensions      | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
+ <figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self-attention block.</p>
+ <p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
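
+ The parameter count in the table can be roughly cross-checked from the other hyperparameters. The sketch below is a back-of-the-envelope estimate only: it ignores biases and layer norms and assumes an untied output projection, so it slightly undercounts the reported total.
+ ```python
+ # Rough parameter-count estimate from the hyperparameters above (illustrative only).
+ n_layers, d_model, d_ff, n_vocab = 28, 4096, 16384, 50400
+
+ embedding  = n_vocab * d_model        # input embedding matrix
+ lm_head    = n_vocab * d_model        # output projection (assumed untied from the embedding)
+ per_layer  = 4 * d_model * d_model    # Q, K, V and attention output projections
+ per_layer += 2 * d_model * d_ff       # feedforward up/down projections
+
+ total = embedding + lm_head + n_layers * per_layer
+ print(f"{total:,}")  # 6,050,021,376 -- about 0.06% below the reported 6,053,381,344
+ ```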

+ The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
+ dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
+ dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
+ GPT-2/GPT-3.
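
+ For readers unfamiliar with RoPE, the snippet below is a minimal PyTorch sketch of the interleaved rotation it applies; the actual model is implemented in Mesh Transformer JAX, and the function name here is illustrative. In GPT-J the rotation touches only the first 64 of the 256 dimensions in each head.
+ ```python
+ import torch
+
+ def apply_rope(x: torch.Tensor, base: int = 10000) -> torch.Tensor:
+     """Rotate interleaved pairs of features by position-dependent angles.
+     x has shape (..., seq_len, rotary_dim) with rotary_dim even (64 for GPT-J)."""
+     seq_len, dim = x.shape[-2], x.shape[-1]
+     inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
+     angles = torch.outer(torch.arange(seq_len).float(), inv_freq)  # (seq_len, dim/2)
+     cos, sin = angles.cos(), angles.sin()
+     x1, x2 = x[..., 0::2], x[..., 1::2]
+     out = torch.empty_like(x)
+     out[..., 0::2] = x1 * cos - x2 * sin
+     out[..., 1::2] = x1 * sin + x2 * cos
+     return out
+ ```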

+ ## Training data

+ GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).

+ ## Training procedure

+ This model was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
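
+ Concretely, "cross-entropy loss on the next token" means each position is scored against the token that follows it. The snippet below is a generic PyTorch illustration of that objective (hypothetical tensor names), not the Mesh Transformer JAX training code.
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
+     """logits: (batch, seq_len, vocab) model scores; tokens: (batch, seq_len) input ids.
+     The target for position t is the token at position t + 1, so both are shifted by one."""
+     shifted_logits = logits[:, :-1, :]   # predictions for positions 0 .. n-2
+     targets = tokens[:, 1:]              # ground-truth tokens 1 .. n-1
+     return F.cross_entropy(
+         shifted_logits.reshape(-1, shifted_logits.size(-1)),
+         targets.reshape(-1),
+     )
+ ```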

+ ## Intended Use and Limitations

+ GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.

+ ### How to use

+ This model can be easily loaded using the `AutoModelForCausalLM` functionality:

+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
+ model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
+ ```
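
+ Once loaded, generation goes through the standard `transformers` generate API. The example below is a minimal illustration; the prompt and sampling settings are arbitrary choices, not recommendations:
+ ```python
+ prompt = "In a shocking finding, scientists discovered a herd of unicorns living in"
+ input_ids = tokenizer(prompt, return_tensors="pt").input_ids
+
+ # Sample a continuation; max_length counts the prompt tokens as well.
+ output_ids = model.generate(
+     input_ids,
+     do_sample=True,
+     temperature=0.9,
+     top_p=0.95,
+     max_length=100,
+ )
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```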

+ ### Limitations and Biases

+ The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.

+ GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.

+ As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

+ ## Evaluation results

+ <figure>
+
+ | Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
+ |--------------------------|-------------|----------------|---------------|---------------|--------------|-------------|--------|-------------------|
+ | Random Chance | &check; | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
+ | GPT-3 Ada&ddagger; | &cross; | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
+ | GPT-2 1.5B | &check; | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
+ | GPT-Neo 1.3B&ddagger; | &check; | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
+ | Megatron-2.5B&ast; | &cross; | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
+ | GPT-Neo 2.7B&ddagger; | &check; | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
+ | GPT-3 1.3B&ast;&ddagger; | &cross; | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
+ | GPT-3 Babbage&ddagger; | &cross; | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
+ | Megatron-8.3B&ast; | &cross; | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
+ | GPT-3 2.7B&ast;&ddagger; | &cross; | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
+ | Megatron-11B&dagger; | &check; | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
+ | **GPT-J 6B&ddagger;** | **&check;** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
+ | GPT-3 6.7B&ast;&ddagger; | &cross; | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
+ | GPT-3 Curie&ddagger; | &cross; | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
+ | GPT-3 13B&ast;&ddagger; | &cross; | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
+ | GPT-3 175B&ast;&ddagger; | &cross; | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
+ | GPT-3 Davinci&ddagger; | &cross; | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
+ <figcaption><p>Models are roughly sorted by performance, or by FLOPs if performance is not available.</p>
+
+ <p><strong>&ast;</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
+ running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
+ weights or with API access. Due to subtle implementation differences as well as different zero-shot task framing, these
+ might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
+ details.</p>
+
+ <p><strong>&dagger;</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
+ reproduce the generation quality and evaluations (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>,
+ <a href="https://github.com/pytorch/fairseq/issues/2358">2</a>, <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>).
+ Thus, evaluation was not attempted.</p>
+
+ <p><strong>&ddagger;</strong> These models have been trained with data which contains possible test-set contamination. The OpenAI GPT-3 models
+ failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as this one, are
+ trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
+
+ ## Citation and Related Information
+
+ ### BibTeX entry
+
+ To cite this model:
+ ```bibtex
+ @misc{gpt-j,
+   author = {Wang, Ben and Komatsuzaki, Aran},
+   title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
+   howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
+   year = 2021,
+   month = May
+ }
+ ```

+ To cite the codebase that trained this model:
+ ```bibtex
+ @misc{mesh-transformer-jax,
+   author = {Wang, Ben},
+   title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
+   howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
+   year = 2021,
+   month = May
+ }
+ ```

+ If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.