Waseem AlShikh committed on
Commit
09bc830
1 Parent(s): ea7d771

128M model

README.md CHANGED
@@ -1,3 +1,123 @@
  ---
- license: bigscience-bloom-rail-1.0
+ language:
+ - en
+ datasets:
+ - English
+ tags:
+ - text generation
+ - pytorch
+ - causal-lm
+ pipeline_tag: text-generation
+ library_name: transformers
+ license: cc-by-4.0
  ---
+
+
+ # Writer-small 128M
+
+ <style>
+ img {
+     display: inline;
+ }
+ </style>
+
+ |[![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)|[![Model size](https://img.shields.io/badge/Params-126M-green)](#model-architecture)|[![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets)|
+
+
+ ## Model Description
+
+ Writer-small 128M is a transformer-based language model. GPT refers to a class of transformer decoder-only models similar to GPT-2 and GPT-3. The model was trained with Tensor Parallelism (TP) of 1 and Pipeline Parallelism (PP) of 1 and should fit on a single NVIDIA GPU.
+
+
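+ Since this commit also adds a standard GPT-2-style `config.json` and `pytorch_model.bin`, the checkpoint should load directly with Hugging Face `transformers` as well. A minimal sketch is below; the model id is an assumption taken from `config.json` and may differ from the repository's final Hub id.
+
+ ```python
+ # Minimal sketch: load the checkpoint with Hugging Face transformers.
+ # Assumption: the id below (from config.json "_name_or_path") resolves to this
+ # repository; substitute a local path or the actual Hub id if it does not.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "Writer/Writer-LLM-small"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ inputs = tokenizer("Tell me an interesting fact about space travel.", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+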
+ ## Getting started
+
+ ### Step 1: Install Writer-small and dependencies
+
+ You will need to install NVIDIA Apex:
+
+ ```bash
+ git clone https://github.com/ericharper/apex.git
+ cd apex
+ git checkout nm_v1.11.0
+ pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
+ ```
+
+ Then install the NeMo toolkit with the NLP collection:
+
+ ```bash
+ pip install nemo_toolkit['nlp']==1.11.0
+ ```
+
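+ To confirm the environment is set up, a quick sanity check can be run (a minimal sketch; it only assumes the two packages installed above import cleanly and that `nemo` exposes a `__version__` attribute):
+
+ ```python
+ # Sanity check for Step 1: both Apex and NeMo should import,
+ # and NeMo should report the pinned 1.11.0 version.
+ import apex   # noqa: F401  (built from the fork cloned above)
+ import nemo
+
+ print(nemo.__version__)
+ ```
+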
+ ### Step 2: Launch eval server
+
+ **Note.** The model has been trained with Tensor Parallelism (TP) of 1 and Pipeline Parallelism (PP) of 1 and should fit on a single NVIDIA GPU.
+
+ ```bash
+ git clone https://github.com/NVIDIA/NeMo.git
+ cd NeMo/examples/nlp/language_modeling
+ git checkout v1.11.0
+ python megatron_gpt_eval.py gpt_model_file=Writer-gpt-small.nemo server=True tensor_model_parallel_size=1 trainer.devices=1
+ ```
+
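+ Loading the checkpoint can take a while before the server starts answering. The helper below (a sketch, not part of the NeMo examples) simply waits until the default port used in Step 3 accepts connections:
+
+ ```python
+ # Wait until the eval server accepts TCP connections on port 5555
+ # (the port used by the request example in Step 3).
+ import socket
+ import time
+
+ def wait_for_server(host="localhost", port=5555, timeout=300):
+     deadline = time.time() + timeout
+     while time.time() < deadline:
+         try:
+             with socket.create_connection((host, port), timeout=2):
+                 return True
+         except OSError:
+             time.sleep(2)
+     return False
+
+ print("server ready:", wait_for_server())
+ ```
+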
+ ### Step 3: Send prompts to your model!
+
+ ```python
+ import json
+ import requests
+
+ port_num = 5555
+ headers = {"Content-Type": "application/json"}
+
+ def request_data(data):
+     resp = requests.put('http://localhost:{}/generate'.format(port_num),
+                         data=json.dumps(data),
+                         headers=headers)
+     sentences = resp.json()['sentences']
+     return sentences
+
+
+ data = {
+     "sentences": ["Tell me an interesting fact about space travel."]*1,
+     "tokens_to_generate": 50,
+     "temperature": 1.0,
+     "add_BOS": True,
+     "top_k": 0,
+     "top_p": 0.9,
+     "greedy": False,
+     "all_probs": False,
+     "repetition_penalty": 1.2,
+     "min_tokens_to_generate": 2,
+ }
+
+ sentences = request_data(data)
+ print(sentences)
+ ```
+
+
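+ The request fields above map directly onto the server's sampling options; for example, reusing `request_data` and `data` from the script, a deterministic completion can presumably be requested by toggling the `greedy` field:
+
+ ```python
+ # Variation on the request above: greedy decoding instead of top-p sampling.
+ # Assumes request_data() and data from the previous snippet are in scope;
+ # "greedy" is one of the request fields shown there.
+ greedy_request = dict(data, greedy=True)
+ print(request_data(greedy_request))
+ ```
+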
+ ## Training Data
+
+ The model was trained on [The Pile dataset prepared by EleutherAI](https://pile.eleuther.ai/). [4]
+
+ ## Evaluation results
+
+ *Zero-shot performance.* Evaluated using the [LM Evaluation Test Suite from AI21](https://github.com/AI21Labs/lm-evaluation).
+
+ | ARC-Challenge | ARC-Easy | RACE-middle | RACE-high | Winogrande | RTE | BoolQ | HellaSwag | PiQA |
+ | ------------- | -------- | ----------- | --------- | ---------- | --- | ----- | --------- | ---- |
+ | 0.3012 | 0.4596 | 0.459 | 0.3797 | 0.5343 | 0.5451 | 0.5979 | 0.4443 | 0.6834 |
+
+ ## Limitations
+
+ The model was trained on data originally crawled from the Internet. This data contains toxic language and societal biases, so the model may amplify those biases and return toxic responses, especially when given toxic prompts.
+
+ ## References
+
+ [1] [Improving Language Understanding by Generative Pre-Training](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
+
+ [2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)
+
+ [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
+
+ [4] [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
+
+ ## License
+
+ Use of this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. By downloading the publicly released version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "Writer/Writer-LLM-small",
+   "activation_function": "gelu",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "initializer_range": 0.023,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": 3072,
+   "n_layer": 12,
+   "n_positions": 2048,
+   "reorder_and_upcast_attn": false,
+   "resid_pdrop": 0.1,
+   "scale_attn_by_inverse_layer_idx": false,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.24.0",
+   "use_cache": true,
+   "vocab_size": 50257
+ }
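
A quick way to relate these hyperparameters to the ~126M parameter figure in the README badge is to count the weights the config implies; a rough sketch (standard GPT-2 layout, embedding and output weights tied):

```python
# Rough GPT-2-style parameter count from the config values above.
n_embd, n_inner, n_layer = 768, 3072, 12
n_positions, vocab_size = 2048, 50257

embeddings = vocab_size * n_embd + n_positions * n_embd
per_layer = (
    (n_embd * 3 * n_embd + 3 * n_embd)   # fused QKV projection
    + (n_embd * n_embd + n_embd)         # attention output projection
    + (n_embd * n_inner + n_inner)       # MLP up-projection
    + (n_inner * n_embd + n_embd)        # MLP down-projection
    + 4 * n_embd                         # two LayerNorms (weight + bias)
)
total = embeddings + n_layer * per_layer + 2 * n_embd  # + final LayerNorm
print(f"~{total / 1e6:.1f}M parameters")  # ~125.2M, in line with the ~126M badge
```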
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6921cac57faf7302a5a7614be16602556eb58dc2055fb48fd2ebf905aa5d365e
+ size 551292477
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 1024,
+   "name_or_path": "gpt2",
+   "special_tokens_map_file": null,
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>"
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff