okuchaiev committed
Commit 70077f0
1 Parent(s): 073bf85

Update README.md

---
language:
- en
library_name: nemo
datasets:
- the_pile
tags:
- text generation
- pytorch
- causal-lm
license: cc-by-4.0
---

# NeMo Megatron-GPT 5B

<style>
img {
 display: inline;
}
</style>

|[![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)|[![Model size](https://img.shields.io/badge/Params-5B-green)](#model-architecture)|[![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets)

## Model Description

Megatron-GPT 5B is a transformer-based language model. GPT refers to a class of transformer decoder-only models similar to GPT-2 and GPT-3, while 5B refers to the total number of trainable parameters (5 billion) [1, 2].

This model was trained with [NeMo Megatron](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).

## Getting started

### Step 1: Install NeMo and dependencies

You will need to install NVIDIA Apex and NeMo.

```
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
```

```
pip install nemo_toolkit['nlp']==1.11.0
```

Alternatively, you can use the NeMo Megatron training Docker container, which has all dependencies pre-installed.

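Once the installation finishes, a quick way to confirm that everything imports correctly is the small sanity check below. This is a minimal sketch; it only assumes that the two packages above installed cleanly and that `nemo` exposes a `__version__` attribute, as recent NeMo releases do.

```python
# Sanity check: both packages should import without errors after Step 1.
import apex  # noqa: F401  (only the import itself is being verified)
import nemo

# Expected to print 1.11.0 when the pinned NeMo release above is installed.
print(nemo.__version__)
```
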
### Step 2: Launch eval server

**Note.** The model has been trained with Tensor Parallelism (TP) of 1 and Pipeline Parallelism (PP) of 1 and should fit on a single NVIDIA GPU.

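The launch command below expects the `nemo_gpt5B_fp16_tp2.nemo` checkpoint file to be available locally. If you have not downloaded it yet, one possible way to fetch it is with the `huggingface_hub` client; this is a sketch, and the `repo_id` is an assumption (it is not stated in this card), so adjust it to the repository that actually hosts the checkpoint.

```python
# Hypothetical download sketch for the checkpoint used in Step 2.
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="nvidia/nemo-megatron-gpt-5B",  # assumed repository id -- adjust if needed
    filename="nemo_gpt5B_fp16_tp2.nemo",    # checkpoint name used by the launch command
)
print(checkpoint_path)  # pass this path as gpt_model_file= when launching the server
```
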
The example below launches the TP2 variant of the checkpoint (`nemo_gpt5B_fp16_tp2.nemo`) on 2 GPUs.

```
git clone https://github.com/NVIDIA/NeMo.git
cd NeMo/examples/nlp/language_modeling
git checkout v1.11.0
python megatron_gpt_eval.py gpt_model_file=nemo_gpt5B_fp16_tp2.nemo server=True tensor_model_parallel_size=2 trainer.devices=2
```

### Step 3: Send prompts to your model!

```python
import json
import requests

# Port used by the megatron_gpt_eval.py server launched in Step 2.
port_num = 5555
headers = {"Content-Type": "application/json"}


def request_data(data):
    # Send a generation request to the eval server and return the generated sentences.
    resp = requests.put('http://localhost:{}/generate'.format(port_num),
                        data=json.dumps(data),
                        headers=headers)
    sentences = resp.json()['sentences']
    return sentences


data = {
    "sentences": ["Tell me an interesting fact about space travel."],
    "tokens_to_generate": 50,
    "temperature": 1.0,
    "add_BOS": True,
    "top_k": 0,
    "top_p": 0.9,
    "greedy": False,
    "all_probs": False,
    "repetition_penalty": 1.2,
    "min_tokens_to_generate": 2,
}

sentences = request_data(data)
print(sentences)
```

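Because `sentences` is a list, several prompts can be batched into a single request, and the same helper can be reused with different decoding settings. The sketch below builds on the code above (it assumes `request_data` from Step 3 is defined); the prompts are illustrative, and only parameters already shown in that request are used.

```python
# Batch two prompts in one request and switch from sampling to greedy decoding.
batched = {
    "sentences": [
        "Tell me an interesting fact about space travel.",
        "Explain what a transformer decoder is in one sentence.",
    ],
    "tokens_to_generate": 50,
    "temperature": 1.0,
    "add_BOS": True,
    "top_k": 0,
    "top_p": 0.9,
    "greedy": True,  # deterministic decoding instead of top-k/top-p sampling
    "all_probs": False,
    "repetition_penalty": 1.2,
    "min_tokens_to_generate": 2,
}

for sentence in request_data(batched):
    print(sentence)
```
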
## Training Data

The model was trained on [The Pile dataset prepared by EleutherAI](https://pile.eleuther.ai/) [4].

## Evaluation results

*Zero-shot performance.* Evaluated using the [LM Evaluation Test Suite from AI21](https://github.com/AI21Labs/lm-evaluation).

| ARC-Challenge | ARC-Easy | RACE-middle | RACE-high | Winogrande | RTE | BoolQA | HellaSwag | PiQA |
| ------------- | -------- | ----------- | --------- | ---------- | ------ | ------ | --------- | ------ |
| 0.3976 | 0.5566 | 0.5007 | 0.4171 | 0.6133 | 0.5812 | 0.6356 | 0.6298 | 0.7492 |

## Limitations

The model was trained on data originally crawled from the Internet. This data contains toxic language and societal biases. Therefore, the model may amplify those biases and return toxic responses, especially when given toxic prompts.

## References

[1] [Improving Language Understanding by Generative Pre-Training](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)

[2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

[4] [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)

## License

The license to use this model is covered by [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.