<!---
# ##############################################################################################
#
# Copyright (c) 2021-, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ##############################################################################################
-->
[Megatron](https://arxiv.org/pdf/1909.08053.pdf) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This particular Megatron model is a bidirectional transformer in the style of BERT, trained on text sourced from Wikipedia, RealNews, OpenWebText, and CC-Stories. It contains 345 million parameters and is made up of 24 layers and 16 attention heads, with a hidden size of 1024.
Find more information at [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
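For reference, here is a minimal sketch of how those numbers map onto the `MegatronBertConfig` class in `Transformers`. It is illustrative only; the converted checkpoint described below ships its own `config.json`, so you never need to build the configuration by hand.
```
from transformers import MegatronBertConfig

# Illustrative only: the 345M architecture described above.
config = MegatronBertConfig(
    num_hidden_layers=24,    # 24 layers
    num_attention_heads=16,  # 16 attention heads
    hidden_size=1024,        # hidden size of 1024
)
print(config)
```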
# How to run Megatron BERT using Transformers
## Prerequisites
In this guide, we run all the commands from a folder called `$MYDIR`, defined as (in `bash`):
```
export MYDIR=$HOME
```
Feel free to change the location at your convenience.
To run some of the commands below, you'll have to clone `Transformers`.
```
git clone https://github.com/huggingface/transformers.git $MYDIR/transformers
```
## Get the checkpoint from the NVIDIA GPU Cloud
You must create a directory called `nvidia/megatron-bert-cased-345m`.
```
mkdir -p $MYDIR/nvidia/megatron-bert-cased-345m
```
You can download the checkpoint from the [NVIDIA GPU Cloud (NGC)](https://ngc.nvidia.com/catalog/models/nvidia:megatron_bert_345m). To do so, you
have to [sign up](https://ngc.nvidia.com/signup) for and set up the NVIDIA GPU
Cloud (NGC) Registry CLI. Further documentation for downloading models can be
found in the [NGC
documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1).
Alternatively, you can directly download the checkpoint using:
```
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O $MYDIR/nvidia/megatron-bert-cased-345m/checkpoint.zip
```
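Optionally, you can sanity-check the download before converting it. The snippet below is only a sketch and assumes `$MYDIR` is exported as above:
```
import os
import zipfile

# Confirm the downloaded checkpoint is a readable zip archive.
path = os.path.join(os.environ['MYDIR'], 'nvidia/megatron-bert-cased-345m/checkpoint.zip')
with zipfile.ZipFile(path) as archive:
    print(archive.namelist()[:5])  # first few files inside the checkpoint
```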
## Converting the checkpoint
In order to be loaded by `Transformers`, the checkpoint has to be converted. Run the following command for that purpose.
It will create `config.json` and `pytorch_model.bin` in `$MYDIR/nvidia/megatron-bert-cased-345m`.
You can move those files to a different directory if needed.
```
python3 $MYDIR/transformers/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py $MYDIR/nvidia/megatron-bert-cased-345m/checkpoint.zip
```
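Optionally, you can verify the result of the conversion. This is only a sketch; it assumes the conversion wrote a standard BERT-style `config.json` into `$MYDIR/nvidia/megatron-bert-cased-345m`:
```
import json
import os

# Inspect the generated config.json to confirm the expected architecture.
directory = os.path.join(os.environ['MYDIR'], 'nvidia/megatron-bert-cased-345m')
with open(os.path.join(directory, 'config.json')) as f:
    config = json.load(f)
print(config['hidden_size'], config['num_hidden_layers'], config['num_attention_heads'])
```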
As explained in [PR #14956](https://github.com/huggingface/transformers/pull/14956), if you get the following exception when running this conversion script:
```
ModuleNotFoundError: No module named 'megatron.model.enums'
```
you need to tell Python where to find your clone of Megatron-LM, e.g.:
```
cd /tmp
git clone https://github.com/NVIDIA/Megatron-LM
PYTHONPATH=/tmp/Megatron-LM python src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py ...
```
Or, if you already have Megatron-LM cloned elsewhere, simply point `PYTHONPATH` at that existing clone.
If the training was done using a Megatron-LM fork, e.g. [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), then
you may need to have that one on your path instead, i.e., `/path/to/Megatron-DeepSpeed`.
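If you drive the conversion from a Python session rather than from the shell, a minimal sketch of the equivalent fix (assuming the clone lives in `/tmp/Megatron-LM`) is:
```
import sys

# Equivalent in spirit to setting PYTHONPATH: make the Megatron-LM clone
# (or a fork such as Megatron-DeepSpeed) importable for the conversion script.
# Adjust the path to wherever your clone lives.
sys.path.insert(0, '/tmp/Megatron-LM')
```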
## Masked LM
The following code shows how to use the Megatron BERT checkpoint and the Transformers API to perform a `Masked LM` task.
```
import os
import torch
from transformers import BertTokenizer, MegatronBertForMaskedLM
# The tokenizer. Megatron was trained with standard tokenizer(s).
tokenizer = BertTokenizer.from_pretrained('nvidia/megatron-bert-cased-345m')
# The path to the config/checkpoint (see the conversion step above).
directory = os.path.join(os.environ['MYDIR'], 'nvidia/megatron-bert-cased-345m')
# Load the model from $MYDIR/nvidia/megatron-bert-cased-345m.
model = MegatronBertForMaskedLM.from_pretrained(directory)
# Copy to the device and use FP16.
assert torch.cuda.is_available()
device = torch.device("cuda")
model.to(device)
model.eval()
model.half()
# Create inputs (from the BERT example page).
input = tokenizer("The capital of France is [MASK]", return_tensors="pt").to(device)
label = tokenizer("The capital of France is Paris", return_tensors="pt")["input_ids"].to(device)
# Run the model.
with torch.no_grad():
    output = model(**input, labels=label)
print(output)
```
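As an optional follow-up (not part of the original example), you can decode the prediction at the `[MASK]` position from the returned logits:
```
# Find the [MASK] position and decode the most likely token for it.
mask_index = (input["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = output.logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```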
## Next sentence prediction
The following code shows how to use the Megatron BERT checkpoint and the Transformers API to perform next
sentence prediction.
```
import os
import torch
from transformers import BertTokenizer, MegatronBertForNextSentencePrediction
# The tokenizer. Megatron was trained with standard tokenizer(s).
tokenizer = BertTokenizer.from_pretrained('nvidia/megatron-bert-cased-345m')
# The path to the config/checkpoint (see the conversion step above).
directory = os.path.join(os.environ['MYDIR'], 'nvidia/megatron-bert-cased-345m')
# Load the model from $MYDIR/nvidia/megatron-bert-cased-345m.
model = MegatronBertForNextSentencePrediction.from_pretrained(directory)
# Copy to the device and use FP16.
assert torch.cuda.is_available()
device = torch.device("cuda")
model.to(device)
model.eval()
model.half()
# Create inputs (from the BERT example page).
input = tokenizer('In Italy, pizza served in formal settings is presented unsliced.',
                  'The sky is blue due to the shorter wavelength of blue light.',
                  return_tensors='pt').to(device)
label = torch.LongTensor([1]).to(device)
# Run the model.
with torch.no_grad():
    output = model(**input, labels=label)
print(output)
```
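As an optional follow-up (not part of the original example), you can turn the two next-sentence logits into probabilities. By the usual BERT convention, index 0 means the second sentence follows the first, and index 1 means it is a random sentence:
```
# Softmax over the two next-sentence classes (cast from FP16 to float32 first).
probs = torch.softmax(output.logits.float(), dim=-1)
print(probs)
```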
# Original code
The original code for Megatron can be found here: [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM).