okuchaiev committed
Commit
ae366d5
1 Parent(s): c179966

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -50,14 +50,14 @@ Alternatively, you can use NeMo Megatron training docker container with all depe

 ### Step 2: Launch eval server

-**Note.** The example below launches a model variant with Tensor Parallelism (TP) of 8 and Pipeline Parallelism (PP) of 1 on 8 GPUs.
+**Note.** The example below launches a model variant with Tensor Parallelism (TP) of 4 and Pipeline Parallelism (PP) of 1 on 4 GPUs.

 ```
 git clone https://github.com/NVIDIA/NeMo.git
 cd NeMo/examples/nlp/language_modeling
 git checkout v1.11.0
-python megatron_gpt_eval.py gpt_model_file=nemo_gpt20B_bf16_tp8.nemo server=True tensor_model_parallel_size=2 trainer.devices=2
+python megatron_gpt_eval.py gpt_model_file=nemo_gpt20B_bf16_tp4.nemo server=True tensor_model_parallel_size=4 trainer.devices=4
 ```

 ### Step 3: Send prompts to your model!
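Once the eval server from Step 2 is up, Step 3 amounts to posting a JSON request to it. A minimal sketch, assuming the NeMo v1.11 eval server defaults (a `PUT /generate` endpoint on `localhost:5555`; the field names below are taken from NeMo's text-generation examples and should be checked against your NeMo version):

```python
import json

# Prompt payload for the eval server started in Step 2 (field names assumed
# from the NeMo v1.11 text-generation examples).
payload = {
    "sentences": ["Deep learning is"],  # one or more prompts to complete
    "tokens_to_generate": 32,           # completion length per prompt
    "temperature": 1.0,
    "add_BOS": True,
}
body = json.dumps(payload)

# To actually send it, the server from Step 2 must be running:
#   import requests
#   resp = requests.put("http://localhost:5555/generate",
#                       data=body,
#                       headers={"Content-Type": "application/json"})
#   print(resp.json()["sentences"][0])
```

The response echoes the prompts with generated continuations appended, so the first completion is read back from the `sentences` field of the returned JSON.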