rooa committed on
Commit f849d0d
Parent: 24ab70b

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -16,7 +16,7 @@ This checkpoint (CodeGen-NL 6B) was pre-trained on [the Pile](https://github.com
  ## Training procedure
 
  CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
- The family of models are trained using 4 TPU-v4 chips by Google, leveraging data and model parallelism.
+ The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
  See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
 
  ## Evaluation results
@@ -35,8 +35,8 @@ This model can be easily loaded using the `AutoModelForCausalLM` functionality:
 
  ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM
- tokenizer = AutoTokenizer.from_pretrained('Salesforce/codegen-6B-nl')
- model = AutoModelForCausalLM.from_pretrained('Salesforce/codegen-6B-nl')
+ tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-nl")
+ model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-nl")
 
  text = "def hello_world():"
  input_ids = tokenizer(text, return_tensors="pt").input_ids
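
For context, the hunk above covers only the first lines of the README's usage snippet. Below is a minimal, runnable sketch of that snippet as it reads after this commit (double-quoted model IDs); the `model.generate` call and its `max_length=128` value are illustrative assumptions, not part of the diff.

```python
# Minimal sketch of the updated README usage example.
# Assumes the `transformers` and `torch` packages are installed;
# the generate() call and max_length value are illustrative, not taken from the diff.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-nl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-nl")

text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Complete the prompt and decode the result back to text.
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

Note that the 6B checkpoint is large in full precision; `from_pretrained` accepts options such as `torch_dtype` to reduce memory, although the snippet in the diff does not use them.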