loubnabnl (HF staff) committed
Commit 22c60ae
1 Parent(s): ad9d8ae

update html

Files changed (1):
  1. architectures/codegen.txt +5 -0
architectures/codegen.txt CHANGED
@@ -1,5 +1,7 @@
 The [CodeGen](https://huggingface.co/Salesforce/codegen-16B-mono) architecture follows a standard transformer decoder with left-to-right causal masking, rotary position embeddings [(Su et al., 2021)](https://arxiv.org/abs/2104.09864), and a context length of 2048. CodeGen models are trained in various sizes.
 
+<div align="center">
+
 |Model | # parameters |
 | - | - |
 | Decoder | 350M |
@@ -7,6 +9,9 @@
 | Decoder | 6.1B |
 | Decoder | 16.1B |
 
+</div>
+
+
 You can load the model and tokenizer directly from [`transformers`](https://huggingface.co/docs/transformers/index):
 
 ```python
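# (The diff hunk stops at the opening ```python fence above, so the file's actual
# snippet is not shown on this page. As a rough sketch only, not the commit's
# original code: loading a CodeGen checkpoint with the standard transformers
# Auto classes looks like this; "Salesforce/codegen-350M-mono" is one of the
# published sizes and is used here just to keep the example small.)
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"  # any size from the table above works the same way
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Generate a short completion for a code prompt
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))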