The CodeGen architecture is a standard transformer decoder with left-to-right causal masking. It uses rotary position embeddings for positional encoding [(Su et al., 2021)](https://arxiv.org/abs/2104.09864) and has a context length of 2048 tokens. CodeGen models are released in a range of sizes.
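If you want to verify these hyperparameters on a released checkpoint, you can inspect its config. This is a minimal sketch; the attribute names (`n_positions`, `rotary_dim`, etc.) assume the CodeGen config class shipped with 🤗 `transformers` and may differ across versions:

```python
from transformers import AutoConfig

# Inspect the architecture hyperparameters of a CodeGen checkpoint.
config = AutoConfig.from_pretrained("Salesforce/codegen-350M-mono")

print(config.n_positions)   # maximum context length, expected 2048
print(config.rotary_dim)    # number of head dimensions rotated by RoPE
print(config.n_layer, config.n_head, config.n_embd)  # depth, heads, hidden size
```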
| Model | # parameters |
| - | - |
| [Salesforce/codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono) | 350M |
| [Salesforce/codegen-2B-mono](https://huggingface.co/Salesforce/codegen-2B-mono) | 2.7B |
| [Salesforce/codegen-6B-mono](https://huggingface.co/Salesforce/codegen-6B-mono) | 6.1B |
| [Salesforce/codegen-16B-mono](https://huggingface.co/Salesforce/codegen-16B-mono) | 16.1B |
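As a sanity check of the sizes above, you can count the parameters of a checkpoint after loading it. The sketch below uses the smallest model; the exact total will differ slightly from the rounded figure in the table:

```python
from transformers import AutoModelForCausalLM

# Load the smallest checkpoint and count its parameters.
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")  # roughly 350M
```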
You can load the model and tokenizer directly from 🤗 [`transformers`](https://huggingface.co/docs/transformers/index):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('Salesforce/codegen-16B-mono')
model = AutoModelForCausalLM.from_pretrained('Salesforce/codegen-16B-mono')

inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model(**inputs)
```
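Calling the model directly returns logits for the next token; to actually complete the prompt you can use `generate`. A minimal sketch continuing the snippet above, with illustrative (not tuned) sampling settings:

```python
# Sample a completion for the prompt; adjust max_new_tokens and
# temperature to taste.
sample = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.2,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(sample[0], skip_special_tokens=True))
```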