CodeParrot 🦜 is a GPT-2 model (1.5B parameters) trained to generate Python code. After the initial training and release of v1.0, we continued training the model and released v1.1 (see below for details).
You can load the CodeParrot model and tokenizer directly in `transformers`:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot")
model = AutoModelWithLMHead.from_pretrained("codeparrot/codeparrot")

inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model(**inputs)
```
or with a `pipeline`:
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="codeparrot/codeparrot")
outputs = pipe("def hello_world():")
```
The model was trained on the cleaned CodeParrot 🦜 dataset in two steps. After the initial training (v1.0), the model was trained for another 30k steps, resulting in v1.1; the settings for both runs are listed in the following table:
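If you want to compare the two checkpoints, `from_pretrained` accepts a `revision` argument; the sketch below assumes the repository keeps a tag for the earlier release (the tag name `"v1.0"` is an assumption):

```python
from transformers import AutoModelWithLMHead

# Assumption: the repo tags the first release as "v1.0"; without the
# revision argument you get the latest checkpoint (v1.1).
model_v1_0 = AutoModelWithLMHead.from_pretrained("codeparrot/codeparrot", revision="v1.0")
```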
The training was executed on 16 x A100 (40GB) GPUs. This setting amounts to roughly 26 + 15 billion tokens for v1.0 and v1.1, respectively.
We evaluated the model on OpenAI's HumanEval benchmark, which consists of programming challenges:
The pass@k metric gives the probability that at least one out of k generated samples passes the unit tests.
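As a minimal sketch, the unbiased pass@k estimator from the Codex paper can be computed per problem from n total samples of which c pass the tests (function and variable names here are illustrative, not part of the CodeParrot codebase):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator for one problem.

    n: total number of generated samples
    c: number of samples that passed the unit tests
    k: number of samples allowed
    """
    if n - c < k:
        # Every size-k subset contains at least one passing sample.
        return 1.0
    # 1 - probability that all k drawn samples fail.
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))
```

The benchmark score is then the average of this quantity over all HumanEval problems.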