|
---
language:
- en
- code
tags:
- pytorch
- causal-lm
- code-generation
license: apache-2.0
---
|
|
|
|
|
# FIM-1.3B |
|
|
|
## Model Description |
|
|
|
FIM-1.3B is the first in a series of large-scale infilling-enabled autoregressive language models trained by CarperAI. Future models in this series, both larger and smaller, will be trained on greater quantities of code data and may use architectural variations optimized for code.
|
|
|
This is a preliminary release of an experimental artifact and should be treated as such. |
|
|
|
|
|
|
|
## Model Dimensions |
|
|
|
|
|
| Hyperparameter       | Value                                                                 |
|----------------------|-----------------------------------------------------------------------|
| \\(n_{parameters}\\) | 1,331,810,304                                                         |
| \\(n_{layers}\\)     | 24                                                                    |
| \\(d_{model}\\)      | 2,048                                                                 |
| \\(d_{ff}\\)         | 8,192                                                                 |
| \\(n_{heads}\\)      | 16                                                                    |
| \\(d_{head}\\)       | 128                                                                   |
| \\(n_{ctx}\\)        | 2,048                                                                 |
| \\(n_{vocab}\\)      | 50,254                                                                |
| Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The model consists of 24 transformer layers with a model dimension of 2048 and a feedforward dimension of 8192. The model dimension is split into 16 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is used for positional encoding.
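As a rough sanity check on these dimensions, the sketch below estimates the parameter count from the table above. This is back-of-the-envelope only: it ignores biases and layer norms and assumes untied input/output embeddings, so it will not match the reported figure exactly.

```python
# Back-of-the-envelope parameter estimate from the dimensions above.
n_layers, d_model, d_ff, n_heads, d_head, n_vocab = 24, 2048, 8192, 16, 128, 50254

assert n_heads * d_head == d_model  # the heads partition the model dimension
assert d_ff == 4 * d_model          # standard 4x feedforward expansion

attn = 4 * d_model * d_model        # Q, K, V, and output projections
ff = 2 * d_model * d_ff             # feedforward up- and down-projections
embed = 2 * n_vocab * d_model       # untied input and output embeddings

# ~1.41B, in the ballpark of the reported 1,331,810,304
print(f"{n_layers * (attn + ff) + embed:,}")
```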
|
|
|
|
|
The model was trained with the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b), with a vocabulary of 50,254 tokens.
|
|
|
|
|
## Training Data |
|
|
|
The model was trained on the Pile, an 800GB dataset of diverse text drawn from many sources. The datasheet and paper for the Pile can be found [here](https://arxiv.org/abs/2201.07311) and [here](https://arxiv.org/abs/2101.00027), respectively.
|
|
|
|
|
## Training Details |
|
|
|
This model was trained for 47,000 steps at a batch size of 6,291,456 tokens per step (roughly 296 billion tokens in total) in the [GPT-NeoX codebase](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
|
|
|
Following [Bavarian et al. 2022](https://arxiv.org/abs/2207.14255), we train the model to additionally perform infilling via a data transformation applied randomly to 90% of input contexts at train time.
|
|
|
Middle segments "to infill" were selected uniformly at random from contexts at the character level, and these contexts were then reformatted as:

```
<SUF> {last 1/3rd of the context} <PRE> {first 1/3rd of the context} <MID> {middle 1/3rd of the context} <EOD>
```
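For illustration, here is a minimal sketch of this transformation. This is hypothetical helper code written for this card, not the actual training pipeline, which lives in the GPT-NeoX codebase:

```python
import random

def fim_transform(text: str, fim_rate: float = 0.9) -> str:
    """Reformat a context into suffix-prefix-middle order with
    probability `fim_rate`, mirroring the train-time transformation."""
    if len(text) < 3 or random.random() >= fim_rate:
        return text  # ~10% of contexts are left as ordinary left-to-right text

    # Choose two character-level split points uniformly at random.
    i, j = sorted(random.sample(range(1, len(text)), 2))
    prefix, middle, suffix = text[:i], text[i:j], text[j:]

    # Suffix-first ordering: the model conditions on the suffix and prefix,
    # then learns to produce the middle after the <MID> sentinel.
    return f"<SUF>{suffix}<PRE>{prefix}<MID>{middle}<EOD>"
```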
|
|
|
|
|
|
|
|
|
|
|
|
|
## How to use |
|
|
|
|
|
This model can be easily loaded using the `AutoModelForCausalLM` class: |
|
|
|
|
|
```python |
|
|
|
|
|
from transformers import AutoTokenizer, AutoModelForCausalLM |
|
|
|
|
|
tokenizer = AutoTokenizer.from_pretrained("CarperAI/FIM-1.3B")
model = AutoModelForCausalLM.from_pretrained("CarperAI/FIM-1.3B")
|
|
|
|
|
``` |
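Once loaded, the model can be used for ordinary left-to-right generation. A minimal sketch, continuing from the block above (the prompt and sampling settings are illustrative):

```python
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```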
|
|
|
### Performing Infilling |
|
|
|
Suppose we have some text that we would like to perform infilling on at a certain “cursor location”. |
|
|
|
This would have the form `{some prelude text here} <INFILLING LOCATION> {some text following cursor}`.
|
|
|
To perform infilling generation, place the input text into this format:
|
|
|
```
<SUF> {some text following cursor} <PRE> {some prelude text here} <MID>
```

The language model's output is then generated after the `<MID>` token.
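Concretely, here is a minimal infilling sketch using the model and tokenizer loaded above. It assumes the `<SUF>`, `<PRE>`, `<MID>`, and `<EOD>` sentinels are recognized by the tokenizer; exact token handling may differ:

```python
prefix = "def add(a, b):\n"
suffix = "\n    return result\n"

# Arrange the context in suffix-first order and generate after <MID>.
prompt = f"<SUF>{suffix}<PRE>{prefix}<MID>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id)

# Everything generated after <MID> is the proposed middle segment;
# the model may emit <EOD> to signal that the infill is complete.
middle = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])
middle = middle.split("<EOD>")[0]
print(prefix + middle + suffix)
```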
|
|
|
|
|
## Intended Uses and Limitations |
|
|
|
FIM-1.3B learns a representation of the English language that can be used to extract features useful for downstream NLP and code generation tasks. However, the model has been trained solely on a standard next-token-prediction language modeling task over its training data.
|
|
|
## Limitations and Biases |
|
|
|
FIM-1.3B was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. FIM-1.3B may produce socially unacceptable or otherwise harmful text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. |
|
|
|
As with all language models, it is hard to predict in advance how FIM-1.3B will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. Code generated by FIM-1.3B should also be checked for security errors by a human before use in production. |
|
|
|
## Evaluation results |
|
|
|
We evaluate our model on a number of standard NLP datasets to verify that our infilling model performs on par with a comparable autoregressive model. |
|
|
|
We use the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) developed by EleutherAI. |
|
|
|
|
|
We report results on LogiQA, PIQA, SciQ, WSC, Winogrande, ARC-easy, ARC-challenge, and LAMBADA for FIM-1.3B and a comparable autoregressive model.
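For reference, an evaluation along these lines can be run with the harness roughly as follows. This is a sketch only: the API and task names vary between harness versions, so the invocation may need adjusting.

```python
from lm_eval import evaluator

# Task names here follow older harness releases and may need updating.
results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=CarperAI/FIM-1.3B",
    tasks=["logiqa", "piqa", "sciq", "wsc", "winogrande",
           "arc_easy", "arc_challenge", "lambada"],
)
print(results["results"])
```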
|
|
|
|
|
|
|
|
|
|
|
We also perform a preliminary investigation of code generation and infilling capabilities by testing on [HumanEval-Infilling](https://github.com/openai/human-eval-infilling) ([Bavarian et al. 2022](https://arxiv.org/abs/2207.14255)).
|
|
|
|
|
|
|
|
|
|
|
|