[InCoder](https://huggingface.co/facebook/incoder-6B) uses a decoder-only Transformer trained with a Causal Masking objective: a left-to-right language model that learns to fill in masked token segments, with a context length of 2048.
<div align="center">

|Model | # parameters |
|   -   |   -  | 
| [facebook/incoder-1B](https://huggingface.co/facebook/incoder-1B) |1.3B |
| [facebook/incoder-6B](https://huggingface.co/facebook/incoder-6B) |6.7B |

</div>

The [Causal Masking objective](https://arxiv.org/abs/2201.07520) is a hybrid of causal and masked language modeling: "it combines the benefit of per-token generation with optional bi-directionality specifically tailored to prompting".
During the training of InCoder, spans of code were randomly masked and moved to the end of each file, which gives the model access to bidirectional context. The figure below from the InCoder [paper](https://arxiv.org/pdf/2204.05999.pdf) illustrates the training process.

<p align="center">
    <img src="https://huggingface.co/datasets/loubnabnl/repo-images/raw/main/incoder.png" alt="drawing" width="750"/>
</p>
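
To make the rearrangement concrete, here is a minimal sketch (not the authors' preprocessing code) of how a single masked span could be moved to the end of a document; the sentinel names `<|mask:0|>` and `<|endofmask|>` follow the paper's notation, but treat the exact formatting as an assumption:

```python
# Illustrative sketch of the causal-masking rearrangement (not the authors' preprocessing code).
# A span is cut out of the document and replaced by a sentinel, and the span is appended at the end,
# so a strictly left-to-right model still conditions on the right-hand context before writing the span.
left = "def add(a, b):\n    "
span = "return a + b"   # the span that gets masked
right = "\n"

masked_doc = left + "<|mask:0|>" + right + "<|mask:0|>" + span + "<|endofmask|>"
print(masked_doc)
```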

So in addition to program synthesis (via left-to-right generation), InCoder can also perform editing (via infilling). The model shows promising results on several zero-shot code infilling tasks such as type prediction, variable renaming, and comment generation.

You can load the model and tokenizer directly from 🤗 [`transformers`](https://huggingface.co/docs/transformers/index):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-6B")
model = AutoModelForCausalLM.from_pretrained("facebook/incoder-6B")

# Left-to-right generation from a prompt
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
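
For infilling, the paper places a sentinel where code should be inserted and lets the model generate the missing span. A hedged sketch of what that could look like with the model loaded above is shown below; the `<|mask:0|>` / `<|endofmask|>` sentinel format is an assumption based on the paper and model card, and the decoding settings are illustrative only:

```python
# Hedged sketch of zero-shot infilling; sentinel format assumed from the paper/model card.
left = "def count_lines(path):\n    "
right = "\n    return n\n"
prompt = left + "<|mask:0|>" + right + "<|mask:0|>"   # ask the model to fill the gap

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])

# Text generated before the end-of-mask sentinel is the proposed infill
infill = completion.split("<|endofmask|>")[0]
print(left + infill + right)
```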