[InCoder](https://huggingface.co/facebook/incoder-6B) uses a decoder-only Transformer trained with the [causal masking objective](https://arxiv.org/abs/2201.07520): it learns a left-to-right language model that can also fill in masked token segments.
| Model | # parameters |
| - | - |
| Decoder | 6.7B |
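
The sketch below illustrates how the checkpoint can be used both for ordinary left-to-right generation and for infilling via mask sentinel tokens. It is a minimal, untested example: loading through `AutoModelForCausalLM`/`AutoTokenizer` and the `<|mask:0|>` sentinel format follow the InCoder paper and released checkpoint, but the exact prompt layout should be checked against the official examples before relying on it.

```python
# Minimal sketch (assumptions noted above), using the facebook/incoder-6B checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/incoder-6B"  # a smaller facebook/incoder-1B checkpoint also exists
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Left-to-right generation works like any decoder-only language model.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=False)
print(tokenizer.decode(outputs[0]))

# Infilling (assumed sentinel format): replace the span to fill with <|mask:0|>,
# append the same sentinel at the end, and let the model generate the infill.
infill_prompt = (
    "def count_lines(path):\n"
    "    <|mask:0|>\n"
    "    return n\n"
    "<|mask:0|>"
)
inputs = tokenizer(infill_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0]))
```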