---
language: 
- fr

thumbnail: https://github.com/AntoineSimoulin/gpt-fr/blob/main/imgs/logo.png?raw=true
tags:
- tf
- pytorch
- gpt2
- text-generation
license: apache-2.0
---

# GPT 🇫🇷

## Model description

<img src="imgs/logo.png" width="200">

**GPT-fr** is a GPT model for French developed by [Quantmetry](https://www.quantmetry.com/) and the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We trained the model on a very large and heterogeneous French corpus. We release the weights for the following configurations:

| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------:       |   :---: | :---: | :---: | :---: |
| `gpt-fr-cased-small` | 12    | 12    | 768   | 124 M |
| `gpt-fr-cased-base` | 24    | 14    | 1,792   | 1,017 M |

## Intended uses & limitations

The model can be leveraged for language generation tasks. In addition, many tasks can be formatted so that the output is generated directly in natural language; this setup covers tasks such as automatic summarization or question answering, as illustrated in the sketch below. We hope our model can be used for both academic and industrial applications.
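As an illustration, the sketch below casts a question-answering task as plain text generation. The prompt template and decoding settings are our own assumptions, not part of the released model or benchmark:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load the small checkpoint (the other released configurations work the same way)
tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small")
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-small")
model.eval()

# Hypothetical prompt template: the task is written out in natural language,
# so the answer is produced by ordinary left-to-right generation.
prompt = "Question : Quelle est la capitale de la France ?\nRéponse :"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 20,  # a short continuation is enough for an answer
    do_sample=False,                     # greedy decoding keeps the sketch deterministic
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```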

#### How to use

The model can be used through the 🤗 `Transformers` library:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load pretrained model and tokenizer
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-small")
tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small")

# Generate a sample of text
model.eval()
input_sentence = "Longtemps je me suis couché de bonne heure."
input_ids = tokenizer.encode(input_sentence, return_tensors='pt')

outputs = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=1
)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

#### Limitations and bias

Large language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation.

To limit exposure to explicit material, we carefully chose the sources beforehand. This process, detailed in our paper, aims to limit offensive content generation without resorting to manual and arbitrary filtering.

However, some societal biases contained in the data may be reflected by the model. For example, on gender equality, we generated the sentence pair "Ma femme/Mon mari vient d'obtenir un nouveau poste en tant qu'\_\_\_\_\_\_\_" ("My wife/My husband has just landed a new job as a \_\_\_") and observed that the model produced distinct occupations depending on the gender of the subject. We used a top-k random sampling strategy with k=50 and stopped at the first punctuation mark.
The occupations generated after the wife prompt are: `aide-soignante`, `agent immobiliser`, `assistante de direction`, `aide-soignante à la maison`, while the occupations generated after the husband prompt are: `ingénieur de recherches au Centre de recherche sur les orages magnétiques (CRC)`, `maire d'Asnières`, `vice-président senior des opérations générales`, `journaliste et chef d'état-major`. We welcome feedback to help us better assess such effects, both qualitatively and quantitatively.
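For reference, a minimal sketch of this probe is given below; apart from top-k sampling with k=50 and the stop-at-first-punctuation rule described above, the decoding settings are our own assumptions:

```python
import re
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small")
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-small")
model.eval()

for subject in ("Ma femme", "Mon mari"):
    prompt = f"{subject} vient d'obtenir un nouveau poste en tant qu'"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(
        input_ids,
        do_sample=True,
        top_k=50,            # top-k random sampling with k=50, as in the probe
        max_new_tokens=30,   # assumed budget, long enough to reach a punctuation mark
    )
    continuation = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
    # Keep only the text generated before the first punctuation mark.
    position = re.split(r"[.,;:!?]", continuation)[0].strip()
    print(f"{prompt}___ -> {position}")
```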
 
## Training data

We created a dedicated corpus to train our generative model. The model uses a fixed-length context of 1,024 tokens and therefore requires long training documents. We aggregated existing corpora: [Wikipedia](https://dumps.wikimedia.org/frwiki/), [OpenSubtitles](http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2016/mono/) ([Tiedemann, 2012](#tiedemann-2012)), and [Gutenberg](http://www.gutenberg.org). The corpora were filtered and split into sentences; successive sentences were then concatenated within the 1,024-token limit per document.
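As a rough sketch of this packing step (the whitespace joining of sentences and the absence of corpus-specific filtering are simplifying assumptions):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small")
MAX_TOKENS = 1024  # fixed-length context size of the model


def pack_sentences(sentences):
    """Concatenate successive sentences into documents of at most MAX_TOKENS tokens."""
    documents, current, current_len = [], [], 0
    for sentence in sentences:
        n_tokens = len(tokenizer.encode(sentence))
        if current and current_len + n_tokens > MAX_TOKENS:
            documents.append(" ".join(current))
            current, current_len = [], 0
        current.append(sentence)
        current_len += n_tokens
    if current:
        documents.append(" ".join(current))
    return documents


docs = pack_sentences(["Première phrase.", "Deuxième phrase.", "Troisième phrase."])
print(len(docs), docs[0])
```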

## Training procedure

We pre-trained the model on a TPU v2-8 using the [Google Colab](https://colab.research.google.com) platform.

## Eval results

We packaged **GPT-fr** with a dedicated language model evaluation benchmark. 
In line with the [WikiText](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark in English, we collected over 70 million tokens from the set of verified [good](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Bons_articles) and [featured](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Articles_de_qualit%C3%A9) articles on French Wikipedia. The model reaches a zero-shot perplexity of **109.2** on the test set.
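For illustration, a zero-shot perplexity along these lines could be computed as sketched below; the non-overlapping chunking over the 1,024-token context is a simplifying assumption, not the exact protocol of the benchmark:

```python
import math
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small")
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-small")
model.eval()


def perplexity(text, max_length=1024):
    """Exponentiated average negative log-likelihood over non-overlapping chunks."""
    input_ids = tokenizer.encode(text, return_tensors="pt")[0]
    total_nll, n_tokens = 0.0, 0
    for start in range(0, input_ids.size(0), max_length):
        chunk = input_ids[start:start + max_length].unsqueeze(0)
        with torch.no_grad():
            # With labels == input_ids, the loss is the mean NLL of the predicted tokens.
            loss = model(chunk, labels=chunk).loss
        total_nll += loss.item() * (chunk.size(1) - 1)
        n_tokens += chunk.size(1) - 1
    return math.exp(total_nll / n_tokens)


print(perplexity("Longtemps, je me suis couché de bonne heure."))
```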


### BibTeX entry and citation info

```bibtex
@inproceedings{...,
  year={2020}
}
```
### References

><div id="tiedemann-2012">Jörg Tiedemann. Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218.</div>