---
language:
  - fr
thumbnail: https://github.com/AntoineSimoulin/gpt-fr/blob/main/imgs/logo.png?raw=true
tags:
  - tf
  - pytorch
  - gpt2
  - text-generation
license: apache-2.0
---

# GPT 🇫🇷

## Model description

**GPT-fr** is a GPT model for French developed by [Quantmetry](https://www.quantmetry.com/) and the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We trained the model on a very large and heterogeneous French corpus. We release the weights for the following configurations:

| Model name | Number of layers | Attention heads | Embedding dimension | Total parameters |
| :--- | :---: | :---: | :---: | :---: |
| `gpt-fr-cased-small` | 12 | 12 | 768 | 124 M |
| `gpt-fr-cased-base` | 24 | 14 | 1,792 | 1,017 B |

## Intended uses & limitations

The model can be leveraged for language generation tasks. In addition, many tasks can be formatted so that the output is generated directly in natural language; this configuration can be used for tasks such as automatic summarization or question answering. We hope our model will be used for both academic and industrial applications.
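
As a small illustration of casting a task as plain generation, a summarization input could be built as a prompt for the model to continue. The `Résumé :` suffix is a hypothetical format chosen for this sketch, not a prescribed one:

```python
def make_summary_prompt(document: str) -> str:
    """Cast summarization as language-model continuation by appending
    a cue the model completes (hypothetical prompt format)."""
    return document.rstrip() + "\nRésumé :"

# The model would then be asked to generate the text following "Résumé :".
prompt = make_summary_prompt("Longtemps je me suis couché de bonne heure. ...")
print(prompt)
```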

### How to use

The model can be used through the 🤗 Transformers library:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load pretrained model and tokenizer
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-base")
tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-base")

# Generate a sample of text
model.eval()
input_sentence = "Longtemps je me suis couché de bonne heure."
input_ids = tokenizer.encode(input_sentence, return_tensors='pt')

beam_outputs = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=1
)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(beam_outputs[0], skip_special_tokens=True))
```

### Limitations and bias

Large language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation.

To limit exposure to overly explicit material, we carefully chose the sources beforehand. This process, detailed in our paper, aims to limit offensive content generation from the model without performing manual and arbitrary filtering.

However, some societal biases contained in the data may be reflected by the model. For example, regarding gender equality, we generated completions of the template "Ma femme/Mon mari vient d'obtenir un nouveau poste en tant qu'_______" ("My wife/husband just got a new job as a _______") and observed that the model produced distinct occupations depending on the subject's gender. We used a top-k random sampling strategy with k=50 and stopped at the first punctuation mark. The occupations generated for the wife were: *aide-soignante, agent immobilier, assistante de direction, aide-soignante à la maison*. The occupations for the husband were: *ingénieur de recherches au Centre de recherche sur les orages magnétiques (CRC), maire d'Asnières, vice-président senior des opérations générales, journaliste et chef d'état-major*. We would appreciate your feedback to better assess such effects qualitatively and quantitatively.
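
The stopping rule used in this probe (truncating each sampled continuation at the first punctuation mark) can be sketched as follows. `first_clause` is a hypothetical helper written for illustration, not part of the released code, and the punctuation set is an assumption:

```python
import re

def first_clause(text: str) -> str:
    """Truncate a generated continuation at the first punctuation mark,
    keeping only the occupation that completes the template."""
    match = re.search(r"[.,;:!?]", text)
    return text[:match.start()] if match else text

# Hypothetical raw continuation sampled from the model:
raw = "aide-soignante, et elle est très contente."
print(first_clause(raw))  # aide-soignante
```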

## Training data

We created a dedicated corpus to train our generative model. Indeed, the model uses a fixed-length context of 1,024 tokens and requires long documents for training. We aggregated existing corpora: Wikipedia, OpenSubtitles (Tiedemann, 2012), Gutenberg and Common Crawl (Li et al., 2019). Corpora were filtered and split into sentences. Successive sentences were then concatenated within the limit of 1,024 tokens per document.
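
The concatenation step can be sketched as a greedy packing loop. This is a minimal sketch of the idea described above, using a toy whitespace tokenizer; the real pipeline's tokenizer and document-boundary handling may differ:

```python
def pack_sentences(sentences, tokenize, max_tokens=1024):
    """Greedily concatenate successive sentences into documents of at
    most `max_tokens` tokens (a single over-long sentence still forms
    its own document)."""
    docs, current, length = [], [], 0
    for sentence in sentences:
        n = len(tokenize(sentence))
        if current and length + n > max_tokens:
            docs.append(" ".join(current))
            current, length = [], 0
        current.append(sentence)
        length += n
    if current:
        docs.append(" ".join(current))
    return docs

# Toy whitespace tokenizer, for illustration only:
docs = pack_sentences(["a b c", "d e", "f g h i"], str.split, max_tokens=5)
print(docs)  # ['a b c d e', 'f g h i']
```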

## Training procedure

We pre-trained the model on the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. Training took a total of 140 hours of computation on Tesla V100 hardware (TDP of 300 W), distributed across 4 compute nodes of 8 GPUs each. We used data parallelism to split each micro-batch across the computing units. We estimated the total emissions at 580.61 kgCO2eq, using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
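
As a back-of-the-envelope sketch (not the calculator's exact methodology), the figures above bound the GPU energy draw as TDP × GPU count × hours:

```python
# Rough energy estimate from the figures reported above.
tdp_kw = 0.300   # Tesla V100 TDP of 300 W
gpus = 4 * 8     # 4 compute nodes of 8 GPUs
hours = 140      # total training time

# Upper bound: assumes every GPU runs at full TDP for the whole run.
energy_kwh = tdp_kw * gpus * hours
print(energy_kwh)  # 1344.0

# The 580.61 kgCO2eq figure comes from the ML Impact calculator, which
# applies a grid carbon-intensity factor to an estimate of this kind.
```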

## Eval results

We packaged GPT-fr with a dedicated language model evaluation benchmark for French. In line with the WikiText benchmark in English, we collected over 70 million tokens from the set of verified good and featured articles on Wikipedia. The model reaches a zero-shot perplexity of 12.9 on the test set.
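
Zero-shot perplexity is the exponential of the average per-token negative log-likelihood under the model. A minimal sketch with toy numbers (the real evaluation scores every token of the test set with the model):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# Toy per-token log-probabilities, for illustration only:
print(perplexity([-2.0, -3.0, -2.5]))  # exp(2.5) ≈ 12.18
```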

## BibTeX entry and citation info

```bibtex
@inproceedings{...,
  year={2020}
}
```

## References

Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218
Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, Hassan Sajjad: Findings of the First Shared Task on Machine Translation Robustness. WMT (2) 2019: 91-102
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro: Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. CoRR abs/1909.08053 (2019)
Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, Thomas Dandres: Quantifying the Carbon Emissions of Machine Learning. CoRR abs/1910.09700 (2019)