wissamantoun committed
Commit
915c8cd
1 Parent(s): 89a8cc5

Update README.md

Files changed (1):
  1. README.md (+1 −1)
README.md CHANGED
@@ -16,7 +16,7 @@ Code: [GITHUB](https://github.com/lightonai/lairgpt)
 
 PAGnol is a collection of large French language models with 1.5 billion parameters, geared towards free-form text generation. PAGnol is based on the [GPT](https://arxiv.org/abs/2005.14165) architecture, and is the first language model trained by [LightOn](https://lighton.ai/), in cooperation with the [ALMAnaCH team of Inria](http://almanach.inria.fr/index-en.html).
 
-These models were trained in early 2021, following the then-current [scaling laws](https://arxiv.org/abs/2001.08361) and using the exact same training data as the [CamemBERT](https://camembert-model.fr/) model, trained on [CCNet](https://github.com/facebookresearch/cc_net). We make them available for reproducibility and transparency purposes.
+These models were trained in early 2021, following the then-current [scaling laws](https://arxiv.org/abs/2001.08361) and using the exact same training data as the [CamemBERT](https://camembert-model.fr/) model, trained on [CCNet](https://github.com/facebookresearch/cc_net). We make them available for reproducibility purposes.
 They do not constitute the current state of the art, nor are they aiming at it.
 
 PAGnol was built by [Julien Launay](https://lolo.science/), E.L. Tommasone, [Baptiste Pannier](https://www.linkedin.com/in/baptiste-pannier-b30758154/), [François Boniface](https://www.linkedin.com/in/fran%c3%a7ois-boniface-26313610b/), [Amélie Chatelain](https://www.instagram.com/amelietabatta/), [Iacopo Poli](https://twitter.com/iacopo_poli), and [Djamé Seddah](http://pauillac.inria.fr/~seddah/). It is named after Marcel Pagnol (with PAG standing for pré-apprentissage génératif), and was trained on the IDRIS Jean Zay supercomputer thanks to a GENCI allocation.