Pclanglais committed
Commit 06fee8d (1 parent: c293af0)

Update README.md

Files changed (1): README.md (+12 −14)

</div>

**Pleias-Pico** is a 353-million-parameter specialized language model designed by PleIAs for Retrieval-Augmented Generation.

Similarly to its base model, Pleias-350m, Pleias-Pico aims to be a fully open model (weights, code, data), trained only on content under a permissive license and fully compliant with the European AI Act.

## Description
Pleias-Pico is a continued pretrain of Pleias-350m on a new dataset of 45,088,768,000 tokens modeling common retrieval tasks. All the content of the dataset ultimately comes from [Common Corpus](https://huggingface.co/datasets/PleIAs/common_corpus).

Pleias-Pico includes the main features of the original base model:
* Trained only on open data under a permissive license and in compliance with the European AI Act. By design, all Pleias models are unable to output copyrighted content.
* Extensive multilingual support for the main European languages: English, French, German, Spanish, Italian, Dutch, Latin, Portuguese and Polish.
* Extremely low levels of toxicity and problematic content.

Pleias-Pico supports retrieval-augmented generation with enhanced verifiability, source analysis and grounding on submitted sources. This includes:
* A standardized structure with special tokens to mark queries, sources and references.
* Anticipation of various query forms in multiple languages, from fully drafted questions to unstructured lists of search keywords.
* Source analysis/criticism, which also acts as an integrated reranking step.
* Generation of grounded answers with references and excerpts linked to the original sources.

Initial tests have shown that the RAG design significantly improves the factuality and verifiability of the model. Even when the grounding does not work perfectly, the information remains much closer to the original sources.

As a result, Pleias-Pico has already been tested and integrated into multiple applied RAG projects, including Pleias's flagship application Scholasticai.

## Training
Pleias-Pico was trained at Jean-Zay on 16 H100s with Nanotron, the pretraining library from Hugging Face. We provide the complete settings as a YAML file as part of our release.

Pleias-Pico derives from the last checkpoint of Pleias-350m (step 518,000). The training schedule reused the last learning rate value (6e-5) without decay for 90,000 steps.
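
For reference, here is a minimal sketch of what the learning-rate section of that YAML could look like. It assumes a Nanotron-style `optimizer`/`tokens` layout; only the 6e-5 value and the 90,000 steps come from this card, and every field name and other value should be checked against the released file:

```yaml
# Illustrative sketch only, not the released configuration.
# Field names assume a Nanotron-style config; 6e-5 and 90,000 steps
# are taken from this card, everything else is a placeholder.
optimizer:
  learning_rate_scheduler:
    learning_rate: 6.0e-5     # final learning rate of the base model, reused as-is
    lr_decay_style: constant  # "without decay"
    lr_warmup_steps: 0        # assumed: no fresh warmup when resuming from step 518,000
tokens:
  train_steps: 90000          # one epoch over the RAG dataset
```

At roughly 45.09 billion tokens over 90,000 steps, this schedule implies a budget of about 0.5 million tokens per optimization step.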

Training covers one epoch of the entire RAG dataset we have been designing out of Common Corpus.

Further experiments were made with different learning rate values: none of these tests provided better convergence than the one obtained with the final learning rate from the base model.

## Inference
Pleias-Pico relies on special tokens to encode the core RAG functionalities.

A typical example, with excerpts drawn from the Wikipedia article about Wikipedia:
```bash
…
<|source_analysis_start|>
```
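
The following Python sketch shows one way such a prompt can be assembled and run with `transformers`. Only `<|source_analysis_start|>` is attested above; the other token names (`<|query_start|>`, `<|source_start|>`, ...), the repo id and the sample source are assumptions to verify against the released tokenizer:

```python
# Hypothetical sketch of the special-token prompt structure described above.
# Token names other than <|source_analysis_start|>, the repo id and the
# sample data are assumptions, not the documented format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PleIAs/Pleias-Pico"  # assumed repo id; adjust to the actual release
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

query = "What is Wikipedia?"
sources = [
    # (16-character hash, excerpt) pairs, per the "Acceptable use" recommendation
    ("ecc5e7a3b02e9e95", "Wikipedia is a free online encyclopedia written and "
                         "maintained by a community of volunteers."),
]

# Queries, sources and their identifiers are wrapped in dedicated special tokens;
# the prompt ends with <|source_analysis_start|> so the model begins by analyzing
# the sources before drafting a grounded, referenced answer.
prompt = f"<|query_start|>{query}<|query_end|>"
for source_id, text in sources:
    prompt += (f"<|source_start|><|source_id_start|>{source_id}<|source_id_end|>"
               f"{text}<|source_end|>")
prompt += "<|source_analysis_start|>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```

In a full pipeline, generation would typically continue through the source analysis into the grounded answer, which is then parsed for references.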

As a specialized language model, Pleias-Pico will be unable to work properly with prompts that detract from that design.

## Acceptable use
Pleias-Pico includes much wider support for verifiability and grounding than most generalist models.

The model is not a substitute for an integrated RAG application. Retrieval errors, as well as challenging texts and questions, can still create a range of issues. We especially encourage end users to take advantage of the citations and references as indicators of accuracy.

For best results we recommend the following setting:
* Standardized hashes of 16 characters. While the model has been trained on many other patterns (including full bibliographic entries), this has proven the most convenient for systematic citation parsing (one possible implementation is sketched below).
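
A sketch of one possible implementation of this pattern. The truncated-SHA-256 scheme and the regex are our assumptions; only the 16-character length is taken from the recommendation above:

```python
# Hypothetical sketch: assigning 16-character source hashes before inference
# and extracting them again from a generated answer for citation checking.
import hashlib
import re

def source_hash(text: str) -> str:
    # Any stable 16-character identifier works; a truncated SHA-256 is one option.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

sources = {source_hash(t): t for t in ["Wikipedia is a free online encyclopedia..."]}

answer = "As stated in %s, the encyclopedia is written by volunteers." % next(iter(sources))
cited = set(re.findall(r"\b[0-9a-f]{16}\b", answer))  # assumed: hashes are lowercase hex
print(cited <= set(sources))  # True: every citation resolves to a submitted source
```

Resolving each extracted hash back to its source text is what lets an application display the supporting excerpt next to the claim it grounds.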

## Future updates
Pleias-Pico will be continuously improved through iterative retraining/adaptation.

The current roadmap includes the following features:
* Longer training on the same dataset, for more than one epoch.
* Context length expansion.
* Better handling of multilingual sources. In its current form, Pleias-Pico will generally switch language if a query is made to sources in a different language.
* New sampling methods inspired by Entropix, for better combined support of text creativity and accuracy.
* Interactive/conversational RAG.