willyninja30 committed
Commit 5067875
1 parent: c1d170d

Update README.md

Files changed (1):
  1. README.md (+8 -9)

README.md CHANGED
@@ -27,8 +27,7 @@ Llama 2 is a collection of pretrained and fine-tuned generative text models ranging
 # **FINETUNING PROCESS**

 We trained the model on a high-quality dataset of more than 50,000 rows of French-language text. Training took two days on Amazon SageMaker, powered by NVIDIA GPUs.
-....
-....
+
 # **Timing of training**
 2 days using an NVIDIA A10G on an Amazon Web Services cloud instance. We are grateful to the NVIDIA Inception program.

@@ -37,19 +36,19 @@ We are also applying RoPE scaling as an experimental approach used by several other
 *Note: Use of this model is governed by the Meta license because it is based on LLAMA 2. To download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our license before requesting access here.*


-**Model Developers** FARADAY
+**Model Developers**: FARADAY

-**Variations** ARIA comes in a range of parameter sizes — 7B, 40B (based on Falcon), and 70B finetuned on French language datasets.
+**Variations**: ARIA comes in 7B, 40B (based on Falcon), and 70B parameter sizes, finetuned on French-language datasets.

-**Input** Models input text only.
+**Input**: Models take text input only.

-**Output** Models generate text only.
+**Output**: Models generate text only.

-**Model Architecture** ARIA is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
+**Model Architecture**: ARIA is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

-**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
+**License**: A custom commercial license is available at [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

-**Research Paper for LLAMA 2** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](arxiv.org/abs/2307.09288)
+**Research Paper for LLAMA 2**: ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)

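The finetuning section above is described only at a high level. As a rough sketch of what such a supervised fine-tune can look like with the Hugging Face stack, assuming a local JSONL file with a `text` column and illustrative hyperparameters (the dataset path, batch sizes, and learning rate are assumptions, not FARADAY's recipe):

```python
# Hedged SFT sketch -- NOT the exact ARIA recipe. Assumes a ~50k-row French
# dataset saved as JSONL with a "text" column; all hyperparameters are
# illustrative. Uses the 2023-era trl SFTTrainer API.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-hf"        # gated: accept Meta's license first
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token      # Llama 2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical French dataset; one training example per row in a "text" column.
dataset = load_dataset("json", data_files="french_dataset.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",                 # column holding the raw text
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="aria-7b-sft",
        num_train_epochs=2,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=50,
    ),
)
trainer.train()
```

Note that a full-precision 7B fine-tune does not fit on a single 24 GB A10G; in practice a run like the one described would add a parameter-efficient method such as LoRA or 4-bit loading, which the card does not detail.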
 
 
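The second hunk's context line mentions applying RoPE scaling as an experimental approach. In the Hugging Face `transformers` library (v4.31+), linear RoPE scaling can be enabled at load time; the factor below is an assumption for illustration, not ARIA's documented setting:

```python
# Linear RoPE scaling sketch: rotary position indices are divided by the
# factor, stretching Llama 2's native 4k context (factor 2.0 -> roughly 8k).
# The factor is an illustrative assumption, not ARIA's documented value.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",                      # gated checkpoint
    rope_scaling={"type": "linear", "factor": 2.0},  # config override
)
```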
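Finally, since the card specifies text-only input and output, a minimal generation example; the repository id is a placeholder for the actual ARIA checkpoint:

```python
# Text-in/text-out usage sketch. "Faraday/ARIA-7B" is a placeholder id;
# substitute the real ARIA repository once access has been granted.
from transformers import pipeline

generator = pipeline("text-generation", model="Faraday/ARIA-7B")
prompt = "Quelle est la capitale de la France ?"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```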