willyninja30 committed on
Commit 3ce58cf
1 Parent(s): 76864d1

Update README.md

Files changed (1): README.md +2 −1
README.md CHANGED
@@ -23,7 +23,8 @@ Llama 2 is a collection of pretrained and fine-tuned generative text models rang
 We trained the model on a high-quality dataset with more than 50,000 rows of French-language data. The training took 2 days on Amazon SageMaker powered by NVIDIA GPUs.
 
 # Timing of training
-2 days using an NVIDIA A10G on an Amazon Web Services cloud instance. We are grateful to the NVIDIA Inception program.
+
+1 day using an NVIDIA A100 on a cloud service. We are grateful to the NVIDIA Inception program.
 
 We are also applying rope scaling, an experimental approach used by several other open-source teams, to increase the context length of ARIA from 4,096 to over 6,000 tokens. This will allow the model to handle large files for data extraction. This is not active by default; you should add a line of code to the parameters to activate rope scaling.
 
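
The "line of code" the README refers to for activating rope scaling is not shown in the diff. As a minimal sketch, assuming the model is loaded with Hugging Face transformers: the model ID below is a placeholder, and the linear scaling factor of 2.0 is an assumption (any factor of roughly 1.5 or more would cover the stated "over 6,000 tokens").

```python
# Minimal sketch of enabling rope scaling at load time with Hugging Face
# transformers. "your-org/ARIA" is a hypothetical model ID, and the linear
# factor of 2.0 (4,096 -> 8,192 positions) is an assumption, not the
# repository's documented setting.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/ARIA"  # placeholder; substitute the actual repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    # Rope scaling is off by default; passing this dict switches it on.
    rope_scaling={"type": "linear", "factor": 2.0},
)
```

Linear rope scaling here is the same context-extension trick used by the other open-source teams the README mentions; factors much above 2 typically degrade quality without further fine-tuning.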