willyninja30 committed
Commit 647511f (parent: 6df6835): Update README.md

README.md CHANGED
@@ -17,7 +17,9 @@ Llama 2 is a collection of pretrained and fine-tuned generative text models rang
 # **FINETUNING PROCESS - UPDATES**
 
 We trained the model on a high-quality dataset with more than 50,000 rows of French-language text.
-
+....
+....
+# **Timing of training**
 Training took 2 days using an NVIDIA A10G on an Amazon Web Services cloud instance. We are grateful to the NVIDIA Inception program.
 
 We are also applying RoPE scaling, an experimental approach used by several other open-source teams, to increase the context length of Llama 2 from 4,096 to over 6,000 tokens. This will allow the model to handle large files for data extraction.
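For readers unfamiliar with the RoPE-scaling step mentioned above, here is a minimal sketch of linear RoPE scaling with Hugging Face transformers. It is an illustration under stated assumptions: the base model ID and the 1.5 scaling factor are hypothetical choices for this example, not values taken from this commit.

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # hypothetical base model, for illustration only

config = AutoConfig.from_pretrained(model_id)
# Linear RoPE scaling divides position indices by `factor` before the rotary
# embeddings are computed, so a model pretrained on 4,096-token contexts can
# attend over roughly 4,096 * 1.5 ≈ 6,144 positions, in line with the
# "over 6,000 tokens" figure above.
config.rope_scaling = {"type": "linear", "factor": 1.5}

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, config=config)
```

Linear scaling trades some short-context fidelity for a longer usable window, so teams applying it typically follow up with a short fine-tuning run on long sequences to let the model adapt to the compressed positions.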