This Llama model, fine-tuned from `unsloth/llama-3-8b-Instruct-bnb-4bit`, was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
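
For a quick start, here is a minimal inference sketch using Unsloth's `FastLanguageModel`. The repository id, `max_seq_length`, and the Vietnamese prompt are assumptions, not values stated in this card:

```python
from unsloth import FastLanguageModel

# Load the fine-tuned weights on top of the 4-bit base model.
# "<this-repo-id>" is a placeholder for this model's Hub id.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="<this-repo-id>",
    max_seq_length=2048,  # assumption: match the sequence length used in training
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation path

inputs = tokenizer("Xin chào!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a Llama-3 Instruct fine-tune, building prompts with `tokenizer.apply_chat_template` is the more faithful approach; the plain string above just keeps the sketch short.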
### Evaluation
- **ViMMRC test set:** 0.8330 accuracy
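
ViMMRC is a multiple-choice reading-comprehension benchmark, so accuracy is the fraction of questions whose predicted option matches the gold label. A minimal scoring sketch (the `predictions` and `labels` values are placeholders; the card does not describe the prompting or answer-extraction step):

```python
# Placeholder data: accuracy = correct predictions / total questions.
predictions = ["A", "C", "B", "D"]  # options chosen by the model (hypothetical)
labels = ["A", "C", "D", "D"]       # gold options (hypothetical)

accuracy = sum(p == g for p, g in zip(predictions, labels)) / len(labels)
print(f"accuracy = {accuracy:.4f}")  # this card reports 0.8330 on the real test set
```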
### Training hyperparameters
The following hyperparameters were used during training (a TRL configuration sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- eval_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- num_epochs: 3
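
These values map directly onto `transformers.TrainingArguments` as used by TRL's `SFTTrainer`. A sketch of how they might have been passed, assuming the usual Unsloth + TRL setup (`model`, `tokenizer`, `train_dataset`, the text field, and `max_seq_length` are assumptions; note that 16 per-device samples x 4 accumulation steps gives the total train batch size of 64):

```python
from transformers import TrainingArguments
from trl import SFTTrainer

args = TrainingArguments(
    output_dir="outputs",            # assumption
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 16 * 4 = 64 total train batch size
    eval_accumulation_steps=4,
    seed=3407,
    optim="adamw_torch",             # Adam with betas=(0.9, 0.999), eps=1e-8 (the defaults)
    lr_scheduler_type="cosine",
    warmup_steps=5,
    num_train_epochs=3,
)

# `model`, `tokenizer`, and `train_dataset` are assumed to come from the
# (unspecified) Unsloth setup; "text" is a hypothetical dataset field.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=2048,             # assumption
    args=args,
)
trainer.train()
```

The optimizer line in the card matches the AdamW defaults in Transformers, which is why no explicit beta or epsilon arguments are needed above.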
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- PyTorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1