---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
datasets:
- yhavinga/mc4_nl_cleaned
model-index:
- name: tiny-3e-4lr+1152tbs+1ep+0.1wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-3e-4lr+1152tbs+1ep+0.1wd
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the `micro` configuration of the [yhavinga/mc4_nl_cleaned](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0928
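As a quick start, here is a minimal inference sketch. It makes a few assumptions: the repository id is a placeholder (replace it with this model's Hub id), `trust_remote_code=True` reflects that Falcon relied on custom modeling code in the transformers 4.31 era, and the dtype and sampling settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with the Hub id of this repository.
model_id = "path/to/this-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: half precision to fit a 7B model on one GPU
    device_map="auto",
    trust_remote_code=True,      # Falcon used custom modeling code on transformers 4.31
)

# Dutch prompt, matching the (Dutch) training data.
prompt = "Het mooie aan taalmodellen is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```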
## Model description
This is [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) fine-tuned for Dutch causal language modeling on cleaned Dutch mC4 data. The run name encodes the key settings: a 3e-4 learning rate, a total train batch size of 1152, 1 epoch, and 0.1 weight decay.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training and evaluation used the `micro` configuration of [yhavinga/mc4_nl_cleaned](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), a cleaned Dutch portion of the multilingual C4 (mC4) corpus.
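For reference, a sketch of loading this data with the `datasets` library, assuming the `micro` configuration name mentioned above:

```python
from datasets import load_dataset

# Load the "micro" configuration of the cleaned Dutch mC4 dataset.
dataset = load_dataset("yhavinga/mc4_nl_cleaned", "micro")
print(dataset)
```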
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 12
- eval_batch_size: 24
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 6
- total_train_batch_size: 1152
- total_eval_batch_size: 384
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
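As a sketch (not the original training script), these settings map onto `TrainingArguments` roughly as follows. Note that `output_dir` is illustrative, `weight_decay=0.1` is inferred from the `0.1wd` run name, and the 16-GPU distribution comes from the launcher (e.g. `torchrun`), not from these arguments:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tiny-3e-4lr+1152tbs+1ep+0.1wd",  # illustrative
    learning_rate=3e-4,
    per_device_train_batch_size=12,  # x 16 GPUs x 6 accumulation steps = 1152 total
    per_device_eval_batch_size=24,   # x 16 GPUs = 384 total
    gradient_accumulation_steps=6,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    weight_decay=0.1,                # assumption: inferred from the "0.1wd" run name
    adam_beta1=0.9,                  # Adam settings as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```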
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6094 | 0.1 | 170 | 2.5980 |
| 2.4503 | 0.19 | 340 | 2.4405 |
| 2.3243 | 0.29 | 510 | 2.3428 |
| 2.2822 | 0.39 | 680 | 2.2752 |
| 2.238 | 0.49 | 850 | 2.2248 |
| 2.2015 | 0.58 | 1020 | 2.1865 |
| 2.1678 | 0.68 | 1190 | 2.1560 |
| 2.1301 | 0.78 | 1360 | 2.1312 |
| 2.1161 | 0.88 | 1530 | 2.1112 |
| 2.0997 | 0.97 | 1700 | 2.0928 |
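For intuition, cross-entropy loss converts to perplexity via `exp(loss)`, so the final validation loss of 2.0928 corresponds to a perplexity of roughly 8.1:

```python
import math

final_validation_loss = 2.0928  # from the last row of the table above
print(f"perplexity = {math.exp(final_validation_loss):.2f}")  # ~8.11
```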
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3