---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model-index:
- name: tinyllama-1.1B_alpaca_2k_lora
results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
# Upload the final model to Huggingface
hub_model_id: kareemamrr/tinyllama-1.1B_alpaca_2k_lora
# Store the training logs in weights and biases
wandb_entity: kamr54
wandb_project: tinyllama-1.1B_alpaca_2k_peft
wandb_name: lora-run
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: mhenrichsen/alpaca_2k_test
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/lora-out
sequence_len: 4096
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# tinyllama-1.1B_alpaca_2k_lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the [Alpaca 2k test set](https://huggingface.co/datasets/mhenrichsen/alpaca_2k_test).
It achieves the following results on the evaluation set:
- Loss: 1.2127
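This repository contains only the LoRA adapter weights, so inference requires loading them on top of the base model. Below is a minimal sketch (not part of the generated card); it assumes this repo id and the standard Alpaca prompt template that axolotl applies for `type: alpaca` datasets, so adjust as needed.

```python
# Minimal inference sketch; assumes a GPU and the standard Alpaca prompt
# template (an assumption, not taken from the training code).
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "kareemamrr/tinyllama-1.1B_alpaca_2k_lora"
base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"

# AutoPeftModelForCausalLM reads the base model name from the adapter config,
# loads it, and attaches the LoRA weights from this repository.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

For standalone deployment, the adapter can also be folded into the base weights with `model.merge_and_unload()` before saving.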
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit` in the config) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
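The adapter hyperparameters themselves come from the LoRA section of the config above (r=32, alpha=16, dropout=0.05, all linear layers targeted). As a rough sketch, the equivalent PEFT `LoraConfig` would look like the following; the explicit `target_modules` list is an assumption of how `lora_target_linear: true` maps onto the Llama architecture:

```python
# Sketch of a PEFT LoraConfig mirroring the axolotl settings above.
# The target_modules list is an assumption for LlamaForCausalLM
# (axolotl's `lora_target_linear: true` targets all linear layers).
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```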
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4615 | 0.08 | 1 | 1.4899 |
| 1.3847 | 0.24 | 3 | 1.4865 |
| 1.3673 | 0.48 | 6 | 1.4376 |
| 1.2673 | 0.72 | 9 | 1.3401 |
| 1.2257 | 0.96 | 12 | 1.2967 |
| 1.2511 | 1.16 | 15 | 1.2835 |
| 1.2267 | 1.40 | 18 | 1.2501 |
| 1.1348 | 1.64 | 21 | 1.2330 |
| 1.2699 | 1.88 | 24 | 1.2276 |
| 1.1486 | 2.08 | 27 | 1.2258 |
| 1.1515 | 2.32 | 30 | 1.2224 |
| 1.1949 | 2.56 | 33 | 1.2175 |
| 1.1127 | 2.80 | 36 | 1.2158 |
| 1.1506 | 3.04 | 39 | 1.2126 |
| 1.1886 | 3.24 | 42 | 1.2110 |
| 1.1002 | 3.48 | 45 | 1.2106 |
| 1.1894 | 3.72 | 48 | 1.2127 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- PyTorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |