---
base_model: huggyllama/llama-13b
library_name: peft
license: other
tags:
- generated_from_trainer
model-index:
- name: llama-13b_oasst1_l0.0002_32-32
  results: []
---

# llama-13b_oasst1_l0.0002_32-32

This model is a fine-tuned version of [huggyllama/llama-13b](https://huggingface.co/huggyllama/llama-13b) on an unknown dataset (the model name suggests [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)).
It achieves the following results on the evaluation set:
- Loss: 1.3866

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 0

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4264        | 0.0018 | 1    | 1.6140          |
| 1.4292        | 0.3392 | 187  | 1.2391          |
| 1.0776        | 0.6783 | 374  | 1.2320          |
| 1.3037        | 1.0175 | 561  | 1.2323          |
| 1.0895        | 1.3566 | 748  | 1.2525          |
| 1.1146        | 1.6958 | 935  | 1.2393          |
| 0.7616        | 2.0349 | 1122 | 1.2815          |
| 0.9368        | 2.3741 | 1309 | 1.3351          |
| 0.7076        | 2.7132 | 1496 | 1.3530          |

### Framework versions

- PEFT 0.12.1.dev0
- Transformers 4.45.0.dev0
- PyTorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
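
For reference, here is a rough sketch of how the hyperparameters above map onto the `transformers` `TrainingArguments` API. This is not the authors' actual training script; the `output_dir` is a placeholder, and the optimizer shown is the Trainer default AdamW configured to match the listed betas and epsilon:

```python
# Hedged sketch mirroring the listed hyperparameters; not the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-13b_oasst1_l0.0002_32-32",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,  # total train batch size = 1 * 16 = 16
    seed=0,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,  # likely ignored by a plain constant scheduler
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```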
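
## How to use

This repository holds a PEFT adapter (likely LoRA, given the `32-32` in the model name), not full model weights, so it must be loaded on top of the base model. Below is a minimal inference sketch, assuming the adapter is published under a hypothetical repo id `your-username/llama-13b_oasst1_l0.0002_32-32` and that an oasst1-style `### Human: ... ### Assistant:` prompt format applies:

```python
# Minimal sketch: load the PEFT adapter on top of llama-13b for inference.
# The adapter repo id below is hypothetical; replace it with the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "huggyllama/llama-13b"
adapter_id = "your-username/llama-13b_oasst1_l0.0002_32-32"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Prompt format is an assumption based on common oasst1 fine-tunes.
prompt = "### Human: What is parameter-efficient fine-tuning?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```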