---
base_model: Qwen/Qwen2-7B-Instruct
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: workspace/axolotl/vinh/Qwen_Qwen2-7B-Instruct-lora-2024-07-01-14-29-26
results: []
---
[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

**See axolotl config**

axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2-7B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: /workspace/axolotl/vinh/PAL/input_output_qwen.json
type: input_output
dataset_prepared_path:
val_set_size: 0.05
eval_sample_packing: false
output_dir: /workspace/axolotl/vinh/Qwen_Qwen2-7B-Instruct-lora-2024-07-01-14-29-26
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
adapter: lora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 128
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 2e-4
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 10
eval_table_size:
eval_max_new_tokens: 512
saves_per_epoch: 2
save_total_limit: 20
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
```
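The LoRA settings above correspond roughly to the following PEFT `LoraConfig`. This is a sketch only: the `target_modules` list is an assumption about how `lora_target_linear: true` resolves for the Qwen2 architecture; axolotl derives the actual module list automatically at training time.

```python
from peft import LoraConfig

# Approximate PEFT equivalent of the axolotl LoRA settings above.
# target_modules is an assumption (Qwen2's linear projections); axolotl
# determines the real list when lora_target_linear is true.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```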
# workspace/axolotl/vinh/Qwen_Qwen2-7B-Instruct-lora-2024-07-01-14-29-26
This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on a custom `input_output` dataset (`/workspace/axolotl/vinh/PAL/input_output_qwen.json`; see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.0356
## Model description
This repository contains a LoRA adapter (r=64, alpha=128, dropout 0.05, applied to all linear layers) trained on top of Qwen2-7B-Instruct with [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) 0.4.1. The base model weights are not included; the adapter is meant to be loaded with PEFT on top of the original base model.
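A minimal sketch of loading the adapter and running inference with 🤗 Transformers and PEFT, assuming `<adapter-repo-id>` is a placeholder for this repository's id or a local path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2-7B-Instruct"
adapter_id = "<adapter-repo-id>"  # placeholder: this repo's id or a local path

# Load the base model, then attach the LoRA adapter on top of it
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Qwen2-Instruct uses a chat template; build the prompt with it
messages = [{"role": "user", "content": "Write a short Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```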
## Intended uses & limitations
More information needed
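For deployment scenarios that need a single standalone checkpoint rather than a base model plus adapter, the LoRA weights can be merged into the base weights with PEFT's `merge_and_unload()`. A minimal sketch, assuming the same `<adapter-repo-id>` placeholder as in the example above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2-7B-Instruct"
adapter_id = "<adapter-repo-id>"  # placeholder for this adapter

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()

# Save a standalone checkpoint that no longer needs the peft library at load time
merged.save_pretrained("qwen2-7b-instruct-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("qwen2-7b-instruct-merged")
```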
## Training and evaluation data
The model was trained on a custom `input_output` JSON dataset (`/workspace/axolotl/vinh/PAL/input_output_qwen.json`); 5% of the examples were held out as the evaluation split (`val_set_size: 0.05`).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: paged AdamW, 32-bit (`paged_adamw_32bit`) with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
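The total train batch size of 128 reported above follows from the per-device micro-batch size and gradient accumulation. A quick check, assuming a single device since the config does not record the GPU count:

```python
micro_batch_size = 1
gradient_accumulation_steps = 128
num_devices = 1  # assumption: device count is not recorded in the config

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 128, matching the value reported above
```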
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4503 | 0.0095 | 1 | 0.4264 |
| 0.0836 | 0.1043 | 11 | 0.0792 |
| 0.0532 | 0.2086 | 22 | 0.0566 |
| 0.0511 | 0.3129 | 33 | 0.0496 |
| 0.0511 | 0.4172 | 44 | 0.0457 |
| 0.0475 | 0.5214 | 55 | 0.0436 |
| 0.0435 | 0.6257 | 66 | 0.0420 |
| 0.0361 | 0.7300 | 77 | 0.0407 |
| 0.0406 | 0.8343 | 88 | 0.0391 |
| 0.0349 | 0.9386 | 99 | 0.0384 |
| 0.0304 | 1.0429 | 110 | 0.0373 |
| 0.0305 | 1.1472 | 121 | 0.0374 |
| 0.0251 | 1.2515 | 132 | 0.0365 |
| 0.0288 | 1.3558 | 143 | 0.0370 |
| 0.0251 | 1.4600 | 154 | 0.0366 |
| 0.0236 | 1.5643 | 165 | 0.0353 |
| 0.0266 | 1.6686 | 176 | 0.0353 |
| 0.0281 | 1.7729 | 187 | 0.0348 |
| 0.0246 | 1.8772 | 198 | 0.0340 |
| 0.0249 | 1.9815 | 209 | 0.0339 |
| 0.0169 | 2.0858 | 220 | 0.0349 |
| 0.0155 | 2.1901 | 231 | 0.0371 |
| 0.0178 | 2.2943 | 242 | 0.0369 |
| 0.0194 | 2.3986 | 253 | 0.0361 |
| 0.0139 | 2.5029 | 264 | 0.0357 |
| 0.0157 | 2.6072 | 275 | 0.0356 |
| 0.0197 | 2.7115 | 286 | 0.0357 |
| 0.0188 | 2.8158 | 297 | 0.0357 |
| 0.0163 | 2.9201 | 308 | 0.0356 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1