---
license: apache-2.0
datasets:
- NeuralNovel/Neural-Story-v1
library_name: peft
tags:
- generated_from_trainer
base_model: alnrg2arg/blockchainlabs_7B_merged_test2_4
model-index:
- name: qlora-out
results: []
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/CATNxzDDJL6xHR4tc4IMf.jpeg)
# NeuralNovel/Valor-7B-v0.1
Valor speaks louder than words.
This is a QLoRA fine-tune of blockchainlabs_7B_merged_test2_4 on the **Neural-Story-v1** dataset, intended to improve creativity and writing ability.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/uW7SQrWBXv-CURsEKJerW.png)
Currently ranked #3 in the 7B category.
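For reference, here is a minimal usage sketch that loads the base model and attaches this LoRA adapter with `peft`. The adapter ID is assumed to be this repository (`NeuralNovel/Valor-7B-v0.1`), and the prompt is illustrative:

```python
# Minimal sketch: load the base model, attach the QLoRA adapter, and generate.
# Assumes the adapter weights live in this repo (NeuralNovel/Valor-7B-v0.1).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "alnrg2arg/blockchainlabs_7B_merged_test2_4"
adapter_id = "NeuralNovel/Valor-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "Write the opening paragraph of a story about quiet courage."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```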
# Training Details
```yaml
base_model: alnrg2arg/blockchainlabs_7B_merged_test2_4
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: NeuralNovel/Neural-Story-v1
type: completion
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
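The `lora_*` settings above correspond roughly to a `peft` `LoraConfig` like the following (an illustrative sketch; axolotl builds the actual object internally from the YAML):

```python
from peft import LoraConfig

# Rough peft equivalent of the lora_* keys in the config above.
lora_config = LoraConfig(
    r=32,                # lora_r
    lora_alpha=16,       # lora_alpha
    lora_dropout=0.05,   # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[     # lora_target_modules
        "gate_proj", "down_proj", "up_proj",
        "q_proj", "v_proj", "k_proj", "o_proj",
    ],
)
```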
# qlora-out
This model is a fine-tuned version of [alnrg2arg/blockchainlabs_7B_merged_test2_4](https://huggingface.co/alnrg2arg/blockchainlabs_7B_merged_test2_4) on the NeuralNovel/Neural-Story-v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1411
Built with axolotl version `0.3.0`.
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
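In `transformers` terms, this corresponds to a `BitsAndBytesConfig` like the one below (a sketch reproducing the values listed above):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bf16 compute,
# matching the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```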
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
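The total train batch size follows from the micro-batch size and gradient accumulation (a single device is assumed here):

```python
# Effective batch size = micro_batch_size * gradient_accumulation_steps
micro_batch_size = 2
gradient_accumulation_steps = 4
total_train_batch_size = micro_batch_size * gradient_accumulation_steps  # 2 * 4 = 8
```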
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3251 | 0.06 | 1 | 2.8409 |
| 2.5318 | 0.25 | 4 | 2.7634 |
| 1.7316 | 0.51 | 8 | 2.3662 |
| 1.5196 | 0.76 | 12 | 2.1411 |
### Framework versions
- PEFT 0.7.0
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0 |