Undi95 committed
Commit b5cc753
1 Parent(s): dc595d4

Create README.md

Files changed (1): README.md (+150, -0)
---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/Llama-2-13b-hf
model-index:
- name: NobodyExistsOnTheInternet/ToxicQAtextFiltered
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.3.0`
```yaml
base_model: NousResearch/Llama-2-13b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true

load_in_8bit: true
load_in_4bit: false
strict: false

datasets:
  - path: dataset
    type: sharegpt
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./lora-out

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: toxicLlama-2-13B
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
eval_batch_size: 2

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"

```

</details><br>

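For orientation, the adapter settings in the config above correspond roughly to the PEFT `LoraConfig` sketched below. This is illustrative only: axolotl builds the config internally, and the expansion of `lora_target_linear: true` into explicit `target_modules` for the Llama-2 architecture is an assumption.

```python
# Rough PEFT equivalent of the LoRA settings from the axolotl config above.
# Assumption: `lora_target_linear: true` targets every linear projection
# module of Llama-2; axolotl derives this list automatically.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,              # lora_r
    lora_alpha=64,      # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```
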
# NobodyExistsOnTheInternet/ToxicQAtextFiltered

This model is a fine-tuned version of [NousResearch/Llama-2-13b-hf](https://huggingface.co/NousResearch/Llama-2-13b-hf) on the [NobodyExistsOnTheInternet/ToxicQAtextFiltered](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAtextFiltered) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7634

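The card metadata (`library_name: peft`, `adapter: lora`) indicates a LoRA adapter rather than merged weights, so inference requires loading the base model and then attaching the adapter. A minimal sketch follows; the adapter repo id is a placeholder, and the prompt format is assumed from the ShareGPT (Vicuna-style) dataset type.

```python
# Minimal inference sketch, assuming this repository hosts a PEFT LoRA adapter
# for NousResearch/Llama-2-13b-hf. Replace the placeholder adapter id below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-13b-hf"
adapter_id = "<this-repo-id>"  # placeholder, not the actual repository name

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Prompt format assumed from the ShareGPT (Vicuna-style) dataset type.
prompt = "USER: Hello, how are you?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
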
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2

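These values map roughly onto the `transformers.TrainingArguments` below. Axolotl configures the `Trainer` itself, so treat this as an approximate sketch rather than the exact training invocation.

```python
# Approximate TrainingArguments matching the hyperparameters listed above;
# a sketch for orientation only, not the exact setup axolotl produces.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./lora-out",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    optim="adamw_bnb_8bit",  # 8-bit AdamW from bitsandbytes
    bf16=True,
    gradient_checkpointing=True,
    weight_decay=0.0,
    logging_steps=1,
    seed=42,
)
```
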
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0107        | 0.0   | 1    | 1.0286          |
| 0.8198        | 0.25  | 152  | 0.8079          |
| 0.7993        | 0.5   | 304  | 0.7904          |
| 0.7348        | 0.75  | 456  | 0.7748          |
| 0.689         | 1.0   | 608  | 0.7638          |
| 0.6462        | 1.23  | 760  | 0.7729          |
| 0.6226        | 1.48  | 912  | 0.7657          |
| 0.6179        | 1.73  | 1064 | 0.7634          |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32

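In `transformers`, the same int8 settings would be expressed roughly as the `BitsAndBytesConfig` below when loading the base model; this is a sketch of the quantized load, not a verbatim reproduction of the training run.

```python
# Sketch of the bitsandbytes int8 quantization config listed above,
# applied when loading the base model for training or inference.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```
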
### Framework versions

- PEFT 0.6.0