---
base_model: mistralai/Mistral-7B-v0.1
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
model-index:
- name: Thestral-0.1
  results: []
license: apache-2.0
language:
- en
---

# Thestral 0.1

Thestral is a fine-tune of Mistral: a QLoRA adaptation of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) trained on the [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset.

The model was fine-tuned on a single H100 GPU (`1xH100`) using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
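
For quick inference, here is a minimal sketch with `transformers`. The repo id below is a placeholder for wherever these weights are published, and the sketch assumes merged weights (the card does not state whether the adapter was merged):

```python
# Minimal inference sketch. Assumptions: merged weights, and a placeholder
# repo id ("your-namespace/Thestral-0.1") standing in for the actual repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Thestral-0.1"  # placeholder, not the confirmed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bf16 setting in the config below
    device_map="auto",
)

prompt = "Explain what a thestral is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The `chatml` tag suggests a ChatML-style prompt may work best, but this card does not specify the prompt template, so treat the plain prompt above as a rough starting point.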

<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: Open-Orca/SlimOrca
    type: sharegpt
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./qlora-out_2

adapter: qlora
lora_model_dir:

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true

lora_r: 128
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

wandb_project: slim_orca
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```
</details>
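
For readers who don't use axolotl, here is a hedged sketch of roughly what the QLoRA-related settings above correspond to in plain `transformers` + `peft`. This is an illustrative equivalence under those assumptions, not the training code that was actually run:

```python
# Rough peft/bitsandbytes equivalent of the QLoRA settings above
# (illustrative only; axolotl handles this internally).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(             # load_in_4bit: true
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,   # bf16: auto
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=128,               # lora_r
    lora_alpha=32,       # lora_alpha
    lora_dropout=0.05,   # lora_dropout
    target_modules=["gate_proj", "down_proj", "up_proj",
                    "q_proj", "v_proj", "k_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```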

# GPT4All Benchmark Set

| Tasks         | Version | Filter | n-shot | Metric   |  Value |   | Stderr |
|---------------|--------:|--------|--------|----------|-------:|---|-------:|
| winogrande    |       1 | none   | None   | acc      | 0.7498 | ± | 0.0122 |
| piqa          |       1 | none   | None   | acc      | 0.8172 | ± | 0.0090 |
|               |         | none   | None   | acc_norm | 0.8286 | ± | 0.0088 |
| openbookqa    |       1 | none   | None   | acc      | 0.3380 | ± | 0.0212 |
|               |         | none   | None   | acc_norm | 0.4420 | ± | 0.0222 |
| hellaswag     |       1 | none   | None   | acc      | 0.6254 | ± | 0.0048 |
|               |         | none   | None   | acc_norm | 0.8061 | ± | 0.0039 |
| boolq         |       2 | none   | None   | acc      | 0.8740 | ± | 0.0058 |
| arc_easy      |       1 | none   | None   | acc      | 0.8199 | ± | 0.0079 |
|               |         | none   | None   | acc_norm | 0.7891 | ± | 0.0084 |
| arc_challenge |       1 | none   | None   | acc      | 0.5145 | ± | 0.0146 |
|               |         | none   | None   | acc_norm | 0.5461 | ± | 0.0145 |

Average: 71.93
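
The table matches the output format of EleutherAI's lm-evaluation-harness. As a hedged reproduction sketch, assuming the v0.4.x Python API and the same placeholder repo id as above (the exact harness version used is not stated on this card):

```python
# Hedged reproduction sketch using lm-evaluation-harness (v0.4.x assumed);
# the repo id is a placeholder, not the confirmed location of the weights.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=your-namespace/Thestral-0.1,dtype=bfloat16",
    tasks=["winogrande", "piqa", "openbookqa", "hellaswag",
           "boolq", "arc_easy", "arc_challenge"],
)
for task, metrics in results["results"].items():
    print(task, metrics)
```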

# 🤖 Additional information about training

The model was fine-tuned for 1.0 epoch.

<details><summary>Loss graph</summary>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/60ca32d2e7bc4b029af088a0/bZdS1tIIJ4tWL_pTM4qeQ.png)
</details><br>

Thanks to the [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) team for the framework used to train this model.

Thanks to the entire open-source AI community.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)