Transformers
GGUF
Inference Endpoints
conversational
aashish1904 committed
Commit ff0d765
1 Parent(s): 1ad7be5

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +174 -0
README.md ADDED
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
datasets:
- allenai/tulu-3-sft-mixture
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/Teleut-7b-GGUF
This is a quantized version of [allura-org/Teleut-7b](https://huggingface.co/allura-org/Teleut-7b), created using llama.cpp.
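For example, the GGUF files can be loaded with llama-cpp-python. This is a minimal sketch, not part of the original card: the `*Q4_K_M.gguf` filename pattern is an assumption (pick whichever quant file actually exists in this repo), and `huggingface_hub` must be installed for the download to work.

```python
# Sketch: load a GGUF quant of Teleut-7b with llama-cpp-python.
# Assumptions: a Q4_K_M file exists in the repo; huggingface_hub is installed.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Teleut-7b-GGUF",
    filename="*Q4_K_M.gguf",   # hypothetical quant choice
    n_ctx=8192,                # matches the sequence_len used for SFT below
    chat_format="chatml",      # the original model was trained with ChatML
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Tülu 3 SFT recipe in one sentence."},
]

out = llm.create_chat_completion(messages=messages, max_tokens=256)
print(out["choices"][0]["message"]["content"])
```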
# Original Model Card

# Teleut 7b

![image/png](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/UqIi8eztdptvt52Mak_1K.png)

A replication attempt of Tülu 3 on the Qwen 2.5 base models.
## Evals (so far)

|                        | Teleut 7B (measured) | Tülu 3 SFT 8B (reported) | Qwen 2.5 7B Instruct (reported) | Ministral 8B (reported) | Mistral 7B v0.3 (reported) |
|------------------------|----------------------|--------------------------|---------------------------------|-------------------------|----------------------------|
| BBH (3 shot, CoT)      | *64.4%*              | **67.9%**                | 21.7%                           | 56.2%                   | 47.0%<sup>NLL</sup>        |
| GSM8K (8 shot, CoT)    | 78.5%                | 76.2%                    | **83.8%**                       | *80.0%*                 | xx.x%                      |
| IFEval (prompt loose)  | 66.3%                | *72.8%*                  | **74.7%**                       | 56.4%                   | 53.0%                      |
| MMLU (0 shot, CoT)     | *73.2%*              | 65.9%                    | **76.6%**                       | 68.5%                   | 30.7%<sup>5-shot</sup>     |
| MMLU Pro (0 shot, CoT) | *48.3%*              | 44.3%                    | **56.3%**<sup>Unknown</sup>     | 32.9%<sup>5-shot</sup>  | 30.7%<sup>5-shot</sup>     |
| PopQA (15 shot)        | 18.9%                | **29.3%**                | 18.1%                           | *20.2%*                 | xx.x%                      |
| TruthfulQA             | 47.2%                | 46.8%                    | **63.1%**                       | *55.5%*                 | xx.x%                      |
## Credits
Big thanks to Retis Labs for providing the 8xH100 polycule used to train and test this model!
Another big thanks to AllenAI for publishing the Tülu 3 data and model series (as well as the paper and details on training), and to Alibaba for training the original Qwen 2.5 base model series!
```bibtex
@article{lambert2024tulu3,
  title  = {Tülu 3: Pushing Frontiers in Open Language Model Post-Training},
  author = {
    Nathan Lambert and
    Jacob Morrison and
    Valentina Pyatkin and
    Shengyi Huang and
    Hamish Ivison and
    Faeze Brahman and
    Lester James V. Miranda and
    Alisa Liu and
    Nouha Dziri and
    Shane Lyu and
    Yuling Gu and
    Saumya Malik and
    Victoria Graf and
    Jena D. Hwang and
    Jiangjiang Yang and
    Ronan Le Bras and
    Oyvind Tafjord and
    Chris Wilhelm and
    Luca Soldaini and
    Noah A. Smith and
    Yizhong Wang and
    Pradeep Dasigi and
    Hannaneh Hajishirzi
  },
  year   = {2024},
  email  = {tulu@allenai.org}
}
```
## Training procedure

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
### Training hyperparameters

The following hyperparameters were used during training (the batch-size arithmetic is spelled out in the sketch after this list):
- learning_rate: 3.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: paged_ademamix_8bit (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 370
- num_epochs: 1
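A quick worked check of those totals, assuming they are simply the per-device batch size × gradient accumulation × number of GPUs (with no accumulation at eval time):

```python
# Sketch of the effective batch-size arithmetic reported above.
train_batch_size = 8          # per device
eval_batch_size = 8           # per device
gradient_accumulation = 2
num_devices = 8

total_train_batch_size = train_batch_size * gradient_accumulation * num_devices
total_eval_batch_size = eval_batch_size * num_devices  # no accumulation for eval

print(total_train_batch_size)  # 128
print(total_eval_batch_size)   # 64
```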
### Framework versions

- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
### Configuration

<details><summary>See axolotl config</summary>

axolotl version: `0.5.2`
```yaml
base_model: Qwen/Qwen2.5-7B

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

strict: false

chat_template: chatml
datasets:
  - path: allenai/tulu-3-sft-mixture
    type: chat_template
    split: train
    field_messages: messages

dataset_prepared_path: last_run_prepared
#val_set_size: 0.02
output_dir: ./ckpts

sequence_len: 8192
#sample_packing: true
pad_to_sequence_len: true

wandb_project: qwen-2.5-7b-sft
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 2
micro_batch_size: 8
num_epochs: 1
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 3.5e-6

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true

deepspeed: deepspeed_configs/zero3_bf16.json

warmup_steps: 370
#evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 2
debug:
weight_decay: 0.0
```

</details><br>
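Since the config above trains with `chat_template: chatml`, inference prompts should follow the same ChatML layout. A minimal sketch (not from the original card), assuming the tokenizer shipped with allura-org/Teleut-7b bundles this template:

```python
# Sketch: render a ChatML prompt with the model's tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allura-org/Teleut-7b")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Which base model was Teleut 7b trained from?"},
]

prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Expected ChatML shape:
# <|im_start|>system ... <|im_end|>
# <|im_start|>user ... <|im_end|>
# <|im_start|>assistant
```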