Training in progress, step 100

- README.md +73 -42
- adapter_config.json +6 -6
- adapter_model.safetensors +1 -1
- training_args.bin +1 -1
README.md CHANGED
@@ -1,71 +1,102 @@
 ---
 tags:
-- generated_from_trainer
 base_model: HuggingFaceH4/zephyr-7b-beta
 model-index:
-- name: WeniGPT-Agents-Zephyr-1.0.27-KTO
   results: []
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->

-# WeniGPT-Agents-Zephyr-1.0.27-KTO

-This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4799
-- Rewards/chosen: 0.0648
-- Rewards/rejected: -0.1020
-- Rewards/margins: 0.1668
-- Kl: 0.9039
-- Logps/chosen: -278.8387
-- Logps/rejected: -239.1956

-## Model description

-More information needed

-## Intended uses & limitations

-More information needed

-## Training and evaluation data

-More information needed

-## Training procedure

 ### Training hyperparameters

 The following hyperparameters were used during training:
 - learning_rate: 2e-06
-- train_batch_size: 4
-- eval_batch_size: 4
-- seed: 42
 - gradient_accumulation_steps: 4
 - total_train_batch_size: 16
-- optimizer:
-- lr_scheduler_type:

 ### Training results

-| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/margins | Kl     | Logps/chosen | Logps/rejected |
-|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:---------------:|:------:|:------------:|:--------------:|
-| 0.5716        | 0.33  | 50   | 0.4898          | 0.0297         | -0.0529          | 0.0826          | 0.4425 | -279.1900    | -238.7043      |
-| 0.6643        | 0.66  | 100  | 0.4799          | 0.0648         | -0.1020          | 0.1668          | 0.9039 | -278.8387    | -239.1956      |

 ### Framework versions
 ---
+license: mit
+library_name: "trl"
 tags:
+- KTO
+- WeniGPT
 base_model: HuggingFaceH4/zephyr-7b-beta
 model-index:
+- name: Weni/WeniGPT-Agents-Zephyr-1.0.27-KTO
   results: []
+language: ['pt']
 ---

+# Weni/WeniGPT-Agents-Zephyr-1.0.27-KTO

+This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), trained on the Weni/wenigpt-agent-1.4.0 dataset with the KTO trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).
+Description: Experiment with a new tokenizer configuration for the zephyr chat template.

 It achieves the following results on the evaluation set:
+{'eval_loss': 0.47991228103637695, 'eval_runtime': 169.8862, 'eval_samples_per_second': 2.06, 'eval_steps_per_second': 0.518, 'eval_rewards/chosen': 0.06480830907821655, 'eval_rewards/rejected': -0.10202363133430481, 'eval_rewards/margins': 0.16683195531368256, 'eval_kl': 0.9039192199707031, 'eval_logps/chosen': -278.8386535644531, 'eval_logps/rejected': -239.19561767578125, 'epoch': 0.97}
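The card states the model was trained with trl's KTO trainer on Weni/wenigpt-agent-1.4.0. As a rough orientation, a minimal sketch of what such a run could look like with trl 0.8.x is below; the hyperparameter values are taken from the card, but the dataset split, its column layout, and the overall script structure are assumptions, not the project's actual training code.

```python
# Minimal KTO fine-tuning sketch for trl 0.8.x -- an illustration, not the
# project's actual training script. Assumes the dataset is in trl's KTO
# format: "prompt", "completion", and boolean "label" columns.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base = "HuggingFaceH4/zephyr-7b-beta"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Dataset id from the card; the split and column layout are assumptions.
train_dataset = load_dataset("Weni/wenigpt-agent-1.4.0", split="train")

args = KTOConfig(
    output_dir="wenigpt-kto",
    learning_rate=2e-6,             # from the card
    per_device_train_batch_size=4,  # from the card
    gradient_accumulation_steps=4,  # from the card: effective batch = 16
    max_steps=147,                  # "num_steps" in the card
    lr_scheduler_type="cosine",     # from the card
)

trainer = KTOTrainer(
    model=model,                    # ref_model is derived automatically
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```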

+## Intended uses & limitations

+This model has not been trained to avoid specific instructions.

+## Training procedure

+Fine-tuning was done on the model HuggingFaceH4/zephyr-7b-beta with the following prompt:

+```
+---------------------
+System_prompt:
+Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. O adjetivo que mais define a sua personalidade é {adjective} e você se comporta da seguinte forma:
+{instructions_formatted}
+
+Na sua memória você tem esse contexto:
+{context}
+
+Lista de requisitos:
+- Responda de forma natural, mas nunca fale sobre um assunto fora do contexto.
+- Nunca traga informações do seu próprio conhecimento.
+- Repito é crucial que você responda usando apenas informações do contexto.
+- Nunca mencione o contexto fornecido.
+- Nunca mencione a pergunta fornecida.
+- Gere a resposta mais útil possível para a pergunta usando informações do conexto acima.
+- Nunca elabore sobre o porque e como você fez a tarefa, apenas responda.
+
+---------------------
+Question:
+{question}
+
+---------------------
+Response:
+{answer}
+
+---------------------
+```
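For orientation, the system prompt roughly translates as: "From now on your name is {name}, you are {occupation} and your goal is {chatbot_goal}. The adjective that best defines your personality is {adjective}...", followed by a requirements list telling the model to answer only from the provided context and never from its own knowledge. A small sketch of how such a template might be instantiated follows; the template text is abridged and every field value is invented for the example.

```python
# Hypothetical instantiation of the prompt template above. The template is
# abridged to a few fields; all values below are made up, not training data.
PROMPT_TEMPLATE = """---------------------
System_prompt:
Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}.

Na sua memória você tem esse contexto:
{context}

---------------------
Question:
{question}

---------------------
Response:
"""

prompt = PROMPT_TEMPLATE.format(
    name="Ana",                                  # hypothetical
    occupation="uma atendente virtual",          # hypothetical
    chatbot_goal="tirar dúvidas sobre pedidos",  # hypothetical
    context="Horário de atendimento: 9h-18h.",   # hypothetical
    question="Qual o horário de atendimento?",   # hypothetical
)
print(prompt)
```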
 ### Training hyperparameters

 The following hyperparameters were used during training:
 - learning_rate: 2e-06
+- per_device_train_batch_size: 4
+- per_device_eval_batch_size: 4
 - gradient_accumulation_steps: 4
+- num_gpus: 1
 - total_train_batch_size: 16
+- optimizer: AdamW
+- lr_scheduler_type: cosine
+- num_steps: 147
+- quantization_type: bitsandbytes
+- LoRA:
+  - bits: 4
+  - use_exllama: True
+  - device_map: auto
+  - use_cache: False
+  - lora_r: 8
+  - lora_alpha: 16
+  - lora_dropout: 0.05
+  - bias: none
+  - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj', 'lm_head', 'embed_tokens']
+  - task_type: CAUSAL_LM
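The effective batch size follows from these values: 4 per device × 4 accumulation steps × 1 GPU = 16. A minimal sketch of a peft + bitsandbytes setup matching the listed LoRA and quantization parameters is below; the parameter values come from the card, while the surrounding loading code is an assumption, not the project's script.

```python
# Sketch of a 4-bit + LoRA setup matching the card's parameters.
# Values marked "from the card" are documented above; the rest is assumed.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(load_in_4bit=True)  # bits: 4, bitsandbytes

model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta",
    quantization_config=bnb_config,
    device_map="auto",           # from the card
    torch_dtype=torch.float16,   # assumption
)
model.config.use_cache = False   # from the card

lora_config = LoraConfig(
    r=8,                         # lora_r
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj",
                    "lm_head", "embed_tokens"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```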

 ### Training results

 ### Framework versions

+- transformers==4.38.2
+- datasets==2.18.0
+- peft==0.10.0
+- safetensors==0.4.2
+- evaluate==0.4.1
+- bitsandbytes==0.43
+- huggingface_hub==0.22.2
+- seqeval==1.2.2
+- optimum==1.18.1
+- auto-gptq==0.7.1
+- gpustat==1.1.1
+- deepspeed==0.14.0
+- wandb==0.16.6
+- trl==0.8.1
+- accelerate==0.29.2
+- coloredlogs==15.0.1
+- traitlets==5.14.2
+- autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.4/autoawq-0.2.4+cu118-cp310-cp310-linux_x86_64.whl

+### Hardware

+- Cloud provider: runpod.io
adapter_config.json CHANGED
@@ -20,15 +20,15 @@
 "rank_pattern": {},
 "revision": null,
 "target_modules": [
-  "up_proj",
-  "embed_tokens",
-  "down_proj",
-  "k_proj",
-  "gate_proj",
   "q_proj",
   "v_proj",
+  "down_proj",
+  "up_proj",
+  "k_proj",
+  "lm_head",
   "o_proj",
-  "lm_head"
+  "embed_tokens",
+  "gate_proj"
 ],
 "task_type": "CAUSAL_LM",
 "use_dora": false,
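The change only reorders target_modules; the adapter still covers all attention and MLP projections plus lm_head and embed_tokens. For reference, a minimal sketch of loading the published adapter onto its base model with peft; the repo id comes from the card's model-index name, and everything else is a plain peft usage pattern rather than project code.

```python
# Sketch: load the published LoRA adapter together with its base model.
# Repo id from the model card; peft resolves the base model from
# adapter_config.json automatically.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "Weni/WeniGPT-Agents-Zephyr-1.0.27-KTO",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
```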
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:cd57ab79ef124274d11e15da7954eead2132774d0e7119df267cce5cbd1d1105
 size 1134834064
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:1bddc716402039400132fe7ed2438b8ffa438d294fae7f94ce5c8c97e9abe6e6
 size 5688
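training_args.bin is a pickled TrainingArguments object, which is why only its LFS pointer changes here. If needed, it can be inspected after download; a sketch follows, assuming compatible transformers versions (the file is a plain pickle, so recent torch needs weights_only=False).

```python
# Sketch: inspect the pickled TrainingArguments in training_args.bin.
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Weni/WeniGPT-Agents-Zephyr-1.0.27-KTO",
    filename="training_args.bin",
)
args = torch.load(path, weights_only=False)  # plain pickle, not tensors
print(args.learning_rate, args.lr_scheduler_type)
```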