Mel-Iza0 committed on
Commit 979257c
1 Parent(s): 6352386

Model save

Files changed (2):
  1. README.md +43 -72
  2. adapter_model.safetensors +2 -2
README.md CHANGED
@@ -1,103 +1,74 @@
  ---
- license: mit
- library_name: "trl"
  tags:
  - KTO
  - WeniGPT
  base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
  model-index:
- - name: Weni/WeniGPT-Agents-Mixstral-Instruct-2.0.0-KTO
  results: []
- language: ['pt']
  ---

- # Weni/WeniGPT-Agents-Mixstral-Instruct-2.0.0-KTO

- This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1] on the dataset Weni/wenigpt-agent-1.2.0 with the KTO trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).
- Description: KTO with Agents 1.2.0 dataset and Mixstral model

  It achieves the following results on the evaluation set:
- {'eval_loss': nan, 'eval_runtime': 353.1676, 'eval_samples_per_second': 0.849, 'eval_steps_per_second': 0.212, 'eval_rewards/chosen': -5.434815883636475, 'eval_logps/chosen': -287.4195251464844, 'eval_rewards/rejected': -12.600561141967773, 'eval_logps/rejected': -352.2350769042969, 'eval_kl': nan, 'eval_rewards/margins': 7.475613594055176, 'epoch': 0.99}

- ## Intended uses & limitations
-
- This model has not been trained to avoid specific instructions.
-
- ## Training procedure
-
- Finetuning was done on the model mistralai/Mixtral-8x7B-Instruct-v0.1 with the following prompt:
-
- ```
- ---------------------
- System_prompt:
- Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. O adjetivo que mais define a sua personalidade é {adjective} e você se comporta da seguinte forma:
- {instructions_formatted}
-
- Na sua memória você tem esse contexto:
- {context}

- Lista de requisitos:
- - Responda de forma natural, mas nunca fale sobre um assunto fora do contexto.
- - Nunca traga informações do seu próprio conhecimento.
- - Repito é crucial que você responda usando apenas informações do contexto.
- - Nunca mencione o contexto fornecido.
- - Nunca mencione a pergunta fornecida.
- - Gere a resposta mais útil possível para a pergunta usando informações do conexto acima.
- - Nunca elabore sobre o porque e como você fez a tarefa, apenas responda.
-
- ---------------------
- Question:
- {question}
-
- ---------------------
- Response:
- {answer}
-
- ---------------------
-
- ```
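
The curly-brace fields in the removed template above are ordinary Python `str.format` placeholders. As a purely illustrative sketch (none of this code is from the card, and every value below is a made-up example, not data from Weni/wenigpt-agent-1.2.0), a training example could be rendered like so:

```python
# Illustrative only: filling an abbreviated copy of the prompt template.
# "template" stands in for the fenced block above; all field values are
# hypothetical examples.
template = (
    "---------------------\n"
    "System_prompt:\n"
    "Agora você se chama {name}, você é {occupation} e seu objetivo é "
    "{chatbot_goal}...\n"
    "---------------------\n"
    "Question:\n"
    "{question}\n"
    "---------------------\n"
    "Response:\n"
    "{answer}\n"
    "---------------------\n"
)

example = template.format(
    name="Ana",
    occupation="atendente virtual",
    chatbot_goal="tirar dúvidas sobre planos",
    question="Qual o valor do plano básico?",
    answer="O plano básico custa R$ 49 por mês.",
)
print(example)
```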

  ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 0.0002
- - per_device_train_batch_size: 4
- - per_device_eval_batch_size: 4
  - gradient_accumulation_steps: 4
- - num_gpus: 1
  - total_train_batch_size: 16
- - optimizer: AdamW
- - lr_scheduler_type: cosine
- - num_steps: 145
- - quantization_type: bitsandbytes
- - LoRA: ("\n - bits: 4\n - use_exllama: True\n - device_map: auto\n - use_cache: False\n - lora_r: 16\n - lora_alpha: 32\n - lora_dropout: 0.05\n - bias: none\n - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']\n - task_type: CAUSAL_LM",)
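
The LoRA tuple above decodes to a standard 4-bit QLoRA-style setup. Below is a minimal sketch of that configuration with bitsandbytes and PEFT, under the assumption that loading followed the usual pattern; the card does not include the actual loading code, and the `use_exllama` flag it lists belongs to GPTQ loading, so it is omitted here:

```python
# Sketch of the 4-bit + LoRA setup implied by the hyperparameter list above.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # bits: 4, bitsandbytes
    device_map="auto",                                          # device_map: auto
)
model.config.use_cache = False                                  # use_cache: False

peft_config = LoraConfig(
    r=16,                    # lora_r
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```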

  ### Training results

  ### Framework versions

- - transformers==4.39.1
- - datasets==2.18.0
- - peft==0.10.0
- - safetensors==0.4.2
- - evaluate==0.4.1
- - bitsandbytes==0.43
- - huggingface_hub==0.20.3
- - seqeval==1.2.2
- - optimum==1.17.1
- - auto-gptq==0.7.1
- - gpustat==1.1.1
- - deepspeed==0.14.0
- - wandb==0.16.3
- - # trl==0.8.1
- - git+https://github.com/kawine/trl.git#egg=trl
- - accelerate==0.28.0
- - coloredlogs==15.0.1
- - traitlets==5.14.1
- - autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl
-
- ### Hardware
- - Cloud provider: runpod.io

  ---
+ license: apache-2.0
+ library_name: peft
  tags:
+ - trl
+ - kto
  - KTO
  - WeniGPT
+ - generated_from_trainer
  base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
  model-index:
+ - name: WeniGPT-Agents-Mixstral-Instruct-2.0.0-KTO
  results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

+ # WeniGPT-Agents-Mixstral-Instruct-2.0.0-KTO

+ This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 0.3856
+ - Rewards/chosen: -2.7129
+ - Logps/chosen: -263.7560
+ - Rewards/rejected: -6.5728
+ - Logps/rejected: -282.9854
+ - Kl: 0.0
+ - Rewards/margins: 3.9057

+ ## Model description

+ More information needed

+ ## Intended uses & limitations

+ More information needed

+ ## Training and evaluation data

+ More information needed

+ ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 0.0002
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
  - gradient_accumulation_steps: 4
  - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.03
+ - training_steps: 145
+ - mixed_precision_training: Native AMP
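
These values map directly onto trl's `KTOConfig`. The following is a sketch of the trainer wiring under several assumptions: that the run used trl's `KTOTrainer` (the ~0.8-era API pinned in the removed requirements), the `Weni/wenigpt-agent-1.2.0` dataset named in the removed card text, the standard KTO column layout (`prompt`/`completion`/`label`), and hypothetical split names:

```python
# Sketch of a KTO run matching the listed hyperparameters (trl ~0.8 API).
# Assumes "model" is the PEFT-wrapped Mixtral from a setup like the earlier
# LoRA sketch; split names and fp16 (vs bf16) are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer
from trl import KTOConfig, KTOTrainer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
dataset = load_dataset("Weni/wenigpt-agent-1.2.0")  # named in the old card; may not be public

args = KTOConfig(
    output_dir="WeniGPT-Agents-Mixstral-Instruct-2.0.0-KTO",
    learning_rate=2e-4,
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=4,    # eval_batch_size: 4
    gradient_accumulation_steps=4,   # 4 * 4 = total_train_batch_size 16
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    max_steps=145,                   # training_steps: 145
    seed=42,
    fp16=True,                       # mixed_precision_training: Native AMP
)

trainer = KTOTrainer(
    model=model,                     # PEFT-wrapped base model (see earlier sketch)
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
)
trainer.train()
```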

  ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Logps/chosen | Rewards/rejected | Logps/rejected | Kl     | Rewards/margins |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------:|:------------:|:----------------:|:--------------:|:------:|:---------------:|
+ | 0.4403        | 0.34  | 50   | 0.4195          | -1.1683        | -248.3103    | -3.6153          | -253.4100      | 0.5120 | 2.6137          |
+ | 0.3518        | 0.68  | 100  | 0.3856          | -2.7129        | -263.7560    | -6.5728          | -282.9854      | 0.0    | 3.9057          |

  ### Framework versions

+ - PEFT 0.10.0
+ - Transformers 4.39.1
+ - Pytorch 2.1.0+cu118
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
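
To try the adapter saved by this commit, loading would look roughly like the sketch below; the repo id is inferred from the model-index name, and 4-bit loading is an assumption made to fit Mixtral-8x7B on a single GPU:

```python
# Sketch: load the base model in 4-bit and attach this commit's adapter.
# Repo id taken from the model-index name; quantization is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Weni/WeniGPT-Agents-Mixstral-Instruct-2.0.0-KTO")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

inputs = tokenizer("Qual o valor do plano básico?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```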
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0f62c273336a7b98480a304521101fe04fdd590ee43f6f0dea258f4d29dfeb79
- size 54560368
+ oid sha256:4bb30d47ede0c352e498d1c9f7a6c920d002a796dcbd6af885c0357368d11c7b
+ size 1103203256