ironrock committed on
Commit 97ad048
1 Parent(s): 8a5aaa0

Upload folder using huggingface_hub

Files changed (1):
  README.md +59 -57
README.md CHANGED
@@ -1,88 +1,90 @@
  ---
- library_name: peft
  tags:
- - trl
- - dpo
  - DPO
  - WeniGPT
- - generated_from_trainer
  base_model: Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-merged
  model-index:
- - name: WeniGPT-Agents-Mistral-1.0.6-SFT-1.0.11-DPO
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # WeniGPT-Agents-Mistral-1.0.6-SFT-1.0.11-DPO

- This model is a fine-tuned version of [Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-merged](https://huggingface.co/Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-merged) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.2882
- - Rewards/chosen: 1.1764
- - Rewards/rejected: -0.8691
- - Rewards/accuracies: 0.7857
- - Rewards/margins: 2.0455
- - Logps/rejected: -193.9308
- - Logps/chosen: -125.0071
- - Logits/rejected: -1.8155
- - Logits/chosen: -1.7656

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 5e-06
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 2
  - gradient_accumulation_steps: 2
  - total_train_batch_size: 4
- - total_eval_batch_size: 2
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.03
- - training_steps: 366
- - mixed_precision_training: Native AMP

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
- |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
- | 0.6531 | 0.4878 | 30 | 0.6394 | 0.1271 | -0.0075 | 0.7857 | 0.1346 | -185.3151 | -135.4998 | -1.7717 | -1.7290 |
- | 0.5732 | 0.9756 | 60 | 0.5476 | 0.3645 | -0.0302 | 0.7857 | 0.3947 | -185.5418 | -133.1256 | -1.7778 | -1.7342 |
- | 0.5252 | 1.4634 | 90 | 0.4644 | 0.6021 | -0.0850 | 0.7857 | 0.6871 | -186.0897 | -130.7499 | -1.7853 | -1.7405 |
- | 0.4402 | 1.9512 | 120 | 0.4089 | 0.7788 | -0.1757 | 0.7857 | 0.9545 | -186.9970 | -128.9828 | -1.7949 | -1.7492 |
- | 0.4138 | 2.4390 | 150 | 0.3652 | 0.9271 | -0.3082 | 0.7857 | 1.2353 | -188.3218 | -127.4999 | -1.8014 | -1.7541 |
- | 0.3863 | 2.9268 | 180 | 0.3403 | 1.0288 | -0.4194 | 0.7857 | 1.4482 | -189.4338 | -126.4830 | -1.8065 | -1.7584 |
- | 0.2999 | 3.4146 | 210 | 0.3210 | 1.0940 | -0.5570 | 0.7857 | 1.6510 | -190.8097 | -125.8309 | -1.8114 | -1.7629 |
- | 0.2311 | 3.9024 | 240 | 0.3085 | 1.1291 | -0.6550 | 0.7857 | 1.7841 | -191.7899 | -125.4799 | -1.8136 | -1.7646 |
- | 0.2992 | 4.3902 | 270 | 0.3012 | 1.1494 | -0.7427 | 0.7857 | 1.8921 | -192.6667 | -125.2770 | -1.8147 | -1.7654 |
- | 0.2532 | 4.8780 | 300 | 0.2943 | 1.1602 | -0.8051 | 0.7857 | 1.9653 | -193.2910 | -125.1693 | -1.8146 | -1.7650 |
- | 0.2564 | 5.3659 | 330 | 0.2905 | 1.1717 | -0.8471 | 0.7857 | 2.0188 | -193.7112 | -125.0541 | -1.8151 | -1.7652 |
- | 0.2457 | 5.8537 | 360 | 0.2882 | 1.1764 | -0.8691 | 0.7857 | 2.0455 | -193.9308 | -125.0071 | -1.8155 | -1.7656 |
-
-
  ### Framework versions

- - PEFT 0.10.0
- - Transformers 4.40.0
- - Pytorch 2.1.0+cu118
- - Datasets 2.18.0
- - Tokenizers 0.19.1
  ---
+ license: mit
+ library_name: "trl"
  tags:
  - DPO
  - WeniGPT
  base_model: Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-merged
  model-index:
+ - name: Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-1.0.11-DPO
  results: []
+ language: ['pt']
  ---

+ # Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-1.0.11-DPO

+ This model is a fine-tuned version of [Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-merged](https://huggingface.co/Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-merged), trained on the dataset Weni/wenigpt-agent-dpo-1.0.0 with the DPO trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).
+ Description: Experiment with DPO using different hyperparameters and the best SFT model of WeniGPT.

  It achieves the following results on the evaluation set:
+ - eval_loss: 0.2882
+ - eval_rewards/chosen: 1.1764
+ - eval_rewards/rejected: -0.8691
+ - eval_rewards/accuracies: 0.7857
+ - eval_rewards/margins: 2.0455
+ - eval_logps/rejected: -193.9308
+ - eval_logps/chosen: -125.0071
+ - eval_logits/rejected: -1.8155
+ - eval_logits/chosen: -1.7656
+ - eval_runtime: 17.3794 s (1.611 samples/s, 0.806 steps/s)
+ - epoch: 5.95

+ ## Intended uses & limitations

+ This model has not been trained to avoid specific instructions.
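
As a usage reference, here is a minimal loading sketch. It assumes this repository hosts a PEFT (LoRA) adapter on top of the merged SFT base model named above; the generation settings are illustrative only.

```python
# Minimal inference sketch (assumption: this repo contains a PEFT/LoRA adapter
# trained on top of the merged SFT base model listed in this card).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-merged"
adapter_id = "Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-1.0.11-DPO"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt only; in practice, build the system prompt from the template shown below.
inputs = tokenizer("Olá, tudo bem?", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```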

+ ## Training procedure

+ Fine-tuning was done on the model Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-merged with the following prompt:

+ ```
+ ---------------------
+ System_prompt:
+ Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. O adjetivo que mais define a sua personalidade é {adjective} e você se comporta da seguinte forma:
+ {instructions_formatted}
+
+ {context_statement}
+
+ Lista de requisitos:
+ - Responda de forma natural, mas nunca fale sobre um assunto fora do contexto.
+ - Nunca traga informações do seu próprio conhecimento.
+ - Repito é crucial que você responda usando apenas informações do contexto.
+ - Nunca mencione o contexto fornecido.
+ - Nunca mencione a pergunta fornecida.
+ - Gere a resposta mais útil possível para a pergunta usando informações do conexto acima.
+ - Nunca elabore sobre o porque e como você fez a tarefa, apenas responda.
+
+ ---------------------
+
+ ```
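
To illustrate how the template is consumed, the sketch below fills its placeholder fields ({name}, {occupation}, {chatbot_goal}, {adjective}, {instructions_formatted}, {context_statement}) with invented example values; only the first lines of the fixed template are reproduced.

```python
# Hypothetical example of filling the system prompt template above.
# All placeholder values are invented for illustration; they are not from the training data.
system_template = (
    "Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. "
    "O adjetivo que mais define a sua personalidade é {adjective} e você se comporta da seguinte forma:\n"
    "{instructions_formatted}\n"
    "\n"
    "{context_statement}\n"
    # ...remaining fixed lines of the template omitted for brevity...
)

system_prompt = system_template.format(
    name="Dóris",                                # hypothetical agent name
    occupation="atendente virtual de uma loja",  # hypothetical occupation
    chatbot_goal="tirar dúvidas sobre pedidos",  # hypothetical goal
    adjective="atencioso",                       # hypothetical adjective
    instructions_formatted="- Responda sempre em português.",
    context_statement="Contexto: o horário de atendimento é das 9h às 18h.",
)
print(system_prompt)
```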

  ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 5e-06
+ - per_device_train_batch_size: 1
+ - per_device_eval_batch_size: 1
  - gradient_accumulation_steps: 2
+ - num_gpus: 2
  - total_train_batch_size: 4
+ - optimizer: AdamW
+ - lr_scheduler_type: cosine
+ - num_steps: 366
+ - quantization_type: bitsandbytes
+ - LoRA:
+   - bits: 4
+   - use_exllama: True
+   - device_map: auto
+   - use_cache: False
+   - lora_r: 8
+   - lora_alpha: 16
+   - lora_dropout: 0.05
+   - bias: none
+   - target_modules: ['v_proj', 'q_proj']
+   - task_type: CAUSAL_LM
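
For context, here is a hedged sketch of how these settings could map onto a TRL 0.8-style `DPOTrainer` run with the LoRA and 4-bit options listed above. It is not the exact training script; the dataset split and column layout, the DPO `beta`, and the compute dtype are assumptions.

```python
# Illustrative sketch only: wires the hyperparameters above into a TRL 0.8-style DPO run.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import DPOTrainer

base_id = "Weni/WeniGPT-Agents-Mistral-1.0.6-SFT-merged"

# 4-bit loading via bitsandbytes; the compute dtype is an assumption.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model.config.use_cache = False
tokenizer = AutoTokenizer.from_pretrained(base_id)

peft_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, bias="none",
    target_modules=["v_proj", "q_proj"], task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="wenigpt-agents-dpo",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    learning_rate=5e-6,
    lr_scheduler_type="cosine",
    max_steps=366,
    bf16=True,
)

# Assumed dataset layout: "prompt", "chosen" and "rejected" columns; split names may differ.
dataset = load_dataset("Weni/wenigpt-agent-dpo-1.0.0")

trainer = DPOTrainer(
    model,
    ref_model=None,   # with a PEFT adapter, the frozen base model serves as the reference
    args=args,
    beta=0.1,         # assumed DPO beta; not stated in the card
    train_dataset=dataset["train"],
    eval_dataset=dataset.get("test"),
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```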
 
  ### Training results

  ### Framework versions

+ - transformers==4.40.0
+ - datasets==2.18.0
+ - peft==0.10.0
+ - safetensors==0.4.2
+ - evaluate==0.4.1
+ - bitsandbytes==0.43
+ - huggingface_hub==0.22.2
+ - seqeval==1.2.2
+ - auto-gptq==0.7.1
+ - gpustat==1.1.1
+ - deepspeed==0.14.0
+ - wandb==0.16.6
+ - trl==0.8.1
+ - accelerate==0.29.3
+ - coloredlogs==15.0.1
+ - traitlets==5.14.2
+ - git+https://github.com/casper-hansen/AutoAWQ.git
+
+ ### Hardware
+ - Cloud provider: runpod.io