---
license: mit
library_name: "trl"
tags:
- DPO
- WeniGPT
base_model: Weni/WeniGPT-Agents-Mistral-1.0.0-SFT-merged
model-index:
- name: Weni/WeniGPT-Agents-Mistral-1.0.0-SFT-1.0.21-DPO
  results: []
language: ['pt']
---

# Weni/WeniGPT-Agents-Mistral-1.0.0-SFT-1.0.21-DPO

This model is a fine-tuned version of [Weni/WeniGPT-Agents-Mistral-1.0.0-SFT-merged](https://huggingface.co/Weni/WeniGPT-Agents-Mistral-1.0.0-SFT-merged), trained on the dataset Weni/wenigpt-agent-dpo-1.0.0 with the DPO trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).
Description: DPO experiment with alternative hyperparameters, starting from the best WeniGPT SFT model.

It achieves the following results on the evaluation set:
- eval_loss: 0.052017249166965485
- eval_runtime: 9.7177
- eval_samples_per_second: 2.881
- eval_steps_per_second: 0.72
- eval_rewards/chosen: 2.3217406272888184
- eval_rewards/rejected: -1.5050735473632812
- eval_rewards/accuracies: 1.0
- eval_rewards/margins: 3.8268144130706787
- eval_logps/rejected: -167.99813842773438
- eval_logps/chosen: -107.60765075683594
- eval_logits/rejected: -1.7717255353927612
- eval_logits/chosen: -1.7573397159576416
- epoch: 5.806451612903226

## Intended uses & limitations

This model has not been trained to avoid specific instructions.
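
Below is a minimal inference sketch, assuming the repository contains model weights loadable directly with `transformers`; if it only holds LoRA adapters, load it with `peft`'s `AutoPeftModelForCausalLM` instead. The example prompt is invented.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weni/WeniGPT-Agents-Mistral-1.0.0-SFT-1.0.21-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference; adjust to your hardware
    device_map="auto",
)

prompt = "Olá! Em que você pode me ajudar?"  # hypothetical user message
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```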

## Training procedure

Fine-tuning was performed on the model Weni/WeniGPT-Agents-Mistral-1.0.0-SFT-merged with the following prompt template:

```
---------------------
System_prompt:
Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. O adjetivo que mais define a sua personalidade é {adjective} e você se comporta da seguinte forma:
{instructions_formatted}

{context_statement}

Lista de requisitos:
 - Responda de forma natural, mas nunca fale sobre um assunto fora do contexto.
 - Nunca traga informações do seu próprio conhecimento.
 - Repito é crucial que você responda usando apenas informações do contexto.
 - Nunca mencione o contexto fornecido.
 - Nunca mencione a pergunta fornecida.
 - Gere a resposta mais útil possível para a pergunta usando informações do conexto acima.
 - Nunca elabore sobre o porque e como você fez a tarefa, apenas responda.


---------------------

```
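
The placeholders in the template above (`{name}`, `{occupation}`, `{chatbot_goal}`, `{adjective}`, `{instructions_formatted}`, `{context_statement}`) are filled in per agent. A minimal sketch of assembling such a system prompt with plain Python string formatting is shown below; the example values are invented, not taken from the training data.

```python
# Hypothetical values: only the placeholder names come from the template above.
system_prompt_template = (
    "Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. "
    "O adjetivo que mais define a sua personalidade é {adjective} e você se comporta "
    "da seguinte forma:\n{instructions_formatted}\n\n{context_statement}\n"
)

system_prompt = system_prompt_template.format(
    name="Dora",                                   # hypothetical agent name
    occupation="atendente virtual",                # hypothetical occupation
    chatbot_goal="ajudar clientes da loja",        # hypothetical goal
    adjective="amigável",                          # hypothetical adjective
    instructions_formatted="- Seja breve e objetiva.",
    context_statement="Contexto: horário de funcionamento das 9h às 18h.",
)
print(system_prompt)
```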

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- per_device_train_batch_size: 1
- per_device_eval_batch_size: 1
- gradient_accumulation_steps: 2
- num_gpus: 4
- total_train_batch_size: 8
- optimizer: AdamW
- lr_scheduler_type: cosine
- num_steps: 180
- quantization_type: bitsandbytes
- LoRA:
  - bits: 4
  - use_exllama: True
  - device_map: auto
  - use_cache: False
  - lora_r: 32
  - lora_alpha: 16
  - lora_dropout: 0.05
  - bias: none
  - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']
  - task_type: CAUSAL_LM
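
For reference, below is a minimal sketch of a DPO run that mirrors the hyperparameters above (trl==0.8.1, peft LoRA, bitsandbytes 4-bit). It is not the authors' exact training script: the output directory and dtype choices are assumptions, and the dataset is expected to expose `prompt`, `chosen`, and `rejected` columns and may require access permissions.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import DPOTrainer

base_id = "Weni/WeniGPT-Agents-Mistral-1.0.0-SFT-merged"

# 4-bit quantization with bitsandbytes, as listed above.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto", use_cache=False
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# LoRA configuration matching the values listed above.
peft_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

args = TrainingArguments(
    output_dir="wenigpt-dpo",          # hypothetical output directory
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    max_steps=180,
    lr_scheduler_type="cosine",
    optim="adamw_torch",
)

train_dataset = load_dataset("Weni/wenigpt-agent-dpo-1.0.0", split="train")

trainer = DPOTrainer(
    model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,  # trl wraps the quantized model with LoRA adapters
)
trainer.train()
```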

### Training results

### Framework versions

- transformers==4.40.0
- datasets==2.18.0
- peft==0.10.0
- safetensors==0.4.2
- evaluate==0.4.1
- bitsandbytes==0.43
- huggingface_hub==0.22.2
- seqeval==1.2.2
- auto-gptq==0.7.1
- gpustat==1.1.1
- deepspeed==0.14.0
- wandb==0.16.6
- trl==0.8.1
- accelerate==0.29.3
- coloredlogs==15.0.1
- traitlets==5.14.2
- git+https://github.com/casper-hansen/AutoAWQ.git

### Hardware
- Cloud provider: runpod.io