Mel-Iza0 committed
Commit 0d7c407
1 Parent(s): b703c3c

Model save

README.md CHANGED
@@ -1,110 +1,69 @@
  ---
- license: mit
- library_name: "trl"
  tags:
- - DPO
- - ZeroShot
  base_model: Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged
  model-index:
- - name: Weni/ZeroShot-Test-3.4.4-Mistral-7b-DPO-1.0.0
  results: []
- language: ['en', 'es', 'pt']
  ---

- # Weni/ZeroShot-Test-3.4.4-Mistral-7b-DPO-1.0.0

- This model is a fine-tuned version of [Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged](https://huggingface.co/Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged), trained on the Weni/zeroshot-dpo-1.0.0 dataset with the DPO trainer. It is part of the ZeroShot project for [Weni](https://weni.ai/).

  It achieves the following results on the evaluation set:
- {'eval_loss': 0.6931470632553101, 'eval_runtime': 23.8922, 'eval_samples_per_second': 2.553, 'eval_steps_per_second': 1.297, 'eval_rewards/chosen': 0.0, 'eval_rewards/rejected': 0.0, 'eval_rewards/accuracies': 0.0, 'eval_rewards/margins': 0.0, 'eval_logps/rejected': -14.109933853149414, 'eval_logps/chosen': -16.323902130126953, 'eval_logits/rejected': -1.3300466537475586, 'eval_logits/chosen': -1.3497618436813354, 'epoch': 0.01}
-
- ## Intended uses & limitations
-
- This model has not been trained to avoid specific instructions.
-
- ## Training procedure
-
- Fine-tuning was done on the model Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged with the following prompt:
-
- ```
- Portuguese:
- [INST] Você é muito especialista em classificar a frase do usuário em um chatbot sobre: {context}
- Pare, pense bem e responda com APENAS UM ÚNICO \`id\` da classe que melhor represente a intenção para a frase do usuário de acordo com a análise de seu contexto, responda APENAS com o \`id\` da classe só se você tiver muita certeza e não explique o motivo. Na ausência, falta de informações ou caso a frase do usuário não se enquadre em nenhuma classe, classifique como "-1".
-
- # Essas são as Classes com seus Id e Contexto:
- {all_classes}
-
- # Frase do usuário: {input}
- # Id da Classe: [/INST]
-
-
- Spanish:
- [INST] Eres muy experto en clasificar la frase del usuario en un chatbot sobre: {context}
- Deténgase, piense bien y responda con SOLO UN ÚNICO \`id\` de la clase que mejor represente la intención para la frase del usuario de acuerdo con el análisis de su contexto, responda SOLO con el \`id\` de la clase si está muy seguro y no explique el motivo. En ausencia, falta de información o en caso de que la frase del usuario no se ajuste a ninguna clase, clasifique como "-1".
-
- # Estas son las Clases con sus Id y Contexto:
- {all_classes}
-
- # Frase del usuario: {input}
- # Id de la Clase: [/INST]
-
-
- English:
- [INST] You are very expert in classifying the user sentence in a chatbot about: {context}
- Stop, think carefully, and respond with ONLY ONE SINGLE \`id\` of the class that best represents the intention for the user's sentence according to the analysis of its context, respond ONLY with the \`id\` of the class if you are very sure and do not explain the reason. In the absence, lack of information, or if the user's sentence does not fit into any class, classify as "-1".
-
- # These are the Classes and its Context:
- {all_classes}
-
- # User's sentence: {input}
- # Class Id: [/INST]
-
-
- Chosen_response:
- {chosen_response}
-
- Rejected_response:
- {rejected_response}
- ```
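For concreteness, here is a minimal sketch (not part of this repository) of how the `{context}`, `{all_classes}`, and `{input}` placeholders in the English template above could be filled before tokenization. The chatbot domain, class list, and user sentence are hypothetical example values:

```
# Sketch only: fill the English template's placeholders with example values.
template = """[INST] You are very expert in classifying the user sentence in a chatbot about: {context}
Stop, think carefully, and respond with ONLY ONE SINGLE `id` of the class that best represents the intention for the user's sentence according to the analysis of its context, respond ONLY with the `id` of the class if you are very sure and do not explain the reason. In the absence, lack of information, or if the user's sentence does not fit into any class, classify as "-1".

# These are the Classes and its Context:
{all_classes}

# User's sentence: {input}
# Class Id: [/INST]"""

prompt = template.format(
    context="an online bookstore",  # hypothetical chatbot domain
    all_classes="id: 1, context: order tracking\nid: 2, context: refund requests",
    input="Where is my package?",   # hypothetical user sentence
)
print(prompt)  # the expected completion here would be the single id "1"
```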
  ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - per_device_train_batch_size: 2
- - per_device_eval_batch_size: 2
  - gradient_accumulation_steps: 2
- - num_gpus: 1
  - total_train_batch_size: 4
- - optimizer: AdamW
- - lr_scheduler_type: cosine
- - num_steps: 1
- - quantization_type: bitsandbytes
- - LoRA:
-   - bits: 4
-   - use_exllama: True
-   - device_map: auto
-   - use_cache: False
-   - lora_r: 8
-   - lora_alpha: 16
-   - lora_dropout: 0.1
-   - bias: none
-   - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']
-   - task_type: CAUSAL_LM
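The quantization and LoRA settings listed above map onto standard `transformers`/`peft` objects. A hedged sketch follows; the actual training script is not part of this commit, so any setting not listed above is an assumption:

```
# Sketch of the listed 4-bit quantization + LoRA configuration; not the repo's script.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True)  # quantization_type: bitsandbytes, bits: 4

model = AutoModelForCausalLM.from_pretrained(
    "Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged",
    quantization_config=bnb_config,
    device_map="auto",   # device_map: auto
    use_cache=False,     # use_cache: False
)

peft_config = LoraConfig(
    r=8,                 # lora_r: 8
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```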
 
  ### Training results
  ### Framework versions

- - transformers==4.38.2
- - datasets==2.17.1
- - peft==0.8.2
- - safetensors==0.4.2
- - evaluate==0.4.1
- - bitsandbytes==0.42
- - huggingface_hub==0.20.3
- - seqeval==1.2.2
- - optimum==1.17.1
- - auto-gptq==0.7.0
- - gpustat==1.1.1
- - deepspeed==0.13.2
- - wandb==0.16.3
- - trl==0.7.11
- - accelerate==0.27.2
- - coloredlogs==15.0.1
- - traitlets==5.14.1
- - autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl
-
- ### Hardware
- - Cloud provider: runpod.io
 
  ---
+ library_name: peft
  tags:
+ - trl
+ - dpo
+ - generated_from_trainer
  base_model: Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged
  model-index:
+ - name: ZeroShot-Test-3.4.4-Mistral-7b-DPO-1.0.0
  results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

+ # ZeroShot-Test-3.4.4-Mistral-7b-DPO-1.0.0

+ This model is a fine-tuned version of [Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged](https://huggingface.co/Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged) on an unknown dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 0.6931
+ - Rewards/chosen: 0.0
+ - Rewards/rejected: 0.0
+ - Rewards/accuracies: 0.0
+ - Rewards/margins: 0.0
+ - Logps/rejected: -13.8448
+ - Logps/chosen: -16.3459
+ - Logits/rejected: -1.3047
+ - Logits/chosen: -1.3336
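A loss of 0.6931 with zero reward margins is consistent with a run of only one training step (training_steps: 1, epoch 0.01): when the policy still matches the reference model, the implicit chosen and rejected rewards are equal, and the DPO loss reduces to ln 2:

```
\mathcal{L}_{\text{DPO}} = -\log \sigma\bigl(\hat{r}_\theta(x, y_w) - \hat{r}_\theta(x, y_l)\bigr)
                         = -\log \sigma(0) = \log 2 \approx 0.6931
```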
+ ## Model description

+ More information needed

+ ## Intended uses & limitations

+ More information needed

+ ## Training and evaluation data

+ More information needed

+ ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 2e-05
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
  - gradient_accumulation_steps: 2
  - total_train_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - training_steps: 1
+ - mixed_precision_training: Native AMP
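A hedged sketch of how these hyperparameters would be wired into `trl`'s `DPOTrainer` (the previous card version pins trl==0.7.11). The dataset split names, the `beta` value, and the `model`/`peft_config` variables (from the configuration sketch earlier) are assumptions, not taken from this commit:

```
# Sketch only: passing the listed hyperparameters to TrainingArguments/DPOTrainer.
from datasets import load_dataset
from transformers import AutoTokenizer, TrainingArguments
from trl import DPOTrainer

tokenizer = AutoTokenizer.from_pretrained("Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged")
dataset = load_dataset("Weni/zeroshot-dpo-1.0.0")  # dataset named in the previous card version

training_args = TrainingArguments(
    output_dir="ZeroShot-Test-3.4.4-Mistral-7b-DPO-1.0.0",
    learning_rate=2e-5,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    gradient_accumulation_steps=2,   # total_train_batch_size = 2 * 2 = 4
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                # lr_scheduler_warmup_ratio: 0.1
    max_steps=1,                     # training_steps: 1
    fp16=True,                       # mixed_precision_training: Native AMP
    seed=42,
)

trainer = DPOTrainer(
    model,                           # quantized base model from the sketch above
    ref_model=None,                  # with a PEFT adapter, trl derives the reference internally
    args=training_args,
    beta=0.1,                        # trl's default DPO beta; not stated in this card
    train_dataset=dataset["train"],  # assumed split names
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
    peft_config=peft_config,         # LoRA config from the sketch above
)
trainer.train()
```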
  ### Training results
  ### Framework versions

+ - PEFT 0.8.2
+ - Transformers 4.38.2
+ - Pytorch 2.1.0+cu118
+ - Datasets 2.17.1
+ - Tokenizers 0.15.2
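For completeness, a minimal usage sketch assuming the standard `peft` loading pattern; the repository id is taken from the old card's model name, and the prompt and generation settings are illustrative:

```
# Sketch only: load the DPO-trained LoRA adapter on top of its base model.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "Weni/ZeroShot-Test-3.4.4-Mistral-7b-DPO-1.0.0",  # assumed repo id
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "Weni/ZeroShot-3.3.14-Mistral-7b-Multilanguage-3.2.0-merged"
)

# `prompt` would be built from the classification template shown earlier.
prompt = "[INST] ... [/INST]"  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=4)  # the card expects a short class id
print(tokenizer.decode(output[0], skip_special_tokens=True))
```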
adapter_config.json CHANGED
@@ -19,10 +19,10 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
  "o_proj",
  "q_proj",
- "v_proj",
- "k_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_rslora": false

  "rank_pattern": {},
  "revision": null,
  "target_modules": [
+ "k_proj",
  "o_proj",
  "q_proj",
+ "v_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_rslora": false
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b2740b212f0c96fc69457df7d68ff5f1138273b27d77e225b82f58ca5cef3e58
  size 27297032

  version https://git-lfs.github.com/spec/v1
+ oid sha256:5a978c1b53efacc0375ec47113a102ed25891151694fa585d0f27c90e38809d5
  size 27297032
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bcb17154363db586abe71ae597083c3758f694eb927b64d1606adba821726ab3
  size 5304

  version https://git-lfs.github.com/spec/v1
+ oid sha256:a4e5eb58d5ca3274c8966d95f0e1b311ede21d470dc31e08efb1d3b629ccbf4c
  size 5304