beamaia committed on
Commit
afbef88
1 Parent(s): b412e14

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +60 -46
README.md CHANGED
@@ -1,75 +1,89 @@
  ---
- license: apache-2.0
- library_name: peft
  tags:
- - trl
- - kto
- - generated_from_trainer
  base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
  model-index:
- - name: WeniGPT-QA-Zephyr-7B-5.0.1-KTO
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # WeniGPT-QA-Zephyr-7B-5.0.1-KTO

- This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0146
- - Eval/rewards/chosen: 6.5462
- - Eval/rewards/rejected: -30.7776
- - Eval/kl: 0.2505
- - Eval/logps/chosen: -129.4441
- - Eval/logps/rejected: -508.0271
- - Eval/rewards/margins: 37.3238

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 0.0002
- - train_batch_size: 4
- - eval_batch_size: 4
- - seed: 42
  - gradient_accumulation_steps: 8
  - total_train_batch_size: 32
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.03
- - training_steps: 262
- - mixed_precision_training: Native AMP

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Rewards/margins |
- |:-------------:|:-----:|:----:|:---------------:|:---------------:|
- | 0.1177        | 0.38  | 50   | 0.0468          | 24.8637         |
- | 0.0257        | 0.76  | 100  | 0.0236          | 30.5016         |
- | 0.0141        | 1.14  | 150  | 0.0219          | 33.9185         |
- | 0.0103        | 1.52  | 200  | 0.0146          | 37.3238         |
- | 0.0084        | 1.9   | 250  | 0.0129          | 39.0837         |
-
-
  ### Framework versions

- - PEFT 0.10.0
- - Transformers 4.39.1
- - Pytorch 2.1.0+cu118
- - Datasets 2.18.0
- - Tokenizers 0.15.1

  ---
+ license: mit
+ library_name: "trl"
  tags:
+ - KTO
+ - WeniGPT
  base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
  model-index:
+ - name: Weni/WeniGPT-QA-Zephyr-7B-5.0.1-KTO
  results: []
+ language: ['pt']
  ---

+ # Weni/WeniGPT-QA-Zephyr-7B-5.0.1-KTO

+ This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the dataset Weni/WeniGPT-QA-Binarized-1.2.0 with the KTO trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).
+ Description: WeniGPT experiment using the KTO trainer with no collator, the Mixtral model, and no system prompt.

  It achieves the following results on the evaluation set:
+ - Loss: 0.0146
+ - Runtime: 1025.937 s
+ - Samples per second: 0.476
+ - Steps per second: 0.119
+ - Rewards/chosen: 6.5462
+ - Rewards/rejected: -30.7776
+ - KL: 0.2505
+ - Logps/chosen: -129.4441
+ - Logps/rejected: -508.0271
+ - Rewards/margins: 37.3238 (rewards/chosen minus rewards/rejected)
+ - Epoch: 1.99

+ ## Intended uses & limitations

+ This model has not been trained to avoid specific instructions.
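
+ As a minimal usage sketch (not part of the original card), the adapter can be loaded on top of the base model with peft. The repository id is taken from the model-index name above; 4-bit loading and the generation settings are assumptions:

+ ```python
+ # Hedged sketch: load the LoRA adapter from this repo onto Mixtral and generate.
+ # Repo id from the model-index name above; 4-bit loading is an assumption.
+ import torch
+ from peft import AutoPeftModelForCausalLM
+ from transformers import AutoTokenizer
+
+ model = AutoPeftModelForCausalLM.from_pretrained(
+     "Weni/WeniGPT-QA-Zephyr-7B-5.0.1-KTO",
+     torch_dtype=torch.bfloat16,
+     load_in_4bit=True,   # assumption, mirrors the 4-bit training setup
+     device_map="auto",
+ )
+ tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
+
+ # Prompt built with the template shown in "Training procedure" below.
+ prompt = "<|user|>\nContexto: A loja abre das 9h às 18h.\n\nQuestão: Qual é o horário de funcionamento?</s>\n<|assistant|>\n"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=256)
+ print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```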

+ ## Training procedure

+ Fine-tuning was done on the model mistralai/Mixtral-8x7B-Instruct-v0.1 with the following prompt:

+ ```
+ ---------------------
+ Question:
+ <|user|>
+ Contexto: {context}
+
+ Questão: {question}</s>
+
+
+ ---------------------
+ Response:
+ <|assistant|>
+ {response}</s>
+
+
+ ---------------------
+
+ ```
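
+ For illustration, here is a small helper (hypothetical, not from the card) that fills the question side of this template before tokenization; {context}, {question} and {response} are the card's own placeholders:

+ ```python
+ # Hypothetical helper that renders the training prompt template above,
+ # up to the <|assistant|> tag where the model's {response} would follow.
+ def format_prompt(context: str, question: str) -> str:
+     return (
+         "---------------------\n"
+         "Question:\n"
+         "<|user|>\n"
+         f"Contexto: {context}\n"
+         "\n"
+         f"Questão: {question}</s>\n"
+         "\n"
+         "\n"
+         "---------------------\n"
+         "Response:\n"
+         "<|assistant|>\n"
+     )
+
+ print(format_prompt("A loja abre das 9h às 18h.", "Qual é o horário de funcionamento?"))
+ ```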

  ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 0.0002
+ - per_device_train_batch_size: 4
+ - per_device_eval_batch_size: 4
  - gradient_accumulation_steps: 8
+ - num_gpus: 1
  - total_train_batch_size: 32
+ - optimizer: AdamW
+ - lr_scheduler_type: cosine
+ - num_steps: 262
+ - quantization_type: bitsandbytes
+ - LoRA:
+   - bits: 4
+   - use_exllama: True
+   - device_map: auto
+   - use_cache: False
+   - lora_r: 16
+   - lora_alpha: 32
+   - lora_dropout: 0.05
+   - bias: none
+   - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']
+   - task_type: CAUSAL_LM
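
+ The listing below is a sketch of how these settings could map onto peft/trl objects, based only on the list above and the pinned libraries; it is not the authors' training script, and names such as output_dir are assumptions:

+ ```python
+ # Hedged reconstruction of the training setup from the hyperparameters above;
+ # not the authors' script. Dataset id comes from the model description.
+ import torch
+ from datasets import load_dataset
+ from peft import LoraConfig
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+ from trl import KTOConfig, KTOTrainer
+
+ base = "mistralai/Mixtral-8x7B-Instruct-v0.1"
+ model = AutoModelForCausalLM.from_pretrained(
+     base,
+     quantization_config=BitsAndBytesConfig(      # "quantization_type: bitsandbytes"
+         load_in_4bit=True,                       # "bits: 4"
+         bnb_4bit_compute_dtype=torch.bfloat16),  # assumption
+     device_map="auto",
+     use_cache=False,
+ )
+ tokenizer = AutoTokenizer.from_pretrained(base)
+
+ peft_config = LoraConfig(
+     r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+     task_type="CAUSAL_LM",
+ )
+
+ args = KTOConfig(
+     output_dir="wenigpt-qa-kto",        # assumption
+     learning_rate=2e-4,
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=4,
+     gradient_accumulation_steps=8,
+     lr_scheduler_type="cosine",
+     max_steps=262,
+ )
+
+ dataset = load_dataset("Weni/WeniGPT-QA-Binarized-1.2.0")
+ trainer = KTOTrainer(
+     model=model,
+     args=args,
+     train_dataset=dataset["train"],
+     tokenizer=tokenizer,
+     peft_config=peft_config,   # no custom data collator, per the description
+ )
+ trainer.train()
+ ```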

  ### Training results

  ### Framework versions

+ - transformers==4.39.1
+ - datasets==2.18.0
+ - peft==0.10.0
+ - safetensors==0.4.2
+ - evaluate==0.4.1
+ - bitsandbytes==0.43
+ - huggingface_hub==0.20.3
+ - seqeval==1.2.2
+ - optimum==1.17.1
+ - auto-gptq==0.7.1
+ - gpustat==1.1.1
+ - deepspeed==0.14.0
+ - wandb==0.16.3
+ - trl @ git+https://github.com/claralp/trl.git@fix_nans#egg=trl (used in place of trl==0.8.1)
+ - accelerate==0.28.0
+ - coloredlogs==15.0.1
+ - traitlets==5.14.1
+ - autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl
+
+ ### Hardware
+ - Cloud provider: runpod.io