ninyx committed
Commit c6a0fc7
1 Parent(s): 87cd74e

Model save

Files changed (3)
  1. README.md +74 -0
  2. adapter_model.safetensors +1 -1
  3. results.json +4 -0
README.md ADDED
@@ -0,0 +1,74 @@
+ ---
+ license: other
+ library_name: peft
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ base_model: meta-llama/Meta-Llama-3-8B-Instruct
+ datasets:
+ - generator
+ metrics:
+ - bleu
+ - rouge
+ model-index:
+ - name: Meta-Llama-3-8B-Instruct-advisegpt-v0.2
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Meta-Llama-3-8B-Instruct-advisegpt-v0.2
+
+ This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.6891
+ - BLEU: 0.7795 (n-gram precisions: 0.8827 / 0.7922 / 0.7521 / 0.7303; brevity penalty: 0.9901; length ratio: 0.9902)
+ - ROUGE: rouge1 0.8798, rouge2 0.7838, rougeL 0.8518, rougeLsum 0.8732
+ - Exact Match: 0.0
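+
+ Since this repository holds a PEFT adapter rather than full model weights, inference presumably means loading the base model first and attaching the adapter on top. A minimal sketch, assuming the adapter repo id `ninyx/Meta-Llama-3-8B-Instruct-advisegpt-v0.2` (inferred from the committer and model name, not stated in the card):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
+ adapter_id = "ninyx/Meta-Llama-3-8B-Instruct-advisegpt-v0.2"  # assumed repo id
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
+ model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
+
+ # Llama-3-Instruct expects its chat template; build the prompt through the tokenizer.
+ messages = [{"role": "user", "content": "Give me some advice on time management."}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ out = model.generate(inputs, max_new_tokens=256)
+ print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```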
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 5
+ - eval_batch_size: 4
+ - seed: 42
+ - gradient_accumulation_steps: 12
+ - total_train_batch_size: 60
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 3
+ - mixed_precision_training: Native AMP
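+
+ Given the `trl` and `sft` tags, these values presumably map onto a `TrainingArguments` passed to TRL's `SFTTrainer`. A reconstruction sketch (the `output_dir` and the fp16-vs-bf16 choice behind "Native AMP" are assumptions):
+
+ ```python
+ from transformers import TrainingArguments
+
+ args = TrainingArguments(
+     output_dir="Meta-Llama-3-8B-Instruct-advisegpt-v0.2",  # assumed
+     learning_rate=2e-5,
+     per_device_train_batch_size=5,
+     per_device_eval_batch_size=4,
+     gradient_accumulation_steps=12,  # 5 x 12 = 60 total train batch size
+     num_train_epochs=3,
+     lr_scheduler_type="cosine",
+     seed=42,
+     fp16=True,                       # "Native AMP" mixed precision (could be bf16)
+     optim="adamw_torch",             # Adam with betas=(0.9, 0.999), eps=1e-8 is the default
+ )
+ ```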
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss | BLEU   | Brevity Penalty | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Exact Match |
+ |:-------------:|:------:|:----:|:---------------:|:------:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-----------:|
+ | 0.1221        | 0.9967 | 175  | 0.6891          | 0.7795 | 0.9901          | 0.8798  | 0.7838  | 0.8518  | 0.8732     | 0.0         |
+ | 0.1091        | 1.9991 | 351  | 0.6977          | 0.7805 | 0.9900          | 0.8803  | 0.7850  | 0.8520  | 0.8737     | 0.0         |
+ | 0.1067        | 2.9900 | 525  | 0.7051          | 0.7809 | 0.9896          | 0.8806  | 0.7857  | 0.8521  | 0.8739     | 0.0         |
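+
+ The metric dicts in the raw trainer logs have exactly the shape produced by Hugging Face `evaluate`'s `bleu`, `rouge`, and `exact_match` metrics, so evaluation was presumably computed along these lines (the prediction/reference strings below are placeholders):
+
+ ```python
+ import evaluate
+
+ bleu = evaluate.load("bleu")
+ rouge = evaluate.load("rouge")
+ exact_match = evaluate.load("exact_match")
+
+ predictions = ["decoded model output"]  # hypothetical generated texts
+ references = ["reference completion"]   # hypothetical gold texts
+
+ # Returns {'bleu': ..., 'precisions': [...], 'brevity_penalty': ..., 'length_ratio': ..., ...}
+ print(bleu.compute(predictions=predictions, references=references))
+ # Returns {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
+ print(rouge.compute(predictions=predictions, references=references))
+ # Returns {'exact_match': ...}
+ print(exact_match.compute(predictions=predictions, references=references))
+ ```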
+
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.40.1
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.19.0
+ - Tokenizers 0.19.1
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:602da733bd12dff97010919d4c54d5c2bdb612887f6273ca5eab30299b9efd75
+ oid sha256:6853d39830a8976bdc810dbf1e257d83c669a1f3427040b8c083ecc922a1884b
  size 2806378968
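The LFS pointer records the adapter blob's SHA-256 and byte size, so a download can be checked against this commit. A minimal verification sketch, assuming the file has been fetched to the working directory:

```python
import hashlib
import os

# Values taken from the LFS pointer in this commit.
EXPECTED_OID = "6853d39830a8976bdc810dbf1e257d83c669a1f3427040b8c083ecc922a1884b"
EXPECTED_SIZE = 2806378968

path = "adapter_model.safetensors"
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert digest.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("adapter_model.safetensors matches the LFS pointer")
```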
results.json ADDED
@@ -0,0 +1,4 @@
+ Pre-training results:
+ {"eval_loss": 2.3326611518859863, "eval_bleu": {"bleu": 0.5454691474666797, "precisions": [0.7673002426556667, 0.55498965343115, 0.47555838550638013, 0.4399945816996907], "brevity_penalty": 0.9983776325955439, "length_ratio": 0.9983789472112966, "translation_length": 668849, "reference_length": 669935}, "eval_rouge": {"rouge1": 0.7711607888144418, "rouge2": 0.5464636265187319, "rougeL": 0.6786521367857117, "rougeLsum": 0.7626278756724272}, "eval_exact_match": {"exact_match": 0.0}, "eval_runtime": 455.9955, "eval_samples_per_second": 3.241, "eval_steps_per_second": 0.811}
+ Post-training results:
+ {"eval_loss": 0.689052164554596, "eval_bleu": {"bleu": 0.7794801643070653, "precisions": [0.8826931860836374, 0.7921738670614986, 0.7521498106470706, 0.7302911239298923], "brevity_penalty": 0.9901418189906349, "length_ratio": 0.9901900930687305, "translation_length": 663363, "reference_length": 669935}, "eval_rouge": {"rouge1": 0.8797610930416109, "rouge2": 0.7838158722398209, "rougeL": 0.8517529678496154, "rougeLsum": 0.8731754875691802}, "eval_exact_match": {"exact_match": 0.0}, "eval_runtime": 454.9212, "eval_samples_per_second": 3.249, "eval_steps_per_second": 0.813, "epoch": 2.990033222591362}