---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-7b-it
model-index:
- name: gemma-7b-it-dolly-15k-japanese-brainstorming-ipo
  results: []
---
# gemma-7b-it-dolly-15k-japanese-brainstorming-ipo

This model is a fine-tuned version of [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4864
- Rouge Scores: {'rouge1': 0.8349379511354852, 'rouge2': 0.7274477012465996, 'rougeL': 0.8017731965466872, 'rougeLsum': 0.8347274626949961}
- Bleu Scores: [0.8344549267092076, 0.7702670675251023, 0.7108107057859971, 0.6501643333897235]
- Gen Len: 241.8588
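The ROUGE figures above are unigram/bigram-overlap F-scores. As a rough illustration of what `rouge1` measures (the numbers in this card come from the training script's own evaluation, not from this sketch), a minimal pure-Python ROUGE-1 F1 looks like:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap between a generated
    text and a reference, combining precision and recall."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Clipped unigram overlap via multiset intersection.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Production evaluations typically also apply stemming and sentence splitting (for `rougeLsum`), which this sketch omits.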
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
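With `lr_scheduler_type: cosine`, the learning rate decays from its initial value toward zero following a half-cosine curve over the run. A minimal sketch of that schedule, assuming no warmup (the actual run's scheduler may include one) and using the card's `learning_rate: 0.0002`:

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float = 2e-4) -> float:
    """Cosine decay from base_lr at step 0 to ~0 at total_steps.
    Mirrors lr_scheduler_type=cosine with no warmup (an assumption)."""
    progress = min(step / max(1, total_steps), 1.0)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

For this run, `total_steps` would be 1194 (3 epochs x 398 steps per epoch), so the learning rate reaches half its initial value at the end of epoch 1.5.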
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-----------:|:-------:|
| 2.6263 | 1.0 | 398 | 2.3289 | {'rouge1': 0.8429091284800581, 'rouge2': 0.6936436106699162, 'rougeL': 0.8148324021885509, 'rougeLsum': 0.8436115528203997} | [0.8182599825705432, 0.7573989130361584, 0.6993616529579798, 0.6397083137823109] | 241.8588 |
| 1.4454 | 2.0 | 796 | 2.2673 | {'rouge1': 0.8676191825821662, 'rouge2': 0.7714826591897748, 'rougeL': 0.8384600375456672, 'rougeLsum': 0.8676504211460437} | [0.838231259049653, 0.7749860292910674, 0.715811357776676, 0.6553787760578684] | 241.8588 |
| 0.6829 | 3.0 | 1194 | 2.4864 | {'rouge1': 0.8349379511354852, 'rouge2': 0.7274477012465996, 'rougeL': 0.8017731965466872, 'rougeLsum': 0.8347274626949961} | [0.8344549267092076, 0.7702670675251023, 0.7108107057859971, 0.6501643333897235] | 241.8588 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.2.dev0
- Tokenizers 0.15.2
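Since this is a PEFT adapter rather than a full model, inference loads the base model first and applies the adapter on top. A hypothetical usage sketch (the adapter repo id is assumed from this card's title, and the prompt is illustrative; running this downloads the ~7B base model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this card's LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = PeftModel.from_pretrained(
    base, "gemma-7b-it-dolly-15k-japanese-brainstorming-ipo"  # assumed repo id
)

# Illustrative prompt; the model was tuned on Japanese brainstorming data.
inputs = tokenizer("Give me three ideas for a weekend trip.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

`PeftModel.merge_and_unload()` can fold the adapter weights into the base model if a standalone checkpoint is preferred for deployment.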