gopalakrishnan-d committed on
Commit
f1d6234
1 Parent(s): 7a3f5d3

End of training

Files changed (1):
README.md +16 -1
README.md CHANGED
@@ -5,6 +5,8 @@ tags:
 - trl
 - sft
 - generated_from_trainer
+datasets:
+- generator
 base_model: google/gemma-2b
 model-index:
 - name: gemma-2b-dolly-ds-lora
@@ -16,7 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # gemma-2b-dolly-ds-lora
 
-This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
+This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
+It achieves the following results on the evaluation set:
+- Loss: 2.1563
 
 ## Model description
 
@@ -46,6 +50,17 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.05
 - training_steps: 593
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 2.8468        | 0.82  | 100  | 2.5258          |
+| 2.4514        | 1.64  | 200  | 2.3193          |
+| 2.3108        | 2.46  | 300  | 2.2252          |
+| 2.2184        | 3.28  | 400  | 2.1790          |
+| 2.1956        | 4.1   | 500  | 2.1563          |
+
+
 ### Framework versions
 
 - PEFT 0.10.0
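
As a quick sanity check of the training-results table added by this commit (a sketch, assuming the epoch and step columns come from this single run), the logged entries all imply the same steps-per-epoch, and the validation loss falls monotonically across evaluations:

```python
# (epoch, step, validation loss) rows from the training-results table.
log = [
    (0.82, 100, 2.5258),
    (1.64, 200, 2.3193),
    (2.46, 300, 2.2252),
    (3.28, 400, 2.1790),
    (4.1,  500, 2.1563),
]

# Each row implies the same steps-per-epoch for this run (~122).
steps_per_epoch = [round(step / epoch) for epoch, step, _ in log]

# Validation loss decreases strictly across the five evaluations.
losses = [loss for _, _, loss in log]
monotonic = all(a > b for a, b in zip(losses, losses[1:]))
```

At roughly 122 steps per epoch, the configured 593 training steps correspond to about 4.86 epochs, which is consistent with the final logged epoch of 4.1 at step 500.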