Migara Amarasinghe committed on
Commit d47c79b
1 Parent(s): 3e8cfbc

Model save

Files changed (2):
  1. README.md +4 -19
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -5,8 +5,6 @@ tags:
 - trl
 - sft
 - generated_from_trainer
-datasets:
-- generator
 base_model: google/gemma-2b
 model-index:
 - name: Gemma2B-LORAfied
@@ -18,9 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Gemma2B-LORAfied
 
-This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.1460
+This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
 
 ## Model description
 
@@ -43,23 +39,12 @@ The following hyperparameters were used during training:
 - train_batch_size: 2
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 8
+- gradient_accumulation_steps: 8
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.05
-- training_steps: 593
-
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 2.8443        | 0.82  | 100  | 2.5332          |
-| 2.4577        | 1.64  | 200  | 2.3103          |
-| 2.275         | 2.46  | 300  | 2.2143          |
-| 2.2331        | 3.28  | 400  | 2.1686          |
-| 2.1737        | 4.1   | 500  | 2.1460          |
+- training_steps: 1480
 
 ### Framework versions
 
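A note on the batch-size change in the README diff: `total_train_batch_size` is the product of `train_batch_size`, `gradient_accumulation_steps`, and the number of devices, so the updated values (2 × 8 × 1 = 16) are internally consistent. The toy loop below is a minimal sketch of that arithmetic, assuming a single device; the scalar "model" and data are hypothetical stand-ins, not the actual Gemma fine-tuning code.

```python
# Sketch of gradient accumulation matching the updated hyperparameters:
# effective batch = train_batch_size * gradient_accumulation_steps * num_devices.

train_batch_size = 2             # per-device micro-batch size (from the diff)
gradient_accumulation_steps = 8  # from the diff
num_devices = 1                  # assumption: single device

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices

w, lr = 0.0, 0.01                # scalar "model" weight and learning rate
# 16 micro-batches of 2 samples each, with toy targets y = 3x
micro_batches = [[(x, 3.0 * x), (x + 0.5, 3.0 * (x + 0.5))] for x in range(16)]

optimizer_steps = 0
accum = 0.0
for i, batch in enumerate(micro_batches, start=1):
    # mean-squared-error gradient over the micro-batch, scaled so the
    # accumulated gradient approximates one large batch of 16 samples
    grad = sum(2.0 * (w * x - y) * x for x, y in batch) / len(batch)
    accum += grad / gradient_accumulation_steps
    if i % gradient_accumulation_steps == 0:
        w -= lr * accum          # one optimizer step per 8 micro-batches
        accum = 0.0
        optimizer_steps += 1
```

With 16 micro-batches and accumulation every 8, the loop takes exactly 2 optimizer steps, each driven by a gradient averaged over an effective batch of 16 samples.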
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:76fdc29fd385771b6b35b868d080c23709439a3b2cd71ea71fec345c39732356
+oid sha256:94e1e95f6be5b6e3f9b047c96a4604dda809c956c5cfc1ef9325c04d5df37378
 size 156926880
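The `adapter_model.safetensors` entry above is a Git LFS v1 pointer file, not the weights themselves: the repo stores only the payload's SHA-256 (`oid`) and byte count (`size`), and the diff shows the hash changing while the size stays at 156926880 bytes. A minimal sketch of how such a pointer is derived from file bytes (the payload here is sample data, not the actual adapter):

```python
import hashlib

def lfs_pointer(payload: bytes) -> str:
    """Build a Git LFS v1 pointer for the given payload bytes."""
    oid = hashlib.sha256(payload).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(payload)}\n"
    )

# Hypothetical payload; the real file is the 156926880-byte adapter checkpoint.
pointer = lfs_pointer(b"example adapter weights")
```

After fetching the real file through `git lfs pull`, its `sha256sum` should match the `oid` recorded in the pointer, which is how a changed hash at an unchanged size (as in this commit) signals that the weights were overwritten in place.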