yuriachermann committed on
Commit 17ba267
1 Parent(s): c32fd37

Model save

Files changed (2):
  1. README.md +3 -27
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -6,8 +6,6 @@ tags:
 - sft
 - generated_from_trainer
 base_model: google/gemma-2b
-datasets:
-- generator
 model-index:
 - name: Not-so-bright-AGI-v1
   results: []
@@ -18,9 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Not-so-bright-AGI-v1
 
-This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.0209
+This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
 
 ## Model description
 
@@ -40,36 +36,16 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 2
+- train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 8
-- total_train_batch_size: 16
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.05
 - training_steps: 1480
 
-### Training results
-
-| Training Loss | Epoch   | Step | Validation Loss |
-|:-------------:|:-------:|:----:|:---------------:|
-| 2.9261        | 1.6393  | 100  | 2.5691          |
-| 2.4363        | 3.2787  | 200  | 2.2789          |
-| 2.2448        | 4.9180  | 300  | 2.1604          |
-| 2.1502        | 6.5574  | 400  | 2.1008          |
-| 2.111         | 8.1967  | 500  | 2.0725          |
-| 2.0837        | 9.8361  | 600  | 2.0565          |
-| 2.0646        | 11.4754 | 700  | 2.0456          |
-| 2.0499        | 13.1148 | 800  | 2.0378          |
-| 2.0497        | 14.7541 | 900  | 2.0323          |
-| 2.0275        | 16.3934 | 1000 | 2.0282          |
-| 2.0271        | 18.0328 | 1100 | 2.0255          |
-| 2.0205        | 19.6721 | 1200 | 2.0233          |
-| 2.0218        | 21.3115 | 1300 | 2.0219          |
-| 2.011         | 22.9508 | 1400 | 2.0209          |
-
-
 ### Framework versions
 
 - PEFT 0.10.0
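For reference, the updated total_train_batch_size follows directly from the per-device batch size and gradient accumulation, and the warmup length from the scheduler ratio. A minimal sketch of that arithmetic (the helper names are my own, not from the training script):

```python
# Values mirror the hyperparameters in the updated README:
# train_batch_size=4, gradient_accumulation_steps=8,
# lr_scheduler_warmup_ratio=0.05, training_steps=1480.

def effective_batch_size(per_device_batch: int, grad_accum_steps: int,
                         num_devices: int = 1) -> int:
    """Total examples contributing to one optimizer step."""
    return per_device_batch * grad_accum_steps * num_devices

def warmup_steps(total_steps: int, warmup_ratio: float) -> int:
    """Warmup steps implied by a warmup ratio over the full run."""
    return int(total_steps * warmup_ratio)

print(effective_batch_size(4, 8))  # 32, matching total_train_batch_size
print(warmup_steps(1480, 0.05))    # 74 linear-warmup steps
```

So the change from train_batch_size 2 to 4 doubles the effective batch from 16 to 32 without touching gradient_accumulation_steps.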
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:969670841dda50db39b86a6ccd8cd527d5be3a6f55fede5196ad5b14dbb9d2c1
+oid sha256:4ac64ba89b8f4a98331f15a564f81eaca49421b8190880d6db4bb35c9a1483d8
 size 156926880
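The adapter_model.safetensors entry is a git-lfs pointer file, so the diff only swaps the sha256 oid while the recorded size stays the same; the binary weights themselves live in LFS storage. A small sketch that parses such a pointer (a hypothetical helper, not part of git-lfs):

```python
# Parse the key-value lines of a git-lfs pointer file like the one above.

def parse_lfs_pointer(text: str) -> dict:
    """Return the pointer's fields (version, oid, size) as a dict of strings."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4ac64ba89b8f4a98331f15a564f81eaca49421b8190880d6db4bb35c9a1483d8
size 156926880
"""
info = parse_lfs_pointer(pointer)
print(info["oid"])   # the new adapter checksum from this commit
print(info["size"])  # unchanged at 156926880 bytes
```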