yimingzhang committed
Commit 1d19bad
Parent: 895f113

End of training

README.md CHANGED
@@ -2,12 +2,16 @@
  license: gemma
  base_model: google/gemma-7b
  tags:
+ - alignment-handbook
+ - trl
+ - sft
+ - generated_from_trainer
  - trl
  - sft
  - alignment-handbook
  - generated_from_trainer
  datasets:
- - generator
+ - yimingzhang/backtrack-0522
  model-index:
  - name: gemma-backtrack-0522
    results: []
@@ -19,7 +23,7 @@ should probably proofread and complete it, then remove this comment. -->
  [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/wandbruh/huggingface/runs/rbd699hg)
  # gemma-backtrack-0522
 
- This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the generator dataset.
+ This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the yimingzhang/backtrack-0522 dataset.
  It achieves the following results on the evaluation set:
  - Loss: 24.1739
 
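
For reference, a minimal sketch of how the card's model and the newly listed dataset could be loaded. The dataset id comes straight from the diff above; the model repo id `yimingzhang/gemma-backtrack-0522` is an assumption based on the model-index name and may differ from the actual Hub path.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yimingzhang/gemma-backtrack-0522"   # assumption: inferred from the model-index name
dataset_id = "yimingzhang/backtrack-0522"       # taken from the README diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# The README lists this dataset as the fine-tuning data source.
train_data = load_dataset(dataset_id, split="train")
print(train_data)
```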
 
all_results.json CHANGED
@@ -1,10 +1,10 @@
  {
  "epoch": 2.571428571428571,
- "eval_loss": 24.174694061279297,
- "eval_runtime": 0.8465,
+ "eval_loss": 24.173879623413086,
+ "eval_runtime": 0.8457,
  "eval_samples": 412,
- "eval_samples_per_second": 24.807,
- "eval_steps_per_second": 1.181,
+ "eval_samples_per_second": 24.832,
+ "eval_steps_per_second": 1.182,
  "total_flos": 2409879306240.0,
  "train_loss": 42.46108203464084,
  "train_runtime": 161.2984,
config.json CHANGED
@@ -24,6 +24,6 @@
  "rope_theta": 10000.0,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.41.0",
- "use_cache": false,
+ "use_cache": true,
  "vocab_size": 256000
  }
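
The only change here flips `use_cache` back on; trainers commonly disable the KV cache while training (for example alongside gradient checkpointing) and re-enable it for inference. A minimal sketch for checking the exported value, with the repo id again assumed rather than taken from the commit:

```python
from transformers import AutoConfig

# Assumed repo id; adjust to the actual Hub path if it differs.
config = AutoConfig.from_pretrained("yimingzhang/gemma-backtrack-0522")
print(config.use_cache)   # expected: True after this commit
print(config.vocab_size)  # 256000, per the unchanged context lines
```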
eval_results.json CHANGED
@@ -1,8 +1,8 @@
  {
  "epoch": 2.571428571428571,
- "eval_loss": 24.174694061279297,
- "eval_runtime": 0.8465,
+ "eval_loss": 24.173879623413086,
+ "eval_runtime": 0.8457,
  "eval_samples": 412,
- "eval_samples_per_second": 24.807,
- "eval_steps_per_second": 1.181
+ "eval_samples_per_second": 24.832,
+ "eval_steps_per_second": 1.182
  }
runs/May23_02-12-02_a100-st-p4de24xlarge-67/events.out.tfevents.1716430753.a100-st-p4de24xlarge-67.3492810.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:faf37eb9e58331600b4ddd87e7dd2430aca2b8101908f19c504a8b33545bc836
+ size 354
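
The added tfevents file is stored as a Git LFS pointer: the three added lines give the spec version, the sha256 oid, and the byte size. A minimal sketch, assuming the real file has been pulled locally under its original name, for checking that it matches the pointer:

```python
import hashlib
import os

# Values copied from the LFS pointer added in this commit.
POINTER_OID = "faf37eb9e58331600b4ddd87e7dd2430aca2b8101908f19c504a8b33545bc836"
POINTER_SIZE = 354

def sha256_of(path: str) -> str:
    """Stream the file through sha256 so large artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

path = "events.out.tfevents.1716430753.a100-st-p4de24xlarge-67.3492810.1"
assert os.path.getsize(path) == POINTER_SIZE, "size does not match the LFS pointer"
assert sha256_of(path) == POINTER_OID, "sha256 does not match the LFS pointer"
print("file matches its LFS pointer")
```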