heedongKilOk committed on
Commit
bd39aab
1 Parent(s): ee84c56

End of training

Files changed (1): README.md (+4, -3)
README.md CHANGED
@@ -9,14 +9,15 @@ tags:
 - sft
 - generated_from_trainer
 model-index:
-- name: outputs
+- name: typo_recommendation
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# outputs
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nuealyoon/typo_recommendation/runs/b4fmwq6h)
+# typo_recommendation
 
 This model is a fine-tuned version of [beomi/gemma-ko-2b](https://huggingface.co/beomi/gemma-ko-2b) on the arrow dataset.
 
@@ -45,7 +46,7 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 4
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- training_steps: 3000
+- training_steps: 100
 - mixed_precision_training: Native AMP
 
 ### Training results
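
For orientation, a minimal sketch of how the hyperparameters listed in the second hunk might map onto `transformers.TrainingArguments`. The training script is not part of this commit, so this is an illustration only: the argument names are the standard `transformers` API, the `output_dir` value and the single-device, no-gradient-accumulation reading of `total_train_batch_size: 4` are assumptions, and anything not shown in the diff is left at its default.

```python
# Hypothetical sketch, not taken from this repository: mapping the
# hyperparameters in the diff onto transformers.TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",            # assumption: matches the pre-commit model-index name
    per_device_train_batch_size=4,   # total_train_batch_size: 4, assuming one device, no gradient accumulation
    adam_beta1=0.9,                  # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,              # ... and epsilon=1e-08
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    max_steps=100,                   # training_steps: 100 (lowered from 3000 in this commit)
    fp16=True,                       # mixed_precision_training: Native AMP
)
```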