pellucid committed
Commit
3b79f10
1 Parent(s): 0656dc8

update model card README.md

Files changed (1):
  1. README.md +12 -12
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license: cc-by-4.0
+license: apache-2.0
 tags:
 - generated_from_trainer
 datasets:
@@ -21,7 +21,7 @@ model-index:
     metrics:
     - name: Bleu
       type: bleu
-      value: 0.0477
+      value: 0.0
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,11 +29,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # my_awesome_opus100_model
 
-This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on the opus100 dataset.
+This model is a fine-tuned version of [KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-en2ko](https://huggingface.co/KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-en2ko) on the opus100 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 7.5752
-- Bleu: 0.0477
-- Gen Len: 37.525
+- Loss: nan
+- Bleu: 0.0
+- Gen Len: 0.0
 
 ## Model description
 
@@ -53,8 +53,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 4
+- eval_batch_size: 4
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -62,10 +62,10 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
-|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
-| No log        | 1.0   | 250  | 7.6990          | 0.0466 | 29.106  |
-| 7.8486        | 2.0   | 500  | 7.5752          | 0.0477 | 37.525  |
+| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
+|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
+| 0.0           | 1.0   | 1000 | nan             | 0.0  | 0.0     |
+| 0.0           | 2.0   | 2000 | nan             | 0.0  | 0.0     |
 
 
 ### Framework versions
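A side note on the hyperparameters in this diff: `lr_scheduler_type: linear` means the learning rate decays linearly from `learning_rate: 2e-05` toward zero over the course of training. The schedule can be sketched as a small pure-Python function; `total_steps=2000` is read from the new results table (2 epochs × 1000 steps), and `warmup_steps=0` is an assumption, since the card does not list a warmup value:

```python
def linear_lr(step, base_lr=2e-05, total_steps=2000, warmup_steps=0):
    """Linearly warm up to base_lr, then decay to zero by total_steps.

    total_steps=2000 comes from the results table (2 epochs x 1000 steps);
    warmup_steps=0 is an assumption -- the card does not list it.
    """
    if warmup_steps and step < warmup_steps:
        # Warmup phase: ramp from 0 up to base_lr.
        return base_lr * step / warmup_steps
    # Decay phase: shrink linearly, clamping at zero past total_steps.
    remaining = max(0.0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)
```

Under this schedule the rate is `2e-05` at step 0, half that at the midpoint (step 1000), and exactly zero at step 2000. Note that the schedule is unlikely to explain the `nan` validation loss above; that usually points at divergence during fine-tuning rather than at the scheduler.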