ShyamVarahagiri committed
Commit 355bf81 · 1 Parent(s): 5109fab

update model card README.md

Files changed (1):
  1. README.md +5 −25
README.md CHANGED

@@ -4,24 +4,9 @@ tags:
 - generated_from_trainer
 datasets:
 - opus100
-metrics:
-- bleu
 model-index:
 - name: MachineTranslation
-  results:
-  - task:
-      name: Sequence-to-sequence Language Modeling
-      type: text2text-generation
-    dataset:
-      name: opus100
-      type: opus100
-      config: en-ta
-      split: test
-      args: en-ta
-    metrics:
-    - name: Bleu
-      type: bleu
-      value: 14.1043
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
 # MachineTranslation
 
 This model is a fine-tuned version of [ShyamVarahagiri/MachineTranslation](https://huggingface.co/ShyamVarahagiri/MachineTranslation) on the opus100 dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.2734
-- Bleu: 14.1043
-- Gen Len: 11.8405
 
 ## Model description
 
@@ -60,14 +41,13 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 768
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 2
+- num_epochs: 1
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
-|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
-| No log        | 1.0   | 295  | 2.3233          | 13.2366 | 12.295  |
-| 2.6303        | 2.0   | 590  | 2.2734          | 14.1043 | 11.8405 |
+| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
+|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
+| No log        | 1.0   | 295  | 2.2160          | 15.007 | 11.698  |
 
 
 ### Framework versions
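For readers reconciling the card's numbers: with `total_train_batch_size: 768`, one pass over the training split produces the 295 optimizer steps shown in the results table. A quick sketch of that arithmetic — the exact opus100 en-ta training-split size is not stated in the diff, so the example count below is an assumption chosen to be consistent with 295 steps:

```python
import math

# Hyperparameters taken from the card's diff.
total_train_batch_size = 768    # per-device batch size x grad accumulation x devices
steps_per_epoch_reported = 295  # "Step" column of the training-results table

# Assumed training-set size; the diff does not state it. Any value in
# [294*768 + 1, 295*768] = [225_793, 226_560] reproduces 295 steps.
assumed_train_examples = 226_000

steps_per_epoch = math.ceil(assumed_train_examples / total_train_batch_size)
print(steps_per_epoch)  # 295, matching the table
```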
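The `lr_scheduler_type: linear` hyperparameter means the learning rate ramps up over any warmup steps and then decays linearly to zero by the end of training. A minimal sketch of that schedule — the base learning rate and warmup count below are illustrative assumptions, since the diff does not show them:

```python
def linear_lr(step, total_steps, base_lr, warmup_steps=0):
    """Linear schedule: ramp up during warmup, then decay linearly to zero."""
    if warmup_steps and step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

total_steps = 295  # one epoch, from the results table
base_lr = 2e-5     # assumed; the learning rate is not shown in this diff

print(linear_lr(0, total_steps, base_lr))    # full base_lr at the start
print(linear_lr(295, total_steps, base_lr))  # 0.0 at the final step
```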
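The Bleu column reports BLEU on the 0–100 scale (15.007 after this commit's single-epoch run, versus the 14.1043 the card previously listed), as computed by the Trainer's evaluation metric at corpus level. As a rough illustration of what the metric measures, here is a toy single-reference, sentence-level BLEU without smoothing; it is a sketch of the idea, not the library implementation used for the card:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Toy single-reference BLEU on a 0-1 scale (the card uses 0-100).

    No smoothing: any n-gram order with zero overlap (including
    candidates shorter than max_n tokens) yields a score of 0.
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        if overlap == 0:
            return 0.0
        precisions.append(overlap / max(sum(cand.values()), 1))
    # Geometric mean of n-gram precisions...
    geo = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # ...scaled by the brevity penalty for short candidates.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c >= r else math.exp(1 - r / c)
    return bp * geo

ref = "the cat sat on the mat".split()
print(bleu(ref, ref))  # 1.0 for a perfect match
```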