abhikr487 committed on
Commit
932cf7a
1 Parent(s): d0bfe2a

https://huggingface.co/abhikr487/lab2_efficient

Files changed (3)
  1. README.md +6 -5
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
   metrics:
   - name: Bleu
     type: bleu
-     value: 50.29242417597565
+     value: 51.56963230203597
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.9594
- - Bleu: 50.2924
+ - Loss: 0.8936
+ - Bleu: 51.5696
 
  ## Model description
 
@@ -53,15 +53,16 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 64
+ - train_batch_size: 32
  - eval_batch_size: 64
  - seed: 42
- - gradient_accumulation_steps: 2
+ - gradient_accumulation_steps: 4
  - total_train_batch_size: 128
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.1
  - num_epochs: 2
+ - mixed_precision_training: Native AMP
 
  ### Training results
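Taken together, the README changes describe a more memory-friendly run: the per-device batch size drops from 64 to 32 while gradient accumulation doubles from 2 to 4 steps, so the effective batch size stays at 32 x 4 = 128, and Native AMP mixed precision is switched on. For reference, the updated hyperparameters map onto a `transformers` setup roughly like the sketch below; this is a hedged reconstruction, not code from this repository, and the `output_dir` and `predict_with_generate` values are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the updated configuration implied by the README diff.
# 32 (per-device batch) x 4 (accumulation steps) = 128, matching
# total_train_batch_size on a single GPU.
training_args = Seq2SeqTrainingArguments(
    output_dir="lab2_efficient",        # assumption: directory name is not in the diff
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2,
    fp16=True,                          # "Native AMP" mixed precision
    predict_with_generate=True,         # assumption: needed for BLEU during eval
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
)
```

Halving the per-device batch while doubling accumulation keeps every optimizer step statistically equivalent but lowers peak activation memory, which is the usual motivation for this trade.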
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0a1c7554d7b1c6064bcad50862a2ca9896524be4721027732b35afc2ba859e64
+ oid sha256:9a1d3d7add0197415769767af7674d72e034eea84717eee957a26a7f3c70a61b
  size 298705768
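The model.safetensors entry above is a Git LFS pointer file: the repository stores only the sha256 oid and size, while the blob itself lives in LFS storage, so this diff swaps the oid to reference the retrained weights while the size stays at 298705768 bytes. To confirm that a downloaded copy matches the pointer in this commit, one can hash it locally; a minimal sketch using only the Python standard library, with an illustrative local path:

```python
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file and return its sha256 hex digest, the same
    value recorded as `oid sha256:...` in a Git LFS pointer."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Expected oid from this commit's model.safetensors pointer:
expected = "9a1d3d7add0197415769767af7674d72e034eea84717eee957a26a7f3c70a61b"
print(lfs_oid("model.safetensors") == expected)  # path is illustrative
```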
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d16b3da439db74f99eec8b232f35c7f32b83b0770c973b10fe9ef395111dd530
+ oid sha256:b8130e8c9d8102c2e6b79e77c073d74aa4148f49058f8867c2e8b907d4587890
  size 5112
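training_args.bin is the training-arguments object that `Trainer` serializes alongside the model via `torch.save`, which is why its oid changes in lockstep with the hyperparameter edits in the README. It can be inspected directly; a short sketch, assuming the file has been downloaded locally and that you trust its source, since unpickling arbitrary files is unsafe:

```python
import torch

# training_args.bin is a pickled (Seq2Seq)TrainingArguments object saved
# by Trainer. weights_only=False is required on recent PyTorch versions to
# unpickle arbitrary Python objects; only do this for files you trust.
args = torch.load("training_args.bin", weights_only=False)

print(args.per_device_train_batch_size)  # expected after this commit: 32
print(args.gradient_accumulation_steps)  # expected: 4
print(args.fp16)                         # expected: True (Native AMP)
```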