kepinsam committed
Commit
4d5b203
1 Parent(s): 5d45122

End of training

README.md ADDED
@@ -0,0 +1,87 @@
+ ---
+ license: cc-by-nc-4.0
+ base_model: facebook/nllb-200-distilled-600M
+ tags:
+ - generated_from_trainer
+ datasets:
+ - nusatranslation_mt
+ metrics:
+ - sacrebleu
+ model-index:
+ - name: ind-to-bbc-nmt-v7
+   results:
+   - task:
+       name: Sequence-to-sequence Language Modeling
+       type: text2text-generation
+     dataset:
+       name: nusatranslation_mt
+       type: nusatranslation_mt
+       config: nusatranslation_mt_btk_ind_source
+       split: test
+       args: nusatranslation_mt_btk_ind_source
+     metrics:
+     - name: Sacrebleu
+       type: sacrebleu
+       value: 31.4148
+ ---
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ # ind-to-bbc-nmt-v7
+ 
+ This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the nusatranslation_mt dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.1534
+ - Sacrebleu: 31.4148
+ - Gen Len: 45.246
+ 
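+ As a minimal usage sketch, the snippet below loads the checkpoint for Indonesian-to-Batak Toba translation. The repo id `kepinsam/ind-to-bbc-nmt-v7`, the language token `bbc_Latn`, and the use of `forced_bos_token_id` are illustrative assumptions, not details confirmed by this card; verify them against the tokenizer before relying on the output.
+ 
+ ```python
+ # Minimal usage sketch; repo id and target-language token are assumptions.
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+ 
+ model_id = "kepinsam/ind-to-bbc-nmt-v7"  # assumed repo id for this checkpoint
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
+ 
+ # Indonesian source sentence
+ inputs = tokenizer("Selamat pagi, apa kabar?", return_tensors="pt")
+ 
+ # NLLB checkpoints normally expect the target language to be set via
+ # forced_bos_token_id; "bbc_Latn" is an assumed token for Batak Toba and
+ # may differ in this fine-tune.
+ bbc_token_id = tokenizer.convert_tokens_to_ids("bbc_Latn")
+ outputs = model.generate(**inputs, forced_bos_token_id=bbc_token_id, max_length=200)
+ print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
+ ```
+ 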
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ More information needed
+ 
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 10
+ - mixed_precision_training: Native AMP
+ 
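+ As a hedged illustration only, the list above maps onto `Seq2SeqTrainingArguments` roughly as follows; the output directory, evaluation strategy, and `predict_with_generate` flag are assumptions, not taken from this card.
+ 
+ ```python
+ # Rough reconstruction of the listed hyperparameters; anything not in the
+ # list above (output_dir, eval_strategy, predict_with_generate) is assumed.
+ from transformers import Seq2SeqTrainingArguments
+ 
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="ind-to-bbc-nmt-v7",    # assumed
+     learning_rate=5e-5,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     seed=42,
+     # The listed Adam betas/epsilon are the library defaults, so no
+     # explicit optimizer argument is needed here.
+     lr_scheduler_type="linear",
+     warmup_ratio=0.1,
+     num_train_epochs=10,
+     fp16=True,                         # "Native AMP" mixed precision
+     eval_strategy="epoch",             # assumed; the results table is per epoch
+     predict_with_generate=True,        # assumed; needed for Sacrebleu/Gen Len
+ )
+ ```
+ 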
+ ### Training results
+ 
+ | Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
+ | 5.1799        | 1.0   | 413  | 2.3351          | 25.3863   | 45.489  |
+ | 1.6805        | 2.0   | 826  | 1.3384          | 30.3818   | 45.661  |
+ | 1.2114        | 3.0   | 1239 | 1.2202          | 30.9982   | 45.562  |
+ | 1.0517        | 4.0   | 1652 | 1.1827          | 31.2905   | 45.3925 |
+ | 0.9461        | 5.0   | 2065 | 1.1678          | 31.6094   | 45.2625 |
+ | 0.8728        | 6.0   | 2478 | 1.1471          | 31.2517   | 45.4265 |
+ | 0.8153        | 7.0   | 2891 | 1.1497          | 31.332    | 45.1645 |
+ | 0.7719        | 8.0   | 3304 | 1.1467          | 31.372    | 45.3915 |
+ | 0.743         | 9.0   | 3717 | 1.1491          | 31.4979   | 45.0825 |
+ | 0.7204        | 10.0  | 4130 | 1.1534          | 31.4148   | 45.246  |
+ 
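+ The Sacrebleu and Gen Len columns are consistent with the usual `compute_metrics` pattern for `Seq2SeqTrainer`, sketched below with the `evaluate` library; the actual training script is not included in this card, so treat this as an assumption-laden reconstruction (the tokenizer would normally be bound in via a closure or `functools.partial`).
+ 
+ ```python
+ # Sketch of a sacrebleu/gen_len metric function; not the card's own code.
+ import numpy as np
+ import evaluate
+ 
+ sacrebleu = evaluate.load("sacrebleu")
+ 
+ def compute_metrics(eval_preds, tokenizer):
+     preds, labels = eval_preds
+     # Restore padding where labels were masked with -100 before decoding.
+     labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
+     decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
+     decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
+     score = sacrebleu.compute(
+         predictions=decoded_preds,
+         references=[[ref] for ref in decoded_labels],
+     )["score"]
+     gen_len = np.mean([np.count_nonzero(p != tokenizer.pad_token_id) for p in preds])
+     return {"sacrebleu": score, "gen_len": gen_len}
+ ```
+ 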
+ ### Framework versions
+ 
+ - Transformers 4.41.2
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.19.1
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "eos_token_id": 2,
+   "max_length": 200,
+   "pad_token_id": 1,
+   "transformers_version": "4.41.2"
+ }
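These defaults ship with the checkpoint and are picked up automatically by `generate()`. A small sketch for inspecting them (repo id assumed, as above):

```python
# Sketch: load the generation defaults from the hub; repo id is an assumption.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("kepinsam/ind-to-bbc-nmt-v7")
print(gen_config.max_length)              # 200, per generation_config.json
print(gen_config.decoder_start_token_id)  # 2
```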
model.safetensors CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
- oid sha256:ae6b8bdbc676d29350ff9d1f9c6520b70815a614eb9205f2d5de108169697091
+ oid sha256:8b9cf568c16b8d620c08175c5c9cfd0d0149da8f2d1b739e4f91356cc07de9d0
size 2460354912
runs/Jul13_04-34-43_592ba6d38807/events.out.tfevents.1720845287.592ba6d38807.1247.0 CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
- oid sha256:f1986c5d872bb79035ceb27fab99cb1b5da9d7bc59bb55915cce49b28e3c7818
- size 11074
+ oid sha256:046c9f7534980c6a57ba7841777703f3a68aef806fd1ad73a7280d169a87185f
+ size 11428