gs224 committed on
Commit a7f7ee6 · verified · 1 Parent(s): d802df4

Add model, config, and tokenizer

README.md ADDED
@@ -0,0 +1,77 @@
+ ---
+ library_name: transformers
+ base_model: facebook/mbart-large-50-many-to-many-mmt
+ tags:
+ - generated_from_trainer
+ datasets:
+ - iva_mt_wslot
+ metrics:
+ - bleu
+ model-index:
+ - name: mbart-translation
+   results:
+   - task:
+       name: Sequence-to-sequence Language Modeling
+       type: text2text-generation
+     dataset:
+       name: iva_mt_wslot
+       type: iva_mt_wslot
+       config: en-pl
+       split: validation
+       args: en-pl
+     metrics:
+     - name: Bleu
+       type: bleu
+       value: 40.615
+ ---
+
+ # mbart-translation
+
+ This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the iva_mt_wslot dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.0132
+ - Bleu: 40.615
+ - Gen Len: 14.4961
+
+ ## Model description
+
+ An mBART-50 many-to-many multilingual translation checkpoint fine-tuned for English-to-Polish (en-pl) machine translation on the iva_mt_wslot dataset.
+
+ ## Intended uses & limitations
+
+ Intended for English-to-Polish translation. The model was fine-tuned for a single epoch on iva_mt_wslot, so quality on text outside that domain has not been evaluated here; a minimal inference sketch follows.
+
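+ A minimal inference sketch, not an official snippet from the training run: the repo id `gs224/mbart-translation` is an assumption from the committer and model name, and `en_XX`/`pl_PL` are the standard mBART-50 language codes; decoding settings mirror `generation_config.json`.
+
+ ```python
+ from transformers import MBart50TokenizerFast, MBartForConditionalGeneration
+
+ model_id = "gs224/mbart-translation"  # assumed repo id; adjust to the actual hub path
+ tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
+ model = MBartForConditionalGeneration.from_pretrained(model_id)
+
+ # Translate an English sentence into Polish.
+ inputs = tokenizer("Set an alarm for seven in the morning.", return_tensors="pt")
+ generated = model.generate(
+     **inputs,
+     forced_bos_token_id=tokenizer.lang_code_to_id["pl_PL"],  # force Polish as target
+     num_beams=5,     # matches generation_config.json
+     max_length=200,  # matches generation_config.json
+ )
+ print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
+ ```
+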
+ ## Training and evaluation data
+
+ Fine-tuned on the en-pl configuration of iva_mt_wslot; the metrics above are reported on its validation split.
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after this list):
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
+ - lr_scheduler_type: linear
+ - num_epochs: 1
+ - mixed_precision_training: Native AMP
+
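+ As a rough reconstruction, these settings map onto `Seq2SeqTrainingArguments` as below; the `output_dir` is assumed, and this is a sketch rather than the exact training script.
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="mbart-translation",  # assumed
+     learning_rate=5e-5,
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=8,
+     seed=42,
+     optim="adamw_torch",
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     num_train_epochs=1,
+     fp16=True,                   # "Native AMP" mixed precision
+     predict_with_generate=True,  # needed to compute Bleu / Gen Len during eval
+ )
+ ```
+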
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
+ | 1.0632        | 1.0   | 2546 | 1.0132          | 40.615 | 14.4961 |
+
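+ The Bleu and Gen Len columns are typically produced by a `compute_metrics` hook like the following sketch, built on the `evaluate` library's sacrebleu wrapper; the exact hook used for this run is not part of this commit, and `tokenizer` is assumed to be in scope.
+
+ ```python
+ import evaluate
+ import numpy as np
+
+ sacrebleu = evaluate.load("sacrebleu")
+
+ def compute_metrics(eval_preds):
+     preds, labels = eval_preds
+     decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
+     # Replace -100 (positions ignored by the loss) with pad tokens so they decode cleanly.
+     labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
+     decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
+     result = sacrebleu.compute(
+         predictions=decoded_preds,
+         references=[[label] for label in decoded_labels],
+     )
+     gen_len = np.mean([np.count_nonzero(p != tokenizer.pad_token_id) for p in preds])
+     return {"bleu": result["score"], "gen_len": gen_len}
+ ```
+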
+ ### Framework versions
+
+ - Transformers 4.46.2
+ - PyTorch 2.5.1+cu121
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
generation_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "forced_eos_token_id": 2,
+   "max_length": 200,
+   "num_beams": 5,
+   "pad_token_id": 1,
+   "transformers_version": "4.46.2"
+ }
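These decoding defaults (5-beam search with early stopping, max length 200) are picked up automatically by `generate()`. A minimal sketch of inspecting them, with the repo id assumed as in the README example:

```python
from transformers import GenerationConfig

# Loads generation_config.json from the hub repo.
gen_cfg = GenerationConfig.from_pretrained("gs224/mbart-translation")
print(gen_cfg.num_beams, gen_cfg.early_stopping, gen_cfg.max_length)  # 5 True 200
```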
runs/Dec01_15-45-49_ab8a804c46fd/events.out.tfevents.1733070758.ab8a804c46fd.574.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20b029d8cd642817cd55d6ded385760e40ac249fa3e578eaef731891754488d7
+ size 458
tokenizer.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0da4e7af9b86e84c844ce9b0d58a845dd3b0d9724abef93bc226aeb17d5110a0
- size 17110186
+ oid sha256:86f983b6563a9468794455498914bda0eaf9a60e5c9cd5a21669a24a625e490d
+ size 17109921