VK246 committed on
Commit
b3c92e6
1 Parent(s): c5eeb94

End of training

README.md ADDED
@@ -0,0 +1,71 @@
+ ---
+ base_model: VK246/IC_ver6L_coco_swin_gpt2_50B_1e
+ tags:
+ - generated_from_trainer
+ datasets:
+ - coco
+ metrics:
+ - rouge
+ model-index:
+ - name: IC_ver6M_coco_swin_gpt2_50A_1e
+   results: []
+ ---
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ # IC_ver6M_coco_swin_gpt2_50A_1e
+ 
+ This model is a fine-tuned version of [VK246/IC_ver6L_coco_swin_gpt2_50B_1e](https://huggingface.co/VK246/IC_ver6L_coco_swin_gpt2_50B_1e) on the coco dataset (a usage sketch follows the metrics below).
+ It achieves the following results on the evaluation set:
+ - Loss: 0.8502
+ - Cider: 74.2614
+ - Rouge1: 41.2346
+ - Rouge2: 15.6726
+ - Rougel: 37.3373
+ - Rougelsum: 37.3448
+ - Bleu-1: 42.1725
+ - Bleu-2: 24.0988
+ - Bleu-3: 15.0597
+ - Bleu-4: 9.8745
+ - Gen Len: 11.2806
+ 
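The card omits a usage example. The repo name (`swin` encoder, `gpt2` decoder) suggests a `VisionEncoderDecoderModel` checkpoint; the sketch below assumes that class, plus a hypothetical input image and the `max_length: 32` stored in this commit's tokenizer_config.json.

```python
# Captioning sketch; VisionEncoderDecoderModel is an assumption based on the
# repo naming, not something the card states.
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

repo = "VK246/IC_ver6M_coco_swin_gpt2_50A_1e"
model = VisionEncoderDecoderModel.from_pretrained(repo)
processor = AutoImageProcessor.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

image = Image.open("example.jpg").convert("RGB")  # hypothetical image path
pixel_values = processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=32)  # 32 per tokenizer_config.json
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```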
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ More information needed
+ 
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training (see the TrainingArguments sketch after this list):
+ - learning_rate: 5e-05
+ - train_batch_size: 96
+ - eval_batch_size: 96
+ - seed: 42
+ - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 1
+ 
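A sketch of how the listed values map onto `transformers.TrainingArguments`; the output directory is a placeholder, and treating `train_batch_size` as per-device is an assumption.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="IC_ver6M_coco_swin_gpt2_50A_1e",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=96,  # assumes the listed 96 is per device
    per_device_eval_batch_size=96,
    seed=42,
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```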
+ ### Training results
+ 
+ | Training Loss | Epoch | Step | Validation Loss | Cider   | Rouge1  | Rouge2  | Rougel  | Rougelsum | Bleu-1  | Bleu-2  | Bleu-3  | Bleu-4 | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:------:|:-------:|
+ | 0.4052        | 0.34  | 1000 | 0.9934          | 68.6639 | 39.9165 | 14.5681 | 36.2437 | 36.2466   | 41.4603 | 23.2118 | 14.2892 | 9.3187 | 11.2806 |
+ | 0.5281        | 0.68  | 2000 | 0.8502          | 74.2614 | 41.2346 | 15.6726 | 37.3373 | 37.3448   | 42.1725 | 24.0988 | 15.0597 | 9.8745 | 11.2806 |
+ 
+ 
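The ROUGE and BLEU columns can be approximated with the `evaluate` library, as sketched below on placeholder captions. Note that `evaluate` returns scores in [0, 1] while the table reports them scaled by 100, and CIDEr is not in `evaluate`'s standard set (it is usually computed with pycocoevalcap).

```python
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

# Placeholder data; a real run would pair each generated caption with the
# reference captions for its COCO image.
predictions = ["a cat sitting on a wooden bench"]
references = [["a cat sits on a bench in the park"]]

print(rouge.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=references, max_order=4))
```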
+ ### Framework versions
+ 
+ - Transformers 4.32.0
+ - PyTorch 2.0.1+cu118
+ - Datasets 2.14.4
+ - Tokenizers 0.13.3
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "bos_token_id": 50256,
+   "decoder_start_token_id": 50256,
+   "eos_token_id": 50256,
+   "pad_token_id": 50256,
+   "transformers_version": "4.32.0"
+ }
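All special-token ids are pinned to GPT-2's `<|endoftext|>` (id 50256), which doubles as the decoder start token. `generate()` reads this file automatically; the check below just makes the wiring explicit.

```python
from transformers import GenerationConfig

cfg = GenerationConfig.from_pretrained("VK246/IC_ver6M_coco_swin_gpt2_50A_1e")
# One token id plays every special role in this checkpoint.
assert cfg.bos_token_id == cfg.eos_token_id == cfg.pad_token_id == 50256
assert cfg.decoder_start_token_id == 50256
```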
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:01f45a7e9d8c7c2fcb680b1b08fd7e363e6f1c5d5d513ac934140acee1ef2f28
+ oid sha256:cc6b023050893d65e0619420e4763bed0f859ebf0607a5985dd2ee67188fa2e2
  size 962051233
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "pad_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "<|endoftext|>",
+   "max_length": 32,
+   "model_max_length": 1024,
+   "pad_to_multiple_of": null,
+   "pad_token": "<|endoftext|>",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "stride": 0,
+   "tokenizer_class": "GPT2Tokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "<|endoftext|>"
+ }
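This is the stock GPT-2 BPE tokenizer with every special token mapped to `<|endoftext|>` and a caption budget of `max_length: 32` (well under the 1024-token `model_max_length`). A minimal sketch with a made-up caption:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("VK246/IC_ver6M_coco_swin_gpt2_50A_1e")
batch = tokenizer(
    ["a dog runs across a grassy field"],  # placeholder caption
    padding="max_length",
    truncation=True,
    max_length=32,  # matches the max_length stored above
    return_tensors="pt",
)
print(batch.input_ids.shape)  # torch.Size([1, 32])
```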
vocab.json ADDED
The diff for this file is too large to render. See raw diff