scott156 committed
Commit 0e9b72a
1 Parent(s): c7fca06

End of training

Files changed (3):
  1. README.md +72 -0
  2. generation_config.json +9 -0
  3. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,72 @@
---
license: apache-2.0
base_model: google/long-t5-tglobal-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: LongT5-Large-NSPCC
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# LongT5-Large-NSPCC

This model is a fine-tuned version of [google/long-t5-tglobal-large](https://huggingface.co/google/long-t5-tglobal-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5481
- Rouge1: 0.4597
- Rouge2: 0.1665
- Rougel: 0.2562
- Rougelsum: 0.2557
- Gen Len: 250.6383

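A minimal inference sketch, assuming the checkpoint is published as `scott156/LongT5-Large-NSPCC` (the repository id is an assumption) and that `generate()` picks up the decoding defaults from the `generation_config.json` added in this commit:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed repository id; adjust if the model lives elsewhere.
model_id = "scott156/LongT5-Large-NSPCC"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = "..."  # a long input document to summarise

inputs = tokenizer(document, return_tensors="pt")
# generate() uses the defaults shipped in generation_config.json
# (num_beams=3, max_new_tokens=400, no_repeat_ngram_size=5).
summary_ids = model.generate(**inputs)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
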
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6

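A hedged sketch of how the values above would map onto a `Seq2SeqTrainingArguments` object; only the hyperparameters listed in this card are grounded, while `output_dir`, the evaluation cadence and `predict_with_generate` are assumptions about a typical summarisation fine-tuning setup:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="LongT5-Large-NSPCC",   # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,     # effective train batch size: 1 x 4 = 4
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=6,
    evaluation_strategy="epoch",       # assumption, consistent with the per-epoch results below
    predict_with_generate=True,        # assumption, needed to compute ROUGE during evaluation
)
```
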
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------:|
| 6.0521        | 1.0   | 188  | 2.8154          | 0.2268 | 0.0411 | 0.1627 | 0.1626    | 145.7447 |
| 2.5796        | 2.0   | 377  | 1.9961          | 0.3798 | 0.1115 | 0.2103 | 0.2101    | 220.234  |
| 2.0398        | 3.0   | 566  | 1.7703          | 0.4208 | 0.1319 | 0.2255 | 0.2258    | 299.6915 |
| 1.7329        | 4.0   | 755  | 1.5996          | 0.4427 | 0.1488 | 0.2423 | 0.2424    | 255.2553 |
| 1.5609        | 5.0   | 943  | 1.5510          | 0.4688 | 0.1726 | 0.2578 | 0.2576    | 289.2979 |
| 1.4733        | 5.98  | 1128 | 1.5481          | 0.4597 | 0.1665 | 0.2562 | 0.2557    | 250.6383 |

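The ROUGE values in this table are fractions in [0, 1], and Gen Len is the mean length, in tokens, of the generated summaries. A small sketch of reproducing scores on this scale with the `evaluate` library (the prediction and reference strings are placeholders):

```python
import evaluate

# Placeholder data; in practice these are the generated summaries and the
# reference summaries from the evaluation set.
predictions = ["the model generates an abstractive summary of the document"]
references = ["the model produces an abstractive summary of the document"]

rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum as floats in [0, 1]
```
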
### Framework versions

- Transformers 4.39.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
generation_config.json ADDED
@@ -0,0 +1,9 @@
{
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "max_new_tokens": 400,
  "no_repeat_ngram_size": 5,
  "num_beams": 3,
  "pad_token_id": 0,
  "transformers_version": "4.39.2"
}
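These settings become the default decoding configuration for `generate()`. A brief sketch of loading and inspecting them, and overriding one value per call (repository id assumed as in the README above):

```python
from transformers import GenerationConfig

# Assumed repository id; see the usage sketch in the README above.
gen_config = GenerationConfig.from_pretrained("scott156/LongT5-Large-NSPCC")
print(gen_config.num_beams, gen_config.max_new_tokens, gen_config.no_repeat_ngram_size)

# Any default can be overridden at call time, e.g. for shorter summaries:
# summary_ids = model.generate(**inputs, max_new_tokens=200)
```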
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:56161b2fa76bd832c2ea0df03ffdbb7ce637cdf6accf8e627166c011273ec775
+oid sha256:fa2f3eb3c63f6f12345249ed45486bfa1ea0d26880dbcaa962c26ace69c85a07
 size 3132774536
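The lines above are a Git LFS pointer, not the weights themselves; the roughly 3.13 GB `model.safetensors` file is resolved when the repository is downloaded. A hedged sketch with `huggingface_hub` (repository id assumed as above):

```python
from huggingface_hub import hf_hub_download

# Downloads the actual safetensors weights referenced by the LFS pointer
# into the local Hugging Face cache and returns the file path.
weights_path = hf_hub_download(
    repo_id="scott156/LongT5-Large-NSPCC",  # assumed repo id
    filename="model.safetensors",
)
print(weights_path)
```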