vickt committed on
Commit
7be2d95
1 Parent(s): 5347d4e

End of training

README.md ADDED
@@ -0,0 +1,74 @@
+ ---
+ license: mit
+ base_model: facebook/bart-large-cnn
+ tags:
+ - generated_from_trainer
+ metrics:
+ - rouge
+ - precision
+ - recall
+ - f1
+ model-index:
+ - name: BART_CNNDM_ORIGIN
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # BART_CNNDM_ORIGIN
+
+ This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.6921
+ - Rouge1: 0.3423
+ - Rouge2: 0.144
+ - Rougel: 0.2434
+ - Rougelsum: 0.3142
+ - Gen Len: 73.4636
+ - Precision: 0.8695
+ - Recall: 0.8927
+ - F1: 0.8808
+
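The card lists Precision/Recall/F1 alongside ROUGE without naming the scorer. A hedged sketch of computing comparable metrics with the `evaluate` library follows; reading the Precision/Recall/F1 values as BERTScore is an assumption, not something this commit confirms.

```python
# Hedged sketch: ROUGE plus (assumed) BERTScore-style Precision/Recall/F1
# via the `evaluate` library. The evaluation script is not part of this
# commit, so the BERTScore interpretation is an assumption.
import evaluate

predictions = ["the cat sat on the mat and refused to move"]
references = ["a cat sat on the mat and would not move"]

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))
# {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}

bertscore = evaluate.load("bertscore")
scores = bertscore.compute(predictions=predictions, references=references, lang="en")
print(scores["precision"][0], scores["recall"][0], scores["f1"][0])
```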
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 4
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 2
+ - mixed_precision_training: Native AMP
+
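For context, a minimal sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments` in Transformers; the training script is not included in this commit, so `output_dir` and `predict_with_generate` below are assumptions.

```python
# Hypothetical sketch only: the actual training script is not part of this
# commit. It maps the card's hyperparameters onto Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="BART_CNNDM_ORIGIN",   # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,    # 8 x 4 = total train batch size of 32
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                        # "Native AMP" mixed precision
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    predict_with_generate=True,       # assumed, for ROUGE during evaluation
)
```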
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Precision | Recall | F1 |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:---------:|:------:|:------:|
+ | 1.2137 | 1.0 | 625 | 1.6451 | 0.3343 | 0.1359 | 0.2346 | 0.3043 | 72.7655 | 0.8678 | 0.891 | 0.8791 |
+ | 1.054 | 2.0 | 1250 | 1.6921 | 0.3423 | 0.144 | 0.2434 | 0.3142 | 73.4636 | 0.8695 | 0.8927 | 0.8808 |
+
+
+ ### Framework versions
+
+ - Transformers 4.36.0
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.14.5
+ - Tokenizers 0.15.0
generation_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "forced_bos_token_id": 0,
+   "forced_eos_token_id": 2,
+   "length_penalty": 2.0,
+   "max_length": 142,
+   "min_length": 56,
+   "no_repeat_ngram_size": 3,
+   "num_beams": 4,
+   "pad_token_id": 1,
+   "transformers_version": "4.36.0"
+ }
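These generation defaults travel with the model: `from_pretrained` picks up `generation_config.json`, so a plain `generate()` call uses 4-beam search with `length_penalty=2.0` and 56-142 token outputs. A minimal usage sketch follows; the repo id is an assumption inferred from the committer and model name.

```python
# Minimal usage sketch; the repo id is an assumption. generation_config.json
# is loaded automatically by from_pretrained, so generate() defaults to
# 4-beam search, length_penalty=2.0, and 56-142 token summaries.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "vickt/BART_CNNDM_ORIGIN"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

article = "Some long news article to summarize ..."
inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(**inputs)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```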
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cc99b6907dce61e9078c392d0fbd4d31225cdb505f661806fca1de54f22cbc98
+ oid sha256:ac91eb64edcb8200a37f7ec051af40e18a45148feea01d2d8494085a9cde8a8f
  size 1625422896
runs/Jan02_22-40-05_vmi23bctr1704175002993-tsvtb/events.out.tfevents.1704206408.vmi23bctr1704175002993-tsvtb.25248.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:37b82d07ca91205ba3ed98f4572d6623066398d509e325aa6db74482a7ca047d
- size 6540
+ oid sha256:5072383f35d5d36522e63a49f1aa5542f3b882de0351f5cf1af085636b3e4f01
+ size 7568