smrynrz20 committed afe0943 (1 parent: d4179b2)

Model save

README.md CHANGED
@@ -14,6 +14,8 @@ should probably proofread and complete it, then remove this comment. -->
 # bart_samsum
 
 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.4042
 
 ## Model description
 
@@ -41,15 +43,18 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs: 1
+- num_epochs: 3
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 1.0235        | 2.17  | 500  | 1.4042          |
 
 
 ### Framework versions
 
-- Transformers 4.36.2
+- Transformers 4.37.2
 - Pytorch 2.1.0+cu121
-- Datasets 2.16.1
-- Tokenizers 0.15.0
+- Datasets 2.17.0
+- Tokenizers 0.15.1
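
The hyperparameters listed in the README hunk above map onto a `Seq2SeqTrainingArguments` configuration roughly like the following. This is a minimal sketch: only the epoch count, scheduler, warmup steps, and Adam settings come from the diff; the output directory, learning rate, and batch size are hypothetical placeholders.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the training setup implied by the README hunk above.
# Only num_epochs, the linear scheduler, warmup_steps, and the Adam
# betas/epsilon appear in the diff; every other value is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart_samsum",        # placeholder, not taken from the commit
    num_train_epochs=3,              # bumped from 1 to 3 in this commit
    lr_scheduler_type="linear",
    warmup_steps=500,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    learning_rate=5e-5,              # placeholder, not listed in this hunk
    per_device_train_batch_size=8,   # placeholder, not listed in this hunk
    predict_with_generate=True,      # assumption for a summarization run
)
```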
config.json CHANGED
@@ -64,7 +64,7 @@
     }
   },
   "torch_dtype": "float32",
-  "transformers_version": "4.36.2",
+  "transformers_version": "4.37.2",
   "use_cache": true,
   "vocab_size": 50264
 }
generation_config.json CHANGED
@@ -12,5 +12,5 @@
   "no_repeat_ngram_size": 3,
   "num_beams": 4,
   "pad_token_id": 1,
-  "transformers_version": "4.36.2"
+  "transformers_version": "4.37.2"
 }
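
Since these beam-search settings (num_beams=4, no_repeat_ngram_size=3, pad_token_id=1) ship in generation_config.json, they are applied automatically when the checkpoint is loaded. A minimal usage sketch, assuming the checkpoint is published under the repo id smrynrz20/bart_samsum (an assumption based on this commit's author and model name):

```python
from transformers import pipeline

# Assumes the checkpoint is available as "smrynrz20/bart_samsum";
# swap in a local path or another repo id if that is not the case.
summarizer = pipeline("summarization", model="smrynrz20/bart_samsum")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

# num_beams=4 and no_repeat_ngram_size=3 come from generation_config.json,
# so no generation arguments need to be passed explicitly.
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```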
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:76cce8cadd66e7aef4daac22435666c3f86e9f417a833dbeb6c7b77786f44c3b
+oid sha256:89322c292b95b13c6a6fea1c25342b0fcb7101c8a1c5b2cba9d468425a917690
 size 1625422896
runs/Feb14_00-49-21_b1ad44074ccc/events.out.tfevents.1707871766.b1ad44074ccc.229.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f085a1925d50f8d387f8292d0f3b8f7c6cd40fb01d1de0d1e76b41f0db4613af
+size 5323
runs/Feb14_00-49-49_b1ad44074ccc/events.out.tfevents.1707871790.b1ad44074ccc.229.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ff7ffbb3d3036632247d32397712fb06d9a1146a4d87775eb8299903941b3f3
+size 16745
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0bbc039895d555705d20ae5b0c783ea2862e9f5cb67bdfbfe7b3fcf155396151
+oid sha256:3f9a65ac4a3f587002ac384edca83d239ecf43a6f4ad7b6b422d98b17d069e15
 size 4664
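
training_args.bin is the pickled `TrainingArguments` object the `Trainer` writes next to the checkpoint, so the hyperparameter change can also be verified directly from this file. A quick sketch, assuming it is run from a local clone of the repo with transformers installed:

```python
import torch

# training_args.bin is a pickled (Seq2Seq)TrainingArguments object saved by
# the Trainer; unpickling it requires transformers to be importable.
args = torch.load("training_args.bin")
print(args.num_train_epochs)   # expected after this commit: 3
print(args.lr_scheduler_type)  # expected: SchedulerType.LINEAR
print(args.warmup_steps)       # expected: 500
```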