DunnBC22 committed on
Commit: f8c8be0
Parent: 8f0e367

Update README.md

Files changed (1): README.md (+12, -10)
README.md CHANGED
@@ -5,16 +5,16 @@ tags:
 metrics:
 - rouge
 model-index:
-- name: flan-t5-base-text_summarization_data_3_epochs
+- name: flan-t5-base-text_summarization_data_6_epochs
   results: []
+language:
+- en
+pipeline_tag: summarization
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+# flan-t5-base-text_summarization_data_6_epochs
 
-# flan-t5-base-text_summarization_data_3_epochs
-
-This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
+This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base).
 It achieves the following results on the evaluation set:
 - Loss: 1.6783
 - Rouge1: 43.5994
@@ -25,15 +25,17 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+This is a text summarization model.
+
+For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Text%20Summarization/Text-Summarized%20Data%20-%20Comparison/Flan-T5%20-%20Text%20Summarization%20-%206%20Epochs.ipynb
 
 ## Intended uses & limitations
 
-More information needed
+This model is intended to demonstrate my ability to solve a complex problem using technology.
 
 ## Training and evaluation data
 
-More information needed
+Dataset Source: https://www.kaggle.com/datasets/cuitengfeui/textsummarization-data
 
 ## Training procedure
 
@@ -65,4 +67,4 @@ The following hyperparameters were used during training:
 - Transformers 4.26.1
 - Pytorch 1.13.1+cu116
 - Datasets 2.10.1
-- Tokenizers 0.13.2
+- Tokenizers 0.13.2
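
For reference, a minimal usage sketch for the model this card describes. It assumes the checkpoint is published on the Hugging Face Hub as `DunnBC22/flan-t5-base-text_summarization_data_6_epochs` (a repo id inferred from the model name in this diff, not stated in it) and uses the standard `transformers` summarization pipeline:

```python
# Minimal usage sketch.
# Assumption: the fine-tuned checkpoint lives on the Hub under
# "DunnBC22/flan-t5-base-text_summarization_data_6_epochs"; adjust the repo id if it differs.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="DunnBC22/flan-t5-base-text_summarization_data_6_epochs",
)

text = (
    "Paste the article or passage to summarize here. FLAN-T5 is a text-to-text model, "
    "so the fine-tuned checkpoint generates an abstractive summary of the input."
)

# max_length / min_length bound the generated summary length in tokens.
print(summarizer(text, max_length=128, min_length=20, do_sample=False)[0]["summary_text"])
```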
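The card tracks ROUGE on the evaluation set (Loss: 1.6783, Rouge1: 43.5994). As a hedged sketch, scores of this kind are commonly computed with the `evaluate` library; this is not necessarily the exact evaluation code behind the numbers above:

```python
# Hedged sketch: computing ROUGE with the `evaluate` library (recent versions return plain floats).
# The predictions/references below are placeholders, not the model's actual evaluation data.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["the model produced this summary"]
references = ["the reference summary written by a person"]

scores = rouge.compute(predictions=predictions, references=references)

# `scores` maps metric names (rouge1, rouge2, rougeL, rougeLsum) to values in [0, 1];
# model cards like this one typically report them multiplied by 100 (e.g. Rouge1: 43.5994).
print({name: round(value * 100, 4) for name, value in scores.items()})
```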