DunnBC22 committed on
Commit
0311727
1 Parent(s): 3684006

Update README.md

Files changed (1)
  1. README.md +9 -7
README.md CHANGED
@@ -7,11 +7,11 @@ metrics:
 model-index:
 - name: flan-t5-base-text_summarization_data
   results: []
+language:
+- en
+pipeline_tag: summarization
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # flan-t5-base-text_summarization_data
 
 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
@@ -25,15 +25,17 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+This is a text summarization model.
+
+For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Text%20Summarization/Text-Summarized%20Data%20-%20Comparison/Flan-T5%20-%20Text%20Summarization%20-%201%20Epoch.ipynb
 
 ## Intended uses & limitations
 
-More information needed
+This model is intended to demonstrate my ability to solve a complex problem using technology.
 
 ## Training and evaluation data
 
-More information needed
+Dataset Source: https://www.kaggle.com/datasets/cuitengfeui/textsummarization-data
 
 ## Training procedure
 
@@ -60,4 +62,4 @@ The following hyperparameters were used during training:
 - Transformers 4.26.1
 - Pytorch 1.12.1
 - Datasets 2.9.0
-- Tokenizers 0.12.1
+- Tokenizers 0.12.1
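For reference, the README's YAML front matter after this commit — reconstructed only from the hunk lines visible in the diff; front-matter lines before `metrics:` are not shown here and are omitted — would read:

```yaml
# (earlier front-matter lines, including the values under `metrics:`,
#  are outside the diff context and not reproduced here)
metrics:
model-index:
- name: flan-t5-base-text_summarization_data
  results: []
language:
- en
pipeline_tag: summarization
```

The added `language` and `pipeline_tag` fields let the Hugging Face Hub index the model under English summarization models and enable the hosted inference widget for that task.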