talalH committed
Commit 1b25e8c
Parent: 30cbcdb

Update README.md

Files changed (1): README.md (+9 -9)
README.md CHANGED
@@ -1,7 +1,12 @@
 ---
-# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
-# Doc / guide: https://huggingface.co/docs/hub/model-cards
-{}
+datasets:
+- xsum
+- quora
+language:
+- en
+metrics:
+- rouge
+pipeline_tag: text2text-generation
 ---
 
 # Model
@@ -15,9 +20,6 @@ The fine-tuned T5 base model is trained on a carefully curated subset of the XSu
 **Transfer Learning for Summarization:**
 Transfer learning is employed to enhance the model's performance in generating summaries. The T5 base model, pre-trained on a large corpus of text, is fine-tuned using the curated dataset mentioned above. This process allows the model to leverage its pre-existing knowledge while adapting specifically to the summarization task. By fine-tuning the model, we aim to improve its ability to capture important information and generate concise summaries.
 
-**Target Output Length:**
-One notable difference between this model and other similar models is that it is trained on the target output length of 512. This means that the model is explicitly trained to generate summaries that are up to 512 tokens long. By focusing on this target output length, we aim to provide summaries that are more comprehensive and informative, while still maintaining a reasonable length.
-
 **Enhanced Support for Greater Length Output:**
 We are confident that this fine-tuned T5 model will generate the best possible summaries, particularly for supporting greater length outputs. By training the model with a specific focus on generating longer summaries, we have enhanced its ability to handle and convey more detailed information. This makes the model particularly useful in scenarios where longer summaries are required, such as summarizing lengthy documents or providing in-depth analysis.
 
@@ -99,6 +101,4 @@ Talal Hassan (talalhassan141@gmail.com)
 
 ## Model Card Contact
 
-Talal Hassan (talalhassan141@gmail.com)
-
-
+Talal Hassan (talalhassan141@gmail.com)
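For context on the "Transfer Learning for Summarization" section above, here is a minimal sketch of the kind of T5 fine-tuning setup it describes. The 1% XSum slice, the `summarize:` task prefix, and the hyperparameters are illustrative assumptions; the card itself does not specify the curated subset or the training configuration.

```python
# Hedged sketch of fine-tuning t5-base for summarization on an XSum subset.
# Subset size, prefix, and hyperparameters are assumptions, not card details.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# A small slice stands in for the "carefully curated subset" the card mentions.
dataset = load_dataset("xsum", split="train[:1%]")

def preprocess(batch):
    inputs = tokenizer(
        ["summarize: " + doc for doc in batch["document"]],
        max_length=512,
        truncation=True,
    )
    # A 512-token target length follows the "Target Output Length" note
    # in the previous revision of this card.
    labels = tokenizer(text_target=batch["summary"], max_length=512, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="t5-base-xsum-512", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```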
 
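The new front matter declares `pipeline_tag: text2text-generation`, so the model should load through the standard `transformers` pipeline. A hedged usage sketch follows; the repository id is a placeholder (the commit page does not name it), and `max_length=512` echoes the 512-token target output length described in the section this commit removes.

```python
# Hedged usage sketch; MODEL_ID is a placeholder, not the actual repo name.
from transformers import pipeline

MODEL_ID = "your-username/your-finetuned-t5"  # placeholder repo id

summarizer = pipeline("text2text-generation", model=MODEL_ID)

document = (
    "The city council met on Tuesday to debate the new transit plan, "
    "which would add three bus routes and extend light-rail service."
)
# The "summarize: " prefix is the usual T5 convention, assumed here.
result = summarizer("summarize: " + document, max_length=512)
print(result[0]["generated_text"])
```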