# Model

**NOTE:** Feel free to download this model for inference — *you won't regret it!* :)
<!-- Provide a quick summary of what the model is/does. -->

This model card describes a fine-tuned T5-base model trained for abstractive summarization. We modified the training process to optimize the model's performance, in particular to support longer outputs. One notable difference from similar models is that this one is trained with a target output length of 512, meaning it is explicitly trained to generate summaries of up to 512 tokens. By targeting this output length, we aim to produce summaries of long documents that are more comprehensive and informative while still remaining reasonably concise.
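As a rough usage sketch, the checkpoint can be loaded with the Hugging Face `transformers` library. The `model_id` default below (`"t5-base"`) is a placeholder, not this model's repo id — substitute the actual id from this page. The `max_length=512` setting mirrors the target output length described above.

```python
def build_input(text: str) -> str:
    # T5 summarization checkpoints expect a task prefix on the source text.
    return "summarize: " + text.strip()


def summarize(text: str, model_id: str = "t5-base") -> str:
    # Import kept local so build_input() is usable without transformers installed.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained(model_id)
    model = T5ForConditionalGeneration.from_pretrained(model_id)
    inputs = tokenizer(build_input(text), return_tensors="pt", truncation=True)
    # max_length=512 matches the target output length the model was trained on.
    summary_ids = model.generate(
        inputs.input_ids, max_length=512, num_beams=4, early_stopping=True
    )
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```

Beam search (`num_beams=4`) is a common default for summarization but is an assumption here, not a documented setting of this model; greedy decoding or sampling also work.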