mohamedemam committed on
Commit 0794067
1 Parent(s): 143b185

Update README.md

Files changed (1):
  README.md +3 -1
README.md CHANGED
@@ -11,6 +11,8 @@ tags:
 - generate text
 - nlp
 - dataset maker
+- flan t5
+- t5
 ---
 
 # Model Card for QA_GeneraToR
@@ -36,7 +38,7 @@ alt="drawing" width="600"/>
 If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, also covering more languages.
 As mentioned in the first few lines of the abstract:
 > Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
-
+This model is fine-tuned to generate a question and its answer from a given context. This can be very useful: it lets you build a dataset from a book, an article, or any other text, and then train another model on that dataset.
 **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [T5 model card](https://huggingface.co/t5-large).
 
 # Model Details
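
For context on the sentence added in this commit, here is a minimal sketch (not part of the diff) of how such a checkpoint could turn a context into a question/answer pair with 🤗 Transformers. The repo id `mohamedemam/QA_GeneraToR`, the bare-context input, and the shape of the output are assumptions inferred from the model card's title, not confirmed by this commit.

```python
# Minimal sketch (assumed usage, not from this commit): generate a
# question/answer pair from a context with the fine-tuned seq2seq checkpoint.
# The repo id and prompt format are assumptions; check the model card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mohamedemam/QA_GeneraToR"  # assumed repo id for this model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = "The Nile is the longest river in Africa, flowing north through eleven countries."
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Repeating this over many contexts (pages of a book, paragraphs of an
# article) would yield a QA dataset for training another model.
```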