---
datasets:
- squad
tags:
- question-generation
widget:
- text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
license: mit
---

## T5 for multi-task QA and QG
This is a multi-task [t5-base](https://arxiv.org/abs/1910.10683) model trained for question answering and answer-aware question generation.

For question generation, the answer spans are highlighted within the text with special highlight tokens (`<hl>`) and the input is prefixed with 'generate question: '. For QA, the input is formatted as `question: question_text context: context_text </s>`.

You can play with the model using the inference API. Here's how you can use it.

For QG:

`generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>`

For QA:

`question: What is 42 context: 42 is the answer to life, the universe and everything. </s>`
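
If you'd rather run these prompts locally than through the inference API, here is a minimal sketch using the plain `transformers` API (the `run` helper and the generation settings are illustrative assumptions, not part of the original card):

```python3
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-qa-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-base-qa-qg-hl")

def run(prompt):
    # tokenize the task-prefixed prompt and decode the generated text
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# QG: the answer span is wrapped in <hl> highlight tokens
print(run("generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"))

# QA: question and context packed into a single prompt
print(run("question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"))
```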

For more details, see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline

nlp = pipeline("multitask-qa-qg", model="valhalla/t5-base-qa-qg-hl")

# to generate questions, simply pass the text
nlp("42 is the answer to life, the universe and everything.")
# => [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]

# for QA, pass a dict with "question" and "context"
nlp({
    "question": "What is 42?",
    "context": "42 is the answer to life, the universe and everything."
})
# => 'the answer to life, the universe and everything'
```