julien-c (HF staff) committed
Commit: 8ddeab8
1 Parent(s): a1af0fb

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/valhalla/t5-small-qa-qg-hl/README.md

Files changed (1):
README.md ADDED (+49 -0)

---
datasets:
- squad
tags:
- question-generation
widget:
- text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
license: mit
---

## T5 for multi-task QA and QG
This is a multi-task [t5-small](https://arxiv.org/abs/1910.10683) model trained for question answering and answer-aware question generation.

For question generation, the answer span is highlighted within the text with special highlight tokens (`<hl>`) and the input is prefixed with `generate question: `. For QA, the input is formatted as `question: question_text context: context_text </s>`.

You can play with the model using the Inference API. Here's how you can use it:

For QG:

`generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>`

For QA:

`question: What is 42 context: 42 is the answer to life, the universe and everything. </s>`

For more details see [this](https://github.com/patil-suraj/question_generation) repo.
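
The checkpoint can also be queried directly through 🤗 Transformers, without cloning the repo below. This is a minimal sketch, assuming the standard `AutoModelForSeq2SeqLM` seq2seq API; the generation settings are illustrative, not tuned:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load this checkpoint from the Hub.
tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-small-qa-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-small-qa-qg-hl")

def run(prompt):
    # Tokenize the prompt and decode a short output.
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=32)  # max_length is an arbitrary choice
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# QG: highlight the answer span with <hl> tokens.
run("generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>")

# QA: prefix the input with `question:` and `context:`.
run("question: What is 42 context: 42 is the answer to life, the universe and everything. </s>")
```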

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python
from pipelines import pipeline

nlp = pipeline("multitask-qa-qg")

# To generate questions, simply pass the text.
nlp("42 is the answer to life, the universe and everything.")
# => [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]

# For QA, pass a dict with "question" and "context".
nlp({
    "question": "What is 42 ?",
    "context": "42 is the answer to life, the universe and everything."
})
# => 'the answer to life, the universe and everything'
```