abhitopia committed
Commit fb8ffc5
Parent: 0517de4

Added model

README.md CHANGED
@@ -1,3 +1,50 @@
  ---
+ datasets:
+ - squad
+ tags:
+ - question-answer-generation
+ widget:
+ - text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"
+ - text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
  license: mit
  ---
+
+ ## T5 for multi-task QA and QG
+ This is a multi-task [t5-base](https://arxiv.org/abs/1910.10683) model trained for question answering and answer-aware question generation.
+
+ For question generation, the answer span is highlighted within the text with special highlight tokens (`<hl>`) and the input is prefixed with `generate question: `. For QA, the input is formatted as `question: question_text context: context_text </s>`.
+
+ You can play with the model using the inference API. Here is how to use it.
+
+ For QG:
+
+ `generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>`
+
+ For QA:
+
+ `question: What is 42 context: 42 is the answer to life, the universe and everything. </s>`
+
+ For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
+
+ ### Model in action 🚀
+
+ You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
+
+ ```python
+ from pipelines import pipeline  # pipelines.py from the cloned repo
+
+ nlp = pipeline("multitask-qa-qg", model="valhalla/t5-base-qa-qg-hl")
+
+ # to generate questions, simply pass the text
+ nlp("42 is the answer to life, the universe and everything.")
+ # => [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]
+
+ # for QA, pass a dict with "question" and "context"
+ nlp({
+     "question": "What is 42 ?",
+     "context": "42 is the answer to life, the universe and everything."
+ })
+ # => 'the answer to life, the universe and everything'
+ ```
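Because the QG/QA input formats above are plain text prefixes, the model can also be driven directly through the `transformers` API, without cloning the repo. A minimal sketch: the model id is taken from the example above, but the generation settings (`num_beams`, `max_length`) are illustrative assumptions, not values from the repo.

```python
# Hedged sketch: query the model with the raw QG/QA input formats.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-qa-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-base-qa-qg-hl")

def run(prompt: str) -> str:
    # Encode the prefixed prompt, beam-search a completion, strip special tokens.
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, num_beams=4, max_length=32)  # assumed settings
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# QG: the answer span is wrapped in <hl> tokens.
print(run("generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"))

# QA: question plus context.
print(run("question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"))
```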
added_tokens.json ADDED
@@ -0,0 +1 @@
+ {"<sep>": 32100, "<hl>": 32101}
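The two added tokens extend the base T5 SentencePiece vocabulary (32,100 pieces), so their ids start at 32100; `config.json` below accordingly declares `vocab_size: 32102`. A quick sanity check, assuming the same model id as in the README:

```python
# Sketch: confirm the added tokens resolve to the ids recorded above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-qa-qg-hl")
assert tokenizer.convert_tokens_to_ids("<sep>") == 32100
assert tokenizer.convert_tokens_to_ids("<hl>") == 32101
print(len(tokenizer))  # 32102 = 32100 base pieces + 2 added tokens
```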
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "architectures": [
+     "T5ForConditionalGeneration"
+   ],
+   "d_ff": 3072,
+   "d_kv": 64,
+   "d_model": 768,
+   "decoder_start_token_id": 0,
+   "dropout_rate": 0.1,
+   "eos_token_id": 1,
+   "initializer_factor": 1.0,
+   "is_encoder_decoder": true,
+   "layer_norm_epsilon": 1e-06,
+   "model_type": "t5",
+   "n_positions": 512,
+   "num_heads": 12,
+   "num_layers": 12,
+   "output_past": true,
+   "pad_token_id": 0,
+   "relative_attention_num_buckets": 32,
+   "task_specific_params": {
+     "translation_en_to_fr": {
+       "early_stopping": true,
+       "max_length": 32,
+       "num_beams": 4,
+       "prefix": ""
+     }
+   },
+   "vocab_size": 32102
+ }
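These are the standard `t5-base` dimensions (`d_model` 768, 12 layers, 12 heads); only `vocab_size` differs from stock, reflecting the two added tokens. The `translation_en_to_fr` entry is presumably carried over from the base config and is not used for QA/QG. A small sketch to inspect the config, assuming the README's model id:

```python
# Sketch: load the config and check the t5-base dimensions listed above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("valhalla/t5-base-qa-qg-hl")
print(config.model_type)  # t5
print(config.d_model)     # 768
print(config.num_layers)  # 12
print(config.vocab_size)  # 32102 (embedding rows include the added tokens)
```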
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f6e4bde0006f8f6cc131bf2c6efe609708af31961cd77142fc4d7dae5ea1a016
+ size 891612585
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d60acb128cf7b7f2536e8f38a5b18a05535c9e14c7a355904270e15b0945ea86
+ size 791656
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"model_max_length": 512, "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
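Together, `special_tokens_map.json` and `tokenizer_config.json` pin the usual T5 special tokens and a 512-token limit matching `n_positions` in `config.json`. A hedged check, again assuming the README's model id:

```python
# Sketch: verify the special tokens and length limit declared in these files.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-qa-qg-hl")
print(tokenizer.eos_token)         # </s>
print(tokenizer.unk_token)         # <unk>
print(tokenizer.pad_token)         # <pad>
print(tokenizer.model_max_length)  # 512
```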
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4914be32ce1a3c9667afa59c49853eb9eccb781009b4fb0eb0ee6168a6a7748c
+ size 1087