---
language: en
datasets:
- c4
- wikipedia
- natural_questions

license: apache-2.0
---

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.

The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), then further pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
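Salient span masking differs from T5's random span corruption in that the masked span is specifically a named entity or date. A minimal sketch of how such an example is cast into T5's text-to-text format (the sentinel tokens are real T5 vocabulary items; the sentence, the chosen span, and the helper function are illustrative, not the paper's actual preprocessing code):

```python
# Sketch: one REALM-style salient span masking example in T5's
# text-to-text format. The salient span (a named entity) is replaced
# by a sentinel token in the input; the target reproduces the span.

def salient_span_mask(text: str, span: str) -> tuple[str, str]:
    """Mask one salient span with T5's first sentinel token."""
    assert span in text, "span must occur in the text"
    inputs = text.replace(span, "<extra_id_0>", 1)
    targets = f"<extra_id_0> {span} <extra_id_1>"
    return inputs, targets

sentence = "Franklin D. Roosevelt was born in January 1882."
inputs, targets = salient_span_mask(sentence, "Franklin D. Roosevelt")
print(inputs)   # <extra_id_0> was born in January 1882.
print(targets)  # <extra_id_0> Franklin D. Roosevelt <extra_id_1>
```

Training the model to reconstruct the masked entity forces it to store factual knowledge in its parameters, which is what closed-book QA then exploits.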

**Note**: The model was fine-tuned on 90% of the train split of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 20k steps and validated on the held-out 10% of the train split.

Other community checkpoints: [here](https://huggingface.co/models?search=ssm)

Paper: [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/abs/2002.08910)

Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*


## Results on Natural Questions - Test Set

|Model | Link | Exact Match |
|---|---|---|
|T5-large|https://huggingface.co/google/t5-large-ssm-nqo|29.0|
|**T5-3b**|**https://huggingface.co/google/t5-3b-ssm-nqo**|**31.7**|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nqo|35.2|
|T5-11b|https://huggingface.co/google/t5-11b-ssm-nqo|34.8|

## Usage

The model can be used as follows for **closed book question answering**:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned checkpoint and its tokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-large-ssm-nqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-large-ssm-nqo")

# The question is passed directly as input -- no context passage is needed
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]

# Decode the generated answer
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```

## Abstract

It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/how_much_know_ledge_image.png)