---
language: en
datasets:
- squad_v2
license: cc-by-4.0
---

# ONNX conversion of roberta-base for QA

## Conversion of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2)

NOTE: This is version 2 of the model. See [this github issue](https://github.com/deepset-ai/FARM/issues/552) from the FARM repository for an explanation of why we updated. If you'd like to use version 1, specify `revision="v1.0"` when loading the model in Transformers 3.5. For example:
```python
from transformers import pipeline

model_name = "deepset/roberta-base-squad2"
pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="question-answering")
```
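
The card does not record the exact export command, but a comparable ONNX export can be reproduced with the `transformers.onnx` package (available in Transformers 4.x; a hypothetical reconstruction, not necessarily what was used for this repository):
```python
# Hypothetical reconstruction of the ONNX export, not the exact command
# used for this repository. Requires `pip install transformers[onnx]`.
from pathlib import Path

from transformers import AutoModelForQuestionAnswering, AutoTokenizer
from transformers.onnx import FeaturesManager, export

model_id = "deepset/roberta-base-squad2"
model = AutoModelForQuestionAnswering.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Look up the ONNX config that matches this architecture and task
model_kind, onnx_config_cls = FeaturesManager.check_supported_model_or_raise(
    model, feature="question-answering"
)
onnx_config = onnx_config_cls(model.config)

# Trace the model and write model.onnx to the working directory
onnx_inputs, onnx_outputs = export(
    preprocessor=tokenizer,
    model=model,
    config=onnx_config,
    opset=onnx_config.default_onnx_opset,
    output=Path("model.onnx"),
)
```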

## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM)
**Infrastructure:** 4x Tesla V100

## Hyperparameters

```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```

## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has comparable prediction quality and runs at twice the speed of the base model.

## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).

```
"exact": 79.87029394424324,
"f1": 82.91251169582613,

"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
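
If you want to sanity-check numbers in this format without the Codalab script, the `squad_v2` metric from the Hugging Face `evaluate` library computes the same EM/F1 aggregates. A small illustration (not part of the original evaluation):
```python
# Illustrative cross-check with the `evaluate` library's squad_v2 metric
# (not the official script used for the numbers above).
from evaluate import load

squad_v2_metric = load("squad_v2")

# One toy prediction/reference pair; a real evaluation uses the full dev set
predictions = [{
    "id": "56ddde6b9a695914005b9628",
    "prediction_text": "France",
    "no_answer_probability": 0.0,
}]
references = [{
    "id": "56ddde6b9a695914005b9628",
    "answers": {"text": ["France"], "answer_start": [159]},
}]

results = squad_v2_metric.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])  # aggregate EM and F1
```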

## Usage

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/roberta-base-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
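
Since this repository hosts the ONNX export, you can also run it directly with ONNX Runtime. A minimal sketch, assuming the export is saved locally as `model.onnx` and exposes the usual `input_ids`/`attention_mask` inputs and `start_logits`/`end_logits` outputs:
```python
# Minimal ONNX Runtime sketch. Assumes a local `model.onnx` with
# `input_ids`/`attention_mask` inputs and `start_logits`/`end_logits`
# outputs, the layout of the standard question-answering export.
import numpy as np
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
session = InferenceSession("model.onnx")

question = "Why is model conversion important?"
context = ("The option to convert models between FARM and transformers gives "
           "freedom to the user and lets people easily switch between frameworks.")

inputs = tokenizer(question, context, return_tensors="np")
start_logits, end_logits = session.run(None, dict(inputs))

# Greedy span selection: highest start and end logits (batch size is 1)
start = int(np.argmax(start_logits))
end = int(np.argmax(end_logits))
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```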

### In FARM

```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer

model_name = "deepset/roberta-base-squad2"

# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)

# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```

### In haystack
For doing QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in [haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader  # module path may vary by haystack version

reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2", tokenizer="deepset/roberta-base-squad2")
```
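
To actually query a document collection with this reader, it is typically combined with a retriever in a pipeline. A hedged sketch assuming the haystack 1.x module layout (class locations differ across versions):
```python
# Hedged sketch of a retriever-reader pipeline, assuming haystack 1.x
# module paths; other versions arrange these classes differently.
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import FARMReader, TfidfRetriever
from haystack.pipelines import ExtractiveQAPipeline

document_store = InMemoryDocumentStore()
document_store.write_documents([{
    "content": "The option to convert models between FARM and transformers "
               "gives freedom to the user and lets people easily switch between frameworks."
}])

retriever = TfidfRetriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
pipe = ExtractiveQAPipeline(reader=reader, retriever=retriever)

prediction = pipe.run(
    query="Why is model conversion important?",
    params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 5}},
)
print(prediction["answers"])
```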

## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`

## About us
![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)
We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.

Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)

Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)