Commit 17587c9 by julien-c (HF staff)
1 parent: e86097d

Migrate model card from transformers-repo

Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/deepset/roberta-base-squad2-covid/README.md

Files changed (1): README.md added (+106 lines)

# roberta-base-squad2 for QA on COVID-19

## Overview
**Language model:** deepset/roberta-base-squad2
**Language:** English
**Downstream task:** Extractive QA
**Training data:** [SQuAD-style CORD-19 annotations from 23rd April](https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/200423_covidQA.json)
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering_crossvalidation.py) in [FARM](https://github.com/deepset-ai/FARM)
**Infrastructure:** Tesla V100

## Hyperparameters
```
batch_size = 24
n_epochs = 3
base_LM_model = "deepset/roberta-base-squad2"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride = 128
xval_folds = 5
dev_split = 0
no_ans_boost = -100
```
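
With `lr_schedule = LinearWarmup` and `warmup_proportion = 0.1`, the learning rate ramps up over the first 10% of all optimization steps and then decays linearly. A quick sketch of the resulting step counts (the number of training examples is a hypothetical stand-in, not taken from this card):

```python
# Derive warmup steps from the hyperparameters above.
# n_train_examples is hypothetical, for illustration only.
n_train_examples = 2000
batch_size = 24
n_epochs = 3
warmup_proportion = 0.1

steps_per_epoch = -(-n_train_examples // batch_size)  # ceil division -> 84
total_steps = steps_per_epoch * n_epochs              # 252
warmup_steps = int(total_steps * warmup_proportion)   # 25
print(total_steps, warmup_steps)
```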

## Performance
5-fold cross-validation on the data set led to the following results:

**Single EM-Scores:** [0.222, 0.123, 0.234, 0.159, 0.158]
**Single F1-Scores:** [0.476, 0.493, 0.599, 0.461, 0.465]
**Single top\_3\_recall Scores:** [0.827, 0.776, 0.860, 0.771, 0.777]
**XVAL EM:** 0.17890995260663506
**XVAL f1:** 0.49925444207319924
**XVAL top\_3\_recall:** 0.8021327014218009

This model is the one obtained from the **third** fold of the cross-validation.
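
As a sanity check, the XVAL numbers are (up to the rounding of the per-fold values shown above) plain means of the single-fold scores:

```python
# Cross-validation aggregates are (approximately) the mean of the per-fold
# scores; small deviations come from the per-fold values being rounded above.
em = [0.222, 0.123, 0.234, 0.159, 0.158]
f1 = [0.476, 0.493, 0.599, 0.461, 0.465]
top3 = [0.827, 0.776, 0.860, 0.771, 0.777]

for name, scores in [("EM", em), ("F1", f1), ("top_3_recall", top3)]:
    print(name, sum(scores) / len(scores))
# EM 0.1792, F1 0.4988, top_3_recall 0.8022 vs. XVAL 0.1789, 0.4993, 0.8021
```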

## Usage

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/roberta-base-squad2-covid"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
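
Since the card reports top-3 recall, it can be useful to look at several candidate answer spans per question rather than only the best one. A minimal sketch, continuing from the pipeline above (older transformers releases spell the keyword `topk`, newer ones `top_k`):

```python
# Return the three highest-scoring answer spans instead of only the best one.
# Note: use `topk=3` on older transformers versions, `top_k=3` on newer ones.
answers = nlp(QA_input, top_k=3)
for answer in answers:
    print(answer['answer'], answer['score'])
```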

### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer

model_name = "deepset/roberta-base-squad2-covid"

# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)

# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
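
If you use the conversion path (b), you may want to save the converted model locally so later runs can skip the conversion step. A minimal sketch, continuing from the snippet above (the save path is hypothetical; `AdaptiveModel.save` and the tokenizer's `save_pretrained` are assumed from the FARM and transformers APIs of that era):

```python
import os

# Hypothetical local path; adjust as needed.
save_dir = "saved_models/roberta-base-squad2-covid"
os.makedirs(save_dir, exist_ok=True)

model.save(save_dir)                 # persists language model + prediction head
tokenizer.save_pretrained(save_dir)  # persists vocab and tokenizer config
```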

### In haystack
For doing QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in [haystack](https://github.com/deepset-ai/haystack/):
```python
# haystack 0.x import paths (newer releases reorganized the module layout)
from haystack.reader.farm import FARMReader
from haystack.reader.transformers import TransformersReader

reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2-covid")
# or
reader = TransformersReader(model="deepset/roberta-base-squad2-covid", tokenizer="deepset/roberta-base-squad2-covid")
```
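
To run end-to-end QA over a corpus, the reader is combined with a retriever over an indexed document store. A minimal sketch, assuming the `Finder` API from haystack 0.x and an already-built `retriever` (document store setup and module paths vary across haystack versions, so treat this as illustrative):

```python
from haystack import Finder
from haystack.reader.farm import FARMReader

# Assumes `retriever` was created over a document store that already holds
# your documents (e.g. CORD-19 papers); that setup is version-specific.
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2-covid")
finder = Finder(reader=reader, retriever=retriever)

prediction = finder.get_answers(
    question="What is the incubation period of the coronavirus?",
    top_k_retriever=10,  # documents handed from the retriever to the reader
    top_k_reader=3,      # answer spans returned by the reader
)
```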

## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
Bogdan Kostić: `bogdan.kostic [at] deepset.ai`

## About us
![deepset logo](https://raw.githubusercontent.com/deepset-ai/FARM/master/docs/img/deepset_logo.png)

We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.

Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)

Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)