Commit e1a5579 by julien-c (HF staff) · Parent: 8e9ec10

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/deepset/electra-base-squad2/README.md

Files changed (1): README.md ADDED (+115 −0)
---
datasets:
- squad_v2
---

# electra-base for QA

## Overview
**Language model:** electra-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM)
**Infrastructure:** 1x Tesla V100

## Hyperparameters

```
seed = 42
batch_size = 32
n_epochs = 5
base_LM_model = "google/electra-base-discriminator"
max_seq_len = 384
learning_rate = 1e-4
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride = 128
max_query_length = 64
```
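
For orientation, here is a minimal training sketch wiring these hyperparameters into FARM's QA components. It follows the linked FARM example but is not a verbatim copy: exact signatures vary across FARM versions, and `data_dir` is an assumed local path to the SQuAD 2.0 files.

```python
from farm.data_handler.data_silo import DataSilo
from farm.data_handler.processor import SquadProcessor
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.language_model import LanguageModel
from farm.modeling.optimization import initialize_optimizer
from farm.modeling.prediction_head import QuestionAnsweringHead
from farm.modeling.tokenization import Tokenizer
from farm.train import Trainer
from farm.utils import set_all_seeds, initialize_device_settings

set_all_seeds(seed=42)
device, n_gpu = initialize_device_settings(use_cuda=True)

# Data pipeline: SQuAD-style JSON -> featurized batches
tokenizer = Tokenizer.load("google/electra-base-discriminator")
processor = SquadProcessor(
    tokenizer=tokenizer,
    max_seq_len=384,
    doc_stride=128,
    max_query_length=64,
    data_dir="data/squad20",  # assumed local path to SQuAD 2.0 files
)
data_silo = DataSilo(processor=processor, batch_size=32)

# Model: electra-base encoder + extractive QA prediction head
language_model = LanguageModel.load("google/electra-base-discriminator")
prediction_head = QuestionAnsweringHead()
model = AdaptiveModel(
    language_model=language_model,
    prediction_heads=[prediction_head],
    embeds_dropout_prob=0.1,
    lm_output_types=["per_token"],
    device=device,
)

# Optimizer with linear warmup over the first 10% of steps
model, optimizer, lr_schedule = initialize_optimizer(
    model=model,
    learning_rate=1e-4,
    schedule_opts={"name": "LinearWarmup", "warmup_proportion": 0.1},
    n_batches=len(data_silo.loaders["train"]),
    n_epochs=5,
    device=device,
)

trainer = Trainer(
    model=model,
    optimizer=optimizer,
    data_silo=data_silo,
    epochs=5,
    n_gpu=n_gpu,
    lr_schedule=lr_schedule,
    device=device,
)
trainer.train()
```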

## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 77.30144024256717,
"f1": 81.35438272008543,
"total": 11873,
"HasAns_exact": 74.34210526315789,
"HasAns_f1": 82.45961302894314,
"HasAns_total": 5928,
"NoAns_exact": 80.25231286795626,
"NoAns_f1": 80.25231286795626,
"NoAns_total": 5945
```
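
If you want to reproduce these numbers, one possible recipe (a sketch, not part of the original evaluation code) is to dump SQuAD-format predictions with the transformers pipeline and score them with the official script; the file names `dev-v2.0.json` and `predictions.json` are placeholders.

```python
import json

from transformers import pipeline

nlp = pipeline("question-answering",
               model="deepset/electra-base-squad2",
               tokenizer="deepset/electra-base-squad2")

with open("dev-v2.0.json") as f:
    squad = json.load(f)

# Map each question id to a predicted answer string
# (an empty string marks "no answer" for SQuAD 2.0 scoring).
predictions = {}
for article in squad["data"]:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            res = nlp(question=qa["question"],
                      context=paragraph["context"],
                      handle_impossible_answer=True)
            predictions[qa["id"]] = res["answer"]

with open("predictions.json", "w") as f:
    json.dump(predictions, f)

# Then score with the official script:
#   python evaluate-v2.0.py dev-v2.0.json predictions.json
```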

## Usage

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/electra-base-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
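
`res` is a dict with the keys `score`, `start`, `end` and `answer`, i.e. the model's confidence plus the character span of the extracted answer in the context.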

### In FARM

```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer

model_name = "deepset/electra-base-squad2"

# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input)

# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
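
Here `res` is a list with one entry per input dict; each entry carries the ranked answer candidates under `predictions` (the exact structure depends on your FARM version).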

### In haystack
To do QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in [haystack](https://github.com/deepset-ai/haystack/):
```python
# import paths below match pre-1.0 haystack releases
from haystack.reader.farm import FARMReader
from haystack.reader.transformers import TransformersReader

reader = FARMReader(model_name_or_path="deepset/electra-base-squad2")
# or
reader = TransformersReader(model="deepset/electra-base-squad2", tokenizer="deepset/electra-base-squad2")
```
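
For a full QA-at-scale setup, the reader is usually combined with a document store and a retriever. A hedged end-to-end sketch follows; the import paths match pre-1.0 haystack releases and may differ in yours, and it assumes a running Elasticsearch instance that already holds your documents.

```python
from haystack.document_store.elasticsearch import ElasticsearchDocumentStore
from haystack.retriever.sparse import ElasticsearchRetriever
from haystack.reader.farm import FARMReader
from haystack.finder import Finder

# Connect to the indexed documents and set up sparse retrieval over them.
document_store = ElasticsearchDocumentStore(host="localhost", index="document")
retriever = ElasticsearchRetriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/electra-base-squad2")

# The Finder chains retrieval (candidate docs) and reading (answer spans).
finder = Finder(reader, retriever)
prediction = finder.get_answers(question="Why is model conversion important?",
                                top_k_retriever=10, top_k_reader=5)
```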

## Authors
Vaishali Pal: `vaishali.pal [at] deepset.ai`
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`

## About us
![deepset logo](https://raw.githubusercontent.com/deepset-ai/FARM/master/docs/img/deepset_logo.png)

We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.

Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)

Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)