MichelBartelsDeepset committed
Commit 350a9ad
Parent: af4e144

Update README.md

Files changed (1)
  1. README.md +60 -16
README.md CHANGED
@@ -8,19 +8,20 @@ tags:
  - exbert
 ---
 
- ![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg)
 
 ## Overview
 **Language model:** deepset/xlm-roberta-base-squad2-distilled
 **Language:** Multilingual
- **Training data:** SQuAD 2.0 training set
- **Infrastructure**: 1x V100 GPU
- **Published**: Apr 21st, 2021
-
- ## Details
- - haystack's distillation feature was used for training. deepset/xlm-roberta-large-squad2 was used as the teacher model.
 
 ## Hyperparameters
 ```
 batch_size = 56
 n_epochs = 4
@@ -31,8 +32,39 @@ embeds_dropout_prob = 0.1
 temperature = 3
 distillation_loss_weight = 0.75
 ```
 ## Performance
 SQuAD v2 dev set:
 ```
 "exact": 74.06721131980123%
 "f1": 76.39919553344667%
 ```
@@ -44,17 +76,29 @@ SQuAD v2 dev set:
 - Malte Pietsch: `malte.pietsch [at] deepset.ai`
 - Michel Bartels: `michel.bartels [at] deepset.ai`
 ## About us
- ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)
- We bring NLP to the industry via open source!
- Our focus: Industry specific language models & large scale QA systems.
-
- Some of our work:
 - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
 - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- - [FARM](https://github.com/deepset-ai/FARM)
- - [Haystack](https://github.com/deepset-ai/haystack/)
 
- Get in touch:
 [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
 
 By the way: [we're hiring!](http://www.deepset.ai/jobs)
 
 - exbert
 ---
 
+ # deepset/xlm-roberta-base-squad2-distilled
+ - Haystack's distillation feature was used for training; deepset/xlm-roberta-large-squad2 was used as the teacher model.
 
 ## Overview
 **Language model:** deepset/xlm-roberta-base-squad2-distilled
 **Language:** Multilingual
+ **Downstream-task:** Extractive QA
+ **Training data:** SQuAD 2.0
+ **Eval data:** SQuAD 2.0
+ **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
+ **Infrastructure:** 1x Tesla V100
 
 ## Hyperparameters
+
 ```
 batch_size = 56
 n_epochs = 4
 
 temperature = 3
 distillation_loss_weight = 0.75
 ```
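
The `temperature` and `distillation_loss_weight` values above are the knowledge-distillation settings. As a rough sketch of how such a run can be set up, assuming Haystack v1's `FARMReader.distil_from` API and a local copy of the SQuAD 2.0 training file (the paths and the exact argument names below are illustrative assumptions, not the actual training script for this model):

```python
# Illustrative distillation sketch, assuming Haystack v1.
# The exact training script for this model is not part of the card.
from haystack.nodes import FARMReader

student = FARMReader(model_name_or_path="xlm-roberta-base")
teacher = FARMReader(model_name_or_path="deepset/xlm-roberta-large-squad2")

# Distill the teacher into the student on SQuAD 2.0,
# mirroring the hyperparameters listed above.
student.distil_from(
    teacher,
    data_dir="data/squad20",           # assumed local data directory
    train_filename="train-v2.0.json",
    batch_size=56,
    n_epochs=4,
    temperature=3,                     # softens the teacher's logits
    distillation_loss_weight=0.75,     # distillation loss vs. ground-truth loss
)
student.save("xlm-roberta-base-squad2-distilled")
```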
+
+ ## Usage
+
+ ### In Haystack
+ Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
+ ```python
+ from haystack.nodes import FARMReader, TransformersReader
+
+ reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled")
+ # or
+ reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled", tokenizer="deepset/xlm-roberta-base-squad2-distilled")
+ ```
+ For a complete example of ``deepset/xlm-roberta-base-squad2-distilled`` being used for question answering, check out the [Tutorials in the Haystack documentation](https://haystack.deepset.ai/tutorials/first-qa-system).
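
To go from a loaded reader to question answering over many documents, the reader is typically combined with a retriever in an `ExtractiveQAPipeline`. A minimal, self-contained sketch, assuming Haystack v1's in-memory document store and TF-IDF retriever (the documents and the question are made up for illustration):

```python
# Minimal end-to-end QA sketch, assuming Haystack v1 APIs.
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import FARMReader, TfidfRetriever
from haystack.pipelines import ExtractiveQAPipeline

# Index a couple of toy documents.
document_store = InMemoryDocumentStore()
document_store.write_documents([
    {"content": "Haystack is an open-source NLP framework built by deepset."},
    {"content": "XLM-RoBERTa is a multilingual transformer model."},
])

retriever = TfidfRetriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled")
pipe = ExtractiveQAPipeline(reader, retriever)

prediction = pipe.run(
    query="Who built Haystack?",
    params={"Retriever": {"top_k": 2}, "Reader": {"top_k": 1}},
)
print(prediction["answers"][0].answer)
```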
+
+ ### In Transformers
+ ```python
+ from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
+
+ model_name = "deepset/xlm-roberta-base-squad2-distilled"
+
+ # a) Get predictions
+ nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
+ QA_input = {
+     'question': 'Why is model conversion important?',
+     'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
+ }
+ res = nlp(QA_input)
+
+ # b) Load model & tokenizer
+ model = AutoModelForQuestionAnswering.from_pretrained(model_name)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ ```
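
For reference, `res` is a dict with the extracted `answer`, its confidence `score`, and the `start`/`end` character offsets of the answer span within the context.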
+
 ## Performance
+ Evaluated on the SQuAD 2.0 dev set.
 ```
 "exact": 74.06721131980123%
 "f1": 76.39919553344667%
 ```
 
 - Malte Pietsch: `malte.pietsch [at] deepset.ai`
 - Michel Bartels: `michel.bartels [at] deepset.ai`
 ## About us
+ <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
+     <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+         <img alt="Haystack logo" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/>
+     </div>
+     <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+         <img alt="deepset logo" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/>
+     </div>
+ </div>
+
+ [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.
+
+ Some of our other work:
+ - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
 - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
 - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
 
+ ## Get in touch and join the Haystack community
+
+ <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.
+
+ We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join"><img alt="slack" class="h-7 inline-block m-0" style="margin: 0" src="https://huggingface.co/spaces/deepset/README/resolve/main/Slack_RGB.png"/>community open to everyone!</a></strong></p>
+
 [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
 
 By the way: [we're hiring!](http://www.deepset.ai/jobs)