jaimin committed on
Commit
c05eb65
1 Parent(s): 0f4fd7d

Update README.md

Files changed (1)
  1. README.md +1 -142
README.md CHANGED
@@ -1,142 +1 @@
- ---
- language: en
- datasets:
- - squad_v2
- license: cc-by-4.0
- model-index:
- - name: deepset/roberta-base-squad2
-   results:
-   - task:
-       type: question-answering
-       name: Question Answering
-     dataset:
-       name: squad_v2
-       type: squad_v2
-       config: squad_v2
-       split: validation
-     metrics:
-     - name: Exact Match
-       type: exact_match
-       value: 79.9309
-       verified: true
-     - name: F1
-       type: f1
-       value: 82.9501
-       verified: true
-     - name: total
-       type: total
-       value: 11869
-       verified: true
- ---
-
- # roberta-base for QA
-
- This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned on the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It has been trained on question-answer pairs, including unanswerable questions, for the task of extractive Question Answering.
-
-
- ## Overview
- **Language model:** roberta-base
- **Language:** English
- **Downstream task:** Extractive QA
- **Training data:** SQuAD 2.0
- **Eval data:** SQuAD 2.0
- **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
- **Infrastructure:** 4x Tesla V100
-
- ## Hyperparameters
-
- ```
- batch_size = 96
- n_epochs = 2
- base_LM_model = "roberta-base"
- max_seq_len = 386
- learning_rate = 3e-5
- lr_schedule = LinearWarmup
- warmup_proportion = 0.2
- doc_stride = 128
- max_query_length = 64
- ```
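-
- These names appear to come from a FARM-style training config. As a rough illustration only (not the original training code), they map onto Hugging Face `TrainingArguments` roughly as follows; the per-device batch size of 24 assumes the global batch of 96 was split evenly across the 4 V100s:
-
- ```python
- from transformers import TrainingArguments
-
- # Hypothetical mapping of the hyperparameters above onto TrainingArguments.
- training_args = TrainingArguments(
-     output_dir="roberta-base-squad2",
-     per_device_train_batch_size=24,  # 96 global / 4 GPUs (assumption)
-     num_train_epochs=2,
-     learning_rate=3e-5,
-     lr_scheduler_type="linear",      # linear decay after warmup
-     warmup_ratio=0.2,
- )
- # max_seq_len=386 and doc_stride=128 are tokenization-time settings, e.g.
- # tokenizer(question, context, max_length=386, stride=128,
- #           truncation="only_second", return_overflowing_tokens=True)
- ```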
-
- ## Using a distilled model instead
- Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has comparable prediction quality and runs at twice the speed of the base model.
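-
- Switching to the distilled model is a drop-in change of the checkpoint name; a minimal sketch with the `transformers` pipeline (the same pattern as in the Usage section below):
-
- ```python
- from transformers import pipeline
-
- # Only the model identifier changes; usage is identical to the base model.
- nlp = pipeline("question-answering", model="deepset/tinyroberta-squad2")
- ```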
-
- ## Usage
-
- ### In Haystack
- Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
- ```python
- from haystack.nodes import FARMReader, TransformersReader  # Haystack v1.x import path
-
- reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
- # or
- reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2", tokenizer="deepset/roberta-base-squad2")
- ```
- For a complete example of ``roberta-base-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system).
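-
- To answer questions over many documents, the reader is typically combined with a retriever in a pipeline. A minimal sketch, assuming Haystack v1.x with an in-memory document store and a BM25 retriever:
-
- ```python
- from haystack.document_stores import InMemoryDocumentStore
- from haystack.nodes import BM25Retriever, FARMReader
- from haystack.pipelines import ExtractiveQAPipeline
-
- # Toy corpus for illustration; real setups index many documents.
- document_store = InMemoryDocumentStore(use_bm25=True)
- document_store.write_documents([{"content": "Haystack is an NLP framework by deepset."}])
-
- retriever = BM25Retriever(document_store=document_store)
- reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
-
- pipe = ExtractiveQAPipeline(reader, retriever)
- prediction = pipe.run(query="What is Haystack?", params={"Retriever": {"top_k": 1}})
- ```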
-
- ### In Transformers
- ```python
- from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
-
- model_name = "deepset/roberta-base-squad2"
-
- # a) Get predictions
- nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
- QA_input = {
-     'question': 'Why is model conversion important?',
-     'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
- }
- res = nlp(QA_input)
-
- # b) Load model & tokenizer
- model = AutoModelForQuestionAnswering.from_pretrained(model_name)
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- ```
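-
- The pipeline returns a dict with the answer span and a confidence score, so the result can be inspected directly:
-
- ```python
- # res has the form {'score': float, 'start': int, 'end': int, 'answer': str}
- print(res["answer"], res["score"])
- ```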
-
- ## Performance
- Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
-
- ```
- "exact": 79.87029394424324,
- "f1": 82.91251169582613,
-
- "total": 11873,
- "HasAns_exact": 77.93522267206478,
- "HasAns_f1": 84.02838248389763,
- "HasAns_total": 5928,
- "NoAns_exact": 81.79983179142137,
- "NoAns_f1": 81.79983179142137,
- "NoAns_total": 5945
- ```
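-
- The same metric family can also be computed with the `evaluate` library rather than the standalone script; a minimal sketch with a dummy ID and answer (not the official evaluation above):
-
- ```python
- import evaluate
-
- squad_v2_metric = evaluate.load("squad_v2")
-
- # Dummy example: one answerable question, predicted correctly.
- predictions = [{"id": "q1", "prediction_text": "deepset", "no_answer_probability": 0.0}]
- references = [{"id": "q1", "answers": {"text": ["deepset"], "answer_start": [0]}}]
-
- results = squad_v2_metric.compute(predictions=predictions, references=references)
- print(results["exact"], results["f1"])  # 100.0 100.0 for this dummy pair
- ```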
-
- ## Authors
- **Branden Chan:** branden.chan@deepset.ai
- **Timo Möller:** timo.moeller@deepset.ai
- **Malte Pietsch:** malte.pietsch@deepset.ai
- **Tanay Soni:** tanay.soni@deepset.ai
-
- ## About us
-
- <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
-     <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
-         <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
-     </div>
-     <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
-         <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
-     </div>
- </div>
-
- [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.
-
- Some of our other work:
- - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
-
- ## Get in touch and join the Haystack community
-
- <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.
-
- We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>
-
- [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
-
- By the way: [we're hiring!](http://www.deepset.ai/jobs)
 
+ Generate Bullet Points