Shobhank-iiitdwd committed on
Commit 71efd89
1 Parent(s): 2f09e8b

Update README.md

Files changed (1): README.md (+6 -48)
README.md CHANGED
````diff
@@ -4,7 +4,7 @@ license: cc-by-4.0
 datasets:
 - squad_v2
 model-index:
-- name: deepset/roberta-base-squad2
+- name: Shobhank-iiitdwd/RoBERTA-rrQA
   results:
   - task:
       type: question-answering
````
````diff
@@ -43,8 +43,6 @@ This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tune
 **Downstream-task:** Extractive QA
 **Training data:** SQuAD 2.0
 **Eval data:** SQuAD 2.0
-**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
-**Infrastructure**: 4x Tesla v100

 ## Hyperparameters

````
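The next hunk keeps the card's FARM-style training settings (`lr_schedule = LinearWarmup`, `warmup_proportion = 0.2`, `doc_stride=128`, `max_query_length=64`). As a rough, non-authoritative orientation only, here is how those settings might map onto a plain Hugging Face Transformers fine-tuning setup; `max_length` and everything in `TrainingArguments` other than the scheduler type and warmup ratio are placeholders, not values from the card:

```python
from transformers import AutoTokenizer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("Shobhank-iiitdwd/RoBERTA-rrQA")

# For SQuAD-style QA, doc_stride is applied at tokenization time: long contexts
# are split into overlapping windows, each overlapping the previous by `stride` tokens.
encoded = tokenizer(
    "Why is model conversion important?",   # question (capped upstream at max_query_length=64)
    "The option to convert models between FARM and transformers gives freedom to the user.",
    truncation="only_second",               # truncate only the context, never the question
    max_length=384,                         # placeholder; the diff does not show max_seq_len
    stride=128,                             # doc_stride=128 from the card
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
)

# lr_schedule = LinearWarmup with warmup_proportion = 0.2 corresponds to:
args = TrainingArguments(
    output_dir="out",                       # placeholder
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
)
```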
````diff
@@ -58,27 +56,22 @@ lr_schedule = LinearWarmup
 warmup_proportion = 0.2
 doc_stride=128
 max_query_length=64
-```
-
-## Using a distilled model instead
-Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has a comparable prediction quality and runs at twice the speed of the base model.
+``` The distilled model has a comparable prediction quality and runs at twice the speed of the base model.

 ## Usage

 ### In Haystack
 Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
 ```python
-reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
+reader = FARMReader(model_name_or_path="Shobhank-iiitdwd/RoBERTA-rrQA")
 # or
-reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
+reader = TransformersReader(model_name_or_path="Shobhank-iiitdwd/RoBERTA-rrQA",tokenizer="Shobhank-iiitdwd/RoBERTA-rrQA")
 ```
-For a complete example of ``roberta-base-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
-
 ### In Transformers
 ```python
 from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

-model_name = "deepset/roberta-base-squad2"
+model_name = "Shobhank-iiitdwd/RoBERTA-rrQA"

 # a) Get predictions
 nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
````
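The Haystack snippet in this hunk only constructs a reader. For orientation, a minimal sketch of querying that reader directly, assuming the Haystack 1.x API that `FARMReader` belongs to; the document text and query are made up:

```python
from haystack.nodes import FARMReader
from haystack.schema import Document

reader = FARMReader(model_name_or_path="Shobhank-iiitdwd/RoBERTA-rrQA")

# Run extractive QA over an in-memory document; in a real pipeline the documents
# would come from a retriever querying a document store.
docs = [Document(content="Haystack is an open-source NLP framework built by deepset.")]
result = reader.predict(query="Who built Haystack?", documents=docs, top_k=3)

for answer in result["answers"]:
    print(answer.answer, answer.score)
```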
````diff
@@ -107,39 +100,4 @@ Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://works
 "NoAns_exact": 81.79983179142137,
 "NoAns_f1": 81.79983179142137,
 "NoAns_total": 5945
-```
-
-## Authors
-**Branden Chan:** branden.chan@deepset.ai
-**Timo Möller:** timo.moeller@deepset.ai
-**Malte Pietsch:** malte.pietsch@deepset.ai
-**Tanay Soni:** tanay.soni@deepset.ai
-
-## About us
-
-<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
-<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
-<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
-</div>
-<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
-<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
-</div>
-</div>
-
-[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
-
-
-Some of our other work:
-- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")]([https://huggingface.co/deepset/tinyroberta-squad2)
-- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
-- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
-
-## Get in touch and join the Haystack community
-
-<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
-
-We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
-
-[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
-
-By the way: [we're hiring!](http://www.deepset.ai/jobs)
+```
````
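The `### In Transformers` snippet in the usage hunk above is cut off right after the pipeline is created. A self-contained completion, following the pattern the original deepset card uses (the question/context pair is illustrative):

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "Shobhank-iiitdwd/RoBERTA-rrQA"

# a) Get predictions with the high-level pipeline API
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)
print(res)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}

# b) Load the model and tokenizer directly
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```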
 
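The final hunk trims everything after the SQuAD 2.0 dev-set numbers; fields like `NoAns_exact` and `NoAns_f1` are what the official eval script emits for unanswerable questions. One hedged way to produce metrics in the same format without the CodaLab script is the `squad_v2` metric from Hugging Face's `evaluate` library; the IDs and texts below are dummies:

```python
import evaluate

squad_v2 = evaluate.load("squad_v2")

# Each prediction carries an id, the predicted span text, and a no-answer
# probability; each reference carries the gold answers (empty list = unanswerable).
predictions = [
    {"id": "q1", "prediction_text": "Normandy", "no_answer_probability": 0.0},
    {"id": "q2", "prediction_text": "", "no_answer_probability": 1.0},
]
references = [
    {"id": "q1", "answers": {"text": ["Normandy"], "answer_start": [159]}},
    {"id": "q2", "answers": {"text": [], "answer_start": []}},  # unanswerable
]

results = squad_v2.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"], results["NoAns_exact"], results["NoAns_f1"])
```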