julien-c HF staff committed on
Commit
325a3df
1 Parent(s): 978c03c

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/deepset/roberta-base-squad2/README.md

Files changed (1)
  1. README.md +9 -8
README.md CHANGED
@@ -5,7 +5,11 @@ datasets:
 
 # roberta-base for QA
 
- NOTE: This is version 2 of the model. See [this github issue](https://github.com/deepset-ai/FARM/issues/552) from the FARM repository for an explanation of why we updated.
+ NOTE: This is version 2 of the model. See [this github issue](https://github.com/deepset-ai/FARM/issues/552) from the FARM repository for an explanation of why we updated. If you'd like to use version 1, specify `revision="v1.0"` when loading the model in Transformers 3.5. For example:
+ ```
+ model_name = "deepset/roberta-base-squad2"
+ pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="question-answering")
+ ```
 
 ## Overview
  **Language model:** roberta-base
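For reference, the snippet added in the hunk above leaves out its import; a self-contained version of the same revision pinning looks roughly like this (the sample question and context are illustrative, not part of the card):

```python
from transformers import pipeline

model_name = "deepset/roberta-base-squad2"

# revision="v1.0" pins the original (version 1) upload of the weights;
# the revision argument is available from Transformers 3.5 onwards
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name, revision="v1.0")

res = nlp(question="Which revision is loaded?",
          context="This pipeline pins revision v1.0 of deepset/roberta-base-squad2.")
print(res)
```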
@@ -50,11 +54,9 @@ Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://works
 
 ### In Transformers
 ```python
- from transformers.pipelines import pipeline
- from transformers.modeling_auto import AutoModelForQuestionAnswering
- from transformers.tokenization_auto import AutoTokenizer
+ from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
 
- model_name = "deepset/roberta-base-squad2-v2"
+ model_name = "deepset/roberta-base-squad2"
 
 # a) Get predictions
  nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
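Run end to end, the updated "In Transformers" block behaves as sketched below; the `QA_input` values are illustrative placeholders, not text from the card:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/roberta-base-squad2"

# a) Get predictions with the high-level pipeline
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)
QA_input = {
    "question": "Why is model conversion important?",
    "context": "The option to convert models between FARM and transformers gives freedom to the user.",
}
res = nlp(QA_input)
print(res)  # dict with 'score', 'start', 'end' and 'answer'

# b) Load the model and tokenizer directly
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```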
@@ -76,7 +78,7 @@ from farm.modeling.adaptive_model import AdaptiveModel
 from farm.modeling.tokenization import Tokenizer
 from farm.infer import Inferencer
 
- model_name = "deepset/roberta-base-squad2-v2"
+ model_name = "deepset/roberta-base-squad2"
 
 # a) Get predictions
  nlp = Inferencer.load(model_name, task_type="question_answering")
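A matching end-to-end sketch for the FARM snippet; the input dict is illustrative and assumes FARM's question-answering format of a list of dicts with a `questions` list and a `text` field:

```python
from farm.infer import Inferencer

model_name = "deepset/roberta-base-squad2"

# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{
    "questions": ["Why is model conversion important?"],
    "text": "The option to convert models between FARM and transformers gives freedom to the user.",
}]
res = nlp.inference_from_dicts(dicts=QA_input)
print(res)
```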
@@ -94,7 +96,7 @@ For doing QA at scale (i.e. many docs instead of single paragraph), you can load
 ```python
 reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
 # or
- reader = TransformersReader(model="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
+ reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
 ```
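The haystack readers above are meant to sit behind a retriever and document store, which the card snippet does not show. As a rough stand-in using only transformers (this is not the haystack API), "QA at scale" amounts to reading every candidate passage and keeping the best-scoring answer; the documents and question below are made up for illustration:

```python
from transformers import pipeline

nlp = pipeline("question-answering",
               model="deepset/roberta-base-squad2",
               tokenizer="deepset/roberta-base-squad2")

# Illustrative candidate passages; in haystack a retriever would select these.
documents = [
    "FARM is a transfer learning framework for NLP built by deepset.",
    "Haystack scales question answering to large collections of documents.",
]
question = "What does Haystack do?"

# Read every passage and keep the highest-scoring answer.
answers = [nlp(question=question, context=doc) for doc in documents]
best = max(answers, key=lambda a: a["score"])
print(best["answer"], best["score"])
```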
@@ -117,4 +119,3 @@ Some of our work:
 
 Get in touch:
 [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
-