---
datasets:
- squad_v2
language: en
license: mit
pipeline_tag: question-answering
tags:
- electra
- question-answering
---
# Electra base model for QA (SQuAD 2.0)

This model is based on [electra-base](https://huggingface.co/google/electra-base-discriminator) and fine-tuned for extractive question answering.

## Training Data
The model has been trained on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset, which extends SQuAD 1.1 with questions that cannot be answered from the given context.

It can be used for extractive question answering tasks.
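
For reference, the training data can be loaded and inspected with the [datasets](https://github.com/huggingface/datasets) library (a minimal sketch, unrelated to the actual training scripts used for this model):

```python
# Minimal sketch: load and inspect SQuAD 2.0 with the `datasets` library
from datasets import load_dataset

squad_v2 = load_dataset('squad_v2')

sample = squad_v2['train'][0]
print(sample['question'])
print(sample['context'][:100])
print(sample['answers'])  # an empty 'text' list marks an unanswerable question
```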

## Usage and Performance
The trained model can be used like this:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

# Load model & tokenizer
electra_model = AutoModelForQuestionAnswering.from_pretrained('navteca/electra-base-squad2')
electra_tokenizer = AutoTokenizer.from_pretrained('navteca/electra-base-squad2')

# Build question answering pipeline
nlp = pipeline('question-answering', model=electra_model, tokenizer=electra_tokenizer)

# Get predictions
result = nlp({
    'question': 'How many people live in Berlin?',
    'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})

print(result)

# {
#   "score": 0.99983448,
#   "start": 27,
#   "end": 36,
#   "answer": "3,520,031"
# }
```
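
Because SQuAD 2.0 also contains unanswerable questions, the pipeline can be allowed to return an empty answer via its `handle_impossible_answer` argument. A minimal sketch (the question below is made up for illustration):

```python
# Ask a question the context cannot answer and allow an empty answer
result = nlp({
    'question': 'How many people live in Paris?',
    'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
}, handle_impossible_answer=True)

print(result)  # an empty 'answer' string indicates the question is unanswerable from the context
```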