# ELECTRA-BASE-DISCRIMINATOR finetuned on SQuADv1

This is the electra-base-discriminator model fine-tuned on the SQuAD v1.1 dataset for the question-answering task.

## Model details
As described in the original paper, ELECTRA is a method for self-supervised language representation learning
that can pre-train transformer networks using relatively little compute.
ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network,
similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU;
at large scale, it achieves state-of-the-art results on the SQuAD 2.0 dataset.
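
To make the discriminator objective concrete, here is a minimal sketch (not part of this model card's training code) that runs the base pre-trained checkpoint `google/electra-base-discriminator` and prints, for each token, whether the discriminator thinks it is original or replaced:

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")

# "ate" has been swapped in for the original token to give the
# discriminator something to catch
sentence = "The quick brown fox ate over the lazy dog"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one real-vs-replaced score per token

# A positive logit means the discriminator flags the token as replaced
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
                        logits.squeeze().tolist()):
    print(f"{token:>10}  {'replaced' if score > 0 else 'original'}")
```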

| Parameter           | Value  |
|---------------------|--------|
| layers              | 12     |
| hidden size         | 768    |
| attention heads     | 12     |
| size on disk        | 436 MB |
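
These numbers can be checked directly against the checkpoint's configuration:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("valhalla/electra-base-discriminator-finetuned_squadv1")
print(config.num_hidden_layers)    # layers: 12
print(config.hidden_size)          # hidden size: 768
print(config.num_attention_heads)  # attention heads: 12
```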

## Model training
This model was trained on a Google Colab V100 GPU.
You can find the fine-tuning notebook here:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/11yo-LaFsgggwmDSy2P8zD3tzf5cCb-DU?usp=sharing).
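
The exact training code lives in the notebook above; the sketch below shows a comparable setup with the `Trainer` API. The hyperparameters (batch size, learning rate, epochs) are illustrative assumptions, not the values used for this checkpoint, and the preprocessing follows the standard offset-mapping recipe from the transformers question-answering examples.

```python
from datasets import load_dataset
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments, default_data_collator)

model_name = "google/electra-base-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

squad = load_dataset("squad")  # SQuAD v1.1

def preprocess(examples):
    enc = tokenizer(examples["question"], examples["context"],
                    truncation="only_second", max_length=384,
                    padding="max_length", return_offsets_mapping=True)
    start_positions, end_positions = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = enc.sequence_ids(i)
        context_start = seq_ids.index(1)
        context_end = len(seq_ids) - 1 - seq_ids[::-1].index(1)
        # If the answer was truncated away, point both labels at [CLS]
        if offsets[context_start][0] > start_char or offsets[context_end][1] < end_char:
            start_positions.append(0)
            end_positions.append(0)
            continue
        # Walk inward from the context boundaries to the answer span
        idx = context_start
        while idx <= context_end and offsets[idx][0] <= start_char:
            idx += 1
        start_positions.append(idx - 1)
        idx = context_end
        while idx >= context_start and offsets[idx][1] >= end_char:
            idx -= 1
        end_positions.append(idx + 1)
    enc["start_positions"] = start_positions
    enc["end_positions"] = end_positions
    enc.pop("offset_mapping")
    return enc

train_dataset = squad["train"].map(preprocess, batched=True,
                                   remove_columns=squad["train"].column_names)

args = TrainingArguments(
    output_dir="electra-base-squadv1",  # hypothetical output path
    per_device_train_batch_size=16,     # assumed to fit a single V100
    learning_rate=3e-5,                 # assumed; a common SQuAD choice
    num_train_epochs=2,                 # assumed
)

Trainer(model=model, args=args, train_dataset=train_dataset,
        data_collator=default_data_collator, tokenizer=tokenizer).train()
```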

## Results
The results are slightly better than those reported in the paper,
where the authors report 84.5 EM and 90.8 F1 for electra-base on SQuAD.

| Metric | Value   |
|--------|---------|
| EM     | 85.0520 |
| F1     | 91.6050 |
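
The EM/F1 computation follows the standard SQuAD evaluation; here is a hedged sketch of scoring predictions with the `evaluate` library's `squad` metric (the exact evaluation code lives in the notebook):

```python
import evaluate

squad_metric = evaluate.load("squad")

# One illustrative dev-set example; in practice you would pass the
# model's predictions over the whole SQuAD v1.1 dev split
predictions = [{"id": "56be4db0acb8001400a502ec",
                "prediction_text": "Denver Broncos"}]
references = [{"id": "56be4db0acb8001400a502ec",
               "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```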


## Model in Action 🚀
```python
from transformers import pipeline

nlp = pipeline(
    'question-answering',
    model='valhalla/electra-base-discriminator-finetuned_squadv1'
)

# Ask a question against a short context
nlp({
    'question': 'What is the answer to everything ?',
    'context': '42 is the answer to life the universe and everything'
})
# => {'answer': '42', 'end': 2, 'score': 0.981274963050339, 'start': 0}
```

> Created with ❤️ by Suraj Patil [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/patil-suraj/)
[![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/psuraj28)