# ELECTRA-base fine-tuned for question answering
You can query the model through the Inference API endpoint, replacing `MODEL_ID` with this model's repository ID on the Hub:

```bash
curl -X POST \
  -H "Authorization: Bearer YOUR_ORG_OR_USER_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"question": "Where does she live?", "context": "She lives in Berlin."}' \
  https://api-inference.huggingface.co/models/MODEL_ID
```
How to use this model directly from the 🤗/transformers library:
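A minimal loading sketch using the standard auto classes; `MODEL_ID` is a placeholder for this model's repository ID, which is not spelled out in the card:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# MODEL_ID is a placeholder for this model's Hub repository ID
tokenizer = AutoTokenizer.from_pretrained("MODEL_ID")
model = AutoModelForQuestionAnswering.from_pretrained("MODEL_ID")
```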

## Model details

As mentioned in the original paper: ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.
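To make the replaced-token-detection objective concrete, here is a small sketch that runs the publicly released `google/electra-base-discriminator` checkpoint (the pre-trained discriminator, not this fine-tuned QA model) on a sentence with one corrupted token; the example sentence is illustrative:

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

# Pre-trained discriminator checkpoint; used only to illustrate the
# objective, this is not the fine-tuned QA model described in this card
name = "google/electra-base-discriminator"
discriminator = ElectraForPreTraining.from_pretrained(name)
tokenizer = ElectraTokenizerFast.from_pretrained(name)

# "fake" replaces the original token "jumps"
inputs = tokenizer("The quick brown fox fake over the lazy dog", return_tensors="pt")
with torch.no_grad():
    logits = discriminator(**inputs).logits

# A positive logit means the discriminator flags the token as replaced
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, logit in zip(tokens, logits[0]):
    print(f"{token:>10}  {'fake' if logit > 0 else 'real'}")
```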

| Param | Value |
| --- | --- |
| layers | 12 |
| hidden size | 768 |
| on-disk size | 436 MB |

## Model training

This model was fine-tuned on a Google Colab V100 GPU. You can find the fine-tuning Colab notebook here.

## Results

These results are slightly better than those reported in the paper, where the authors state that ELECTRA-base achieves 84.5 EM and 90.8 F1.

| Metric | Value |
| --- | --- |
| EM | 85.0520 |
| F1 | 91.6050 |
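For reference, EM and F1 can be recomputed with the `squad` metric from the `evaluate` library; the toy prediction below is an illustrative assumption, not taken from this model's actual evaluation run:

```python
import evaluate

# The "squad" metric returns exact match (EM) and token-level F1
squad_metric = evaluate.load("squad")

predictions = [{"id": "1", "prediction_text": "Berlin"}]
references = [
    {"id": "1", "answers": {"text": ["Berlin"], "answer_start": [13]}}
]

print(squad_metric.compute(predictions=predictions, references=references))
# => {'exact_match': 100.0, 'f1': 100.0}
```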

## Model in Action 🚀

```python
from transformers import pipeline

# MODEL_ID and the inputs below are illustrative placeholders,
# chosen to be consistent with the output shown
qa = pipeline("question-answering", model="MODEL_ID")
qa(question="What is the answer to everything?",
   context="42 is the answer to life, the universe and everything.")
# => {'answer': '42', 'end': 2, 'score': 0.981274963050339, 'start': 0}
```