Task: fill-mask (mask token: [MASK])
Query this model via the Inference API:

```shell
# Request body: JSON with an "inputs" field containing text that includes the [MASK] token.
curl -X POST \
  -H "Authorization: Bearer YOUR_ORG_OR_USER_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '"json encoded string"' \
  https://api-inference.huggingface.co/models/ViktorAlm/electra-base-norwegian-uncased-discriminator
```
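The JSON body for a fill-mask query is an object with an `inputs` field holding the text to complete. A minimal sketch of building that payload in Python (the Norwegian sentence is only an illustrative assumption, not from the model card):

```python
import json

# Hypothetical example sentence; [MASK] marks the position the model should fill.
payload = {"inputs": "Oslo er hovedstaden i [MASK]."}
body = json.dumps(payload)  # JSON-encoded string to pass to curl's -d flag
```

The resulting `body` string is what replaces the `"json encoded string"` placeholder in the curl command above.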

Contributed by Viktor Alm (ViktorAlm)

How to use this model directly from the 🤗/transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("ViktorAlm/electra-base-norwegian-uncased-discriminator")
model = AutoModelWithLMHead.from_pretrained("ViktorAlm/electra-base-norwegian-uncased-discriminator")
```

Norwegian Electra

(Image: Norwegian ELECTRA)

Trained on OSCAR, Wikipedia, OpenSubtitles, and some other data I had, with the awesome power of TPUs (v3-8).

Use with caution: I have no downstream tasks in Norwegian to test on, so I have no idea of its performance yet.


ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning
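As the paper title suggests, a discriminator model like this one is pre-trained on replaced token detection: a small generator corrupts some input tokens, and the discriminator predicts, per token, whether it was replaced. A toy sketch of that labeling scheme (the example tokens are assumptions for illustration, not real model tokenization):

```python
# Toy illustration of ELECTRA's replaced-token-detection labels.
original  = ["oslo", "er", "hovedstaden", "i", "norge"]
corrupted = ["oslo", "er", "byen", "i", "norge"]  # a generator swapped one token

# Discriminator target: 1 where the token was replaced, 0 where it is original.
labels = [int(o != c) for o, c in zip(original, corrupted)]
# labels → [0, 0, 1, 0, 0]
```

The discriminator is trained with a per-token binary classification loss against these labels, which is why its outputs are per-token "replaced vs. original" scores rather than a distribution over the vocabulary.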