📈 Financial Korean ELECTRA model

Pretrained ELECTRA Language Model for Korean (finance-koelectra-base-discriminator)

ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN.

More details about ELECTRA can be found in the ICLR paper or in the official ELECTRA repository on GitHub.

Stats

The current version of the model is trained on financial news data from Naver News.

The final training corpus has a size of 25GB and 2.3B tokens.

This model was trained as a cased model on a TITAN RTX for 500k steps.

Usage

```python
from transformers import ElectraForPreTraining, ElectraTokenizer
import torch

discriminator = ElectraForPreTraining.from_pretrained("krevas/finance-koelectra-base-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-base-discriminator")

sentence = "내일 해당 종목이 대폭 상승할 것이다"       # "The stock will rise sharply tomorrow."
fake_sentence = "내일 해당 종목이 맛있게 상승할 것이다"  # "대폭" (sharply) swapped for "맛있게" (deliciously)

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")

discriminator_outputs = discriminator(fake_inputs)
# Turn each per-token logit into a hard 0/1 "was this token replaced?" prediction.
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

[print("%7s" % token, end="") for token in fake_tokens]
# squeeze() removes the batch dimension; [1:-1] drops the [CLS]/[SEP] positions
# so the predictions line up with fake_tokens.
[print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()[1:-1]]

print("fake token : %s" % fake_tokens[predictions.squeeze().tolist()[1:-1].index(1)])
```

Huggingface model hub

All models are available on the Huggingface model hub.