---
language: ko
---

# 📈 Financial Korean ELECTRA model

Pretrained ELECTRA Language Model for Korean (finance-koelectra-base-generator)

ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN.

More details about ELECTRA can be found in the ICLR paper or in the official ELECTRA repository on GitHub.
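To make the objective concrete, here is a minimal sketch of the real-vs-fake token scoring, assuming a companion discriminator checkpoint named `krevas/finance-koelectra-base-discriminator` is available on the Hub (the checkpoint name and the example sentence are illustrative, not part of this model card):

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer

# Assumed companion checkpoint; this card only describes the generator.
name = "krevas/finance-koelectra-base-discriminator"
tokenizer = ElectraTokenizer.from_pretrained(name)
discriminator = ElectraForPreTraining.from_pretrained(name)

# "Tomorrow the stock will rise sharply."
sentence = "내일 해당 종목이 대폭 상승할 것이다."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(**inputs).logits.squeeze()

# A positive logit means the discriminator considers the token "fake",
# i.e. replaced by the generator during pretraining.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze().tolist())
print(list(zip(tokens, (logits > 0).int().tolist())))
```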

## Stats

The current version of the model is trained on financial news data from Naver News.

The final training corpus has a size of 25GB and 2.3B tokens.

This cased model was trained on a TITAN RTX for 500k steps.

## Usage

```python
from transformers import pipeline

# Fill-mask pipeline backed by the generator checkpoint
fill_mask = pipeline(
    "fill-mask",
    model="krevas/finance-koelectra-base-generator",
    tokenizer="krevas/finance-koelectra-base-generator",
)

# "Tomorrow the stock will sharply [MASK]."
print(fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다."))
```
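The same query can also be run without the pipeline helper; below is a minimal sketch using the standard masked-LM classes (the top-k value of 5 is an arbitrary choice):

```python
import torch
from transformers import ElectraForMaskedLM, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-base-generator")
model = ElectraForMaskedLM.from_pretrained("krevas/finance-koelectra-base-generator")

# "Tomorrow the stock will sharply [MASK]."
sentence = f"내일 해당 종목이 대폭 {tokenizer.mask_token}할 것이다."
inputs = tokenizer(sentence, return_tensors="pt")
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    logits = model(**inputs).logits

# Print the 5 most likely fillers for the masked position
top_ids = logits[0, mask_index].topk(5).indices.squeeze().tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```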

## Huggingface model hub

All models are available on the Huggingface model hub.