Bert base model for Korean

  • Trained on a 70GB Korean text dataset with a 42,000-token lower-cased subword vocabulary
  • See the model's performance and other Korean language models on GitHub
# Requires the PyTorch version of the transformers library
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("kykim/bertshared-kor-base")
model = EncoderDecoderModel.from_pretrained("kykim/bertshared-kor-base")

