- Trained on a 70 GB Korean text dataset with a vocabulary of 42,000 lower-cased subwords
- See the GitHub repository for benchmark results and other Korean language models
```python
from transformers import BertTokenizerFast, AlbertModel

tokenizer_albert = BertTokenizerFast.from_pretrained("kykim/albert-kor-base")
model_albert = AlbertModel.from_pretrained("kykim/albert-kor-base")
```