Indonesian RoBERTa Large is a masked language model based on the RoBERTa architecture. It was trained from scratch on the OSCAR dataset, specifically the `unshuffled_deduplicated_id` subset, and achieved an evaluation loss of 4.801 and an evaluation accuracy of 29.8%.
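For reference, the pretraining corpus can be inspected with HuggingFace's `datasets` library. A minimal sketch (the `oscar` dataset script and config name are as published on the Hub; streaming avoids downloading the multi-gigabyte corpus in full, and exact availability depends on your `datasets` version):

```python
from datasets import load_dataset

# Stream the same OSCAR subset the model was pretrained on
oscar = load_dataset(
    "oscar", "unshuffled_deduplicated_id", split="train", streaming=True
)

# Peek at the first document's text
print(next(iter(oscar))["text"][:200])
```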
This model was trained with the Flax framework via HuggingFace's Transformers library, as part of the JAX/Flax Community Week organized by HuggingFace. All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
| Model | #params | Arch. | Training/Validation data (text) |
|-------|---------|-------|---------------------------------|
| `indonesian-roberta-large` | 355M | RoBERTa large | OSCAR `unshuffled_deduplicated_id` Dataset |
The model was trained for 10 epochs; the final results at the end of training are shown below.

| train loss | valid loss | valid accuracy | total time |
|------------|------------|----------------|------------|
|            | 4.801      | 0.298          |            |
```python
from transformers import pipeline

pretrained_name = "flax-community/indonesian-roberta-large"

# Fill-mask pipeline: predict the masked token in an Indonesian sentence
fill_mask = pipeline(
    "fill-mask",
    model=pretrained_name,
    tokenizer=pretrained_name
)

# "Budi is <mask>-ing at school."
fill_mask("Budi sedang <mask> di sekolah.")
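```

The pipeline returns its top candidates as a list of dicts. A sketch of inspecting them (the field names are the standard fill-mask pipeline output; the predicted words themselves depend on the model):

```python
# Each candidate is a dict with "score", "token", "token_str", and "sequence"
for candidate in fill_mask("Budi sedang <mask> di sekolah."):
    print(f'{candidate["token_str"]!r}: {candidate["score"]:.4f}')
```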
```python
from transformers import RobertaModel, RobertaTokenizerFast

pretrained_name = "flax-community/indonesian-roberta-large"

# Load the base model and its tokenizer for feature extraction
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)

# "Budi is at school."
prompt = "Budi sedang berada di sekolah."
encoded_input = tokenizer(prompt, return_tensors="pt")
output = model(**encoded_input)
```
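The forward pass returns the contextual token embeddings in `output.last_hidden_state`, shaped `(batch, sequence_length, 1024)` for a RoBERTa-large configuration. One common way to derive a single sentence vector is mean pooling over tokens; a minimal sketch (the model card does not prescribe a pooling strategy):

```python
import torch

# Average the token embeddings into one 1024-dimensional sentence vector
with torch.no_grad():
    output = model(**encoded_input)
sentence_embedding = output.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 1024])
```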