Javanese DistilBERT Small IMDB is a masked language model based on the DistilBERT model. It was trained on Javanese IMDB movie reviews.
The model started from the pretrained Javanese DistilBERT Small checkpoint and was then fine-tuned on the Javanese IMDB movie review dataset. It achieved a perplexity of 21.01 on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial notebook written by Sylvain Gugger.
The `Trainer` class from the Transformers library was used to train the model. PyTorch served as the backend framework during training, but the model remains compatible with TensorFlow.
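For readers unfamiliar with this setup, below is a minimal sketch of how masked-language-model fine-tuning with the `Trainer` class typically looks. The base checkpoint name, hyperparameters, and the toy dataset are illustrative assumptions, not the exact training script used for this model.

```python
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Assumed base checkpoint; the card only says "Javanese DistilBERT Small".
base = "w11wo/javanese-distilbert-small"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Toy stand-in for the tokenized Javanese IMDB splits.
texts = ["Filem iki apik banget.", "Aku ora seneng karo critane."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Randomly masks 15% of tokens per batch -- the standard MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="javanese-distilbert-small-imdb",
        num_train_epochs=5,  # 5 epochs, matching the training described below
    ),
    data_collator=collator,
    train_dataset=dataset,
    eval_dataset=dataset,
)
trainer.train()
```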
| Model | #params | Arch. | Training/Validation data (text) |
|-------|---------|-------|---------------------------------|
| `javanese-distilbert-small-imdb` | 66M | DistilBERT Small | Javanese IMDB (47.5 MB of text) |
The model was trained for 5 epochs; the final results at the end of training are shown below.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
|            |            | 21.01      |            |
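As a sanity check on how these numbers relate: in the Hugging Face language-modeling examples, perplexity is the exponential of the validation cross-entropy loss. The loss value below is back-derived from the reported perplexity, not taken from the training logs.

```python
import math

valid_loss = 3.045  # back-derived illustration: ln(21.01) ≈ 3.045
perplexity = math.exp(valid_loss)
print(f"{perplexity:.2f}")  # 21.01
```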
The model can be used directly for masked-token prediction via the `fill-mask` pipeline:

```python
from transformers import pipeline

pretrained_name = "w11wo/javanese-distilbert-small-imdb"

# Load the fill-mask pipeline with the fine-tuned model and its tokenizer.
fill_mask = pipeline(
    "fill-mask",
    model=pretrained_name,
    tokenizer=pretrained_name
)

# Predict the masked token in a Javanese sentence
# ("I ate satay at [MASK] with friends").
fill_mask("Aku mangan sate ing [MASK] bareng konco-konco")
```
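The pipeline returns the top-scoring candidates for the `[MASK]` position, each with the filled-in sequence, the predicted token, and a softmax score.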
The model can also be used as a feature extractor in PyTorch:

```python
from transformers import DistilBertModel, DistilBertTokenizerFast

pretrained_name = "w11wo/javanese-distilbert-small-imdb"

model = DistilBertModel.from_pretrained(pretrained_name)
tokenizer = DistilBertTokenizerFast.from_pretrained(pretrained_name)

# "Indonesia is a large country."
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
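Here, `output.last_hidden_state` holds one contextual embedding per input token; these can be pooled (for example, averaged over tokens) to obtain a single feature vector for the sentence.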
Do consider the biases that come from the IMDB movie review dataset, which may be carried over into the results of this model.
Javanese DistilBERT Small IMDB was trained and evaluated by Wilson Wongso. All computation and development were done on Google Colaboratory using their free GPU access.