Bahasa Tiny-BERT Model

General Distilled Tiny BERT language model for Malay and Indonesian.

Pretraining Corpus

The tiny-bert-bahasa-cased model was distilled on ~1.8 billion words, covering both standard and social-media language. Below is the list of data we distilled on:

  1. Wikipedia dump.
  2. Local Instagram.
  3. Local Twitter.
  4. Local news.
  5. Local parliament text.
  6. Local Singlish/Manglish text.
  7. IIUM Confession.
  8. Wattpad.
  9. Academia PDF.

Preprocessing steps can be reproduced from Malaya/pretrained-model/preprocess.

Distilling details

Load Distilled Model

You can use this model by installing torch or tensorflow and the Hugging Face transformers library, then initializing it directly like this:

from transformers import AlbertTokenizer, BertModel

model = BertModel.from_pretrained('huseinzol05/tiny-bert-bahasa-cased')
# the tokenizer was trained with SentencePiece, so it must be loaded through AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained(
    'huseinzol05/tiny-bert-bahasa-cased',
    unk_token = '[UNK]',
    pad_token = '[PAD]',
    do_lower_case = False,
)

We used google/sentencepiece to train the tokenizer, so it needs to be loaded through AlbertTokenizer.
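
As a quick sanity check (a minimal sketch, assuming torch is installed and the snippet above has been run; the example sentence is arbitrary), you can feed a sentence through the loaded model and inspect the contextual embeddings:

import torch

# encode an example Malay sentence with the SentencePiece-based tokenizer
input_ids = tokenizer.encode('makan ayam dengan sambal', return_tensors = 'pt')

# run the distilled BERT encoder without tracking gradients
with torch.no_grad():
    outputs = model(input_ids)

# the first element of the output is the last hidden state,
# shaped (batch_size, sequence_length, hidden_size)
print(outputs[0].shape)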

Example using AutoModelWithLMHead

from transformers import AlbertTokenizer, AutoModelWithLMHead, pipeline

model = AutoModelWithLMHead.from_pretrained('huseinzol05/tiny-bert-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
    'huseinzol05/tiny-bert-bahasa-cased',
    unk_token = '[UNK]',
    pad_token = '[PAD]',
    do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model = model, tokenizer = tokenizer)
print(fill_mask('makan ayam dengan [MASK]'))

The output is:

[{'sequence': '[CLS] makan ayam dengan berbual[SEP]',
  'score': 0.00015769545279908925,
  'token': 17859},
 {'sequence': '[CLS] makan ayam dengan kembar[SEP]',
  'score': 0.0001448775001335889,
  'token': 8289},
 {'sequence': '[CLS] makan ayam dengan memaklumkan[SEP]',
  'score': 0.00013484008377417922,
  'token': 6881},
 {'sequence': '[CLS] makan ayam dengan Senarai[SEP]',
  'score': 0.00013061291247140616,
  'token': 11698},
 {'sequence': '[CLS] makan ayam dengan Tiga[SEP]',
  'score': 0.00012453157978598028,
  'token': 4232}]
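
The predictions are sorted by score, so if you only need the best completion you can index the pipeline output directly (a minimal sketch reusing the fill_mask object and the field names shown above):

# take the highest-scoring completion from the fill-mask pipeline
predictions = fill_mask('makan ayam dengan [MASK]')
best = predictions[0]
print(best['sequence'], best['score'], best['token'])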

Results

For further details on model performance, check out the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compare against traditional models.

Acknowledgement

Thanks to Im Big, LigBlou, Mesolitica and KeyReply for sponsoring the AWS, Google and GPU cloud resources used to train BERT for Bahasa.