Bahasa ELECTRA Model

Pretrained ELECTRA base language model for Malay and Indonesian.

Pretraining Corpus

The electra-base-discriminator-bahasa-cased model was pretrained on ~1.8 billion words. We trained on both standard and social-media language structures; the data sources are listed below:

  1. Wikipedia dump.
  2. Local Instagram.
  3. Local Twitter.
  4. Local news.
  5. Local parliament text.
  6. Local Singlish/Manglish text.
  7. IIUM Confession.
  8. Wattpad.
  9. Academia PDF.

Preprocessing steps can be reproduced from Malaya/pretrained-model/preprocess.

Pretraining details

Load Pretrained Model

You can use this model after installing torch or tensorflow and the Hugging Face transformers library, and initialize it directly like this:

from transformers import ElectraTokenizer, ElectraModel

model = ElectraModel.from_pretrained('huseinzol05/electra-base-discriminator-bahasa-cased')
tokenizer = ElectraTokenizer.from_pretrained(
    'huseinzol05/electra-base-discriminator-bahasa-cased',
    do_lower_case = False,
)
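
For example (a minimal sketch; the sentence below is only an illustrative assumption, not from the original card), the loaded encoder returns contextual embeddings:

import torch

# tokenize an example Malay sentence and run it through the encoder
inputs = tokenizer('kerajaan sangat prihatin terhadap rakyat', return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# outputs[0] is the last hidden state, shaped (batch_size, sequence_length, hidden_size)
print(outputs[0].shape)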

Example using ElectraForPreTraining

import torch
from transformers import ElectraTokenizer, ElectraForPreTraining

model = ElectraForPreTraining.from_pretrained('huseinzol05/electra-base-discriminator-bahasa-cased')
tokenizer = ElectraTokenizer.from_pretrained(
    'huseinzol05/electra-base-discriminator-bahasa-cased',
    do_lower_case = False
)
sentence = 'kerajaan sangat prihatin terhadap rakyat'
fake_tokens = tokenizer.tokenize(sentence)
fake_inputs = tokenizer.encode(sentence, return_tensors="pt")
discriminator_outputs = model(fake_inputs)
# logits > 0 mean the discriminator thinks the token was replaced
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

# drop the [CLS] and [SEP] positions so predictions line up with fake_tokens
list(zip(fake_tokens, predictions[0, 1:-1].tolist()))

The output is:

[('kerajaan', 0.0),
 ('sangat', 0.0),
 ('prihatin', 0.0),
 ('terhadap', 0.0),
 ('rakyat', 0.0)]
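
All tokens are predicted as 0.0 (original) because the sentence is uncorrupted. As a minimal sketch, you can corrupt a token yourself and inspect the per-token predictions; the replacement word below is only an illustrative assumption, and the discriminator may or may not flag that particular substitution:

# swap one word for an unrelated one (illustrative replacement, not from the original card)
fake_sentence = 'kerajaan sangat prihatin terhadap nasi'
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(fake_inputs)[0]

# 1.0 marks tokens the discriminator believes were replaced
predictions = torch.round((torch.sign(logits) + 1) / 2)
print(list(zip(fake_tokens, predictions[0, 1:-1].tolist())))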

Results

For further details on model performance, check out the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compare against traditional models.

Acknowledgement

Thanks to Im Big, LigBlou, Mesolitica and KeyReply for sponsoring the AWS, Google and GPU cloud resources used to train ELECTRA for Bahasa.