How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("huseinzol05/gpt2-345M-bahasa-cased")
model = AutoModel.from_pretrained("huseinzol05/gpt2-345M-bahasa-cased")

Bahasa GPT2 Model

Pretrained GPT2 345M model for Malay.

Pretraining Corpus

The gpt2-345M-bahasa-cased model was pretrained on ~0.9 billion words. We trained on standard language structure only, and below is the list of data we trained on:

  1. Wikipedia dump.
  2. Local news.
  3. Local parliament text.
  4. Local Singlish/Manglish text.
  5. IIUM Confession.
  6. Wattpad.
  7. Academia PDF.
  8. Common Crawl.

Preprocessing steps can be reproduced from Malaya/pretrained-model/preprocess.
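
The linked repository holds the actual scripts; purely as illustration, here is a hypothetical sketch of the kind of text normalisation such a pipeline typically applies (the function and its rules are ours, not the repo's):

import re
import unicodedata

def clean_text(text):
    # Hypothetical cleaning rules, NOT the actual Malaya preprocessing
    text = unicodedata.normalize('NFC', text)  # normalise unicode forms
    text = re.sub(r'\s+', ' ', text)           # collapse runs of whitespace
    return text.strip()

print(clean_text('Husein  suka\tmakan   ayam'))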

Pretraining details

Load Pretrained Model

You can use this model by installing torch or tensorflow and the Hugging Face transformers library, then initializing it directly like this:

from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('huseinzol05/gpt2-345M-bahasa-cased')
model = GPT2Model.from_pretrained('huseinzol05/gpt2-345M-bahasa-cased')
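
As a quick sanity check (not part of the original card), you can run a forward pass and inspect the feature shape; the example sentence is ours, and on recent transformers versions the output object exposes last_hidden_state:

import torch

# Tokenize a short Malay sentence (hypothetical example input)
inputs = tokenizer('Husein suka makan ayam', return_tensors = 'pt')

# Forward pass without gradient tracking; we only want features
with torch.no_grad():
    outputs = model(**inputs)

# (batch_size, sequence_length, hidden_size); hidden_size is 1024 for a 345M GPT2
print(outputs.last_hidden_state.shape)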

Example using GPT2LMHeadModel

from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('huseinzol05/gpt2-345M-bahasa-cased')
model = GPT2LMHeadModel.from_pretrained(
    'huseinzol05/gpt2-345M-bahasa-cased', pad_token_id = tokenizer.eos_token_id
)

# Encode the Malay prompt as input ids
input_ids = tokenizer.encode(
    'penat bak hang, macam ni aku takmau kerja dah', return_tensors = 'pt'
)
# Sample 3 continuations with top-k / top-p (nucleus) sampling
sample_outputs = model.generate(
    input_ids,
    do_sample = True,
    max_length = 50,
    top_k = 50,
    top_p = 0.95,
    num_return_sequences = 3,
)

print('Output:\n' + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    print(
        '{}: {}'.format(
            i, tokenizer.decode(sample_output, skip_special_tokens = True)
        )
    )

The output is:

Output:
----------------------------------------------------------------------------------------------------
0: penat bak hang, macam ni aku takmau kerja dah dekat 2,3 jam.
Aku harap aku dapat berjimat banyak.
Ini pun masa kerja, bila dah kerja jadi satu.
Aku buat kerja ni la.
Aku memang kalau ada
1: penat bak hang, macam ni aku takmau kerja dah.
Tapi nak buat macam mana kan, aku tolong bentang tugas.
Dan, memang sangat-sangat tak mahu buat kerja sekarang ni.
Aku pun suka sangat kerja di luar bandar
2: penat bak hang, macam ni aku takmau kerja dah pun.
Takpa nak buat kerja-kerja sampingan, baru boleh dapat hadiah pulak.
Ni la tempat paling best bila duduk di restoran yang ada pekena kopi.
Cumanya
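
Equivalently, the transformers pipeline helper can wrap the tokenizer and LM head model in one call; a minimal sketch (not from the original card, reusing the same prompt and sampling settings):

from transformers import pipeline

# The pipeline wires the tokenizer and GPT2LMHeadModel together for us
generator = pipeline('text-generation', model = 'huseinzol05/gpt2-345M-bahasa-cased')

outputs = generator(
    'penat bak hang, macam ni aku takmau kerja dah',
    max_length = 50,
    do_sample = True,
    top_k = 50,
    top_p = 0.95,
    num_return_sequences = 3,
)
for i, out in enumerate(outputs):
    print(i, out['generated_text'])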