Bahasa GPT2 Model

Pretrained GPT2 117M model for Malay.

Pretraining Corpus

The gpt2-117M-bahasa-cased model was pretrained on ~0.9 billion words. We trained on standard language structure only; below is the list of data we trained on:

  1. Wikipedia dump.
  2. Local news.
  3. Local parliament text.
  4. Local singlish/manglish text.
  5. IIUM Confession.
  6. Wattpad.
  7. Academia PDF.
  8. Common Crawl.

Preprocessing steps can be reproduced from Malaya/pretrained-model/preprocess.

Pretraining details

Load Pretrained Model

You can use this model by installing torch or tensorflow together with the Hugging Face transformers library, then initializing it directly like this:

from transformers import GPT2Tokenizer, GPT2Model

# Base model without a language-modelling head; it returns hidden states rather than logits
model = GPT2Model.from_pretrained('huseinzol05/gpt2-117M-bahasa-cased')
tokenizer = GPT2Tokenizer.from_pretrained(
    'huseinzol05/gpt2-117M-bahasa-cased',
)
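
As a quick sanity check (a minimal sketch, not part of the original card; the example sentence is our own), you can feed a Malay sentence through the base model and inspect the hidden states it returns:

input_ids = tokenizer.encode('saya suka makan nasi lemak', return_tensors = 'pt')
outputs = model(input_ids)
last_hidden_state = outputs[0]  # shape: (batch_size, sequence_length, hidden_size)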

Example using GPT2LMHeadModel

from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('huseinzol05/gpt2-117M-bahasa-cased')
# GPT2 has no dedicated padding token, so the EOS token is reused for padding
model = GPT2LMHeadModel.from_pretrained(
    'huseinzol05/gpt2-117M-bahasa-cased', pad_token_id = tokenizer.eos_token_id
)

input_ids = tokenizer.encode(
    'penat bak hang, macam ni aku takmau kerja dah', return_tensors = 'pt'
)
sample_outputs = model.generate(
    input_ids,
    do_sample = True,  # sample from the distribution instead of greedy decoding
    max_length = 50,  # maximum total length in tokens, including the prompt
    top_k = 50,  # only sample from the 50 most likely next tokens
    top_p = 0.95,  # nucleus sampling: smallest token set with cumulative probability >= 0.95
    num_return_sequences = 3,  # return three independent samples
)

print('Output:\n' + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    print(
        '{}: {}'.format(
            i, tokenizer.decode(sample_output, skip_special_tokens = True)
        )
    )

The output is:

Output:
----------------------------------------------------------------------------------------------------
0: penat bak hang, macam ni aku takmau kerja dah jadi aku pernah beritahu orang.
Ini bukan aku rasa cam nak ajak teman kan ni.
Tengok ni aku dah ada adik-adik & anak yang tinggal dan kerja2 yang kat sekolah.
1: penat bak hang, macam ni aku takmau kerja dah.
Takleh takleh nak ambik air.
Tgk jugak aku kat rumah ni.
Pastu aku nak bagi aku.
So aku dah takde masalah pulak.
Balik aku pun
2: penat bak hang, macam ni aku takmau kerja dah macam tu.
Tapi semua tu aku ingat cakap, ada cara hidup ni yang kita kena bayar.. pastu kita tak mampu bayar.. kan!!
Takpelah, aku nak cakap, masa yang
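
Because do_sample = True, the continuations above will differ on every run. If you need repeatable samples, fix the random seed before calling generate (a minimal sketch, not part of the original card; the seed value is arbitrary):

import torch

torch.manual_seed(42)  # any fixed seed makes the sampled continuations reproducible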