---
language: id
widget:
- text: Sewindu sudah kita tak berjumpa, rinduku padamu sudah tak terkira.
---
GPT2-medium-indonesian
This is a model pretrained on the Indonesian language using a causal language modeling (CLM) objective, which was first introduced in this paper and first released on this page.
This model was trained using HuggingFace's Flax framework and is part of the JAX/Flax Community Week organized by HuggingFace. All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
The demo can be found here.
How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='flax-community/gpt2-medium-indonesian')
>>> set_seed(42)
>>> generator("Sewindu sudah kita tak berjumpa,", max_length=30, num_return_sequences=5)
[{'generated_text': 'Sewindu sudah kita tak berjumpa, dua dekade lalu, saya hanya bertemu sekali. Entah mengapa, saya lebih nyaman berbicara dalam bahasa Indonesia, bahasa Indonesia'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi dalam dua hari ini, kita bisa saja bertemu.”\n“Kau tau, bagaimana dulu kita bertemu?” aku'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, banyak kisah yang tersimpan. Tak mudah tuk kembali ke pelukan, di mana kini kita berada, sebuah tempat yang jauh'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, sejak aku lulus kampus di Bandung, aku sempat mencari kabar tentangmu. Ah, masih ada tempat di hatiku,'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi Tuhan masih saja menyukarkan doa kita masing-masing.\nTuhan akan memberi lebih dari apa yang kita'}]
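If you need more control over decoding than the pipeline exposes, you can also call generate() on the language-modeling head directly. Below is a minimal sketch; the sampling parameters (do_sample, top_k, top_p) are illustrative choices, not the settings used for the examples above:
from transformers import GPT2Tokenizer, GPT2LMHeadModel, set_seed
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-medium-indonesian')
model = GPT2LMHeadModel.from_pretrained('flax-community/gpt2-medium-indonesian')
set_seed(42)
inputs = tokenizer("Sewindu sudah kita tak berjumpa,", return_tensors='pt')
# Sample one continuation; the decoding parameters below are illustrative only.
outputs = model.generate(**inputs, max_length=30, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))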
Here is how to use this model to get the features of a given text in PyTorch:
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-medium-indonesian')
model = GPT2Model.from_pretrained('flax-community/gpt2-medium-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
and in TensorFlow:
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-medium-indonesian')
model = TFGPT2Model.from_pretrained('flax-community/gpt2-medium-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
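Because the model was trained with Flax, the weights can also be loaded directly with the Flax model classes. A minimal sketch, analogous to the examples above:
from transformers import GPT2Tokenizer, FlaxGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-medium-indonesian')
model = FlaxGPT2Model.from_pretrained('flax-community/gpt2-medium-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='np')
output = model(**encoded_input)
In all three cases, the contextual features are available as output.last_hidden_state, with a hidden size of 1024 for this medium-sized model.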
Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their model card:
Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
Training data
The model was trained on a combined dataset of OSCAR and mc4 for the Indonesian language, 29 GB of data in total. The mc4 dataset was cleaned using this script, and we also included only links that were cited by the Indonesian Wikipedia (IDWiki).
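The cleaning itself is done by the script linked above. As a rough illustration only, the Indonesian subsets of these corpora can be streamed with the datasets library; the dataset and configuration names below are assumptions, and the card's actual filtering, including the IDWiki link selection, is not reproduced here:
from datasets import load_dataset
# Stream the Indonesian subsets of OSCAR and mC4 (illustrative only; the
# cleaning and IDWiki-based link filtering described above are not applied).
oscar_id = load_dataset("oscar", "unshuffled_deduplicated_id", split="train", streaming=True)
mc4_id = load_dataset("mc4", "id", split="train", streaming=True)
for example in oscar_id.take(1):
    print(example["text"][:200])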
Training procedure
The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was 6d 3h 7m 26s.
Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| dataset | train loss | eval loss | eval perplexity |
| --- | --- | --- | --- |
| ID OSCAR+mc4 (29 GB) | 2.79 | 2.696 | 14.826 |
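The reported eval perplexity is simply the exponential of the eval loss:
import math
eval_loss = 2.696
print(math.exp(eval_loss))  # ≈ 14.82, matching the reported eval perplexity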
Tracking
The training process was tracked with TensorBoard and Weights & Biases.
Team members
- Akmal (@Wikidepia)
- alvinwatner (@alvinwatner)
- Cahya Wirawan (@cahya)
- Galuh Sahid (@Galuh)
- Muhammad Agung Hambali (@AyameRushia)
- Muhammad Fhadli (@muhammadfhadli)
- Samsul Rahmadani (@munggok)