Bahasa T5 Model

Pretrained T5 small language model for Malay and Indonesian.

Pretraining Corpus

The t5-small-bahasa-cased model was pretrained on multiple tasks. Below is the list of tasks we trained on (a sketch of how such a mixture is serialized follows the list):

  1. Unsupervised on local Wikipedia.
  2. Unsupervised on local news.
  3. Unsupervised on local parliament text.
  4. Unsupervised on IIUM Confession.
  5. Unsupervised on Wattpad.
  6. Unsupervised on Academia PDF.
  7. Next sentence prediction on local Wikipedia.
  8. Next sentence prediction on local news.
  9. Next sentence prediction on local parliament text.
  10. Next sentence prediction on IIUM Confession.
  11. Next sentence prediction on Wattpad.
  12. Next sentence prediction on Academia PDF.
  13. Bahasa SNLI.
  14. Bahasa Question Quora.
  15. Bahasa Natural Questions.
  16. News title summarization.
  17. Stemming to original Wikipedia.
  18. Synonym to original Wikipedia.
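
The exact input/output formats live in the preprocessing scripts linked below; as an illustration only, here is a minimal sketch of how such a multi-task mixture is typically serialized into plain text-to-text pairs for T5. All prefixes here are hypothetical except soalan:, which appears in the generation example further down.

# Hypothetical sketch: a T5 multi-task mixture serialized as (input, target)
# text pairs. The real prefixes and pipelines are defined in
# Malaya/pretrained-model/preprocess, not here.
examples = [
    # question answering; the 'soalan:' prefix is used in the example below
    ('soalan: siapakah perdana menteri malaysia?', 'Mahathir Mohamad'),
    # news title summarization (hypothetical 'ringkasan:' prefix)
    ('ringkasan: <isi penuh berita>', '<tajuk berita>'),
    # next sentence prediction (hypothetical 'ayat-berikut:' prefix)
    ('ayat-berikut: <ayat pertama>', '<ayat kedua>'),
]

for source, target in examples:
    print(f'input : {source}')
    print(f'target: {target}')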

Preprocessing steps can be reproduced from here: Malaya/pretrained-model/preprocess.

Pretraining details

Load Pretrained Model

You can use this model by installing torch or tensorflow and the Hugging Face transformers library, then initializing it like this:

from transformers import T5Tokenizer, T5Model

model = T5Model.from_pretrained('huseinzol05/t5-small-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('huseinzol05/t5-small-bahasa-cased')
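
The card also mentions TensorFlow; assuming the repository ships TF weights as well, loading with the TF classes is a small variation (if only PyTorch weights are published, from_pt=True converts them on the fly):

# TensorFlow variant; requires tensorflow to be installed
from transformers import T5Tokenizer, TFT5Model

tokenizer = T5Tokenizer.from_pretrained('huseinzol05/t5-small-bahasa-cased')
model = TFT5Model.from_pretrained('huseinzol05/t5-small-bahasa-cased')
# if the repo only has PyTorch weights, convert them instead:
# model = TFT5Model.from_pretrained('huseinzol05/t5-small-bahasa-cased', from_pt=True)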

Example using T5ForConditionalGeneration

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('huseinzol05/t5-small-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('huseinzol05/t5-small-bahasa-cased')
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors = 'pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))

The output is:

'Mahathir Mohamad'
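
The generate() call above falls back to the library defaults (greedy decoding, a short maximum length). Here is a hedged sketch of tuning the standard transformers decoding arguments, reusing model, tokenizer and input_ids from the example above; the values are arbitrary, not the settings behind the card's output:

# tune decoding; all arguments are standard transformers generate() options
outputs = model.generate(
    input_ids,
    max_length=50,        # cap on generated length
    num_beams=5,          # beam search instead of greedy decoding
    early_stopping=True,  # stop once all beams have emitted EOS
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))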

Results

For further details on model performance, check out the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compare against traditional models.

Acknowledgement

Thanks to Im Big, LigBlou, Mesolitica and KeyReply for sponsoring AWS, Google and GPU clouds to train T5 for Bahasa.