---
language:
- id
datasets:
- allenai/c4
---

**NOTE**: This model might be broken :/

# Indonesian T5 Large

T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with [extra filtering](https://github.com/Wikidepia/indonesian_datasets/tree/master/dump/mc4). This model is pre-trained only and needs to be fine-tuned to be used for specific tasks.

## Pretraining Details

Trained for 500K steps following [`google/t5-v1_1-large`](https://huggingface.co/google/t5-v1_1-large).
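
The `t5-v1_1` recipe pretrains with a span-corruption objective: random spans of the input are replaced with sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, …) and the model learns to generate the dropped spans. A minimal stdlib-only sketch of that preprocessing, for illustration; the real pipeline operates on SentencePiece token ids via the T5 data pipeline, while this toy version works on whitespace tokens:

```python
import random

def span_corrupt(tokens, noise_density=0.15, mean_span_len=3, seed=0):
    """Replace random non-overlapping spans with T5-style sentinel tokens.

    Returns (inputs, targets): each masked span in `inputs` becomes
    <extra_id_N>; `targets` lists each sentinel followed by the tokens
    it replaced.
    """
    rng = random.Random(seed)
    n_to_mask = max(1, round(len(tokens) * noise_density))
    masked = [False] * len(tokens)
    remaining, attempts = n_to_mask, 0
    while remaining > 0 and attempts < 100:
        attempts += 1
        # Span lengths are drawn around mean_span_len (uniform sketch,
        # not the exact distribution used by the T5 codebase).
        length = min(remaining, max(1, rng.randint(1, 2 * mean_span_len - 1)))
        start = rng.randrange(0, len(tokens) - length + 1)
        if any(masked[start:start + length]):
            continue  # overlaps an existing span; retry
        for i in range(start, start + length):
            masked[i] = True
        remaining -= length

    inputs, targets, sentinel, i = [], [], 0, 0
    while i < len(tokens):
        if masked[i]:
            tag = f"<extra_id_{sentinel}>"
            inputs.append(tag)
            targets.append(tag)
            while i < len(tokens) and masked[i]:
                targets.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

# Example on whitespace-split Indonesian text:
toks = "anak itu sedang bermain bola di lapangan sekolah".split()
inp, tgt = span_corrupt(toks, seed=1)
```

Fine-tuning then reuses the same text-to-text interface: the downstream task is cast as input text in, target text out.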

## Model Performance

TBD

## Limitations and bias

Like other language models pretrained on large-scale corpora, this model can produce biased, unethical, or harmful output that reflects biases in its training data. Assuming this problem may occur, please restrict its use to applications where such output cannot cause harm.

## Acknowledgement

Thanks to the TensorFlow Research Cloud for providing TPU v3-8 accelerators.