Wikidepia committed on
Commit 980bafb (1 parent: c7ba2cf)

Initial model

Files changed (4)
  1. README.md +25 -0
  2. config.json +28 -0
  3. pytorch_model.bin +3 -0
  4. spiece.model +3 -0
README.md ADDED
@@ -0,0 +1,25 @@
+ ---
+ language:
+ - id
+ datasets:
+ - allenai/c4
+ ---
+ # Indonesian T5 Large
+
+ T5 (Text-to-Text Transfer Transformer) model pretrained on the Indonesian portion of mC4 with [extra filtering](https://github.com/Wikidepia/indonesian_datasets/tree/master/dump/mc4). This model is pretrained only and needs to be fine-tuned before it can be used for specific downstream tasks.
+
+ ## Pretraining Details
+
+ Trained for 500K steps following the configuration of [`google/t5-v1_1-large`](https://huggingface.co/google/t5-v1_1-large).
+
+ ## Model Performance
+
+ TBD
+
+ ## Limitations and bias
+
+ Like other language models pretrained on large-scale corpora, this model can produce biased or harmful output that reflects biases present in its training data. Please keep this risk in mind and restrict its use to applications where such output cannot cause harm.
+
+ ## Acknowledgement
+
+ Thanks to the TensorFlow Research Cloud for providing TPU v3-8s.
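
T5 pretraining uses a span-corruption objective: random spans of input tokens are replaced with sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, ...), and the decoder learns to reconstruct the dropped spans. A minimal sketch of the idea; the whitespace tokenization and the `span_corrupt` helper here are purely illustrative, not the actual pretraining code:

```python
import random

def span_corrupt(tokens, span_len=2, seed=0):
    """Replace one random span with a sentinel token, T5-style (illustrative only)."""
    rng = random.Random(seed)
    start = rng.randrange(len(tokens) - span_len + 1)
    # Encoder input: original tokens with the span collapsed into one sentinel.
    inputs = tokens[:start] + ["<extra_id_0>"] + tokens[start + span_len:]
    # Decoder target: the sentinel followed by the dropped span, then a closing sentinel.
    targets = ["<extra_id_0>"] + tokens[start:start + span_len] + ["<extra_id_1>"]
    return inputs, targets

inp, tgt = span_corrupt("saya suka makan nasi goreng".split())
```

In the real setup, roughly 15% of tokens are corrupted across multiple spans per sequence; this sketch corrupts a single span to keep the mechanics visible.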
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+ "_name_or_path": "/home/patrick/hugging_face/t5/t5-v1_1-large",
+ "architectures": [
+ "T5ForConditionalGeneration"
+ ],
+ "d_ff": 2816,
+ "d_kv": 64,
+ "d_model": 1024,
+ "decoder_start_token_id": 0,
+ "dropout_rate": 0.1,
+ "eos_token_id": 1,
+ "feed_forward_proj": "gated-gelu",
+ "gradient_checkpointing": false,
+ "initializer_factor": 1.0,
+ "is_encoder_decoder": true,
+ "layer_norm_epsilon": 1e-06,
+ "model_type": "t5",
+ "num_decoder_layers": 24,
+ "num_heads": 16,
+ "num_layers": 24,
+ "output_past": true,
+ "pad_token_id": 0,
+ "relative_attention_num_buckets": 32,
+ "tie_word_embeddings": false,
+ "transformers_version": "4.8.1",
+ "use_cache": true,
+ "vocab_size": 32128
+ }
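
The dimensions in this config follow the T5 v1.1 large pattern: the model width equals the number of attention heads times the per-head dimension, and the gated-GELU feed-forward width is 2.75x the model width. A quick sanity check over the values copied from the config above:

```python
# Values copied verbatim from config.json above.
config = {
    "d_model": 1024,
    "d_kv": 64,
    "num_heads": 16,
    "d_ff": 2816,
    "num_layers": 24,
    "num_decoder_layers": 24,
}

# Attention width: per-head dimension times head count gives the model width.
assert config["num_heads"] * config["d_kv"] == config["d_model"]

# Gated-GELU feed-forward width: 2816 / 1024 = 2.75.
ff_ratio = config["d_ff"] / config["d_model"]
```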
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ab9f01c5f3615d0560e155ee31b0adbcbcf8edbbf7d8ec450384dd4c2d3d4c1
+ size 3132845093
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec91e24db6b3ab052b7a93bd6ac3fc0d06727ff3a57d462cada3c00783430173
+ size 793027
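
Both binary files above are stored as Git LFS pointer files: small text files of `key value` lines (`version`, `oid`, `size`) that stand in for the real blob. A sketch of a parser for this pointer format; `parse_lfs_pointer` is a hypothetical helper, not part of any library:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>"; split on the first space only.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The spiece.model pointer from above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:ec91e24db6b3ab052b7a93bd6ac3fc0d06727ff3a57d462cada3c00783430173
size 793027
"""
info = parse_lfs_pointer(pointer)
```

The `size` field is the byte length of the real file, so the 3.1 GB `pytorch_model.bin` checkpoint is only downloaded when LFS smudges the pointer.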