---
language: id
tags:
- indonesian-roberta-large
license: mit
datasets:
- oscar
widget:
- text: "Budi telat ke sekolah karena ia <mask>."
---

## Indonesian RoBERTa Large

Indonesian RoBERTa Large is a masked language model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. It was trained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, specifically the `unshuffled_deduplicated_id` subset. The model was trained from scratch and achieved an evaluation loss of 4.801 and an evaluation accuracy of 29.8%.

This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by HuggingFace. All training was done on a TPUv3-8 VM, sponsored by the Google Cloud team.

All the necessary scripts used for training can be found in the [Files and versions](https://huggingface.co/flax-community/indonesian-roberta-large/tree/main) tab, along with the [Training metrics](https://huggingface.co/flax-community/indonesian-roberta-large/tensorboard) logged via TensorBoard.

## Model

| Model                      | #params | Arch.   | Training/Validation data (text)            |
| -------------------------- | ------- | ------- | ------------------------------------------ |
| `indonesian-roberta-large` | 355M    | RoBERTa | OSCAR `unshuffled_deduplicated_id` Dataset |

## Evaluation Results

The model was trained for 10 epochs, and the following are the final results once training ended.

| train loss | valid loss | valid accuracy | total time |
| ---------- | ---------- | -------------- | ---------- |
| 5.19       | 4.801      | 0.298          | 2:8:32:28  |

## How to Use

### As Masked Language Model

```python
from transformers import pipeline

pretrained_name = "flax-community/indonesian-roberta-large"

fill_mask = pipeline(
    "fill-mask",
    model=pretrained_name,
    tokenizer=pretrained_name
)

fill_mask("Budi sedang <mask> di sekolah.")
```

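The pipeline returns a list of candidate fills, each with a confidence score and the predicted token; the exact completions and scores depend on the trained weights. A minimal sketch of inspecting the predictions:

```python
# Each prediction is a dict with "sequence", "score", "token", and "token_str".
predictions = fill_mask("Budi sedang <mask> di sekolah.")
for prediction in predictions:
    print(prediction["token_str"], round(prediction["score"], 3))
```
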
### Feature Extraction in PyTorch

```python
from transformers import RobertaModel, RobertaTokenizerFast

pretrained_name = "flax-community/indonesian-roberta-large"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)

prompt = "Budi sedang berada di sekolah."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```

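`output.last_hidden_state` holds one contextual vector per token. A minimal sketch of mean-pooling those vectors into a single sentence embedding, masking out padding; the pooling strategy here is an illustrative assumption, not something prescribed by the model card:

```python
# token_embeddings: (batch_size, sequence_length, hidden_size)
token_embeddings = output.last_hidden_state

# Zero out padding positions before averaging (assumed pooling strategy).
mask = encoded_input["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

print(sentence_embedding.shape)  # e.g. torch.Size([1, 1024]) for a RoBERTa-large config
```
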
## Team Members

- Wilson Wongso ([@w11wo](https://hf.co/w11wo))
- Steven Limcorn ([@stevenlimcorn](https://hf.co/stevenlimcorn))
- Samsul Rahmadani ([@munggok](https://hf.co/munggok))
- Chew Kok Wah ([@chewkokwah](https://hf.co/chewkokwah))