tyqiangz committed
Commit 9b6b0dd
1 Parent(s): 40e087c

Fixed formatting error in README.md

Files changed (1)
  1. README.md +19 -19
README.md CHANGED
@@ -15,27 +15,8 @@ datasets:
  Finetuned the IndoBERT-Lite Large Model (phase2 - uncased) on the IndoNLU SmSA dataset following the procedures stated in the paper [IndoNLU: Benchmark and Resources for Evaluating Indonesian
  Natural Language Understanding](https://arxiv.org/pdf/2009.05387.pdf).

- **Finetuning hyperparameters:**
- - learning rate: 2e-5
- - batch size: 16
- - no. of epochs: 5
- - max sequence length: 512
- - random seed: 42
-
- **Classes:**
- - 0: positive
- - 1: neutral
- - 2: negative
-
- Validation accuracy: 0.94
- Validation F1: 0.91
- Validation Recall: 0.91
- Validation Precision: 0.93
-
  ## How to use

- ### Load model and tokenizer
-
  ```python
  from transformers import pipeline
  classifier = pipeline("text-classification",
@@ -51,3 +32,22 @@ Output:
  {'label': 'negative', 'score': 0.987165629863739}]]
  """
  ```
+
+ **Finetuning hyperparameters:**
+ - learning rate: 2e-5
+ - batch size: 16
+ - no. of epochs: 5
+ - max sequence length: 512
+ - random seed: 42
+
+ **Classes:**
+ - 0: positive
+ - 1: neutral
+ - 2: negative
+
+ **Performance metrics on SmSA validation dataset:**
+ - Validation accuracy: 0.94
+ - Validation F1: 0.91
+ - Validation Recall: 0.91
+ - Validation Precision: 0.93
+
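
The diff above omits the middle of the usage snippet, so a self-contained version of the "How to use" example might look like the sketch below. The repository id `tyqiangz/indobert-lite-large-p2-smsa` and the sample sentence are placeholders for illustration, not values confirmed by this commit; the nested list in the shown output suggests all class scores are returned, which `return_all_scores=True` (or `top_k=None` in newer transformers versions) reproduces.

```python
from transformers import pipeline

# Placeholder repo id for illustration; substitute the actual model id of this repository.
MODEL_ID = "tyqiangz/indobert-lite-large-p2-smsa"

# Text-classification pipeline; return_all_scores=True yields one score per class,
# matching the nested list-of-dicts output shown in the README.
classifier = pipeline(
    "text-classification",
    model=MODEL_ID,
    tokenizer=MODEL_ID,
    return_all_scores=True,
)

# Example Indonesian sentence ("The service was very disappointing"), chosen for illustration.
print(classifier("Pelayanannya sangat mengecewakan."))
```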
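The hyperparameter list added at the bottom of the README also maps naturally onto a `Trainer` setup. The following is a minimal sketch only, assuming the base checkpoint `indobenchmark/indobert-lite-large-p2`, the Hub dataset `indonlu` with config `smsa`, and its `text`/`label` columns; none of these identifiers are stated in this commit, and the original finetuning code may differ.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    set_seed,
)

set_seed(42)  # random seed: 42

# Assumed identifiers; the actual base model and dataset names may differ.
BASE_MODEL = "indobenchmark/indobert-lite-large-p2"
dataset = load_dataset("indonlu", "smsa")

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

def tokenize(batch):
    # max sequence length: 512
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

encoded = dataset.map(tokenize, batched=True)

# 3 labels: positive / neutral / negative, as listed in the README.
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=3)

args = TrainingArguments(
    output_dir="indobert-lite-large-p2-smsa",
    learning_rate=2e-5,              # learning rate: 2e-5
    per_device_train_batch_size=16,  # batch size: 16
    num_train_epochs=5,              # no. of epochs: 5
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()
```

Reproducing the reported accuracy, F1, recall, and precision on the validation split would additionally require a suitable `compute_metrics` function passed to the `Trainer`.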