---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- indolem_sentiment
metrics:
- accuracy
- f1
model-index:
- name: scenario-normal-finetune-clf-data-indolem_sentiment-model-xlm-roberta-base
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: indolem_sentiment
      type: indolem_sentiment
      config: indolem_sentiment_nusantara_text
      split: validation
      args: indolem_sentiment_nusantara_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9147869674185464
    - name: F1
      type: f1
      value: 0.8629032258064516
---


# scenario-normal-finetune-clf-data-indolem_sentiment-model-xlm-roberta-base

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the indolem_sentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5769
- Accuracy: 0.9148
- F1: 0.8629

## Model description

This is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) with a sequence-classification head, fine-tuned for binary (positive/negative) sentiment classification of Indonesian text on the IndoLEM sentiment task.

## Intended uses & limitations

The model is intended for sentiment classification of Indonesian text. No evaluation beyond the indolem_sentiment validation split is reported here, so behavior on other domains or languages is undocumented.
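
A minimal inference sketch, assuming the fine-tuned checkpoint is available locally or under a Hub repo id (the model path below is a placeholder):

```python
from transformers import pipeline

# Placeholder path: substitute the actual Hub repo id or the local
# output directory of this fine-tuning run.
classifier = pipeline(
    "text-classification",
    model="path/to/scenario-normal-finetune-clf-data-indolem_sentiment-model-xlm-roberta-base",
)

# Indonesian example: "The service at this restaurant is very satisfying."
print(classifier("Pelayanan di restoran ini sangat memuaskan."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- label names depend on the saved config
```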

## Training and evaluation data

The model was trained and evaluated on the indolem_sentiment dataset (config `indolem_sentiment_nusantara_text`); the results above are reported on the validation split.
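
A sketch of loading the dataset with the `datasets` library, assuming it resolves under the id recorded in this card's metadata (adjust the id or point to the appropriate loading script if the dataset is hosted elsewhere):

```python
from datasets import load_dataset

# Assumed id/config, copied from this card's metadata; adjust if the dataset
# lives under a different namespace or requires a custom loading script.
ds = load_dataset("indolem_sentiment", "indolem_sentiment_nusantara_text")

print(ds)                    # split names and sizes
print(ds["validation"][0])   # one validation example
```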

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reconstructing them appears after the list):
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
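
A sketch reconstructing these settings as `TrainingArguments`; the output directory and the 200-step evaluation cadence (visible in the results table below) are assumptions, not values recorded in the list above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scenario-normal-finetune-clf-data-indolem_sentiment-model-xlm-roberta-base",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    # The listed Adam betas (0.9, 0.999) and epsilon (1e-08) are the defaults,
    # so no explicit optimizer arguments are needed here.
    evaluation_strategy="steps",
    eval_steps=200,    # assumption: matches the 200-step cadence in the results table
    logging_steps=200,
)
```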

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 0.44  | 200  | 0.4983          | 0.7068   | 0.0    |
| No log        | 0.88  | 400  | 0.4663          | 0.7995   | 0.7059 |
| 0.5119        | 1.32  | 600  | 0.4746          | 0.8722   | 0.7792 |
| 0.5119        | 1.76  | 800  | 0.4463          | 0.8797   | 0.7949 |
| 0.3523        | 2.2   | 1000 | 0.5374          | 0.8772   | 0.7984 |
| 0.3523        | 2.64  | 1200 | 0.4591          | 0.8897   | 0.8087 |
| 0.3523        | 3.08  | 1400 | 0.4909          | 0.8872   | 0.8148 |
| 0.2978        | 3.52  | 1600 | 0.5236          | 0.8872   | 0.8263 |
| 0.2978        | 3.96  | 1800 | 0.4410          | 0.9148   | 0.8559 |
| 0.2623        | 4.4   | 2000 | 0.4655          | 0.8997   | 0.8347 |
| 0.2623        | 4.84  | 2200 | 0.6111          | 0.8772   | 0.8231 |
| 0.2623        | 5.27  | 2400 | 0.4194          | 0.9198   | 0.8667 |
| 0.1863        | 5.71  | 2600 | 0.5278          | 0.8972   | 0.8392 |
| 0.1863        | 6.15  | 2800 | 0.4805          | 0.9173   | 0.8559 |
| 0.1332        | 6.59  | 3000 | 0.5610          | 0.9098   | 0.8548 |
| 0.1332        | 7.03  | 3200 | 0.4435          | 0.9248   | 0.8750 |
| 0.1332        | 7.47  | 3400 | 0.5367          | 0.9148   | 0.8651 |
| 0.1143        | 7.91  | 3600 | 0.5159          | 0.9148   | 0.8618 |
| 0.1143        | 8.35  | 3800 | 0.5945          | 0.9098   | 0.8487 |
| 0.0836        | 8.79  | 4000 | 0.7401          | 0.8947   | 0.8421 |
| 0.0836        | 9.23  | 4200 | 0.5591          | 0.9148   | 0.8618 |
| 0.0836        | 9.67  | 4400 | 0.6025          | 0.9123   | 0.8511 |
| 0.0899        | 10.11 | 4600 | 0.5769          | 0.9148   | 0.8629 |
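
A sketch of a `compute_metrics` function that would produce the accuracy and F1 columns above, assuming binary F1 (consistent with the 0.0 F1 at the first evaluation step):

```python
import numpy as np
import evaluate

# Metric objects from the Evaluate library; "f1" defaults to binary F1.
accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1_metric.compute(predictions=preds, references=labels)["f1"],
    }
```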


### Framework versions

- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3