Update README.md
README.md CHANGED
@@ -168,4 +168,18 @@ model_path = "5CD-AI/viso-twhin-bert-large"
 mask_filler = pipeline("fill-mask", model_path)
 
 mask_filler("đúng nhận sai <mask>", top_k=10)
-```
+```
+
+## Fine-tune Configuration
+We fine-tune `5CD-AI/viso-twhin-bert-large` on 4 downstream tasks with the `transformers` library, using the following configuration:
+- seed: 42
+- gradient_accumulation_steps: 1
+- weight_decay: 0.01
+- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-08
+- training_epochs: 30
+- model_max_length: 128
+- learning_rate: 1e-5
+And the following task-specific configurations:
+| Emotion Recognition | Hate Speech Detection | Spam Reviews Detection | Hate Speech Spans Detection |
+| --- | --- | --- | --- |
+| \- train_batch_size: 64<br>\- lr_scheduler_type: linear | \- train_batch_size: 32<br>\- lr_scheduler_type: linear | \- train_batch_size: 32<br>\- lr_scheduler_type: cosine | \- train_batch_size: 32<br>\- lr_scheduler_type: cosine |
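
The shared hyperparameters above correspond to keyword arguments of `transformers.TrainingArguments`. A minimal sketch, assuming the `Trainer` API was used (the `shared_config`/`task_config` names and the override layout are illustrative, not from the repo; `model_max_length: 128` is a tokenizer setting and is applied separately):

```python
# Shared fine-tuning hyperparameters, as TrainingArguments keyword arguments.
shared_config = dict(
    seed=42,
    gradient_accumulation_steps=1,
    weight_decay=0.01,
    adam_beta1=0.9,      # AdamW betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    num_train_epochs=30,
    learning_rate=1e-5,
)

# Per-task overrides from the table (Emotion Recognition column shown;
# the other tasks use batch size 32 and, for the two detection tasks
# on the right, a cosine scheduler).
task_config = dict(
    per_device_train_batch_size=64,
    lr_scheduler_type="linear",
)

# Hypothetical assembly with the Trainer API:
# from transformers import TrainingArguments
# args = TrainingArguments(output_dir="out", **shared_config, **task_config)
```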