|
--- |
|
library_name: transformers |
|
tags: [] |
|
pipeline_tag: fill-mask |
|
widget: |
|
- text: "shop làm ăn như cái <mask>" |
|
- text: "hag từ Quảng <mask> kực nét" |
|
- text: "Set xinh quá, <mask> bèo nhèo" |
|
- text: "ăn nói xà <mask>" |
|
--- |
|
|
|
# 5CD-AI/visobert-14gb-corpus |
|
## Overview |
|
|
We continually pretrain `uitnlp/visobert` on a merged 14GB corpus. The training data includes:

- Internal data (100M Facebook comments and 15M Facebook posts)

- The UIT data used to pretrain `uitnlp/visobert`

- The mC4 e-commerce data
|
|
|
Below are the results on 4 downstream tasks on Vietnamese social media texts: Emotion Recognition (UIT-VSMEC), Hate Speech Detection (UIT-HSD), Spam Reviews Detection (ViSpamReviews), and Hate Speech Spans Detection (ViHOS). Acc, WF1, and MF1 denote accuracy, weighted F1, and macro F1, respectively:
|
<table>
<tr align="center">
<td rowspan="2"><b>Model</b></td>
<td rowspan="2"><b>Avg MF1</b></td>
<td colspan="3"><b>Emotion Recognition</b></td>
<td colspan="3"><b>Hate Speech Detection</b></td>
<td colspan="3"><b>Spam Reviews Detection</b></td>
<td colspan="3"><b>Hate Speech Spans Detection</b></td>
</tr>
<tr align="center">
<td><b>Acc</b></td><td><b>WF1</b></td><td><b>MF1</b></td>
<td><b>Acc</b></td><td><b>WF1</b></td><td><b>MF1</b></td>
<td><b>Acc</b></td><td><b>WF1</b></td><td><b>MF1</b></td>
<td><b>Acc</b></td><td><b>WF1</b></td><td><b>MF1</b></td>
</tr>
<tr align="center">
<td align="left">viBERT</td>
<td>78.16</td>
<td>61.91</td><td>61.98</td><td>59.7</td>
<td>85.34</td><td>85.01</td><td>62.07</td>
<td>89.93</td><td>89.79</td><td>76.8</td>
<td>90.42</td><td>90.45</td><td>84.55</td>
</tr>
<tr align="center">
<td align="left">vELECTRA</td>
<td>79.23</td>
<td>64.79</td><td>64.71</td><td>61.95</td>
<td>86.96</td><td>86.37</td><td>63.95</td>
<td>89.83</td><td>89.68</td><td>76.23</td>
<td>90.59</td><td>90.58</td><td>85.12</td>
</tr>
<tr align="center">
<td align="left">PhoBERT-Base</td>
<td>79.3</td>
<td>63.49</td><td>63.36</td><td>61.41</td>
<td>87.12</td><td>86.81</td><td>65.01</td>
<td>89.83</td><td>89.75</td><td>76.18</td>
<td>91.32</td><td>91.38</td><td>85.92</td>
</tr>
<tr align="center">
<td align="left">PhoBERT-Large</td>
<td>79.82</td>
<td>64.71</td><td>64.66</td><td>62.55</td>
<td>87.32</td><td>86.98</td><td>65.14</td>
<td>90.12</td><td>90.03</td><td>76.88</td>
<td>91.44</td><td>91.46</td><td>86.56</td>
</tr>
<tr align="center">
<td align="left">ViSoBERT</td>
<td>81.58</td>
<td>68.1</td><td>68.37</td><td>65.88</td>
<td>88.51</td><td>88.31</td><td>68.77</td>
<td>90.99</td><td><b>90.92</b></td><td><b>79.06</b></td>
<td>91.62</td><td>91.57</td><td>86.8</td>
</tr>
<tr align="center">
<td align="left">visobert-14gb-corpus</td>
<td><b>82.2</b></td>
<td><b>68.69</b></td><td><b>68.75</b></td><td><b>66.03</b></td>
<td><b>88.79</b></td><td><b>88.6</b></td><td><b>69.57</b></td>
<td><b>91.02</b></td><td>90.88</td><td>77.13</td>
<td><b>93.69</b></td><td><b>93.63</b></td><td><b>89.66</b></td>
</tr>
</table>
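
For reference, a minimal sketch of how these metrics can be computed with scikit-learn, assuming the standard accuracy/weighted-F1/macro-F1 definitions (`y_true` and `y_pred` are placeholder label arrays for illustration only):

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder gold labels and predictions, for illustration only
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)                 # Acc
wf1 = f1_score(y_true, y_pred, average="weighted")   # WF1: weighted F1
mf1 = f1_score(y_true, y_pred, average="macro")      # MF1: macro F1
print(acc, wf1, mf1)
```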
|
|
|
## Usage (HuggingFace Transformers) |
|
|
|
Install the `transformers` package:

```bash
pip install transformers
```
|
|
|
Then you can use this model for the fill-mask task like this:
|
|
|
```python
from transformers import pipeline

model_path = "5CD-AI/visobert-14gb-corpus"

# Load a fill-mask pipeline backed by this checkpoint
mask_filler = pipeline("fill-mask", model_path)

# Return the 10 most likely completions for the masked token
mask_filler("shop làm ăn như cái <mask>", top_k=10)
```
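
You can also load the tokenizer and model directly, for example to extract contextual embeddings. A minimal sketch, assuming this checkpoint resolves through the standard `AutoTokenizer`/`AutoModel` classes:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "5CD-AI/visobert-14gb-corpus"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path)

text = "shop làm ăn như cái <mask>"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Last hidden states: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```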
|
|
|
## Fine-tune Configuration |
|
We fine-tune `5CD-AI/visobert-14gb-corpus` on the 4 downstream tasks with the `transformers` library using the following configuration:
|
- seed: 42 |
|
- gradient_accumulation_steps: 1 |
|
- weight_decay: 0.01 |
|
- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-08 |
|
- training_epochs: 30 |
|
- model_max_length: 128 |
|
- learning_rate: 1e-5 |
|
- metric_for_best_model: wf1 |
|
- strategy: epoch |
|
|
|
And different additional configurations for each task (see the `TrainingArguments` sketch after the table):
|
|                     | Emotion Recognition | Hate Speech Detection | Spam Reviews Detection | Hate Speech Spans Detection |
| ------------------- | ------------------- | --------------------- | ---------------------- | --------------------------- |
| train_batch_size    | 64                  | 32                    | 32                     | 32                          |
| lr_scheduler_type   | linear              | linear                | cosine                 | cosine                      |
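
A hedged sketch of how these values map onto `transformers.TrainingArguments`; the output path and the eval/save cadence are assumptions, and `model_max_length: 128` is applied at tokenization time rather than here:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="visobert-14gb-corpus-finetuned",  # hypothetical output path
    seed=42,
    gradient_accumulation_steps=1,
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    num_train_epochs=30,
    learning_rate=1e-5,
    per_device_train_batch_size=64,   # 64 for Emotion Recognition, 32 for the other tasks
    lr_scheduler_type="linear",       # linear or cosine, per the table above
    metric_for_best_model="wf1",      # requires a compute_metrics fn that reports "wf1"
    load_best_model_at_end=True,      # assumption, implied by metric_for_best_model
    eval_strategy="epoch",            # named `evaluation_strategy` in older releases
    save_strategy="epoch",
)
```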
|
|