
electra-base-cyberbullying

This is a BERT Base model for the Japanese language, fine-tuned for automatic cyberbullying detection.

The model is based on Hiroshi Matsuda's BERT base Japanese model and was fine-tuned on a balanced dataset created by combining two datasets: the "Harmful BBS Japanese comments dataset" and the "Twitter Japanese cyberbullying dataset".
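
Usage

The snippet below is a minimal, illustrative sketch of how a fine-tuned sequence-classification model like this one can be loaded with the Hugging Face transformers pipeline. The repository id is taken from the citation URL in the Citations section, and the label names and extra tokenizer dependencies (such as fugashi and unidic-lite, which Japanese BERT tokenizers commonly require) are assumptions that depend on the published model files, so adjust as needed.

from transformers import pipeline

# Repository id taken from the citation URL in the Citations section;
# replace it with this model's actual repository id if it differs.
model_id = "kit-nlp/bert-base-japanese-basic-char-v2-cyberbullying"

# Japanese BERT tokenizers commonly need extra packages, e.g.:
#   pip install transformers fugashi unidic-lite
classifier = pipeline("text-classification", model=model_id, tokenizer=model_id)

# Each result is a dict with a predicted label and a confidence score;
# the exact label names depend on the fine-tuned model's configuration.
print(classifier("お前は本当にバカだな"))    # an abusive example sentence
print(classifier("今日はいい天気ですね"))    # a harmless example sentence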

Licenses

The fine-tuned model and all attached files are licensed under CC BY-SA 4.0 (the Creative Commons Attribution-ShareAlike 4.0 International License).

Citations

Please cite this model using the following BibTeX entry.

@inproceedings{tanabe2022bert-base-cyberbullying-matsuda,
  title={北見工業大学 テキスト情報処理研究室 BERT Base ネットいじめ検出モデル (Hiroshi Matsuda ver.)},
  author={田邊 威裕 and プタシンスキ ミハウ and エロネン ユーソ and 桝井 文人},
  publisher={HuggingFace},
  year={2022},
  url={https://huggingface.co/kit-nlp/bert-base-japanese-basic-char-v2-cyberbullying/}
}