# BERT for Sentiment Analysis of Japanese Twitter
This model was fine-tuned from BERT for Japanese Twitter, which was adapted from Tohoku NLP's Japanese BERT by continuing masked language modeling (MLM) on a Twitter corpus.
It was fine-tuned on Japanese Twitter Sentiment 1k (JTS1k), omitting the mixed-sentiment examples.
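For reference, a fine-tuning setup of this kind can be approximated with the `Trainer` API, as in the sketch below. The dataset identifier, column names, mixed-label handling, and hyperparameters are assumptions for illustration, not the exact recipe used for this model.

```python
# Hypothetical sketch of the fine-tuning step; the dataset ID, column names,
# the treatment of "mixed" examples, and the hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

base = "LoneWolfgang/bert-for-japanese-twitter"  # assumed ID of the MLM-adapted base model

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(
    base,
    num_labels=3,
    id2label={0: "negative", 1: "neutral", 2: "positive"},
    label2id={"negative": 0, "neutral": 1, "positive": 2},
)

# "example/jts1k" is a placeholder ID; JTS1k is assumed to expose "text" and "label" columns.
dataset = load_dataset("example/jts1k")
dataset = dataset.filter(lambda ex: ex["label"] in (0, 1, 2))  # keep only the three sentiment classes

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bert-for-japanese-twitter-sentiment",
        learning_rate=2e-5,
        per_device_train_batch_size=16,
        num_train_epochs=3,
    ),
    train_dataset=tokenized["train"],
    tokenizer=tokenizer,
)
trainer.train()
```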
## Labels

- 0 -> negative
- 1 -> neutral
- 2 -> positive
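The pipeline in the next section resolves these ids to label names via the model config. As a minimal sketch (assuming the config's `id2label` carries the mapping above), the raw logits can also be mapped by hand:

```python
# Minimal sketch: map raw classifier logits to the label names above
# using the id2label mapping stored in the model config.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "LoneWolfgang/bert-for-japanese-twitter-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("新しいアルバム、最高だった！", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
print(pred_id, model.config.id2label[pred_id])
```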
## Example Pipeline

```python
from transformers import pipeline

sentiment = pipeline("sentiment-analysis", model="LoneWolfgang/bert-for-japanese-twitter-sentiment")
sentiment("こちらのカフェ、サービスが残念でした。二度と行かないかな…😞 #がっかり")
# [{'label': 'negative', 'score': 0.8242}]
```
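To inspect the score for every class rather than only the top prediction, the text-classification pipeline also accepts `top_k=None` at call time (the example text below is illustrative):

```python
# Returns one entry per label, sorted by score.
sentiment("このイベント、普通に楽しかった。", top_k=None)
```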