Query the hosted Inference API:

    curl -X POST \
      -H "Authorization: Bearer YOUR_ORG_OR_USER_API_TOKEN" \
      -H "Content-Type: application/json" \
      -d '"json encoded string"' \
      https://api-inference.huggingface.co/models/nlptown/bert-base-multilingual-uncased-sentiment

Monthly model downloads: 21,535 (last 30 days)

Frameworks: PyTorch (pytorch), TensorFlow (tf)

Contributed by: NLP Town (nlptown)

How to use this model directly from the 🤗/transformers library:

			
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment")
    model = AutoModelForSequenceClassification.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment")

bert-base-multilingual-uncased-sentiment

This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish, and Italian. It predicts the sentiment of a review as a number of stars (between 1 and 5).

This model is intended for direct use as a sentiment analysis model for product reviews in any of the six languages above, or for further finetuning on related sentiment analysis tasks.
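Under the hood the model is a five-way classifier: its head outputs one logit per star rating, and the predicted rating is the argmax over those logits. A minimal sketch of that mapping, using made-up logits for illustration:

```python
import math

# The classification head outputs 5 logits, one per star rating
# (index 0 -> 1 star, ..., index 4 -> 5 stars). These example
# logits are invented for illustration.
logits = [-1.2, -0.3, 0.1, 1.8, 2.4]

# Softmax turns the logits into a probability per star count.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The predicted rating is the argmax, shifted onto the 1-5 star scale.
stars = max(range(5), key=lambda i: probs[i]) + 1
print(stars)  # 5
```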

Training data

Here is the number of product reviews we used for finetuning the model:

Language Number of reviews
English 150k
Dutch 80k
German 137k
French 140k
Italian 72k
Spanish 50k

Accuracy

The finetuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages:

  • Accuracy (exact) is the exact match on the number of stars.
  • Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.
Language Accuracy (exact) Accuracy (off-by-1)
English 67% 95%
Dutch 57% 93%
German 61% 94%
French 59% 94%
Italian 59% 95%
Spanish 58% 95%
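The two accuracy metrics above can be sketched on a toy set of predictions (the star values below are invented for illustration, not taken from the actual evaluation set):

```python
# Human star ratings and model predictions for a toy held-out set.
gold = [5, 4, 3, 1, 2, 5]
pred = [5, 3, 3, 2, 4, 5]

# Accuracy (exact): the prediction matches the human rating exactly.
exact = sum(p == g for p, g in zip(pred, gold)) / len(gold)

# Accuracy (off-by-1): the prediction is within one star of the human rating.
off_by_1 = sum(abs(p - g) <= 1 for p, g in zip(pred, gold)) / len(gold)

print(f"exact={exact:.2f}, off-by-1={off_by_1:.2f}")
```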

Contact

Contact NLP Town for questions, feedback and/or requests for similar models.