
Model Card for Bert-User-Review-Rating

This model performs sentiment analysis on user reviews, rating them on a scale from 1 (Very Bad) to 5 (Very Good).

Model Details

Model Description

The Bert-User-Review-Rating model was trained on a dataset of 1,300,000 reviews of public places and points of interest. It predicts a user rating from 1 to 5, where:

5 = Very Good

4 = Good

3 = Neutral

2 = Bad

1 = Very Bad

Model type: BERT-based sentiment analysis model

Model size: approximately 109M parameters (F32 safetensors)

Language(s) (NLP): English

Direct Use

The model can be used directly to classify the sentiment of user reviews for public places and points of interest, providing a rating from 1 to 5.

Bias, Risks and Limitations

The model may reflect biases present in the training data, such as cultural or regional biases, since the training data consists of reviews of public places in Singapore.

Recommendations

Users should be aware of potential biases and limitations in the model’s performance, particularly when applied to reviews from different regions or contexts.

How to Get Started with the Model

Use the code below to get started with the model.

Use a pipeline as a high-level helper:

from transformers import pipeline

pipe = pipeline("text-classification", model="mekes/Bert-User-Review-Rating")
result = pipe("The food was super tasty, I enjoyed every bite.")
print(result)
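
The pipeline returns a label and a confidence score. The snippet below is a minimal sketch for converting that output into a numeric 1 to 5 rating; it assumes the checkpoint uses the default LABEL_0 … LABEL_4 naming, with LABEL_0 corresponding to a 1-star review. Check pipe.model.config.id2label to confirm the actual mapping before relying on it:

# Sketch only: assumes default id2label names (LABEL_0 ... LABEL_4),
# where LABEL_0 maps to a 1-star review. Verify with pipe.model.config.id2label.
label = result[0]["label"]                # e.g. "LABEL_4"
score = result[0]["score"]
rating = int(label.split("_")[-1]) + 1    # LABEL_0..LABEL_4 -> 1..5
print(f"Predicted rating: {rating} (confidence {score:.2f})")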

Metrics

The model performs better in practice than these metrics suggest, because evaluation only counted exact label matches: predicting 4 for a 5-star review was scored as wrong, even though it is far closer to the truth than predicting 1.

Test Accuracy: 0.714
Test F1 Score: 0.695
Test Loss: 0.698
Test Recall: 0.714
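
To illustrate this point, the sketch below (not the original evaluation script; the labels and predictions are invented) compares strict exact-match accuracy with an off-by-one-tolerant accuracy and the mean absolute error, which treat the 1 to 5 scale as ordinal:

import numpy as np

# Invented example data: most predictions miss by at most one star.
y_true = np.array([5, 5, 4, 3, 1, 2])
y_pred = np.array([4, 5, 4, 2, 1, 1])

exact_match = (y_true == y_pred).mean()              # the strict metric reported above
within_one = (np.abs(y_true - y_pred) <= 1).mean()   # counts near-misses as correct
mae = np.abs(y_true - y_pred).mean()                 # average distance in stars

print(f"exact-match accuracy: {exact_match:.3f}")
print(f"within-one accuracy:  {within_one:.3f}")
print(f"mean absolute error:  {mae:.3f}")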

Environmental Impact

Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

The calculation was based on an Nvidia RTX 3090 rather than the Nvidia RTX 4090 that was actually used for training.

One training run emitted approximately 1.5 kg of CO2.
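
For context, the calculator's estimate follows roughly GPU power draw × training time × grid carbon intensity. The sketch below illustrates that formula using the RTX 3090's nominal 350 W TDP together with placeholder values for training time and carbon intensity, which are not reported above:

# Rough sketch of the ML CO2 Impact estimate (Lacoste et al., 2019).
# Training time and carbon intensity below are placeholder assumptions,
# not values reported for this model.
gpu_power_kw = 0.350        # nominal TDP of an RTX 3090
training_hours = 9.0        # hypothetical training duration
carbon_intensity = 0.475    # hypothetical grid average, kg CO2-eq per kWh

emissions_kg = gpu_power_kw * training_hours * carbon_intensity
print(f"Estimated emissions: {emissions_kg:.2f} kg CO2-eq")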
