API endpoint
curl -X POST \
    -H "Authorization: Bearer YOUR_ORG_OR_USER_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '"json encoded string"' \
    https://api-inference.huggingface.co/models/lvwerra/bert-imdb
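
The same request can be made from Python with the requests library. This is a minimal sketch, not an official client: the example review text and the token placeholder are illustrative, and the exact shape of the JSON response is determined by the Inference API.

import requests

API_URL = "https://api-inference.huggingface.co/models/lvwerra/bert-imdb"
headers = {"Authorization": "Bearer YOUR_ORG_OR_USER_API_TOKEN"}

# POST a JSON-encoded string, mirroring the curl call above
response = requests.post(API_URL, headers=headers, json="I really enjoyed this movie!")
print(response.json())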

Monthly model downloads

lvwerra/bert-imdb: 259 downloads in the last 30 days

Frameworks: pytorch, tf

Contributed by lvwerra (Leandro von Werra), 5 models

How to use this model directly from the 🤗/transformers library:

			
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("lvwerra/bert-imdb")
model = AutoModelForSequenceClassification.from_pretrained("lvwerra/bert-imdb")
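
To classify a review with the loaded model, a forward pass like the following can be used. This is a minimal sketch: the example sentence is made up, and the mapping from class index to sentiment label should be checked against model.config.id2label rather than assumed.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("lvwerra/bert-imdb")
model = AutoModelForSequenceClassification.from_pretrained("lvwerra/bert-imdb")

# Tokenize a single review and run the classifier without tracking gradients
inputs = tokenizer("A wonderful film with great performances.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Index of the highest-scoring class; label names live in model.config.id2label
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class, model.config.id2label[predicted_class])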

BERT-IMDB

What is it?

BERT (bert-large-cased) trained for sentiment classification on the IMDB dataset.

Training setting

The model was fine-tuned on 80% of the IMDB dataset for sentiment classification for three epochs with a learning rate of 1e-5, using the simpletransformers library. The library applies a learning rate schedule during training.
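
A run with these settings might look roughly like the sketch below. It is not the author's exact script: the 80/20 split, the args dictionary, and the DataFrame conventions follow the simpletransformers ClassificationModel API, and imdb_df is a placeholder for a DataFrame of reviews and labels.

from simpletransformers.classification import ClassificationModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# imdb_df is assumed to have the columns "text" and "labels" (0 = negative, 1 = positive)
train_df, eval_df = train_test_split(imdb_df, test_size=0.2, random_state=42)

model = ClassificationModel(
    "bert",
    "bert-large-cased",
    args={"num_train_epochs": 3, "learning_rate": 1e-5},
)

# Fine-tune on the 80% training split, then report accuracy on the held-out 20%
model.train_model(train_df)
result, model_outputs, wrong_predictions = model.eval_model(eval_df, acc=accuracy_score)
print(result["acc"])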

Result

The model achieved 90% classification accuracy on the validation set.

Reference

The full experiment is available in the trl repo.