
Fine-tuned IndoBERT

This model is a fine-tuned version of IndoBERT for sentiment analysis.

Model Details

  • Model Architecture: BERT (Bidirectional Encoder Representations from Transformers)
  • Fine-tuning Objective: Sentiment Analysis
  • Dataset: DANA app sentiment reviews from the Indonesian Google Play Store, via Kaggle:

https://www.kaggle.com/datasets/alexmariosimanjuntak/dana-app-sentiment-review-on-playstore-indonesia/code

Usage

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForSequenceClassification.from_pretrained("your-username/fine-tuned-indobert")
tokenizer = AutoTokenizer.from_pretrained("your-username/fine-tuned-indobert")

# Tokenize the input and run a forward pass to obtain sentiment logits
inputs = tokenizer("Your input text", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
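The model returns raw logits rather than probabilities. A minimal post-processing sketch is below; the logit values and the three-way label order are illustrative assumptions only, so check the `id2label` mapping in the model's config.json for the real classes:

```python
import math

# Hypothetical logits for one review; in practice these come from outputs.logits
logits = [1.2, -0.3, 2.1]

# Softmax turns raw logits into a probability distribution over classes
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Assumed label order -- verify against the model's id2label in config.json
labels = ["negative", "neutral", "positive"]
prediction = labels[probs.index(max(probs))]
print(prediction)  # -> positive (index 2 has the largest logit)
```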

Training data

The model was fine-tuned on the DANA Play Store review dataset linked above, with reviews labeled by sentiment.

Hyperparameters

  • Learning rate: 2e-05
  • Train batch size: 6
  • Eval batch size: 6
  • Epochs: 5
  • Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
  • LR scheduler type: Linear
  • Seed: 42

Evaluation

  • Accuracy: 0.8578
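With a linear scheduler, the learning rate decays from 2e-05 toward 0 over training. A minimal sketch of that decay, assuming zero warmup steps (the warmup setting is not stated above) and a hypothetical total step count:

```python
def linear_lr(step, total_steps, base_lr=2e-05):
    """Linearly decay the learning rate from base_lr to 0 (no warmup assumed)."""
    return base_lr * (1 - step / total_steps)

# Halfway through training the rate has dropped to half of 2e-05
print(linear_lr(step=500, total_steps=1000))  # -> 1e-05
```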
Model size

  • 124M parameters (safetensors, F32 tensors)