---
license: apache-2.0
language:
  - id
  - en
---

# Fine-tuned IndoBERT

This model is a fine-tuned version of IndoBERT for sentiment analysis of Indonesian-language reviews of the DANA app on the Google Play Store.

## Model Details

- Model architecture: BERT (Bidirectional Encoder Representations from Transformers)
- Fine-tuning objective: Sentiment analysis
- Dataset: [DANA app sentiment reviews on the Play Store Indonesia (Kaggle)](https://www.kaggle.com/datasets/alexmariosimanjuntak/dana-app-sentiment-review-on-playstore-indonesia/code)

## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForSequenceClassification.from_pretrained("your-username/fine-tuned-indobert")
tokenizer = AutoTokenizer.from_pretrained("your-username/fine-tuned-indobert")

# Tokenize an input review and run a forward pass
inputs = tokenizer("Your input text", return_tensors="pt")
outputs = model(**inputs)
```
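
The `outputs` object holds raw logits, one per sentiment class. A minimal sketch of turning them into a predicted label, assuming the class names stored in this checkpoint's `id2label` config are meaningful:

```python
import torch

# Convert logits to probabilities and pick the highest-scoring class.
# The label names come from the model's config; how the fine-tuning labels
# were encoded is not documented in this card.
probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(probs.argmax(dim=-1))
print(model.config.id2label[predicted_id], probs[0, predicted_id].item())
```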

## Training data

The model was trained on Indonesian-language reviews of the DANA app collected from the Google Play Store (the Kaggle dataset linked above), labeled for sentiment.

## Hyperparameters

- Learning rate: 2e-05
- Train batch size: 6
- Eval batch size: 6
- Epochs: 5
- Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- LR scheduler type: linear
- Seed: 42
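
For reference, the values above map onto the Hugging Face `Trainer` API. A minimal sketch, assuming training used `TrainingArguments` (the original training script is not included in this repository, and the `output_dir` below is arbitrary):

```python
from transformers import TrainingArguments

# Listed hyperparameters expressed as TrainingArguments. Whether the original
# run used plain Adam or the Trainer default (AdamW) is an assumption.
training_args = TrainingArguments(
    output_dir="fine-tuned-indobert",
    learning_rate=2e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
# These arguments would then be passed to a Trainer together with the
# tokenized DANA review splits, e.g.
# Trainer(model=..., args=training_args, train_dataset=..., eval_dataset=...).
```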

## Evaluation results

- Accuracy: 0.8578