RoBERTa-base-finetuned-yelp-polarity

This is a RoBERTa-base checkpoint fine-tuned for binary sentiment classification on the Yelp polarity dataset. It reaches 98.08% accuracy on the test set.
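A minimal inference sketch using the `transformers` pipeline API. The repo id below is a placeholder assumption, not necessarily the published one; substitute the actual Hub id of this checkpoint.

```python
from transformers import pipeline


def load_classifier(model_id="roberta-base-finetuned-yelp-polarity"):
    # model_id is a placeholder; replace it with the real Hub repo id.
    # "sentiment-analysis" is an alias for the text-classification task.
    return pipeline("sentiment-analysis", model=model_id)


# Usage (downloads the checkpoint on first call):
#     classifier = load_classifier()
#     classifier("The food was amazing and the staff were friendly!")
```

The pipeline returns a list of dicts with a predicted label and a confidence score for each input string.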

Hyper-parameters

We used the following hyper-parameters to train the model on one GPU:

num_train_epochs            = 2.0
learning_rate               = 1e-05
weight_decay                = 0.0
seed                        = 42
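The hyper-parameters above map directly onto the Hugging Face `TrainingArguments` used by the `Trainer` API. This is a sketch, not the exact training script; `output_dir` is an illustrative assumption, and any setting not listed above (e.g. batch size) is left at its default.

```python
from transformers import TrainingArguments

# Reproduces the reported settings; unlisted settings keep their defaults.
training_args = TrainingArguments(
    output_dir="roberta-base-finetuned-yelp-polarity",  # assumed, not reported
    num_train_epochs=2.0,
    learning_rate=1e-05,
    weight_decay=0.0,
    seed=42,
)
```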