
Model description

This model is a fine-tuned version of bert-base-uncased for classifying the sentiment of Yelp reviews.
The model is fine-tuned with adversarial training to improve its robustness against textual adversarial attacks.

How to use

You can use the model with the following code.

from transformers import BertForSequenceClassification, BertTokenizer, TextClassificationPipeline
model_path = "JiaqiLee/robust-bert-yelp"
tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path, num_labels=2)
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline("Definitely a greasy spoon! Always packed here and always a wait but worth it."))
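The pipeline returns a list with one dictionary per input, each containing a predicted label and a confidence score, for example [{'label': 'LABEL_1', 'score': 0.99}]. The exact label strings depend on the model's id2label mapping, so the values shown here are only illustrative.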

Training data

The training data comes from the Hugging Face yelp_polarity dataset. We use 90% of the data in train.csv to train the model.
We augment the original training data with adversarial examples generated by PWWS, TextBugger, and TextFooler.
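PWWS, TextBugger, and TextFooler are available as attack recipes in the TextAttack library. The card does not state the exact tooling or settings used, but a minimal sketch of how adversarial examples could be generated for augmentation with TextAttack looks like this (the recipe choice, example count, and output file name are illustrative assumptions):

# Sketch: generate adversarial examples with TextAttack (assumed tooling, not the card's exact setup).
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import PWWSRen2019  # also available: TextBuggerLi2018, TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Wrap a sentiment classifier so TextAttack can query its predictions.
model = transformers.AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build the PWWS attack and run it over part of the yelp_polarity train split,
# logging the perturbed texts so they can be added to the training data.
attack = PWWSRen2019.build(model_wrapper)
dataset = HuggingFaceDataset("yelp_polarity", split="train")
attack_args = AttackArgs(num_examples=1000, log_to_csv="pwws_adversarial.csv")
Attacker(attack, dataset, attack_args).attack_dataset()

The same pattern applies to the other two recipes; the successful adversarial examples are then mixed with the original reviews for fine-tuning.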

Evaluation results

The model achieves 0.9532 accuracy on the yelp_polarity test set.
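For reference, an evaluation of this kind can be reproduced with a simple loop over the test split. The snippet below is an illustrative sketch rather than the exact evaluation script, and it assumes the model's output indices follow the yelp_polarity label convention (0 = negative, 1 = positive):

# Sketch: measure accuracy on the yelp_polarity test split (illustrative, assumed label mapping).
import torch
from datasets import load_dataset
from transformers import BertForSequenceClassification, BertTokenizer

model_path = "JiaqiLee/robust-bert-yelp"
tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path).eval()

test_data = load_dataset("yelp_polarity", split="test")
correct = 0
for i in range(0, len(test_data), 32):
    batch = test_data[i : i + 32]  # dict of lists with "text" and "label"
    enc = tokenizer(batch["text"], padding=True, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        preds = model(**enc).logits.argmax(dim=-1)
    correct += (preds == torch.tensor(batch["label"])).sum().item()
print(correct / len(test_data))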

