siebert committed on
Commit 7631e41
1 Parent(s): 69e8208

Fixed typo

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -33,7 +33,7 @@ The model can also be used as a starting point for further fine-tuning of RoBERT
 
 
 # Performance
-To evaluate the performance of our general-purpose sentiment analysis model, we set aside an evaluation set from each data set, which was not used for training. On average, our model outperforms a [DistilBERT-based model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) (which is solely fine-tuned on the popular SST-2 data set) by more than 15 percentage points (78.1 vs. 93.2 percent, see table below). As a robustness check, we evaluate the model in a leave-on-out manner (training on 14 data sets, evaluating on the one left out), which decreases model performance by only about 3 percentage points on average and underscores its generalizability. Model performance is given as evaluation set accuracy in percent.
+To evaluate the performance of our general-purpose sentiment analysis model, we set aside an evaluation set from each data set, which was not used for training. On average, our model outperforms a [DistilBERT-based model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) (which is solely fine-tuned on the popular SST-2 data set) by more than 15 percentage points (78.1 vs. 93.2 percent, see table below). As a robustness check, we evaluate the model in a leave-one-out manner (training on 14 data sets, evaluating on the one left out), which decreases model performance by only about 3 percentage points on average and underscores its generalizability. Model performance is given as evaluation set accuracy in percent.
 
 |Dataset|DistilBERT SST-2|This model|
 |---|---|---|
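
For context, the changed paragraph describes how the evaluation-set accuracies in the README's table are obtained. Below is a minimal sketch of computing such an accuracy with the Transformers pipeline; it assumes the model ID `siebert/sentiment-roberta-large-english` and `POSITIVE`/`NEGATIVE` label names, and `eval_examples` is a hypothetical placeholder, not one of the actual held-out evaluation sets.

```python
from transformers import pipeline

# Hypothetical held-out examples -- the actual per-dataset evaluation sets
# referenced in the README are not reproduced here.
eval_examples = [
    {"text": "I love this movie!", "label": "POSITIVE"},
    {"text": "Worst purchase I have ever made.", "label": "NEGATIVE"},
]

# Assumed model ID for this sentiment model; the label strings returned by the
# pipeline come from the model's config and may differ.
classifier = pipeline(
    "sentiment-analysis", model="siebert/sentiment-roberta-large-english"
)

# Classify all texts in one batch and compare predictions against gold labels.
predictions = classifier([ex["text"] for ex in eval_examples])
correct = sum(
    pred["label"] == ex["label"] for pred, ex in zip(predictions, eval_examples)
)
accuracy = 100.0 * correct / len(eval_examples)
print(f"Evaluation set accuracy: {accuracy:.1f}%")
```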