siebert committed
Commit 4d7a7e7
Parent: 102f8ba

Added example for data set prediction

Files changed (1): README.md (+14 -12)
README.md CHANGED
@@ -8,12 +8,17 @@ tags:
 
 
 # Overview
- This model is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large) (Liu et al. 2019). It enables reliable binary sentiment analysis for various types of English-language text. For each instance, it predicts either positive (1) or negative (0) sentiment. The model was fine-tuned and evaluated on 15 data sets from diverse text sources to enhance generalization across different types of texts (reviews, tweets, etc.). Consequently, it outperforms models trained on only one type of text (e.g., movie reviews from the popular SST-2 benchmark) when used on new data as shown below.
-
- # Usage
- The model can be used with a few lines of code. We suggest that you manually label a subset of your data to evaluate performance for your use case. For performance benchmark values across different sentiment analysis contexts, please refer to our paper ([Heitmann et al. 2020](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3489963)). The model can also be used as a starting point for further [fine-tuning](https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-trainer) on your sentiment analysis task.
+ This model is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large) ([Liu et al. 2019](https://arxiv.org/pdf/1907.11692.pdf)). It enables reliable binary sentiment analysis for various types of English-language text. For each instance, it predicts either positive (1) or negative (0) sentiment. The model was fine-tuned and evaluated on 15 data sets from diverse text sources to enhance generalization across different types of texts (reviews, tweets, etc.). Consequently, it outperforms models trained on only one type of text (e.g., movie reviews from the popular SST-2 benchmark) when used on new data as shown below.
+
+
+ # Predictions on a data set
+ If you want to predict sentiment for your own data, we provide an example script via [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb). You can load your data to a Google Drive and run the script for free on a Colab GPU. Set-up takes only a few minutes. We suggest that you manually label a subset of your data to evaluate performance for your use case. For performance benchmark values across different sentiment analysis contexts, please refer to our paper ([Heitmann et al. 2020](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3489963)).
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/chrsiebert/sentiment-roberta-large-english/blob/main/sentiment_roberta_prediction_example.ipynb)
 
- The easiest way to use the model is Huggingface's [sentiment analysis pipeline](https://huggingface.co/transformers/quicktour.html#getting-started-on-a-task-with-a-pipeline):
+
+ # Use in a Hugging Face pipeline
+ The easiest way to use the model for single predictions is Hugging Face's [sentiment analysis pipeline](https://huggingface.co/transformers/quicktour.html#getting-started-on-a-task-with-a-pipeline), which only needs a couple of lines of code as in the following example:
 ```
 from transformers import pipeline
 sentiment_analysis = pipeline("sentiment-analysis",model="siebert/sentiment-roberta-large-english")
@@ -23,15 +28,12 @@ print(sentiment_analysis("I love this!"))
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/chrsiebert/sentiment-roberta-large-english/blob/main/sentiment_roberta_pipeline.ipynb)
 
 
- Alternatively, you can load the model as follows:
- ```
- from transformers import AutoTokenizer, AutoModelForSequenceClassification
- tokenizer = AutoTokenizer.from_pretrained("siebert/sentiment-roberta-large-english")
- model = AutoModelForSequenceClassification.from_pretrained("siebert/sentiment-roberta-large-english")
- ```
+ # Use for further fine-tuning
+ The model can also be used as a starting point for further fine-tuning on your specific data. Please refer to Hugging Face's [documentation](https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-trainer) for further details and example code.
+
 
 # Performance
- To evaluate the performance of our general-purpose sentiment analysis model, we set aside an evaluation set from each data set, which was not used for training. On average, our model outperforms a [DistilBERT-based model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) (which is solely fine-tuned on the popular SST-2 data set) by more than 15 percentage points (78.1 vs. 93.2, see table below). As a robustness check, we evaluate the model in a leave-one-out manner (training on 14 data sets, evaluating on the one left out), which decreases model performance by only about 3 percentage points on average and underscores its generalizability.
+ To evaluate the performance of our general-purpose sentiment analysis model, we set aside an evaluation set from each data set, which was not used for training. On average, our model outperforms a [DistilBERT-based model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) (which is solely fine-tuned on the popular SST-2 data set) by more than 15 percentage points (78.1 vs. 93.2, see table below). As a robustness check, we evaluate the model in a leave-one-out manner (training on 14 data sets, evaluating on the one left out), which decreases model performance by only about 3 percentage points on average and underscores its generalizability. Model performance is given as evaluation set accuracy in percent.
 
 |Dataset|DistilBERT SST-2|This model|
 |---|---|---|
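The "Predictions on a data set" section added in this commit points to a Colab notebook rather than inline code. As rough orientation, batch prediction with this model can look like the following sketch; the `reviews.csv` file and its `text` column are hypothetical stand-ins for your own data, not taken from the notebook:

```
import csv

from transformers import pipeline

# Load the fine-tuned checkpoint into a sentiment-analysis pipeline.
sentiment_analysis = pipeline("sentiment-analysis", model="siebert/sentiment-roberta-large-english")

# Read the texts to classify from a hypothetical CSV with a "text" column.
with open("reviews.csv", newline="", encoding="utf-8") as f:
    texts = [row["text"] for row in csv.DictReader(f)]

# The pipeline accepts a list of strings and returns one dict per input,
# e.g. {"label": "POSITIVE", "score": 0.99}; truncation guards against
# inputs longer than the model's maximum sequence length.
results = sentiment_analysis(texts, truncation=True)

for text, result in zip(texts, results):
    print(result["label"], round(result["score"], 3), text[:60])
```

On a free Colab GPU, as the README suggests, the same code runs unchanged; passing `device=0` to `pipeline(...)` moves inference onto the GPU.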
 
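The new "Use for further fine-tuning" section defers to the linked Transformers documentation. A minimal, self-contained sketch of what Trainer-based fine-tuning on your own labeled data could look like, following the pattern from that documentation; the two example sentences, their labels, and the hyperparameters are placeholders:

```
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "siebert/sentiment-roberta-large-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Placeholder labeled data: 1 = positive, 0 = negative.
texts = ["I love this!", "This is terrible."]
labels = [1, 0]
encodings = tokenizer(texts, truncation=True, padding=True)

class SentimentDataset(torch.utils.data.Dataset):
    # Wraps the tokenized encodings and labels for the Trainer.
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=SentimentDataset(encodings, labels),
)
trainer.train()
```

Starting from this checkpoint rather than plain RoBERTa-large means the model already separates positive from negative text, so fine-tuning mainly adapts it to the vocabulary and style of your domain.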
 
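Both versions of the README recommend manually labeling a subset of your data to evaluate performance for your use case. A small sketch of such a spot check; the four hand-labeled examples are illustrative, and the mapping assumes the model reports its positive class as the string "POSITIVE":

```
from transformers import pipeline

sentiment_analysis = pipeline("sentiment-analysis", model="siebert/sentiment-roberta-large-english")

# A hand-labeled sample (illustrative): 1 = positive, 0 = negative.
labeled_sample = [
    ("The support team resolved my issue quickly.", 1),
    ("The update broke everything and nobody responded.", 0),
    ("Absolutely worth the price.", 1),
    ("I regret this purchase.", 0),
]

texts = [text for text, _ in labeled_sample]
predictions = sentiment_analysis(texts)

# Map the model's string labels to 1/0 and compare against the gold labels.
predicted = [1 if p["label"] == "POSITIVE" else 0 for p in predictions]
correct = sum(pred == gold for pred, (_, gold) in zip(predicted, labeled_sample))
print(f"Accuracy on the labeled sample: {correct / len(labeled_sample):.2%}")
```

A few dozen examples from your own data give a far more meaningful estimate than four, but the mechanics stay the same.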