Ryan Kim committed on
Commit
27b97d0
1 Parent(s): 548d450

fixed some small typos in the readme

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -357,9 +357,9 @@ Train(train_claims_loader,upsto_claims_model_path, num_train_epochs=10)
 
  There isn't an effective way to validate the sentiment analysis models, as they are publicly available models and it is unknown what data they were explicitly trained on. Therefore, evaluation will rely on anecdotal testing.
 
- The sentiment models that appear to work the best are the [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) and - [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) models, with few caveats. These two models generally perform very well at detecting sentiment in mid to long expressions. However the [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) model tends to suffer when expressions are shorter and less complex in lexicon. Even the [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) model suffers from time to time if not enough context has been provided.
+ The sentiment models that appear to work the best are the [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) and [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) models, with few caveats. These two models generally perform very well at detecting sentiment in mid to long expressions. However the [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) model tends to suffer when expressions are shorter and less complex in lexicon. Even the [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) model suffers from time to time if not enough context has been provided.
 
- The model that performed the worst is the [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis). This model seems to have the worst time trying to interpret meaning from sentences, even with strongly worded language such as "hate". For example, the expression _"I hate you" returns a **NEUTRAL** response with 99.6% confidence, which differs from the [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) and - [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) models (**NEGATIVE**: ~96.5% - ~99.9% accuracy respectively). It appears that the [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) gets confused when not enough context is provided. The expression "I hate you because you hurt my family" manages to return a **NEGATIVE** label, but with a mere 87.7% confidence.
+ The model that performed the worst is the [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis). This model seems to have the worst time trying to interpret meaning from sentences, even with strongly worded language such as "hate". For example, the expression "I hate you" returns a **NEUTRAL** response with 99.6% confidence, which differs from the [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) and - [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) models (**NEGATIVE**: ~96.5% - ~99.9% accuracy respectively). It appears that the [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) gets confused when not enough context is provided. The expression "I hate you because you hurt my family" manages to return a **NEGATIVE** label, but with a mere 87.7% confidence.
 
  The unique model is the [bhadresh-savani/distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) model, which instead gives 6 general emotions as opposed to a binary **NEGATIVE** or **POSITIVE** rating:
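
The anecdotal testing described in the README text above can be reproduced with the Hugging Face `transformers` sentiment-analysis pipeline. The sketch below is an illustration, not part of the commit: the model names come from the README, but the helper functions (`format_result`, `run_checks`) are hypothetical, and exact scores may vary slightly across library and model versions. It is also worth noting that `finiteautomata/beto-sentiment-analysis` is built on BETO, a Spanish-language BERT, which likely explains its weak results on the English phrases.

```python
# Sketch: run the README's test phrases through each model with the
# `transformers` pipeline. Requires `pip install transformers torch`;
# each model is downloaded from the Hugging Face Hub on first use.

MODELS = [
    "cardiffnlp/twitter-roberta-base-sentiment",
    "siebert/sentiment-roberta-large-english",
    # BETO is a Spanish-language BERT, which likely explains its English results.
    "finiteautomata/beto-sentiment-analysis",
]


def format_result(result):
    """Render one pipeline output dict, e.g. {'label': 'NEGATIVE', 'score': 0.996},
    as 'NEGATIVE (99.6%)'. Label names vary by model (some emit LABEL_0/1/2)."""
    return f"{result['label']} ({result['score'] * 100:.1f}%)"


def run_checks(texts=("I hate you", "I hate you because you hurt my family")):
    """Print each model's verdict for each phrase (network access required)."""
    from transformers import pipeline  # imported lazily: heavy optional dependency

    for name in MODELS:
        classifier = pipeline("sentiment-analysis", model=name)
        for text in texts:
            print(f"{name} | {text!r} -> {format_result(classifier(text)[0])}")
```

Calling `run_checks()` should show the contrast the diff describes: a confident **NEGATIVE** from the two RoBERTa models on "I hate you", versus a **NEUTRAL** from the BETO model until more context is added.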