Roberta-Base Trained on Llama 3.1 405B Twitter Sentiment Classification
The FacebookAI/roberta-base 125M-parameter language model, fine-tuned for text classification on Twitter sentiment data annotated by meta-llama/Meta-Llama-3.1-405B.
Evaluation
Llama 3.1 405B Accuracy: 65.49%
Fine-Tuned RoBERTa Accuracy: 63.38%
Essentially the same performance at 0.03% of the parameters.*
*Further eval definitely needed!
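The "0.03% of the parameters" figure follows directly from the two model sizes named above (125M vs. 405B); a quick arithmetic check:

```python
# Approximate parameter counts from the model names above
roberta_params = 125_000_000       # RoBERTa-base, ~125M
llama_params = 405_000_000_000     # Llama 3.1 405B

ratio_pct = roberta_params / llama_params * 100
print(f"RoBERTa-base is {ratio_pct:.3f}% the size of Llama 3.1 405B")
# → RoBERTa-base is 0.031% the size of Llama 3.1 405B
```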
Fine-tuning Data Description
Texts and expected labels used in the accuracy calculation come from the mteb/tweet_sentiment_extraction dataset. Annotations were made on a subset of the tweet sentiment extraction dataset, after removing blank texts and entries flagged as inappropriate by Llama 3.1 405B's content filter. Annotations were generated via the Fireworks API, which is expected to host Llama 3.1 405B at FP8 precision.
The final train/test split is 4992/998 examples, available at AdamLucek/twittersentiment-llama-3.1-405B-labels.
Using the Model
from transformers import pipeline
# Create a sentiment-analysis pipeline
classifier = pipeline("sentiment-analysis", model="AdamLucek/roberta-llama3.1405B-twitter-sentiment")
classifier("Want to get a Blackberry but can`t afford it . Just watching the telly and relaxing. Hard sesion tomorrow.")
# Output: [{'label': 'neutral', 'score': 0.3881794810295105}]
Model Trained Using AutoTrain - Validation Metrics
loss: 0.6081525683403015
f1_macro: 0.7293016589919367
f1_micro: 0.7567567567567568
f1_weighted: 0.7525753769969824
precision_macro: 0.7459781321674904
precision_micro: 0.7567567567567568
precision_weighted: 0.7607241180619724
recall_macro: 0.727181992488115
recall_micro: 0.7567567567567568
recall_weighted: 0.7567567567567568
accuracy: 0.7567567567567568
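One sanity check on the metrics above: in single-label multiclass classification, each wrong prediction counts as one false positive (for the predicted class) and one false negative (for the true class), so micro-averaged precision, recall, and F1 all collapse to plain accuracy. That is why the three *_micro rows and the accuracy row share the value 0.7567…. A minimal illustration with hypothetical labels:

```python
# Hypothetical true/predicted labels for a 3-class sentiment task
y_true = ["positive", "negative", "neutral", "positive", "negative", "neutral"]
y_pred = ["positive", "negative", "positive", "positive", "neutral", "neutral"]

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)

# Micro-averaging sums TP/FP/FN over classes. Each mistake contributes
# exactly one FP and one FN, so summed FP == summed FN == total mistakes.
tp = correct
fp = fn = len(y_true) - correct
micro_precision = tp / (tp + fp)
micro_recall = tp / (tp + fn)
micro_f1 = 2 * micro_precision * micro_recall / (micro_precision + micro_recall)

print(accuracy, micro_precision, micro_recall, micro_f1)  # all four are equal
```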
Model tree for AdamLucek/roberta-llama3.1405B-twitter-sentiment
Base model
FacebookAI/roberta-base