# MND_TweetEvalBert_model
This model is a fine-tuned version of bert-base-uncased on the tweet_eval dataset. It achieves the following results on the evaluation set:
- Loss: 0.7241
## Model description

This model was built and fine-tuned for sentiment analysis using a text-classification architecture. The example below shows how to use it with the transformers library for a text-classification task:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("barbieheimer/MND_TweetEvalBert_model")
model = AutoModelForSequenceClassification.from_pretrained("barbieheimer/MND_TweetEvalBert_model")

# We can now use the model in the pipeline.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

# Get some text to fool around with for a basic test.
text = "I loved Oppenheimer and Barbie"
classifier(text)  # Let's see if the model works on our example text.
# [{'label': 'JOY', 'score': 0.9845513701438904}]
```
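By default the pipeline returns only the highest-scoring label. To inspect the full score distribution, you can pass `top_k=None` to the pipeline; the label names come from the model's own config. A minimal sketch, reusing the `model` and `tokenizer` objects loaded above:

```python
# Ask the pipeline for scores on every label instead of just the top one.
classifier = pipeline(
    "text-classification",
    model=model,
    tokenizer=tokenizer,
    top_k=None,  # newer replacement for return_all_scores=True
)
print(classifier("I loved Oppenheimer and Barbie"))

# The mapping from class ids to label names is stored on the config.
print(model.config.id2label)
```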
## Training Evaluation Results

```python
{'eval_loss': 0.7240552306175232,
 'eval_runtime': 3.7803,
 'eval_samples_per_second': 375.896,
 'eval_steps_per_second': 23.543,
 'epoch': 5.0}
```
## Overall Model Evaluation Results

```python
{'accuracy': {'confidence_interval': (0.783, 0.832),
              'standard_error': 0.01241992329458207,
              'score': 0.808},
 'total_time_in_seconds': 150.93268656500004,
 'samples_per_second': 6.625470087086432,
 'latency_in_seconds': 0.15093268656500003}
```
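These fields (bootstrap confidence interval, standard error, latency) match the report produced by the evaluate library's evaluator. A minimal sketch of how such numbers could be reproduced, assuming the emotion configuration of tweet_eval and a label mapping matching this model's uppercase label names (both are assumptions, not stated in this card):

```python
from datasets import load_dataset
from evaluate import evaluator

# Assumption: evaluation on the test split of tweet_eval's "emotion" config.
data = load_dataset("tweet_eval", "emotion", split="test")

task_evaluator = evaluator("text-classification")
results = task_evaluator.compute(
    model_or_pipeline="barbieheimer/MND_TweetEvalBert_model",
    data=data,
    metric="accuracy",
    # Assumption: label names/order follow tweet_eval's emotion classes.
    label_mapping={"ANGER": 0, "JOY": 1, "OPTIMISM": 2, "SADNESS": 3},
    strategy="bootstrap",  # produces confidence_interval and standard_error
)
print(results)
```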
## Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
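For reference, here is a minimal sketch of how these settings map onto the `TrainingArguments`/`Trainer` API. The output directory and the `train_ds`/`eval_ds` dataset variables are placeholders, and Adam with the betas/epsilon listed above is simply the Trainer's default optimizer:

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="MND_TweetEvalBert_model",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)

trainer = Trainer(
    model=model,             # the AutoModelForSequenceClassification above
    args=training_args,
    train_dataset=train_ds,  # placeholder: tokenized tweet_eval train split
    eval_dataset=eval_ds,    # placeholder: tokenized tweet_eval validation split
    tokenizer=tokenizer,
)
trainer.train()
trainer.evaluate()  # yields the eval_loss / runtime figures reported above
```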
## Training results

```python
{'train_runtime': 174.1546, 'train_samples_per_second': 93.509,
 'train_steps_per_second': 5.857, 'total_flos': 351397804992312.0,
 'train_loss': 0.3821827131159165, 'epoch': 5.0}
```
| Step | Training Loss |
|------|---------------|
| 500  | 0.607100      |
| 1000 | 0.169000      |
## Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3