# SetFit with sentence-transformers/all-mpnet-base-v2
This is a SetFit model for text classification. It uses sentence-transformers/all-mpnet-base-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.
The model has been trained using an efficient few-shot learning technique that involves two stages (a minimal code sketch follows this list):
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer.
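As a concrete illustration, here is a minimal, hypothetical training sketch using the setfit 1.0 `Trainer`/`TrainingArguments` API; the toy texts and labels are placeholders, not the actual training data:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy stand-in data; the real training set is only summarized in this card.
train_dataset = Dataset.from_dict({
    "text": [
        "Emergency crews are evacuating residents after the explosion",
        "This playlist is straight fire",
    ],
    "label": [1, 0],
})

# Loading a plain Sentence Transformer body creates a SetFit model with a
# default LogisticRegression classification head.
model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")

args = TrainingArguments(batch_size=16, num_epochs=1)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # stage 1: contrastive fine-tuning; stage 2: fit the head
```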
## Model Details

### Model Description
- Model Type: SetFit
- Sentence Transformer body: sentence-transformers/all-mpnet-base-v2
- Classification head: a LogisticRegression instance
- Maximum Sequence Length: 384 tokens
- Number of Classes: 2 classes

### Model Sources
- Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
- Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)
### Model Labels

| Label | Examples |
|:------|:---------|
| 0 | 'FOOTBALL IS BACK THIS WEEKEND ITS JUST SUNK IN ??????'<br>'Tried orange aftershock today. My life will never be the same'<br>"Attack on Titan game on PS Vita yay! Can't wait for 2016" |
| 1 | |
## Evaluation

### Metrics

| Label | Accuracy |
|:------|:---------|
| all | 0.8233 |
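Accuracy here is the fraction of evaluation texts whose predicted label matches the gold label. A minimal sketch of that computation follows; the texts and labels below are hypothetical stand-ins, since the actual evaluation split is not included in this card:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("pEpOo/catastrophy4")

# Hypothetical stand-in texts and gold labels.
test_texts = [
    "Flash flood warning issued for the entire county",
    "I could eat pizza every single day",
]
test_labels = [1, 0]

preds = model.predict(test_texts)
accuracy = sum(int(p) == y for p, y in zip(preds, test_labels)) / len(test_labels)
print(f"accuracy = {accuracy:.4f}")
```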
## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download the model from the 🤗 Hub
model = SetFitModel.from_pretrained("pEpOo/catastrophy4")
# Run inference on a single text
preds = model("ThisIsFaz: Anti Collision Rear- #technology #cool http://t.co/KEfxTjTAKB Via Techesback #Tech")
```
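Beyond single-text calls, `SetFitModel` also exposes batch `predict` and, with a LogisticRegression head, `predict_proba` for class probabilities. A brief sketch with hypothetical input texts:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("pEpOo/catastrophy4")

texts = [
    "Wildfire forces evacuation of several nearby towns",
    "This new album is absolute fire",
]
print(model.predict(texts))        # one predicted label (0 or 1) per text
print(model.predict_proba(texts))  # class probabilities per text
```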
## Training Details

### Training Set Metrics

| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 15.0486 | 30 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 836 |
| 1 | 686 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
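These entries correspond to fields of setfit's `TrainingArguments`, where tuple values give the (embedding fine-tuning phase, classifier head phase) settings. A hedged sketch of how they would be passed; `distance_metric` and `margin` are omitted because they only affect triplet-style losses, not `CosineSimilarityLoss`, and the card lists their defaults:

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

# Mirrors the hyperparameter list above.
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```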
### Training Results

| Epoch | Step | Training Loss | Validation Loss |
|:------|:-----|:--------------|:----------------|
| 0.0003 | 1 | 0.4126 | - |
| 0.0131 | 50 | 0.2779 | - |
| 0.0263 | 100 | 0.2507 | - |
| 0.0394 | 150 | 0.2475 | - |
| 0.0526 | 200 | 0.1045 | - |
| 0.0657 | 250 | 0.2595 | - |
| 0.0788 | 300 | 0.1541 | - |
| 0.0920 | 350 | 0.1761 | - |
| 0.1051 | 400 | 0.0456 | - |
| 0.1183 | 450 | 0.1091 | - |
| 0.1314 | 500 | 0.1335 | - |
| 0.1445 | 550 | 0.0956 | - |
| 0.1577 | 600 | 0.0583 | - |
| 0.1708 | 650 | 0.0067 | - |
| 0.1840 | 700 | 0.0021 | - |
| 0.1971 | 750 | 0.0057 | - |
| 0.2102 | 800 | 0.065 | - |
| 0.2234 | 850 | 0.0224 | - |
| 0.2365 | 900 | 0.0008 | - |
| 0.2497 | 950 | 0.1282 | - |
| 0.2628 | 1000 | 0.1045 | - |
| 0.2760 | 1050 | 0.001 | - |
| 0.2891 | 1100 | 0.0005 | - |
| 0.3022 | 1150 | 0.0013 | - |
| 0.3154 | 1200 | 0.0007 | - |
| 0.3285 | 1250 | 0.0015 | - |
| 0.3417 | 1300 | 0.0007 | - |
| 0.3548 | 1350 | 0.0027 | - |
| 0.3679 | 1400 | 0.0006 | - |
| 0.3811 | 1450 | 0.0001 | - |
| 0.3942 | 1500 | 0.0009 | - |
| 0.4074 | 1550 | 0.0002 | - |
| 0.4205 | 1600 | 0.0004 | - |
| 0.4336 | 1650 | 0.0003 | - |
| 0.4468 | 1700 | 0.0013 | - |
| 0.4599 | 1750 | 0.0004 | - |
| 0.4731 | 1800 | 0.0007 | - |
| 0.4862 | 1850 | 0.0001 | - |
| 0.4993 | 1900 | 0.0001 | - |
| 0.5125 | 1950 | 0.0476 | - |
| 0.5256 | 2000 | 0.0561 | - |
| 0.5388 | 2050 | 0.0009 | - |
| 0.5519 | 2100 | 0.0381 | - |
| 0.5650 | 2150 | 0.017 | - |
| 0.5782 | 2200 | 0.033 | - |
| 0.5913 | 2250 | 0.0001 | - |
| 0.6045 | 2300 | 0.0077 | - |
| 0.6176 | 2350 | 0.0002 | - |
| 0.6307 | 2400 | 0.0003 | - |
| 0.6439 | 2450 | 0.0001 | - |
| 0.6570 | 2500 | 0.0155 | - |
| 0.6702 | 2550 | 0.0002 | - |
| 0.6833 | 2600 | 0.0001 | - |
| 0.6965 | 2650 | 0.031 | - |
| 0.7096 | 2700 | 0.0215 | - |
| 0.7227 | 2750 | 0.0002 | - |
| 0.7359 | 2800 | 0.0002 | - |
| 0.7490 | 2850 | 0.0001 | - |
| 0.7622 | 2900 | 0.0001 | - |
| 0.7753 | 2950 | 0.0001 | - |
| 0.7884 | 3000 | 0.0001 | - |
| 0.8016 | 3050 | 0.0001 | - |
| 0.8147 | 3100 | 0.0001 | - |
| 0.8279 | 3150 | 0.0001 | - |
| 0.8410 | 3200 | 0.0001 | - |
| 0.8541 | 3250 | 0.0001 | - |
| 0.8673 | 3300 | 0.0001 | - |
| 0.8804 | 3350 | 0.0001 | - |
| 0.8936 | 3400 | 0.0 | - |
| 0.9067 | 3450 | 0.0156 | - |
| 0.9198 | 3500 | 0.0 | - |
| 0.9330 | 3550 | 0.0 | - |
| 0.9461 | 3600 | 0.0001 | - |
| 0.9593 | 3650 | 0.0208 | - |
| 0.9724 | 3700 | 0.0 | - |
| 0.9855 | 3750 | 0.0001 | - |
| 0.9987 | 3800 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```