# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a SetFit model that can be used for Text Classification. This SetFit model uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves two phases, sketched in code below:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer.
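A minimal sketch of that two-phase loop, assuming the setfit 1.x `Trainer` API and a toy two-example dataset in place of the real training data (which is described under Training Details):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer

# Toy few-shot dataset with the "text"/"label" columns SetFit expects;
# a stand-in for the real 64-shot-per-class training set described below.
train_dataset = Dataset.from_dict({
    "text": ["a brutal and funny work .", "under-rehearsed and lifeless"],
    "label": [4, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = Trainer(model=model, train_dataset=train_dataset)

# trainer.train() runs both phases: contrastive fine-tuning of the
# Sentence Transformer body, then fitting the LogisticRegression head
# on embeddings produced by the fine-tuned body.
trainer.train()
```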
## Model Details
### Model Description
- Model Type: SetFit
- Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2
- Classification head: a LogisticRegression instance
- Number of Classes: 5

### Model Sources
- Repository: [SetFit on GitHub](https://github.com/huggingface/setfit)
- Paper: [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)

### Model Labels
| Label | Examples |
|:------|:---------|
| 0 | <ul><li>"it 's not a motion picture ; it 's an utterly static picture ."</li><li>"frankly , it 's kind of insulting , both to men and women ."</li><li>'under-rehearsed and lifeless'</li></ul> |
| 2 | <ul><li>"recoing 's fantastic performance does n't exactly reveal what makes vincent tick , but perhaps any definitive explanation for it would have felt like a cheat ."</li><li>"do n't expect any subtlety from this latest entry in the increasingly threadbare gross-out comedy cycle ."</li><li>"merry friggin ' christmas !"</li></ul> |
| 3 | <ul><li>"so purely enjoyable that you might not even notice it 's a fairly straightforward remake of hollywood comedies such as father of the bride ."</li><li>"what saves this deeply affecting film from being merely a collection of wrenching cases is corcuera 's attention to detail ."</li><li>'for once , a movie does not proclaim the truth about two love-struck somebodies , but permits them time and space to convince us of that all on their own .'</li></ul> |
| 1 | <ul><li>"the fact that it is n't very good is almost beside the point ."</li><li>'what starts off as a satisfying kids flck becomes increasingly implausible as it races through contrived plot points .'</li><li>'the film is ultimately about as inspiring as a hallmark card .'</li></ul> |
| 4 | <ul><li>'cool gadgets and creatures keep this fresh .'</li><li>'morton deserves an oscar nomination .'</li><li>'a brutal and funny work .'</li></ul> |
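The card leaves the classes unnamed, but the ids and examples line up with the five SST-5 sentiment grades. The mapping below is an assumption for readability only; it is not stored in the checkpoint:

```python
# Assumed SST-5 label names (not part of the model config):
# 0 = most negative ... 4 = most positive.
SST5_LABEL_NAMES = {
    0: "very negative",
    1: "negative",
    2: "neutral",
    3: "positive",
    4: "very positive",
}
```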
## Evaluation

### Metrics
| Label | Accuracy |
|:------|:---------|
| all   | 0.5380   |
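A sketch of how this figure could be reproduced, assuming the evaluation set is the test split of the SetFit/sst5 dataset on the Hub (the card does not name it explicitly):

```python
from datasets import load_dataset
from setfit import SetFitModel

model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-sst5_v2")
test = load_dataset("SetFit/sst5", split="test")  # assumed evaluation split

# Predict a label for every test sentence and score plain accuracy.
preds = model.predict(test["text"])
accuracy = sum(int(p) == int(y) for p, y in zip(preds, test["label"])) / len(test)
print(f"accuracy: {accuracy:.4f}")
```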
## Uses

### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel

# Download the model from the Hugging Face Hub.
model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-sst5_v2")
# Run inference on a single sentence; returns the predicted label.
preds = model("my response to the film is best described as lukewarm .")
```
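The model also accepts a batch of sentences, and, because the head is a scikit-learn LogisticRegression, `predict_proba` exposes per-class probabilities:

```python
# Batch inference: one predicted label per input sentence.
preds = model([
    "cool gadgets and creatures keep this fresh .",
    "under-rehearsed and lifeless",
])

# Per-class probabilities from the LogisticRegression head.
probs = model.predict_proba(["my response to the film is best described as lukewarm ."])
```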
## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 2   | 18.8062 | 52  |
| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 64                    |
| 1     | 64                    |
| 2     | 64                    |
| 3     | 64                    |
| 4     | 64                    |
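The card does not say how these balanced 64-shot splits were drawn. One common approach, and a reasonable assumption here, is setfit's `sample_dataset` helper applied to the SetFit/sst5 training split:

```python
from datasets import load_dataset
from setfit import sample_dataset

dataset = load_dataset("SetFit/sst5")  # assumed source dataset

# Draw 64 examples per label, matching the counts in the table above.
train_dataset = sample_dataset(dataset["train"], label_column="label", num_samples=64)
```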
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
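These values map one-to-one onto setfit's `TrainingArguments`; below is a sketch of the configuration the list above describes, where each tuple is an (embedding phase, classifier phase) pair:

```python
from sentence_transformers.losses import (
    BatchHardTripletLossDistanceFunction,
    CosineSimilarityLoss,
)
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(8, 8),                  # (embedding, classifier) batch sizes
    num_epochs=(1, 1),
    max_steps=-1,                       # -1 = no step cap
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,
    margin=0.25,                        # only used by margin-based losses
    end_to_end=False,                   # train the head with the body frozen
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```

This `args` object would then be passed to `Trainer(model=..., args=args, ...)` as in the training sketch above.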
### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:-------|:-----|:--------------|:----------------|
| 0.0006 | 1    | 0.2259        | -               |
| 0.0312 | 50   | 0.2373        | -               |
| 0.0625 | 100  | 0.1726        | -               |
| 0.0938 | 150  | 0.1607        | -               |
| 0.125  | 200  | 0.1869        | -               |
| 0.1562 | 250  | 0.1863        | -               |
| 0.1875 | 300  | 0.224         | -               |
| 0.2188 | 350  | 0.1625        | -               |
| 0.25   | 400  | 0.1284        | -               |
| 0.2812 | 450  | 0.1357        | -               |
| 0.3125 | 500  | 0.2193        | -               |
| 0.3438 | 550  | 0.1434        | -               |
| 0.375  | 600  | 0.0524        | -               |
| 0.4062 | 650  | 0.0558        | -               |
| 0.4375 | 700  | 0.072         | -               |
| 0.4688 | 750  | 0.0312        | -               |
| 0.5    | 800  | 0.0732        | -               |
| 0.5312 | 850  | 0.0117        | -               |
| 0.5625 | 900  | 0.0311        | -               |
| 0.5938 | 950  | 0.0228        | -               |
| 0.625  | 1000 | 0.0026        | -               |
| 0.6562 | 1050 | 0.0196        | -               |
| 0.6875 | 1100 | 0.0017        | -               |
| 0.7188 | 1150 | 0.0067        | -               |
| 0.75   | 1200 | 0.0029        | -               |
| 0.7812 | 1250 | 0.0041        | -               |
| 0.8125 | 1300 | 0.0006        | -               |
| 0.8438 | 1350 | 0.0022        | -               |
| 0.875  | 1400 | 0.0006        | -               |
| 0.9062 | 1450 | 0.0007        | -               |
| 0.9375 | 1500 | 0.001         | -               |
| 0.9688 | 1550 | 0.0009        | -               |
| 1.0    | 1600 | 0.0013        | -               |
### Framework Versions
- Python: 3.8.10
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.1
## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```