
SetFit with sentence-transformers/all-mpnet-base-v2

This is a SetFit model that can be used for Text Classification. It uses sentence-transformers/all-mpnet-base-v2 as the Sentence Transformer embedding model, with a SetFitHead instance for classification.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
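Concretely, such a model starts from the base embedding model with a differentiable SetFitHead attached on top. A minimal sketch of that instantiation, assuming the two-class setup described in this card (the head_params value is the only assumption here):

from setfit import SetFitModel

# Load the Sentence Transformer body and attach a torch-based SetFitHead
# sized for the two labels (0 and 1) used by this card.
model = SetFitModel.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2",
    use_differentiable_head=True,
    head_params={"out_features": 2},
)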

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: sentence-transformers/all-mpnet-base-v2
  • Classification head: a SetFitHead instance

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label 1:
  • 'Gone are the days when they led the world in recession-busting'
  • 'Who so mean that he will not himself be taxed, who so mindful of wealth that he will not favor increasing the popular taxes, in aid of these defective children?'
  • 'That state has sixty-two counties and sixty cities … In addition there are 932 towns, 507 villages, and, at the last count, 9,600 school districts … Just try to render efficient service … amid the diffused identities and inevitable jealousies of, roughly, 11,000 independent administrative officers or boards!'

Label 0:
  • 'Is this a warning of what’s to come?'
  • 'This unique set of circumstances has brought PCL back into focus as the safe haven of choice for global players seeking somewhere to stash their cash.'
  • 'Socialists believe that, if everyone cannot have something, no one shall.'

Evaluation

Metrics

Label   F1
all     0.7866
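A score like this can be recomputed with scikit-learn once predictions are available. The sketch below is illustrative only: the texts and gold labels are hypothetical placeholders, and the averaging mode is an assumption (the card does not state which average was used).

from setfit import SetFitModel
from sklearn.metrics import f1_score

model = SetFitModel.from_pretrained("SOUMYADEEPSAR/Setfit_subj_all-mpnet-base-v2")

# Hypothetical held-out examples; substitute the real evaluation split.
texts = ["That can happen again.", "Is this a warning of what’s to come?"]
gold = [1, 0]

preds = model(texts)
# average="weighted" is an assumption; the card only reports a single F1 for "all".
print(f1_score(gold, preds, average="weighted"))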

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load the model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("SOUMYADEEPSAR/Setfit_subj_all-mpnet-base-v2")
# Run inference
preds = model("That can happen again.")
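The model can also be called on a list of sentences for batch inference, and with the differentiable SetFitHead class probabilities should be available as well. A small sketch (the example sentences are taken from the label table above):

# Batch inference over several sentences at once
preds = model(["That can happen again.", "Is this a warning of what’s to come?"])
# Per-class probabilities from the softmax-based SetFitHead
probs = model.predict_proba(["That can happen again."])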

Training Details

Training Set Metrics

Training set   Min   Median    Max
Word count     3     36.5327   97

Label   Training Sample Count
0       100
1       114

Training Hyperparameters

  • batch_size: (8, 8)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
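These names map one-to-one onto SetFit's TrainingArguments, so a comparable run can be configured directly from the list above. A minimal sketch, assuming a stand-in dataset (the two examples below are placeholders, not the actual training split):

from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; the real split had 100 and 114 examples per label.
train_dataset = Dataset.from_dict({
    "text": [
        "Is this a warning of what’s to come?",
        "Gone are the days when they led the world in recession-busting",
    ],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2",
    use_differentiable_head=True,
    head_params={"out_features": 2},
)

args = TrainingArguments(
    batch_size=(8, 8),
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    warmup_proportion=0.1,
    end_to_end=False,
    use_amp=False,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()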

Training Results

Epoch    Step   Training Loss   Validation Loss
0.0003   1      0.3816          -
1.0      2902   0.0             0.2172
2.0      5804   0.0             0.2248
0.0003   1      0.5764          -
0.0467   50     0.0009          -
0.0935   100    0.0011          -
0.1402   150    0.0001          -
0.1869   200    0.0001          -
0.2336   250    0.0001          -
0.2804   300    0.0             -
0.3271   350    0.0             -
0.3738   400    0.0             -
0.4206   450    0.0001          -
0.4673   500    0.0             -
0.5140   550    0.0             -
0.5607   600    0.0             -
0.6075   650    0.0             -
0.6542   700    0.0             -
0.7009   750    0.0             -
0.7477   800    0.0             -
0.7944   850    0.0             -
0.8411   900    0.0             -
0.8879   950    0.0001          -
0.9346   1000   0.0             -
0.9813   1050   0.0             -

Note: the epoch counter restarts after step 5804, so this log appears to cover two separate training runs.

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.3
  • Sentence Transformers: 2.7.0
  • Transformers: 4.40.1
  • PyTorch: 2.2.1+cu121
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1
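To reproduce this environment, the library versions above can be pinned at install time. A sketch (the PyTorch build string suggests a CUDA 12.1 wheel, which may require a platform-specific install index):

pip install setfit==1.0.3 sentence-transformers==2.7.0 transformers==4.40.1 datasets==2.19.1 tokenizers==0.19.1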

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}