
SetFit with avsolatorio/GIST-small-Embedding-v0

This is a SetFit model for Text Classification. It uses avsolatorio/GIST-small-Embedding-v0 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves two steps (a code sketch follows the list):

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
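
A minimal sketch of those two phases with the setfit Trainer API, assuming setfit 1.0.x as listed under Framework Versions; the tiny training set here is illustrative only, while the base model and label names come from this card.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Load the base Sentence Transformer body; SetFit attaches a LogisticRegression head by default
model = SetFitModel.from_pretrained("avsolatorio/GIST-small-Embedding-v0")

# Hypothetical few-shot training data with this card's two labels
train_dataset = Dataset.from_dict({
    "text": [
        "Stakeholder capitalism poisons democracy and partisan politics poisons capitalism.",
        "A portion of positive tests around the country is being forwarded to the agency for genetic sequencing.",
    ],
    "label": ["subjective", "objective"],
})

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=32, num_epochs=1),  # see Training Hyperparameters below
    train_dataset=train_dataset,
)
trainer.train()  # step 1: contrastive fine-tuning of the body; step 2: fitting the head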

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: avsolatorio/GIST-small-Embedding-v0
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2
  • Model Size: 33.4M parameters (F32 safetensors)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055
  • Blogpost: https://huggingface.co/blog/setfit

Model Labels

Label       Examples
subjective  • 'Stakeholder capitalism poisons democracy and partisan politics poisons capitalism.'
            • 'There is yet everywhere a deficit in the public revenue because the shrinkage in everything taxable was so sudden and violent.'
            • 'Our system of unbridled profit-focused capitalism used to serve as perhaps the most important of those sanctuaries, but no longer.'
objective   • 'But a top buying agent tells me that access to 13 can be gained if you know the right people.'
            • 'A portion of positive tests around the country is being forwarded to the agency for genetic sequencing, according to a report by CBS News.'
            • 'asked American Federation of Teachers President Randi Weingarten.'

Evaluation

Metrics

Label  Accuracy
all    0.8446
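
Label accuracy is the fraction of test sentences whose predicted label matches the gold label. A quick way to compute the same metric on held-out data, assuming a SetFitModel loaded as shown under Direct Use for Inference below (the evaluation sentences and gold labels here are hypothetical):

# Hypothetical held-out evaluation data
eval_texts = [
    "Stakeholder capitalism poisons democracy and partisan politics poisons capitalism.",
    "A portion of positive tests around the country is being forwarded to the agency for genetic sequencing.",
]
eval_gold = ["subjective", "objective"]

preds = model.predict(eval_texts)  # one predicted label per sentence
correct = sum(p == g for p, g in zip(preds, eval_gold))
print(f"label accuracy: {correct / len(eval_gold):.4f}")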

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("As the total national income falls, the proportion of it absorbed by government will rise.")
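
The call above returns one predicted label per input. The model also accepts batches, and predict_proba exposes the class probabilities of the LogisticRegression head; a short follow-on sketch using the same model object (the example sentences are illustrative):

# Batch inference over several sentences
sentences = [
    "As the total national income falls, the proportion of it absorbed by government will rise.",
    "Our system of unbridled profit-focused capitalism used to serve as perhaps the most important of those sanctuaries, but no longer.",
]
preds = model.predict(sentences)         # predicted labels
probs = model.predict_proba(sentences)   # per-class probabilities from the head
print(list(zip(sentences, preds)))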

Training Details

Training Set Metrics

Training set  Min  Median   Max
Word count    1    22.9219  77

Label       Training Sample Count
objective   128
subjective  128

Training Hyperparameters

  • batch_size: (32, 32)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
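
These values map directly onto setfit's TrainingArguments; tuple-valued entries give separate settings for the embedding phase and the classifier phase. A minimal sketch, assuming setfit 1.0.x (distance_metric and margin only affect triplet-style losses and are left at their defaults here):

from setfit import TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

args = TrainingArguments(
    batch_size=(32, 32),              # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,                     # -1 disables the step cap
    sampling_strategy="oversampling",
    body_learning_rate=(2e-5, 1e-5),  # Sentence Transformer body
    head_learning_rate=0.01,          # LogisticRegression head
    loss=CosineSimilarityLoss,
    end_to_end=False,                 # train only the head in the classifier phase
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=False,
)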

Training Results

Epoch Step Training Loss Validation Loss
0.0010 1 0.2715 -
0.0484 50 0.2469 -
0.0969 100 0.2247 -
0.1453 150 0.0501 -
0.1938 200 0.0039 -
0.2422 250 0.0014 -
0.2907 300 0.0011 -
0.3391 350 0.0014 -
0.3876 400 0.0010 -
0.4360 450 0.0009 -
0.4845 500 0.0008 -
0.5329 550 0.0008 -
0.5814 600 0.0008 -
0.6298 650 0.0007 -
0.6783 700 0.0007 -
0.7267 750 0.0006 -
0.7752 800 0.0007 -
0.8236 850 0.0006 -
0.8721 900 0.0005 -
0.9205 950 0.0007 -
0.9690 1000 0.0007 -

Framework Versions

  • Python: 3.11.9
  • SetFit: 1.0.3
  • Sentence Transformers: 3.0.0
  • Transformers: 4.40.2
  • PyTorch: 2.1.2
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}