
SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a SetFit model for text classification. It uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
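
As a reference, here is a minimal sketch of that two-step procedure using the SetFit Trainer API; the few-shot examples below are illustrative placeholders, not this model's actual training data:

```python
# Minimal sketch of the two-step SetFit procedure (illustrative data).
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# A tiny few-shot dataset with two examples per class (placeholders).
train_dataset = Dataset.from_dict({
    "text": [
        "Do you have Adidas Superstar shoes?",
        "Do you have any running shoes in pink color?",
        "Can you check the status of my order?",
        "When will my order arrive?",
    ],
    "label": [
        "product discoverability",
        "product discoverability",
        "order tracking",
        "order tracking",
    ],
})

# The base Sentence Transformer; a LogisticRegression head is attached by default.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=2),
    train_dataset=train_dataset,
)
# train() performs both steps: contrastive fine-tuning of the embedding body,
# then fitting the classification head on the resulting embeddings.
trainer.train()
```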

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2
  • Classification head: a LogisticRegression instance
  • Number of Classes: 4
  • Model size: 109M parameters (F32)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055
  • Blogpost: https://huggingface.co/blog/setfit

Model Labels
Model Labels

The four labels, each with representative training examples:

**product discoverability**
  • 'Do you have Adidas Superstar shoes?'
  • 'Do you have any running shoes in pink color?'
  • 'Do you have black Yeezy sneakers in size 9?'

**order tracking**
  • "I'm concerned about the delay in the delivery of my order. Can you please provide me with the status?"
  • 'What is the estimated delivery time for orders within the same city?'
  • "I placed an order last week and it still hasn't arrived. Can you check the status for me?"

**product policy**
  • 'Are there any exceptions to the return policy for items that were purchased with a student discount?'
  • 'Do you offer a try-and-buy option for sneakers?'
  • 'Do you offer a price adjustment for sneakers if the price drops after purchase?'

**product faq**
  • 'Do you have any limited edition sneakers available?'
  • 'Are the Adidas Yeezy Foam Runner available in size 7?'
  • "Are the Nike Air Force 1 sneakers available in women's sizes?"

Evaluation

Metrics

| Label | Accuracy |
|:------|:---------|
| all   | 0.8381   |
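
An accuracy figure like this can be reproduced with a held-out evaluation set; a sketch, where eval_texts and eval_labels are hypothetical placeholders:

```python
# Hypothetical sketch: measure label accuracy on a held-out set.
from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("Shankhdhar/classifier_woog")

eval_texts = ["Where is my order?"]      # placeholder evaluation queries
eval_labels = ["order tracking"]         # placeholder gold labels

preds = model.predict(eval_texts)        # predicted label strings
print(accuracy_score(eval_labels, preds))
```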

Uses

Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference:

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Shankhdhar/classifier_woog")
# Run inference
preds = model("Do you have any running shoes in pink color?")
```
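
The returned preds is the predicted label string, one of the four classes listed under Model Labels.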

Training Details

Training Set Metrics

| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 3   | 11.6415 | 24  |

| Label                   | Training Sample Count |
|:------------------------|:----------------------|
| order tracking          | 30                    |
| product discoverability | 30                    |
| product faq             | 16                    |
| product policy          | 30                    |

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (2, 2)
  • max_steps: -1
  • sampling_strategy: oversampling
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: True
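
These values map one-to-one onto SetFit's TrainingArguments, where tuples such as (2e-05, 1e-05) configure the embedding and classifier phases separately; a sketch of reconstructing the same configuration:

```python
# Sketch: the hyperparameters above expressed as SetFit TrainingArguments.
from sentence_transformers.losses import (
    BatchHardTripletLossDistanceFunction,
    CosineSimilarityLoss,
)
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),                 # (embedding phase, classifier phase)
    num_epochs=(2, 2),
    max_steps=-1,                        # -1 means no step cap
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,
    margin=0.25,                         # distance_metric/margin only affect triplet-style losses
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=True,
)
```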

Training Results

| Epoch  | Step | Training Loss | Validation Loss |
|:-------|:-----|:--------------|:----------------|
| 0.0019 | 1    | 0.1782        | -               |
| 0.0965 | 50   | 0.0628        | -               |
| 0.1931 | 100  | 0.0036        | -               |
| 0.2896 | 150  | 0.0013        | -               |
| 0.3861 | 200  | 0.0012        | -               |
| 0.4826 | 250  | 0.0003        | -               |
| 0.5792 | 300  | 0.0002        | -               |
| 0.6757 | 350  | 0.0003        | -               |
| 0.7722 | 400  | 0.0002        | -               |
| 0.8687 | 450  | 0.0005        | -               |
| 0.9653 | 500  | 0.0003        | -               |
| 1.0618 | 550  | 0.0001        | -               |
| 1.1583 | 600  | 0.0002        | -               |
| 1.2548 | 650  | 0.0002        | -               |
| 1.3514 | 700  | 0.0002        | -               |
| 1.4479 | 750  | 0.0001        | -               |
| 1.5444 | 800  | 0.0001        | -               |
| 1.6409 | 850  | 0.0001        | -               |
| 1.7375 | 900  | 0.0002        | -               |
| 1.8340 | 950  | 0.0001        | -               |
| 1.9305 | 1000 | 0.0001        | -               |

Framework Versions

  • Python: 3.9.16
  • SetFit: 1.0.3
  • Sentence Transformers: 2.7.0
  • Transformers: 4.40.2
  • PyTorch: 2.3.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1
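
To reproduce this environment, the versions above can be pinned at install time (a suggested command, not part of the original card):

```bash
pip install setfit==1.0.3 sentence-transformers==2.7.0 transformers==4.40.2 \
    torch==2.3.0 datasets==2.19.1 tokenizers==0.19.1
```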

Citation

BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```