---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: >-
    Blockbuster Cuts Online Price, Challenges Netflix (Reuters) Reuters -
    Video chain Blockbuster Inc on Friday said it would lower the price of its
    online DVD rentals to undercut a similar move by Netflix Inc. that sparked
    a stock sell-off of both companies' shares.
- text: >-
    Goss Gets Senate Panel's OK for CIA Post (AP) AP - A Senate panel on
    Tuesday approved the nomination of Rep. Porter Goss, R-Fla., to head the
    CIA, overcoming Democrats' objections that Goss was too political for the
    job.
- text: >-
    Crazy Like a Firefox Today, the Mozilla Foundation's Firefox browser
    officially launched -- welcome, version 1.0. In a way, it's much ado
    about nothing, seeing how it wasn't that long ago that we reported on
    how Mozilla had set
- text: >-
    North Korea eases tough stance against US in nuclear talks North Korea on
    Friday eased its tough stance against the United States, saying it is
    willing to resume stalled six-way talks on its nuclear weapons if
    Washington is ready to consider its demands.
- text: >-
    Mauresmo confident of LA victory Amelie Mauresmo insists she can win the
    Tour Championships this week and finish the year as world number one. The
    Frenchwoman could overtake Lindsay Davenport with a win in Los Angeles.
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.8726315789473684
      name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer.
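As a rough illustration of the second step, here is a minimal sketch (not the SetFit implementation itself) of fitting a LogisticRegression head on embedding features; random 768-dimensional vectors stand in for the output of the fine-tuned Sentence Transformer body:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of step 2: fit a LogisticRegression head on embedding
# features. Random 768-dim vectors stand in for the embeddings produced
# by the fine-tuned Sentence Transformer body.
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(256, 768))  # 4 classes x 64 examples
labels = np.repeat([0, 1, 2, 3], 64)

head = LogisticRegression(max_iter=1000)
head.fit(embeddings, labels)

# At inference time the same body embeds new text and the head predicts:
preds = head.predict(embeddings[:2])
print(preds.shape)  # (2,)
```

In the real pipeline the embeddings come from the contrastively fine-tuned body, which is what makes the head separable with so few labeled examples.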
## Model Details

### Model Description
- Model Type: SetFit
- Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2
- Classification head: a LogisticRegression instance
- Maximum Sequence Length: 512 tokens
- Number of Classes: 4 classes
### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels

| Label | Examples |
|:------|:---------|
| 2     |          |
| 0     |          |
| 3     |          |
| 1     |          |
## Evaluation

### Metrics

| Label | Accuracy |
|:------|:---------|
| all   | 0.8726   |
## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-ag_news_v2")
# Run inference
preds = model("Mauresmo confident of LA victory Amelie Mauresmo insists she can win the Tour Championships this week and finish the year as world number one. The Frenchwoman could overtake Lindsay Davenport with a win in Los Angeles.")
```
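The card reports four integer classes (0–3) but does not name them. If the model follows the standard AG News label order (an assumption based on the repository name, not stated in the card), predictions can be mapped back to topic names:

```python
# Assumed AG News label order -- the card itself does not name the classes.
AG_NEWS_LABELS = {0: "World", 1: "Sports", 2: "Business", 3: "Sci/Tech"}

def label_name(pred: int) -> str:
    """Map an integer class id to its (assumed) AG News topic name."""
    return AG_NEWS_LABELS[int(pred)]

# e.g. a prediction of 1 for the Mauresmo headline would read as:
print(label_name(1))  # Sports
```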
## Training Details

### Training Set Metrics

| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 15  | 38.1953 | 73  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 64                    |
| 1     | 64                    |
| 2     | 64                    |
| 3     | 64                    |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
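As a sketch, the list above maps onto setfit's `TrainingArguments` roughly as follows (assuming the setfit 1.0.x API, where paired values apply to the embedding phase and the classifier phase respectively):

```python
from setfit import TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

# Reconstruction of the hyperparameters listed above as a TrainingArguments
# object. Paired values are (embedding fine-tuning phase, classifier phase).
args = TrainingArguments(
    batch_size=(8, 8),
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=False,
)
```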
### Training Results

| Epoch  | Step | Training Loss | Validation Loss |
|:-------|:-----|:--------------|:----------------|
| 0.0008 | 1    | 0.3712        | -               |
| 0.0391 | 50   | 0.2353        | -               |
| 0.0781 | 100  | 0.1091        | -               |
| 0.1172 | 150  | 0.0898        | -               |
| 0.1562 | 200  | 0.0054        | -               |
| 0.1953 | 250  | 0.0103        | -               |
| 0.2344 | 300  | 0.0051        | -               |
| 0.2734 | 350  | 0.0081        | -               |
| 0.3125 | 400  | 0.0007        | -               |
| 0.3516 | 450  | 0.0003        | -               |
| 0.3906 | 500  | 0.0003        | -               |
| 0.4297 | 550  | 0.0005        | -               |
| 0.4688 | 600  | 0.0003        | -               |
| 0.5078 | 650  | 0.0001        | -               |
| 0.5469 | 700  | 0.0002        | -               |
| 0.5859 | 750  | 0.0001        | -               |
| 0.625  | 800  | 0.0001        | -               |
| 0.6641 | 850  | 0.0001        | -               |
| 0.7031 | 900  | 0.0001        | -               |
| 0.7422 | 950  | 0.0001        | -               |
| 0.7812 | 1000 | 0.0002        | -               |
| 0.8203 | 1050 | 0.0002        | -               |
| 0.8594 | 1100 | 0.0001        | -               |
| 0.8984 | 1150 | 0.0002        | -               |
| 0.9375 | 1200 | 0.0001        | -               |
| 0.9766 | 1250 | 0.0001        | -               |
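The step counts above are consistent with the hyperparameters: SetFit's pair generation produces `2 * num_iterations` contrastive pairs per training sample (one positive and one negative pair per iteration), so with 256 samples, `num_iterations=20`, and batch size 8, one epoch is 1280 steps — which is why step 1 corresponds to epoch ≈ 0.0008. A quick sanity-check sketch:

```python
# Sanity check: derive the steps-per-epoch implied by the hyperparameters.
samples = 4 * 64          # 4 classes x 64 training examples
pairs = samples * 2 * 20  # 2 * num_iterations contrastive pairs per sample
batch_size = 8
steps_per_epoch = pairs // batch_size
print(steps_per_epoch)                 # 1280
print(round(1 / steps_per_epoch, 4))   # 0.0008 -> the epoch value at step 1
```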
### Framework Versions
- Python: 3.8.10
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.1
## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```