
SetFit with BAAI/bge-large-en-v1.5

This is a SetFit model that can be used for text classification. It uses BAAI/bge-large-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
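
A minimal sketch of these two steps with the SetFit 1.x API, assuming a small hand-labelled dataset; the base model and labels mirror this card, but the example texts below are purely illustrative, not the actual training data:

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot data; the real training set used 50 examples per label
train_dataset = Dataset.from_dict({
    "text": [
        "this tool summarized hundreds of long reports for me, truly extraordinary",
        "the assistant cleared an error i was stuck on for hours, awesome product",
        "forcing a subscription on users after every update is infuriating",
        "cancelling my plan, this is terrible business practice",
        "you will receive the test via email and have two hours to complete it",
        "interesting to see whether anyone builds a truly unbiased assistant",
    ],
    "label": ["peak", "peak", "pit", "pit", "neither", "neither"],
})

# The default SetFitModel head is a scikit-learn LogisticRegression
model = SetFitModel.from_pretrained("BAAI/bge-large-en-v1.5")

# Step 1: contrastive fine-tuning of the Sentence Transformer body
# Step 2: fitting the classification head on the fine-tuned embeddings
trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=3),
    train_dataset=train_dataset,
)
trainer.train()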

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-large-en-v1.5
  • Classification head: a LogisticRegression instance
  • Number of Classes: 3 (pit, peak, neither)

Model Sources

  • Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Each label, with example texts:
peak
  • 'after using product to summarize and gather main points of hundreds of research articles that are 50+ pages, i think i can confidently say that brand is on the right track with regards to implementing product in their business. truly extraordinary.'
  • 'i was stuck in a error for 2+ hours and my bingey bot cleared it!! awesome ai product'
  • 'product in teams: in teams, product transforms meetings. it organizes thoughts, maintains context, and facilitates collaborative brainstorming, making every meeting more productive.'
neither
  • ">youll receive the test via email and will have two hours to complete it. finally, youll return to zoom with the analyst to go over your results together i don't think it's live. op will get the assigment and he/she has 2 hours to complete it. if this is correct, then op is an idiot because there are thousands of examples online and then there's product. op, start working on the fundamentals and pay the $20 product suscription for product."
  • 'utilising advanced technologies with brand to perform a practical demonstration for a client on themes of cyber security, product, product, digital transformation, product, the product and more. these skills are rapidly being adopted for safety and efficielnkd.in/ghumbffm'
  • "another great example of the elites in the tech world using control of the information to infl your thoughts and actions. as product becomes more prevalent doing your own research will be essential. will be interesting to see if anyone finds success with designing a true 'unbiased' product"
pit
  • "the utter disappointment of learning from an amazing passionate teacher for two years who gives you decades of knowledge in 2 years and then you continue the subject and get some bland intellectual from the capital who can't even make a product presentation"
  • 'the amount of times that product has been forced on me against my will after updates is just infuriating. product just taking advantage of the market position they (illegally) established long ago. near-universal software compatibility and being the default os of the general market are why people keep using them. they are in the position where they can fail upwards. and it sucks for the rest of us.'
  • 'literally canceling my subscription on my product because this is terrible business practice. forcing subscription services to squeeze out every last dollar is disgusting especially when your whole program is a rip off of another established program. cringe'

Evaluation

Metrics

Label: all
  • Accuracy: 0.88
  • F1 (per class): [0.8846, 0.6667, 0.9223]
  • Precision (per class): [0.8214, 0.5, 1.0]
  • Recall (per class): [0.9583, 1.0, 0.8557]
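
The bracketed values are per-class scores for the three labels (the class ordering of these lists is not documented in the card). A sketch of how comparable numbers can be reproduced on a held-out set with scikit-learn, assuming the model returns the string labels used in this card; the test texts and gold labels here are hypothetical:

from setfit import SetFitModel
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

model = SetFitModel.from_pretrained("jamiehudson/725_model_v6")

# Hypothetical held-out examples with gold labels
texts = ["awesome ai product, cleared my error in minutes",
         "forcing subscriptions on users is a rip off"]
gold = ["peak", "pit"]

preds = model.predict(texts)
print("accuracy:", accuracy_score(gold, preds))
# average=None returns one precision/recall/F1 value per class,
# which is the shape of the bracketed lists in the table above
print(precision_recall_fscore_support(gold, preds, average=None))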

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("jamiehudson/725_model_v6")
# Run inference
preds = model("why though? whats the harm in using ai as a tool. theres more to ai than product.")
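
Inference also works on batches, and, because the head is a LogisticRegression classifier, class probabilities are available as well. A short sketch continuing from the snippet above:

# Batch inference: pass a list of texts
preds = model([
    "awesome ai product, cleared my error in minutes",
    "forcing subscriptions on users is a rip off",
])

# Probability scores per class instead of hard labels
probs = model.predict_proba(["awesome ai product, cleared my error in minutes"])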

Training Details

Training Set Metrics

Training set   Min   Median   Max
Word count     10    37.08    98

Label     Training Sample Count
pit       50
peak      50
neither   50

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (3, 3)
  • max_steps: -1
  • sampling_strategy: oversampling
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
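
These entries correspond one-to-one to SetFit's TrainingArguments fields. A sketch of how a comparable run could be configured; dataset loading is omitted, and distance_metric/margin are left at their defaults, which already match the cosine_distance/0.25 values listed above:

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),              # (embedding phase, classifier phase)
    num_epochs=(3, 3),
    max_steps=-1,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
# args would then be passed to Trainer(model=..., args=args, train_dataset=...)
# as in the training sketch at the top of this card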

Training Results

Epoch Step Training Loss Validation Loss
0.0011 1 0.2299 -
0.0533 50 0.1604 -
0.1066 100 0.0071 -
0.1599 150 0.0016 -
0.2132 200 0.0012 -
0.2665 250 0.0012 -
0.3198 300 0.0011 -
0.3731 350 0.0009 -
0.4264 400 0.0008 -
0.4797 450 0.0009 -
0.5330 500 0.0007 -
0.5864 550 0.0008 -
0.6397 600 0.0007 -
0.6930 650 0.0007 -
0.7463 700 0.0007 -
0.7996 750 0.0006 -
0.8529 800 0.0006 -
0.9062 850 0.0006 -
0.9595 900 0.0006 -
0.0011 1 0.0006 -
0.0533 50 0.0005 -
0.1066 100 0.0005 -
0.1599 150 0.0005 -
0.2132 200 0.0004 -
0.2665 250 0.0003 -
0.3198 300 0.0004 -
0.3731 350 0.0003 -
0.4264 400 0.0004 -
0.4797 450 0.0004 -
0.5330 500 0.0002 -
0.5864 550 0.0002 -
0.6397 600 0.0002 -
0.6930 650 0.0002 -
0.7463 700 0.0002 -
0.7996 750 0.0003 -
0.8529 800 0.0002 -
0.9062 850 0.0002 -
0.9595 900 0.0001 -
1.0128 950 0.0002 -
1.0661 1000 0.0002 -
1.1194 1050 0.0002 -
1.1727 1100 0.0001 -
1.2260 1150 0.0001 -
1.2793 1200 0.0001 -
1.3326 1250 0.0001 -
1.3859 1300 0.0001 -
1.4392 1350 0.0001 -
1.4925 1400 0.0001 -
1.5458 1450 0.0001 -
1.5991 1500 0.0001 -
1.6525 1550 0.0001 -
1.7058 1600 0.0001 -
1.7591 1650 0.0001 -
1.8124 1700 0.0001 -
1.8657 1750 0.0001 -
1.9190 1800 0.0001 -
1.9723 1850 0.0001 -
2.0256 1900 0.0001 -
2.0789 1950 0.0001 -
2.1322 2000 0.0001 -
2.1855 2050 0.0001 -
2.2388 2100 0.0001 -
2.2921 2150 0.0001 -
2.3454 2200 0.0001 -
2.3987 2250 0.0001 -
2.4520 2300 0.0001 -
2.5053 2350 0.0001 -
2.5586 2400 0.0001 -
2.6119 2450 0.0001 -
2.6652 2500 0.0001 -
2.7186 2550 0.0001 -
2.7719 2600 0.0001 -
2.8252 2650 0.0001 -
2.8785 2700 0.0001 -
2.9318 2750 0.0001 -
2.9851 2800 0.0001 -

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.3
  • Sentence Transformers: 2.5.1
  • Transformers: 4.38.2
  • PyTorch: 2.1.0+cu121
  • Datasets: 2.18.0
  • Tokenizers: 0.15.2
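
To approximate this environment, the Python packages can be pinned to the versions listed above; the PyTorch build is omitted here because the +cu121 wheel depends on your local CUDA setup:

pip install setfit==1.0.3 sentence-transformers==2.5.1 transformers==4.38.2 datasets==2.18.0 tokenizers==0.15.2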

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}