SetFit with sentence-transformers/all-MiniLM-L6-v2
This is a SetFit model for Text Classification. It uses sentence-transformers/all-MiniLM-L6-v2 as the Sentence Transformer embedding model, with a LogisticRegression instance as the classification head.
The model has been trained using an efficient few-shot learning technique that involves two steps, sketched in code after this list:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer.
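To make those two steps concrete, here is a minimal training sketch using the SetFit 1.0 Trainer API listed under Framework Versions. The texts are placeholders, since this card does not show the actual training examples; everything else mirrors the card's reported setup.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder texts: the real training examples are not shown in this card.
# Two samples per class matches the counts under "Training Set Metrics" below.
train_dataset = Dataset.from_dict({
    "text": [
        "placeholder example one for label 0",
        "placeholder example two for label 0",
        "placeholder example one for label 1",
        "placeholder example two for label 1",
    ],
    "label": [0, 0, 1, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=train_dataset,
)
# train() performs both steps: contrastive fine-tuning of the embedding body,
# then fitting the LogisticRegression head on the resulting embeddings.
trainer.train()
```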
Model Details
Model Description
- Model Type: SetFit
- Sentence Transformer body: sentence-transformers/all-MiniLM-L6-v2
- Classification head: a LogisticRegression instance
- Maximum Sequence Length: 256 tokens
- Number of Classes: 2 classes
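If it helps to see how these pieces compose at runtime, a loaded SetFitModel exposes the embedding body and the classification head as attributes (a small sketch; the attribute names come from the SetFit library, not this card):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("sijan1/empathy_model")
print(model.model_body)  # the SentenceTransformer wrapping all-MiniLM-L6-v2
print(model.model_head)  # the scikit-learn LogisticRegression head
```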
Model Sources
- Repository: [SetFit on GitHub](https://github.com/huggingface/setfit)
- Paper: [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- Blogpost: [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
Model Labels
| Label | Examples |
|---|---|
| 0 | |
| 1 | |
Evaluation
Metrics
| Label | Accuracy |
|---|---|
| all | 0.7692 |
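The accuracy above is reported over both classes together. A hedged sketch of how such a score can be reproduced against a labeled test set (the test data itself is not part of this card, so the examples below are placeholders):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("sijan1/empathy_model")

# Placeholder test set; substitute your own labeled examples
test_texts = ["placeholder text one", "placeholder text two"]
test_labels = [0, 1]

preds = model.predict(test_texts)
accuracy = sum(int(p == y) for p, y in zip(preds, test_labels)) / len(test_labels)
print(f"accuracy: {accuracy:.4f}")
```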
Uses
Direct Use for Inference
First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference:

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("sijan1/empathy_model")
# Run inference
preds = model("Hello Jonathan, Thank you for your work on the Beta project. I would like for us to set up a meeting to discuss your work on the project. You have completed a few reports now and I have had some feedback I would like to share with you; specifically the commentary you are providing and your business writing. The additional commentary you are providing makes it difficult to find the objective facts of your findings while working with a tight deadline. I would like to have a discussion with you what ideas you may have to help make your reports more concise so the team can meet their deadlines. You are investing considerable time and effort in these reports and you have expressed your desire to be in an engineering role in the future. Your work on these reports can certainly help you in achieving your career goals. I want to make sure you are successful. I'll send out a meeting invite shortly. Thank you again Jonathan for all your work on this project. I'm looking forward to discussing this with you.")
```
Training Details
Training Set Metrics
| Training set | Min | Median | Max |
|---|---|---|---|
| Word count | 114 | 187.5 | 338 |

| Label | Training Sample Count |
|---|---|
| 0 | 2 |
| 1 | 2 |
Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
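These names map one-to-one onto SetFit's TrainingArguments. A sketch of reconstructing the same configuration, assuming the SetFit 1.0.3 API listed under Framework Versions; tuple values cover the embedding phase and the head phase respectively:

```python
from sentence_transformers.losses import (
    BatchHardTripletLossDistanceFunction,
    CosineSimilarityLoss,
)
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```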
Training Results
| Epoch | Step | Training Loss | Validation Loss |
|---|---|---|---|
| 0.025 | 1 | 0.0001 | - |
| 2.5 | 50 | 0.0001 | - |
| 0.0667 | 1 | 0.0 | - |
Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.2
Citation
BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```