---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: Tiong Bahru Plaza, DDC-L2-5, AHU-L2-03 trip alarm
- text: >-
    Tiong Bahru Plaza, DDC L4-1, PAU-L4-03 supply air temperature (Units:
    °C).2
- text: Tiong Bahru Plaza, DDC-L20, AHU 20-1 VSD CONTROL
- text: 'Tiong Bahru Plaza, VAV 19-7, Discharge Air Flow (Units: m3/h)'
- text: Tiong Bahru Plaza, DDC-L2-5, PAU-L2-02 VSD control
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-MiniLM-L3-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-MiniLM-L3-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.8019925280199253
      name: Accuracy
---
SetFit with sentence-transformers/paraphrase-MiniLM-L3-v2
This is a SetFit model that can be used for text classification. It uses sentence-transformers/paraphrase-MiniLM-L3-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.
The model has been trained using an efficient few-shot learning technique that involves:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer.
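These two steps are what the `setfit` library's `Trainer` runs during `train()`. The snippet below is a minimal sketch of that procedure, not the exact script used to train this model: the toy dataset, label values, and epoch count are placeholders.

```python
# Minimal sketch of the two-step SetFit procedure; the toy dataset and
# hyperparameter values here are placeholders, not this model's actual setup.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

train_dataset = Dataset.from_dict({
    "text": [
        "Tiong Bahru Plaza, DDC-L2-5, AHU-L2-03 trip alarm",
        "Tiong Bahru Plaza, DDC-L2-5, PAU-L2-02 VSD control",
    ],
    "label": [0, 1],
})

# Start from the base Sentence Transformer; SetFit attaches a LogisticRegression head.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")

# Trainer.train() first fine-tunes the embedding body with contrastive pairs,
# then fits the classification head on the resulting embeddings.
trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=train_dataset,
)
trainer.train()
```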
Model Details
Model Description
- Model Type: SetFit
- Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L3-v2
- Classification head: a LogisticRegression instance
- Maximum Sequence Length: 128 tokens
- Number of Classes: 42
Model Sources
- Repository: [SetFit on GitHub](https://github.com/huggingface/setfit)
- Paper: [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- Blogpost: [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
Model Labels
Label | Examples |
---|---|
13 | |
4 | |
8 | |
16 | |
2 | |
9 | |
6 | |
21 | |
10 | |
35 | |
7 | |
1 | |
26 | |
34 | |
18 | |
0 | |
17 | |
14 | |
11 | |
41 | |
5 | |
20 | |
37 | |
31 | |
3 | |
15 | |
36 | |
12 | |
30 | |
19 | |
24 | |
28 | |
27 | |
33 | |
25 | |
39 | |
23 | |
38 | |
32 | |
29 | |
22 | |
40 | |
Evaluation
Metrics
Label | Accuracy |
---|---|
all | 0.8020 |
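As a rough sketch of how a figure like this can be recomputed, predict on a labelled test split and compare against the gold labels. The test texts and labels below are hypothetical placeholders, not the evaluation data behind the 0.8020 score.

```python
# Hedged example: recomputing accuracy on a labelled test set.
# The texts and gold labels below are hypothetical placeholders.
from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("Varun1010/all-MiniLM-L6-v2-polaris-new-distilled")

test_texts = [
    "Tiong Bahru Plaza, DDC-L20, AHU 20-1 VSD CONTROL",
    "Tiong Bahru Plaza, VAV 19-7, Discharge Air Flow (Units: m3/h)",
]
test_labels = [5, 16]  # hypothetical gold labels

preds = model.predict(test_texts)
print(accuracy_score(test_labels, [int(p) for p in preds]))
```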
Uses
Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Varun1010/all-MiniLM-L6-v2-polaris-new-distilled")
# Run inference
preds = model("Tiong Bahru Plaza, DDC-L20, AHU 20-1 VSD CONTROL")
```
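For batched inputs and per-class scores, `SetFitModel` also exposes `predict` and `predict_proba`; the sketch below assumes this card's 42 classes for the output shape.

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("Varun1010/all-MiniLM-L6-v2-polaris-new-distilled")

texts = [
    "Tiong Bahru Plaza, DDC-L2-5, AHU-L2-03 trip alarm",
    "Tiong Bahru Plaza, DDC-L2-5, PAU-L2-02 VSD control",
]
preds = model.predict(texts)        # one predicted label per input text
probs = model.predict_proba(texts)  # per-class scores, shape (len(texts), 42)
```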
Training Details
Training Set Metrics
Training set | Min | Median | Max |
---|---|---|---|
Word count | 6 | 8.8589 | 14 |
Label | Training Sample Count |
---|---|
0 | 10 |
1 | 10 |
2 | 10 |
3 | 10 |
4 | 10 |
5 | 10 |
6 | 10 |
7 | 10 |
8 | 10 |
9 | 10 |
10 | 10 |
11 | 10 |
12 | 10 |
13 | 10 |
14 | 10 |
15 | 10 |
16 | 10 |
17 | 10 |
18 | 3 |
19 | 3 |
20 | 10 |
21 | 10 |
22 | 1 |
23 | 1 |
24 | 10 |
25 | 4 |
26 | 10 |
27 | 8 |
28 | 4 |
29 | 3 |
30 | 3 |
31 | 4 |
32 | 4 |
33 | 6 |
34 | 6 |
35 | 5 |
36 | 3 |
37 | 3 |
38 | 1 |
39 | 1 |
40 | 3 |
41 | 10 |
Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 16)
- max_steps: 500
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
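For reference, these values map onto `setfit.TrainingArguments` roughly as in the sketch below. Only the values come from this card; the loss and distance-metric imports are assumptions about how the corresponding objects are spelled.

```python
# Sketch: the hyperparameters above expressed as setfit.TrainingArguments.
# Only the values are taken from this card; everything else is an assumption.
from sentence_transformers.losses import (
    BatchHardTripletLossDistanceFunction,
    CosineSimilarityLoss,
)
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 16),
    max_steps=500,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```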
Training Results
Epoch | Step | Training Loss | Validation Loss |
---|---|---|---|
0.0008 | 1 | 0.1765 | - |
0.0376 | 50 | 0.1212 | - |
0.0752 | 100 | 0.0674 | - |
0.1129 | 150 | 0.0589 | - |
0.1505 | 200 | 0.039 | - |
0.1881 | 250 | 0.0326 | - |
0.2257 | 300 | 0.0365 | - |
0.2634 | 350 | 0.0192 | - |
0.3010 | 400 | 0.0254 | - |
0.3386 | 450 | 0.0166 | - |
0.3762 | 500 | 0.0139 | - |
0.4138 | 550 | 0.0161 | - |
0.4515 | 600 | 0.0116 | - |
0.4891 | 650 | 0.0179 | - |
0.5267 | 700 | 0.0184 | - |
0.5643 | 750 | 0.0141 | - |
0.6020 | 800 | 0.0159 | - |
0.6396 | 850 | 0.0253 | - |
0.6772 | 900 | 0.0064 | - |
0.7148 | 950 | 0.0173 | - |
0.7524 | 1000 | 0.0172 | - |
0.7901 | 1050 | 0.0108 | - |
0.8277 | 1100 | 0.0055 | - |
0.8653 | 1150 | 0.0211 | - |
0.9029 | 1200 | 0.0053 | - |
0.9406 | 1250 | 0.0175 | - |
0.9782 | 1300 | 0.0112 | - |
1.0158 | 1350 | 0.005 | - |
1.0534 | 1400 | 0.0046 | - |
1.0910 | 1450 | 0.0072 | - |
1.1287 | 1500 | 0.0144 | - |
1.1663 | 1550 | 0.0284 | - |
1.2039 | 1600 | 0.0039 | - |
1.2415 | 1650 | 0.0157 | - |
1.2792 | 1700 | 0.014 | - |
1.3168 | 1750 | 0.0082 | - |
1.3544 | 1800 | 0.0029 | - |
1.3920 | 1850 | 0.0099 | - |
1.4296 | 1900 | 0.0037 | - |
1.4673 | 1950 | 0.0097 | - |
1.5049 | 2000 | 0.0064 | - |
1.5425 | 2050 | 0.0037 | - |
1.5801 | 2100 | 0.0042 | - |
1.6178 | 2150 | 0.0167 | - |
1.6554 | 2200 | 0.0062 | - |
1.6930 | 2250 | 0.0057 | - |
1.7306 | 2300 | 0.0072 | - |
1.7682 | 2350 | 0.017 | - |
1.8059 | 2400 | 0.0175 | - |
1.8435 | 2450 | 0.0067 | - |
1.8811 | 2500 | 0.0162 | - |
1.9187 | 2550 | 0.0058 | - |
1.9564 | 2600 | 0.0019 | - |
1.9940 | 2650 | 0.0171 | - |
2.0316 | 2700 | 0.0072 | - |
2.0692 | 2750 | 0.0034 | - |
2.1068 | 2800 | 0.0032 | - |
2.1445 | 2850 | 0.0054 | - |
2.1821 | 2900 | 0.0025 | - |
2.2197 | 2950 | 0.0047 | - |
2.2573 | 3000 | 0.0026 | - |
2.2950 | 3050 | 0.002 | - |
2.3326 | 3100 | 0.0043 | - |
2.3702 | 3150 | 0.0022 | - |
2.4078 | 3200 | 0.0036 | - |
2.4454 | 3250 | 0.0023 | - |
2.4831 | 3300 | 0.0018 | - |
2.5207 | 3350 | 0.0021 | - |
2.5583 | 3400 | 0.0026 | - |
2.5959 | 3450 | 0.003 | - |
2.6336 | 3500 | 0.0028 | - |
2.6712 | 3550 | 0.0025 | - |
2.7088 | 3600 | 0.0026 | - |
2.7464 | 3650 | 0.0018 | - |
2.7840 | 3700 | 0.0021 | - |
2.8217 | 3750 | 0.0107 | - |
2.8593 | 3800 | 0.0024 | - |
2.8969 | 3850 | 0.0022 | - |
2.9345 | 3900 | 0.0027 | - |
2.9722 | 3950 | 0.0023 | - |
3.0098 | 4000 | 0.0015 | - |
3.0474 | 4050 | 0.0035 | - |
3.0850 | 4100 | 0.0013 | - |
3.1226 | 4150 | 0.0014 | - |
3.1603 | 4200 | 0.0013 | - |
3.1979 | 4250 | 0.0015 | - |
3.2355 | 4300 | 0.0014 | - |
3.2731 | 4350 | 0.0022 | - |
3.3108 | 4400 | 0.0012 | - |
3.3484 | 4450 | 0.0018 | - |
3.3860 | 4500 | 0.0027 | - |
3.4236 | 4550 | 0.0014 | - |
3.4612 | 4600 | 0.001 | - |
3.4989 | 4650 | 0.0013 | - |
3.5365 | 4700 | 0.0013 | - |
3.5741 | 4750 | 0.0014 | - |
3.6117 | 4800 | 0.001 | - |
3.6494 | 4850 | 0.0012 | - |
3.6870 | 4900 | 0.0053 | - |
3.7246 | 4950 | 0.0025 | - |
3.7622 | 5000 | 0.0011 | - |
3.7998 | 5050 | 0.0013 | - |
3.8375 | 5100 | 0.0019 | - |
3.8751 | 5150 | 0.0012 | - |
3.9127 | 5200 | 0.0011 | - |
3.9503 | 5250 | 0.0013 | - |
3.9880 | 5300 | 0.0017 | - |
4.0256 | 5350 | 0.0013 | - |
4.0632 | 5400 | 0.0069 | - |
4.1008 | 5450 | 0.0009 | - |
4.1384 | 5500 | 0.0022 | - |
4.1761 | 5550 | 0.0013 | - |
4.2137 | 5600 | 0.0009 | - |
4.2513 | 5650 | 0.0011 | - |
4.2889 | 5700 | 0.0013 | - |
4.3266 | 5750 | 0.0014 | - |
4.3642 | 5800 | 0.0012 | - |
4.4018 | 5850 | 0.0014 | - |
4.4394 | 5900 | 0.0039 | - |
4.4771 | 5950 | 0.0011 | - |
4.5147 | 6000 | 0.0011 | - |
4.5523 | 6050 | 0.0012 | - |
4.5899 | 6100 | 0.0011 | - |
4.6275 | 6150 | 0.0024 | - |
4.6652 | 6200 | 0.0024 | - |
4.7028 | 6250 | 0.0039 | - |
4.7404 | 6300 | 0.0029 | - |
4.7780 | 6350 | 0.0015 | - |
4.8157 | 6400 | 0.0013 | - |
4.8533 | 6450 | 0.0007 | - |
4.8909 | 6500 | 0.0008 | - |
4.9285 | 6550 | 0.001 | - |
4.9661 | 6600 | 0.0012 | - |
0.0020 | 1 | 0.8538 | - |
0.0998 | 50 | 0.429 | - |
0.1996 | 100 | 0.0025 | - |
0.2994 | 150 | 0.0015 | - |
0.3992 | 200 | 0.0007 | - |
0.4990 | 250 | 0.0005 | - |
0.5988 | 300 | 0.0004 | - |
0.6986 | 350 | 0.0003 | - |
0.7984 | 400 | 0.0003 | - |
0.8982 | 450 | 0.0004 | - |
0.9980 | 500 | 0.0003 | - |
Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.6.1
- Transformers: 4.38.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2
Citation
BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```