---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Government Announces Reforms to Pension Fund Regulations
- text: 'Quantum Computing Breakthrough: New Algorithm Solves Cryptography Challenges Faster'
- text: Regulatory Oversight of Short Selling Practices in Financial Markets
- text: Urban Planning Strategies Focus on Sustainable Development Principles
- text: Telehealth Services See Surge in Demand Amid Pandemic
inference: true
---

# SetFit with sentence-transformers/all-MiniLM-L6-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 106 classes

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels

The classifier predicts one of 106 integer labels (0–105). No example texts were recorded per label for this training run, so the usual label/examples table is omitted.

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("snowdere/trainer_topic")
# Run inference
preds = model("Telehealth Services See Surge in Demand Amid Pandemic")
```
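The call above returns the predicted label for a single headline. If you need batched predictions or per-class probabilities, a minimal sketch follows; since the head is a scikit-learn LogisticRegression, probabilities should be available through `SetFitModel.predict_proba` (the input headlines here are just illustrative):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("snowdere/trainer_topic")

headlines = [
    "Telehealth Services See Surge in Demand Amid Pandemic",
    "Regulatory Oversight of Short Selling Practices in Financial Markets",
]

# Batched label predictions
preds = model.predict(headlines)

# Per-class probabilities from the LogisticRegression head,
# one column per class (106 in total)
probs = model.predict_proba(headlines)
print(preds, probs.shape)
```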
## Training Details

### Training Set Metrics

| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 4   | 9.0495 | 17  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 120                   |
| 1     | 100                   |
| 2     | 40                    |
| 3     | 21                    |
| 4–105 | 20 each               |

### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
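For reference, the sketch below shows how a SetFit model with these hyperparameters might be trained. It is a minimal example under stated assumptions, not the exact training script: the two-row inline dataset is hypothetical, and only the non-default hyperparameters from the list above are spelled out.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot training data with "text" and integer "label" columns;
# the real run used 20+ examples for each of the 106 classes.
train_dataset = Dataset.from_dict({
    "text": [
        "Government Announces Reforms to Pension Fund Regulations",
        "Telehealth Services See Surge in Demand Amid Pandemic",
    ],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

args = TrainingArguments(
    batch_size=(16, 16),            # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=20,              # contrastive pairs generated per sample
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

model.save_pretrained("trainer_topic")  # or model.push_to_hub("snowdere/trainer_topic")
```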
### Training Results

| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0002 | 1    | 0.153         | -               |
| 0.0086 | 50   | 0.1179        | -               |
| 0.0172 | 100  | 0.1312        | -               |
| 0.0258 | 150  | 0.0883        | -               |
| 0.0345 | 200  | 0.07          | -               |
| 0.0431 | 250  | 0.0706        | -               |
| 0.0517 | 300  | 0.0462        | -               |
| 0.0603 | 350  | 0.0608        | -               |
| 0.0689 | 400  | 0.0932        | -               |
| 0.0775 | 450  | 0.0726        | -               |
| 0.0862 | 500  | 0.0624        | -               |
| 0.0948 | 550  | 0.0418        | -               |
| 0.1034 | 600  | 0.0417        | -               |
| 0.1120 | 650  | 0.0426        | -               |
| 0.1206 | 700  | 0.0243        | -               |
| 0.1292 | 750  | 0.0387        | -               |
| 0.1379 | 800  | 0.0707        | -               |
| 0.1465 | 850  | 0.0258        | -               |
| 0.1551 | 900  | 0.0182        | -               |
| 0.1637 | 950  | 0.0203        | -               |
| 0.1723 | 1000 | 0.0277        | -               |
| 0.1809 | 1050 | 0.0482        | -               |
| 0.1896 | 1100 | 0.0284        | -               |
| 0.1982 | 1150 | 0.0136        | -               |
| 0.2068 | 1200 | 0.08          | -               |
| 0.2154 | 1250 | 0.0113        | -               |
| 0.2240 | 1300 | 0.0169        | -               |
| 0.2326 | 1350 | 0.0284        | -               |
| 0.2413 | 1400 | 0.0929        | -               |
| 0.2499 | 1450 | 0.0271        | -               |
| 0.2585 | 1500 | 0.0252        | -               |
| 0.2671 | 1550 | 0.0224        | -               |
| 0.2757 | 1600 | 0.0135        | -               |
| 0.2843 | 1650 | 0.0223        | -               |
| 0.2930 | 1700 | 0.0266        | -               |
| 0.3016 | 1750 | 0.0084        | -               |
| 0.3102 | 1800 | 0.0233        | -               |
| 0.3188 | 1850 | 0.039         | -               |
| 0.3274 | 1900 | 0.0264        | -               |
| 0.3360 | 1950 | 0.0165        | -               |
| 0.3446 | 2000 | 0.0113        | -               |
| 0.3533 | 2050 | 0.0394        | -               |
| 0.3619 | 2100 | 0.0142        | -               |
| 0.3705 | 2150 | 0.0421        | -               |
| 0.3791 | 2200 | 0.0355        | -               |
| 0.3877 | 2250 | 0.017         | -               |
| 0.3963 | 2300 | 0.0086        | -               |
| 0.4050 | 2350 | 0.012         | -               |
| 0.4136 | 2400 | 0.0141        | -               |
| 0.4222 | 2450 | 0.0049        | -               |
| 0.4308 | 2500 | 0.0437        | -               |
| 0.4394 | 2550 | 0.0085        | -               |
| 0.4480 | 2600 | 0.0185        | -               |
| 0.4567 | 2650 | 0.0098        | -               |
| 0.4653 | 2700 | 0.0224        | -               |
| 0.4739 | 2750 | 0.0241        | -               |
| 0.4825 | 2800 | 0.0056        | -               |
| 0.4911 | 2850 | 0.028         | -               |
| 0.4997 | 2900 | 0.0601        | -               |
| 0.5084 | 2950 | 0.0169        | -               |
| 0.5170 | 3000 | 0.0286        | -               |
| 0.5256 | 3050 | 0.017         | -               |
| 0.5342 | 3100 | 0.0028        | -               |
| 0.5428 | 3150 | 0.025         | -               |
| 0.5514 | 3200 | 0.009         | -               |
| 0.5601 | 3250 | 0.0161        | -               |
| 0.5687 | 3300 | 0.0072        | -               |
| 0.5773 | 3350 | 0.0047        | -               |
| 0.5859 | 3400 | 0.0066        | -               |
| 0.5945 | 3450 | 0.0101        | -               |
| 0.6031 | 3500 | 0.0116        | -               |
| 0.6118 | 3550 | 0.0153        | -               |
| 0.6204 | 3600 | 0.0075        | -               |
| 0.6290 | 3650 | 0.0071        | -               |
| 0.6376 | 3700 | 0.0116        | -               |
| 0.6462 | 3750 | 0.0073        | -               |
| 0.6548 | 3800 | 0.0113        | -               |
| 0.6634 | 3850 | 0.0475        | -               |
| 0.6721 | 3900 | 0.0067        | -               |
| 0.6807 | 3950 | 0.0111        | -               |
| 0.6893 | 4000 | 0.0101        | -               |
| 0.6979 | 4050 | 0.0084        | -               |
| 0.7065 | 4100 | 0.0089        | -               |
| 0.7151 | 4150 | 0.0035        | -               |
| 0.7238 | 4200 | 0.008         | -               |
| 0.7324 | 4250 | 0.0121        | -               |
| 0.7410 | 4300 | 0.0121        | -               |
| 0.7496 | 4350 | 0.0054        | -               |
| 0.7582 | 4400 | 0.0099        | -               |
| 0.7668 | 4450 | 0.0418        | -               |
| 0.7755 | 4500 | 0.0044        | -               |
| 0.7841 | 4550 | 0.0151        | -               |
| 0.7927 | 4600 | 0.0046        | -               |
| 0.8013 | 4650 | 0.0188        | -               |
| 0.8099 | 4700 | 0.0085        | -               |
| 0.8185 | 4750 | 0.0079        | -               |
| 0.8272 | 4800 | 0.0272        | -               |
| 0.8358 | 4850 | 0.005         | -               |
| 0.8444 | 4900 | 0.0104        | -               |
| 0.8530 | 4950 | 0.0082        | -               |
| 0.8616 | 5000 | 0.0076        | -               |
| 0.8702 | 5050 | 0.0315        | -               |
| 0.8789 | 5100 | 0.0069        | -               |
| 0.8875 | 5150 | 0.0098        | -               |
| 0.8961 | 5200 | 0.0082        | -               |
| 0.9047 | 5250 | 0.0015        | -               |
| 0.9133 | 5300 | 0.0037        | -               |
| 0.9219 | 5350 | 0.0049        | -               |
| 0.9306 | 5400 | 0.0093        | -               |
| 0.9392 | 5450 | 0.0098        | -               |
| 0.9478 | 5500 | 0.0061        | -               |
| 0.9564 | 5550 | 0.0058        | -               |
| 0.9650 | 5600 | 0.0075        | -               |
| 0.9736 | 5650 | 0.027         | -               |
| 0.9823 | 5700 | 0.0285        | -               |
| 0.9909 | 5750 | 0.0032        | -               |
| 0.9995 | 5800 | 0.0098        | -               |

### Framework Versions
- Python: 3.10.14
- SetFit: 1.0.3
- Sentence Transformers: 2.6.1
- Transformers: 4.36.2
- PyTorch: 2.3.0+cu121
- Datasets: 2.19.1
- Tokenizers: 0.15.2

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```