---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: ' i still dont know what we would do though'
- text: ' where`d you go!'
- text: ' Thank you! I`m working on `s'
- text: Terminator Salvation... by myself.
- text: ' lol man i got 2 1 /2 hrs an iont how i woulda made it wit out my ramen noodles and t.v. Time'
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.79
      name: Accuracy
---

# SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 3 classes

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 0     |          |
| 1     |          |
| 2     |          |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.79     |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("ehsanhallo/setfit-paraphrase-multilingual-MiniLM-L12-v2-ig-fa")
# Run inference
preds = model(" where`d you go!")
```

## Training Details

### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 1   | 6.4184 | 75  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 69                    |
| 1     | 238                   |
| 2     | 551                   |

### Training Hyperparameters
- batch_size: (32, 16)
- num_epochs: (1, 2)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 5e-06)
- head_learning_rate: 0.002
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True

### Training Results
| Epoch      | Step     | Training Loss | Validation Loss |
|:----------:|:--------:|:-------------:|:---------------:|
| 0.0001     | 1        | 0.1767        | -               |
| 0.0216     | 250      | 0.1513        | -               |
| 0.0431     | 500      | 0.0629        | 0.2389          |
| 0.0647     | 750      | 0.0351        | -               |
| 0.0862     | 1000     | 0.0015        | 0.1886          |
| 0.1078     | 1250     | 0.0003        | -               |
| 0.1293     | 1500     | 0.0004        | 0.1813          |
| 0.1509     | 1750     | 0.0002        | -               |
| **0.1724** | **2000** | **0.0002**    | **0.1807**      |
| 0.1940     | 2250     | 0.0001        | -               |
| 0.2155     | 2500     | 0.0001        | 0.187           |
| 0.2371     | 2750     | 0.0001        | -               |
| 0.2586     | 3000     | 0.0001        | 0.1903          |
| 0.2802     | 3250     | 0.0001        | -               |
| 0.3018     | 3500     | 0.0           | 0.1864          |
| 0.3233     | 3750     | 0.0           | -               |
| 0.3449     | 4000     | 0.0           | 0.193           |
| 0.3664     | 4250     | 0.0           | -               |
| 0.3880     | 4500     | 0.0           | 0.1879          |
| 0.4095     | 4750     | 0.0           | -               |
| 0.4311     | 5000     | 0.0           | 0.1887          |
| 0.4526     | 5250     | 0.0           | -               |
| 0.4742     | 5500     | 0.0           | 0.187           |
| 0.4957     | 5750     | 0.0           | -               |
| 0.5173     | 6000     | 0.0001        | 0.205           |
| 0.5388     | 6250     | 0.0           | -               |
| 0.5604     | 6500     | 0.0           | 0.205           |
| 0.5819     | 6750     | 0.0           | -               |
| 0.6035     | 7000     | 0.0           | 0.2018          |
| 0.6251     | 7250     | 0.0           | -               |
| 0.6466     | 7500     | 0.0           | 0.2022          |
| 0.6682     | 7750     | 0.0           | -               |
| 0.6897     | 8000     | 0.0           | 0.2063          |
| 0.7113     | 8250     | 0.0           | -               |
| 0.7328     | 8500     | 0.0           | 0.2143          |
| 0.7544     | 8750     | 0.0           | -               |
| 0.7759     | 9000     | 0.0           | 0.2206          |
| 0.7975     | 9250     | 0.0           | -               |
| 0.8190     | 9500     | 0.0           | 0.2167          |
| 0.8406     | 9750     | 0.0           | -               |
| 0.8621     | 10000    | 0.0           | 0.2176          |
| 0.8837     | 10250    | 0.0           | -               |
| 0.9053     | 10500    | 0.0           | 0.217           |
| 0.9268     | 10750    | 0.0           | -               |
| 0.9484     | 11000    | 0.0           | 0.2153          |
| 0.9699     | 11250    | 0.0           | -               |
| 0.9915     | 11500    | 0.0           | 0.2137          |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```
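As a closing note, the two-step training procedure and the hyperparameters documented above can be put together into a runnable script with the setfit 1.0 `Trainer` API. This is only a sketch: the real training set (858 samples across labels 0, 1 and 2, per the Training Set Metrics above) is not published with this card, so the inline dataset and its labels are invented placeholders.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Base Sentence Transformer body; setfit attaches a LogisticRegression
# head by default, matching the head documented in this card.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

# Placeholder few-shot data; the real 858-sample training set is not published.
train_dataset = Dataset.from_dict({
    "text": [
        " i still dont know what we would do though",
        " where`d you go!",
        "Terminator Salvation... by myself.",
    ],
    "label": [0, 1, 2],  # hypothetical labels, for illustration only
})

# (embedding phase, classifier phase) pairs, mirroring the
# "Training Hyperparameters" section above.
args = TrainingArguments(
    batch_size=(32, 16),
    num_epochs=(1, 2),
    body_learning_rate=(2e-5, 5e-6),
    head_learning_rate=0.002,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the head
model.save_pretrained("my-setfit-model")
```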
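The inference example earlier returns hard label ids. Because the classification head is a scikit-learn LogisticRegression, per-class probabilities are available as well; a minimal sketch, reusing two of the widget samples from the top of this card:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained(
    "ehsanhallo/setfit-paraphrase-multilingual-MiniLM-L12-v2-ig-fa"
)

texts = [" where`d you go!", "Terminator Salvation... by myself."]

# Hard predictions: one of the three label ids (0, 1, or 2) per input.
print(model.predict(texts))

# Per-class probabilities from the LogisticRegression head:
# one row per input, one column per label, each row summing to 1.
print(model.predict_proba(texts))
```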