---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/all-mpnet-base-v2
metrics:
- f1
widget:
- text: What could possibly go wrong?
- text: We may have faith that human inventiveness will prevail in the long run.
- text: That can happen again.
- text: But in fact it was intensely rational.
- text: Chinese crime, like Chinese cuisine, varies according to regional origin.
pipeline_tag: text-classification
inference: true
model-index:
- name: SetFit with sentence-transformers/all-mpnet-base-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: f1
      value: 0.7866108786610879
      name: F1
---

# SetFit with sentence-transformers/all-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 2 classes

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 1     |          |
| 0     |          |

## Evaluation

### Metrics
| Label   | F1     |
|:--------|:-------|
| **all** | 0.7866 |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("SOUMYADEEPSAR/Setfit_subj_all-mpnet-base-v2")
# Run inference
preds = model("That can happen again.")
```

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 3   | 36.5327 | 97  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 100                   |
| 1     | 114                   |

### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
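The hyperparameters above map directly onto `setfit.TrainingArguments` fields. Below is a minimal sketch of how a comparable run could be set up with SetFit 1.0; the two-sentence `train_dataset` is a hypothetical stand-in for the actual 214-example training set, which is not distributed with this card, and its texts and labels are placeholders. `loss`, `distance_metric`, and `margin` are left at the SetFit defaults, which match the values listed.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder data: the real training set (100 label-0 and 114 label-1
# sentences) is not published with this model card.
train_dataset = Dataset.from_dict({
    "text": ["an example sentence for label 0", "an example sentence for label 1"],
    "label": [0, 1],
})

# Base embedding model with a differentiable SetFitHead for 2 classes,
# matching the Model Description above.
model = SetFitModel.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2",
    use_differentiable_head=True,
    head_params={"out_features": 2},
)

# Tuples are (embedding body, classifier head), mirroring the listed values.
args = TrainingArguments(
    batch_size=(8, 8),
    num_epochs=(1, 1),
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    num_iterations=20,
    sampling_strategy="oversampling",
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    metric="f1",
)
trainer.train()
```

Note that with `end_to_end=False`, the embedding body is frozen during the second (classifier) phase and only the SetFitHead is updated.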
### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0003 | 1    | 0.3816        | -               |
| 1.0    | 2902 | 0.0           | 0.2172          |
| 2.0    | 5804 | 0.0           | 0.2248          |
| 0.0003 | 1    | 0.5764        | -               |
| 0.0467 | 50   | 0.0009        | -               |
| 0.0935 | 100  | 0.0011        | -               |
| 0.1402 | 150  | 0.0001        | -               |
| 0.1869 | 200  | 0.0001        | -               |
| 0.2336 | 250  | 0.0001        | -               |
| 0.2804 | 300  | 0.0           | -               |
| 0.3271 | 350  | 0.0           | -               |
| 0.3738 | 400  | 0.0           | -               |
| 0.4206 | 450  | 0.0001        | -               |
| 0.4673 | 500  | 0.0           | -               |
| 0.5140 | 550  | 0.0           | -               |
| 0.5607 | 600  | 0.0           | -               |
| 0.6075 | 650  | 0.0           | -               |
| 0.6542 | 700  | 0.0           | -               |
| 0.7009 | 750  | 0.0           | -               |
| 0.7477 | 800  | 0.0           | -               |
| 0.7944 | 850  | 0.0           | -               |
| 0.8411 | 900  | 0.0           | -               |
| 0.8879 | 950  | 0.0001        | -               |
| 0.9346 | 1000 | 0.0           | -               |
| 0.9813 | 1050 | 0.0           | -               |

### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.1
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.1
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```