---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
- precision
- recall
- f1
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: temperature salinity profile collected ctd cast nw atlantic limit40 w noaa ship delaware ii noaa ship albatross iv 14 january 1997 30 october 1997 data collected 12 cruise multiple program ctd cast primarily made conjunction bongo plankton tow plankton data included
- text: plume height misr 82420 california fire 2020 multiangle imaging spectroradiometer misr team nasa jet propulsion laboratory california institute technology pasadena california provided map wildfire smoke plume height several wildfire california derived data acquired misr instrument board nasa terra satellite august 24 2020 misr carry nine fixed camera view scene different angle period seven minute accounting true motion cloud due wind angular parallax cloud different view used derive height smoke plume data contain plume height information czu lightning complex lnu lightning complex scu lightning complex fire observed misr approximately 1210 pm local time august 24 2020 plume height give indication fire intensity indicates whether smoke impacting air quality groundlevel observation plume height also important input air quality model predict smoke go might affect downwind misr plume height map produced using misr interactive explorer minx software
- text: municipal land transfer tax revenue summary
- text: aggregated broccoli production yield
- text: street furniture bicycle parking
inference: false
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.595
      name: Accuracy
    - type: precision
      value: 0.7037037037037037
      name: Precision
    - type: recall
      value: 0.8407079646017699
      name: Recall
    - type: f1
      value: 0.7661290322580645
      name: F1
---

# SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
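As a minimal sketch of this two-stage recipe (not the exact script used for this model; the training texts, label set, and two-class label dimensionality below are hypothetical), a multilabel run with the `setfit` library could look like:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot training set: labels are multi-hot vectors
# (one column per class) because the head is a OneVsRestClassifier.
train_dataset = Dataset.from_dict({
    "text": [
        "street furniture bicycle parking",
        "aggregated broccoli production yield",
        "municipal land transfer tax revenue summary",
    ],
    "label": [[1, 0], [0, 1], [0, 1]],
})

# "one-vs-rest" wraps the default logistic-regression head in
# scikit-learn's OneVsRestClassifier.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",
)

args = TrainingArguments(
    batch_size=16,      # matches the (16, 16) pair in the hyperparameters below
    num_epochs=1,       # one contrastive fine-tuning epoch
    num_iterations=20,  # contrastive text pairs generated per example
)

# Step 1 (contrastive fine-tuning of the body) and step 2 (fitting the
# classification head on the fine-tuned embeddings) both run inside train().
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

# For a multilabel model, predict() returns one multi-hot vector per input.
print(model.predict(["street furniture bicycle parking"]))
```

The one-vs-rest strategy fits one binary classifier per label on the fine-tuned embeddings, which is why predictions come back as multi-hot vectors rather than a single class index.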
## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 512 tokens

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

## Evaluation

### Metrics
| Label   | Accuracy | Precision | Recall | F1     |
|:--------|:---------|:----------|:-------|:-------|
| **all** | 0.595    | 0.7037    | 0.8407 | 0.7661 |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("lgd/setfit-multilabel")
# Run inference
preds = model("street furniture bicycle parking")
```

## Training Details

### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 1   | 59.4   | 411 |

### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.002 | 1    | 0.2153        | -               |
| 0.1   | 50   | 0.201         | -               |
| 0.2   | 100  | 0.1433        | -               |
| 0.3   | 150  | 0.0812        | -               |
| 0.4   | 200  | 0.0866        | -               |
| 0.5   | 250  | 0.0306        | -               |
| 0.6   | 300  | 0.1093        | -               |
| 0.7   | 350  | 0.0647        | -               |
| 0.8   | 400  | 0.0255        | -               |
| 0.9   | 450  | 0.0421        | -               |
| 1.0   | 500  | 0.0366        | -               |

### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 3.0.1
- Transformers: 4.39.0
- PyTorch: 2.3.1+cu121
- Datasets: 2.20.0
- Tokenizers: 0.15.2

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```
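For reference, here is a minimal sketch of how multilabel accuracy, precision, recall, and F1 such as the values reported in the Evaluation section could be computed with scikit-learn. The test texts and gold labels below are hypothetical, and micro averaging is an assumption; the card does not record which averaging was used.

```python
import numpy as np
from setfit import SetFitModel
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

model = SetFitModel.from_pretrained("lgd/setfit-multilabel")

# Hypothetical held-out examples with multi-hot gold labels.
test_texts = [
    "municipal land transfer tax revenue summary",
    "aggregated broccoli production yield",
]
y_true = np.array([[1, 0], [0, 1]])

# predict() may return a torch tensor; convert to a NumPy array for sklearn.
y_pred = model.predict(test_texts)
y_pred = y_pred.cpu().numpy() if hasattr(y_pred, "cpu") else np.asarray(y_pred)

# On multilabel arrays, accuracy_score is exact-match (subset) accuracy.
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="micro", zero_division=0))
print("recall   :", recall_score(y_true, y_pred, average="micro", zero_division=0))
print("f1       :", f1_score(y_true, y_pred, average="micro", zero_division=0))
```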