---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/paraphrase-mpnet-base-v2
datasets:
- ethos
metrics:
- accuracy
widget:
- text: Why are niggers superior athletically to all other races?
- text: An ugly man in a wig and dress, is still just an ugly man in a wig and dress... NOT A WOMAN!!!
- text: Perfect place for Lana- China and Myanmar. No trouble there. They know how to handle those pigs.
- text: No counting when Hispanics get probation if they do find them self in some trouble they get probation then get deported and if they come back boom there in violation of the probation we gave them but knowingly sent them back. Those rats are smart. Bunch of GOVERNMENT BULL SHIT
- text: Most Americans are nazis or kkk members so they elected Trump.
pipeline_tag: text-classification
inference: false
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: ethos
      type: ethos
      split: test
    metrics:
    - type: accuracy
      value: 0.46524064171123
      name: Accuracy
---

# SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [ethos](https://huggingface.co/datasets/ethos) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 512 tokens
- **Training Dataset:** [ethos](https://huggingface.co/datasets/ethos)

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.4652   |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Ankit15nov/setfit-ethos-multilabel-example")
# Run inference
preds = model("Why are niggers superior athletically to all other races?")
```

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 3   | 27.0625 | 463 |

### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0063 | 1    | 0.2956        | -               |
| 0.3125 | 50   | 0.1114        | -               |
| 0.625  | 100  | 0.1177        | -               |
| 0.9375 | 150  | 0.0695        | -               |

### Framework Versions
- Python: 3.10.14
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.1
- PyTorch: 2.1.0
- Datasets: 2.3.2
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```
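
## Training Configuration Sketch

The hyperparameter names listed under Training Hyperparameters match SetFit's `TrainingArguments`, and the one-vs-rest head in Model Details corresponds to `multi_target_strategy="one-vs-rest"`. The snippet below is a minimal, non-authoritative sketch of how a comparable multilabel model could be set up with SetFit 1.0.x. The toy dataset, its two label columns, and the texts are illustrative stand-ins only; the exact ethos split and multi-hot label preparation used for this card are not recorded here.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Stand-in few-shot data: SetFit expects a "text" column and, for multilabel
# heads, a multi-hot "label" column. These rows are placeholders, not the
# actual ethos training examples.
train_dataset = Dataset.from_dict({
    "text": [
        "example of a hateful sentence",
        "another example of a hateful sentence",
        "an innocuous sentence",
        "another innocuous sentence",
    ],
    "label": [[1, 0], [1, 0], [0, 1], [0, 1]],  # two hypothetical label columns
})

# "one-vs-rest" yields the OneVsRestClassifier head described in Model Details.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",
)

# Values copied from the Training Hyperparameters section above.
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

# Optionally push the result to the Hub under your own namespace:
# trainer.model.push_to_hub("your-username/setfit-ethos-multilabel-example")
```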