---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- hojzas/proj4-all-labs
metrics:
- accuracy
widget:
- text: return list(dict.fromkeys(sorted(it)))
- text: ' perms = all_permutations_substrings(string)\n result = perms & set(words)\n return set(i for i in words if i in perms)'
- text: return [l for i, l in enumerate(it) if i == it.index(l)]
- text: " unique_items = set(it)\n return sorted(list(unique_items))"
- text: " seen = set()\n result = []\n for word in it:\n if word not in seen:\n result.append(word)\n seen.add(word)\n return result"
pipeline_tag: text-classification
inference: true
co2_eq_emissions:
  emissions: 6.0133985248367114
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
  ram_total_size: 251.49161911010742
  hours_used: 0.019
  hardware_used: 4 x NVIDIA RTX A5000
base_model: sentence-transformers/all-mpnet-base-v2
---

# SetFit with sentence-transformers/all-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [hojzas/proj4-all-labs](https://huggingface.co/datasets/hojzas/proj4-all-labs) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 7 classes
- **Training Dataset:** [hojzas/proj4-all-labs](https://huggingface.co/datasets/hojzas/proj4-all-labs)

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 0     |          |
| 1     |          |
| 2     |          |
| 3     |          |
| 4     |          |
| 5     |          |
| 6     |          |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("hojzas/proj4-all-labs")
# Run inference
preds = model("return list(dict.fromkeys(sorted(it)))")
```

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 2   | 25.0515 | 140 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 35                    |
| 1     | 14                    |
| 2     | 8                     |
| 3     | 10                    |
| 4     | 9                     |
| 5     | 13                    |
| 6     | 8                     |

### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
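These hyperparameters map one-to-one onto SetFit's `TrainingArguments`. The sketch below shows how a comparable run could be wired up; it is a hedged reconstruction, not the author's training script. The `split="train"` usage and the `text`/`label` column names of hojzas/proj4-all-labs are assumptions (the commented `column_mapping` line, with hypothetical names, shows how to adapt if they differ), and the output directory name is illustrative.

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Assumption: the dataset has a "train" split with "text" and "label" columns.
train_dataset = load_dataset("hojzas/proj4-all-labs", split="train")

# Start from the same Sentence Transformer body used by this model.
model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")

args = TrainingArguments(
    batch_size=16,
    num_epochs=1,
    num_iterations=20,          # how many contrastive text pairs are sampled
    body_learning_rate=2e-05,   # Sentence Transformer body
    head_learning_rate=2e-05,   # classification head
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    # column_mapping={"code": "text", "lab": "label"},  # hypothetical names; only if needed
)
trainer.train()
model.save_pretrained("proj4-all-labs-setfit")  # illustrative output path
```

With the 97 labelled examples counted above spread across 7 classes, a run like this finishes quickly; the card reports roughly 0.019 hours of tracked hardware time.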
### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0041 | 1    | 0.1745        | -               |
| 0.2058 | 50   | 0.0355        | -               |
| 0.4115 | 100  | 0.0168        | -               |
| 0.6173 | 150  | 0.0042        | -               |
| 0.8230 | 200  | 0.0075        | -               |

### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.006 kg of CO2
- **Hours Used**: 0.019 hours

### Training Hardware
- **On Cloud**: No
- **GPU Model**: 4 x NVIDIA RTX A5000
- **CPU Model**: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
- **RAM Size**: 251.49 GB

### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.36.1
- PyTorch: 2.1.2+cu121
- Datasets: 2.14.7
- Tokenizers: 0.15.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
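A closing note on the Environmental Impact figures above: they come from [CodeCarbon](https://github.com/mlco2/codecarbon), which meters energy use while code runs and converts it into a CO2-equivalent estimate. A minimal sketch of that measurement pattern (the project name is illustrative, and the `time.sleep` call merely stands in for the `trainer.train()` call from the training sketch earlier in this card):

```python
import time

from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="setfit-proj4")  # illustrative name
tracker.start()
try:
    time.sleep(5)  # stand-in for trainer.train()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```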