---
license: apache-2.0
tags:
- feature-extraction
- text-classification
- sentence-transformers
- setfit
pipeline_tag: feature-extraction
datasets:
- gentilrenard/lmd_ukraine_comments
metrics:
- accuracy
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
model-index:
- name: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
  results:
  - task:
      type: feature-extraction
      name: Text Classification
    dataset:
      name: gentilrenard/lmd_ukraine_comments
      type: gentilrenard/lmd_ukraine_comments
      split: test
    metrics:
    - type: accuracy
      value: 0.762589928057554
      name: Accuracy
---

# SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [gentilrenard/lmd_ukraine_comments](https://huggingface.co/datasets/gentilrenard/lmd_ukraine_comments) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
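For reference, this two-phase procedure can be reproduced with the `setfit` `Trainer` API (v1.0). The following is a minimal sketch, not the exact training script: it assumes the dataset exposes `text` and `label` columns (use `column_mapping` otherwise), and the hyperparameter values mirror the Training Hyperparameters section below.

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

dataset = load_dataset("gentilrenard/lmd_ukraine_comments")

# Start from the pretrained Sentence Transformer body; SetFit attaches
# the LogisticRegression head automatically.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"
)

args = TrainingArguments(
    batch_size=(32, 32),              # (embedding phase, classifier phase)
    num_epochs=(2, 2),
    body_learning_rate=(3e-7, 3e-7),  # contrastive fine-tuning of the body
    head_learning_rate=0.01,          # LogisticRegression head
    sampling_strategy="oversampling",
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    metric="accuracy",
)
trainer.train()            # phase 1: contrastive pairs; phase 2: fit the head
print(trainer.evaluate())  # e.g. {"accuracy": ...}
```

The run below additionally used step-based evaluation with `load_best_model_at_end=True`, so the checkpoint with the lowest validation loss (the bold row in Training Results) is the one that was saved.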
## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 3 classes
- **Training Dataset:** [gentilrenard/lmd_ukraine_comments](https://huggingface.co/datasets/gentilrenard/lmd_ukraine_comments)

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 2     |          |
| 0     |          |
| 1     |          |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.7626   |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("gentilrenard/paraphrase-multilingual-mpnet-base-v2_setfit-lemonde-french")
# Run inference
preds = model("Pour Yves Pozzo di Borgo, c'est une tradition familliale. Charles André Pozzo di Borgo fut ambassadeur de la Russie.")
```
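Because the head is a scikit-learn `LogisticRegression`, per-class probabilities are available alongside hard predictions, and the fine-tuned body can also serve as a plain embedding model, which is what the `feature-extraction` pipeline tag refers to. A small sketch continuing from the snippet above, assuming the `predict_proba` method and `model_body` attribute of `SetFitModel` (setfit v1.0):

```python
# Per-class probabilities from the LogisticRegression head (labels 0, 1, 2)
probs = model.predict_proba(["Un commentaire sur la guerre en Ukraine."])
print(probs)  # one row of three probabilities per input

# The fine-tuned Sentence Transformer body used as a feature extractor
embeddings = model.model_body.encode(["Un commentaire sur la guerre en Ukraine."])
print(embeddings.shape)  # (1, 768) for this mpnet-base body
```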
## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 1   | 63.1703 | 180 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 115                   |
| 1     | 82                    |
| 2     | 126                   |

### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (2, 2)
- max_steps: 2350
- sampling_strategy: oversampling
- body_learning_rate: (3e-07, 3e-07)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- run_name: setfit_optimized_v4
- eval_max_steps: -1
- load_best_model_at_end: True

### Training Results
| Epoch      | Step     | Training Loss | Validation Loss |
|:----------:|:--------:|:-------------:|:---------------:|
| 0.0005     | 1        | 0.243         | -               |
| 0.0234     | 50       | 0.2654        | 0.2636          |
| 0.0467     | 100      | 0.2942        | 0.2611          |
| 0.0701     | 150      | 0.2462        | 0.2572          |
| 0.0934     | 200      | 0.2562        | 0.2546          |
| 0.1168     | 250      | 0.2445        | 0.2505          |
| 0.1401     | 300      | 0.2206        | 0.2473          |
| 0.1635     | 350      | 0.2435        | 0.2453          |
| 0.1868     | 400      | 0.1985        | 0.2425          |
| 0.2102     | 450      | 0.265         | 0.2411          |
| 0.2335     | 500      | 0.2408        | 0.2387          |
| 0.2569     | 550      | 0.1986        | 0.2369          |
| 0.2802     | 600      | 0.2071        | 0.2351          |
| 0.3036     | 650      | 0.2119        | 0.2341          |
| 0.3270     | 700      | 0.2558        | 0.2314          |
| 0.3503     | 750      | 0.215         | 0.2292          |
| 0.3737     | 800      | 0.2286        | 0.2271          |
| 0.3970     | 850      | 0.2495        | 0.2256          |
| 0.4204     | 900      | 0.1844        | 0.2237          |
| 0.4437     | 950      | 0.2529        | 0.2216          |
| 0.4671     | 1000     | 0.2074        | 0.2202          |
| 0.4904     | 1050     | 0.1753        | 0.2188          |
| 0.5138     | 1100     | 0.2216        | 0.2169          |
| 0.5371     | 1150     | 0.1878        | 0.2153          |
| 0.5605     | 1200     | 0.1862        | 0.2142          |
| 0.5838     | 1250     | 0.1682        | 0.2129          |
| 0.6072     | 1300     | 0.2425        | 0.2116          |
| 0.6305     | 1350     | 0.174         | 0.211           |
| 0.6539     | 1400     | 0.1641        | 0.209           |
| 0.6773     | 1450     | 0.2014        | 0.2094          |
| 0.7006     | 1500     | 0.1423        | 0.2083          |
| 0.7240     | 1550     | 0.204         | 0.2078          |
| 0.7473     | 1600     | 0.2265        | 0.2075          |
| 0.7707     | 1650     | 0.1812        | 0.2063          |
| 0.7940     | 1700     | 0.1804        | 0.2058          |
| 0.8174     | 1750     | 0.1658        | 0.2055          |
| 0.8407     | 1800     | 0.1374        | 0.2064          |
| 0.8641     | 1850     | 0.1316        | 0.2057          |
| 0.8874     | 1900     | 0.1566        | 0.205           |
| **0.9108** | **1950** | **0.2053**    | **0.2035**      |
| 0.9341     | 2000     | 0.1436        | 0.2046          |
| 0.9575     | 2050     | 0.2436        | 0.2039          |
| 0.9809     | 2100     | 0.1999        | 0.2038          |
| 1.0042     | 2150     | 0.1459        | 0.2042          |
| 1.0276     | 2200     | 0.1669        | 0.2044          |
| 1.0509     | 2250     | 0.1705        | 0.2042          |
| 1.0743     | 2300     | 0.1509        | 0.2038          |
| 1.0976     | 2350     | 0.1382        | 0.2036          |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.3.0
- Transformers: 4.36.0
- PyTorch: 2.0.0
- Datasets: 2.16.1
- Tokenizers: 0.15.0

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```