---
library_name: setfit
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: cointegrated/rubert-tiny2
metrics:
- accuracy
widget:
- text: а л а палтуса запеченного – х о:П о п р о б о в а л а палтуса запеченного – х о р о ш , д а и к р а с и в о с м о т р и т с я н а т а р е л к е .
- text: 'с курицей , лосось со шпинатным соусом , чай облепиховый:При каждом новом посещении я стараюсь пробовать новые блюда из меню , особенно мне понравились : цезарь с курицей , лосось со шпинатным соусом , чай облепиховый и тирамису от шеф повара .'
- text: ', но качество еды ее не украсило:Свадьба , конечно , прошла весело , но качество еды ее не украсило .'
- text: найти уютное недорогое местечко в районе метро:Думаю , если стоит задача найти уютное недорогое местечко в районе метро московская , то это наверно один из лучших вариантов .
- text: они начали разнообразить кухню мясными блюдами ,:Хочется , чтобы мой отзыв дошел до администрации , и они начали разнообразить кухню мясными блюдами , гарнирами , интересными салатами и супами .
pipeline_tag: text-classification
inference: false
---

# SetFit Polarity Model with cointegrated/rubert-tiny2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

This model was trained within the context of a larger system for ABSA, which looks like so:

1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
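If you want to reproduce a model like this one, the two-step few-shot procedure described above is typically driven by `setfit`'s `AbsaTrainer`. The snippet below is a minimal sketch, not the exact training script behind this card: the tiny in-memory dataset and its Russian sentences are invented purely to illustrate the expected column format, while the base model, spaCy pipeline, and hyperparameter values mirror those listed under Model Details and Training Hyperparameters further down.

```python
from datasets import Dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Start from the same base embedding model and spaCy pipeline as this card.
model = AbsaModel.from_pretrained(
    "cointegrated/rubert-tiny2",
    spacy_model="ru_core_news_lg",
)

# Toy dataset in the SetFit ABSA format (invented for illustration only):
# "text" is the full review, "span" the aspect phrase, "label" its polarity,
# and "ordinal" marks which occurrence of the span is meant (0 = first).
# A real run would use far more examples (this card used 128 per class).
train_dataset = Dataset.from_dict({
    "text": [
        "Еда отличная, но обслуживание медленное.",
        "Еда отличная, но обслуживание медленное.",
        "Суп оказался пересоленным, зато десерт порадовал.",
        "Суп оказался пересоленным, зато десерт порадовал.",
    ],
    "span": ["Еда", "обслуживание", "Суп", "десерт"],
    "label": ["Positive", "Negative", "Negative", "Positive"],
    "ordinal": [0, 0, 0, 0],
})

args = TrainingArguments(
    batch_size=(16, 2),               # (embedding body, classification head)
    num_epochs=(1, 16),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
)

trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)
trainer.train()
```

Calling `trainer.train()` first fine-tunes the embedding body on contrastive pairs sampled from the labeled spans, then fits the classification head on the resulting embeddings, matching the two steps listed above.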
## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** ru_core_news_lg
- **SetFitABSA Aspect Model:** [isolation-forest/setfit-absa-aspect](https://huggingface.co/isolation-forest/setfit-absa-aspect)
- **SetFitABSA Polarity Model:** [isolation-forest/setfit-absa-polarity](https://huggingface.co/isolation-forest/setfit-absa-polarity)
- **Maximum Sequence Length:** 2048 tokens
- **Number of Classes:** 2 classes

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label    | Examples |
|:---------|:---------|
| Positive |          |
| Negative |          |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```
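The aspect extraction step of this ABSA system relies on the ru_core_news_lg spaCy pipeline listed under Model Details. If that pipeline is not already present in your environment, it can usually be fetched with spaCy's download command (shown here as an assumption about a standard spaCy installation):

```bash
python -m spacy download ru_core_news_lg
```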
Then you can load this model and run inference.

```python
from setfit import AbsaModel

# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
    "isolation-forest/setfit-absa-aspect",
    "isolation-forest/setfit-absa-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 3   | 28.4766 | 92  |

| Label    | Training Sample Count |
|:---------|:----------------------|
| Negative | 128                   |
| Positive | 128                   |

### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0005 | 1    | 0.2196        | -               |
| 0.0242 | 50   | 0.2339        | -               |
| 0.0484 | 100  | 0.2258        | -               |
| 0.0727 | 150  | 0.246         | -               |
| 0.0969 | 200  | 0.1963        | -               |
| 0.1211 | 250  | 0.18          | -               |
| 0.1453 | 300  | 0.1176        | -               |
| 0.1696 | 350  | 0.0588        | -               |
| 0.1938 | 400  | 0.0482        | -               |
| 0.2180 | 450  | 0.1131        | -               |
| 0.2422 | 500  | 0.0134        | -               |
| 0.2665 | 550  | 0.0415        | -               |
| 0.2907 | 600  | 0.0144        | -               |
| 0.3149 | 650  | 0.012         | -               |
| 0.3391 | 700  | 0.0091        | -               |
| 0.3634 | 750  | 0.0055        | -               |
| 0.3876 | 800  | 0.0054        | -               |
| 0.4118 | 850  | 0.0055        | -               |
| 0.4360 | 900  | 0.0072        | -               |
| 0.4603 | 950  | 0.0094        | -               |
| 0.4845 | 1000 | 0.0054        | -               |
| 0.5087 | 1050 | 0.0045        | -               |
| 0.5329 | 1100 | 0.003         | -               |
| 0.5572 | 1150 | 0.0067        | -               |
| 0.5814 | 1200 | 0.0041        | -               |
| 0.6056 | 1250 | 0.0048        | -               |
| 0.6298 | 1300 | 0.0053        | -               |
| 0.6541 | 1350 | 0.0048        | -               |
| 0.6783 | 1400 | 0.0038        | -               |
| 0.7025 | 1450 | 0.0037        | -               |
| 0.7267 | 1500 | 0.0031        | -               |
| 0.7510 | 1550 | 0.0038        | -               |
| 0.7752 | 1600 | 0.0032        | -               |
| 0.7994 | 1650 | 0.0039        | -               |
| 0.8236 | 1700 | 0.0032        | -               |
| 0.8479 | 1750 | 0.0023        | -               |
| 0.8721 | 1800 | 0.0029        | -               |
| 0.8963 | 1850 | 0.0041        | -               |
| 0.9205 | 1900 | 0.0026        | -               |
| 0.9448 | 1950 | 0.0027        | -               |
| 0.9690 | 2000 | 0.0035        | -               |
| 0.9932 | 2050 | 0.003         | -               |

### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- spaCy: 3.7.2
- Transformers: 4.39.3
- PyTorch: 2.1.2
- Datasets: 2.18.0
- Tokenizers: 0.15.2

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```