---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# moshew/gte_tiny_setfit-sst2-english
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) ("TaylorAI/gte-tiny") with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Training code
```python
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer

# Load the SST-2 dataset from the Hugging Face Hub
dataset = load_dataset("SetFit/sst2")

# Train and test splits
train_ds = dataset["train"]
test_ds = dataset["test"]

# Initialise SetFit with the "TaylorAI/gte-tiny" Sentence Transformer body
model = SetFitModel.from_pretrained("TaylorAI/gte-tiny")
trainer = SetFitTrainer(model=model, train_dataset=train_ds, eval_dataset=test_ds)

# Train and evaluate
trainer.train()
metrics = trainer.evaluate()
print(metrics["accuracy"])
```
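The script above fine-tunes on the full SST-2 training split. For a true few-shot setup, SetFit's `sample_dataset` helper draws a small, class-balanced subset. The sketch below uses an illustrative 8 examples per class; that sample size is an assumption for demonstration, not the setting used for this model, and accuracy will differ from the full-data run.
```python
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer, sample_dataset

dataset = load_dataset("SetFit/sst2")

# Draw 8 examples per class for a few-shot training set (8 is an illustrative choice)
few_shot_train_ds = sample_dataset(dataset["train"], label_column="label", num_samples=8)

model = SetFitModel.from_pretrained("TaylorAI/gte-tiny")
trainer = SetFitTrainer(model=model, train_dataset=few_shot_train_ds, eval_dataset=dataset["test"])
trainer.train()
print(trainer.evaluate()["accuracy"])
```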
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("moshew/gte_tiny_setfit-sst2-english")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
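The call above returns predicted label ids. If you also want class probabilities, `SetFitModel.predict_proba` is available when the model uses the default scikit-learn logistic-regression head (assumed here); a minimal sketch:
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("moshew/gte_tiny_setfit-sst2-english")

# Per-class probabilities; assumes the default logistic-regression classification head
probs = model.predict_proba([
    "i loved the spiderman movie!",
    "pineapple on pizza is the worst 🤮",
])
print(probs)  # one row per input; for SST-2, column 0 = negative, column 1 = positive
```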
## Accuracy
Accuracy on the SST-2 dev set:

| Setup | Accuracy |
|-------|----------|
| SetFit (fine-tuned, this model) | 90.7% |
| No fine-tuning | 85.5% |
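To check the dev-set number yourself, you can score the published model on the validation split. A minimal sketch, assuming the `SetFit/sst2` dataset exposes a `validation` split with `text` and `label` columns:
```python
from datasets import load_dataset
from setfit import SetFitModel

model = SetFitModel.from_pretrained("moshew/gte_tiny_setfit-sst2-english")
dev_ds = load_dataset("SetFit/sst2", split="validation")

# Predict labels for the dev set and compute plain accuracy
preds = model.predict(dev_ds["text"])
accuracy = sum(int(p) == int(y) for p, y in zip(preds, dev_ds["label"])) / len(dev_ds)
print(f"SST-2 dev accuracy: {accuracy:.3f}")
```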
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```