SetFit
🤗 SetFit is an efficient and prompt-free framework for few-shot fine-tuning of Sentence Transformers. It achieves high accuracy with little labeled data: for instance, with only 8 labeled examples per class on the Customer Reviews sentiment dataset, 🤗 SetFit is competitive with fine-tuning RoBERTa Large on the full training set of 3k examples!
Compared to other few-shot learning methods, SetFit has several unique features:
- 🗣 No prompts or verbalizers: Current techniques for few-shot fine-tuning require handcrafted prompts or verbalizers to convert examples into a format suitable for the underlying language model. SetFit dispenses with prompts altogether by generating rich embeddings directly from text examples.
- 🏎 Fast to train: SetFit doesn't require large-scale models like T0, Llama or GPT-4 to achieve high accuracy. As a result, it is typically an order of magnitude (or more) faster to train and run inference with.
- 🌎 Multilingual support: SetFit can be used with any Sentence Transformer on the Hub, which means you can classify text in multiple languages by simply fine-tuning a multilingual checkpoint.
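Part of why SetFit works without prompts is its contrastive fine-tuning step: from a handful of labeled sentences it builds positive pairs (same class) and negative pairs (different classes), which are then used to fine-tune the Sentence Transformer body before a classification head is trained on the resulting embeddings. The helper below is a minimal, hypothetical sketch of that pair-generation idea, not the library's actual implementation:

```python
from itertools import combinations

def generate_contrastive_pairs(texts, labels):
    """Build (sentence_a, sentence_b, pair_label) triples from a few
    labeled examples: pairs sharing a class label are positives (1.0),
    pairs from different classes are negatives (0.0)."""
    pairs = []
    for i, j in combinations(range(len(texts)), 2):
        pair_label = 1.0 if labels[i] == labels[j] else 0.0
        pairs.append((texts[i], texts[j], pair_label))
    return pairs

# Tiny labeled set: 2 examples per class (illustrative data).
texts = [
    "Absolutely loved this product",
    "Best purchase I have made",
    "Terrible, broke after one day",
    "Would not recommend to anyone",
]
labels = [1, 1, 0, 0]

pairs = generate_contrastive_pairs(texts, labels)
# 4 examples yield C(4, 2) = 6 pairs: 2 positives, 4 negatives.
```

Because every pair of examples becomes a training signal, even 8 examples per class expand into many contrastive pairs, which is what lets the embedding model adapt from so little labeled data.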
Learn the basics and become familiar with loading pretrained Sentence Transformers and fine-tuning them on data. Start here if you are using 🤗 SetFit for the first time!
Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 SetFit to solve real-world problems.
High-level explanations that build a deeper understanding of important topics such as few-shot and contrastive learning.
Technical descriptions of how 🤗 SetFit classes and methods work.