---
library_name: zeroshot_classifier
tags:
  - transformers
  - sentence-transformers
  - zeroshot_classifier
license: mit
datasets:
  - claritylab/UTCD
language:
  - en
pipeline_tag: zero-shot-classification
metrics:
  - accuracy
---

# Zero-shot Vanilla Bi-Encoder

This is a sentence-transformers model. It was introduced in the Findings of ACL'23 paper *Label Agnostic Pre-training for Zero-shot Text Classification* by Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang, and Jason Mars. The code for training and evaluating this model can be found here.

## Model description

This model is intended for zero-shot text classification. It was trained as a baseline under the dual-encoding classification framework on the aspect-normalized UTCD dataset.

## Usage

You can use the model like this:

```python
>>> from sentence_transformers import SentenceTransformer, util as sbert_util

>>> model = SentenceTransformer(model_name_or_path='claritylab/zero-shot-vanilla-bi-encoder')

>>> text = "I'd like to have this track onto my Classical Relaxations playlist."
>>> labels = [
>>>     'Add To Playlist', 'Book Restaurant', 'Get Weather', 'Play Music', 'Rate Book', 'Search Creative Work',
>>>     'Search Screening Event'
>>> ]

>>> # Encode the input text and each candidate label independently (bi-encoder setup)
>>> text_embed = model.encode(text)
>>> label_embeds = model.encode(labels)

>>> # Score each label by cosine similarity to the text embedding
>>> scores = [sbert_util.cos_sim(text_embed, lb_embed).item() for lb_embed in label_embeds]
>>> print(scores)
```

```
[
  0.7219685912132263,
  -0.011121425777673721,
  0.04929959028959274,
  0.6653788089752197,
  0.07093366980552673,
  0.2897151708602905,
  0.06133288890123367
]
```
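
To turn these scores into a prediction, pick the label with the highest cosine similarity. A minimal sketch continuing the session above (plain Python, no additional library assumptions):

```python
>>> # Select the candidate label whose embedding is most similar to the text embedding
>>> best_idx = max(range(len(scores)), key=lambda i: scores[i])
>>> print(labels[best_idx])
```

```
Add To Playlist
```

Here the first label wins (score ≈ 0.72), matching the playlist intent of the input sentence.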