
# BART large model for NLI-based zero-shot text classification

This model is based on [facebook/bart-large](https://huggingface.co/facebook/bart-large).

## Training Data

This model was trained on the MultiNLI (MNLI) dataset in the manner originally described in [Yin et al. 2019](https://arxiv.org/abs/1909.00161).

It can be used to predict whether a topic label can be assigned to a given sequence, even if the label was never seen during training.
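Under the hood, the input sequence is treated as an NLI premise and each candidate label is turned into a hypothesis such as `This example is travel.`; the model's entailment probability then serves as the score for that label. Below is a minimal sketch of this formulation applied by hand. The hypothesis wording and the contradiction/entailment renormalization follow the convention from Yin et al. 2019; the label indices are looked up from the model config rather than hard-coded.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained('navteca/bart-large-mnli')
tokenizer = AutoTokenizer.from_pretrained('navteca/bart-large-mnli')

premise = 'One day I will see the world.'
hypothesis = 'This example is travel.'  # one hypothesis per candidate label

inputs = tokenizer(premise, hypothesis, return_tensors='pt', truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # one logit each for contradiction / neutral / entailment

# Discard the 'neutral' logit and renormalize over contradiction vs. entailment;
# the entailment probability is the score for this label.
contra_idx = model.config.label2id.get('contradiction', 0)
entail_idx = model.config.label2id.get('entailment', 2)
probs = logits[0, [contra_idx, entail_idx]].softmax(dim=-1)
print(f'P(label applies) = {probs[1].item():.4f}')
```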

## Usage and Performance

The trained model can be used like this:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Load model & tokenizer
bart_model = AutoModelForSequenceClassification.from_pretrained('navteca/bart-large-mnli')
bart_tokenizer = AutoTokenizer.from_pretrained('navteca/bart-large-mnli')

# Build the zero-shot classification pipeline
nlp = pipeline('zero-shot-classification', model=bart_model, tokenizer=bart_tokenizer)

sequence = 'One day I will see the world.'
candidate_labels = ['cooking', 'dancing', 'travel']

# Get predictions; multi_label=True scores each label independently
result = nlp(sequence, candidate_labels, multi_label=True)

print(result)

#{
#  "sequence": "One day I will see the world.",
#  "labels": [
#    "travel",
#    "dancing",
#    "cooking"
#  ],
#  "scores": [
#    0.9941897988319397,
#    0.0060537424869835,
#    0.0020010927692056
#  ]
#}
```
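With `multi_label=True`, each label is scored independently against the sequence, so the scores need not sum to 1. Passing `multi_label=False` (the pipeline default) instead normalizes across the candidate set, which is the better fit when labels are mutually exclusive. The pipeline also accepts a `hypothesis_template` argument; the template string below is only an illustration.

```python
# Mutually exclusive labels: scores are normalized across candidates and sum to 1
result = nlp(sequence, candidate_labels, multi_label=False)

# Optionally rephrase how each label is turned into an NLI hypothesis;
# '{}' is replaced by each candidate label in turn
result = nlp(
    sequence,
    candidate_labels,
    hypothesis_template='This text is about {}.',
)
```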