Ambiguity-aware RoBERTa

This model is trained on a subset of the SNLI dataset and represents the ambiguity that arises in natural language inference as a distribution over labels (i.e., its softmax output). It was introduced in the paper "Deep Model Compression Also Helps Models Capture Ambiguity" (ACL 2023).

Usage

from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Load the tokenizer and model from the Hugging Face Hub.
tokenizer = RobertaTokenizer.from_pretrained('hancheolp/ambiguity-aware-roberta-snli')
model = RobertaForSequenceClassification.from_pretrained('hancheolp/ambiguity-aware-roberta-snli')

# Encode a premise-hypothesis pair and run a forward pass.
premise = "To the sociologists' speculations, add mine."
hypothesis = "I don't agree with sociologists."
encoded_input = tokenizer(premise, hypothesis, return_tensors='pt')
output = model(**encoded_input)

# Softmax over the logits yields the label distribution, which reflects how ambiguous the example is.
distribution = output.logits.softmax(dim=-1)

Each index of the output vector represents the following (a short example follows the list):

  • 0: entailment
  • 1: neutral
  • 2: contradiction
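
Continuing the snippet above, the distribution can be mapped back to these labels. This is a minimal sketch; the labels list below simply mirrors the index mapping:

labels = ["entailment", "neutral", "contradiction"]
probs = distribution.squeeze(0).tolist()
for label, p in zip(labels, probs):
    print(f"{label}: {p:.3f}")
# The most likely label; a flat distribution signals an ambiguous example.
print("Predicted label:", labels[distribution.argmax(dim=-1).item()])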