---
language: ja
license: cc-by-4.0
library_name: sentence-transformers
tags:
  - xlm-roberta
  - nli
datasets:
  - jnli
  - jsick
---

# Japanese Natural Language Inference Model

This model was trained with the SentenceTransformers `CrossEncoder` class, a gradient-accumulation PR, and the code from CyberAgentAILab/japanese-nli-model.

## Training Data

The model was trained on the JGLUE-JNLI and JSICK datasets. For a given sentence pair, it outputs three scores corresponding to the labels contradiction, entailment, and neutral.

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('cyberagent/xlm-roberta-large-jnli-jsick')
model = AutoModelForSequenceClassification.from_pretrained('cyberagent/xlm-roberta-large-jnli-jsick')

# Two premise/hypothesis pairs:
# "A child is watching a running cat" / "A cat is running"
# "A cat is running" / "A child is running"
features = tokenizer(["ε­δΎ›γŒθ΅°γ£γ¦γ„γ‚‹ηŒ«γ‚’θ¦‹γ¦γ„γ‚‹", "ηŒ«γŒθ΅°γ£γ¦γ„γ‚‹"],
                     ["ηŒ«γŒθ΅°γ£γ¦γ„γ‚‹", "ε­δΎ›γŒθ΅°γ£γ¦γ„γ‚‹"],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    label_mapping = ['contradiction', 'entailment', 'neutral']
    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
    print(labels)
```
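If you want calibrated-looking scores rather than just the argmax label, you can apply a softmax over the label dimension of the logits. The sketch below stubs `scores` with hypothetical example values so it runs standalone; in practice you would use the `scores = model(**features).logits` tensor from the snippet above.

```python
import torch

# Hypothetical logits for two sentence pairs (shape: [batch, 3]);
# replace with `scores = model(**features).logits` from the snippet above.
scores = torch.tensor([[-1.2, 3.4, 0.1],
                       [ 2.8, -0.5, 0.3]])

label_mapping = ['contradiction', 'entailment', 'neutral']
probs = torch.softmax(scores, dim=1)  # per-pair probabilities over the 3 labels
for row in probs:
    top = row.argmax().item()
    print(label_mapping[top], f"{row[top].item():.3f}")
```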