# DeBERTa-v3-base Fine-Tuned for Hallucination Detection
## Model Details

- **Model Name:** DeBERTa-v3-base
- **Architecture:** DeBERTa (Decoding-enhanced BERT with disentangled attention)
- **Base Model:** DeBERTa-v3-base
- **Fine-Tuning Dataset:** PAWS (Paraphrase Adversaries from Word Scrambling)
- **Task:** Sentence-pair classification (hallucination detection)

## Model Description

This model is a fine-tuned version of DeBERTa-v3-base for detecting hallucinations between pairs of sentences. In this context, a hallucination is a statement or piece of information that is present in one sentence but is not supported by, or is contradicted by, the other.
## Fine-Tuning Dataset

- **Dataset Name:** PAWS (Paraphrase Adversaries from Word Scrambling)
- **Description:** The PAWS dataset contains sentence pairs with high lexical overlap but different meanings, designed to challenge models' understanding of semantic content.
- **Dataset:** https://huggingface.co/datasets/paws

## Training Procedure

- **Number of Epochs:** 10
- **Hardware:** NVIDIA A100
## Performance

- **Accuracy:** 94.88%
- **F1 Score:** 92.3%
- **Precision:** 92.82%
- **Recall:** 95.81%
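The figures above follow the standard binary-classification definitions. A minimal, dependency-free sketch of those definitions, using toy labels rather than the actual PAWS evaluation set:

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1


# Toy example, not the model's evaluation data
acc, prec, rec, f1 = binary_metrics([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1])
```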
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Varun-Chowdary/hallucination_detect")
model = AutoModelForSequenceClassification.from_pretrained("Varun-Chowdary/hallucination_detect")

# Define the sentence pair
sentence1 = "Maradona was born in Argentina, South America."
sentence2 = "Maradona was born in Brazil, South America."

# Tokenize and prepare input
inputs = tokenizer(sentence1, sentence2, return_tensors="pt", truncation=True, padding=True)

# Perform inference
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits
probabilities = torch.softmax(logits, dim=1)

# Get the predicted label
predicted_label = torch.argmax(probabilities, dim=1).item()
labels = ["No Hallucination", "Hallucination"]
print(f"Predicted label: {labels[predicted_label]}")
```
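The final softmax/argmax step can be illustrated without downloading the model: given a pair of logits, the class probabilities and predicted label follow directly. The logits below are toy values, not real model output.

```python
import math


def classify(logits, labels=("No Hallucination", "Hallucination")):
    """Apply softmax to a pair of logits, then pick the highest-probability label."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return labels[probs.index(max(probs))], probs


# Toy logits: index 0 (No Hallucination) dominates
label, probs = classify([2.0, 0.5])
```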