
Alberta Fact Checking Model

The Alberta Fact Checking Model is a natural language processing model that classifies whether a claim is supported or refuted by a given piece of evidence. It is built on the ALBERT architecture with its matching tokenizer for sequence classification. It was trained primarily on the FEVER, HoVer, and FEVEROUS datasets, supplemented by a small sample of custom-created data.

Labels

The model returns two labels:

  • 0 = Supports
  • 1 = Refutes
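The integer output can be mapped to a readable string with a small lookup. A minimal sketch — the `LABELS` dict and `to_label` helper are illustrative, not part of the model:

```python
# Illustrative mapping from the model's output index to a label string.
LABELS = {0: "Supports", 1: "Refutes"}

def to_label(index: int) -> str:
    """Convert a predicted class index to its human-readable label."""
    return LABELS[index]

print(to_label(0))  # Supports
print(to_label(1))  # Refutes
```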

Input

The input to the model is a claim paired with its evidence, encoded together as a sentence pair.

Usage

The Alberta Fact Checking Model can be used to classify claims based on the evidence provided.

import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

# Load the tokenizer and model
tokenizer = AlbertTokenizer.from_pretrained('Dzeniks/alberta_fact_checking')
model = AlbertForSequenceClassification.from_pretrained('Dzeniks/alberta_fact_checking')

# Define the claim with evidence to classify
claim = "Albert Einstein worked in the field of computer science"
evidence = "Albert Einstein was a German-born theoretical physicist, widely acknowledged to be one of the greatest and most influential physicists of all time."

# Tokenize the claim with evidence
x = tokenizer.encode_plus(claim, evidence, return_tensors="pt")

model.eval()
with torch.no_grad():
  prediction = model(**x)

label = torch.argmax(prediction.logits, dim=1).item()

print(f"Label: {label}")
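Beyond the hard label, the logits can be converted to probabilities if you want a confidence score. A minimal sketch using a dummy logits tensor in place of the model output (shape `[batch, 2]`); with a real model you would use `prediction.logits` instead:

```python
import torch
import torch.nn.functional as F

# Dummy logits standing in for prediction.logits (batch of 1, 2 classes).
logits = torch.tensor([[0.3, 2.1]])

# Softmax turns logits into class probabilities that sum to 1.
probs = F.softmax(logits, dim=1)

label = torch.argmax(probs, dim=1).item()
confidence = probs[0, label].item()

print(f"Label: {label}, confidence: {confidence:.3f}")
```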

Disclaimer

While the alberta_fact_checking model was trained on a relatively large dataset and generally produces accurate results, it can still make mistakes. Users should exercise caution when making decisions based on the output of any machine learning model.

Model size: 11.7M params (Safetensors; I64 and F32 tensors)