Dataset used

Fake and real news dataset

Labels

Fake news: 1
Real news: 0
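
For context, a minimal sketch of how the dataset could be loaded and labelled with this convention. It assumes the Kaggle "Fake and real news dataset" layout with separate Fake.csv and True.csv files; the file names, split ratio, and use of scikit-learn are assumptions, not taken from the actual training code:

import pandas as pd
from sklearn.model_selection import train_test_split

# Assumed file layout of the Kaggle "Fake and real news dataset"
fake = pd.read_csv("Fake.csv")
real = pd.read_csv("True.csv")

# Label convention from this card: fake news -> 1, real news -> 0
fake["label"] = 1
real["label"] = 0

data = pd.concat([fake, real], ignore_index=True)
train_df, test_df = train_test_split(
    data, test_size=0.2, stratify=data["label"], random_state=42
)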

Usage

from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import torch

config = AutoConfig.from_pretrained("bhavitvyamalik/fake-news_xtremedistil-l6-h256-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bhavitvyamalik/fake-news_xtremedistil-l6-h256-uncased", config=config)
tokenizer = AutoTokenizer.from_pretrained("microsoft/xtremedistil-l6-h256-uncased", use_fast=True)

text = "According to reports by Fox News, Biden is the President of the USA"
encode = tokenizer(text, max_length=512, truncation=True, padding="max_length", return_tensors="pt")

with torch.no_grad():
    output = model(**encode)
print(torch.argmax(output["logits"]))  # predicted class id: 1 = fake, 0 = real
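
To turn the raw logits into a readable prediction, the predicted index can be mapped back to the label convention above (1 = fake, 0 = real). The id2label dictionary below is built by hand from that convention rather than read from the model config, and the softmax probability is an illustrative addition:

import torch.nn.functional as F

id2label = {0: "real", 1: "fake"}  # label convention from this card
probs = F.softmax(output["logits"], dim=-1)
pred = torch.argmax(probs, dim=-1).item()
print(f"{id2label[pred]} (probability: {probs[0, pred].item():.3f})")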

Performance on test data

test/accuracy: 0.9977836608886719
test/aucroc: 0.9999998807907104
test/f1: 0.9976308941841125
test/loss: 0.00828308891505003
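
The exact evaluation script is not part of this card; the sketch below shows how comparable metrics could be computed on a labelled test split using scikit-learn and the model/tokenizer loaded above. The test_texts and test_labels lists are placeholders, not real data:

from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
import torch

# Placeholder test split -- substitute the actual held-out articles and labels
test_texts = ["first test article ...", "second test article ..."]
test_labels = [1, 0]  # 1 = fake, 0 = real

enc = tokenizer(test_texts, max_length=512, truncation=True, padding="max_length", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc)["logits"]

preds = torch.argmax(logits, dim=-1).numpy()
probs_fake = torch.softmax(logits, dim=-1)[:, 1].numpy()  # probability of the "fake" class

print("accuracy:", accuracy_score(test_labels, preds))
print("f1:", f1_score(test_labels, preds))
print("aucroc:", roc_auc_score(test_labels, probs_fake))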

Training run can be tracked here

Wandb project for Fake news classifier
