FakeBerta: A Fine-Tuned DistilRoBERTa Model for Fake News Detection
You can check the model's fine-tuning code on my GitHub.
Model Overview
FakeBerta is a fine-tuned version of DistilRoBERTa for detecting fake news. The model is trained to classify news articles as real (0) or fake (1) using natural language processing (NLP) techniques.
Base model: DistilRoBERTa (distilbert/distilroberta-base)
Task: Fake news classification
Example code using AutoModelForSequenceClassification:
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load the tokenizer and the fine-tuned model from the Hugging Face Hub
model_name = "YerayEsp/FakeBerta"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Tokenize a sample headline and run it through the model
inputs = tokenizer("Breaking: Scientists discover water on Mars!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits

# Take the highest-scoring class: 0 = Real, 1 = Fake
predicted_class = torch.argmax(logits, dim=-1).item()
print(f"Predicted class: {predicted_class}")  # 0 = Real, 1 = Fake
Library: Transformers (Hugging Face)
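The model can also be loaded through the Transformers pipeline API, which bundles tokenization, inference, and softmax into one call. This is a minimal sketch; the exact label strings in the output depend on the id2label mapping stored in the model config, so they may appear as generic names such as LABEL_0 (real) and LABEL_1 (fake).

from transformers import pipeline

# Create a text-classification pipeline backed by FakeBerta
classifier = pipeline("text-classification", model="YerayEsp/FakeBerta")

result = classifier("Breaking: Scientists discover water on Mars!")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': ...}] where index 1 = Fake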