# roberta-large-fallacy-classification
This model is a fine-tuned version of roberta-large trained on the Logical Fallacy Classification Dataset. It classifies 13 types of logical fallacies in text.
## Model Details
- Base Model: roberta-large
- Dataset: Logical Fallacy Classification Dataset
- Number of Classes: 13
- Training Parameters:
  - Learning Rate: 2e-6
  - Batch Size: 8 (gradient accumulation for an effective batch size of 16)
  - Weight Decay: 0.01
  - Training Epochs: 15
  - Mixed Precision (FP16): Enabled
- Features:
  - Class weights to handle dataset imbalance (see the training sketch below)
  - Tokenization with truncation and padding (maximum length: 128)
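The settings above could be reproduced roughly as follows. This is a minimal sketch under stated assumptions, not the original training script: the dataset loading, label list, and the weighted-loss `Trainer` subclass are illustrative.

```python
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

# Hypothetical setup; the actual data preparation is not part of this card.
model_name = "roberta-large"
num_labels = 13

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)

training_args = TrainingArguments(
    output_dir="roberta-large-fallacy-classification",
    learning_rate=2e-6,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,   # effective batch size of 16
    weight_decay=0.01,
    num_train_epochs=15,
    fp16=True,                       # mixed precision (requires a GPU)
)

# Class weights for label imbalance can be applied through a weighted loss,
# e.g. by overriding compute_loss in a Trainer subclass.
class WeightedTrainer(Trainer):
    def __init__(self, class_weights, **kwargs):
        super().__init__(**kwargs)
        self.class_weights = class_weights

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss_fct = torch.nn.CrossEntropyLoss(weight=self.class_weights.to(outputs.logits.device))
        loss = loss_fct(outputs.logits, labels)
        return (loss, outputs) if return_outputs else loss

# The trainer would then be instantiated with the tokenized train/eval datasets
# and a per-class weight tensor before calling .train().
```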
## Supported Fallacies
The model can classify the following types of logical fallacies (the exact label strings are shown in the snippet after the list):
- Equivocation
- Faulty Generalization
- Fallacy of Logic
- Ad Populum
- Circular Reasoning
- False Dilemma
- False Causality
- Fallacy of Extension
- Fallacy of Credibility
- Fallacy of Relevance
- Intentional
- Appeal to Emotion
- Ad Hominem
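The label strings the model actually emits (lowercase, as in the expected outputs below) can be read from its configuration. A quick check, assuming access to the Hugging Face Hub:

```python
from transformers import AutoConfig

# Print the 13 label names exactly as the model reports them
config = AutoConfig.from_pretrained("MidhunKanadan/roberta-large-fallacy-classification")
for idx, label in config.id2label.items():
    print(idx, label)
```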
## Text Classification Pipeline
To use the model for quick classification with a text pipeline:
```python
from transformers import pipeline

# device=0 selects the first GPU; set device=-1 to run on CPU
pipe = pipeline("text-classification", model="MidhunKanadan/roberta-large-fallacy-classification", device=0)

text = "The rooster crows always before the sun rises, therefore the crowing rooster causes the sun to rise."
result = pipe(text)[0]
print(f"Predicted Label: {result['label']}, Score: {result['score']:.4f}")
```
Expected Output:

```
Predicted Label: false causality, Score: 0.9632
```
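The pipeline also accepts a list of texts, which is convenient for scoring several arguments at once. A brief usage sketch; the example sentences are illustrative, not from the dataset:

```python
texts = [
    "Everyone believes it, so it must be true.",
    "You can't trust his argument about climate change, he isn't even a scientist.",
]

# Passing a list returns one top-label prediction per input text
for text, result in zip(texts, pipe(texts)):
    print(f"{text!r} -> {result['label']} ({result['score']:.4f})")
```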
## Advanced Usage: Predict Scores for All Labels
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_path = "MidhunKanadan/roberta-large-fallacy-classification"
text = "The rooster crows always before the sun rises, therefore the crowing rooster causes the sun to rise."

# Load tokenizer and model, falling back to CPU when no GPU is available
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path).to(device)

inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128).to(device)

with torch.no_grad():
    probs = F.softmax(model(**inputs).logits, dim=-1)

results = {model.config.id2label[i]: score.item() for i, score in enumerate(probs[0])}

# Print scores for all labels, highest first
for label, score in sorted(results.items(), key=lambda x: x[1], reverse=True):
    print(f"{label}: {score:.4f}")
```
Expected Output:

```
false causality: 0.9632
fallacy of logic: 0.0139
faulty generalization: 0.0054
intentional: 0.0029
fallacy of credibility: 0.0023
equivocation: 0.0022
fallacy of extension: 0.0020
ad hominem: 0.0019
circular reasoning: 0.0016
false dilemma: 0.0015
fallacy of relevance: 0.0013
ad populum: 0.0009
appeal to emotion: 0.0009
```
## Dataset
- Dataset Name: Logical Fallacy Classification Dataset
- Source: Logical Fallacy Classification Dataset
- Number of Classes: 13 fallacies (e.g., ad hominem, appeal to emotion, faulty generalization)
## Applications
- Education: Teach logical reasoning and critical thinking by identifying common fallacies.
- Argumentation Analysis: Evaluate the validity of arguments in debates, essays, and articles.
- AI Assistants: Enhance conversational AI systems with critical reasoning capabilities.
- Content Moderation: Identify logical flaws in online debates or social media discussions.
## License
The model is licensed under the Apache 2.0 License.