byunal/xlm-roberta-base-turkish-cased-stance


This repository contains a fine-tuned XLM-RoBERTa model for stance detection in Turkish. The base model for this fine-tuning is FacebookAI/xlm-roberta-base, and it has been trained on a uniquely collected Turkish stance detection dataset.

Model Description

  • Model Name: byunal/xlm-roberta-base-turkish-cased-stance
  • Base Model: FacebookAI/xlm-roberta-base
  • Task: Stance Detection
  • Language: Turkish

The model predicts the stance of a given text towards a specific target. Possible stance labels include:

  • Favor: The text supports the target
  • Against: The text opposes the target
  • Neutral: The text does not express a clear stance on the target
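The integer class indices produced by the model map onto these labels through the id2label field of the model config. As a quick check (a minimal sketch; if the author did not customize the mapping, generic names such as LABEL_0 are returned):

from transformers import AutoConfig

# Print the id-to-label mapping stored with the model config.
# Falls back to generic names (LABEL_0, LABEL_1, ...) if no custom mapping was saved.
config = AutoConfig.from_pretrained("byunal/xlm-roberta-base-turkish-cased-stance")
print(config.id2label)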

Installation

To install the required libraries, run:

pip install transformers torch

Usage

Here’s a simple example of how to use the model for stance detection in Turkish:

from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load the model and tokenizer
model_name = "byunal/xlm-roberta-base-turkish-cased-stance"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Example text
text = "Bu konu hakkında kesinlikle karşıyım."

# Tokenize input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Perform prediction
with torch.no_grad():
    outputs = model(**inputs)

# Get predicted stance
predictions = torch.argmax(outputs.logits, dim=-1)
stance_label = predictions.item()

# Display result
labels = ["Favor", "Against", "Neutral"]
print(f"The stance is: {labels[stance_label]}")

Training

This model was fine-tuned on a specialized Turkish stance detection dataset covering a range of text contexts and opinions. The dataset includes examples from social media, news articles, and public comments, which supports robust stance detection in real-world applications. The main hyperparameters were as follows; a sketch of a comparable training setup appears after the list.

  • Epochs: 10
  • Batch Size: 32
  • Learning Rate: 5e-5
  • Optimizer: AdamW
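
The exact training script is not part of this repository. As a hedged illustration, a comparable fine-tuning run could be configured with the Hugging Face Trainer roughly as follows (dataset loading and preprocessing are placeholders, since the stance dataset is not public):

from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Sketch only: the real dataset and preprocessing pipeline are not published.
base_model = "FacebookAI/xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

training_args = TrainingArguments(
    output_dir="xlmr-turkish-stance",   # hypothetical output directory
    num_train_epochs=10,                # Epochs: 10
    per_device_train_batch_size=32,     # Batch Size: 32
    learning_rate=5e-5,                 # Learning Rate: 5e-5
    optim="adamw_torch",                # Optimizer: AdamW
)

# train_dataset / eval_dataset would hold the tokenized stance examples.
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()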

Evaluation

The model was evaluated using Accuracy and Macro F1-score on a validation dataset. The results confirm the model's effectiveness in stance detection tasks in Turkish.

  • Accuracy Score: 80.0%
  • Macro F1 Score: 80.0%
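
For reference, metrics of this kind are typically computed with scikit-learn; the arrays below are placeholders standing in for validation labels and model predictions, not the actual evaluation data:

from sklearn.metrics import accuracy_score, f1_score

# Placeholder gold labels and predictions (indices follow the label list above).
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 2, 2, 0]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Macro F1:", f1_score(y_true, y_pred, average="macro"))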