
The model is based on XLM-RoBERTa large ("xlm-roberta-large"), fine-tuned for humor recognition in the Greek language.

Model Details

The model was fine-tuned for 10 epochs on the Greek Humorous Dataset.

Pre-processing details

Input text must be pre-processed before inference: remove all Greek diacritics, strip punctuation, and convert all letters to lowercase.
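The card does not ship a pre-processing function, so the following is a minimal sketch of the steps above using only the Python standard library (the function name `preprocess` is an assumption, not part of the released model):

```python
import unicodedata

def preprocess(text: str) -> str:
    # Decompose characters (NFD) so accents become separate combining marks,
    # then drop those marks (Unicode category "Mn") to remove Greek diacritics.
    decomposed = unicodedata.normalize("NFD", text)
    no_diacritics = "".join(
        ch for ch in decomposed if unicodedata.category(ch) != "Mn"
    )
    # Drop all punctuation (Unicode categories starting with "P") and lowercase.
    no_punct = "".join(
        ch for ch in no_diacritics if not unicodedata.category(ch).startswith("P")
    )
    return no_punct.lower()

print(preprocess("Γεια σου, Κόσμε!"))  # -> "γεια σου κοσμε"
```

Using Unicode categories rather than a hard-coded accent list keeps the function robust to both Greek and Latin punctuation in mixed-language input.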

Load Pretrained Model

from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kallantis/Humor-Recognition-Greek-XLM-R-large")
model = XLMRobertaForSequenceClassification.from_pretrained("kallantis/Humor-Recognition-Greek-XLM-R-large", num_labels=2, ignore_mismatched_sizes=True)
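Once the tokenizer and model are loaded, inference is a standard sequence-classification pass. The sketch below shows one way to wire it up; the helper names and the label mapping are assumptions (the card does not document which of the two output indices means "humorous"):

```python
import torch

# Hypothetical label mapping -- verify against the model's config before use.
LABELS = {0: "non-humorous", 1: "humorous"}

def logits_to_label(logits: torch.Tensor) -> str:
    """Map a (1, 2) logits tensor to a label string via argmax."""
    return LABELS[int(logits.argmax(dim=-1).item())]

def classify(text: str, tokenizer, model) -> str:
    """Tokenize pre-processed text and run it through the classifier."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits_to_label(logits)
```

Remember to apply the pre-processing described above (diacritics, punctuation, lowercasing) to `text` before calling `classify`, since the model was trained on normalized input.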
Model size: 560M parameters (Safetensors, F32)