
This model is based on TwHIN-BERT-large ("Twitter/twhin-bert-large"), fine-tuned for humor recognition in the Greek language.

TwHIN-BERT is a large pre-trained language model for multilingual Tweets, trained on 7 billion Tweets in over 100 distinct languages.

Model Details

The model was fine-tuned for 10 epochs on the Greek Humorous Dataset.

Pre-processing details

The input text must be pre-processed by removing all Greek diacritics and punctuation and converting all letters to lowercase.
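The preprocessing above can be sketched as follows; this is a minimal illustration using only the standard library, and the exact pipeline used during fine-tuning may differ:

```python
import string
import unicodedata

def preprocess(text: str) -> str:
    """Strip Greek diacritics and punctuation, then lowercase."""
    # NFD decomposition separates base letters from combining accent marks.
    decomposed = unicodedata.normalize("NFD", text)
    # Drop the combining marks (the diacritics), keeping the base letters.
    no_diacritics = "".join(c for c in decomposed if not unicodedata.combining(c))
    # Remove ASCII punctuation and lowercase everything.
    no_punct = no_diacritics.translate(str.maketrans("", "", string.punctuation))
    return no_punct.lower()

# preprocess("Γειά σου, Κόσμε!") → "γεια σου κοσμε"
```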

Load Pretrained Model

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kallantis/Humor-Recognition-Greek-twhin-bert-large")
# Binary classifier: humorous vs. non-humorous
model = AutoModelForSequenceClassification.from_pretrained(
    "kallantis/Humor-Recognition-Greek-twhin-bert-large",
    num_labels=2,
    ignore_mismatched_sizes=True,
)
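Once loaded, the model produces two logits per input. A minimal sketch of mapping those logits to a label follows; the label order (0 = non-humorous, 1 = humorous) is an assumption and should be verified against the model's `id2label` config:

```python
# Assumed label order; check model.config.id2label before relying on it.
LABELS = {0: "non-humorous", 1: "humorous"}

def predict_label(logits):
    """Return the label whose logit is largest."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return LABELS[best]

# With the loaded model and tokenizer (downloads the checkpoint):
#   inputs = tokenizer(preprocessed_text, return_tensors="pt")
#   logits = model(**inputs).logits[0].tolist()
#   print(predict_label(logits))
```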
Model size
561M params (F32, Safetensors)