Model description

Cased fine-tuned BERT model for Hungarian, trained on 2738 manually annotated online news sentences.

Intended uses & limitations

The model can be used like any other cased BERT model. It has been evaluated on sentence-level guilt detection with the following labels:

0_label: none

1_label: guilt
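
A minimal inference sketch using the Hugging Face transformers pipeline is shown below. The model identifier is a placeholder (replace it with this repository's actual ID), and the exact label strings returned depend on the repository's configuration; this is an assumption-based example, not part of the original card.

```python
from transformers import pipeline

# Placeholder model ID; substitute the real repository identifier.
MODEL_ID = "your-username/hungarian-guilt-bert"

# Sentence-level binary classification: 0_label = none, 1_label = guilt
classifier = pipeline("text-classification", model=MODEL_ID)

print(classifier("Ez egy példa mondat."))
# Expected shape of the output (label string may differ):
# [{'label': '1_label', 'score': 0.98}]
```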

Results

              precision    recall  f1-score   support

           0       0.98      0.95      0.97       133
           1       0.96      0.98      0.97       141

    accuracy                           0.97       274
   macro avg       0.97      0.97      0.97       274
weighted avg       0.97      0.97      0.97       274