
Text multi-label sequence classification model used to detect whether passages contain a misfortune event, a cause of misfortune, and/or an action to mollify or prevent some misfortune. 8,293 passages were used for training and split into 5 folds (~6,634 passages in each training split and ~1,659 in each validation split).
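The exact splitting procedure is not published; the snippet below is only a minimal sketch of a plain 5-fold split with scikit-learn's `KFold` that reproduces the approximate fold sizes above. The passage list is a placeholder, not the actual data.

```python
from sklearn.model_selection import KFold

# Hypothetical stand-in for the 8,293 annotated passages.
passages = [f"passage {i}" for i in range(8293)]

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kfold.split(passages)):
    # Each fold yields ~6,634 training passages and ~1,659 validation passages.
    print(f"fold {fold}: train={len(train_idx)} val={len(val_idx)}")
```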


Parameters:
  • Transformer: distilbert-base-uncased
  • Tokenizer: distilbert-base-uncased
  • Learning rate: 2e-05
  • Weight decay: 0.01
  • Dropout: 0.1
  • Batch size: 8
  • Epochs: 15
  • Metric for best model: F1 micro
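The training script itself is not included in this card; the following is a minimal sketch of a Hugging Face Trainer setup consistent with the parameters above. The label count, output directory, and F1-micro metric function are assumptions (DistilBERT's default dropout of 0.1 matches the value listed).

```python
import numpy as np
from sklearn.metrics import f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

NUM_LABELS = 15  # hypothetical: one output per fine-grained label listed below

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred.predictions, eval_pred.label_ids
    preds = (1 / (1 + np.exp(-logits)) > 0.5).astype(int)  # sigmoid + 0.5 threshold
    return {"f1_micro": f1_score(labels, preds, average="micro", zero_division=0)}

training_args = TrainingArguments(
    output_dir="misfortune-classifier",   # placeholder name
    learning_rate=2e-5,
    weight_decay=0.01,
    per_device_train_batch_size=8,
    num_train_epochs=15,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1_micro",
)

# trainer = Trainer(model=model, args=training_args, train_dataset=...,
#                   eval_dataset=..., tokenizer=tokenizer,
#                   compute_metrics=compute_metrics)
# trainer.train()
```

Selecting the best checkpoint by F1 micro with `load_best_model_at_end` is consistent with the epoch 13 checkpoint being reported below rather than the final epoch.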

Using the epoch 13 checkpoint, the F1 micro score on 2,074 held-out passages not used for training is 0.637. Individual class F1 scores are shown below, followed by a sketch of how they could be computed. Note that some labels (marked with a dash) have been excluded because they are not relevant to the final use of the model.

  • EVENT: -
    • Illness: 0.866
    • Accident: 0.41
    • Other: 0.583
  • CAUSE: -
    • Just Happens: -
    • Material Physical: 0.431
    • Spirits and Gods: 0.667
    • Witchcraft and Sorcery: 0.615
    • Rule Violation Taboo: 0.555
    • Jealous Evil Eye: -
  • ACTION: -
    • Physical Material: 0.635
    • Technical Specialist: 0.357
    • Divination: 0.303
    • Shaman Medium Healer: 0.549
    • Priest High Religion: 0.34
    • Other: -
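The evaluation code is not published; the sketch below shows one way the micro and per-class F1 scores could be obtained with scikit-learn, assuming binary indicator arrays of gold labels and thresholded predictions. The arrays here are random placeholders, not the real evaluation data.

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(2074, 15))  # placeholder gold labels
y_pred = rng.integers(0, 2, size=(2074, 15))  # placeholder predictions

micro_f1 = f1_score(y_true, y_pred, average="micro", zero_division=0)
per_class_f1 = f1_score(y_true, y_pred, average=None, zero_division=0)
print(f"F1 micro: {micro_f1:.3f}")
print("per-class F1:", np.round(per_class_f1, 3))
```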




The quick demo is currently no longer available through Hugging Face's hosted Inference API.
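For local use instead, the sketch below shows a typical multi-label inference loop. The repository id, the example passage, and the 0.5 sigmoid threshold are assumptions, not values stated in this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "user/misfortune-classifier"  # hypothetical model id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "After the harvest failed, the family consulted a diviner."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # independent probability per label

# Report every label whose probability clears the (assumed) 0.5 threshold.
for label_id, p in enumerate(probs.tolist()):
    if p > 0.5:
        print(model.config.id2label[label_id], round(p, 3))
```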
