
An mDeBERTa-v3 model fine-tuned on English-language news articles by the Executive Approval Project team. The model is trained to detect whether a sequence contains conflict between political actors or criticism directed toward a political actor or their policies.
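A minimal inference sketch using the transformers `pipeline` API; the repository id below is a placeholder and the label names depend on the model's config, neither is specified in this card:

```python
from transformers import pipeline

# Hypothetical repository id; substitute the actual model repo.
classifier = pipeline(
    "text-classification",
    model="executive-approval-project/mdeberta-v3-conflict",  # placeholder
)

text = "The opposition leader sharply criticized the president's handling of the budget."
print(classifier(text))
# Expected output shape: [{"label": "...", "score": 0.99}] (label names come from the model config)
```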

The model was tuned for 8 epochs and achieved a test-set accuracy of .897 and a balanced accuracy of .827 (accounting for the imbalance in the test set, where roughly 77% of sequences did not contain conflict).
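For reference, balanced accuracy is the mean of per-class recall, which corrects for the class imbalance noted above. A minimal sketch with scikit-learn (labels and predictions are illustrative):

```python
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Illustrative labels: 1 = conflict/criticism present, 0 = absent.
y_true = [0, 0, 0, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 1, 1, 0, 0, 0]

print(accuracy_score(y_true, y_pred))           # raw accuracy
print(balanced_accuracy_score(y_true, y_pred))  # mean recall across both classes
```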

The training/tuning set consisted of 6,500 sequences (paragraphs), split 80/20 into train and test sets. Each text was coded three times, with the final label assigned by majority rule (see the sketch below).
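A minimal sketch of majority-rule aggregation over three codes per text, followed by an 80/20 split; the text ids and code values are hypothetical:

```python
from collections import Counter
from sklearn.model_selection import train_test_split

# Three independent codes per text (hypothetical values); majority rule keeps
# the label chosen by at least two of the three coders.
codes = {"text_001": [1, 1, 0], "text_002": [0, 0, 0], "text_003": [1, 0, 1]}
labels = {tid: Counter(c).most_common(1)[0][0] for tid, c in codes.items()}

# 80/20 train/test split over the labeled sequences.
texts, y = list(labels.keys()), list(labels.values())
train_texts, test_texts, y_train, y_test = train_test_split(
    texts, y, test_size=0.2, random_state=42
)
```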

Cleanlab was used to assess label health and correct mislabeled texts, which made up fewer than 10% of all labels. Texts flagged as mislabeled were manually verified by a human coder.
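A minimal sketch of this kind of Cleanlab check, assuming out-of-sample predicted probabilities are already available (e.g. from cross-validation); the data values are illustrative:

```python
import numpy as np
from cleanlab.filter import find_label_issues

# labels: integer class per sequence; pred_probs: out-of-sample predicted
# probabilities with shape (n_sequences, n_classes).
labels = np.array([0, 1, 0, 0, 1])                     # illustrative
pred_probs = np.array([[0.9, 0.1],
                       [0.2, 0.8],
                       [0.4, 0.6],                     # likely mislabeled
                       [0.8, 0.2],
                       [0.3, 0.7]])

issue_idx = find_label_issues(
    labels=labels,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(issue_idx)  # indices of texts to send back for manual verification
```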

The following hyperparameters were used during tuning:
- num_train_epochs=8
- learning_rate=2e-5
- per_device_train_batch_size=8
- per_device_eval_batch_size=64
- warmup_ratio=0.06
- weight_decay=0.1
- load_best_model_at_end=True
- metric_for_best_model="f1_macro"
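Expressed as transformers `TrainingArguments`, the configuration would look roughly like this; the output path, evaluation/save strategy, and the compute_metrics function that reports "f1_macro" are assumptions not specified in this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",              # placeholder path
    num_train_epochs=8,
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    warmup_ratio=0.06,
    weight_decay=0.1,
    load_best_model_at_end=True,
    metric_for_best_model="f1_macro",    # requires a compute_metrics fn returning an "f1_macro" key
    evaluation_strategy="epoch",         # assumption: load_best_model_at_end needs matching eval/save strategies
    save_strategy="epoch",
)
```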
