# herbert-large-cased_deprel
This model is a fine-tuned version of [allegro/herbert-large-cased](https://huggingface.co/allegro/herbert-large-cased) on the universal_dependencies dataset, tagging each token with its UD dependency relation (deprel). It achieves the following results on the evaluation set:

- Loss: 0.3266
- Overall Precision: 0.9013
- Overall Recall: 0.8597
- Overall F1: 0.88
- Overall Accuracy: 0.8941

Per-relation scores are listed below. The automatic card generation stripped the first character of every label name (the evaluation tooling appears to treat it as an IOB prefix), so the UD relation names here are reconstructed. Relations whose stripped forms coincide cannot be told apart and are shown with a slash (e.g. amod/nmod), and one label could not be recovered at all.

| Relation (reconstructed) | Precision | Recall | F1 | Support |
|:--|--:|--:|--:|--:|
| (label lost) | 0.9565 | 0.9296 | 0.9429 | 71 |
| parataxis:insert | 0.7381 | 0.4627 | 0.5688 | 67 |
| parataxis:obj | 0.8182 | 0.7759 | 0.7965 | 58 |
| mark | 0.9249 | 0.8889 | 0.9065 | 180 |
| case | 0.9654 | 0.9430 | 0.9541 | 1421 |
| obj | 0.9073 | 0.8846 | 0.8958 | 520 |
| obl | 0.8147 | 0.8554 | 0.8345 | 740 |
| obl:agent | 0.8750 | 0.8750 | 0.8750 | 16 |
| obl:arg | 0.8407 | 0.7138 | 0.7721 | 318 |
| obl:cmpr | 0.7500 | 0.7059 | 0.7273 | 17 |
| cc | 0.9198 | 0.8638 | 0.8909 | 345 |
| cc:preconj | 0.8000 | 0.6667 | 0.7273 | 6 |
| acl | 0.8733 | 0.8397 | 0.8562 | 156 |
| acl:relcl | 0.9057 | 0.6316 | 0.7442 | 76 |
| ccomp/xcomp | 0.8118 | 0.7475 | 0.7784 | 202 |
| ccomp/xcomp:cleft | 0.0000 | 0.0000 | 0.0000 | 4 |
| ccomp/xcomp:obj | 0.5500 | 0.4583 | 0.5000 | 24 |
| ccomp/xcomp:pred | 0.7273 | 0.8000 | 0.7619 | 10 |
| ccomp/xcomp:subj | 0.0000 | 0.0000 | 0.0000 | 1 |
| advcl | 0.8271 | 0.8661 | 0.8462 | 127 |
| advcl:cmpr | 0.6667 | 0.5000 | 0.5714 | 4 |
| advmod | 0.8817 | 0.8632 | 0.8723 | 380 |
| advmod:arg | 0.4000 | 0.5000 | 0.4444 | 4 |
| advmod:emph | 0.8571 | 0.8516 | 0.8544 | 155 |
| advmod:neg | 0.9412 | 0.8889 | 0.9143 | 126 |
| det | 0.9320 | 0.8649 | 0.8972 | 111 |
| det:numgov | 0.9474 | 0.9000 | 0.9231 | 20 |
| det:nummod | 1.0000 | 1.0000 | 1.0000 | 1 |
| det:poss | 0.9483 | 0.9483 | 0.9483 | 58 |
| discourse:intj | 0.0000 | 0.0000 | 0.0000 | 2 |
| list | 1.0000 | 0.6667 | 0.8000 | 9 |
| fixed | 0.8750 | 0.5698 | 0.6901 | 86 |
| flat | 0.8197 | 0.6944 | 0.7519 | 72 |
| amod/nmod | 0.8295 | 0.7862 | 0.8073 | 1188 |
| amod/nmod:arg | 0.6832 | 0.5366 | 0.6011 | 205 |
| amod/nmod:flat | 0.6327 | 0.5254 | 0.5741 | 59 |
| amod/nmod:poss | 1.0000 | 0.2500 | 0.4000 | 4 |
| amod/nmod:pred | 0.0000 | 0.0000 | 0.0000 | 1 |
| iobj | 0.8617 | 0.7330 | 0.7922 | 221 |
| vocative | 0.9091 | 1.0000 | 0.9524 | 10 |
| conj | 0.8303 | 0.7475 | 0.7867 | 491 |
| root | 0.9741 | 0.9770 | 0.9755 | 1000 |
| cop | 0.8471 | 0.8780 | 0.8623 | 82 |
| appos | 0.6613 | 0.6949 | 0.6777 | 59 |
| orphan | 0.0000 | 0.0000 | 0.0000 | 3 |
| nsubj/csubj | 0.9516 | 0.9186 | 0.9348 | 835 |
| nsubj/csubj:pass | 0.8276 | 0.8276 | 0.8276 | 29 |
| nummod | 0.9394 | 0.9688 | 0.9538 | 64 |
| nummod:gov | 0.9556 | 0.8600 | 0.9053 | 50 |
| punct | 0.9564 | 0.9147 | 0.9351 | 2016 |
| aux | 0.9286 | 0.7222 | 0.8125 | 36 |
| aux:clitic | 0.9661 | 0.9500 | 0.9580 | 60 |
| aux:cnd | 0.9091 | 0.9091 | 0.9091 | 22 |
| aux:imp | 1.0000 | 0.7500 | 0.8571 | 4 |
| aux:pass | 0.8462 | 0.8462 | 0.8462 | 39 |
| expl:pv | 0.9407 | 0.9289 | 0.9347 | 239 |
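As a sanity check, the overall F1 is simply the harmonic mean of the overall precision and recall reported above:

```python
# Harmonic mean of the reported overall precision and recall.
precision, recall = 0.9013, 0.8597
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.88
```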
## Model description
More information needed
## Intended uses & limitations
More information needed
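In the absence of detailed documentation, the snippet below is a minimal inference sketch: the repo id `izaitova/herbert-large-cased_deprel` is taken from this page, and label names are read from the model's own `id2label` mapping.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Repo id taken from this model page; adjust if the model is hosted elsewhere.
model_id = "izaitova/herbert-large-cased_deprel"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Zamek w Malborku jest największym zamkiem gotyckim w Europie."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# One predicted dependency-relation label per (sub)token; the output
# includes special tokens such as [CLS]/[SEP] and subword pieces.
pred_ids = logits.argmax(dim=-1)[0]
for token, pred in zip(inputs.tokens(), pred_ids):
    print(token, model.config.id2label[int(pred)])
```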
## Training and evaluation data
More information needed
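The card only names `universal_dependencies`, not the treebank configuration. Given the Polish base model, a plausible loading sketch (the `pl_pdb` config is an assumption; `pl_lfg` is the other Polish option) is:

```python
from datasets import load_dataset

# Config name is an assumption: the card only says "universal_dependencies".
ds = load_dataset("universal_dependencies", "pl_pdb", trust_remote_code=True)

example = ds["train"][0]
print(example["tokens"])   # word forms
print(example["deprel"])   # one UD dependency relation per token
```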
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
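These settings map onto `transformers.TrainingArguments` roughly as follows; `output_dir` is illustrative, and any argument not listed above is left at its default:

```python
from transformers import TrainingArguments

# Values taken from the list above; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="herbert-large-cased_deprel",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,          # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```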
### Training results

### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1