# ruBert-large_deprel
This model is a fine-tuned version of [ai-forever/ruBert-large](https://huggingface.co/ai-forever/ruBert-large) on the universal_dependencies dataset. It achieves the following results on the evaluation set:
- Loss: 0.7246
Per-label results are below. The label names are reconstructed: the seqeval metric treats the first character of every tag as an IOB prefix, so the plain UD relation names came out truncated (e.g. "case" was reported as "Ase"); a sketch reproducing this truncation appears after the results. Rows such as *comp*, *mod*, and *subj* therefore pool every relation sharing that suffix (e.g. ccomp/xcomp, amod/nmod, nsubj/csubj), and one label (support 37) could not be recovered.

| Label | Precision | Recall | F1 | Support |
|:--|--:|--:|--:|--:|
| (unrecoverable) | 0.6857 | 0.6486 | 0.6667 | 37 |
| parataxis | 0.7638 | 0.6816 | 0.7204 | 446 |
| mark | 0.9169 | 0.8843 | 0.9003 | 337 |
| case | 0.9278 | 0.9331 | 0.9304 | 1957 |
| obj | 0.9048 | 0.9114 | 0.9081 | 542 |
| obl | 0.8643 | 0.8604 | 0.8623 | 1733 |
| cc | 0.9087 | 0.8979 | 0.9033 | 754 |
| acl | 0.8141 | 0.8172 | 0.8156 | 268 |
| acl:relcl | 0.8129 | 0.8898 | 0.8496 | 127 |
| comp (ccomp/xcomp) | 0.9118 | 0.9004 | 0.9061 | 241 |
| advcl | 0.8235 | 0.8324 | 0.8280 | 185 |
| advmod | 0.8640 | 0.8649 | 0.8644 | 940 |
| det | 0.9316 | 0.9275 | 0.9295 | 455 |
| discourse | 1.0000 | 0.7333 | 0.8462 | 15 |
| fixed | 0.8721 | 0.8571 | 0.8646 | 175 |
| flat | 1.0000 | 0.7778 | 0.8750 | 9 |
| flat:foreign | 0.6364 | 0.6422 | 0.6393 | 109 |
| flat:name | 0.6061 | 0.5714 | 0.5882 | 140 |
| mod (amod/nmod) | 0.8625 | 0.8554 | 0.8589 | 2918 |
| iobj | 0.9107 | 0.8571 | 0.8831 | 119 |
| compound | 0.6667 | 0.4211 | 0.5161 | 38 |
| conj | 0.8317 | 0.8361 | 0.8339 | 1141 |
| root | 0.8994 | 0.8949 | 0.8971 | 999 |
| cop | 0.9118 | 0.8304 | 0.8692 | 112 |
| appos | 0.5403 | 0.6601 | 0.5942 | 203 |
| orphan | 0.5000 | 0.3103 | 0.3830 | 29 |
| subj (nsubj/csubj) | 0.9033 | 0.9078 | 0.9056 | 1204 |
| subj:pass | 0.8978 | 0.8392 | 0.8675 | 199 |
| nummod | 0.7382 | 0.8413 | 0.7864 | 315 |
| nummod:gov | 0.7625 | 0.8026 | 0.7821 | 76 |
| punct | 0.9232 | 0.9114 | 0.9172 | 3533 |
| aux | 0.9231 | 0.6000 | 0.7273 | 20 |
| aux:pass | 0.9394 | 0.9254 | 0.9323 | 67 |
- Overall Precision: 0.8762
- Overall Recall: 0.8717
- Overall F1: 0.8739
- Overall Accuracy: 0.8881
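The truncation noted above comes from seqeval's IOB handling. A minimal sketch, assuming the `evaluate` wrapper around seqeval, that reproduces both the per-label dictionary format and the clipped names:

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Plain UD relations carry no B-/I- prefix, but seqeval still treats the
# first character of each tag as the IOB marker and the rest as the type.
predictions = [["case", "punct", "root"]]
references = [["case", "punct", "root"]]

results = seqeval.compute(predictions=predictions, references=references)
# Per-label keys come out clipped: 'ase', 'unct', 'oot' instead of
# 'case', 'punct', 'root'; the model-card generator then capitalizes them.
print(results)
```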
## Model description
More information needed
## Intended uses & limitations
More information needed
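Even so, the checkpoint can be tried as an ordinary token-classification model. A minimal sketch, assuming the model is available on the Hugging Face Hub under `izaitova/ruBert-large_deprel` with a token-classification head whose labels are UD dependency relations:

```python
from transformers import pipeline

# Each input token is assigned a UD dependency-relation label (deprel)
# rather than a named-entity tag.
tagger = pipeline("token-classification", model="izaitova/ruBert-large_deprel")

for item in tagger("Мама мыла раму."):  # "Mom washed the frame."
    print(item["word"], item["entity"], round(item["score"], 3))
```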
## Training and evaluation data
More information needed
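The card only names the universal_dependencies dataset. Below is a sketch of loading a Russian configuration with `datasets`; the `ru_syntagrus` config name is an assumption, since the card does not say which treebank was used:

```python
from datasets import load_dataset

# "ru_syntagrus" is a guess at the treebank; the dataset script also
# offers ru_gsd, ru_taiga, and ru_pud configurations.
ud = load_dataset(
    "universal_dependencies", "ru_syntagrus", trust_remote_code=True
)

example = ud["train"][0]
print(example["tokens"])  # surface tokens
print(example["deprel"])  # one dependency-relation label per token
```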
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
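As a rough guide, these settings map onto `transformers.TrainingArguments` as sketched below; `output_dir` is a placeholder, and the Adam betas/epsilon are the `Trainer` defaults rather than explicit settings:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ruBert-large_deprel",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```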
### Training results
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1