---
license: other
widget:
- text: I'm fine. Who is this?
- text: You can't take anything seriously.
- text: In the end he's going to croak, isn't he?
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: balanced-augmented-distilbert-gest-pred-seqeval-partialmatch
  results: []
pipeline_tag: token-classification
datasets:
- Jsevisal/balanced_augmented_dataset
---

# balanced-augmented-distilbert-gest-pred-seqeval-partialmatch

This model is a fine-tuned version of [elastic/distilbert-base-cased-finetuned-conll03-english](https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english) on the [Jsevisal/balanced_augmented_dataset](https://huggingface.co/datasets/Jsevisal/balanced_augmented_dataset) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8831
- Precision: 0.8268
- Recall: 0.8056
- F1: 0.8096
- Accuracy: 0.7890

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 3.3759        | 1.0   | 32   | 2.8926          | 0.0638    | 0.0673 | 0.0464 | 0.2120   |
| 2.771         | 2.0   | 64   | 2.2857          | 0.4028    | 0.2440 | 0.2402 | 0.3981   |
| 2.1033        | 3.0   | 96   | 1.8025          | 0.5923    | 0.4516 | 0.4646 | 0.5434   |
| 1.5324        | 4.0   | 128  | 1.4866          | 0.6961    | 0.5570 | 0.5709 | 0.6293   |
| 1.1576        | 5.0   | 160  | 1.2689          | 0.7556    | 0.6457 | 0.6624 | 0.6779   |
| 0.8788        | 6.0   | 192  | 1.1505          | 0.7568    | 0.6992 | 0.7069 | 0.7011   |
| 0.6714        | 7.0   | 224  | 1.0633          | 0.7887    | 0.7405 | 0.7470 | 0.7260   |
| 0.5171        | 8.0   | 256  | 1.0177          | 0.8137    | 0.7512 | 0.7666 | 0.7528   |
| 0.4106        | 9.0   | 288  | 0.9926          | 0.8054    | 0.7660 | 0.7724 | 0.7410   |
| 0.3163        | 10.0  | 320  | 0.9222          | 0.8063    | 0.7813 | 0.7851 | 0.7653   |
| 0.2589        | 11.0  | 352  | 0.8984          | 0.8179    | 0.8018 | 0.8027 | 0.7834   |
| 0.2133        | 12.0  | 384  | 0.8848          | 0.8217    | 0.8020 | 0.8053 | 0.7844   |
| 0.1736        | 13.0  | 416  | 0.8831          | 0.8268    | 0.8056 | 0.8096 | 0.7890   |
| 0.148         | 14.0  | 448  | 0.9212          | 0.8386    | 0.8152 | 0.8173 | 0.7932   |
| 0.1301        | 15.0  | 480  | 0.9115          | 0.8242    | 0.8098 | 0.8092 | 0.7896   |
| 0.1104        | 16.0  | 512  | 0.9095          | 0.8230    | 0.8123 | 0.8104 | 0.7844   |
| 0.1016        | 17.0  | 544  | 0.9210          | 0.8334    | 0.8089 | 0.8124 | 0.7839   |
| 0.0911        | 18.0  | 576  | 0.9234          | 0.8408    | 0.8126 | 0.8170 | 0.7890   |
| 0.0883        | 19.0  | 608  | 0.9177          | 0.8349    | 0.8106 | 0.8135 | 0.7885   |
| 0.0848        | 20.0  | 640  | 0.9256          | 0.8357    | 0.8106 | 0.8136 | 0.7890   |

### Framework versions

- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2

### LICENSE

Copyright (c) 2014, Universidad Carlos III de Madrid. All rights reserved.

This software is the property of Universidad Carlos III de Madrid, Robots Sociales (Social Robots) research group. Universidad Carlos III de Madrid is the exclusive holder of the intellectual property rights to this software. Any improper or unauthorized use is prohibited, including, by way of illustration but not limitation, the reproduction, fixation, distribution, public communication, reverse engineering and/or transformation of this software, in whole or in part; anyone making improper or unauthorized use of it is also liable for any legal consequences that may arise from their actions.
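As a usage sketch, the model can be loaded through the standard `transformers` token-classification pipeline. The Hub id below is an assumption inferred from the model name and the `Jsevisal` dataset namespace above; adjust it if the model is published elsewhere.

```python
from transformers import pipeline

# Assumed Hub id, inferred from the model-index name and dataset namespace.
MODEL_ID = "Jsevisal/balanced-augmented-distilbert-gest-pred-seqeval-partialmatch"


def tag_gestures(text: str):
    """Tag each word of `text` with the model's gesture labels.

    Returns a list of dicts with `word`, `entity_group`, and `score` keys,
    as produced by the token-classification pipeline.
    """
    tagger = pipeline(
        "token-classification",
        model=MODEL_ID,
        aggregation_strategy="simple",  # merge sub-word pieces into whole words
    )
    return tagger(text)


# Example (one of the widget sentences above):
# tag_gestures("In the end he's going to croak, isn't he?")
```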