---
language:
  - es
license: apache-2.0
datasets:
  - eriktks/conll2002
metrics:
  - precision
  - recall
  - f1
  - accuracy
pipeline_tag: token-classification
---

# Model Name: NER-finetuned-BETO

This is a BERT model fine-tuned for Named Entity Recognition (NER).

## Model Description

This is a BERT model fine-tuned for the Named Entity Recognition (NER) task on the CoNLL-2002 dataset.

First, the dataset is pre-processed so it can be fed to the model. This is done with the 🤗 Transformers library and its BERT tokenizer. Once this is done, fine-tuning is applied, starting from bert-base-cased and using the 🤗 AutoModelForTokenClassification class.
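
A minimal sketch of this preprocessing step is given below, assuming the `eriktks/conll2002` dataset listed in the metadata (Spanish configuration `es`) and the base checkpoint mentioned above; the helper name `tokenize_and_align_labels` is illustrative and may differ from the actual training script.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Spanish split of CoNLL-2002, as listed in the model card metadata
dataset = load_dataset("eriktks/conll2002", "es")
label_list = dataset["train"].features["ner_tags"].feature.names

# Tokenizer for the base checkpoint mentioned above
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_and_align_labels(examples):
    """Tokenize pre-split words and realign the NER tags to the resulting sub-tokens."""
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, ner_tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            if word_idx is None:
                label_ids.append(-100)                # special tokens: ignored by the loss
            elif word_idx != previous_word_idx:
                label_ids.append(ner_tags[word_idx])  # label only the first sub-token of a word
            else:
                label_ids.append(-100)                # remaining sub-tokens: ignored
            previous_word_idx = word_idx
        all_labels.append(label_ids)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_dataset = dataset.map(tokenize_and_align_labels, batched=True)

# Token-classification head on top of the base checkpoint
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(label_list)
)
```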

Finally, the model is trained and the metrics needed to evaluate its performance (precision, recall, F1, and accuracy) are computed.
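
A hedged sketch of how these metrics can be computed for token classification, using the seqeval metric from the 🤗 `evaluate` library; the `compute_metrics` function is illustrative and `label_list` refers to the preprocessing sketch above, not to the original training code.

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

def compute_metrics(eval_preds):
    """Convert logits to tag names, drop ignored positions, and score with seqeval."""
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)

    # Keep only positions whose label is not -100 (i.e. the first sub-token of each word)
    true_labels = [
        [label_list[l] for l in label_row if l != -100]
        for label_row in labels
    ]
    true_predictions = [
        [label_list[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```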

A summary of the executed tests can be found at: https://docs.google.com/spreadsheets/d/1lI7skNIvRurwq3LA5ps7JFK5TxToEx4s7Kaah3ezyQc/edit?usp=sharing

The model can be found at: https://huggingface.co/paulrojasg/NER-finetuned-BETO

GitHub repository: https://github.com/paulrojasg/nlp_4th_workshop
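
For quick usage, the published checkpoint can be loaded directly from the Hub link above with the 🤗 `pipeline` API; the example sentence below is purely illustrative.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub and run Spanish NER
ner = pipeline(
    "token-classification",
    model="paulrojasg/NER-finetuned-BETO",
    aggregation_strategy="simple",  # merge sub-tokens into whole entity spans
)

print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
```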

## Training

### Training Details

- Epochs: 5
- Learning Rate: 2e-05
- Weight Decay: 0.01
- Batch Size (Train): 16
- Batch Size (Eval): 8
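
The hyperparameters above map onto 🤗 `TrainingArguments`/`Trainer` roughly as in the sketch below; the output directory and the non-listed settings are illustrative, and `model`, `tokenizer`, `tokenized_dataset`, and `compute_metrics` refer to the earlier sketches rather than the original training script.

```python
from transformers import DataCollatorForTokenClassification, Trainer, TrainingArguments

# Hyperparameters taken from the list above; other settings are illustrative defaults
training_args = TrainingArguments(
    output_dir="NER-finetuned-BETO",
    num_train_epochs=5,
    learning_rate=2e-5,
    weight_decay=0.01,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    evaluation_strategy="epoch",  # report validation metrics after every epoch
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
    compute_metrics=compute_metrics,
)

trainer.train()
```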

### Training Metrics

| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 Score | Accuracy |
|:-----:|:-------------:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 1 | 0.0178 | 0.1665 | 0.8275 | 0.8509 | 0.8390 | 0.9706 |
| 2 | 0.0144 | 0.1737 | 0.8355 | 0.8495 | 0.8424 | 0.9689 |
| 3 | 0.0121 | 0.1754 | 0.8432 | 0.8612 | 0.8521 | 0.9715 |
| 4 | 0.0085 | 0.1986 | 0.8352 | 0.8527 | 0.8439 | 0.9701 |
| 5 | 0.0060 | 0.2106 | 0.8390 | 0.8536 | 0.8462 | 0.9696 |

## Authors

Made by:

- Paul Rodrigo Rojas Guerrero
- Jose Luis Hincapie Bucheli
- Sebastián Idrobo Avirama

With help from: