---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      config: conll2003
      split: validation
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9346783529022656
    - name: Recall
      type: recall
      value: 0.9511948838774823
    - name: F1
      type: f1
      value: 0.9428642922679124
    - name: Accuracy
      type: accuracy
      value: 0.9863572143403779
---
# bert-finetuned-ner

## Model Description
This model is a fine-tuned version of bert-base-cased for Named Entity Recognition (NER), trained with PyTorch on the CoNLL-2003 dataset. It identifies and classifies named entities in text into categories such as persons (PER), organizations (ORG), locations (LOC), and miscellaneous entities (MISC).
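The labels follow the CoNLL-2003 BIO tagging scheme. A minimal sketch for inspecting the checkpoint's label mapping (the ordering shown in the comment is assumed from the standard conll2003 tag order and has not been verified against this checkpoint):

```python
from transformers import AutoConfig

# Inspect the id-to-label mapping stored in the fine-tuned checkpoint
config = AutoConfig.from_pretrained("Ashaduzzaman/bert-finetuned-ner")
print(config.id2label)
# Expected (standard CoNLL-2003 BIO tags):
# {0: 'O', 1: 'B-PER', 2: 'I-PER', 3: 'B-ORG', 4: 'I-ORG',
#  5: 'B-LOC', 6: 'I-LOC', 7: 'B-MISC', 8: 'I-MISC'}
```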
## Intended Uses & Limitations

**Intended Uses:**
- Text Analysis: This model can be used for extracting named entities from unstructured text data, which is useful in various NLP tasks such as information retrieval, content categorization, and automated summarization.
- NER Task: Specifically designed for NER tasks in English.
**Limitations:**
- Language Dependency: The model is trained on English data and may not perform well on texts in other languages.
- Domain Specificity: Performance may degrade on text from domains significantly different from the training data.
- Error Propagation: Incorrect predictions may propagate to downstream tasks, affecting overall performance.
## How to Use

To use this model, load it with the Hugging Face Transformers library. Below is a basic example:
```python
from transformers import pipeline

# Load the NER pipeline with the fine-tuned checkpoint
ner_pipeline = pipeline("ner", model="Ashaduzzaman/bert-finetuned-ner")

# Example text
text = "Hugging Face Inc. is based in New York City."

# Perform NER
entities = ner_pipeline(text)
print(entities)
```
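By default, the token-classification pipeline returns one prediction per sub-word token. To merge predictions into whole entity spans, an aggregation strategy can be passed (same model id as above; the example output is illustrative):

```python
from transformers import pipeline

# Group sub-word predictions into whole entity spans
ner_pipeline = pipeline(
    "ner",
    model="Ashaduzzaman/bert-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner_pipeline("Hugging Face Inc. is based in New York City."))
# e.g. [{'entity_group': 'ORG', 'word': 'Hugging Face Inc', ...},
#       {'entity_group': 'LOC', 'word': 'New York City', ...}]
```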
## Limitations and Bias
- Bias in Data: The model is trained on the CoNLL-2003 dataset, which may contain biases related to the sources of the text. The model might underperform on entities not well represented in the training data.
- Overfitting: The model may overfit to the specific entities present in the CoNLL-2003 dataset, affecting its generalization to new entities or text styles.
## Training Data
The model was trained on the CoNLL-2003 dataset, a widely used benchmark dataset for NER tasks. The dataset contains annotated text from news articles, with labels for persons, organizations, locations, and miscellaneous entities.
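For reference, a minimal sketch for loading and inspecting the dataset with the Hugging Face Datasets library (this assumes the conll2003 dataset id from the metadata above still resolves on the Hub):

```python
from datasets import load_dataset

# Load the CoNLL-2003 benchmark used for fine-tuning
raw_datasets = load_dataset("conll2003")

# Each example is a list of tokens with aligned word-level NER tag ids
example = raw_datasets["train"][0]
print(example["tokens"])
print(example["ner_tags"])

# Human-readable label names for the tag ids
label_names = raw_datasets["train"].features["ner_tags"].feature.names
print(label_names)  # ['O', 'B-PER', 'I-PER', 'B-ORG', ...]
```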
## Training Procedure

The model was fine-tuned from the pre-trained bert-base-cased checkpoint with a token classification head for NER (a preprocessing and model-setup sketch follows this list). The training involved:
- Optimizer: AdamW
- Learning rate: 2e-05 with a linear decay schedule
- Batch size: 8 for both training and evaluation
- Epochs: 3
- Evaluation: performance was measured on the validation set after each epoch using precision, recall, F1, and accuracy
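A minimal sketch of that setup using standard Transformers utilities; the choice to mask non-first sub-word tokens with -100 is an assumption and may differ from the original training notebook:

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
)

checkpoint = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

label_names = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
               "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

model = AutoModelForTokenClassification.from_pretrained(
    checkpoint,
    num_labels=len(label_names),
    id2label=dict(enumerate(label_names)),
    label2id={label: i for i, label in enumerate(label_names)},
)

def tokenize_and_align_labels(examples):
    # Tokenize pre-split words and re-align word-level NER tags to sub-words;
    # only the first sub-word of each word keeps its label, the rest get -100
    tokenized = tokenizer(examples["tokens"], truncation=True,
                          is_split_into_words=True)
    all_labels = []
    for i, word_labels in enumerate(examples["ner_tags"]):
        previous_word_id = None
        aligned = []
        for word_id in tokenized.word_ids(batch_index=i):
            if word_id is None or word_id == previous_word_id:
                aligned.append(-100)
            else:
                aligned.append(word_labels[word_id])
            previous_word_id = word_id
        all_labels.append(aligned)
    tokenized["labels"] = all_labels
    return tokenized

data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
```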
### Training hyperparameters

The following hyperparameters were used during training (a sketch that plugs them into `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
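Consistent with the generated_from_trainer tag, these values map directly onto a Trainer setup. A minimal sketch, continuing from the snippets above (model, raw_datasets, tokenize_and_align_labels, data_collator) and using a hypothetical compute_metrics like the one sketched under Evaluation Results; the original notebook may differ in details:

```python
from transformers import Trainer, TrainingArguments

# Apply the preprocessing function from the sketch above to the full dataset
tokenized_datasets = raw_datasets.map(
    tokenize_and_align_labels,
    batched=True,
    remove_columns=raw_datasets["train"].column_names,
)

args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",  # evaluate at the end of every epoch
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,  # hypothetical; sketched under Evaluation Results
)
trainer.train()
```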
## Evaluation Results

This model is a fine-tuned version of bert-base-cased; performance was measured on the CoNLL-2003 validation set at the end of each epoch using standard NER metrics:
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.076         | 1.0   | 1756 | 0.0657          | 0.9076    | 0.9337 | 0.9204 | 0.9819   |
| 0.0359        | 2.0   | 3512 | 0.0693          | 0.9265    | 0.9418 | 0.9341 | 0.9847   |
| 0.0222        | 3.0   | 5268 | 0.0599          | 0.9347    | 0.9512 | 0.9429 | 0.9864   |
These results indicate the model's ability to correctly identify and classify named entities in text.
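Entity-level precision, recall, and F1 for NER are conventionally computed with seqeval. The compute_metrics sketch below is an assumption about how the reported numbers were produced, following common Trainer setups:

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_names = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
               "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)
    # Drop the -100 positions (special tokens / masked sub-words)
    true_labels = [[label_names[l] for l in row if l != -100] for row in labels]
    true_preds = [
        [label_names[p] for p, l in zip(p_row, l_row) if l != -100]
        for p_row, l_row in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```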
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1