Model Card for Arabic Named Entity Recognition with AraBERT

Model Details

Model Name: AraBERT-NER

Model Type: AraBERT (pre-trained on Arabic text and fine-tuned for the Arabic Named Entity Recognition task)

Language: Arabic

License: MIT

Model Creator: Mostafa Ahmed

Contact Information: mostafa.ahmed00976@gmail.com

Model Version: 1.0

Overview

AraBERT-NER is a fine-tuned version of the AraBERT model designed specifically for Named Entity Recognition (NER) in Arabic. The model has been trained to identify and classify named entities such as persons, organizations, locations, and miscellaneous entities (MISC) within Arabic text. This makes it suitable for applications such as information extraction, document categorization, and data annotation in Arabic.

Intended Use

The model is intended for use in:

  • Named Entity Recognition systems for Arabic
  • Information extraction from Arabic text
  • Document categorization and annotation
  • Arabic language processing research

Training Data

The model was fine-tuned on the CoNLL-NER-AR dataset.

Data Sources:

  • CoNLL-NER-AR: A dataset for named entity recognition tasks in Arabic.
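
For orientation, CoNLL-style NER data pairs pre-split tokens with one BIO tag per token. The following is only an illustrative example; the field names, sentence, and tags are assumptions, not drawn from the actual dataset:

# Illustrative only: a single CoNLL-style NER example as typically structured when loaded
# with the datasets library (field names and tags are assumed, not taken from CoNLL-NER-AR).
example = {
    "tokens": ["ولد", "محمد", "علي", "في", "القاهرة"],
    "ner_tags": ["O", "B-PER", "I-PER", "O", "B-LOC"],  # Hub loaders usually encode these as integer class ids
}
assert len(example["tokens"]) == len(example["ner_tags"])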

Training Procedure

The model was trained using the Hugging Face transformers library. The training process involved:

  • Preprocessing the CoNLL-NER-AR dataset to format the text and entity annotations for NER.
  • Fine-tuning the pre-trained AraBERT model on the Arabic NER dataset (a minimal sketch of these steps follows this list).
  • Evaluating the model's performance using standard NER metrics (e.g., Precision, Recall, F1 Score).
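
The training script itself is not included in this card. The sketch below shows one common way to preprocess a CoNLL-style dataset and fine-tune AraBERT for token classification with the transformers Trainer. The dataset identifier, base checkpoint, label set, and hyperparameters are assumptions, not the card's official values:

from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    TrainingArguments,
    Trainer,
)

# Hypothetical dataset identifier and label set -- the real ones come from CoNLL-NER-AR.
dataset = load_dataset("path/or/hub-id-of-conll-ner-ar")
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

# Assumed base checkpoint; the card does not state which AraBERT variant was used.
base_model = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForTokenClassification.from_pretrained(base_model, num_labels=len(label_list))

def tokenize_and_align_labels(examples):
    # Tokenize pre-split words and align word-level tags (assumed to be integer class ids)
    # to the resulting subword tokens.
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        labels, previous = [], None
        for word_id in word_ids:
            if word_id is None:
                labels.append(-100)           # special tokens: ignored by the loss
            elif word_id != previous:
                labels.append(tags[word_id])  # label only the first subword of each word
            else:
                labels.append(-100)
            previous = word_id
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_dataset = dataset.map(tokenize_and_align_labels, batched=True)

training_args = TrainingArguments(
    output_dir="arabert-ner",
    learning_rate=2e-5,              # assumed hyperparameters, not the card's official values
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],  # assumes the dataset provides a validation split
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()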

Evaluation Results

The model was evaluated on a held-out test set from the CoNLL-NER-AR dataset. Here are the key performance metrics:

  • Precision: 0.8547
  • Recall: 0.8633
  • F1 Score: 0.8590
  • Accuracy: 0.9542

These metrics indicate the model's effectiveness in accurately identifying and classifying named entities in Arabic text.
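
The evaluation code is not part of this card. A minimal sketch of how entity-level Precision, Recall, F1, and token accuracy are typically computed with the evaluate library's seqeval metric; the label set is assumed, and the function is meant to be passed as compute_metrics to a Trainer:

import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

# Assumed label set -- the real tag inventory comes from the dataset's features.
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Keep only positions with a real label (-100 marks special tokens and continuation subwords).
    true_labels = [
        [label_list[l] for l in label_row if l != -100]
        for label_row in labels
    ]
    true_predictions = [
        [label_list[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }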

How to Use

You can load and use the model with the Hugging Face transformers library as follows:

from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("MostafaAhmed98/AraBert-Arabic-NER-CoNLLpp")
model = AutoModelForTokenClassification.from_pretrained("MostafaAhmed98/AraBert-Arabic-NER-CoNLLpp")

# Create a NER pipeline
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer)

# Example usage
text = "ولد محمد علي في القاهرة وعمل في شركة مايكروسوفت."
ner_results = ner_pipeline(text)

for entity in ner_results:
    print(f"Entity: {entity['word']}, Label: {entity['entity']}, Confidence: {entity['score']:.2f}")