---
language:
- ar
widget:
- text: >-
اجتياح رفح الفلسطينية أكبر جريمة إبادة فى التاريخ المعاصر
- text: >-
ولد محمد علي في القاهرة وعمل في شركة مايكروسوفت
- text: >-
أحمد مازن أحمد أسعد الشقيري (ولد في 6 يونيو 1973) إعلامي وكاتب سعودي ومقدم برامج تلفزيونية
tags:
- AraBERT
- ner
- nlp
license: mit
datasets:
- e-hossam96/conllpp-ner-ar
metrics:
- f1
- precision
- accuracy
- recall
---
# Model Card for Arabic Named Entity Recognition with AraBERT
## Model Details
- **Model Name:** AraBERT-NER
- **Model Type:** AraBERT (pre-trained on Arabic text and fine-tuned for Arabic Named Entity Recognition)
- **Language:** Arabic
- **License:** MIT
- **Model Creator:** Mostafa Ahmed
- **Contact Information:** mostafa.ahmed00976@gmail.com
- **Model Version:** 1.0
## Overview
AraBERT-NER is a fine-tuned version of the AraBERT model designed for Named Entity Recognition (NER) in Arabic. The model has been trained to identify and classify named entities such as persons, organizations, locations, and miscellaneous entities (MISC) within Arabic text, making it suitable for applications such as information extraction, document categorization, and data annotation in Arabic.
## Intended Use
The model is intended for use in:
- Named Entity Recognition systems for Arabic
- Information extraction from Arabic text
- Document categorization and annotation
- Arabic language processing research
## Training Data
The model was fine-tuned on the CoNLLpp-NER-AR dataset.
**Data Sources:**
- [CoNLLpp-NER-AR](https://huggingface.co/datasets/e-hossam96/conllpp-ner-ar): A dataset for named entity recognition tasks in Arabic.
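As a quick sanity check, the dataset can be loaded and its label set inspected with the `datasets` library. This is a minimal sketch; the `tokens` and `ner_tags` column names assume the usual CoNLL-style layout and are not confirmed by this card:

```python
from datasets import load_dataset

# Load the Arabic NER dataset from the Hugging Face Hub.
ds = load_dataset("e-hossam96/conllpp-ner-ar")
print(ds)  # splits and number of examples

# Assumed CoNLL-style columns: `tokens` (word lists) and `ner_tags` (label IDs).
label_names = ds["train"].features["ner_tags"].feature.names
print(label_names)
```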
## Training Procedure
The model was trained using the Hugging Face `transformers` library. The training process involved:
- Preprocessing the CoNLLpp-NER-AR dataset to format the text and entity annotations for NER.
- Fine-tuning the pre-trained AraBERT model on the Arabic NER dataset.
- Evaluating the model's performance using standard NER metrics (e.g., Precision, Recall, F1 Score).
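The exact training script is not included in this card; the following is a minimal sketch of such a fine-tuning workflow with `transformers`, assuming `aubmindlab/bert-base-arabertv02` as the base AraBERT checkpoint, CoNLL-style `tokens`/`ner_tags` columns, a `validation` split, and illustrative hyperparameters:

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    TrainingArguments,
    Trainer,
)

# Assumed base checkpoint and dataset layout (not confirmed by the card).
base_checkpoint = "aubmindlab/bert-base-arabertv02"
dataset = load_dataset("e-hossam96/conllpp-ner-ar")
label_names = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForTokenClassification.from_pretrained(
    base_checkpoint, num_labels=len(label_names)
)

def tokenize_and_align_labels(batch):
    # Tokenize pre-split words and align word-level tags to sub-word tokens,
    # masking special tokens and continuation pieces with -100 (ignored by the loss).
    tokenized = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous_word_id = None
        labels = []
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                labels.append(-100)
            else:
                labels.append(tags[word_id])
            previous_word_id = word_id
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized

encoded = dataset.map(tokenize_and_align_labels, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="arabert-ner",
        num_train_epochs=3,             # illustrative hyperparameters
        per_device_train_batch_size=16,
        learning_rate=5e-5,
    ),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
trainer.evaluate()
```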
## Evaluation Results
The model was evaluated on a held-out test set from the CoNLLpp-NER-AR dataset. Here are the key performance metrics:
- **Precision:** 0.8547
- **Recall:** 0.8633
- **F1 Score:** 0.8590
- **Accuracy:** 0.9542
These metrics indicate the model's effectiveness in accurately identifying and classifying named entities in Arabic text.
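For reference, entity-level metrics of this kind are typically computed with `seqeval`. A minimal sketch follows; the tag sequences are toy examples, not taken from the actual test set:

```python
import evaluate  # pip install evaluate seqeval

seqeval = evaluate.load("seqeval")

# Toy BIO tag sequences for predictions and gold references.
predictions = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
references  = [["B-PER", "I-PER", "O", "B-ORG", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```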
## How to Use
You can load and use the model with the Hugging Face `transformers` library as follows:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("MostafaAhmed98/AraBert-Arabic-NER-CoNLLpp")
model = AutoModelForTokenClassification.from_pretrained("MostafaAhmed98/AraBert-Arabic-NER-CoNLLpp")

# Create a NER pipeline
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer)

# Example usage
text = "ولد محمد علي في القاهرة وعمل في شركة مايكروسوفت."
ner_results = ner_pipeline(text)

for entity in ner_results:
    print(f"Entity: {entity['word']}, Label: {entity['entity']}, Confidence: {entity['score']:.2f}")
```
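Because AraBERT tokenizes text into sub-word pieces, the raw pipeline output may split a single entity across several tokens. The standard `aggregation_strategy` argument of the `transformers` NER pipeline can merge them into whole spans; a minimal sketch, continuing from the snippet above (grouped results use the `entity_group` key instead of `entity`):

```python
# Optionally merge sub-word pieces of the same entity into single spans.
grouped_ner = pipeline("ner", model=model, tokenizer=tokenizer,
                       aggregation_strategy="simple")

for entity in grouped_ner(text):
    print(f"Entity: {entity['word']}, Label: {entity['entity_group']}, Confidence: {entity['score']:.2f}")
```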