---
language:
- en
license: cc-by-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
datasets:
- DFKI-SLT/few-nerd
metrics:
- precision
- recall
- f1
widget:
- text: The Hebrew Union College libraries in Cincinnati and Los Angeles, the Library
    of Congress in Washington, D.C., the Jewish Theological Seminary in New York
City, and the Harvard University Library (which received donations of Deinard's
texts from Lucius Nathan Littauer, housed in Widener and Houghton libraries) also
have large collections of Deinard works.
- text: Abu Abd Allah Muhammad al-Idrisi (1099–1165 or 1166), the Moroccan Muslim
geographer, cartographer, Egyptologist and traveller who lived in Sicily at the
court of King Roger II, mentioned this island, naming it جزيرة مليطمة ("jazīrat
Malīṭma", "the island of Malitma ") on page 583 of his book "Nuzhat al-mushtaq
fi ihtiraq ghal afaq", otherwise known as The Book of Roger, considered a geographic
encyclopaedia of the medieval world.
- text: The font is also used in the logo of the American rock band Greta Van Fleet,
    in the logo for Netflix show "Stranger Things", and in the album art for rapper
    Logic's album "Supermarket".
- text: Caretaker manager George Goss led them on a run in the FA Cup, defeating Liverpool
in round 4, to reach the semi-final at Stamford Bridge, where they were defeated
2–0 by Sheffield United on 28 March 1925.
- text: In 1991, the National Science Foundation (NSF), which manages the U.S. Antarctic
    Program (USAP), honoured his memory by dedicating a state-of-the-art laboratory
complex in his name, the Albert P. Crary Science and Engineering Center (CSEC)
located in McMurdo Station.
pipeline_tag: token-classification
base_model: bert-base-cased
model-index:
- name: SpanMarker with bert-base-cased on DFKI-SLT/few-nerd
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
      name: few-nerd
type: DFKI-SLT/few-nerd
split: test
metrics:
- type: f1
value: 0.767937326836725
name: F1
- type: precision
value: 0.7684512428298279
name: Precision
- type: recall
value: 0.7674240977658965
name: Recall
---
# SpanMarker with bert-base-cased on DFKI-SLT/few-nerd
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [DFKI-SLT/few-nerd](https://huggingface.co/datasets/DFKI-SLT/few-nerd) dataset that can be used for Named Entity Recognition. This SpanMarker model uses [bert-base-cased](https://huggingface.co/bert-base-cased) as the underlying encoder.
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [bert-base-cased](https://huggingface.co/bert-base-cased)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
- **Training Dataset:** [DFKI-SLT/few-nerd](https://huggingface.co/datasets/DFKI-SLT/few-nerd)
- **Language:** en
- **License:** cc-by-sa-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:-------------|:-------------------------------------------------------------------------------|
| art          | "Time", "The Seven Year Itch", "Imelda de' Lambertazzi"                         |
| building | "Henry Ford Museum", "Boston Garden", "Sheremetyevo International Airport" |
| event | "French Revolution", "Iranian Constitutional Revolution", "Russian Revolution" |
| location | "Croatian", "the Republic of Croatia", "Mediterranean Basin" |
| organization | "Church's Chicken", "IAEA", "Texas Chicken"                                      |
| other | "Amphiphysin", "BAR", "N-terminal lipid" |
| person | "Hicks", "Ellaline Terriss", "Edmund Payne" |
| product | "Phantom", "Corvettes - GT1 C6R", "100EX" |
## Evaluation
### Metrics
| Label | Precision | Recall | F1 |
|:-------------|:----------|:-------|:-------|
| **all** | 0.7685 | 0.7674 | 0.7679 |
| art | 0.7749 | 0.6884 | 0.7291 |
| building | 0.6045 | 0.6612 | 0.6316 |
| event | 0.6437 | 0.5161 | 0.5729 |
| location | 0.8066 | 0.8425 | 0.8241 |
| organization | 0.7127 | 0.6836 | 0.6978 |
| other | 0.6802 | 0.6775 | 0.6789 |
| person | 0.8900 | 0.9135 | 0.9016 |
| product | 0.6596 | 0.6305 | 0.6447 |
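The table above reports test-set scores per coarse label. To re-run the overall evaluation, the snippet below is a minimal sketch: it assumes the model was trained on the `supervised` config of Few-NERD (whose coarse labels live in the `ner_tags` column) and reuses the placeholder model id from the inference example below.

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Placeholder repo id, matching the examples below
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Assumption: the "supervised" config with coarse-grained "ner_tags" labels
test_dataset = load_dataset("DFKI-SLT/few-nerd", "supervised", split="test")

trainer = Trainer(model=model, eval_dataset=test_dataset)
print(trainer.evaluate())  # overall precision, recall, F1, and accuracy
```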
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Run inference
entities = model.predict("Caretaker manager George Goss led them on a run in the FA Cup, defeating Liverpool in round 4, to reach the semi-final at Stamford Bridge, where they were defeated 2–0 by Sheffield United on 28 March 1925.")
```
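`predict` returns one dict per detected entity, with the span text, label, confidence score, and character offsets into the input (key names as documented in the SpanMarker README); a short sketch of consuming that output:

```python
for entity in entities:
    # e.g. George Goss -> person (score ~0.99), chars 18-29
    print(f'{entity["span"]} -> {entity["label"]} ({entity["score"]:.2f}), '
          f'chars {entity["char_start_index"]}-{entity["char_end_index"]}')
```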
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # for example, CoNLL-2003

# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span_marker_model_id-finetuned")
```
</details>
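After fine-tuning, the saved model can be reloaded just like this one, e.g. `SpanMarkerModel.from_pretrained("span_marker_model_id-finetuned")`; since the SpanMarker `Trainer` builds on the 🤗 Transformers `Trainer`, `trainer.push_to_hub()` should also work if Hub credentials are configured.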
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length (tokens) | 1 | 24.4956 | 163 |
| Entities per sentence | 0 | 2.5439 | 35 |
### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
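These settings map one-to-one onto 🤗 Transformers `TrainingArguments`, which the SpanMarker `Trainer` accepts through its `args` parameter; a hedged reproduction sketch (`output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder
args = TrainingArguments(
    output_dir="span-marker-bert-base-fewnerd",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # total train batch size: 4 * 2 = 8
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,  # Native AMP mixed-precision training
)
# Pass to the Trainer from "Downstream Use": Trainer(model=model, args=args, ...)
```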
### Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 0.1629 | 200 | 0.0323 | 0.7242 | 0.5919 | 0.6514 | 0.8980 |
| 0.3259 | 400 | 0.0232 | 0.7537 | 0.7149 | 0.7337 | 0.9252 |
| 0.4888 | 600 | 0.0212 | 0.7767 | 0.7301 | 0.7527 | 0.9301 |
| 0.6517 | 800 | 0.0209 | 0.7605 | 0.7615 | 0.7610 | 0.9353 |
| 0.8147 | 1000 | 0.0194 | 0.7815 | 0.7604 | 0.7708 | 0.9383 |
| 0.9776 | 1200 | 0.0195 | 0.7681 | 0.7833 | 0.7756 | 0.9403 |
### Framework Versions
- Python: 3.10.12
- SpanMarker: 1.5.0
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```