---
license: apache-2.0
base_model:
- microsoft/deberta-v3-large
library_name: transformers
tags:
- relation extraction
- nlp
model-index:
- name: iter-conll03-deberta-large
  results:
  - task:
      type: relation-extraction
    dataset:
      name: conll03
      type: conll03
    metrics:
    - name: F1
      type: f1
      value: 92.06
---
# ITER: Iterative Transformer-based Entity Recognition and Relation Extraction

This model checkpoint is part of the collection of models published alongside our paper ITER, accepted at EMNLP 2024. To ease reproducibility and enable open research, our source code has been published on GitHub.

This model achieved an F1 score of 92.06 on the CoNLL03 dataset.
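The score above is a micro-averaged F1 over predicted versus gold annotations. As an illustration only (this is not the paper's evaluation code, and the toy spans below are made up), micro F1 over two sets of exact-match annotations can be computed as:

```python
def micro_f1(gold: set, pred: set) -> float:
    """Micro-averaged F1 over exact-match annotations (illustrative only)."""
    tp = len(gold & pred)                       # true positives: exact matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy (start, end, label) spans, not real CoNLL03 data:
gold = {(0, 2, "ORG"), (5, 6, "PER"), (8, 9, "LOC")}
pred = {(0, 2, "ORG"), (5, 6, "PER"), (8, 9, "MISC")}
print(round(micro_f1(gold, pred), 4))  # 0.6667
```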
## Using ITER in your code

First, install ITER in your preferred environment:

```shell
pip install git+https://github.com/fleonce/iter
```
To use our model, refer to the following code:

```python
from iter import ITER

model = ITER.from_pretrained("fleonce/iter-conll03-deberta-large")
tokenizer = model.tokenizer

encodings = tokenizer(
    "An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series of portraits of Palestinians killed in the rebellion .",
    return_tensors="pt",
)

generation_output = model.generate(
    encodings["input_ids"],
    attention_mask=encodings["attention_mask"],
)

# entities
print(generation_output.entities)

# relations between entities
print(generation_output.links)
```
## Checkpoints

We publish checkpoints for the models performing best on the following datasets:

- ACE05:
- CoNLL04:
- ADE:
- SciERC:
- CoNLL03:
- GENIA:
## Reproducibility

For each dataset, we selected the best performing checkpoint out of the 5 training runs we performed. This model was trained with the following hyperparameters:

- Seed: `2`
- Config: `conll03/small_lr`
- PyTorch `2.3.0` with CUDA `11.8` and precision `torch.bfloat16`
- GPU: 1 NVIDIA H100 SXM 80 GB
Varying the GPU, CUDA version, or training precision produced slightly different final results in our reproducibility tests.
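One reason training precision matters is that `bfloat16` keeps float32's 8 exponent bits but only 7 mantissa bits, so low-order digits are lost. A minimal stdlib sketch of that bit layout (illustrative only: it truncates the mantissa, whereas real float32-to-bfloat16 conversion typically rounds to nearest even):

```python
import struct

def bfloat16_truncate(x: float) -> float:
    """Keep only the top 16 bits of a float32 value
    (1 sign bit, 8 exponent bits, 7 mantissa bits)."""
    # Reinterpret the float32 bit pattern as an unsigned 32-bit integer.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # Zero the low 16 bits and reinterpret the result as a float again.
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(bfloat16_truncate(92.06))  # low-order mantissa bits are lost -> 92.0
```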
To train this model, refer to the following command:

```shell
python3 train.py --dataset conll03/small_lr --transformer microsoft/deberta-v3-large --use_bfloat16 --seed 2
```
## Citation

```bibtex
@inproceedings{citation}
```