---
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- task: token-classification
  task_id: entity_extraction
  splits:
    train_split: train
    eval_split: test
    val_split: validation
  col_mapping:
    tokens: tokens
    ner_tags: tags
  metrics:
  - type: seqeval
    name: seqeval
---
# Dataset description
This dataset was created for fine-tuning the model [robbert-base-v2-NER-NL-legislation-refs](https://huggingface.co/romjansen/robbert-base-v2-NER-NL-legislation-refs) and consists of 512-token examples, each containing one or more legislation references. The examples were derived from a weakly labelled corpus of Dutch case law scraped from [Linked Data Overheid](https://linkeddata.overheid.nl/). The corpus was pre-tokenized and labelled with [spaCy](https://spacy.io/) (via [biluo_tags_from_offsets](https://spacy.io/api/top-level#biluo_tags_from_offsets)) and subsequently tokenized into subwords by applying Hugging Face's [AutoTokenizer.from_pretrained()](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoTokenizer.from_pretrained) to load the tokenizer of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base).
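The sketch below illustrates this preparation pipeline on a single made-up sentence. It is a minimal approximation, not the exact script used to build the dataset: the example text and the entity label `REF` are hypothetical, and in spaCy v3 the linked function is available under the name `offsets_to_biluo_tags`.

```python
import spacy
from spacy.training import offsets_to_biluo_tags  # renamed from biluo_tags_from_offsets in spaCy v3
from transformers import AutoTokenizer

# Hypothetical weakly labelled example: a sentence with one legislation
# reference, annotated as (start_char, end_char, label). "REF" is an
# illustrative label, not necessarily the tag set used in this dataset.
text = "Gelet op artikel 6:162 van het Burgerlijk Wetboek wordt het beroep verworpen."
ref = "artikel 6:162 van het Burgerlijk Wetboek"
start = text.index(ref)
entities = [(start, start + len(ref), "REF")]

# Pre-tokenize with a blank Dutch pipeline and convert the character
# offsets to word-level BILUO tags.
nlp = spacy.blank("nl")
doc = nlp(text)
biluo_tags = offsets_to_biluo_tags(doc, entities)

# Further tokenize the pre-tokenized words into RobBERT subwords.
tokenizer = AutoTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
encoding = tokenizer(
    [token.text for token in doc],
    is_split_into_words=True,
    truncation=True,
    max_length=512,
)

# Propagate each word's tag to all of its subwords; special tokens
# (<s>, </s>) carry no word id and therefore get no tag.
aligned_tags = [
    biluo_tags[word_id] if word_id is not None else None
    for word_id in encoding.word_ids()
]
```

A full preparation pipeline would additionally map the string tags to integer ids and chunk the documents into 512-token examples, but the offset-to-tag conversion and subword alignment above are the core of the process described here.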