---
multilinguality:
  - monolingual
task_categories:
  - token-classification
task_ids:
  - named-entity-recognition
train-eval-index:
  - task: token-classification
    task_id: entity_extraction
    splits:
      train_split: train
      eval_split: test
      val_split: validation
    col_mapping:
      tokens: tokens
      ner_tags: tags
    metrics:
      - type: seqeval
        name: seqeval
---

## Dataset description

This dataset was created for fine-tuning the model robbert-base-v2-NER-NL-legislation-refs. It consists of examples of 512 tokens, each containing one or more legislation references. The examples were derived from a weakly labelled corpus of Dutch case law scraped from Linked Data Overheid. The corpus was pre-tokenized and labelled with spaCy (`biluo_tags_from_offsets`), and then further tokenized by applying Hugging Face's `AutoTokenizer.from_pretrained()` with the tokenizer of pdelobelle/robbert-v2-dutch-base.
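
Below is a minimal sketch of these preprocessing steps. The regular expression, the label name and the example sentence are illustrative assumptions, not the actual weak-labelling rules used for the corpus; in spaCy v3 the helper referred to above as `biluo_tags_from_offsets` is exposed as `spacy.training.offsets_to_biluo_tags`.

```python
import re

import spacy
from spacy.training import offsets_to_biluo_tags  # named biluo_tags_from_offsets in spaCy v2
from transformers import AutoTokenizer

nlp = spacy.blank("nl")  # blank Dutch pipeline, used for pre-tokenization only
tokenizer = AutoTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")

# Hypothetical weak-labelling rule; the rules actually used for the corpus are
# not documented on this card.
LEGISLATION_REF = re.compile(r"artikel\s+\d+[:\w]*\s+\w+", re.IGNORECASE)

text = "Op grond van artikel 6:162 BW is de gedaagde aansprakelijk."

# 1. Weakly label legislation references as character-offset spans.
spans = [(m.start(), m.end(), "LEGISLATION_REF") for m in LEGISLATION_REF.finditer(text)]

# 2. Pre-tokenize with spaCy and convert the spans to BILUO tags.
doc = nlp(text)
tokens = [token.text for token in doc]
biluo_tags = offsets_to_biluo_tags(doc, spans)

# 3. Tokenize further with the RobBERT tokenizer, keeping the word alignment so
#    the BILUO tags can be propagated to the sub-word tokens.
encoding = tokenizer(tokens, is_split_into_words=True, truncation=True, max_length=512)
aligned_tags = [
    "O" if word_id is None else biluo_tags[word_id]  # real preprocessing would typically use -100 for special tokens
    for word_id in encoding.word_ids()
]

print(list(zip(encoding.tokens(), aligned_tags)))
```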
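
The `train-eval-index` metadata above declares the dataset columns (`tokens`, `ner_tags`) and the seqeval metric used for evaluation on the Hub. A minimal usage sketch, assuming the dataset is published under a placeholder repository ID:

```python
from datasets import load_dataset
import evaluate

# Placeholder repository ID -- replace with the actual dataset name on the Hub.
dataset = load_dataset("romjansen/robbert-v2-NER-NL-legislation-refs-data")
seqeval = evaluate.load("seqeval")

example = dataset["test"][0]
print(example["tokens"][:10])
print(example["ner_tags"][:10])

# Given per-example lists of string labels from a fine-tuned model, seqeval
# computes entity-level precision, recall and F1:
# results = seqeval.compute(predictions=predicted_labels, references=gold_labels)
```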