---
language:
  - en
task_categories:
  - token-classification
task_ids:
  - named-entity-recognition
tags:
  - ner
  - conll2003
size_categories:
  - 10K<n<100K
---

# CoNLL-2003 Named Entity Recognition Dataset

This is a self-contained version of the CoNLL-2003 dataset for Named Entity Recognition (NER).

**🔒 No `trust_remote_code` required** - This dataset uses only standard parquet files with no custom loading scripts.

## Dataset Description

The CoNLL-2003 shared task dataset consists of newswire text from the Reuters corpus annotated with four entity types: persons (PER), locations (LOC), organizations (ORG), and miscellaneous entities (MISC).

## Dataset Structure

### Data Instances

Each instance contains:

- `id`: Unique identifier for the example
- `tokens`: List of tokens (words)
- `pos_tags`: List of part-of-speech tags
- `chunk_tags`: List of chunk tags
- `ner_tags`: List of named entity tags

### Data Splits

- `train`: 14,041 examples
- `validation`: 3,250 examples
- `test`: 3,453 examples

### Features

- `tokens` (list of strings): The words in the sentence
- `pos_tags` (list of `ClassLabel`): Part-of-speech tags
- `chunk_tags` (list of `ClassLabel`): Syntactic chunk tags (phrase boundaries)
- `ner_tags` (list of `ClassLabel`): Named entity tags in the BIO scheme (the integer-to-label mapping is shown after this list):
  - `O`: Outside any named entity
  - `B-PER`: Beginning of a person name
  - `I-PER`: Inside a person name
  - `B-ORG`: Beginning of an organization name
  - `I-ORG`: Inside an organization name
  - `B-LOC`: Beginning of a location name
  - `I-LOC`: Inside a location name
  - `B-MISC`: Beginning of a miscellaneous entity
  - `I-MISC`: Inside a miscellaneous entity
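
The integer values stored in `ner_tags` index into this label list. Rather than hard-coding the order, read it from the dataset itself; `ner_feature.names` is the authoritative mapping (a minimal sketch):

```python
from datasets import load_dataset

# Read the integer-to-label mapping straight from the dataset features.
dataset = load_dataset("jacobmitchinson/conll2003")
ner_feature = dataset["train"].features["ner_tags"].feature
print(ner_feature.names)
# ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']
```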

## Usage

This dataset is completely self-contained and does **not** require `trust_remote_code=True`. All data is bundled in parquet files.

### Loading from the Hugging Face Hub

```python
from datasets import load_dataset

# Load the dataset directly from the Hub
dataset = load_dataset("jacobmitchinson/conll2003")

# Access splits
train_data = dataset["train"]
validation_data = dataset["validation"]
test_data = dataset["test"]
```
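
Once loaded, you can confirm that the splits match the sizes listed above (an optional sanity check):

```python
# Print the number of examples per split; the counts should match
# the Data Splits section above.
for split_name, split in dataset.items():
    print(split_name, len(split))
# train 14041
# validation 3250
# test 3453
```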

### Loading from Local Files

```python
from datasets import load_dataset

# Load the dataset from local parquet files
dataset = load_dataset("parquet", data_files={
    "train": "data/train.parquet",
    "validation": "data/validation.parquet",
    "test": "data/test.parquet",
})
```
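
When parquet files are written with the `datasets` library, the `ClassLabel` feature definitions are stored in the file metadata and are usually restored on load. Since `int2str` in the example below depends on this, a quick check is worthwhile (a minimal sketch, assuming the files from this repository):

```python
from datasets import ClassLabel

# If this prints True, the int2str()/str2int() helpers used below are available.
feature = dataset["train"].features["ner_tags"].feature
print(isinstance(feature, ClassLabel))
```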

### Example Usage

```python
# Get an example
example = train_data[0]
print("Tokens:", example["tokens"])
# Output: ['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.']

print("NER tags:", example["ner_tags"])
# Output: [3, 0, 7, 0, 0, 0, 7, 0, 0]

# Convert NER tags to readable labels
ner_feature = train_data.features["ner_tags"].feature
ner_labels = [ner_feature.int2str(tag) for tag in example["ner_tags"]]
print("NER labels:", ner_labels)
# Output: ['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O', 'O']
```
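
Going from BIO labels to entity spans is a common next step. The helper below is a minimal sketch of standard BIO decoding; the function name `bio_to_spans` is ours, not part of the dataset or the `datasets` library:

```python
def bio_to_spans(tokens, labels):
    """Group BIO labels into (entity_type, text) spans."""
    spans, current_type, current_tokens = [], None, []
    for token, label in zip(tokens, labels):
        if label.startswith("B-") or (label.startswith("I-") and current_type != label[2:]):
            # A B- tag (or an I- tag with no matching open entity) starts a new span.
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = label[2:], [token]
        elif label.startswith("I-"):
            current_tokens.append(token)
        else:  # "O" closes any open entity
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_tokens:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

print(bio_to_spans(example["tokens"], ner_labels))
# [('ORG', 'EU'), ('MISC', 'German'), ('MISC', 'British')]
```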

## Citation

```bibtex
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
    title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
    author = "Tjong Kim Sang, Erik F. and De Meulder, Fien",
    booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
    year = "2003",
    pages = "142--147",
    url = "https://www.aclweb.org/anthology/W03-0419",
}
```

## License

The dataset is licensed under the same terms as the original CoNLL-2003 dataset.