---
annotations_creators:
  - expert-generated
language_creators:
  - other
language:
  - pl
license:
  - gpl-3.0
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - original
task_categories:
  - token-classification
task_ids:
  - named-entity-recognition
pretty_name: NKJP NER
dataset_info:
  features:
    - name: sentence
      dtype: string
    - name: target
      dtype:
        class_label:
          names:
            '0': geogName
            '1': noEntity
            '2': orgName
            '3': persName
            '4': placeName
            '5': time
  splits:
    - name: train
      num_bytes: 1612125
      num_examples: 15794
    - name: test
      num_bytes: 221092
      num_examples: 2058
    - name: validation
      num_bytes: 196652
      num_examples: 1941
  download_size: 821629
  dataset_size: 2029869
---

Dataset Card for NKJP NER

Table of Contents

Dataset Description

Dataset Summary

A linguistic corpus is a collection of texts in which one can find the typical use of a single word or phrase, as well as its meaning and grammatical function. Nowadays, without access to a language corpus it is impossible to do linguistic research, to write dictionaries, grammars and language-teaching books, or to build search engines sensitive to Polish inflection, machine translation engines and other advanced language technology software. Language corpora have become an essential tool for linguists, but they are also useful to software engineers, scholars of literature and culture, historians, librarians and other specialists in the arts and computer science. This dataset is based on the manually annotated 1-million-word subcorpus of the NKJP (Narodowy Korpus Języka Polskiego, the National Corpus of Polish), available under GNU GPL v.3.

Supported Tasks and Leaderboards

Named entity recognition

[More Information Needed]

Languages

Polish

Dataset Structure

Data Instances

The data is distributed as TSV files: the train and dev files contain two columns (sentence, target), while the test file contains only one column (sentence).
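
A minimal sketch of loading and inspecting an instance with the Hugging Face datasets library; the "nkjp-ner" identifier is assumed from this repository's name and may need to be replaced with the full Hub path.

```python
from datasets import load_dataset

# Load the dataset from the Hub (identifier assumed from this repository's name).
dataset = load_dataset("nkjp-ner")

# Each instance pairs a sentence with a single entity-class target.
print(dataset["train"][0])
# e.g. {'sentence': '...', 'target': 3}
```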

Data Fields

  • sentence: a string with the Polish sentence
  • target: the entity class assigned to the sentence, one of geogName, noEntity, orgName, persName, placeName or time (see the sketch below for mapping label ids to names)
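
A small sketch, under the same loading assumption as above, showing how the integer target values map back to the class names listed in the metadata.

```python
from datasets import load_dataset

dataset = load_dataset("nkjp-ner")

# The target column is a ClassLabel feature, so ids can be converted to names.
label_feature = dataset["train"].features["target"]
print(label_feature.names)       # ['geogName', 'noEntity', 'orgName', 'persName', 'placeName', 'time']
print(label_feature.int2str(3))  # 'persName'
```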

Data Splits

The data is split into train (15,794 examples), validation (1,941 examples) and test (2,058 examples) sets.

Dataset Creation

Curation Rationale

This dataset is one of nine evaluation tasks designed to improve Polish language processing.

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

GNU GPL v.3

Citation Information

@book{przepiorkowski2012narodowy,
  title={Narodowy korpus j{\k{e}}zyka polskiego},
  author={Przepi{\'o}rkowski, Adam},
  year={2012},
  publisher={Naukowe PWN}
}

Contributions

Thanks to @abecadel for adding this dataset.