---
dataset_info:
  features:
    - name: Tokens
      sequence: string
    - name: Tags
      sequence: int64
    - name: Tags_string
      sequence: string
    - name: Original_source
      dtype: string
  splits:
    - name: train
      num_bytes: 276428315
      num_examples: 471343
    - name: test
      num_bytes: 6419858
      num_examples: 11136
    - name: validation
      num_bytes: 6345480
      num_examples: 11456
  download_size: 54821843
  dataset_size: 289193653
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
task_categories:
  - token-classification
language:
  - es
size_categories:
  - 100K<n<1M
license: apache-2.0
---

# Dataset Card for es-ner-massive

## Dataset Details

### Dataset Description

The es-ner-massive dataset is a combination of three datasets: tner/wikineural, conll2002, and polyglot_ner. It is designed for Named Entity Recognition (NER) tasks. Tags are curated to be span-based and encoded according to the following convention:


```python
encodings_dictionary = {
    "O": 0,
    "PER": 1,
    "ORG": 2,
    "LOC": 3,
    "MISC": 4,
}
```


The dataset was designed to combine mid-sized Spanish NER datasets, so it can be used for basic NER or for transfer learning with a solid knowledge base in the pretrained model.

- **Curated by:** a compilation of:
  - polyglot_ner
  - conll2002
  - tner/wikineural
- **Language(s) (NLP):** Spanish
- **License:** Apache 2.0

### Dataset Sources

The original sources:

- polyglot_ner
- conll2002
- tner/wikineural

## Uses

The intended use is to fine-tune a pretrained model on NER tasks.
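As a minimal sketch of that setup, the span-based encoding above maps directly to the `id2label`/`label2id` mappings used by Hugging Face token-classification models. The checkpoint name in the commented call is an illustrative assumption, not part of this dataset:

```python
# Label mappings matching the dataset's span-based encoding.
label2id = {"O": 0, "PER": 1, "ORG": 2, "LOC": 3, "MISC": 4}
id2label = {v: k for k, v in label2id.items()}

# Hypothetical fine-tuning entry point (requires `transformers`; the
# checkpoint name is an illustrative assumption):
# from transformers import AutoModelForTokenClassification
# model = AutoModelForTokenClassification.from_pretrained(
#     "bert-base-multilingual-cased",
#     num_labels=len(label2id),
#     id2label=id2label,
#     label2id=label2id,
# )
```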

## Dataset Structure

Each example contains the following fields:

- `Tokens` (sequence of strings): the sentence tokens
- `Tags` (sequence of int64): the label id for each token
- `Tags_string` (sequence of strings): the label name for each token
- `Original_source` (string): the source dataset the example came from
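For illustration, a record has the following shape; the token and tag values below are invented, not drawn from the dataset:

```python
# Invented example showing the shape of one record; the two tag views
# are parallel to the tokens, one entry per token.
example = {
    "Tokens": ["Gabriel", "García", "Márquez", "nació", "en", "Colombia", "."],
    "Tags": [1, 1, 1, 0, 0, 3, 0],
    "Tags_string": ["PER", "PER", "PER", "O", "O", "LOC", "O"],
    "Original_source": "conll2002",
}
assert len(example["Tokens"]) == len(example["Tags"]) == len(example["Tags_string"])
```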

## Dataset Creation

### Curation Rationale

Refer to the original datasets of the compilation.

### Source Data

#### Data Collection and Processing

Refer to the original datasets of the compilation:

- polyglot_ner
- conll2002
- tner/wikineural

#### Who are the source data producers?

Refer to the original datasets of the compilation:

- polyglot_ner
- conll2002
- tner/wikineural

### Annotations

All original NER tags that used the BIO scheme were converted to a span-based scheme.
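The conversion can be sketched as follows; the actual preprocessing script is not published here, so `bio_to_span` is a hypothetical helper that reproduces the stated mapping:

```python
def bio_to_span(bio_tags):
    """Strip B-/I- prefixes from BIO tags, producing span-style labels."""
    return [t.split("-", 1)[1] if "-" in t else t for t in bio_tags]

# ["B-PER", "I-PER", "O", "B-LOC"] -> ["PER", "PER", "O", "LOC"]
spans = bio_to_span(["B-PER", "I-PER", "O", "B-LOC"])
```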

#### Annotation process

Refer to the original datasets of the compilation:

- polyglot_ner
- conll2002
- tner/wikineural

#### Who are the annotators?

Refer to the original datasets of the compilation:

- polyglot_ner
- conll2002
- tner/wikineural

#### Personal and Sensitive Information

Refer to the original datasets of the compilation:

- polyglot_ner
- conll2002
- tner/wikineural

## Bias, Risks, and Limitations

Refer to the original datasets of the compilation:

- polyglot_ner
- conll2002
- tner/wikineural

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations. Refer to the original datasets of the compilation:

- polyglot_ner
- conll2002
- tner/wikineural

## Dataset Card Contact

You can email the author of this compilation at data_analitics_HLH@protonmail.com.