
---
annotations_creators:
- machine-generated
language:
- ru
language_creators:
- machine-generated
license:
- afl-3.0
multilinguality: []
pretty_name: Dmitriy007/restor_punct_Lenta2
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- token-classification
task_ids: []
---

# Dataset Card for Dmitriy007/restor_punct_Lenta2

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The restor_punct_Lenta2 dataset (version 1.0) contains 800,975 blocks of Russian-language sentences split into words, with each word labeled with a marker for subsequent token classification.

Marker types: `L`, `L.`, `L!`, `L?`, `B`, `B.`

Examples of marker meanings:

`L` -- lowercase word, followed by a space

`L.` -- lowercase word, followed by a period

`B` -- capitalized word

`B.` -- capitalized word, followed by a period
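To illustrate how these labels work, here is a minimal sketch of decoding a sequence of (word, label) pairs back into punctuated text. The label semantics follow the list above; the function name and decoding logic are illustrative assumptions, not code from the dataset repository.

```python
def restore_punctuation(words, labels):
    """Rebuild a punctuated sentence from parallel word/label lists.

    Label semantics (per the dataset card):
      L  - lowercase word + space;  L. - lowercase word + period
      B  - capitalized word;        B. - capitalized word + period
    (L! and L? analogously attach "!" and "?".)

    NOTE: illustrative helper, not part of the dataset's own tooling.
    """
    pieces = []
    for word, label in zip(words, labels):
        case, punct = label[0], label[1:]  # e.g. "B." -> ("B", ".")
        token = word.capitalize() if case == "B" else word.lower()
        pieces.append(token + punct)
    return " ".join(pieces)


print(restore_punctuation(
    ["он", "пришёл", "домой"],
    ["B", "L", "L."],
))  # -> Он пришёл домой.
```

A punctuation-restoration model trained on this dataset would predict the label sequence for a stream of unpunctuated lowercase words, after which a decoder like the one above reconstructs capitalization and sentence-final punctuation.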

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is in Russian (`ru`).

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is released under the Academic Free License v3.0 (`afl-3.0`), per the card metadata.

### Citation Information

[More Information Needed]

### Contributions

Thanks to @github-username for adding this dataset.