---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype:
      class_label:
        names:
          '0': az
          '1': by
          '2': cn
          '3': en
          '4': es
          '5': fn
          '6': gr
          '7': jp
          '8': ko
          '9': kz
          '10': la
          '11': li
          '12': mo
          '13': 'no'
          '14': pl
          '15': ru
          '16': ua
  splits:
  - name: train
    num_bytes: 1893804579.79
    num_examples: 1987
  - name: test
    num_bytes: 374568135
    num_examples: 339
  download_size: 2423302965
  dataset_size: 2268372714.79
task_categories:
- text-classification
- translation
- feature-extraction
tags:
- code
size_categories:
- 1K<n<10K
license: mit
language:
- az
- be
- en
- et
- fi
- ka
- ja
- ko
- kk
- lv
- lt
- mn
- 'no'
- pl
- ru
- uk
- zh
---
# Dataset Card for "docs_on_several_languages"
This dataset is a collection of document images in 17 languages.
The set includes the following languages: Azerbaijani, Belarusian, Chinese, English, Estonian, Finnish, Georgian, Japanese, Korean, Kazakh, Latvian, Lithuanian, Mongolian, Norwegian, Polish, Russian, Ukrainian.
Each language has a corresponding class label, and each class contains at least 100 images. This dataset was originally used for classifying the language of a document from its image, but I hope it can help you with other machine learning tasks as well.
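As a convenience, here is a minimal sketch of converting between the integer class labels and the language codes; the `LABEL_NAMES` list simply reproduces the `class_label` names from the metadata above:

```python
# Class-id -> language-code mapping, copied from the class_label
# names in the dataset_info block above.
LABEL_NAMES = ["az", "by", "cn", "en", "es", "fn", "gr", "jp", "ko",
               "kz", "la", "li", "mo", "no", "pl", "ru", "ua"]

def id_to_lang(label_id: int) -> str:
    """Map an integer class label to its language code."""
    return LABEL_NAMES[label_id]

def lang_to_id(code: str) -> int:
    """Map a language code back to its integer class label."""
    return LABEL_NAMES.index(code)

print(id_to_lang(15))    # -> ru
print(lang_to_id("en"))  # -> 3
```

If you load the dataset with the `datasets` library (the repo id `AlekseyScorpi/docs_on_several_languages` is assumed from the card title), the same mapping is available via `ds["train"].features["label"].int2str(...)` without hand-copying the list.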
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)