---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: label
      dtype:
        class_label:
          names:
            '0': assamese
            '1': bangla
  splits:
    - name: train
      num_bytes: 3195764
      num_examples: 700
    - name: test
      num_bytes: 1340803
      num_examples: 300
  download_size: 4300152
  dataset_size: 4536567
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

Overview

The dataset is composed of 1000 images containing Bangla and Assamese text. Bangla and Assamese are closely related languages that share the Bengali–Assamese script and have similar lexical constructs (https://en.wikipedia.org/wiki/Bengali%E2%80%93Assamese_script#cite_note-MajR-22). The text in this dataset has been generated from the Open English Bible (https://openenglishbible.org/oeb/2022.1/OEB-2022.1-US.txt) by:

  • Selecting the first 500 lines containing between 50 and 100 characters (after stripping leading/trailing spaces)
  • Translating them to Bangla and Assamese using the Google Translate API
  • Dropping words containing any of the following characters: {"ৰ", "র", "ৱ", "অʼ", "অ্যা", "এ্যা", "এʼ", "’", "1", "2", "3", "4", "5", "6", "7", "8", "9", "০", "১", "২", "৩", "৪", "৫", "৬", "৭", "৮", "৯"}, as these characters are either specific to Bangla or Assamese, or inconsistently translated by the Google Translate API
  • Rendering the text as 224×224 PNG images using the NotoSansBengali-Regular.ttf font (https://cdn.jsdelivr.net/gh/notofonts/notofonts.github.io/fonts/NotoSansBengali/unhinted/ttf/NotoSansBengali-Regular.ttf); a sketch of these steps appears after this list
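
The following is a minimal sketch of the generation steps above, not the exact script used to build the dataset. The input file path, font path, font size, and line-wrapping width are illustrative assumptions, and the Google Translate step is omitted.

import textwrap
from PIL import Image, ImageDraw, ImageFont

FONT_PATH = "NotoSansBengali-Regular.ttf"  # assumed local copy of the Noto Sans Bengali font
EXCLUDED = {"ৰ", "র", "ৱ", "অʼ", "অ্যা", "এ্যা", "এʼ", "’",
            "1", "2", "3", "4", "5", "6", "7", "8", "9",
            "০", "১", "২", "৩", "৪", "৫", "৬", "৭", "৮", "৯"}

def select_lines(path, limit=500):
    # Keep the first `limit` lines whose stripped length is between 50 and 100 characters.
    selected = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if 50 <= len(line) <= 100:
                selected.append(line)
            if len(selected) == limit:
                break
    return selected

def drop_excluded_words(text):
    # Remove every word that contains any of the excluded character sequences.
    return " ".join(w for w in text.split()
                    if not any(seq in w for seq in EXCLUDED))

def render_to_image(text, size=224, font_size=18):
    # Draw the (translated and filtered) text onto a white 224x224 canvas.
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(FONT_PATH, font_size)
    wrapped = "\n".join(textwrap.wrap(text, width=20))
    draw.multiline_text((8, 8), wrapped, font=font, fill="black")
    return img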

Dataset Summary

  • Dataset Name: anrikus/lexical_diff_bangla_assamese_v2
  • Dataset Type: Images
  • Image Format: 224×224 PNG images
  • Number of Instances:
    • Train: 700 [350 Bangla + 350 Assamese]
    • Test: 300 [150 Bangla + 150 Assamese]
  • Number of Labels/Classes: 2
  • Languages: Bangla / Assamese

Usage

Installation

Install the Hugging Face datasets library (for example, pip install datasets), then load the dataset:

from datasets import load_dataset

dataset = load_dataset("anrikus/lexical_diff_bangla_assamese_v2")
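
Each example contains a PIL image and an integer label; the class names (0 = assamese, 1 = bangla) are stored in the dataset features. A short inspection example:

print(dataset)  # DatasetDict with 700 train and 300 test examples

example = dataset["train"][0]
image = example["image"]   # 224x224 PIL image
label = example["label"]   # integer class id

label_names = dataset["train"].features["label"].names
print(label_names)          # ['assamese', 'bangla']
print(label_names[label])   # class name for this example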