---
license: mit
task_categories:
  - image-classification
language:
  - en
tags:
  - OCR
  - Handwriting
  - Character Recognition
  - Grayscale Images
  - ASCII Labels
  - Optical Character Recognition
pretty_name: alphanum
size_categories:
  - 100K<n<1M
---

# AlphaNum Dataset


## Abstract

The AlphaNum dataset is a collection of 108,791 grayscale images of handwritten characters, numerals, and special characters, each sized 24x24 pixels. The dataset is designed to support Optical Character Recognition (OCR) research and development.

For consistency, images extracted from the MNIST dataset have been color-inverted to match the black-on-white grayscale style of the AlphaNum dataset.

## Data Sources

  1. Handwriting Characters Database
  2. MNIST
  3. A-Z Handwritten Alphabets in CSV format

To maintain uniformity, all source images have been resized to 24x24 pixels and recolored from white-on-black to black-on-white.
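
For illustration, the following minimal sketch shows one way such a conversion could be done with Pillow; `preprocess_image` and the file names are hypothetical and not part of the dataset's actual build pipeline.

```python
from PIL import Image, ImageOps

def preprocess_image(src_path, dst_path):
    """Hypothetical helper: resize a character image to 24x24 and
    recolor it from white-on-black to black-on-white."""
    image = Image.open(src_path).convert('L')      # force grayscale
    image = image.resize((24, 24), Image.LANCZOS)  # unify the size
    image = ImageOps.invert(image)                 # white-on-black -> black-on-white
    image.save(dst_path)

# preprocess_image('mnist_digit.png', 'alphanum_digit.png')
```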

## Dataset Structure

### Instance Description

Each dataset instance contains an image of a handwritten character or numeral, paired with its corresponding ASCII label.

### Data Organization

The dataset is organized into three separate .zip files: `train.zip`, `test.zip`, and `validation.zip`. Within each split, every ASCII symbol has a dedicated folder whose name is the decimal ASCII value of that symbol.

  - `train.zip` size: 55.9 MB
  - `test.zip` size: 16 MB
  - `validation.zip` size: 8.06 MB
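
As a sketch of how this folder layout can be consumed, assuming the archives have been extracted into directories named after the splits (e.g. `train/`), the loader below maps each folder name back to its character via `chr`; `load_split` is a hypothetical helper, not something shipped with the dataset.

```python
import os
from PIL import Image

def load_split(split_dir):
    """Yield (image, label) pairs from an extracted split, where each
    subfolder name is the decimal ASCII value of its character."""
    for folder in sorted(os.listdir(split_dir)):
        folder_path = os.path.join(split_dir, folder)
        if not os.path.isdir(folder_path):
            continue
        # Folder "999" holds the 'null' (noise) category; all others are ASCII codes.
        label = 'null' if folder == '999' else chr(int(folder))
        for file_name in os.listdir(folder_path):
            image = Image.open(os.path.join(folder_path, file_name)).convert('L')
            yield image, label

# Example usage (assumes train.zip has been extracted to ./train):
# for image, label in load_split('train'):
#     ...
```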

## Dataset Utility

The AlphaNum dataset caters to a variety of use cases including text recognition, document processing, and machine learning tasks. It is particularly instrumental in the development, fine-tuning, and enhancement of OCR models.

## Null Category Image Generation

The 'null' category comprises images generated by injecting noise to mimic randomly distributed light pixels. This category is valuable because it gives the model an explicit 'null' label for regions that contain no character, so it learns to ignore irrelevant parts of the input and performs better in real-life OCR tasks.

The 'null'-labelled images in this dataset were generated with the following script. (Please note that the approach is non-deterministic, so rerunning it will most likely produce different results.)

```python
import os
import numpy as np
from PIL import Image, ImageOps, ImageEnhance

def generate_noisy_images(num_images, image_size=(24, 24), output_dir='NoisyImages', image_format='JPEG'):
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    for i in range(num_images):
        # Random contrast factor, roughly N(30, 15), kept positive
        variation_scale = abs(np.random.normal(30, 15))

        # Generate low-intensity random noise
        noise = np.random.rand(image_size[0], image_size[1]) * 0.05
        noise = (noise * 255).astype(np.uint8)

        # Create a PIL image from the noise
        image = Image.fromarray(noise, mode='L')  # 'L' for grayscale

        # Invert so the faint noise sits on a light background
        inverted_image = ImageOps.invert(image)

        # Exaggerate the noise into visible specks using the random contrast factor
        enhancer = ImageEnhance.Contrast(inverted_image)
        contrast_enhanced_image = enhancer.enhance(variation_scale)

        # Save the image
        contrast_enhanced_image.save(os.path.join(output_dir, f'{i}.jpg'), format=image_format)

generate_noisy_images(5000)
```

*Example of a generated noisy image.*

## ASCII Table and Corresponding File Counts

| ASCII Value | Character | Number of Files |
|---|---|---|
| 33 | ! | 207 |
| 34 | " | 267 |
| 35 | # | 152 |
| 36 | $ | 192 |
| 37 | % | 190 |
| 38 | & | 104 |
| 39 | ' | 276 |
| 40 | ( | 346 |
| 41 | ) | 359 |
| 42 | * | 128 |
| 43 | + | 146 |
| 44 | , | 320 |
| 45 | - | 447 |
| 46 | . | 486 |
| 47 | / | 259 |
| 48 | 0 | 2664 |
| 49 | 1 | 2791 |
| 50 | 2 | 2564 |
| 51 | 3 | 2671 |
| 52 | 4 | 2530 |
| 53 | 5 | 2343 |
| 54 | 6 | 2503 |
| 55 | 7 | 2679 |
| 56 | 8 | 2544 |
| 57 | 9 | 2617 |
| 58 | : | 287 |
| 59 | ; | 223 |
| 60 | < | 168 |
| 61 | = | 254 |
| 62 | > | 162 |
| 63 | ? | 194 |
| 64 | @ | 83 |
| 65 | A | 1923 |
| 66 | B | 1505 |
| 67 | C | 1644 |
| 68 | D | 1553 |
| 69 | E | 2171 |
| 70 | F | 1468 |
| 71 | G | 1443 |
| 72 | H | 1543 |
| 73 | I | 1888 |
| 74 | J | 1470 |
| 75 | K | 1504 |
| 76 | L | 1692 |
| 77 | M | 1484 |
| 78 | N | 1683 |
| 79 | O | 2097 |
| 80 | P | 1605 |
| 81 | Q | 1409 |
| 82 | R | 1811 |
| 83 | S | 1786 |
| 84 | T | 1729 |
| 85 | U | 1458 |
| 86 | V | 1405 |
| 87 | W | 1521 |
| 88 | X | 1366 |
| 89 | Y | 1456 |
| 90 | Z | 1451 |
| 91 | [ | 111 |
| 93 | ] | 104 |
| 94 | ^ | 88 |
| 95 | _ | 80 |
| 96 | ` | 42 |
| 97 | a | 2219 |
| 98 | b | 624 |
| 99 | c | 880 |
| 100 | d | 1074 |
| 101 | e | 2962 |
| 102 | f | 608 |
| 103 | g | 760 |
| 104 | h | 990 |
| 105 | i | 2035 |
| 106 | j | 427 |
| 107 | k | 557 |
| 108 | l | 1415 |
| 109 | m | 879 |
| 110 | n | 1906 |
| 111 | o | 2048 |
| 112 | p | 786 |
| 113 | q | 427 |
| 114 | r | 1708 |
| 115 | s | 1557 |
| 116 | t | 1781 |
| 117 | u | 1319 |
| 118 | v | 555 |
| 119 | w | 680 |
| 120 | x | 463 |
| 121 | y | 680 |
| 122 | z | 505 |
| 123 | { | 73 |
| 124 | \| | 91 |
| 125 | } | 77 |
| 126 | ~ | 59 |
| 999 | null | 4999 |
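
The counts above can be cross-checked against an extracted copy of the dataset with a short sketch like the one below; the split directory names are assumptions based on the archive names.

```python
import os

def count_files_per_label(split_dirs):
    """Count images per ASCII-value folder across the given split directories."""
    counts = {}
    for split_dir in split_dirs:
        for folder in os.listdir(split_dir):
            folder_path = os.path.join(split_dir, folder)
            if os.path.isdir(folder_path):
                counts[folder] = counts.get(folder, 0) + len(os.listdir(folder_path))
    return counts

# counts = count_files_per_label(['train', 'test', 'validation'])
# print(counts.get('65'))  # expected to match the table entry for 'A'
```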