
Model Card for Beyond_MNIST

This model is a deep neural network for classifying handwritten digits (0-9) from images. It was a submission for a coursework assignment and is built using Keras.

Model Details

Model Description

This model is designed to classify handwritten digits from the MNIST dataset. It is a basic implementation and can be a starting point for further exploration and improvement.

  • Developed by: Paul J. Aru
  • Model type: Convolutional Neural Network (CNN)
  • License: GNU GPLv3

Uses

Direct Use

This model can be used to classify handwritten digits from images. However, its performance is not optimal and can be improved further.

Out-of-Scope Use

This model is not intended for real-world applications where high accuracy and robustness are critical. It is for learning purposes and serves as an example for my portfolio.

Bias, Risks, and Limitations

The model may exhibit bias depending on the training data used. The MNIST and EMNIST datasets might contain inherent biases, and the model might learn them. The model might not perform well on unseen data, especially if the handwriting styles differ significantly from those in the training data. This is a basic implementation and likely has limitations in accuracy and generalizability. It serves as a starting point for further exploration and can be improved by experimenting with different architectures, hyperparameters, and data augmentation techniques.

Recommendations

Users should be aware of the limitations of this model and not rely on it for critical tasks. The model can be a good foundation for further development and experimentation in deep learning for handwritten digit classification.

How to Get Started with the Model

from tensorflow.keras.models import load_model

# Load the trained Keras model from the saved HDF5 file
model = load_model("Best_Model.h5")
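Continuing from the loaded model above, here is a minimal inference sketch. It assumes the model expects MNIST-style input, i.e. 28×28 grayscale images scaled to [0, 1] with a trailing channel dimension; adjust the reshaping if the saved model uses a different input shape.

import numpy as np
from tensorflow.keras.datasets import mnist

# Take one MNIST test image and scale it to [0, 1] (assumed input range)
(_, _), (x_test, y_test) = mnist.load_data()
sample = x_test[0].astype("float32") / 255.0
sample = sample.reshape(1, 28, 28, 1)  # batch of one, single channel

# Predict class probabilities and pick the most likely digit
probabilities = model.predict(sample)
predicted_digit = int(np.argmax(probabilities, axis=-1)[0])
print(f"Predicted: {predicted_digit}, actual: {y_test[0]}")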

Training Details

Training Data

The model is trained on the MNIST and EMNIST datasets, two standard datasets for handwritten digit classification.
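As a rough illustration of how such a combined training set can be assembled (the card does not specify the exact EMNIST split or loading code, so the tensorflow_datasets "emnist/digits" split below is an assumption):

import numpy as np
import tensorflow_datasets as tfds
from tensorflow.keras.datasets import mnist

# MNIST digits from Keras
(x_mnist, y_mnist), _ = mnist.load_data()

# EMNIST digits via TensorFlow Datasets; EMNIST images are stored transposed
# relative to MNIST, so swap the height and width axes to match orientation.
images, labels = tfds.as_numpy(
    tfds.load("emnist/digits", split="train", as_supervised=True, batch_size=-1)
)
x_emnist = np.transpose(np.squeeze(images, axis=-1), (0, 2, 1))

# Combined training set, scaled to [0, 1]
x_train = np.concatenate([x_mnist, x_emnist]).astype("float32") / 255.0
y_train = np.concatenate([y_mnist, labels])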

Training Procedure

Preprocessing

The images were preprocessed using data augmentation techniques such as shifting, rotation, resizing and introducing noise.
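A sketch of an augmentation pipeline covering these transformations, using Keras' ImageDataGenerator; the exact ranges and noise level used for this model are not documented, so the values below are illustrative:

import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def add_noise(image):
    # Add mild Gaussian noise and clip pixel values back into [0, 1]
    return np.clip(image + np.random.normal(0.0, 0.05, image.shape), 0.0, 1.0)

datagen = ImageDataGenerator(
    rotation_range=10,                 # rotation
    width_shift_range=0.1,             # horizontal shifting
    height_shift_range=0.1,            # vertical shifting
    zoom_range=0.1,                    # resizing (zoom in/out)
    preprocessing_function=add_noise,  # noise injection
)

# During training, augmented batches can then be drawn with, e.g.:
# model.fit(datagen.flow(x_train.reshape(-1, 28, 28, 1), y_train, batch_size=32), epochs=50)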


Training Hyperparameters

  • Epochs: 50
  • Batch Size: 32
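Only the epoch count and batch size are documented; the optimizer, loss, and validation split in the sketch below are assumptions:

# Hypothetical compile/fit call matching the listed hyperparameters
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(x_train, y_train,
                    epochs=50,
                    batch_size=32,
                    validation_split=0.1)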

Speeds, Sizes, Times


Evaluation

Testing Data, Factors & Metrics

Testing Data

The datasets used for testing include:

  • the MNIST dataset
  • the EMNIST dataset
  • a combined dataset of MNIST and EMNIST
  • an augmented combined dataset of MNIST and EMNIST

Factors

The factor considered in the testing process is the test dataset on which each model is evaluated; performance on each dataset is compared via the misclassification error, i.e. the percentage of incorrectly classified samples.

Metrics

The metric is the misclassification error on each test dataset. After testing all the models, the errors for each model are plotted as a bar chart. The range between the best and worst errors is calculated, and the model with the lowest maximum error across the datasets is identified as the best model.
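A sketch of this selection rule; the error values below are placeholders, not the actual results:

# Hypothetical misclassification errors (%) per model, one value per test dataset,
# in the order: MNIST, EMNIST, combined, augmented combined.
errors = {
    "mnist_model":     [0.8, 12.4, 6.1, 7.3],
    "emnist_model":    [2.1,  1.9, 2.0, 2.6],
    "combined_model":  [1.0,  2.2, 1.5, 1.8],
    "augmented_model": [1.1,  2.0, 1.4, 1.6],
}

# Range between best and worst error for each model
error_ranges = {name: max(e) - min(e) for name, e in errors.items()}

# Best model: the one whose worst-case (maximum) error is lowest
best_model = min(errors, key=lambda name: max(errors[name]))
print(best_model, error_ranges[best_model])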

Results


Summary

In summary, my testing approach involves evaluating the models on different datasets, considering misclassification errors as the primary metric, and comparing the performance of the models to determine the best model.

Environmental Impact

  • Hardware Type: Apple M2 Max
  • Hours used: approximately 1.5 hours (93 minutes, final model only)

Model Card Authors

Paul J. Aru
