Implemented as a Multi-Layer Perceptron (MLP) to classify handwritten digits (0-9)
Model Architecture and Results
The model comprises a flattening layer and three linear layers (hidden dimensions 256 and 64), with ReLU activations to introduce non-linearity. It achieves 95.6% accuracy after 15 training epochs with a batch size of 64. The training and test MNIST datasets are loaded with PyTorch DataLoaders.
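A minimal sketch of this setup in PyTorch is shown below. The layer layout (784 -> 256 -> 64 -> 10), the 15 epochs, and the batch size of 64 come from the description above; the optimizer, learning rate, and variable names are assumptions, not the card's exact training script.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Flatten 28x28 images, then three linear layers with ReLU in between.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 10),  # one logit per digit class
)

# Load the training and test MNIST splits with PyTorch DataLoaders.
transform = transforms.ToTensor()
train_data = datasets.MNIST(root="data", train=True, download=True, transform=transform)
test_data = datasets.MNIST(root="data", train=False, download=True, transform=transform)
train_loader = DataLoader(train_data, batch_size=64, shuffle=True)
test_loader = DataLoader(test_data, batch_size=64)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimizer/lr are assumptions

for epoch in range(15):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluate accuracy on the test split.
model.eval()
correct = 0
with torch.no_grad():
    for images, labels in test_loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
print(f"Test accuracy: {correct / len(test_data):.3f}")
```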