---
license: apache-2.0
datasets:
- mnist
metrics:
- accuracy
pipeline_tag: image-classification
model-index:
- name: mnist_nnn_vision
results:
- task:
type: image-classification # Required. Example: automatic-speech-recognition
name: Image Classification # Optional. Example: Speech Recognition
dataset:
type: mnist # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: MNIST # Required. A pretty name for the dataset. Example: Common Voice (French)
split: test # Optional. Example: test
metrics:
- type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 0.9311 # Required. Example: 20.90
name: Accuracy # Optional. Example: Test WER
verified: true # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
---
# Model Card for NNN (Not a Neural Network)
<!-- Provide a quick summary of what the model is/does. -->
Just a simple exercise I did to learn how to use the PyTorch and TorchHD libraries.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This MNIST model was built with two libraries: PyTorch and TorchHD.
The HD in TorchHD stands for Hyperdimensional Computing: TorchHD is a library for doing hyperdimensional computing on top of PyTorch.
Hyperdimensional Computing (HDC) models are typically less accurate than neural networks, which is why this model's accuracy is only ~82%.
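For context, here is a minimal sketch of what an HDC image encoder looks like in TorchHD, modeled on the MNIST example in the TorchHD repository. The class and variable names are illustrative, not the exact code used to train this model:

```python
import torch
import torch.nn as nn
import torchhd
from torchhd import embeddings

DIMENSIONS = 11000  # hypervector dimensionality (see Training Hyperparameters)
IMG_SIZE = 28       # MNIST images are 28x28
NUM_LEVELS = 1000   # quantization levels for pixel intensities

class Encoder(nn.Module):
    """Maps a 28x28 image to a single high-dimensional vector."""

    def __init__(self, out_features, size, levels):
        super().__init__()
        self.flatten = nn.Flatten()
        # One random hypervector per pixel position...
        self.position = embeddings.Random(size * size, out_features)
        # ...and one "level" hypervector per intensity value.
        self.value = embeddings.Level(levels, out_features)

    def forward(self, x):
        x = self.flatten(x)
        # Bind each pixel's position vector with its intensity vector,
        # then bundle them all into one hypervector per image.
        sample_hv = torchhd.multiset(torchhd.bind(self.position.weight, self.value(x)))
        return torchhd.hard_quantize(sample_hv)
```

Classification then happens in a `Centroid` model that keeps one prototype hypervector per digit class; encoded images are compared against the prototypes by similarity rather than passed through trained layers.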
- **Developed by:** Comrade Cat (me)
- **Shared by:** Comrade Cat (me)
- **Model type:** Image Classification
- **Language(s) (NLP):** None
- **License:** Apache 2.0
- **Finetuned from model:** None; this model was trained from scratch.
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** This repository
- **Paper:** None
- **Demo:** Not available yet.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model is intended as an experiment for comparing TorchHD (HDC) models against conventional PyTorch neural networks.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model can be used directly to recognize handwritten digits. Please be aware that it has lower accuracy than a typical PyTorch neural network.
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
This model could be fine-tuned to improve its accuracy, which is surprisingly low.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Please do not misuse the model. This model will not work for tasks other than handwritten digit recognition.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model is too simple and inaccurate to be meaningfully biased against any social group.
Its main technical limitation is its low accuracy.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be aware of the risks, biases and limitations of the model.
In particular, be aware of how inaccurate this model is!
## How to Get Started with the Model
Download both the model and the encoder. Make sure to download their weights too if you want to fine-tune them!
After that, you can load them in PyTorch:
```python
import torch

# Load the base model and its weights.
# Note: torch.load on a full pickled model needs the original class
# definitions importable (and weights_only=False on PyTorch >= 2.6).
model = torch.load("mnist.pt")
model.load_state_dict(torch.load("mnist_weights.pt"))

# Load the encoder and its weights.
encoder = torch.load("mnist_encoder.pt")
encoder.load_state_dict(torch.load("mnist_encoder_weights.pt"))

# Load an image of a handwritten digit.
# sample_image = (load your image here)

# Encode the loaded image, then classify it.
encoded_image = encoder(sample_image)
outputs = model(encoded_image)
print(outputs)
print("Predicted digit:", outputs.argmax(dim=-1))
```
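As a concrete (hypothetical) way to obtain `sample_image`, you could pull a digit from the MNIST test set with torchvision. The `ToTensor` transform here is an assumption and may need to match whatever preprocessing the encoder was trained with:

```python
import torchvision
import torchvision.transforms as transforms

# Download the MNIST test split and grab one image.
test_set = torchvision.datasets.MNIST(
    root="data", train=False, download=True,
    transform=transforms.ToTensor(),
)
sample_image, label = test_set[0]
sample_image = sample_image.unsqueeze(0)  # add a batch dimension
print("True label:", label)
```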
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was trained on [MNIST](https://huggingface.co/datasets/mnist), the standard dataset of 28×28 handwritten digit images.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [I don't know yet] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- **Dimensions (hypervector size):** 11,000
- **Image size:** 28 × 28
- **Number of levels:** 1,000
- **Batch size:** 2
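For reference, here is a minimal sketch of how these values would plug into a single-pass TorchHD training loop, modeled on the TorchHD MNIST example. It assumes the illustrative `Encoder` sketched under Model Description; the data-loading details are assumptions as well:

```python
import torch
import torchvision
import torchvision.transforms as transforms
from torchhd.models import Centroid

DIMENSIONS = 11000
IMG_SIZE = 28
NUM_LEVELS = 1000
BATCH_SIZE = 2
NUM_CLASSES = 10

train_set = torchvision.datasets.MNIST(
    root="data", train=True, download=True,
    transform=transforms.ToTensor(),
)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=BATCH_SIZE)

encoder = Encoder(DIMENSIONS, IMG_SIZE, NUM_LEVELS)  # as sketched above
model = Centroid(DIMENSIONS, NUM_CLASSES)

# HDC "training" is a single pass: each encoded sample is added to
# the prototype hypervector of its class.
with torch.no_grad():
    for images, labels in train_loader:
        model.add(encoder(images), labels)
    model.normalize()  # normalize class prototypes before inference
```

With a batch size of 2, this single pass over the 60,000 training images is slow, which likely explains the one-hour training time reported below.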
#### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
Training this model took about an hour because I have a potato PC.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
The model was evaluated on the test split of [MNIST](https://huggingface.co/datasets/mnist).
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Accuracy: 82.850% on the MNIST test split. Accuracy is a natural choice here because MNIST's classes are roughly balanced.
### Results
The model reaches 82.850% accuracy on the MNIST test set, which is low compared to even simple neural-network baselines.
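A minimal sketch of how an accuracy figure like this could be computed, assuming the `model`, `encoder`, and torchvision `test_set` from the earlier snippets:

```python
import torch

test_loader = torch.utils.data.DataLoader(test_set, batch_size=64)

correct = 0
total = 0
with torch.no_grad():
    for images, labels in test_loader:
        outputs = model(encoder(images))
        predictions = outputs.argmax(dim=-1)
        correct += (predictions == labels).sum().item()
        total += labels.numel()

print(f"Accuracy: {100.0 * correct / total:.3f}%")
```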
#### Summary
This model is simply too inaccurate for its own good. However, I (Comrade Cat) will try to retrain the model until it has better accuracy.
## Model Card Contact
[More Information Needed]