## Model description
This is an image classification model based on a WideResNet-28-2, trained using the AdaMatch method by Berthelot et al.
The training is based on the keras.io example *Semi-supervision and domain adaptation with AdaMatch* by Sayak Paul.
The main difference from the keras.io example is that I increased the number of epochs to 30, for better performance on the target dataset.
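For reference, here is a minimal sketch of a WideResNet-28-2 classifier in Keras. The exact layer configuration (initializers, batch-norm settings, regularization) is an assumption and may differ from the keras.io example.

```python
# Minimal sketch of a pre-activation WideResNet-28-2 (depth 28, width multiplier 2).
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

DEPTH, WIDTH_MULT, NUM_CLASSES = 28, 2, 10
N_BLOCKS = (DEPTH - 4) // 6  # 4 residual blocks per group for depth 28

def residual_block(x, filters, stride):
    # Pre-activation residual block: BN -> ReLU -> Conv, with a projection
    # shortcut whenever the number of channels or the stride changes.
    shortcut = x
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    if shortcut.shape[-1] != filters or stride != 1:
        shortcut = layers.Conv2D(filters, 1, strides=stride, padding="same")(x)
    x = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 3, strides=1, padding="same")(x)
    return layers.Add()([x, shortcut])

def wide_resnet_28_2(input_shape=(32, 32, 3)):
    inputs = keras.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, padding="same")(inputs)
    for group, filters in enumerate([16 * WIDTH_MULT, 32 * WIDTH_MULT, 64 * WIDTH_MULT]):
        for block in range(N_BLOCKS):
            stride = 2 if (block == 0 and group > 0) else 1
            x = residual_block(x, filters, stride)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(NUM_CLASSES)(x)  # logits
    return keras.Model(inputs, outputs, name="wide_resnet_28_2")

model = wide_resnet_28_2()
```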
## Intended uses & limitations
AdaMatch combines semi-supervised learning (learning from a partially labelled dataset) with unsupervised domain adaptation (adapting a model to a dataset from a different domain without any labels).
In doing so, it performs semi-supervised domain adaptation (SSDA).
The model is intended to show that AdaMatch can carry out SSDA with an accuracy on the target domain (SVHN) that exceeds, or is competitive with, other methods.
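As a rough summary (following the paper's description, up to notation), AdaMatch minimizes a supervised loss on the labelled source images plus a pseudo-label loss on the unlabelled target images, with the target term warmed up over the first half of training:

$$
\mathcal{L}(t) = \mathcal{L}_{\text{source}} + \mu(t)\,\mathcal{L}_{\text{target}},
\qquad
\mu(t) = \frac{1}{2} - \frac{1}{2}\cos\!\left(\min\!\left(\pi, \frac{2\pi t}{T}\right)\right),
$$

where $t$ is the current training step and $T$ the total number of steps, so the target-loss weight ramps from 0 to 1 during the first half of training and stays at 1 afterwards.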
### Limitations
The model was trained with MNIST as the source and SVHN as the target dataset. Thus, the classification performance on MNIST is very good (98.46%), while the accuracy on SVHN is "only" 26.51%. Compared to training the same architecture without AdaMatch, this is still about 17 percentage points better.
## Training and evaluation data
### Training Data
The model was trained using the MNIST (source domain) and SVHN cropped (target domain) datasets. For training, the images were used at a resolution of 32×32×3.
Augmented versions of the source and target data were created in two variants, weakly and strongly augmented, as described in the original paper.
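Below is a minimal sketch of the weak/strong augmentation split using plain `tf.image` ops. The keras.io example uses RandAugment for the strong branch; this stand-in only illustrates the idea, and the specific ops and magnitudes are assumptions (images assumed to be in [0, 1]).

```python
# Weak vs. strong augmentation sketch (not the exact pipeline of the example).
import tensorflow as tf

RESIZE_TO = 32

def weak_augment(image):
    # Weak: random horizontal flip and a small random crop after padding.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.resize_with_crop_or_pad(image, RESIZE_TO + 4, RESIZE_TO + 4)
    image = tf.image.random_crop(image, size=(RESIZE_TO, RESIZE_TO, 3))
    return image

def strong_augment(image):
    # Strong: heavier photometric distortions (stand-in for RandAugment).
    image = weak_augment(image)
    image = tf.image.random_brightness(image, max_delta=0.4)
    image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
    image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
    return tf.clip_by_value(image, 0.0, 1.0)
```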
### Training Procedure
The workflow of AdaMatch is illustrated in a figure in the original paper. For more information, refer to the paper or the original example on keras.io.
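The following is a compressed sketch of the core AdaMatch pseudo-labelling step (distribution alignment plus relative confidence thresholding). Function and variable names are mine; the random logit interpolation, the warm-up weighting, and the full `tf.GradientTape` training loop of the example are omitted for brevity.

```python
import tensorflow as tf

TAU = 0.9  # relative confidence threshold used in the paper

def adamatch_target_loss(source_weak_logits, target_weak_logits, target_strong_logits):
    source_probs = tf.nn.softmax(source_weak_logits, axis=-1)
    target_probs = tf.stop_gradient(tf.nn.softmax(target_weak_logits, axis=-1))

    # Distribution alignment: rescale the target predictions so that their class
    # distribution matches the (batch-estimated) source class distribution.
    alignment = tf.reduce_mean(source_probs, axis=0) / (
        tf.reduce_mean(target_probs, axis=0) + 1e-6)
    aligned = target_probs * alignment
    aligned = aligned / tf.reduce_sum(aligned, axis=-1, keepdims=True)

    # Relative confidence threshold: a fraction TAU of the model's mean
    # confidence on the weakly augmented *source* images.
    threshold = TAU * tf.reduce_mean(tf.reduce_max(source_probs, axis=-1))

    pseudo_labels = tf.argmax(aligned, axis=-1)
    mask = tf.cast(tf.reduce_max(aligned, axis=-1) >= threshold, tf.float32)

    # Cross-entropy on the strongly augmented target images, masked so that only
    # confidently pseudo-labelled examples contribute.
    per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=pseudo_labels, logits=target_strong_logits)
    return tf.reduce_sum(per_example * mask) / tf.maximum(tf.reduce_sum(mask), 1.0)
```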
#### Hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- Epochs: 30
- Source Batch Size: 64
- Target Batch Size: 192 (3 × 64)
- Learning Rate: 0.03
- Weight Decay: 0.0005
- Network Depth: 28
- Network Width Multiplier: 2
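The sketch below restates these hyperparameters as code. The optimizer choice (SGD with Nesterov momentum and cosine decay, as in the AdaMatch paper) and the steps-per-epoch value are assumptions; consult the keras.io example for the exact setup.

```python
import tensorflow as tf
from tensorflow import keras

EPOCHS = 30
SOURCE_BATCH_SIZE = 64
TARGET_BATCH_SIZE = 3 * SOURCE_BATCH_SIZE  # 192
LEARNING_RATE = 0.03
WEIGHT_DECAY = 0.0005
STEPS_PER_EPOCH = 60_000 // SOURCE_BATCH_SIZE  # MNIST training split, assumed
TOTAL_STEPS = EPOCHS * STEPS_PER_EPOCH

lr_schedule = keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=LEARNING_RATE, decay_steps=TOTAL_STEPS)
optimizer = keras.optimizers.SGD(
    learning_rate=lr_schedule, momentum=0.9, nesterov=True)
# WEIGHT_DECAY (0.0005) can be applied via kernel regularizers on the
# WideResNet's Conv2D/Dense layers or an optimizer with decoupled weight decay.
```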
## Evaluation
- Accuracy on the source test set (MNIST): 98.46%
- Accuracy on the target test set (SVHN): 26.51%
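A minimal sketch of how these numbers could be reproduced, assuming `model` is the trained WideResNet (outputting logits) and `mnist_test` / `svhn_test` are preprocessed, batched `tf.data` test sets; these names are placeholders.

```python
import tensorflow as tf

model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
_, source_acc = model.evaluate(mnist_test)
_, target_acc = model.evaluate(svhn_test)
print(f"MNIST (source) accuracy: {source_acc:.2%}, SVHN (target) accuracy: {target_acc:.2%}")
```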