Distil Audio Spectrogram Transformer AudioSet
Distil Audio Spectrogram Transformer AudioSet is an audio classification model based on the Audio Spectrogram Transformer architecture. It is a distilled version of MIT/ast-finetuned-audioset-10-10-0.4593, trained on the AudioSet dataset.
This model was trained using Hugging Face's PyTorch framework. All training was done on a Google Cloud Compute Engine VM with a Tesla A100 GPU. All scripts used for training can be found in the Files and versions tab, along with the training metrics logged via TensorBoard.
| Model | #params | Arch. | Training/Validation data |
| ----- | ------- | ----- | ------------------------ |
| Distil Audio Spectrogram Transformer AudioSet | 44M | Audio Spectrogram Transformer | AudioSet |
The model achieves the following results on evaluation:
The following hyperparameters were used during training:
- optimizer: Adam
- mixed_precision_training: Native AMP
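The hyperparameters above can be sketched as a single plain-PyTorch training step. Only the Adam optimizer and Native AMP (autocast plus a gradient scaler) come from this card; the tiny linear head, learning rate, and random batch below are hypothetical stand-ins, not the actual distilled AST model or its training data:

```python
import torch

# Hypothetical tiny classifier head standing in for the distilled AST model.
model = torch.nn.Linear(16, 527)  # AudioSet defines 527 classes

# Adam, as listed in the hyperparameters; lr and betas/eps defaults are
# assumptions here, since the card's optimizer line is truncated.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Native AMP: autocast the forward pass, scale the loss for backward.
# AMP is only enabled when a GPU is present, mirroring the A100 setup.
use_amp = torch.cuda.is_available()
device_type = "cuda" if use_amp else "cpu"
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
criterion = torch.nn.BCEWithLogitsLoss()  # multi-label AudioSet targets

features = torch.randn(8, 16)                      # toy batch of inputs
targets = torch.randint(0, 2, (8, 527)).float()    # toy multi-hot labels

with torch.autocast(device_type=device_type, enabled=use_amp):
    loss = criterion(model(features), targets)

scaler.scale(loss).backward()  # no-op scaling when AMP is disabled
scaler.step(optimizer)
scaler.update()
```

With AMP disabled (CPU), the scaler passes the loss and optimizer step through unchanged, so the same loop runs on either device.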
| Training Loss | Epoch | Step | Validation Loss | F1 | ROC AUC | Accuracy | mAP |
| ------------- | ----- | ---- | --------------- | -- | ------- | -------- | --- |
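The F1, ROC AUC, accuracy, and mAP columns above are multi-label metrics, since an AudioSet clip can carry several labels at once. A minimal NumPy sketch of micro-averaged F1 and subset accuracy on toy predictions (the scores and the 0.5 threshold are illustrative assumptions, not this model's outputs):

```python
import numpy as np

# Toy multi-label setup: 4 clips, 3 AudioSet-style classes (hypothetical data).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
# Hypothetical sigmoid scores from the classifier head.
y_score = np.array([[0.9, 0.2, 0.8],
                    [0.1, 0.7, 0.4],
                    [0.6, 0.4, 0.2],
                    [0.3, 0.1, 0.7]])
y_pred = (y_score >= 0.5).astype(int)  # threshold sigmoid outputs at 0.5

# Micro-averaged F1: pool true/false positives over all (clip, class) pairs.
tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1_micro = 2 * precision * recall / (precision + recall)

# Subset accuracy: a clip counts as correct only if every label matches.
subset_acc = np.mean(np.all(y_pred == y_true, axis=1))

print(f1_micro, subset_acc)  # → 0.909090..., 0.75
```

ROC AUC and mAP are threshold-free and are computed from the raw scores instead (e.g. with scikit-learn's `roc_auc_score` and `average_precision_score`).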
Consider the biases of the pre-training datasets, which may carry over into this model's predictions.
Distil Audio Spectrogram Transformer AudioSet was trained and evaluated by Ananto Joyoadikusumo, David Samuel Setiawan, and Wilson Wongso. All computation and development were done on Google Cloud.
- Transformers 4.27.0.dev0
- PyTorch 1.13.1+cu117
- Datasets 2.10.0
- Tokenizers 0.13.2