
Diagnosing Medical Images for COVID-19 Severity (Regression task)

The COVID-19 pandemic has strained healthcare resources and prompted discussion about how machine learning can alleviate physician burdens and contribute to diagnosis. Chest x-rays (CXRs) are used for diagnosis of COVID-19, but few studies predict the severity of a patient's condition from CXRs. In this study, we produce a large COVID severity dataset by merging three sources and investigate the efficacy of transfer learning with Vision Transformers (ViTs) on a severity regression task. The ViT achieved the best regression results, with an MSE of 0.5135. Code developed in this project is available at https://github.com/stwhitfield/ift6759_project.

Model description

We fine-tuned the Vision Transformer (ViT) from Hugging Face on our training set for up to 300 epochs using the AdamW optimizer with a learning rate of 0.00001 and a batch size of 10. Early stopping was applied to prevent over-fitting; for the ViT, training stopped after 70 epochs. The only preprocessing difference between the ViT and the other models was that, during data augmentation, images were converted to tensors and normalized with a mean and standard deviation.
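The training setup described above can be sketched as follows. This is a minimal illustration, not the project's actual training script: the tiny `ViTConfig` stands in for the pretrained Hugging Face checkpoint so the snippet runs without downloads, the random tensors stand in for the merged CXR dataset, and the patience value and early-stopping threshold are assumptions. Only the hyperparameters stated in the card (AdamW, learning rate 1e-5, batch size 10, up to 300 epochs, a single-value regression head trained with MSE) come from the source.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import ViTConfig, ViTForImageClassification

# Tiny ViT config as a stand-in for the pretrained checkpoint (assumption:
# the real project fine-tunes a full-size pretrained ViT instead).
config = ViTConfig(
    image_size=32, patch_size=8, num_channels=3,
    hidden_size=32, num_hidden_layers=2, num_attention_heads=2,
    intermediate_size=64,
    num_labels=1, problem_type="regression",  # single continuous severity score
)
model = ViTForImageClassification(config)

# Dummy tensors in place of the merged CXR severity dataset; real inputs
# would be normalized with the dataset mean and standard deviation.
images = torch.randn(40, 3, 32, 32)
scores = torch.rand(40, 1)
loader = DataLoader(TensorDataset(images, scores), batch_size=10, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # lr from the card

best_val, patience, bad_epochs = float("inf"), 5, 0  # patience is an assumption
for epoch in range(300):  # up to 300 epochs, as in the card
    model.train()
    for x, y in loader:
        # problem_type="regression" makes the model compute an MSE loss
        loss = model(pixel_values=x, labels=y).loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Early stopping on validation loss; a real run would use a held-out
    # validation split rather than the training tensors reused here.
    model.eval()
    with torch.no_grad():
        val_loss = model(pixel_values=images, labels=scores).loss.item()
    if val_loss < best_val - 1e-4:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```

With `num_labels=1`, the classification head becomes a single-output regression head, which matches the severity-regression framing of the card.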
