---
license: apache-2.0
metrics:
  - mse
  - mae
  - r_squared
tags:
  - medical
model-index:
  - name: Vit-COVID-19-severity
    results:
      - task:
          name: Image Classification
          type: image-classification
        metrics:
          - name: MSE
            type: mse
            value: 0.5135470628738403
          - name: MAE
            type: mae
            value: 0.5676153898239136
          - name: R^2
            type: r_squared
            value: 0.5378701686859131
pipeline_tag: image-classification
---

# Diagnosing Medical Images for COVID-19 Severity (Regression Task)

The COVID-19 pandemic has strained healthcare resources and prompted discussion about how machine learning can alleviate physician burdens and contribute to diagnosis. Chest x-rays (CXRs) are used to diagnose COVID-19, but few studies predict the severity of a patient's condition from CXRs. In this study, we produce a large COVID severity dataset by merging three sources and investigate the efficacy of transfer learning with vision transformers (ViTs) on the severity regression task. The ViT had the best regression results, with an MSE of 0.5135. Code developed in this project is available at https://github.com/stwhitfield/ift6759_project.

## Model description

We fine-tuned the Vision Transformer (ViT) from Hugging Face on our training set for up to 300 epochs using the AdamW optimizer with a learning rate of 0.00001 and a batch size of 10. Early stopping was applied to prevent over-fitting; for the ViT, training stopped after 70 epochs. The only preprocessing difference between the ViT and the other models was that its inputs were normalized to a tensor image with mean and standard deviation during data augmentation.
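Early stopping halts training once the validation loss has stopped improving for a set number of epochs (the patience). A minimal sketch of that criterion in plain Python, independent of any training framework; the patience value and loss sequence below are illustrative, not taken from this project:

```python
class EarlyStopping:
    """Signal a stop when the monitored loss has not improved for `patience` epochs."""

    def __init__(self, patience=10, min_delta=0.0):
        self.patience = patience          # epochs to wait before stopping
        self.min_delta = min_delta        # minimum decrease that counts as improvement
        self.best_loss = float("inf")
        self.epochs_without_improvement = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True if training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.patience


# Example: the loss plateaus after epoch 2, so training stops once patience runs out.
stopper = EarlyStopping(patience=3)
losses = [1.0, 0.8, 0.7, 0.7, 0.7, 0.7]
stopped_at = None
for epoch, loss in enumerate(losses):
    if stopper.step(loss):
        stopped_at = epoch
        break
```

In a training loop, `step` would be called once per epoch with the validation loss, breaking out of the loop when it returns `True` (here at epoch 5, after three epochs without improvement).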