---
license: mit
---

SegFormer model fine-tuned on AROI

SegFormer model fine-tuned on the AROI dataset (Annotated Retinal OCT Images Database).

Disclaimer: The team releasing SegFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.

Model description

SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head, achieving strong results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer encoder is first pre-trained on ImageNet-1k, after which a decode head is added and the whole model is fine-tuned on a downstream dataset.
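
How to use

Below is a minimal usage sketch with the 🤗 Transformers library for running semantic segmentation with this checkpoint. The repository id and the image path are placeholders (not confirmed by this card); replace them with this model's actual Hub id and an OCT image of your own.

```python
from PIL import Image
import torch
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Placeholder repo id -- substitute the actual Hub id of this model.
model_id = "TUCN/segformer-finetuned-aroi"

processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

# Load an OCT B-scan (path is illustrative).
image = Image.open("oct_scan.png").convert("RGB")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Logits come out at 1/4 of the input resolution; upsample to the
# original image size and take the per-pixel argmax to get class ids.
upsampled = torch.nn.functional.interpolate(
    outputs.logits,
    size=image.size[::-1],  # PIL size is (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
segmentation = upsampled.argmax(dim=1)[0]
```

The resulting `segmentation` tensor assigns one predicted label per pixel; the label-to-class mapping for the AROI retinal layers and fluid regions is given by the model's `config.id2label`.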