---
license: mit
---
# SegFormer model fine-tuned on AROI
SegFormer model fine-tuned on the [AROI (Annotated Retinal OCT Images) database](https://ieeexplore.ieee.org/abstract/document/9596934), a dataset of annotated retinal optical coherence tomography (OCT) scans.
Disclaimer: The team releasing SegFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head, which together achieve strong results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer encoder is first pre-trained on ImageNet-1k, after which the decode head is added and the whole model is fine-tuned end-to-end on a downstream dataset.
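In practice, inference with a fine-tuned SegFormer goes through `SegformerForSemanticSegmentation.from_pretrained(...)` in the Transformers library; the model predicts logits at 1/4 of the input resolution, which are then upsampled and argmax'd into a per-pixel label map. The sketch below illustrates just that post-processing step with dummy logits standing in for the decode-head output (the batch size, class count of 8, and 512×512 input size are illustrative assumptions, not properties confirmed by this card):

```python
import torch
import torch.nn.functional as F

# Dummy decode-head output standing in for model(**inputs).logits:
# batch of 1, 8 classes (assumed), at stride 4 of a 512x512 input,
# since SegFormer predicts at 1/4 of the input resolution.
logits = torch.randn(1, 8, 128, 128)

# Upsample the logits back to the input resolution...
upsampled = F.interpolate(
    logits, size=(512, 512), mode="bilinear", align_corners=False
)

# ...and take the per-pixel argmax to get the segmentation map.
seg_map = upsampled.argmax(dim=1)[0]  # shape (512, 512), integer class ids
```

With a real checkpoint, `SegformerImageProcessor` would handle the resizing and normalization of the input OCT scan before the forward pass.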