ConvNeXt V2
ConvNeXt V2 model pre-trained with the fully convolutional masked autoencoder framework (FCMAE) on ImageNet-1k (1.28 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders.
The weights were converted from the convnextv2_huge_1k_224_fcmae.pt file provided in the official repository.
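Since this checkpoint is the FCMAE pre-trained backbone rather than a fine-tuned classifier, the typical use is feature extraction. Below is a minimal sketch using the transformers library; the repository id and image path are placeholders (assumptions, not taken from this card), so substitute the actual id of this checkpoint.

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, ConvNextV2Model

# Placeholder repository id: replace with the actual id of this checkpoint.
model_id = "facebook/convnextv2-huge-1k-224"

processor = AutoImageProcessor.from_pretrained(model_id)
model = ConvNextV2Model.from_pretrained(model_id)

# Any RGB image; the processor resizes and normalizes it to 224x224.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pooled image embedding from the final stage.
features = outputs.pooler_output
print(features.shape)
```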