ConvNeXt V2 (large-sized model, FCMAE pre-training)
ConvNeXt V2 model pre-trained on ImageNet-1k (1.28 million images, 1,000 classes) at resolution 224x224 using the fully convolutional masked autoencoder (FCMAE) framework. It was introduced in the paper ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders.
The weights were converted from the convnextv2_large_1k_224_fcmae.pt file provided in the official repository.
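Since this checkpoint is FCMAE pre-trained only (no classification head), it is mainly useful for feature extraction or fine-tuning. Below is a minimal sketch using the transformers library's ConvNextV2Model; the repository id "facebook/convnextv2-large-1k-224-fcmae" and the example image URL are assumptions, so substitute the actual Hub repo id of this checkpoint.

```python
# Minimal feature-extraction sketch (assumed repo id; adjust to the real checkpoint).
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ConvNextV2Model

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image (assumption)
image = Image.open(requests.get(url, stream=True).raw)

repo_id = "facebook/convnextv2-large-1k-224-fcmae"  # placeholder repo id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = ConvNextV2Model.from_pretrained(repo_id)

# Preprocess to 224x224 and run a forward pass without gradients.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state is the final convolutional feature map,
# which can be pooled or fed into a downstream head.
print(outputs.last_hidden_state.shape)
```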