---
license: apache-2.0
---

# ConvNext-V2

ConvNext-V2 model pre-trained on ImageNet-1k (1.28 million images, 1,000 classes) at resolution 224x224 using the fully convolutional masked autoencoder (FCMAE) framework. It was introduced in the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808). The weights were converted from the `convnextv2_large_1k_224_fcmae.pt` file provided in the [official repository](https://github.com/facebookresearch/ConvNeXt-V2).
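A minimal sketch of building this architecture with the `ConvNextV2` classes in `transformers` (assumptions: the checkpoint name indicates the Large variant, whose dimensions per the paper are depths (3, 3, 27, 3) and channel widths (192, 384, 768, 1536); the exact Hub model id for this converted checkpoint is not stated here, so the model below is randomly initialized rather than loaded from the Hub):

```python
import torch
from transformers import ConvNextV2Config, ConvNextV2Model

# ConvNeXt-V2-Large dimensions from the paper:
# depths (3, 3, 27, 3), channel widths (192, 384, 768, 1536).
config = ConvNextV2Config(depths=[3, 3, 27, 3], hidden_sizes=[192, 384, 768, 1536])

# Randomly initialized; in practice you would load the converted weights
# with from_pretrained(...) once a Hub model id is available.
model = ConvNextV2Model(config)
model.eval()

pixel_values = torch.zeros(1, 3, 224, 224)  # one 224x224 RGB image
with torch.no_grad():
    outputs = model(pixel_values)

# 4x4 patchify stem, then three 2x downsamplings: 224 / 32 = 7
print(tuple(outputs.last_hidden_state.shape))  # (1, 1536, 7, 7)
```

Because this checkpoint is only FCMAE pre-trained (not fine-tuned), the backbone features above are the natural output; classification requires fine-tuning a head on top.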