# ConvNeXT (base-sized model)

ConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).

Disclaimer: The team releasing ConvNeXT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that is claimed to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.
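
To make that design concrete, here is a minimal PyTorch sketch of a single ConvNeXT-style block, written for illustration only (the class name `ConvNeXtBlock` is ours, and details such as layer scale and stochastic depth are omitted); it is not the `transformers` implementation:

```python
import torch
from torch import nn

class ConvNeXtBlock(nn.Module):
    """Simplified ConvNeXT block: a depthwise 7x7 convolution for spatial
    mixing, followed by an inverted-bottleneck MLP, with LayerNorm and GELU
    replacing ResNet's BatchNorm and ReLU."""

    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        # depthwise conv (groups=dim): one 7x7 filter per channel
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)
        # pointwise convs implemented as Linear layers on the channel dim
        self.pwconv1 = nn.Linear(dim, expansion * dim)
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expansion * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)  # (N, C, H, W) -> (N, H, W, C) for LayerNorm/Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)  # back to (N, C, H, W)
        return residual + x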

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you.
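
If you prefer to browse checkpoints programmatically, a minimal sketch using the `huggingface_hub` client (the search string is illustrative, and a recent version of the library is assumed) could look like:

```python
from huggingface_hub import list_models

# print a few hub checkpoints matching "convnext"; refine the search on the
# hub website to filter by task, dataset, or license
for model in list_models(search="convnext", limit=5):
    print(model.id)
```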

### How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
from PIL import Image
import requests
import torch

# load an example image from the COCO 2017 validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# the base-sized checkpoint trained at 384x384 that this card describes
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-384")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-384")

inputs = feature_extractor(image, return_tensors="pt")

# no gradients are needed for inference
with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

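Wrapping the forward pass in `torch.no_grad()` disables gradient tracking, which saves memory during inference; the feature extractor handles resizing and normalizing the image to the resolution the model expects.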

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/model_doc/convnext).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
  author    = {Zhuang Liu and
               Hanzi Mao and
               Chao{-}Yuan Wu and
               Christoph Feichtenhofer and
               Trevor Darrell and
               Saining Xie},
  title     = {A ConvNet for the 2020s},
  journal   = {CoRR},
  volume    = {abs/2201.03545},
  year      = {2022},
  url       = {https://arxiv.org/abs/2201.03545},
  eprinttype = {arXiv},
  eprint    = {2201.03545},
  timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```