{"guide": {"name": "image-classification-with-vision-transformers", "category": "other-tutorials", "pretty_category": "Other Tutorials", "guide_index": null, "absolute_index": 68, "pretty_name": "Image Classification With Vision Transformers", "content": "# Image Classification with Vision Transformers\n\n\n\n\n## Introduction\n\nImage classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from facial recognition to manufacturing quality control.\n\nState-of-the-art image classifiers are based on the _transformers_ architectures, originally popularized for NLP tasks. Such architectures are typically called vision transformers (ViT). Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in a **single line of Python**, and it will look like the demo on the bottom of the page.\n\nLet's get started!\n\n### Prerequisites\n\nMake sure you have the `gradio` Python package already [installed](/getting_started).\n\n## Step 1 \u2014 Choosing a Vision Image Classification Model\n\nFirst, we will need an image classification model. For this tutorial, we will use a model from the [Hugging Face Model Hub](https://huggingface.co/models?pipeline_tag=image-classification). The Hub contains thousands of models covering dozens of different machine learning tasks.\n\nExpand the Tasks category on the left sidebar and select \"Image Classification\" as our task of interest. You will then see all of the models on the Hub that are designed to classify images.\n\nAt the time of writing, the most popular one is `google/vit-base-patch16-224`, which has been trained on ImageNet images at a resolution of 224x224 pixels. We will use this model for our demo.\n\n## Step 2 \u2014 Loading the Vision Transformer Model with Gradio\n\nWhen using a model from the Hugging Face Hub, we do not need to define the input or output components for the demo. Similarly, we do not need to be concerned with the details of preprocessing or postprocessing.\nAll of these are automatically inferred from the model tags.\n\nBesides the import statement, it only takes a single line of Python to load and launch the demo.\n\nWe use the `gr.Interface.load()` method and pass in the path to the model including the `huggingface/` to designate that it is from the Hugging Face Hub.\n\n```python\nimport gradio as gr\n\ngr.Interface.load(\n \"huggingface/google/vit-base-patch16-224\",\n examples=[\"alligator.jpg\", \"laptop.jpg\"]).launch()\n```\n\nNotice that we have added one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples.\n\nThis produces the following interface, which you can try right here in your browser. When you input an image, it is automatically preprocessed and sent to the Hugging Face Hub API, where it is passed through the model and returned as a human-interpretable prediction. Try uploading your own image!\n\n<gradio-app space=\"gradio/vision-transformer\">\n\n---\n\nAnd you're done! In one line of code, you have built a web demo for an image classifier. 
If you'd like to share the demo with others, try setting `share=True` when you `launch()` the Interface!
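For instance, the demo above can be launched with a temporary public link like this:

```python
import gradio as gr

gr.Interface.load(
    "huggingface/google/vit-base-patch16-224",
    examples=["alligator.jpg", "laptop.jpg"],
).launch(share=True)  # prints a temporary public URL you can send to others
```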