Update app.py
Key Functionality and Components
The application performs the following main steps:
Model Setup: The app uses the huggingface_hub library to download and extract a pre-trained AutoGluon MultiModal image predictor from the Hugging Face model repository apsora/autoML_images_data, then loads that predictor locally for inference; a sketch of this step follows below.
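The snippet below is a minimal sketch of that setup step, assuming the predictor is packaged as a zip archive in the repository. Only the repo id comes from the description above; the archive name (predictor.zip) and the extracted folder layout are illustrative assumptions, not the app's exact code.

```python
import os
import zipfile

import huggingface_hub
from autogluon.multimodal import MultiModalPredictor

def load_predictor(local_dir: str = "model") -> MultiModalPredictor:
    # Download the packaged predictor from the Hugging Face Hub
    # (the archive filename is an assumption).
    archive_path = huggingface_hub.hf_hub_download(
        repo_id="apsora/autoML_images_data",
        filename="predictor.zip",
    )
    # Extract the archive so AutoGluon can load the predictor directory.
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(local_dir)
    # Load the extracted predictor for local inference (subfolder name is assumed).
    return MultiModalPredictor.load(os.path.join(local_dir, "predictor"))

predictor = load_predictor()
```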
Prediction Logic: The do_predict function takes an uploaded image, saves it temporarily, and then uses the loaded MultiModalPredictor to classify it as either "🍅 Tomato" or "🚫 Not a tomato", returning the class probabilities for display; a sketch of this function follows below.
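A minimal sketch of that prediction function is shown below, reusing the predictor from the previous sketch. The image column name ("image") and the raw class labels (tomato / not_tomato), as well as the temporary-file handling, are assumptions for illustration; only the display labels come from the description above.

```python
import tempfile

import pandas as pd
import PIL.Image

def do_predict(image: PIL.Image.Image) -> dict:
    # Gradio delivers a PIL image, but the predictor expects image file paths,
    # so write the upload to a temporary file first.
    with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as tmp:
        image.save(tmp.name)
        proba = predictor.predict_proba(pd.DataFrame({"image": [tmp.name]}))
    # Map the per-class probabilities to the display labels used in the UI.
    row = proba.iloc[0]
    return {
        "🍅 Tomato": float(row.get("tomato", 0.0)),
        "🚫 Not a tomato": float(row.get("not_tomato", 0.0)),
    }
```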
Interactive User Interface (Gradio):
It creates a user-friendly web interface using Gradio where users can upload an image or capture one using a webcam.
When a new image is provided, do_predict runs automatically and its result is displayed in a Gradio Label component, showing the predicted class and the confidence scores (probabilities) for both "Tomato" and "Not a tomato". Example images are included to demonstrate the application; a minimal sketch of this interface follows below.
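The sketch below assumes Gradio 4.x (the sources argument of gr.Image) and reuses the do_predict function from the earlier sketch; the page title and the example image paths are placeholders, not the app's actual files.

```python
import gradio as gr

def classify(img):
    # The change event also fires when the image is cleared; skip that case.
    if img is None:
        return None
    return do_predict(img)

with gr.Blocks() as demo:
    gr.Markdown("# 🍅 Tomato or not?")
    image_input = gr.Image(type="pil", sources=["upload", "webcam"], label="Image")
    label_output = gr.Label(num_top_classes=2, label="Prediction")

    # Re-run the classifier automatically whenever a new image arrives.
    image_input.change(fn=classify, inputs=image_input, outputs=label_output)

    # Placeholder example images demonstrating the app.
    gr.Examples(examples=["examples/tomato.jpg", "examples/not_tomato.jpg"],
                inputs=image_input)

demo.launch()
```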
In essence, this is a deployable minimal example demonstrating how to serve a machine learning model, specifically an AutoGluon image classifier, within a Gradio interface.
@@ -12,10 +12,6 @@ import PIL.Image # For image I/O
 
 import huggingface_hub # For downloading model assets
 import autogluon.multimodal # For loading AutoGluon image classifier
-import os
-os.environ['HF_HOME'] = '/data/huggingface'
-
-# ... your other imports follow
 
 
 # --- Model Loading ---