Update src/streamlit_app.py
src/streamlit_app.py (CHANGED, +2 −2)
```diff
@@ -130,14 +130,14 @@ processor = CLIPProcessor.from_pretrained(
 )
 
 with open(mapping_path, "rb") as f:
-    id_map =
+    id_map = pickle.load(f)
 
 st.title("Fashion Product Image-Text Retrieval")
 
 st.markdown("""
 ### **Overview**
 
-In this project,
+In this project, I demonstrate an **Image-Text Retrieval** system for fashion products. The system uses a fine-tuned **CLIP model** (`clip-vit-base-patch32`) to match images with relevant text descriptions. We have a dataset of **1000 fashion product images**, stored on **Amazon S3**. Each image is associated with detailed product descriptions, such as **product type**, **color**, **category**, and **brand**.
 
 The goal of this system is to retrieve the most relevant fashion images based on a given text prompt (e.g., "red dress") and vice versa. With this system, users can search for fashion products in a more intuitive, text-based manner.
 
```
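The retrieval step the overview describes (text prompt in, most relevant images out) can be sketched as a cosine-similarity ranking over precomputed CLIP embeddings. This is a minimal illustration, not the app's actual code: the `image_embeddings` array, the `id_map` contents, and the `top_k_ids` helper are assumptions; in the real app the text embedding would come from the fine-tuned `clip-vit-base-patch32` model.

```python
import numpy as np

def top_k_ids(text_emb, image_embs, id_map, k=5):
    """Return the ids of the k images whose embeddings best match the query.

    text_emb:   1-D CLIP text embedding for the prompt (e.g., "red dress").
    image_embs: 2-D array of precomputed CLIP image embeddings, one row per image.
    id_map:     mapping from row index to product id (as loaded from the pickle).
    """
    # Normalize both sides so the dot product equals cosine similarity.
    t = text_emb / np.linalg.norm(text_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ t
    # Rank images by descending similarity and keep the top k.
    order = np.argsort(-sims)[:k]
    return [id_map[i] for i in order]
```

Image-to-text retrieval (the "vice versa" direction) is the same ranking with the roles swapped: one image embedding scored against a matrix of text embeddings.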