roy214 committed
Commit 4315a06 · verified · Parent(s): ec373c2

Update src/streamlit_app.py

Files changed (1)
  1. src/streamlit_app.py +2 -2
src/streamlit_app.py CHANGED
@@ -130,14 +130,14 @@ processor = CLIPProcessor.from_pretrained(
 )
 
 with open(mapping_path, "rb") as f:
-    id_map = json.load(f)
+    id_map = pickle.load(f)
 
 st.title("Fashion Product Image-Text Retrieval")
 
 st.markdown("""
 ### **Overview**
 
-In this project, we demonstrate an **Image-Text Retrieval** system for fashion products. The system uses a fine-tuned **CLIP model** (`clip-vit-base-patch32`) to match images with relevant text descriptions. We have a dataset of **1000 fashion product images**, stored securely on **Amazon S3**. Each image is associated with detailed product descriptions, such as **product type**, **color**, **category**, and **brand**.
+In this project, I demonstrate an **Image-Text Retrieval** system for fashion products. The system uses a fine-tuned **CLIP model** (`clip-vit-base-patch32`) to match images with relevant text descriptions. We have a dataset of **1000 fashion product images**, stored on **Amazon S3**. Each image is associated with detailed product descriptions, such as **product type**, **color**, **category**, and **brand**.
 
 The goal of this system is to retrieve the most relevant fashion images based on a given text prompt (e.g., "red dress") and vice versa. With this system, users can search for fashion products in a more intuitive, text-based manner.
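The one-line fix swaps `json.load` for `pickle.load` because the id map is serialized with pickle, whose binary framing `json` cannot parse. A minimal sketch of why the original line fails, plus the cosine-similarity ranking flow the overview describes (the file name, embeddings, and id-map contents below are stand-ins for illustration, not the app's actual data):

```python
import json
import os
import pickle
import tempfile

import numpy as np

# Build a toy id map the way a preprocessing step presumably would:
# pickle it to disk (names and contents here are stand-ins).
id_map = {0: "red dress", 1: "blue jeans", 2: "white sneakers"}
path = os.path.join(tempfile.mkdtemp(), "id_map.pkl")
with open(path, "wb") as f:
    pickle.dump(id_map, f)

# json.load chokes on pickle's binary format; pickle.load is required.
with open(path, "rb") as f:
    try:
        json.load(f)
        loaded_with_json = True
    except (UnicodeDecodeError, json.JSONDecodeError):
        loaded_with_json = False

with open(path, "rb") as f:
    restored = pickle.load(f)

# Stand-in retrieval: rank unit-normalized "image embeddings" against a
# "text embedding" by cosine similarity, then map the best index back
# through the id map -- the same flow the app follows with CLIP features.
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(3, 512))
image_embs /= np.linalg.norm(image_embs, axis=1, keepdims=True)
text_emb = image_embs[0] + 0.01 * rng.normal(size=512)  # near item 0
text_emb /= np.linalg.norm(text_emb)
scores = image_embs @ text_emb
best = int(np.argmax(scores))
print(restored[best])  # prints "red dress"
```

In the real app the image embeddings would come from the fine-tuned CLIP model's image encoder and the text embedding from `model.get_text_features` on the user's prompt; the ranking step is the same dot product over normalized vectors.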