import streamlit as st

# Helper shared with the demo's home page; not referenced in this page.
from home import read_markdown_file


def app():
    st.title("Examples & Applications")
    st.write(
        """
## Image Retrieval

Even though we trained the Italian CLIP model on far fewer examples than the original
OpenAI CLIP, our training choices and quality datasets led to impressive results!
Here, we collected a few of **the most impressive text-image associations** learned by our model.

Remember, you can head to the **Text to Image** section of the demo at any time to test your own 🤌 Italian queries!
"""
    )

    st.markdown("### 1. Actors in Scenes")
    st.markdown("These examples come from the CC dataset.")

    st.subheader("una coppia")
    st.markdown("*a couple*")
    st.image("static/img/examples/couple_0.jpeg")

    col1, col2 = st.beta_columns(2)
    col1.subheader("una coppia con il tramonto sullo sfondo")
    col1.markdown("*a couple with the sunset in the background*")
    col1.image("static/img/examples/couple_1.jpeg")
    col2.subheader("una coppia che passeggia sulla spiaggia")
    col2.markdown("*a couple walking on the beach*")
    col2.image("static/img/examples/couple_2.jpeg")

    st.subheader("una coppia che passeggia sulla spiaggia al tramonto")
    st.markdown("*a couple walking on the beach at sunset*")
    st.image("static/img/examples/couple_3.jpeg")

    st.markdown("### 2. Dresses")
    st.markdown("These examples come from the Unsplash dataset.")

    col1, col2 = st.beta_columns(2)
    col1.subheader("un vestito primaverile")
    col1.markdown("*a dress for the spring*")
    col1.image("static/img/examples/vestito1.png")
    col2.subheader("un vestito autunnale")
    col2.markdown("*a dress for the autumn*")
    col2.image("static/img/examples/vestito_autunnale.png")
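For context, each page of this Space exposes an app() function that a top-level launcher imports and calls. The launcher itself is not shown here; the sketch below is one plausible way to wire it up, assuming this file is saved as examples.py, that the home module also defines an app(), and that navigation is a hand-rolled sidebar radio. The file name app.py and the page labels are illustrative, not taken from the repository.

# app.py - hypothetical launcher, not part of the repository file above
import streamlit as st

import examples
import home

# Each page module exposes an app() function; the launcher just dispatches to it.
PAGES = {
    "Home": home,
    "Examples & Applications": examples,
}


def main():
    st.sidebar.title("Navigation")
    choice = st.sidebar.radio("Go to", list(PAGES.keys()))
    PAGES[choice].app()


main()

One version note: newer Streamlit releases renamed st.beta_columns to st.columns, so running the page above on a current install would require that one-line substitution.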