srisweet committed
Commit 754239b
1 parent: 8f2b799

Typos fixed

Files changed (1)
  1. examples.py +7 -6
examples.py CHANGED
@@ -6,13 +6,14 @@ def app():
     #st.title("Examples & Applications")
     st.markdown("<h1 style='text-align: center; color: #CD212A;'> Examples & Applications </h1>", unsafe_allow_html=True)
     st.markdown("<h2 style='text-align: center; color: #008C45; font-weight:bold;'> Complex Queries -Image Retrieval </h2>", unsafe_allow_html=True)
+
     st.write(
         """


-        Even though we trained the Italian CLIP model on way less examples than the original
-        OpenAI's CLIP, our training choices and quality datasets led to impressive results!
-        Here, we collected few of **the most impressive text-image associations** learned by our model.
+        Even though we trained the Italian CLIP model on way less examples(~1.4M) than the original
+        OpenAI's CLIP (~400M), our training choices and quality datasets led to impressive results!
+        Here, we present some of **the most impressive text-image associations** learned by our model.

         Remember you can head to the **Text to Image** section of the demo at any time to test your own🤌 Italian queries!

@@ -20,7 +21,7 @@ def app():
     )

     st.markdown("### 1. Actors in Scenes")
-    st.markdown("These examples comes from the CC dataset")
+    st.markdown("These examples were taken from the CC dataset")

     st.subheader("una coppia")
     st.markdown("*a couple*")
@@ -40,7 +41,7 @@ def app():
     st.image("static/img/examples/couple_3.jpeg")

     st.markdown("### 2. Dresses")
-    st.markdown("These examples comes from the Unsplash dataset")
+    st.markdown("These examples were taken from the Unsplash dataset")

     col1, col2 = st.beta_columns(2)
     col1.subheader("un vestito primavrile")
@@ -58,4 +59,4 @@ def app():
         "Is the DALLE-mini logo an *avocado* or an armchair (*poltrona*)?")

     st.image("static/img/examples/dalle_mini.png")
-    st.markdown("It seems it's half an armchair and half an avocado! We thank the team for the great idea :)")
+    st.markdown("It seems it's half an armchair and half an avocado! We thank the DALLE-mini team for the great idea :)")
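A side note on the API visible in this diff: st.beta_columns is the pre-1.0 Streamlit name for what current releases expose as st.columns. Below is a minimal runnable sketch of the two-column pattern from section 2, using the current API; the second query and both image paths are hypothetical stand-ins for illustration, not files from this repo.

import streamlit as st

st.markdown("### 2. Dresses")
st.markdown("These examples were taken from the Unsplash dataset")

# st.columns(2) replaces the deprecated st.beta_columns(2) seen in the diff;
# each returned column accepts the same element calls as st itself.
col1, col2 = st.columns(2)

col1.subheader("un vestito primavrile")         # "a spring dress" (spelling kept as committed)
col1.image("static/img/examples/dress_1.jpeg")  # hypothetical path, for illustration only

col2.subheader("un vestito elegante")           # hypothetical second query, "an elegant dress"
col2.image("static/img/examples/dress_2.jpeg")  # hypothetical path, for illustration only

Run with `streamlit run examples_sketch.py`; on older Streamlit versions where only the beta API exists, swap st.columns back to st.beta_columns.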