amphora committed on
Commit
42c971d
1 Parent(s): 8ff0261

chore: added explanation

Files changed (1)
  1. image2text.py +2 -1
image2text.py CHANGED
@@ -13,7 +13,8 @@ def app(model_name):
     st.title("Zero-shot Image Classification")
     st.markdown(
         """
-    Some text goes in here.
+    This demo explores KoCLIP's zero-shot prediction capability: given an image and a set of candidate captions, it predicts which caption most likely describes the image.
+    KoCLIP is a retraining of OpenAI's CLIP model on 82,783 images from the MSCOCO dataset paired with Korean caption annotations, obtained as Korean translations from AI Hub. The base model, koclip, uses klue/roberta as its text encoder and openai/clip-vit-base-patch32 as its image encoder; the larger model, koclip-large, uses klue/roberta as its text encoder and the bigger google/vit-large-patch16-224 as its image encoder.
         """
     )