jaketae committed
Commit fedeff8
1 Parent(s): 8375841

style: edit text description white space

Files changed (1):
  image2text.py +2 -2
image2text.py CHANGED
@@ -17,8 +17,8 @@ def app(model_name):
     """
     This demonstration explores capability of KoCLIP in the field of Zero-Shot Prediction. This demo takes a set of image and captions from, and predicts the most likely label among the different captions given.

-    KoCLIP is a retraining of OpenAI's CLIP model using 82,783 images from [MSCOCO](https://cocodataset.org/#home) dataset and Korean caption annotations. Korean translation of caption annotations were obtained from [AI Hub](https://aihub.or.kr/keti_data_board/visual_intelligence). Base model `koclip` uses `klue/roberta` as text encoder and `openai/clip-vit-base-patch32` as image encoder. Larger model `koclip-large` uses `klue/roberta` as text encoder and bigger `google/vit-large-patch16-224` as image encoder.
-    """
+    KoCLIP is a retraining of OpenAI's CLIP model using 82,783 images from [MSCOCO](https://cocodataset.org/#home) dataset and Korean caption annotations. Korean translation of caption annotations were obtained from [AI Hub](https://aihub.or.kr/keti_data_board/visual_intelligence). Base model `koclip` uses `klue/roberta` as text encoder and `openai/clip-vit-base-patch32` as image encoder. Larger model `koclip-large` uses `klue/roberta` as text encoder and bigger `google/vit-large-patch16-224` as image encoder.
+    """
     )

     query1 = st.file_uploader("Choose an image...", type=["jpg", "jpeg", "png"])
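The zero-shot prediction the description refers to follows the usual CLIP recipe: embed the image and each candidate caption, compare them by cosine similarity, and take a softmax over the candidates. A minimal sketch of that scoring step, assuming the embeddings have already been produced by KoCLIP's text and image encoders (the function name and toy vectors below are illustrative, and a real CLIP head also applies a learned temperature before the softmax):

```python
import math

def zero_shot_predict(image_emb, caption_embs):
    """Score candidate captions against one image, CLIP-style.

    image_emb: image embedding vector; caption_embs: list of caption
    embedding vectors. Returns softmax probabilities over the captions.
    """
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    img = unit(image_emb)
    # Cosine similarity between the image and each normalized caption.
    sims = [sum(a * b for a, b in zip(img, unit(c))) for c in caption_embs]
    # Softmax, shifted by the max similarity for numerical stability.
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    total = sum(exps)
    return [e / total for e in exps]

# Toy embeddings: the first "caption" points almost the same way as the image.
image = [1.0, 0.0, 0.0]
captions = [[0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [0.1, 0.2, 0.9]]
probs = zero_shot_predict(image, captions)
print(probs.index(max(probs)))  # → 0
```

The demo then reports the highest-probability caption as the predicted label for the uploaded image.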