---
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
  candidate_labels: 기타 치는 고양이, 피아노 치는 강아지
  example_title: Guitar, cat and dog
language: ko
license: mit
---

# clip-vit-large-patch14-ko

A Korean CLIP model trained with the method from [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), which distills a text encoder for a new language from a monolingual teacher using parallel sentence pairs.

Training code: <https://github.com/Bing-su/KoCLIP_training_code>

Training data: all of the Korean-English parallel data available on AIHUB
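
The sketch below illustrates the paper's distillation objective; it is an assumption for illustration, not the repository's actual training script. The sentence pair is made up, and the finished Korean model stands in for the student (real training would start from an encoder that has not yet learned the teacher's embedding space):

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoProcessor, CLIPModel, CLIPProcessor

# Frozen English teacher; the finished Korean model stands in for the student.
teacher = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
teacher_proc = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
student = AutoModel.from_pretrained("Bingsu/clip-vit-large-patch14-ko")
student_proc = AutoProcessor.from_pretrained("Bingsu/clip-vit-large-patch14-ko")

en = ["Two cats are lying on a sofa."]       # teacher input (English)
ko = ["고양이 두 마리가 소파에 누워 있다."]  # parallel student input (Korean)

with torch.no_grad():  # the teacher is never updated
    target = teacher.get_text_features(
        **teacher_proc(text=en, return_tensors="pt", padding=True)
    )
pred = student.get_text_features(
    **student_proc(text=ko, return_tensors="pt", padding=True)
)

# The student's text encoder learns to reproduce the teacher's embedding of
# the parallel English sentence; this MSE is minimized over the corpus.
loss = nn.MSELoss()(pred, target)
loss.backward()
```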

## How to Use

#### 1. `AutoModel` and `AutoProcessor`

```python
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

repo = "Bingsu/clip-vit-large-patch14-ko"
model = AutoModel.from_pretrained(repo)
processor = AutoProcessor.from_pretrained(repo)

# COCO validation image: two cats lying on a couch
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Candidate labels: "two cats", "two dogs"
inputs = processor(
    text=["고양이 두 마리", "개 두 마리"],
    images=image,
    return_tensors="pt",
    padding=True,
)
with torch.inference_mode():
    outputs = model(**inputs)

# Image-text similarity logits -> probabilities over the two labels
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
```

```python
>>> probs
tensor([[0.9974, 0.0026]])
```
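
Beyond the joint forward pass, the two encoders can be queried separately, e.g. for image-text retrieval. A minimal sketch continuing from the snippet above (`model`, `processor`, and `image` are reused; the feature methods are the standard CLIP ones):

```python
# Standalone embeddings from each tower
text_inputs = processor(text=["고양이 두 마리"], return_tensors="pt", padding=True)
image_inputs = processor(images=image, return_tensors="pt")

with torch.inference_mode():
    text_emb = model.get_text_features(**text_inputs)     # (1, projection_dim)
    image_emb = model.get_image_features(**image_inputs)  # (1, projection_dim)

# Cosine similarity between L2-normalized embeddings
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
similarity = (text_emb @ image_emb.T).item()
```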

#### 2. Zero-shot classification with `pipeline`

```python
from transformers import pipeline

repo = "Bingsu/clip-vit-large-patch14-ko"
pipe = pipeline("zero-shot-image-classification", model=repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
# Labels: "one cat", "two cats", "cat friends lounging on a pink sofa"
result = pipe(
    images=url,
    candidate_labels=["고양이 한 마리", "고양이 두 마리", "분홍색 소파에 드러누운 고양이 친구들"],
    hypothesis_template="{}",  # use each label as-is, without a template
)
```

```python
>>> result
[{'score': 0.9907576441764832, 'label': '분홍색 소파에 드러누운 고양이 친구들'},
 {'score': 0.009206341579556465, 'label': '고양이 두 마리'},
 {'score': 3.606083555496298e-05, 'label': '고양이 한 마리'}]
```
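
`hypothesis_template="{}"` feeds each Korean label to the text encoder verbatim; the pipeline's default template is an English sentence, which would not suit Korean labels. A Korean template can be supplied instead. The call below is illustrative, with made-up labels ("cat", "dog") and a hypothetical template meaning "a photo of {}":

```python
result = pipe(
    images=url,
    candidate_labels=["고양이", "강아지"],
    hypothesis_template="{}의 사진",
)
```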