Bingsu committed on
Commit 53d7b87
1 Parent(s): bf5599b

Update README.md

Files changed (1)
  1. README.md +60 -26
README.md CHANGED
@@ -1,47 +1,81 @@
  ---
  license: mit
- tags:
- - generated_from_keras_callback
- model-index:
- - name: clip-vit-base-patch32-ko
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->
-
  # clip-vit-base-patch32-ko

- This model is a fine-tuned version of [Bingsu/clip-vit-base-patch32-ko](https://huggingface.co/Bingsu/clip-vit-base-patch32-ko) on an unknown dataset.
- It achieves the following results on the evaluation set:

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - optimizer: None
- - training_precision: float32

- ### Training results

- ### Framework versions

- - Transformers 4.23.1
- - TensorFlow 2.9.2
- - Tokenizers 0.13.1
  ---
+ widget:
+ - src: http://images.cocodataset.org/val2017/000000039769.jpg
+   candidate_labels: 고양이, 강아지, 토끼
+   example_title: cat and remote
+ language: ko
  license: mit
  ---

  # clip-vit-base-patch32-ko

+ A Korean CLIP model trained with the method from [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813).
+
+ Training code: <https://github.com/Bing-su/KoCLIP_training_code>
+
+ Data used: all of the Korean-English parallel data on AIHUB
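
A sketch of the training objective may help readers who do not follow the paper link. The following is only an illustration of the knowledge-distillation idea described above, not the actual training script (see the linked KoCLIP_training_code repository for that); the teacher checkpoint `openai/clip-vit-base-patch32`, the optimizer settings, and the sentence pair are assumptions, not details taken from this model card.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, CLIPModel

# Teacher: a frozen English CLIP text encoder (assumed to be the original OpenAI checkpoint).
teacher = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
teacher_tok = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

# Student: a CLIP text encoder paired with the Korean tokenizer. The released
# checkpoint is loaded here only so the shapes match the Korean vocabulary;
# real training would start from a not-yet-Korean text tower.
student = CLIPModel.from_pretrained("Bingsu/clip-vit-base-patch32-ko").train()
student_tok = AutoTokenizer.from_pretrained("Bingsu/clip-vit-base-patch32-ko")

optimizer = torch.optim.AdamW(
    list(student.text_model.parameters()) + list(student.text_projection.parameters()),
    lr=1e-5,
)

# One hypothetical Korean-English parallel pair, standing in for the AIHUB data.
ko_sents = ["소파 위에 고양이 두 마리가 앉아 있다."]
en_sents = ["Two cats are sitting on a sofa."]

# The student is trained so that its embedding of the Korean sentence matches
# the teacher's embedding of the English translation (MSE loss, as in the paper).
with torch.no_grad():
    target = teacher.get_text_features(**teacher_tok(en_sents, return_tensors="pt", padding=True))

pred = student.get_text_features(**student_tok(ko_sents, return_tensors="pt", padding=True))
loss = F.mse_loss(pred, target)
loss.backward()
optimizer.step()
```

In this kind of setup the vision tower is typically left untouched: pulling the student text encoder into the teacher's existing embedding space is what lets the model keep working with the original CLIP image encoder.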

+ ## How to Use

+ #### 1.

+ ```python
+ import requests
+ import torch
+ from PIL import Image
+ from transformers import AutoModel, AutoProcessor
+
+ repo = "Bingsu/clip-vit-base-patch32-ko"
+ model = AutoModel.from_pretrained(repo)
+ processor = AutoProcessor.from_pretrained(repo)
+
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+ inputs = processor(text=["고양이 두 마리", "개 두 마리"], images=image, return_tensors="pt", padding=True)
+ with torch.inference_mode():
+     outputs = model(**inputs)
+ logits_per_image = outputs.logits_per_image
+ probs = logits_per_image.softmax(dim=1)
+ ```

+ ```python
+ >>> probs
+ tensor([[0.9926, 0.0074]])
+ ```
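
Not part of the original card, but the same checkpoint can also produce separate image and text embeddings, which is the usual setup for retrieval. `get_image_features` and `get_text_features` are standard `CLIPModel` methods; the Korean sentences below are illustrative placeholders.

```python
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

repo = "Bingsu/clip-vit-base-patch32-ko"
model = AutoModel.from_pretrained(repo)
processor = AutoProcessor.from_pretrained(repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

with torch.inference_mode():
    # Projected (but unnormalized) CLIP embeddings.
    image_emb = model.get_image_features(**processor(images=image, return_tensors="pt"))
    text_emb = model.get_text_features(
        **processor(text=["소파 위의 고양이", "축구를 하는 아이들"], return_tensors="pt", padding=True)
    )

# Cosine similarity between the image and each Korean sentence.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
similarity = image_emb @ text_emb.T
```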

+ #### 2.

+ ```python
+ from transformers import pipeline
+
+ repo = "Bingsu/clip-vit-base-patch32-ko"
+ pipe = pipeline("zero-shot-image-classification", model=repo)
+
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ result = pipe(images=url, candidate_labels=["고양이 한 마리", "고양이 두 마리", "분홍색 소파에 드러누운 고양이 친구들"], hypothesis_template="{}")
+ ```

+ ```python
+ >>> result
+ [{'score': 0.9456236958503723, 'label': '분홍색 소파에 드러누운 고양이 친구들'},
+  {'score': 0.05315302312374115, 'label': '고양이 두 마리'},
+  {'score': 0.0012233294546604156, 'label': '고양이 한 마리'}]
+ ```
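
A note that is not in the original card: the zero-shot image classification pipeline formats every candidate label into `hypothesis_template` before encoding it (the default template is an English sentence along the lines of "This is a photo of {}."), so passing `hypothesis_template="{}"` uses the Korean labels verbatim. A Korean prompt template could be supplied instead; the template below is only an illustration, not something used by the author.

```python
from transformers import pipeline

repo = "Bingsu/clip-vit-base-patch32-ko"
pipe = pipeline("zero-shot-image-classification", model=repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
# Hypothetical Korean prompt template, roughly "a photo of {}".
result = pipe(
    images=url,
    candidate_labels=["고양이", "강아지", "토끼"],
    hypothesis_template="{}의 사진",
)
```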

+ ## Tokenizer

+ The tokenizer was trained with `.train_new_from_iterator` starting from the original CLIP tokenizer, on a mixture of Korean and English data in a 7:3 ratio.
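
The tokenizer-training code is not included in the card. A minimal sketch of the procedure just described might look like the following; the corpora, the mixing code, and the vocabulary size are placeholders, not the values actually used.

```python
from transformers import AutoTokenizer

# Start from the original (fast) CLIP tokenizer.
old_tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder corpora: in reality, Korean and English text mixed at a 7:3 ratio.
korean_texts = ["소파 위에 고양이 두 마리가 앉아 있다."] * 7
english_texts = ["Two cats are sitting on a sofa."] * 3

def corpus_iterator(batch_size=1000):
    data = korean_texts + english_texts
    for i in range(0, len(data), batch_size):
        yield data[i : i + batch_size]

# vocab_size is an assumption here, not the value used for this model.
new_tokenizer = old_tokenizer.train_new_from_iterator(corpus_iterator(), vocab_size=49408)
new_tokenizer.save_pretrained("clip-vit-base-patch32-ko-tokenizer")
```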

+ <https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/clip/modeling_clip.py#L661-L666>
+ ```python
+ # text_embeds.shape = [batch_size, sequence_length, transformer.width]
+ # take features from the eot embedding (eot_token is the highest number in each sequence)
+ # casting to torch.int for onnx compatibility: argmax doesn't support int64 inputs with opset 14
+ pooled_output = last_hidden_state[
+     torch.arange(last_hidden_state.shape[0]), input_ids.to(torch.int).argmax(dim=-1)
+ ]
+ ```

+ Because the CLIP model picks the token with the largest id when computing `pooled_output`, the eos token has to be the last token of the vocabulary (the one with the largest id).
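
As a quick, hedged check of that constraint (not part of the original card): if the tokenizer was built as described, the largest token id in any encoded sequence should be the eos token, sitting at the final position, which is exactly where CLIP's argmax-based pooling lands.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Bingsu/clip-vit-base-patch32-ko")

ids = tok("분홍색 소파에 드러누운 고양이 친구들")["input_ids"]

# CLIP pools the hidden state at the position of the largest token id, so no
# regular token may have an id larger than the eos token.
print(tok.eos_token_id)                     # id of the eos token
print(max(ids) == tok.eos_token_id)         # should be True if the constraint holds
print(ids.index(max(ids)) == len(ids) - 1)  # should be True: eos is the last position
```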