wanng committed on
Commit 14fe9d6
1 Parent(s): 1b1abd0

Update README.md

Files changed (1):
  1. README.md +52 -22
README.md CHANGED
@@ -13,16 +13,46 @@ tags:
  - feature-extraction
  ---
 
- # Model Details
-
- This model is a Chinese CLIP model trained on [Noah-Wukong Dataset(100M)](https://wukong-dataset.github.io/wukong-dataset/) and [Zero(23M)](https://zero.so.com/). We use ViT-B-32 from [OpenAI](https://github.com/openai/CLIP) as the image encoder and the Chinese pre-trained language model [chinese-roberta-wwm](https://huggingface.co/hfl/chinese-roberta-wwm-ext) as the text encoder. We freeze the image encoder and finetune only the text encoder. The model was trained for 24 epochs, which took about 10 days on 16 A100 GPUs.
-
- # Taiyi (太乙)
- Taiyi models are a branch of the Fengshenbang (封神榜) series of models. The models in Taiyi are pre-trained with multimodal pre-training strategies. We will release more image-text models trained on Chinese datasets to benefit the Chinese community.
-
-
-
- # Usage
+ # Taiyi-CLIP-Roberta-102M-Chinese
+
+ - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
+ - Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
+
+ ## 简介 Brief Introduction
+
+ 首个开源的中文CLIP模型,1.23亿图文对上进行预训练的文本端RoBERTa-base。
+
+ The first open-source Chinese CLIP model, pre-trained on 123M image-text pairs, with RoBERTa-base as the text encoder.
+
+ ## 模型分类 Model Taxonomy
+
+ | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
+ | :----: | :----: | :----: | :----: | :----: | :----: |
+ | 特殊 Special | 多模态 Multimodal | 太乙 Taiyi | CLIP | 102M | Chinese |
+
+ ## 模型信息 Model Information
+
+ 我们遵循CLIP的实验设置,以获得强大的视觉-语言表征。在训练中文版的CLIP时,我们使用[chinese-roberta-wwm](https://huggingface.co/hfl/chinese-roberta-wwm-ext)作为语言的编码器,并将[CLIP](https://github.com/openai/CLIP)中的ViT-B-32应用于视觉的编码器。为了快速且稳定地进行预训练,我们冻结了视觉编码器并且只微调语言编码器。此外,我们将[Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/)数据集(100M)和[Zero](https://zero.so.com/)数据集(23M)用作预训练的数据集。据我们所知,我们的Taiyi-CLIP是目前Huggingface社区中首个开源的中文CLIP。
+
+ We follow the experimental setup of CLIP to obtain strong visual-language representations. To build the Chinese CLIP, we employ [chinese-roberta-wwm](https://huggingface.co/hfl/chinese-roberta-wwm-ext) as the language encoder and apply the ViT-B-32 from [CLIP](https://github.com/openai/CLIP) as the vision encoder. To speed up and stabilize pre-training, we freeze the vision encoder and tune only the language encoder. We use the [Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/) dataset (100M) and the [Zero](https://zero.so.com/) dataset (23M) for pre-training. To the best of our knowledge, Taiyi-CLIP is the first open-source Chinese CLIP in the Hugging Face community.
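+
+ A minimal sketch of this setup, assuming the Hub IDs `IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese` and `openai/clip-vit-base-patch32`, and assuming the text tower loads as a plain `BertModel` (the usage snippet below is the authoritative recipe):
+
+ ```python3
+ from transformers import BertModel, BertTokenizer, CLIPModel, CLIPProcessor
+
+ # Trainable Chinese text encoder: RoBERTa-wwm weights served in a BERT architecture.
+ text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese")
+ text_encoder = BertModel.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese")
+
+ # The OpenAI CLIP checkpoint supplies the frozen ViT-B-32 vision tower
+ # (its original English text tower is simply not used here).
+ clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
+ processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")  # image preprocessing
+ for p in clip_model.parameters():
+     p.requires_grad = False  # freeze the vision side; only the text encoder is finetuned
+ ```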
+
+ ### 下游效果 Performance
+
+ **Zero-Shot Classification**
+
+ | model | dataset | Top1 | Top5 |
+ | ---- | ---- | ---- | ---- |
+ | Taiyi-CLIP-Roberta-102M-Chinese | ImageNet1k-CN | 42.85% | 71.48% |
+
+ **Zero-Shot Text-to-Image Retrieval**
+
+ | model | dataset | Top1 | Top5 | Top10 |
+ | ---- | ---- | ---- | ---- | ---- |
+ | Taiyi-CLIP-Roberta-102M-Chinese | Flickr30k-CNA-test | 46.32% | 74.58% | 83.44% |
+ | Taiyi-CLIP-Roberta-102M-Chinese | COCO-CN-test | 47.10% | 78.53% | 87.84% |
+ | Taiyi-CLIP-Roberta-102M-Chinese | wukong50k | 49.18% | 81.94% | 90.27% |
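+
+ Both tables rank cosine similarities between L2-normalized image and text features; Top-k counts a query as correct when its true match is among the k highest-scoring candidates. A toy sketch of that computation, with made-up shapes and random features rather than the actual evaluation pipeline:
+
+ ```python3
+ import torch
+
+ def topk_hit_rate(query_feats, gallery_feats, targets, k=5):
+     """Fraction of queries whose true target ranks in the top k of the gallery."""
+     q = query_feats / query_feats.norm(dim=-1, keepdim=True)    # L2-normalize
+     g = gallery_feats / gallery_feats.norm(dim=-1, keepdim=True)
+     sims = q @ g.T                                  # (num_queries, gallery_size)
+     topk = sims.topk(k, dim=-1).indices             # k best gallery indices per query
+     return (topk == targets[:, None]).any(dim=-1).float().mean().item()
+
+ # Toy example: 8 queries against a 10-item gallery of 512-d features.
+ queries, gallery = torch.randn(8, 512), torch.randn(10, 512)
+ targets = torch.randint(0, 10, (8,))
+ print(topk_hit_rate(queries, gallery, targets, k=1))  # "Top1"
+ print(topk_hit_rate(queries, gallery, targets, k=5))  # "Top5"
+ ```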
+
+ ## 使用 Usage
 
  ```python3
  from PIL import Image
@@ -60,31 +90,31 @@ with torch.no_grad():
 
  ```
 
- # Evaluation
-
- ### Zero-Shot Classification
- | model | dataset | Top1 | Top5 |
- | ---- | ---- | ---- | ---- |
- | Taiyi-CLIP-Roberta-102M-Chinese | ImageNet1k-CN | 42.85% | 71.48% |
-
- ### Zero-Shot Text-to-Image Retrieval
-
- | model | dataset | Top1 | Top5 | Top10 |
- | ---- | ---- | ---- | ---- | ---- |
- | Taiyi-CLIP-Roberta-102M-Chinese | Flickr30k-CNA-test | 46.32% | 74.58% | 83.44% |
- | Taiyi-CLIP-Roberta-102M-Chinese | COCO-CN-test | 47.10% | 78.53% | 87.84% |
- | Taiyi-CLIP-Roberta-102M-Chinese | wukong50k | 49.18% | 81.94% | 90.27% |
-
-
- # Citation
-
- If you find the resource useful, please cite the following website in your paper.
-
- ```
+ ## 引用 Citation
+
+ 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
+
+ If you use this resource in your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
+
+ ```text
+ @article{fengshenbang,
+ author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
+ title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
+ journal = {CoRR},
+ volume = {abs/2209.02970},
+ year = {2022}
+ }
+ ```
+
+ 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
+
+ You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
+
+ ```text
  @misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
- year={2022},
+ year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
  }
- ```
+ ```