---
license: apache-2.0
---



# **opencsg-bunny-v0.1-3B**          [[中文]](#chinese)    [[English]](#english)

<a id="english"></a>

<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>

<p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a>   <a href="https://github.com/opencsgs">[github]</a>  <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a>  <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>


OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.

The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use, send feedback, and contribute collaboratively.


## Model Description

[Bunny](https://github.com/BAAI-DCAI/Bunny) is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Phi-1.5, StableLM-2, Qwen1.5, and Phi-2. To compensate for the smaller model size, the training data is made more informative through curated selection from a broader data source. Remarkably, the Bunny-v1.0-3B model built upon SigLIP and Phi-2 outperforms state-of-the-art MLLMs, not only in comparison with models of similar size but also against larger 7B MLLM frameworks, and even achieves performance on par with 13B models.

The model is pretrained on LAION-2M and finetuned on Bunny-695K.

opencsg-bunny-v0.1-3B is a model based on Bunny-v1_0-3B that has been fine-tuned with LoRA on the opencsg-bunny-880k dataset.
<br>
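
For reference, the sketch below shows how a LoRA adapter of this kind might be attached to a causal language backbone with the Hugging Face `peft` library. It is a minimal illustration only: the rank, alpha, dropout, and `target_modules` values are assumptions rather than the hyperparameters used for opencsg-bunny-v0.1-3B, and the actual fine-tuning was performed with the Bunny training pipeline rather than this snippet.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a causal LM backbone (Phi-2, the backbone used by Bunny-v1_0-3B).
base_model = AutoModelForCausalLM.from_pretrained(
    'microsoft/phi-2',
    torch_dtype=torch.float16)

# Illustrative LoRA settings -- assumptions, not the values used to train this model.
lora_config = LoraConfig(
    r=64,                 # adapter rank (assumption)
    lora_alpha=128,       # scaling factor (assumption)
    lora_dropout=0.05,
    # Module names depend on the backbone implementation; adjust for your checkpoint.
    target_modules=['q_proj', 'k_proj', 'v_proj', 'dense'],
    task_type='CAUSAL_LM')

# Wrap the backbone so that only the low-rank adapter weights are trained.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```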


## Model Eval

We evaluate opencsg-bunny-v0.1 on several popular benchmarks to thoroughly assess its multimodal capacity: [MME](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) perception, [MME](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) cognition, the [MMMU](https://huggingface.co/datasets/MMMU/MMMU) validation split, and the [MMMU](https://huggingface.co/datasets/MMMU/MMMU) test split.

<p align="center">

| Model                         | Visual Encoder | LLM           | PEFT | MME<sup>P</sup> | MME<sup>C</sup> | MMMU<sup>V</sup> | MMMU<sup>T</sup> |
| ----------------------------- | -------------- | ------------- | ---- | ------- | ------- | ------- | ------- |
| bunny-v1_0-3B                 | SigLIP         | Phi-2(2.7B)   | LoRA | 1488.8   | 289.3   | 38.2   | 33.0 |
| **opencsg-bunny-v0.1-3B**     | SigLIP         | Phi-2(2.7B)   | LoRA | **1527.1**   | **299.3**   | **38.4**   | **33.0**   |

</p>
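
For readers who want to look at the evaluation data itself, the MMMU benchmark can be loaded from the Hugging Face Hub with the `datasets` library. The snippet below is only a quick inspection example; `'Art'` is one of MMMU's per-subject configurations and is chosen here purely for illustration.

```python
from datasets import load_dataset

# MMMU is organized into per-subject configurations; 'Art' is just an example.
# The 'validation' and 'test' splits correspond to MMMU-V and MMMU-T above.
mmmu_art = load_dataset('MMMU/MMMU', 'Art', split='validation')
print(mmmu_art[0]['question'])
```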


**TODO**
- We will provide more benchmark scores for fine-tuned models in the future.
- We will provide various practical problems to evaluate the performance of fine-tuned models in the field of software engineering.


# Model Usage

The following code snippet shows how to use the model with transformers.

Before running the snippet, you need to install the following dependencies:

```shell
pip install torch transformers accelerate pillow
```

```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings

# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')

# set device
torch.set_default_device('cpu')  # or 'cuda'

# create model
model = AutoModelForCausalLM.from_pretrained(
    'opencsg/opencsg-bunny-v0.1-3B',
    torch_dtype=torch.float16,
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    'opencsg/opencsg-bunny-v0.1-3B',
    trust_remote_code=True)

# text prompt
prompt = 'Why is the image funny?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
# -200 is the image placeholder index; the model splices the image features in at this position
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1], dtype=torch.long).unsqueeze(0)

# image, sample images can be found in images folder
image = Image.open('example_2.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype)

# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True)[0]

print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```

## Hardware

- **GPUs:** 8 × NVIDIA A800
- **Training time:** 15 hours

## Software

- **Orchestration:** [DeepSpeed](https://github.com/microsoft/DeepSpeed) (see the illustrative configuration sketch below)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex)
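
The configuration below is a minimal sketch of the kind of DeepSpeed setup implied above, enabling bf16 mixed precision and ZeRO stage 2. All values are assumptions chosen for illustration and are not the configuration actually used to train opencsg-bunny-v0.1-3B.

```python
import json

# Minimal, illustrative DeepSpeed config -- values are assumptions, not the
# configuration used to train this model.
ds_config = {
    'bf16': {'enabled': True},              # bf16 mixed precision
    'zero_optimization': {'stage': 2},      # ZeRO stage 2 optimizer/gradient sharding
    'train_micro_batch_size_per_gpu': 8,    # assumption
    'gradient_accumulation_steps': 2,       # assumption
}

with open('ds_config.json', 'w') as f:
    json.dump(ds_config, f, indent=2)
```

A file like this would typically be passed to a DeepSpeed-aware training script, for example `deepspeed --num_gpus 8 train.py --deepspeed ds_config.json`, where `train.py` is a hypothetical entry point.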


<a id="chinese"></a>

<p>

</p>

# Introduction to OpenCSG


<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>

<p align="center"><a href="https://opencsg.com/models">[OpenCSG 社区]</a>   <a href="https://github.com/opencsgs">[github]</a>  <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[微信]</a>  <a href="https://twitter.com/OpenCsg">[推特]</a> </p>



In OpenCSG, "Open" stands for open source and openness; "C" stands for Converged resources, integrating and fully utilizing hybrid heterogeneous resources to reduce compute costs and improve efficiency; "S" stands for Software refined, redefining how software is delivered by driving software development with large models to reduce labor costs and improve efficiency; "G" stands for Generative LM, namely widespread, inclusive, democratized, and commercially usable open-source generative large models.

The vision of OpenCSG is to empower every industry, every company, and every individual to own their own models. We adhere to the principles of openness and open source, open-sourcing the OpenCSG large model software stack to the community. Everyone is welcome to use it, give feedback, contribute, and follow the project.


## Model Description
[Bunny](https://github.com/BAAI-DCAI/Bunny) is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Phi-1.5, StableLM-2, Qwen1.5, and Phi-2. To compensate for the smaller model size, the training data is made more informative through curated selection from a broader data source. Remarkably, the Bunny-v1.0-3B model built upon SigLIP and Phi-2 outperforms state-of-the-art MLLMs, not only in comparison with models of similar size but also against larger 7B MLLM frameworks, and even achieves performance on par with 13B models.

The model is pretrained on LAION-2M and fine-tuned on Bunny-695K.

opencsg-bunny-v0.1-3B is a model based on Bunny-v1_0-3B that has been fine-tuned with LoRA on the opencsg-bunny-880k dataset.
<br>


## Model Eval

We evaluate opencsg-bunny-v0.1 on several popular benchmarks to thoroughly assess its multimodal capacity: [MME](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) perception, [MME](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) cognition, the [MMMU](https://huggingface.co/datasets/MMMU/MMMU) validation split, and the [MMMU](https://huggingface.co/datasets/MMMU/MMMU) test split.

<p align="center">

| Model                         | Visual Encoder | LLM           | PEFT | MME<sup>P</sup> | MME<sup>C</sup> | MMMU<sup>V</sup> | MMMU<sup>T</sup> |
| ----------------------------- | -------------- | ------------- | ---- | ------- | ------- | ------- | ------- |
| bunny-v1_0-3B                 | SigLIP         | Phi-2(2.7B)   | LoRA | 1488.8   | 289.3   | 38.2   | 33.0 |
| **opencsg-bunny-v0.1-3B**     | SigLIP         | Phi-2(2.7B)   | LoRA | **1527.1**   | **299.3**   | **38.4**   | **33.0**   |

</p>


**TODO**
- We will provide more benchmark scores for fine-tuned models in the future.
- We will provide various practical problems to evaluate the performance of fine-tuned models in the field of software engineering.

# Model Usage

```shell
pip install torch transformers accelerate pillow
```

```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings

# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')

# set device
torch.set_default_device('cpu')  # or 'cuda'

# create model
model = AutoModelForCausalLM.from_pretrained(
    'opencsg/opencsg-bunny-v0.1-3B',
    torch_dtype=torch.float16,
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    'opencsg/opencsg-bunny-v0.1-3B',
    trust_remote_code=True)

# text prompt
prompt = 'Why is the image funny?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
# -200 is the image placeholder index; the model splices the image features in at this position
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1], dtype=torch.long).unsqueeze(0)

# image, sample images can be found in images folder
image = Image.open('example_2.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype)

# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True)[0]

print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
# Training

## Hardware

- **Number of GPUs:** 8 × NVIDIA A800
- **Training time:** 15 hours

## Software

- **Fine-tuning framework:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Deep learning framework:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16:** [apex](https://github.com/NVIDIA/apex)