xiaolong1216 committed on
Commit
29fc834
1 Parent(s): 86f107f

Update README.md

Files changed (1):
  1. README.md +71 -80
README.md CHANGED
@@ -1,14 +1,10 @@
  ---
- language:
- - code
- pipeline_tag: text-generation
- tags:
- - code
  ---



- # **opencsg-bunny-phi-2-siglip-lora-v0.1** [[中文]](#chinese) [[English]](#english)

  <a id="english"></a>

@@ -25,45 +21,28 @@ OpenCSG stands for Converged resources, Software refinement, and Generative LM.
  The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use it, send feedback, and contribute collaboratively.


-
-
-
  ## Model Description

- Phi-2 is a 2.7 billion-parameter Transformer model trained on augmented data sources, including synthetic NLP texts and filtered websites, alongside the data used for Phi-1.5. Despite having far fewer than 13 billion parameters, it achieves nearly state-of-the-art performance on benchmarks for common sense, language understanding, and logical reasoning.
- Unlike some models, Phi-2 has not been fine-tuned through reinforcement learning from human feedback. The goal of this open-source model is to enable research into safety challenges such as reducing toxicity, understanding biases, and enhancing controllability.

- opencsg-phi-2-v0.1 is a model based on phi-2 that has been fine-tuned using full-parameter tuning methods.
  <br>

- This is the repository for the base 2.7B version, fine-tuned from [phi-2](https://huggingface.co/microsoft/phi-2).
-
- | Model Size | Base Model |
- | --- | --- |
- | 2.7B | [opencsg/opencsg-phi-2-v0.1](https://huggingface.co/opencsg/opencsg-phi-2-v0.1) |
- | 3B | [opencsg/opencsg-stable-coder-3b-v1](https://huggingface.co/opencsg/opencsg-stable-code-3b-v1) |

  ## Model Eval

- HumanEval is the most common benchmark for evaluating code generation performance, especially on the completion of code exercises.
- Model evaluation is, to some extent, an inexact science: different models have different sensitivities to decoding methods, parameters, and instructions.
- It is impractical for us to manually set specific configurations for each fine-tuned model, because a capable LLM should retain its general abilities regardless of how users set the decoding parameters.
-
- Therefore, OpenCSG devised a relatively fair method to compare fine-tuned models on the HumanEval benchmark.
- To simplify the comparison, we chose the Pass@1 metric for the Python language, although our fine-tuning dataset includes samples in multiple languages.
-
- **For fairness, we evaluated the original and fine-tuned phi-2 models using only the prompts from the original cases, without any additional instructions.**
-
- **In addition, we used greedy decoding for each model during evaluation.**

- | Model | HumanEval Python Pass@1 |
- | --- | --- |
- | phi-2 | 48.2% |
- | **opencsg-phi-2-v0.1** | **54.3%** |
- | stable-coder-3b | 29.3% |
- | **opencsg-stable-coder-3b-v1** | **46.3%** |


  **TODO**
@@ -71,29 +50,60 @@ To simplify the comparison, we chose the Pass@1 metric for the Python language,
  - We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.


-
  # Model Usage

- ```python
- import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- torch.set_default_device("cuda")
-
- # load the fine-tuned model and tokenizer
- model = AutoModelForCausalLM.from_pretrained("opencsg/opencsg-phi-2-v0.1", torch_dtype="auto", trust_remote_code=True)
- tokenizer = AutoTokenizer.from_pretrained("opencsg/opencsg-phi-2-v0.1", trust_remote_code=True)
-
- # prompt with the beginning of a function and let the model complete it
- inputs = tokenizer('''def print_prime(n):
-    """
-    Print all primes between 1 and n
-    """''', return_tensors="pt", return_attention_mask=False)
-
- outputs = model.generate(**inputs, max_length=200)
- text = tokenizer.batch_decode(outputs)[0]
- print(text)
  ```

- # Training

  ## Hardware
@@ -130,52 +140,33 @@ OpenCSG is committed to converged resources, software refinement, and generative LM. Here, "C"
  The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use it, send feedback, and contribute collaboratively.


-
  ## Model Description


- Phi-2 is a 2.7 billion-parameter Transformer model trained on augmented data sources, including synthetic NLP texts and filtered websites, alongside the data used for Phi-1.5. Despite having far fewer than 13 billion parameters, it achieves nearly state-of-the-art performance on benchmarks for common sense, language understanding, and logical reasoning.
- Unlike some models, Phi-2 has not been fine-tuned through reinforcement learning from human feedback. The goal of this open-source model is to enable research into safety challenges such as reducing toxicity, understanding biases, and enhancing controllability.
-
- opencsg-phi-2-v0.1 is a model based on phi-2 that has been fine-tuned using full-parameter tuning methods.
  <br>

- This is the model version fine-tuned from [phi-2](https://huggingface.co/microsoft/phi-2).
-
- | Model Size | Base Model |
- | --- | --- |
- | 2.7B | [opencsg/opencsg-phi-2-v0.1](https://huggingface.co/opencsg/opencsg-phi-2-v0.1) |
- | 3B | [opencsg/opencsg-stable-coder-3b-v1](https://huggingface.co/opencsg/opencsg-stable-code-3b-v1) |
-

  ## Model Eval

- HumanEval is the most common benchmark for evaluating model performance in code generation, especially on the completion of code exercises.
- Model evaluation is, to some extent, an inexact science. Different models have different sensitivities to decoding methods, parameters, and instructions;
- an excellent large model should possess general capabilities whose generation quality does not vary greatly with adjustments to the decoding parameters.
-
- Therefore, OpenCSG provides a relatively fair method to compare the fine-tuned models on the HumanEval benchmark.
- For convenience, we chose the Python-language Pass@1 metric, but note that our fine-tuning dataset contains multiple programming languages.
-
- **For fairness, we evaluated the original and fine-tuned phi-2 models using only the prompts from the original cases, without any additional instructions.**

- **In addition, we used greedy decoding for each model during evaluation.**

- | Model | HumanEval Python Pass@1 |
- | --- | --- |
- | phi-2 | 48.2% |
- | **opencsg-phi-2-v0.1** | **54.3%** |
- | stable-coder-3b | 29.3% |
- | **opencsg-stable-coder-3b-v1** | **46.3%** |


  **TODO**
- - In the future, we will provide scores for more fine-tuned models on various benchmarks.
  - We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.

-
-
  # Model Usage

  ```
 
  ---
+ license: apache-2.0
  ---



+ # **opencsg-bunny-v0.1-3B** [[中文]](#chinese) [[English]](#english)

  <a id="english"></a>

  The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use it, send feedback, and contribute collaboratively.


  ## Model Description

+ [Bunny](https://github.com/BAAI-DCAI/Bunny) is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Phi-1.5, StableLM-2, Qwen1.5, and Phi-2. To compensate for the smaller model size, we construct more informative training data through curated selection from a broader data source. Remarkably, our Bunny-v1.0-3B model, built upon SigLIP and Phi-2, outperforms state-of-the-art MLLMs, not only in comparison with models of similar size but also against larger MLLM frameworks (7B), and even achieves performance on par with 13B models.

+ The model is pretrained on LAION-2M and fine-tuned on Bunny-695K.

+ opencsg-bunny-v0.1-3B is a model based on Bunny-v1_0-3B that has been fine-tuned with LoRA on the opencsg-bunny-880k dataset.
  <br>
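
+ As an illustration only (the actual opencsg-bunny training recipe is not published here, so the rank, target modules, and other hyperparameters below are assumptions), a LoRA setup with the peft library might look like this:

+ ```python
+ # Hypothetical sketch: attach LoRA adapters to the Bunny language backbone.
+ from transformers import AutoModelForCausalLM
+ from peft import LoraConfig, get_peft_model
+
+ base = AutoModelForCausalLM.from_pretrained(
+     'BAAI/Bunny-v1_0-3B', trust_remote_code=True)
+ lora_config = LoraConfig(
+     r=16,                  # assumed adapter rank
+     lora_alpha=32,         # assumed scaling factor
+     lora_dropout=0.05,
+     target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj'],  # assumed targets
+     task_type='CAUSAL_LM')
+ model = get_peft_model(base, lora_config)
+ model.print_trainable_parameters()  # only the low-rank adapters are trainable
+ ```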


  ## Model Eval

+ We evaluate opencsg-bunny-v0.1 on several popular benchmarks: [MME](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) perception, [MME](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) cognition, the [MMMU](https://huggingface.co/datasets/MMMU/MMMU) validation split, and the [MMMU](https://huggingface.co/datasets/MMMU/MMMU) test split, to thoroughly assess its multimodal capabilities.

+ <p align="center">
 
 

+ | Model | Visual Encoder | LLM | PEFT | MME<sup>P</sup> | MME<sup>C</sup> | MMMU<sup>V</sup> | MMMU<sup>T</sup> |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
+ | bunny-v1_0-3B | SigLIP | Phi-2 (2.7B) | LoRA | 1488.8 | 289.3 | 38.2 | 33.0 |
+ | **opencsg-bunny-v0.1-3B** | SigLIP | Phi-2 (2.7B) | LoRA | **1527.1** | **299.3** | **38.4** | **33.0** |

+ </p>
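
+ For reference, a minimal sketch of pulling one MMMU split for this kind of evaluation (MMMU is organized per subject; the subject chosen here and the surrounding scoring loop are assumptions):

+ ```python
+ # Hypothetical sketch: load one MMMU subject's validation split.
+ from datasets import load_dataset
+
+ mmmu_val = load_dataset('MMMU/MMMU', 'Accounting', split='validation')
+ print(mmmu_val[0]['question'])  # each record pairs image(s) with a question and options
+ ```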
  **TODO**
 
  - We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.


  # Model Usage

+ Below is a code snippet showing how to use the model with transformers.

+ Before running it, install the following dependencies:

+ ```shell
+ pip install torch transformers accelerate pillow
+ ```

+ ```python
+ import torch
+ import transformers
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from PIL import Image
+ import warnings
+
+ # disable some warnings
+ transformers.logging.set_verbosity_error()
+ transformers.logging.disable_progress_bar()
+ warnings.filterwarnings('ignore')
+
+ # set device
+ torch.set_default_device('cpu')  # or 'cuda'
+
+ # create model and tokenizer
+ model = AutoModelForCausalLM.from_pretrained(
+     'opencsg/opencsg-bunny-phi-2-siglip-lora-v0.1',
+     torch_dtype=torch.float16,
+     device_map='auto',
+     trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained(
+     'BAAI/Bunny-v1_0-3B',
+     trust_remote_code=True)
+
+ # build the text prompt around the image placeholder
+ prompt = 'Why is the image funny?'
+ text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"
+ text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
+ # -200 is the image-token placeholder spliced between the text chunks
+ input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1], dtype=torch.long).unsqueeze(0)
+
+ # load the image; sample images can be found in the images folder
+ image = Image.open('example_2.png')
+ image_tensor = model.process_images([image], model.config).to(dtype=model.dtype)
+
+ # generate
+ output_ids = model.generate(
+     input_ids,
+     images=image_tensor,
+     max_new_tokens=100,
+     use_cache=True)[0]
+
+ print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
+ ```
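
+ Note that the tokenizer is loaded from the base `BAAI/Bunny-v1_0-3B` repository while the weights come from the fine-tuned `opencsg/opencsg-bunny-phi-2-siglip-lora-v0.1` checkpoint. The `-200` spliced between the text chunks appears to be a LLaVA-style image-token placeholder marking where the encoded image features are injected during generation.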

  ## Hardware

  The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use it, send feedback, and contribute collaboratively.


  ## Model Description
+ [Bunny](https://github.com/BAAI-DCAI/Bunny) is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Phi-1.5, StableLM-2, Qwen1.5, and Phi-2. To compensate for the smaller model size, more informative training data is constructed through curated selection from a broader data source. Remarkably, the Bunny-v1.0-3B model built upon SigLIP and Phi-2 outperforms state-of-the-art MLLMs, not only against models of similar size but also against larger MLLM frameworks (7B), and even achieves performance comparable to 13B models.

+ The model is pretrained on LAION-2M and fine-tuned on Bunny-695K.

+ opencsg-bunny-v0.1-3B is a model based on Bunny-v1_0-3B that has been fine-tuned with LoRA on the opencsg-bunny-880k dataset.
  <br>


  ## Model Eval

+ We conducted a comprehensive evaluation of opencsg-bunny-v0.1 on several mainstream benchmarks: [MME](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) perception, [MME](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) cognition, the [MMMU](https://huggingface.co/datasets/MMMU/MMMU) validation split, and the [MMMU](https://huggingface.co/datasets/MMMU/MMMU) test split, to thoroughly assess its multimodal capabilities.

+ <p align="center">
157
 

+ | Model | Visual Encoder | LLM | PEFT | MME<sup>P</sup> | MME<sup>C</sup> | MMMU<sup>V</sup> | MMMU<sup>T</sup> |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
+ | bunny-v1_0-3B | SigLIP | Phi-2 (2.7B) | LoRA | 1488.8 | 289.3 | 38.2 | 33.0 |
+ | **opencsg-bunny-v0.1-3B** | SigLIP | Phi-2 (2.7B) | LoRA | **1527.1** | **299.3** | **38.4** | **33.0** |

+ </p>
  **TODO**
+ - In the future, we will provide scores for more fine-tuned models on more benchmarks.
  - We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.


  # Model Usage

  ```