hyx21 committed on
Commit
ac57e9f
1 Parent(s): 4f6ea04

Upload README.md

Files changed (1): README.md (+106 −50)
README.md CHANGED
@@ -1,72 +1,128 @@
- # MiniCPM
-
- ## 介绍 Introduction
-
- - 与`Llama`的关系 The Relationship between `Llama`
-
- `MiniCPM`与`Llama`均使用了仅解码器架构。代码实现上,`MiniCPM`基于`Llama`实现,增加了放缩机制。
-
- `MiniCPM` uses a decoder-only structure, as does `Llama`. The implementation of `MiniCPM` is based on the `Llama` code, with a scaling mechanism added.
-
- ## 软件依赖 Dependency
-
- - `transformers >= 4.36.0`
- - `accelerate`
-
- ## 使用 Usage
-
- 我们推荐使用`AutoModelForCausalLM`与`AutoTokenizer`载入`MiniCPM`,并使用`torch.bfloat16`作为计算精度。我们推荐在GPU上进行推理。
-
- We recommend using `AutoModelForCausalLM` and `AutoTokenizer` to load `MiniCPM`, with `torch.bfloat16` as the computation precision. GPU inference is recommended.
-
- 以下是一个使用`MiniCPM`生成的例子。
-
- An example of generating with `MiniCPM` is provided below.
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

- path = '/data/miniCPM_opensource/miniCPM-bf16' # TODO
-
  tokenizer = AutoTokenizer.from_pretrained(path)
- model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='auto', trust_remote_code=True)

- dialog = [{'role': 'user', 'content': '请问中国哪几个城市最适合旅游?'}]
-
- input = tokenizer.apply_chat_template(dialog, tokenize=False, add_generation_prompt=False)
- enc = tokenizer(input, return_tensors='pt').to('cuda')
-
- output = model.generate(**enc, max_length=1024)
- print(tokenizer.decode(output[0]))
  ```
 
- 期望的输出 Expected Output:

  ```
- <s> <用户>请问中国哪几个城市最适合旅游?<AI> 中国有很多适合旅游的城市,以下是一些建议:
-
- 1. 北京:作为中国的首都,北京拥有丰富的历史文化遗产,如故宫、长城、天坛等。此外,北京还有许多现代化的景点,如798艺术区、颐和园等。
-
- 2. 上海:作为中国的经济中心,上海拥有许多现代化的高楼大厦和繁华的商业区。同时,上海还有许多历史悠久的景点,如外滩、豫园等。
-
- 3. 西安:作为古都,西安拥有丰富的历史文化遗产,如兵马俑、大雁塔等。此外,西安还有许多美食,如肉夹馍、羊肉泡馍等。
-
- 4. 成都:作为四川的省会,成都有着丰富的美食文化,如火锅、麻辣烫等。此外,成都还有许多自然风光,如青城山、都江堰等。
-
- 5. 杭州:作为美丽的西湖所在地,杭州拥有许多自然风光和历史文化遗产,如西湖、灵隐寺等。此外,杭州还有许多美食,如西湖醋鱼、龙井虾仁等。
-
- 6. 广州:作为南方的重要城市,广州拥有丰富的美食文化,如广式早茶、烧腊等。此外,广州还有许多现代化景点,如珠江夜游、白云山等。
-
- 7. 南京:作为六朝古都,南京拥有丰富的历史文化遗产,如中山陵、夫子庙等。此外,南京还有许多美食,如鸭血粉丝汤、盐水鸭等。
-
- 8. 厦门:作为美丽的海滨城市,厦门拥有许多自然风光和历史文化遗产,如鼓浪屿、南普陀寺等。此外,厦门还有许多美食,如沙茶面、土笋冻等。
-
- 9. 昆明:作为云南的省会,昆明拥有许多自然风光和历史文化遗产,如石林、滇池等。此外,昆明还有许多美食,如过桥米线、酸笋鱼等。
-
- 10. 哈尔滨:作为北方的城市,哈尔滨拥有许多冰雪景观和自然风光,如冰雪大世界、亚布力滑雪场等。此外,哈尔滨还有许多美食,如东北大拉皮、锅包肉等。
-
- 以上仅是一些建议,中国还有许多其他适合旅游的城市,具体取决于您的兴趣和偏好。</s>
  ```

- ## 引用 Reference
 
+ <div align="center">
+ <h1>
+ MiniCPM
+ </h1>
+ </div>
+
+ <p align="center">
+ <a href="XXXX" target="_blank">MiniCPM 技术报告 Technical Report</a> |
+ <a href="https://github.com/OpenBMB/OmniLMM/" target="_blank">OmniLMM 多模态模型 Multi-modal Model</a> |
+ <a href="https://luca.cn/" target="_blank">CPM-C 千亿模型试用 ~100B Model Trial</a>
+ </p>
+
+ MiniCPM 是面壁与清华大学自然语言处理实验室共同开源的系列端侧语言大模型,主体语言模型 MiniCPM-2B 仅有 24亿(2.4B)的非词嵌入参数量。
+ - 经过 SFT 后,MiniCPM 在公开综合性评测集上与 Mistral-7B 表现相近(中文、数学、代码能力更优),整体性能超越 Llama2-13B、MPT-30B、Falcon-40B 等模型。
+ - 经过 DPO 后,在当前最接近用户体感的评测集 MTBench 上,MiniCPM-2B 也超越了 Llama2-70B-Chat、Vicuna-33B、Mistral-7B-Instruct-v0.1、Zephyr-7B-alpha 等众多代表性开源大模型。
+ - 以 MiniCPM-2B 为基础构建端侧多模态大模型 MiniCPM-V,整体性能在同规模模型中实现最佳,超越基于 Phi-2 构建的现有多模态大模型,在部分评测集上达到与 9.6B Qwen-VL-Chat 相当甚至更好的性能。
+ - 经过 Int4 量化后,MiniCPM 可在手机上进行部署推理,流式输出速度略高于人类说话速度。MiniCPM-V 也首次跑通了多模态大模型在手机上的部署。
+ - 一张 1080/2080 可高效参数微调,一张 3090/4090 可全参数微调,一台机器可持续训练 MiniCPM,二次开发成本较低。
+
+ 我们将完全开源 MiniCPM-2B 的模型参数供学术研究和有限商用,以及训练过程中的所有 Checkpoint 和大部分非专有数据供模型机理研究。
+
+ - 基于 MiniCPM-2B 的指令微调与人类偏好对齐版本 **MiniCPM-2B-SFT/DPO**。
+ - 基于 MiniCPM-2B 的多模态模型 **MiniCPM-V**,能力超越基于 Phi-2 的同参数级别多模态模型。
+ - MiniCPM-2B-SFT/DPO 的 Int4 量化版 **MiniCPM-2B-SFT/DPO-Int4**。
+ - 基于 MLC-LLM、LLMFarm 开发的 MiniCPM 手机端程序,**文本及多模态模型均可在手机端进行推理**。
+
+
+ MiniCPM is an end-side LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings.
+
+ - After SFT, MiniCPM performs on par with Mistral-7B on open general benchmarks, with better ability in Chinese, mathematics, and coding. Its overall performance exceeds Llama2-13B, MPT-30B, Falcon-40B, etc.
+ - After DPO, MiniCPM outperforms Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, Zephyr-7B-alpha, etc. on MTBench.
+ - MiniCPM can be deployed for inference on smartphones, with a streaming output speed slightly higher than human speaking speed. MiniCPM-V is the first multi-modal model to run on smartphones.
+ - The cost of developing on top of MiniCPM is low: parameter-efficient finetuning can be conducted on a single 1080/2080 GPU, and full-parameter finetuning on a single 3090/4090 GPU.
+
+ We release all model parameters for research and limited commercial use, as well as all checkpoints during training and most public training data for research on the model mechanism.
+
+ - The SFT and DPO versions based on MiniCPM-2B and human preference: **MiniCPM-2B-SFT/DPO**
+ - The multi-modal model **MiniCPM-V** based on MiniCPM-2B, which outperforms multi-modal models of a similar size, e.g., those based on Phi-2
+ - The Int4 quantized versions **MiniCPM-2B-SFT/DPO-Int4** based on MiniCPM-2B-SFT/DPO
+ - Smartphone applications based on MLC-LLM and LLMFarm; **both the text and multi-modal models can run inference on smartphones**
+
+
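The Int4 point above can be illustrated with a toy example. This is only a sketch of symmetric 4-bit quantization in plain Python (the helper names are ours, and this is not the actual scheme used for MiniCPM-2B-SFT/DPO-Int4): each weight is mapped to an integer in [-8, 7] with a shared scale, cutting storage to roughly 4 bits per weight at the cost of a bounded rounding error.

```python
# Toy sketch of symmetric 4-bit quantization (illustration only; not the
# actual Int4 scheme used for MiniCPM-2B-SFT/DPO-Int4).
def quantize_int4(weights):
    """Map floats to integers in [-8, 7] sharing one scale factor."""
    scale = max(abs(w) for w in weights) / 7 or 1.0  # guard all-zero input
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return [v * scale for v in q]

w = [0.12, -0.5, 0.33, 0.07]
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)  # each entry within scale/2 of the original
```

The reconstruction error per weight is at most half the scale, which is why small models can tolerate Int4 well enough to run on a phone.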
+ ### 局限性 Limitations
+
+ - 受限于模型规模,模型可能出现幻觉性问题。其中由于 DPO 模型生成的回复内容更长,更容易出现幻觉。我们也将持续进行 MiniCPM 模型的迭代改进。
+ - 为了保证在学术研究用途上模型的通用性,我们未对模型进行任何身份认同训练。同时由于我们用 ShareGPT 开源语料作为部分训练数据,模型可能会输出类似 GPT 系列模型的身份认同信息。
+ - 受限于模型规模,模型的输出受到提示词(prompt)的影响较大,可能多次尝试产生不一致的结果。
+ - 受限于模型容量,模型的知识记忆较不准确,后续我们将结合 RAG 方法来增强模型的知识记忆能力。
+
+ - Due to the limited model size, hallucination may occur. DPO models produce longer responses, which makes hallucination more likely. We will keep iterating on MiniCPM to mitigate this.
+ - To preserve generality for academic research, we have not conducted any identity training. Since the open-source ShareGPT corpus is part of our training data, the model may identify itself similarly to GPT-series models.
+ - Due to the limited model size, the output is strongly influenced by the prompt, and repeated attempts may produce inconsistent results.
+ - Due to the limited model capacity, the model's knowledge memorization is not very accurate; we will enhance it with RAG methods in the future.
+
+ ## 模型下载 Download
+
+ | HuggingFace | ModelScope | WiseModel |
+ |-------------|------------|-----------|
+ |[sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)|[sft-bf16](https://modelscope.cn/models/OpenBMB/miniCPM-bf16)|[sft-bf16](https://wisemodel.cn/models/OpenBMB/miniCPM-bf16)|
+ |[sft-fp32](https://huggingface.co/openbmb/MiniCPM-2B-sft-fp32)|[sft-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-sft-fp32)|[sft-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32)|
+ |[dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16)|[dpo-bf16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16/summary)|[dpo-bf16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16)|
+ |[dpo-fp16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp16)|[dpo-fp16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16/)|[dpo-fp16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16)|
+ |[dpo-fp32](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32)|
+
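The checkpoints in the table differ only in stored precision, and the Usage section below warns that `from_pretrained` needs an explicitly matching data type. As a convenience, the dtype can be read off the repo id suffix; this small helper is hypothetical (the names are ours, not part of any library):

```python
# Hypothetical helper (names are ours): map a checkpoint repo id suffix to
# the torch dtype name to pass as torch_dtype=getattr(torch, name).
SUFFIX_TO_DTYPE = {'bf16': 'bfloat16', 'fp16': 'float16', 'fp32': 'float32'}

def dtype_name_for(repo_id):
    """e.g. 'openbmb/MiniCPM-2B-sft-bf16' -> 'bfloat16'"""
    suffix = repo_id.rsplit('-', 1)[-1]
    return SUFFIX_TO_DTYPE[suffix]
```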
+ ## 模型使用 Usage
+
+ * 安装`transformers>=4.36.0`以及`accelerate`后,运行以下代码
+ * 注意:需要在`from_pretrained`中明确指明模型的数据类型,否则会引起较大计算误差
+ * Run the following code after installing `transformers>=4.36.0` and `accelerate`
+ * Warning: the model's data type must be specified explicitly in `from_pretrained`, otherwise large numerical errors will occur
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch
+ torch.manual_seed(0)

+ path = 'openbmb/MiniCPM-2B-sft-bf16'
  tokenizer = AutoTokenizer.from_pretrained(path)
+ model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)

+ responds, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.8, top_p=0.8)
+ print(responds)
  ```
+ * 期望输出 Expected Output
+ ```shell
+ 山东省最高的山是泰山,海拔1545米。
+
+ 相对于黄山(海拔1864米),泰山海拔较低,相差约319米。
  ```
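`model.chat` formats the conversation through the tokenizer's chat template before generating. Judging from the raw decoded output shown in the earlier revision above (`<s> <用户>…<AI> …`), a single-turn prompt uses `<用户>` and `<AI>` role tags. The sketch below is for illustration only; the helper name is ours and the authoritative template is the one applied by `tokenizer.apply_chat_template`:

```python
# Illustrative sketch of MiniCPM's prompt layout, inferred from the decoded
# output '<s> <用户>...<AI> ...' in this README; not the official template.
def build_prompt(turns):
    """turns: list of {'role': 'user'|'assistant', 'content': str} dicts."""
    tags = {'user': '<用户>', 'assistant': '<AI>'}
    prompt = ''.join(tags[t['role']] + t['content'] for t in turns)
    if turns and turns[-1]['role'] == 'user':
        prompt += '<AI>'  # trailing tag cues the model to answer
    return prompt

print(build_prompt([{'role': 'user', 'content': '山东省最高的山是哪座山?'}]))
# -> <用户>山东省最高的山是哪座山?<AI>
```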

+ ## 开源协议 License
+
+ #### 模型协议 Model License
+
+ * 本仓库中代码依照 [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) 协议开源
+ * MiniCPM 模型权重的使用则需要遵循 [“通用模型许可协议-来源说明-宣传限制-商业授权”](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md)。
+ * MiniCPM 模型权重对学术研究完全开放。
+ * 如需将模型用于商业用途,请联系 cpm@modelbest.cn 来获取书面授权,在登记后亦允许免费商业使用。
+
+ * The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) license.
+ * The use of MiniCPM's weights is subject to the ["General Model License Agreement - Source Notes - Publicity Restrictions - Commercial License"](https://github.com/OpenBMB/General-Model-License/blob/main/).
+ * The weights are fully open for academic research.
+ * Please contact cpm@modelbest.cn to obtain written authorization for commercial use; free commercial use is also allowed after registration.
+
+ #### 声明 Statement
+
+ * 作为一个语言模型,MiniCPM 通过学习大量的文本来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。
+ * 因此用户在使用 MiniCPM 生成的内容时,应自行负责对其进行评估和验证。
+ * 如果由于使用 MiniCPM 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
+
+ * As an LLM, MiniCPM generates content by learning from a large amount of text, but it cannot comprehend or express personal opinions or make value judgements. Nothing generated by MiniCPM represents the views and positions of the model developers.
+ * Users are therefore responsible for evaluating and verifying any content generated by MiniCPM.
+ * We will not be liable for any problems arising from the use of the MiniCPM open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, abused, disseminated, or improperly exploited.
+
+ <p id="8"></p>
+
+ ## 工作引用 Reference
+
+ * 如果觉得 MiniCPM 有助于您的工作,请考虑引用下列[技术报告](todo)
+ * Please consider citing the [Technical Report](todo) below if you find MiniCPM helpful to your work
+
  ```
+ @inproceedings{minicpm2024,
+   title={MiniCPM: todo},
+   booktitle={OpenBMB Blog},
+   year={2024}
+ }
+ ```