Kaguya-19 committed
Commit ee3da81
1 Parent(s): 6253eb2

Update README.md

Files changed (1): README.md (+5, -29)
README.md CHANGED
@@ -1,7 +1,6 @@
  ---
  base_model: openbmb/MiniCPM3-4B
  library_name: peft
- license: apache-2.0
  language:
  - zh
  - en
@@ -14,8 +13,8 @@ language:
  欢迎关注 `MiniCPM3` 与 RAG 套件系列:
 
  - 基座模型:[MiniCPM3](https://huggingface.co/openbmb/MiniCPM3-4B)
- - 检索模型:[RankCPM-E](https://huggingface.co/openbmb/RankCPM-E)
- - 重排模型:[RankCPM-R](https://huggingface.co/openbmb/RankCPM-R)
+ - 检索模型:[MiniCPM-Embedding](https://huggingface.co/openbmb/MiniCPM-Embedding)
+ - 重排模型:[MiniCPM-Reranker](https://huggingface.co/openbmb/MiniCPM-Reranker)
  - 面向 RAG 场景的 LoRA 插件:[MiniCPM3-RAG-LoRA](https://huggingface.co/openbmb/MiniCPM3-RAG-LoRA)
 
  **MiniCPM3-RAG-LoRA**, developed by ModelBest Inc., NEUIR, and THUNLP, is a generative model designed specifically for Retrieval-Augmented Generation (RAG) scenarios. Based on [MiniCPM3](https://huggingface.co/openbmb/MiniCPM3-4B), it is fine-tuned with Low-Rank Adaptation (LoRA) through Direct Preference Optimization (DPO). The fine-tuning uses over 20,000 open-source examples from open-domain question answering and logical reasoning tasks, yielding an average performance improvement of approximately 13% on general evaluation datasets.
@@ -23,8 +22,8 @@ language:
  We also invite you to explore `MiniCPM3` and the RAG toolkit series:
 
  - Foundation Model: [MiniCPM3](https://huggingface.co/openbmb/MiniCPM3-4B)
- - Retrieval Model: [RankCPM-E](https://huggingface.co/openbmb/RankCPM-E)
- - Re-ranking Model: [RankCPM-R](https://huggingface.co/openbmb/RankCPM-R)
+ - Retrieval Model: [MiniCPM-Embedding](https://huggingface.co/openbmb/MiniCPM-Embedding)
+ - Re-ranking Model: [MiniCPM-Reranker](https://huggingface.co/openbmb/MiniCPM-Reranker)
  - LoRA Plugin for RAG scenarios: [MiniCPM3-RAG-LoRA](https://huggingface.co/openbmb/MiniCPM3-RAG-LoRA)
 
  ## 模型信息 Model Information
@@ -107,27 +106,4 @@ After being fine-tuned with LoRA for RAG scenarios, MiniCPM3-RAG-LoRA outperform
 
  * The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
  * The usage of MiniCPM3-RAG-LoRA model weights must strictly follow [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md).
- * The models and weights of MiniCPM3-RAG-LoRA are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, MiniCPM3-RAG-LoRA weights are also available for free commercial use.
- <!-- ### Test Set Descriptions:
- 
- - **Natural Questions (NQ, Accuracy):**
-   - **Description:** Natural Questions is an open-domain QA dataset composed of questions real users asked in Google Search. Each question is paired with a long document as context and carries both a short and a long answer.
-   - **Metric:** Accuracy measures whether the model correctly identifies the short answer relevant to the question.
- - **TriviaQA (TQA, Accuracy):**
-   - **Description:** TriviaQA is a QA dataset covering a wide range of topics, with questions and answers collected from trivia websites and encyclopedias.
-   - **Metric:** Accuracy measures whether the model answers these questions correctly.
- - **MS MARCO (ROUGE):**
-   - **Description:** MS MARCO is a large-scale open-domain QA dataset built mainly from Bing search queries and their answers. It contains short answers and relevant passages and is widely used for retrieval and generation tasks. Because the dataset is so large, 3,000 examples were sampled for this evaluation.
-   - **Metric:** ROUGE evaluates the overlap between generated and reference answers, measuring the quality of the generation.
- - **HotpotQA (Accuracy):**
-   - **Description:** HotpotQA is a multi-hop QA dataset that requires reasoning across multiple documents to answer complex questions, testing both answer generation and the interpretability of the reasoning process.
-   - **Metric:** Accuracy measures whether the model correctly answers questions that require multi-hop reasoning.
- - **Wizard of Wikipedia (WoW, F1 Score):**
-   - **Description:** Wizard of Wikipedia is a dialogue dataset focused on knowledge-grounded conversation; the model must produce informative, on-topic responses, with each dialogue turn supported by knowledge-base entries.
-   - **Metric:** F1 measures word-level overlap between the generated response and the reference answer, assessing accuracy and completeness.
- - **FEVER (Accuracy):**
-   - **Description:** FEVER is a fact-verification dataset of claims that the model must judge as true or false against given documents, testing its fact-checking ability.
-   - **Metric:** Accuracy evaluates how well the model judges the veracity of claims.
- - **T-REx (Accuracy):**
-   - **Description:** T-REx is a knowledge-base slot-filling dataset of entity-relation pairs extracted from Wikipedia. The model must fill in missing slot values based on context, testing its grasp of knowledge-base relations.
-   - **Metric:** Accuracy measures how well the model fills in the missing slot values. -->
+ * The models and weights of MiniCPM3-RAG-LoRA are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, MiniCPM3-RAG-LoRA weights are also available for free commercial use.
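Since the card's YAML metadata declares `library_name: peft` over `base_model: openbmb/MiniCPM3-4B`, the adapter is expected to load through the standard `transformers` + `peft` flow. The following is a minimal sketch under those assumptions; the passages-plus-question prompt layout and the generation settings are illustrative, not taken from this card:

```python
# Minimal loading sketch (assumptions: standard transformers + peft APIs,
# bf16 weights, illustrative prompt format not taken from this model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "openbmb/MiniCPM3-4B"
ADAPTER_ID = "openbmb/MiniCPM3-RAG-LoRA"

tokenizer = AutoTokenizer.from_pretrained(BASE_ID, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # MiniCPM3 ships custom modeling code
    device_map="auto",
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the LoRA adapter
model.eval()

# Illustrative RAG-style input: retrieved passages followed by the question.
passages = "Passage 1: ...retrieved context goes here..."
question = "Your question here"
prompt = f"{passages}\n\nQuestion: {question}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

If the adapter is meant to be always active, merging it into the base weights with `model.merge_and_unload()` is a common deployment choice that removes the per-forward LoRA overhead.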
 
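The comment block removed in the last hunk defines Wizard of Wikipedia's metric as word-level F1 overlap between the generated response and the reference answer. As a worked illustration of that metric, here is a common SQuAD-style token F1; the card does not specify tokenization or normalization, so the lowercasing and whitespace splitting below are assumptions:

```python
# Sketch of word-level F1 as described for Wizard of Wikipedia above.
# Normalization is assumed (lowercase + whitespace tokens); the original
# evaluation's exact preprocessing is not given in the card.
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    # Multiset intersection counts each overlapping token occurrence once.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the cat sat on the mat", "a cat sat on a mat"))  # ≈ 0.667
```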