yanqiang committed
Commit 82515cb
1 parent: e85f4a5

update@Release the Wikipedia knowledge base
Files changed:
- README.md (+15 -0)
- corpus/zh_wikipedia/README.md (+114 -0)
- corpus/zh_wikipedia/chinese_t2s.py (+82 -0)
- corpus/zh_wikipedia/clean_corpus.py (+88 -0)
- corpus/zh_wikipedia/wiki_process.py (+46 -0)
- create_knowledge.py (+45 -8)
- images/wiki_process.png (+0 -0)
- resources/OpenCC-1.1.6-cp310-cp310-manylinux1_x86_64.whl (+0 -0)
README.md
CHANGED
@@ -11,8 +11,15 @@ https://github.com/yanqiangmiffy/Chinese-LangChain
 ![](https://github.com/yanqiangmiffy/Chinese-LangChain/blob/master/images/web_demo.png)
 ![](https://github.com/yanqiangmiffy/Chinese-LangChain/blob/master/images/web_demo_new.png)

+## 🚋 Usage tutorial
+
+- Select a knowledge base and ask questions about that domain
+
+## 🏗️ Deployment tutorial
+
 ## 🚀 Features

+- 📝 2023/04/19 Released a preprocessed corpus of 450,000 Chinese Wikipedia passages together with the corresponding FAISS index vectors
 - 🐯 2023/04/19 Adopted the ChuanhuChatGPT skin
 - 📱 2023/04/19 Added web search; make sure your network connection is working! (Thanks to [@wanghao07456](https://github.com/wanghao07456) for the idea)
 - 📚 2023/04/18 Added knowledge-base selection to the web UI
@@ -24,6 +31,14 @@ https://github.com/yanqiangmiffy/Chinese-LangChain

 ## 🧰 Knowledge bases

+### Building a knowledge base
+
+- Wikipedia-zh
+
+> See corpus/zh_wikipedia/README.md for details
+
+### Knowledge base vector indexes
+
 | Knowledge base data | FAISS index |
 |--------|----|
 |💹 [Large-scale financial research report knowledge graph](http://openkg.cn/dataset/fr2kg)| Link: https://pan.baidu.com/s/1FcIH5Fi3EfpS346DnDu51Q?pwd=ujjv Extraction code: ujjv |
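For anyone downloading one of the prebuilt FAISS indexes listed above, here is a minimal sketch of loading and querying the index with LangChain. It assumes the files were unpacked to cache/zh_wikipedia/ and that the same text2vec-large-chinese embedding model referenced in create_knowledge.py is available locally; paths are illustrative, not part of this commit.

```python
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# The embedding model must match the one used to build the index.
embeddings = HuggingFaceEmbeddings(model_name='/root/pretrained_models/text2vec-large-chinese')

# Load the index previously saved with vector_store.save_local(...).
vector_store = FAISS.load_local('cache/zh_wikipedia/', embeddings)

# Retrieve the three passages most similar to the query.
print(vector_store.similarity_search('数学是什么', k=3))
```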
corpus/zh_wikipedia/README.md
ADDED
@@ -0,0 +1,114 @@
## Building the knowledge base

### 1 Building from Wikipedia

Reference tutorial: https://blog.51cto.com/u_15127535/2697309

I. Wikipedia

Wikipedia is a multilingual, collaboratively edited online encyclopedia built on wiki technology. It was created by Jimmy Wales and Larry Sanger; the website launched on January 13, 2001, and the encyclopedia project formally started on January 15, 2001.

II. Processing the Wikipedia dump

1 Environment setup

(1) The programming language is Python 3.

(2) The Gensim third-party library. Gensim is a Python toolkit that includes a class for processing the Chinese Wikipedia dump, which makes it very convenient to use.

Gensim: https://github.com/RaRe-Technologies/gensim

Install it with `pip install gensim`.
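As a quick illustration of that Gensim class, here is a minimal sketch of streaming article text straight out of a dump file (illustrative only; the script this commit actually adds is corpus/zh_wikipedia/wiki_process.py):

```python
from gensim.corpora import WikiCorpus

# Stream plain-text articles directly from the compressed dump.
wiki = WikiCorpus('zhwiki-latest-pages-articles.xml.bz2', dictionary={})
for tokens in wiki.get_texts():
    print(' '.join(tokens))  # one article, tokens separated by spaces
    break                    # only show the first article
```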
(3) The OpenCC third-party library, which converts between Chinese character variants, including simplified and traditional Chinese.

OpenCC: https://github.com/BYVoid/OpenCC. The OpenCC source is written in C++; if you are comfortable with C++, you can build it from source with make as described in its documentation.

OpenCC also has a Python implementation that can be installed with pip (`pip install opencc-python`). It is slower than the C++ version, but it is easy to install and use, so the pip install is recommended.
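The Python API is tiny; a minimal sketch of a traditional-to-simplified conversion, using the same 't2s' configuration that corpus/zh_wikipedia/chinese_t2s.py (added in this commit) relies on:

```python
import opencc

cc = opencc.OpenCC('t2s')                # traditional -> simplified
print(cc.convert('歐幾里得 幾何之父'))    # -> 欧几里得 几何之父
```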
2 Downloading the data

The Chinese Wikipedia dump is refreshed monthly; normally you download the latest version from https://dumps.wikimedia.org/zhwiki/latest/. The file used here is zhwiki-latest-pages-articles.xml.bz2.

A Chinese Wikipedia dump generally contains several parts; for training word vectors only the article text is needed, and that is the part we process below.
3 Extracting the text

The downloaded dump is a compressed file (bz2, gz) and does not need to be unpacked. A ready-made script that uses gensim to process the Wikipedia dump is available here:

wikidata_process: https://github.com/bamtercelboo/corpus_process_script/tree/master/wikidata_process

Usage:

python wiki_process.py zhwiki-latest-pages-articles.xml.bz2 zhwiki-latest.txt

This step takes a while; once it finishes you have a plain-text file of Chinese Wikipedia article text (zhwiki-latest.txt).

The output file looks like:

歐幾里得 西元前三世紀的古希臘數學家 現在被認為是幾何之父 此畫為拉斐爾的作品 雅典學院 数学 是利用符号语言研究數量 结构 变化以及空间等概念的一門学科
4 Converting traditional Chinese to simplified Chinese

The file produced by the script above contains a large amount of traditional Chinese, which we need to convert to simplified Chinese.

We use OpenCC for the conversion; a ready-made Python script is available here:

chinese_t2s: https://github.com/bamtercelboo/corpus_process_script/tree/master/chinese_t2s

Usage:

python chinese_t2s.py --input input_file --output output_file

For example:

python chinese_t2s.py --input zhwiki-latest.txt --output zhwiki-latest-simplified.txt

The output file looks like:

欧几里得 西元前三世纪的古希腊数学家 现在被认为是几何之父 此画为拉斐尔的作品 雅典学院 数学 是利用符号语言研究数量 结构 变化以及空间等概念的一门学科
5 Cleaning the corpus

The steps above already produce the data we want, but some tasks need a bit more processing. For a word-vector task, for example, the data still contains English, Japanese, German, Chinese punctuation, mojibake and other characters, and we want to strip all of that and keep only Chinese characters. Keeping only Chinese characters is just one possible policy; different tasks call for different cleaning. A ready-made script is available here:

clean: https://github.com/bamtercelboo/corpus_process_script/tree/master/clean

Usage:

python clean_corpus.py --input input_file --output output_file

For example:

python clean_corpus.py --input zhwiki-latest-simplified.txt --output zhwiki-latest-simplified_cleaned.txt

Effect:

input:

哲学 哲学(英语:philosophy)是对普遍的和基本的问题的研究,这些问题通常和存在、知识、价值、理性、心灵、语言等有关。

output:

哲学哲学英语是对普遍的和基本的问题的研究这些问题通常和存在知识价值理性心灵语言等有关
+
三、数据处理脚本
|
111 |
+
|
112 |
+
近在github上新开了一个Repositorycorpus-process-scripthttps://github.com/bamtercelboo/corpus_process_script在这个repo,将存放中英文数据处理脚本,语言不限,会有详细的README,希望对大家能有一些帮助。
|
113 |
+
References
|
114 |
+
|
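Putting the three steps together, the whole preprocessing pipeline described above can be driven from one small Python script; a sketch only, assuming the dump file and the three scripts sit in the current working directory:

```python
import subprocess

steps = [
    # 1. extract article text from the compressed dump
    ['python', 'wiki_process.py', 'zhwiki-latest-pages-articles.xml.bz2', 'zhwiki-latest.txt'],
    # 2. convert traditional Chinese to simplified Chinese
    ['python', 'chinese_t2s.py', '--input', 'zhwiki-latest.txt', '--output', 'zhwiki-latest-simplified.txt'],
    # 3. keep Chinese characters only
    ['python', 'clean_corpus.py', '--input', 'zhwiki-latest-simplified.txt', '--output', 'zhwiki-latest-simplified_cleaned.txt'],
]

for cmd in steps:
    subprocess.run(cmd, check=True)
```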
corpus/zh_wikipedia/chinese_t2s.py
ADDED
@@ -0,0 +1,82 @@
#!/usr/bin/env python
# -*- coding:utf-8 _*-
"""
@author:quincy qiang
@license: Apache Licence
@file: chinese_t2s.py
@time: 2023/04/19
@contact: yanqiangmiffy@gamil.com
@software: PyCharm
@description: coding..
"""
import sys
import os
import opencc
from optparse import OptionParser


class T2S(object):
    def __init__(self, infile, outfile):
        self.infile = infile
        self.outfile = outfile
        self.cc = opencc.OpenCC('t2s')
        self.t_corpus = []
        self.s_corpus = []
        self.read(self.infile)
        self.t2s()
        self.write(self.s_corpus, self.outfile)

    def read(self, path):
        print(path)
        if os.path.isfile(path) is False:
            print("path is not a file")
            exit()
        now_line = 0
        with open(path, encoding="UTF-8") as f:
            for line in f:
                now_line += 1
                line = line.replace("\n", "").replace("\t", "")
                self.t_corpus.append(line)
        print("read finished")

    def t2s(self):
        now_line = 0
        all_line = len(self.t_corpus)
        for line in self.t_corpus:
            now_line += 1
            if now_line % 1000 == 0:
                sys.stdout.write("\rhandling with the {} line, all {} lines.".format(now_line, all_line))
            self.s_corpus.append(self.cc.convert(line))
        sys.stdout.write("\rhandling with the {} line, all {} lines.".format(now_line, all_line))
        print("\nhandling finished")

    def write(self, list, path):
        print("writing now......")
        if os.path.exists(path):
            os.remove(path)
        file = open(path, encoding="UTF-8", mode="w")
        for line in list:
            file.writelines(line + "\n")
        file.close()
        print("writing finished.")


if __name__ == "__main__":
    print("Traditional Chinese to Simplified Chinese")
    # input = "./wiki_zh_10.txt"
    # output = "wiki_zh_10_sim.txt"
    # T2S(infile=input, outfile=output)

    parser = OptionParser()
    parser.add_option("--input", dest="input", default="", help="traditional file")
    parser.add_option("--output", dest="output", default="", help="simplified file")
    (options, args) = parser.parse_args()

    input = options.input
    output = options.output

    try:
        T2S(infile=input, outfile=output)
        print("All Finished.")
    except Exception as err:
        print(err)
corpus/zh_wikipedia/clean_corpus.py
ADDED
@@ -0,0 +1,88 @@
#!/usr/bin/env python
# -*- coding:utf-8 _*-
"""
@author:quincy qiang
@license: Apache Licence
@file: clean_corpus.py
@time: 2023/04/19
@contact: yanqiangmiffy@gamil.com
@software: PyCharm
@description: coding..
"""
"""
FILE : clean_corpus.py
FUNCTION : None
"""
import sys
import os
from optparse import OptionParser


class Clean(object):
    def __init__(self, infile, outfile):
        self.infile = infile
        self.outfile = outfile
        self.corpus = []
        self.remove_corpus = []
        self.read(self.infile)
        self.remove(self.corpus)
        self.write(self.remove_corpus, self.outfile)

    def read(self, path):
        print("reading now......")
        if os.path.isfile(path) is False:
            print("path is not a file")
            exit()
        now_line = 0
        with open(path, encoding="UTF-8") as f:
            for line in f:
                now_line += 1
                line = line.replace("\n", "").replace("\t", "")
                self.corpus.append(line)
        print("read finished.")

    def remove(self, list):
        print("removing now......")
        for line in list:
            re_list = []
            for word in line:
                if self.is_chinese(word) is False:
                    continue
                re_list.append(word)
            self.remove_corpus.append("".join(re_list))
        print("remove finished.")

    def write(self, list, path):
        print("writing now......")
        if os.path.exists(path):
            os.remove(path)
        file = open(path, encoding="UTF-8", mode="w")
        for line in list:
            file.writelines(line + "\n")
        file.close()
        print("writing finished")

    def is_chinese(self, uchar):
        """Return True if the unicode character is a Chinese (CJK) character."""
        if (uchar >= u'\u4e00') and (uchar <= u'\u9fa5'):
            return True
        else:
            return False


if __name__ == "__main__":
    print("clean corpus")

    parser = OptionParser()
    parser.add_option("--input", dest="input", default="", help="input file")
    parser.add_option("--output", dest="output", default="", help="output file")
    (options, args) = parser.parse_args()

    input = options.input
    output = options.output

    try:
        Clean(infile=input, outfile=output)
        print("All Finished.")
    except Exception as err:
        print(err)
corpus/zh_wikipedia/wiki_process.py
ADDED
@@ -0,0 +1,46 @@
#!/usr/bin/env python
# -*- coding:utf-8 _*-
"""
@author:quincy qiang
@license: Apache Licence
@file: wiki_process.py
@time: 2023/04/19
@contact: yanqiangmiffy@gamil.com
@software: PyCharm
@description: https://blog.csdn.net/weixin_40871455/article/details/88822290
"""
import logging
import sys
from gensim.corpora import WikiCorpus

logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s', level=logging.INFO)
'''
extract data from wiki dumps(*articles.xml.bz2) by gensim.
@2019-3-26
'''


def help():
    print("Usage: python wikipro.py zhwiki-20190320-pages-articles-multistream.xml.bz2 wiki.zh.txt")


if __name__ == '__main__':
    if len(sys.argv) < 3:
        help()
        sys.exit(1)
    logging.info("running %s" % ' '.join(sys.argv))
    inp, outp = sys.argv[1:3]
    i = 0

    output = open(outp, 'w', encoding='utf8')
    wiki = WikiCorpus(inp, dictionary={})
    for text in wiki.get_texts():
        output.write(" ".join(text) + "\n")
        i = i + 1
        if (i % 10000 == 0):
            logging.info("Save " + str(i) + " articles")
    output.close()
    logging.info("Finished saved " + str(i) + " articles")

    # Run from the command line:
    # python wikipro.py cache/zh_wikipedia/zhwiki-latest-pages-articles.xml.bz2 wiki.zh.txt
create_knowledge.py
CHANGED
@@ -10,7 +10,8 @@
 @description: - emoji:https://emojixd.com/pocket/science
 """
 import os
-
+import pandas as pd
+from langchain.schema import Document
 from langchain.document_loaders import UnstructuredFileLoader
 from langchain.embeddings.huggingface import HuggingFaceEmbeddings
 from langchain.vectorstores import FAISS
@@ -20,6 +21,9 @@ embedding_model_name = '/root/pretrained_models/text2vec-large-chinese'
 docs_path = '/root/GoMall/Knowledge-ChatGLM/cache/financial_research_reports'
 embeddings = HuggingFaceEmbeddings(model_name=embedding_model_name)

+
+# Wikipedia data processing
+
 # docs = []
 # with open('docs/zh_wikipedia/zhwiki.sim.utf8', 'r', encoding='utf-8') as f:
@@ -30,13 +34,46 @@ embeddings = HuggingFaceEmbeddings(model_name=embedding_model_name)
 # vector_store = FAISS.from_documents(docs, embeddings)
 # vector_store.save_local('cache/zh_wikipedia/')

+
+
 docs = []

-
-
-
-
-
-
+with open('cache/zh_wikipedia/wiki.zh-sim-cleaned.txt', 'r', encoding='utf-8') as f:
+    for idx, line in tqdm(enumerate(f.readlines())):
+        metadata = {"source": f'doc_id_{idx}'}
+        docs.append(Document(page_content=line.strip(), metadata=metadata))
+
+vector_store = FAISS.from_documents(docs, embeddings)
+vector_store.save_local('cache/zh_wikipedia/')
+
+
+# Financial research report data processing
+# docs = []
+#
+# for doc in tqdm(os.listdir(docs_path)):
+#     if doc.endswith('.txt'):
+#         # print(doc)
+#         loader = UnstructuredFileLoader(f'{docs_path}/{doc}', mode="elements")
+#         doc = loader.load()
+#         docs.extend(doc)
+# vector_store = FAISS.from_documents(docs, embeddings)
+# vector_store.save_local('cache/financial_research_reports')
+
+
+# League of Legends
+
+docs = []
+
+lol_df = pd.read_csv('cache/lol/champions.csv')
+# lol_df.columns = ['id', '英雄简称', '英雄全称', '出生地', '人物属性', '英雄类别', '英雄故事']
+print(lol_df)
+
+for idx, row in lol_df.iterrows():
+    metadata = {"source": f'doc_id_{idx}'}
+    text = ' '.join(row.values)
+    # for col in ['英雄简称', '英雄全称', '出生地', '人物属性', '英雄类别', '英雄故事']:
+    #     text += row[col]
+    docs.append(Document(page_content=text, metadata=metadata))
+
 vector_store = FAISS.from_documents(docs, embeddings)
-vector_store.save_local('cache/
+vector_store.save_local('cache/lol/')
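One caveat with the new League of Legends block: `' '.join(row.values)` assumes every CSV column is already a string, and the loops rely on `tqdm` being imported elsewhere in the file. A minimal, hedged variant of the row-to-document step that casts each cell first (illustrative only, not part of this commit):

```python
from langchain.schema import Document


def rows_to_docs(df):
    """Turn each dataframe row into a LangChain Document, casting cells to str."""
    docs = []
    for idx, row in df.iterrows():
        metadata = {"source": f"doc_id_{idx}"}
        text = " ".join(str(v) for v in row.values)  # numeric columns won't break the join
        docs.append(Document(page_content=text, metadata=metadata))
    return docs

# usage: docs = rows_to_docs(lol_df)
```

With that helper, `docs = rows_to_docs(lol_df)` would replace the explicit loop.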
images/wiki_process.png
ADDED
resources/OpenCC-1.1.6-cp310-cp310-manylinux1_x86_64.whl
ADDED
Binary file (778 kB)