Text Mining Applications in "Smart Government"
Abstract
In recent years, as WeChat, Weibo, mayors' mailboxes, sunshine hotlines, and other online platforms for public participation have gradually become important channels for the government to understand public opinion, pool public wisdom, and rally public morale, the volume of text data on social conditions and public sentiment has risen steadily. This poses a great challenge to the departments that have so far relied mainly on manual work to categorize messages and compile hot topics. At the same time, with the development of big data, cloud computing, artificial intelligence, and related technologies, building a smart-government system based on natural language processing has become a new trend in the innovation of social governance, one that can greatly improve the government's management capability and administrative efficiency.
For Problem 1, we first deduplicate the contents of the "message subject" and "message details" fields and remove missing values. The message text is segmented with jieba, stop words are removed using the Harbin Institute of Technology stop-word list, and 16 keywords are extracted from each document with the TF-IDF algorithm. Each document's keywords are one-hot encoded and fed to a Naive Bayes classifier: we first estimate the prior probability of the keywords in each category, then apply Bayes' theorem to compute the posterior probability that the document's keywords belong to each category, and finally assign the document to the category with the maximum a posteriori (MAP) estimate.
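The snippet below is a minimal sketch of this classification pipeline in Python, assuming jieba and scikit-learn; the sample messages, category labels, and the stop-word file path are hypothetical placeholders rather than the competition data.

```python
import jieba.analyse
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.naive_bayes import BernoulliNB

# Hypothetical training messages and their category labels.
texts = ["小区附近工地夜间施工噪音扰民", "希望增开公交线路方便学生上学"]
labels = ["环境污染", "交通出行"]

# The HIT stop-word list could be registered here, e.g.:
# jieba.analyse.set_stop_words("hit_stopwords.txt")  # hypothetical path

def top_keywords(text, k=16):
    """Segment with jieba and keep the top-k keywords ranked by TF-IDF."""
    return jieba.analyse.extract_tags(text, topK=k)

keyword_lists = [top_keywords(t) for t in texts]

# One-hot encode each document's keyword set over the shared vocabulary.
encoder = MultiLabelBinarizer()
X = encoder.fit_transform(keyword_lists)

# BernoulliNB learns per-category keyword probabilities and predicts
# the category with the maximum posterior probability (MAP).
clf = BernoulliNB()
clf.fit(X, labels)

new_doc = encoder.transform([top_keywords("道路破损严重影响出行")])
print(clf.predict(new_doc))
```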
For Problem 2, because computing pairwise text similarity has a very high time complexity, we instead use an LDA topic model to summarize and cluster the hot issues. Keywords are extracted with the TF-IDF algorithm, and the LDA topic model, a three-layer Bayesian generative model over documents, topics, and words, is used to uncover the latent semantic structure of the document collection. We train the model with a preset number of 70 topics. Each topic is labeled manually from its 10 highest-frequency words, and each document is assigned to the category whose topic its feature words belong to with the highest probability. The hot issues are then compiled into a table.
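As a sketch of this topic-modeling step, the snippet below uses gensim's LdaModel under the assumptions stated above (70 topics, the top 10 words of each topic inspected for manual labeling); the two sample documents are hypothetical and far too few for a real 70-topic model, which would be trained on the full message corpus.

```python
import jieba
from gensim import corpora
from gensim.models import LdaModel

# Hypothetical corpus of segmented messages (stop words already removed).
docs = [list(jieba.cut(t)) for t in ["地铁口共享单车乱停乱放占用人行道",
                                     "小区垃圾清运不及时异味扰民"]]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Train the LDA model with the preset number of 70 topics.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=70, passes=10)

# Print the 10 highest-weight words of a few topics so a human can
# assign each topic a category label, as described above.
for topic_id, words in lda.show_topics(num_topics=5, num_words=10):
    print(topic_id, words)

# Assign a document to its most probable topic.
doc_topics = lda.get_document_topics(corpus[0])
print("most probable topic:", max(doc_topics, key=lambda p: p[1]))
```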
For Problem 3, relevance is measured by computing the similarity between a message and its reply. Based on the typical wording of replies, we build an expert dictionary and compute the similarity between the expert dictionary and the reply content to judge the reply's interpretability and completeness; all similarities are computed with cosine similarity. Because one-hot word vectors contain only 0s and 1s, they are poorly suited to similarity computation, and one-hot encoding also ignores word order and the semantics of the whole sentence. In this problem we therefore use word2vec to vectorize the messages and replies, mapping each token into an 80-dimensional vector space and representing a sentence by the average of its word vectors. For segmentation we no longer extract keywords and only remove stop words, because word2vec needs the full context to produce complete, well-defined word vectors. We compute the relevance, completeness, and interpretability of every reply, sum the resulting similarities, normalize the total, and judge the quality of a reply by this score.
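A minimal sketch of this scoring step, assuming gensim's Word2Vec: each sentence is reduced to the mean of its 80-dimensional word vectors and compared by cosine similarity, cos(a, b) = a·b / (|a||b|). The training sentences below are hypothetical; a real model would be trained on the full message-and-reply corpus.

```python
import numpy as np
import jieba
from gensim.models import Word2Vec

# Hypothetical corpus: a segmented message and its reply, stop words removed.
sentences = [list(jieba.cut(t)) for t in
             ["反映小区冬季供暖温度不达标", "已责成供热公司检修管网尽快恢复供暖"]]

# Train 80-dimensional word vectors, matching the dimension used above.
w2v = Word2Vec(sentences, vector_size=80, window=5, min_count=1)

def sentence_vector(tokens, model):
    """Represent a sentence by the average of its word vectors."""
    vecs = [model.wv[w] for w in tokens if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

msg_vec = sentence_vector(sentences[0], w2v)
reply_vec = sentence_vector(sentences[1], w2v)
print("relevance:", cosine(msg_vec, reply_vec))
```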
Keywords: Chinese word segmentation, TF-IDF algorithm, Naive Bayes algorithm, LDA topic model, word2vec word vectors, similarity calculation
Contents
2. Analysis Methods and Process