%---------------------------------------------------------------------------%
%->> Frontmatter
%---------------------------------------------------------------------------%
%-
%-> Cover page
%-
\maketitle% generate the Chinese cover page
% \MAKETITLE% generate the English cover page
%-
%-> Author's declaration
%-
% \makedeclaration% generate the declaration page
%-
%-> Chinese abstract
%-
% \intobmk\chapter*{摘要}% shown in the PDF bookmarks but not in the table of contents
% syntax: \chapter[ToC entry]{title}\chaptermark{page header}
\chapter[摘要]{\MyTitleCh}\chaptermark{摘要}
\setcounter{page}{1}% first page number
\pagenumbering{Roman}% page numbering style

\begin{center}
\vspace{-0.3cm}
\zihao{3} \songti 摘要
\vspace{0.3cm}
\end{center}


Graph data are ubiquitous in real-life applications such as social networks and transportation networks. With the advent of the big data era, how to handle exponentially growing graph-structured data has received increasing attention. Meanwhile, neural network-based machine learning methods have made remarkable achievements in other fields, which naturally motivates introducing neural network methods into the graph domain. Graph representation encodes the nodes of a graph, or the entire graph, as vectors; graph embedding is a typical example. How to represent graph data with complex structure as regular vectors while still preserving the features of the graph structure remains a difficult research problem.

Building on commonly used graph neural network (Graph Neural Networks, GNNs) models combined with a higher-order structural property of graphs, this thesis explores how to obtain better graph representations and thereby improve the performance of downstream machine learning tasks. Specifically, it examines how to capture the local structure of a graph through small induced subgraphs (graphlets), derives from them a local structural feature vector for each node, the graphlet degree vector (Graphlet Degree Vector, GDV), and constructs a graph classification model based on commonly used GNN models and hierarchical pooling methods. By applying the model to three classical graph classification datasets, this thesis investigates whether introducing GDVs into GNN models can exploit the local structural properties of graphs and yield better classification results.
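As an illustration only, the step of using GDVs as initial node attributes might be sketched as below. The function name, the log scaling, and the 15-orbit dimension (orbits of graphlets with up to four nodes) are assumptions for the sketch, not the thesis's exact pipeline:

```python
import numpy as np

def attach_gdv(node_features, gdv):
    """Concatenate each node's graphlet degree vector (GDV) onto its
    input features, forming the initial node attributes fed to a GNN."""
    assert node_features.shape[0] == gdv.shape[0]
    # Log-scale the raw graphlet counts so high-degree nodes do not dominate.
    gdv_scaled = np.log1p(gdv.astype(float))
    return np.concatenate([node_features, gdv_scaled], axis=1)

# Toy example: 4 nodes, 3 input features, 15-dimensional GDVs.
x = np.random.rand(4, 3)
gdv = np.random.randint(0, 10, size=(4, 15))
h0 = attach_gdv(x, gdv)
print(h0.shape)  # (4, 18)
```

The augmented matrix `h0` would then play the role of the initial node embedding in whichever GNN layer (GCN, GAT, etc.) follows.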

To verify the effectiveness of the method, a control experiment is designed: the GDV of each node is randomly permuted, so that the local features of the graph correspond to those of a random graph, allowing a comparison of whether introducing GDVs into GNNs improves classification accuracy. The results show that with the GCN-based model on the DD dataset, the average test set accuracy with GDVs is 3.13\% higher than without; with the GAT-based model on the MUTAG dataset, 12.59\% higher; and with the GAT-based model on the NCI1 dataset, 2.27\% higher. Moreover, all control groups perform worse than the model that uses GDVs as initial attributes, which indicates that the proposed method of introducing GDVs is effective.
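The control condition can be sketched as follows; one plausible reading (an assumption of this sketch, not necessarily the thesis's exact procedure) is that the GDV rows are shuffled across nodes, so every node still carries some GDV from the graph, just not its own:

```python
import numpy as np

def permute_gdv(gdv, rng):
    """Control group: shuffle the GDV matrix across nodes, so each node
    is assigned the local-structure vector of a random other node."""
    perm = rng.permutation(gdv.shape[0])
    return gdv[perm]

rng = np.random.default_rng(0)
gdv = np.arange(12).reshape(4, 3)  # toy GDV matrix: 4 nodes, 3 orbits
shuffled = permute_gdv(gdv, rng)
# The multiset of rows is unchanged; only the node assignment differs.
assert sorted(map(tuple, shuffled)) == sorted(map(tuple, gdv))
```

In the experiments, the permuted GDVs would replace the true GDVs as the model's initial attributes, with everything else held fixed.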


{
    \zihao{5}
    \keywords{图神经网络；图卷积神经网络；图嵌入；图分类；图池化}% Chinese keywords
}
%-
%-> English abstract
%-
% \intobmk\chapter*{Abstract}% shown in the PDF bookmarks but not in the table of contents
\chapter[Abstract]{\MyTitleEn}\chaptermark{Abstract}

\begin{center}
\vspace{-0.3cm}
\zihao{3} \songti Abstract
\vspace{0.3cm}
\end{center}

Graph data are ubiquitous in real-life applications such as social networks and transportation networks. With the advent of the big data era, how to handle exponentially growing graph-structured data has received more and more attention. Meanwhile, various neural network-based machine learning methods have made remarkable achievements in other fields, which naturally motivates introducing neural network methods into the graph domain. Graph representation encodes the nodes of a graph, or the entire graph, as vectors; graph embedding techniques are a typical example. How to represent graph data with complex structure as regular vectors while still retaining the features of the graph structure remains a difficult research problem.

In this thesis, we focus on how to better utilize graph neural network (GNN) models, combined with a higher-order structural property of graphs, to obtain better graph representations and improve the performance of downstream machine learning tasks. Specifically, we explore how to capture the local structure of a graph through small induced subgraphs (graphlets), derive from them the local structural feature vector of each node, the graphlet degree vector (GDV), and construct a graph classification model based on commonly used GNN models and hierarchical pooling methods. By applying the model to three classical graph classification datasets, we examine whether introducing GDVs into GNNs can exploit the local structural properties of graphs and obtain better classification results.

To verify the effectiveness of the method, a control experiment is designed: the GDV of each node is randomly permuted, so that the local features of the graph correspond to those of a random graph, allowing a comparison of whether introducing GDVs into GNNs improves classification accuracy. The results show that with the GCN-based model on the DD dataset, the average test set accuracy with GDVs is 3.13\% higher than without; with the GAT-based model on the MUTAG dataset, 12.59\% higher; and with the GAT-based model on the NCI1 dataset, 2.27\% higher. Moreover, all control groups perform worse than the model that uses GDVs as initial attributes, which indicates that the method proposed in this thesis for introducing GDVs is effective.

\KEYWORDS{Graph Neural Network; Graph Convolutional Neural Network; Graph Embedding; Graph Classification; Graph Pooling}% English keywords
%---------------------------------------------------------------------------%
