\documentclass[11pt]{article}
\usepackage[top=0.8in, bottom=0.8in, left=1in, right=1in]{geometry}
\usepackage{graphicx}
\usepackage{hyperref}
\setlength{\parindent}{0pt}
\renewcommand{\baselinestretch}{1.1}


\begin{document}
\title{IMPROVEMENTS ON MAX AGGREGATE QUERIES IN THE NEO4J GRAPH DATABASE}
\author{Fan Yang and Sheng Zha \\
\emph{Department of Computer Science, University of Maryland, College Park, 20742, USA} \\
\texttt{\{fyang,szha\}@cs.umd.edu}}
\date{}

\maketitle

\begin{abstract}
The rapid growth of social networks and advances in biomedical research have led to the
wide adoption of graph databases. Support for aggregate queries on such databases, however,
is still quite limited, and it is hard to devise intuitive, efficient evaluation strategies
for them. In this paper, we explore the aggregate query solutions in a popular, open-source
database management system, Neo4j. We discuss the limitations of the existing solutions in
this system, and we propose our own solution to address these shortcomings. We focus on a
particular type of aggregate query, the MAX query, and we introduce a new query keyword,
which we call MAXB.
\end{abstract}

\section{Introduction}

Recently, the graph-based data model has been proposed and widely adopted
in database design and implementation, especially for social
networks and biological networks.
%
The traditional relational and hierarchical models share the basic assumption
that some elements take precedence over others, e.g. as primary
keys or parent nodes.
%
In contrast, the graph model is built on the premise that all elements
are equal: no single resource has any particular
importance over another.

Although the graph data model better matches the structure of
social networks, tools for querying graph databases are still
not well developed.
%
For the canonical relational model, relational algebra conveniently supports
queries such as aggregates and ranking.
%
Simply computing the average of a single column already gives
us a view of the data distribution.
%
For a graph database, however, it is difficult to intuitively
devise an effective query language for aggregate queries
because of the intrinsic complexity of the data structure.
%
For instance, standard SQL is not directly applicable if we
want to efficiently select a given number of Facebook users within
three degrees of friendship of a celebrity.

Based on the above discussion, we aim to propose a possible
way to improve aggregate queries for graph databases.
%
Several graph query languages have been proposed that let users
run aggregate queries in acceptable time with promising results,
but the aggregate query capabilities of graph databases are
still not satisfying.
%
Therefore, our project focuses on possible improvements
to aggregate queries for graph databases.


With the prevalence of graph databases in biological and social networks,
it is becoming more and more important to devise effective graph models
and query languages to handle situations more complicated than those found
in traditional relational databases.
%
In our work, we mainly focus on extending existing query languages rather than proposing
an entirely new graph data model.
%
Although several query languages have been proposed, we find that most of them support
only limited queries, such as addition, deletion, and join.
%
These basic components alone, however, are far from enough to deal with the various complex
cases that arise in graph databases.
%
For instance, aggregate functionality is desirable when we
want to count, or average, the values of an attribute under a specific condition.
%
Therefore, we aim to improve the performance of aggregate queries for analyzing and mining
data in graph databases.
%
In the following sections, we first provide an overview of the development of graph-based data
models and databases, then introduce a number of query languages for graph databases to set
the context for our work.

\section{Related work}
\subsection{Graph Database}
There is a rising trend in the database community toward representing data from the
real world as a graph.
%
Since real-world data is intrinsically interconnected, especially in
social networks, biological networks and geographic information systems, a data
model supporting such connections through the concepts of nodes and edges is desirable.
%
Traditional relational and network models, however, cannot
support these concepts well.
%
With the development of database systems and the increasing demand for graph-like data
representation, researchers have devised several graph databases.
%
%

The graph data model and the object-oriented model emerged at almost the same time.
%
Although the progress of graph databases followed no single clear trajectory, some important works are regarded as milestones.
%
In early years, Shipman~\cite{shipman1981} proposed a functional data model with the DAPLEX data language.
%
It used graph concepts by representing actual and visual data relationships,
though it never explicitly used the term ``graph''.
%
Later, based on Hull and Yap's format model~\cite{hull1984}, Kuper and Vardi~\cite{kuper1993} proposed the
Logical Data Model, an explicit directed-graph model
whose leaves and internal nodes represent data and the connections among data.
%
Levene and Poulovassilis proposed the Hypernode model~\cite{levene1990}, where hypernodes are graphs
whose nodes can themselves contain graphs, allowing arbitrarily complex objects.
%
They also proposed the GROOVY data model~\cite{levene1994}, whose graphical representation provides
a simple unifying formalism for the main concepts of object-oriented data modeling.
%
Mainguenaud~\cite{Mainguenaud1992} presented multi-level networks, introducing master nodes to
describe a sub-network across multiple levels of definition.
%
This approach incorporated graph theory into the object-oriented framework to represent
network-oriented data.
%
Mainguenaud~\cite{Mainguenaud1995} also applied a similar idea to geographic information systems and
introduced the concept of the ``Open Graph Assumption''.

There are several significant application areas for graph databases, one of which is biological networks.
%
Graves \emph{et al.}~\cite{Graves1994,Graves1995} studied representing genomic data with graphs and used a graph
database to store mapping data.
%
Since data from the biological domain exhibits much more complex relationships
than data from ordinary life, a graph-based database can naturally capture
such intricate structure.
%
Frishman \emph{et al.}~\cite{frishman1998} gave a thorough review of the development of biological databases,
with an in-depth investigation of data representation models, data quality, data
management software, and related issues.

With the development of graph theory for databases, a number of graph data models
and graph database systems and projects have been proposed and applied in many areas.
%
Gemis \emph{et al.}~\cite{gemis1993} proposed a graph-oriented database management system named GOOD,
together with a new query language for finding the desired information in
a complicated database.
%
Amann and Scholl~\cite{amann1992} presented a graph data model called GRAM, in which regular expressions
over the types of node and edge labels are used to qualify connected subgraphs.
%
They also proposed an algebraic language for hypertext querying.
%
G{\"u}ting proposed GraphDB~\cite{guting1994}, an object-oriented data model.
%
GraphDB partitions object classes into simple classes, link classes,
and path classes, whose objects can
be viewed as the nodes, edges, and explicitly stored
paths of a graph, thereby allowing an explicit representation of graphs.
%
It also comes with a new query language.
%
Kiesel \emph{et al.}~\cite{kiesel1995} introduced their GRAS database system, which efficiently
handles different types of coarse- and fine-grained objects,
hierarchical and non-hierarchical relations between objects, and
attributes of widely varying size.
%
Later, the Resource Description Framework (RDF)~\cite{rdf} was proposed for representing information
on the Web.
%
RDF is designed to handle Web-specific tasks such as character normalization and URI references.
%
Besides, Mart{\'i}nez-Bazan \emph{et al.}~\cite{dex2007} proposed DEX, a high-performance graph database querying system.
%
DEX allows the integration of multiple data sources and is well suited to
research on large graph databases.
%
Currently, Neo4j~\cite{neo4j} is the most popular open-source graph database.
%
It provides a Java API for building high-performance, disk-based and robust graph databases.



\subsection{Queries for Graph Database}
Executing a query to find the desired information in a graph database is very
different from querying a traditional relational database.
%
Since the standard SQL language does not support the concepts of objects, nodes, edges and
edge weights, it is not directly applicable to graph databases.
%
For instance, it is impossible for standard SQL to find the shortest path between
a node and its neighbors within a specific distance.
%
Therefore, numerous query languages have been proposed to handle queries on graph databases.
%
We briefly review several prominent query languages and analyze
their main characteristics.

In the early years, researchers often introduced query languages together with their
graph databases and models.
%
Levene and Poulovassilis~\cite{levene1990} introduced a declarative logic-based language supporting
queries and updates for the hypernode model.
%
Using the set of rules in a hypernode program, they defined the operational semantics of such programs in
terms of an INFER operator, which infers new hypernodes
from a set of existing hypernodes.
%
In~\cite{guting1994}, a query language for GraphDB was proposed.
%
It covers only several basic elements rather than being a complete query language.
%
Its tools include the derive statement, the rewrite operation, the union operation and
shortest-path search.
%
However, it does not implement aggregate queries.
%
GOOD~\cite{good1990} provides a graphical transformation language whose
basic operations rely on pattern matching.
%
This language is derived from graph grammars and conventional grammars.
%
It includes five basic operations, four of which are the addition of nodes, addition of edges,
deletion of nodes and deletion of edges.
%
The fifth operation, abstraction, is used to group objects on the basis
of some of their properties.
%
The abstraction operation creates a unique object for each equivalence class of
duplicate objects.
%
It can be seen as the ``group by'' operation of the standard SQL language.
%
This language was followed by GRAM~\cite{amann1992}, which includes a query algebra where regular expressions
over data types are used to select walks
in a graph.
%
Here, a ``walk'' is a path from one node to another.
%
Walks are combined into sets of walks to form hyperwalks.
%
The query language is based on the hyperwalk algebra, which includes unary operations
such as projection and selection, and binary operations such as join, concatenation and set
operations.
%
Paredaens \emph{et al.}~\cite{Paredaens1995} introduced G-Log, a declarative
query language based on graphs, which combines the expressive
power of logic, the modeling power of complex objects with identity,
and the representation power of graphs.
%
The authors claimed that, at the time, G-Log was the only
nondeterministic and computationally complete language that
does not suffer from the copy-elimination problem.
%
Sheng \emph{et al.}~\cite{sheng1999} proposed GOQL, which is derived from the OQL used for
the object-oriented data model.
%
GOQL extends OQL with the functionality to handle nodes, edges and
paths.
%
Besides, GOQL also uses the ``select ... from ...'' statement for querying,
which is consistent with OQL's query format and thus easy to
understand for people familiar with OQL.
%
GOQL also provides the temporal operators next, until and
connected for queries involving the relative ordering of sequence
elements.
%
Unfortunately, GOQL still has no aggregate functionality.
%
Hidders~\cite{hiddders2003} presented a graph-based data model called GDM, introducing
two graph-manipulation operations, an addition and a deletion.
%
The proposed graph-based update language (GUL) adopts pattern matching, which
means a query is executed by searching the data graph for subgraphs
matching a pattern.
%
With only two basic operations, GUL likewise falls short in handling aggregate queries.
%
To address the lack of a proper language for describing and retrieving
specific parts of large biological networks, Leser~\cite{leser2005} proposed the pathway query language (PQL).
%
With syntax similar to standard SQL, PQL is easy to learn and understand.
%
It is based on a simple graph data model and aims to match subgraphs in the database
according to matching criteria on the nodes and edges.
%
Since the output of PQL is also a graph, query composition
and query embedding are naturally simple.
%
He and Singh~\cite{he2008} presented GraphQL, a query language for graph databases that supports arbitrary attributes on nodes, edges and graphs.
%
In this model, graphs are the basic unit, and graph structures are composable, extending the notion of
formal languages from strings to graphs.
%
They also investigated access methods for the selection operator.
%
They solved the pattern matching problem by using neighborhood subgraphs and profiles,
joint reduction of the search space, and optimization of the search order.
%
However, GraphQL does not include aggregate functionality either.
%
To handle RDF data, the SPARQL query language was proposed~\cite{sparql}.
%
SPARQL can query required and optional graph patterns along with their conjunctions and disjunctions, but it still provides no aggregation.
%
%

Focusing on subgraph and similarity queries, we briefly introduce several representative works in this area,
which fall into five types of approaches: feature-based,
closure-based, verification-free, coding-based
and fast sub-iso approaches.
%
The GraphGrep method introduced by Shasha \emph{et al.}~\cite{shasha2002} is the first
work to adopt the filtering-and-verification framework for subgraph
query processing.
%
Since a sequential scan is too expensive, the idea of this approach is to
reduce the candidate set size by filtering on paths.
%
The index is constructed by enumerating the set of all paths,
of length up to $L$, of all graphs in the database, and keeping these
paths in a hash table.
%
This simple method is quite efficient, but the filtering power of paths is limited,
and since the candidate set is large, the verification phase
has a high cost.
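To make the filtering step concrete, the following Java sketch shows a GraphGrep-style path index. This is our own illustrative code, not GraphGrep's implementation; the name \verb+PathIndex+ is hypothetical, and node names stand in for node labels.

```java
import java.util.*;

// Illustrative sketch of a GraphGrep-style path index (hypothetical names):
// the index maps every label path of length up to maxLen edges to the set
// of graph ids containing that path.
class PathIndex {
    private final Map<String, Set<Integer>> index = new HashMap<>();

    // Enumerate all paths of each graph (given as an adjacency list) and
    // record them in the hash table, as in the filtering-and-verification setup.
    void addGraph(int graphId, Map<String, List<String>> adj, int maxLen) {
        for (String start : adj.keySet())
            enumerate(graphId, adj, start, new ArrayList<>(List.of(start)), maxLen);
    }

    private void enumerate(int graphId, Map<String, List<String>> adj,
                           String node, List<String> path, int maxLen) {
        index.computeIfAbsent(String.join("-", path), k -> new HashSet<>()).add(graphId);
        if (path.size() > maxLen) return;          // bound the path length
        for (String next : adj.getOrDefault(node, List.of())) {
            path.add(next);
            enumerate(graphId, adj, next, path, maxLen);
            path.remove(path.size() - 1);
        }
    }

    // Filtering phase: a candidate graph must contain every path of the query.
    Set<Integer> candidates(List<String> queryPaths) {
        Set<Integer> result = null;
        for (String p : queryPaths) {
            Set<Integer> graphs = index.getOrDefault(p, Set.of());
            if (result == null) result = new HashSet<>(graphs);
            else result.retainAll(graphs);
        }
        return result == null ? Set.of() : result;
    }
}
```

The filtering phase intersects the posting sets of the query's paths; the surviving candidates would still need a verification (subgraph isomorphism) pass, which is exactly the costly step discussed above.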
%
Yan \emph{et al.}~\cite{yan2004} introduced gIndex, the first work to use pattern mining for
graph indexing.
%
Since paths lose structural information and filtering on them is not
effective enough, this method uses discriminative frequent subgraphs
to improve the filtering step.
%
The index is constructed by mining the set of discriminative
frequent subgraphs with a size-increasing support threshold, and query
processing involves a filtering phase and a verification phase.
%
Discriminative frequent subgraphs also effectively eliminate
redundancy in the feature set.
%
But again it has the limitation that the verification phase
always requires a candidate set larger than the
actual answer set.
%
TreePi, proposed by Zhang \emph{et al.}~\cite{zhang2007}, is designed to tackle the
efficiency problems of the previous frameworks.
%
It maintains trees instead of general graphs.
%
The main idea is to filter by discriminative frequent
subtrees, and to speed up sub-iso testing by measuring
the distance between tree centers.
%
The indexing cost is lower than that of the subgraph-based approaches,
and the use of tree-center distances further reduces the
candidate set size and speeds up the sub-iso test.
%
The problem is that, since features are limited to tree structures,
the filtering power of trees may be limited, and TreePi still
requires a larger candidate set, just
like all methods under the filtering-and-verification framework.
%
The tree$+\delta$ approach by Zhao \emph{et al.}~\cite{zhao2007} was proposed
to strengthen the filtering power of the previous model.
%
The main idea is to filter by frequent subtrees plus on-demand
discriminative subgraphs, selecting on demand a
small set of graph features, where the filtering power of
a graph feature is estimated from its subtree features.
%
This model can achieve filtering power similar to that of graph features
without costly graph mining, and thus has a low indexing cost.
%
But its query performance is bounded by that of using graph features.
%
Also, the on-demand graph-feature selection incurs extra query cost.
%
The closure-based approach is a more advanced form of the
feature-based framework. The most representative model of this
kind is C-tree, proposed by He and Singh~\cite{he2006}.
%
The main idea is an R-tree-like graph index
built on graph closures.
%
This model supports both subgraph and similarity queries.
%
But it still has the limitation incurred by verification.
%
To remove the verification requirement that the candidate set
be larger than the actual answer
set, a new category of methods, the
verification-free approach, was proposed.
%
The representative works include FG-index/FG*-index by Cheng \emph{et al.}~\cite{cheng2007} and GDIndex by Williams \emph{et al.}~\cite{williams2007}.
%
The main idea of FG-index is to answer an important subset of queries directly, without verification, and to answer the remaining queries with minimal verification.
%
The method is verification-free for FG-queries, so it avoids the verification bottleneck for this type of query.
%
Non-FG queries, however, are still answered by the filtering-and-verification framework, and FG-index may have a high index-probing cost if the frequent set is too big.
%
The method was later improved by FG*-index, which adds two further indexes: a feature-index that facilitates efficient index probing in FG-index, and an FAQ-index that answers non-FG queries without verification in general.
%
GDIndex is motivated by the observation that graphs in many applications are small, and that this can be exploited.
%
The main idea is to hash all subgraphs of all graphs in the database, match a query by hashing, and focus on graphs of limited size.
%
This model needs no verification phase for any query, but it is not suitable for applications with large graphs.
%
Coding-based approaches are proposed to take into consideration the semantics of structures that feature-based approaches ignore. The most representative works are GString by Jiang \emph{et al.}~\cite{jiang2007} and GCoding by Zou \emph{et al.}~\cite{zou2008}.
%
The main idea of GString is to encode graphs into strings using the semantics of their substructures, transforming subgraph query processing into string matching.
%
GCoding uses spectral graph coding to encode the structure of a graph into a numerical space; the query is encoded likewise and matched by comparing graph codes.
%
GCoding is designed to be easy to update, and thus supports frequent updates.
%
But these methods still face the verification bottleneck.
%
A new category of approaches, called the fast sub-iso approach, was proposed to better address the verification bottleneck.
%
The most representative work in this category is QuickSI, proposed by Shang \emph{et al.}~\cite{shang2008}.
%
The main idea is to improve the sub-iso test in the verification step, reducing the branch-and-bound of Ullmann's sub-iso algorithm through an effective search order based on the frequencies of vertices and edges in the underlying graph database and the topological information of the graphs. QuickSI reduces the verification cost with a fast sub-iso algorithm, although it is still subject to the verification bottleneck.


Recently, Dries \emph{et al.}~\cite{dries2009} proposed the query language BiQL.
%
BiQL treats nodes and edges uniformly and is based on the SQL query language.
%
It includes a simple aggregation operation.
%
However, a drawback of BiQL is that it was only a prototype at the time, so
it is not clear how to implement an efficient and scalable real system.
%
Later, Dries and Nijssen~\cite{dries10} improved BiQL in terms of aggregation.
%
This extension incorporates aggregates and ranking for data mining on
graph databases.
%
It also supports probabilistic graph queries.
%
However, this extension is still a Prolog-like prototype: the implementation relies on
an in-memory Prolog engine or an on-disk relational database.
%
Therefore, strictly speaking, the performance of the extended BiQL
on a real graph database such as Neo4j is unclear.

\subsection{Our work}

Although numerous query strategies have been proposed for graph databases, these methods are somewhat limited in their aggregation operations.
%
We believe there is still much work to be done to improve the aggregation functionality
of these query languages.
%
We aim to develop a method that addresses the limitations of previous works such as the extended BiQL~\cite{dries10}, which remains only a prototype.
%
More specifically, we would like to make use of the advantages of the extended BiQL and incorporate
some data mining techniques (such as graph-based OLAP~\cite{olap2008}) to achieve our goal.

\section{Basic concepts}

We first go over some basic concepts of graph databases, including nodes, edges, relationships,
paths and traversal.
%
Figure~\ref{fig:graph} shows a simple example of nodes and edges in a directed graph.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=.5\linewidth]{graph.png}
\end{center}
\caption{A simple illustration of graph model}
\label{fig:graph}
\end{figure*}

\emph{Nodes.} Nodes often represent entities and can be seen as the record sets of a
relational database.
%
Each node can carry a set of properties.

\emph{Relationship and Edge.} Two nodes may have a relationship, which means that the two nodes are linked together under a certain condition.
%
A relationship is usually represented by an edge.

Note that in some cases we have a directed graph, in which the direction from one node
to another matters.
%
Therefore, a node may have an edge pointing to another node while also being pointed
to by an edge from a third node.
%
For example, in a social network, when a person likes one of his friend's posted statuses,
an edge with the property ``like'' is created from him to the status.
%
Meanwhile, a status may also point to a person if it mentions that person.
%
Edges may also be weighted to indicate the strength of the relationship between
two nodes.
%
However, in our project we consider only unweighted graphs, and we use a
social network to conduct our experiments.

\emph{Path.} A path consists of a number of nodes connected by relationships.
%
Usually we are interested in the shortest path, which connects two
nodes through the minimal number of nodes in an unweighted graph, or with the
minimal sum of edge weights in a weighted graph.
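In an unweighted graph, the shortest path can be computed with a simple breadth-first search. The Java sketch below assumes a plain adjacency-list representation rather than any particular database API; the names are our own.

```java
import java.util.*;

// Breadth-first search: the shortest path (fewest edges) between two nodes
// of an unweighted directed graph, or an empty list if no path exists.
class ShortestPath {
    static List<String> bfs(Map<String, List<String>> adj, String from, String to) {
        Map<String, String> parent = new HashMap<>();  // also marks visited nodes
        Deque<String> queue = new ArrayDeque<>();
        parent.put(from, null);
        queue.add(from);
        while (!queue.isEmpty()) {
            String node = queue.poll();
            if (node.equals(to)) {                     // found: walk parents back
                List<String> path = new ArrayList<>();
                for (String n = to; n != null; n = parent.get(n)) path.add(0, n);
                return path;
            }
            for (String next : adj.getOrDefault(node, List.of()))
                if (!parent.containsKey(next)) {
                    parent.put(next, node);
                    queue.add(next);
                }
        }
        return List.of();                              // unreachable
    }
}
```

The sketch only illustrates the underlying idea, independent of any database system.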

\emph{Traversal.} Traversal means searching for nodes under certain rules and returning the nodes,
with their relationships, that satisfy the rules.
%
Traversal is necessary for many queries.

\emph{Graph Database.} In our project, we use the popular graph database Neo4j.
%
Neo4j is a high-performance, open-source NoSQL graph database.
%
It provides Java APIs which can be easily embedded in a Java application.
%
In Neo4j, nodes are treated as objects with properties and relationships.
%
Nodes can belong to different user-defined classes.
%
A set of API methods facilitates operations on the nodes.
%
Besides, Neo4j includes a new query language, the Cypher query language, for
querying graph databases.
%
It is a declarative query language that allows expressive and efficient querying
without having to write traversers in code.
%
Unlike Java, it focuses on what to retrieve rather than how to retrieve it.
%
The features of the Cypher query language come partially from SQL, such as WHERE and ORDER BY,
and partially from SPARQL, such as pattern matching.
%
The Cypher query language also has some basic aggregation functions, such as COUNT, SUM,
AVG, MAX and MIN.
%
Although they can handle some fundamental aggregations, however, they are far from enough.
%
For example, Cypher cannot perform aggregation under some complex query conditions, as we will show in
Section~\ref{sec3}.
%
Therefore, we propose improved functionality to achieve complex aggregations on top of the original
query language included in Neo4j.
%

\section{Aggregate Queries}\label{sec3}

\subsection{Data Structure}
In this section, we construct a simple graph containing the main features of a social network
and perform queries on it.

Our social network has the following characteristics.
\begin{itemize}
\item There are $N$ users in the social network, each with a name and a gender.
%
Each person can add and delete friends.
%
He/She can have multiple friends and be a friend of multiple persons.
%
A user can be added to the social network and to someone's friend circle.

\item Each person has a number of posted statuses that can be viewed by other persons, just like on Facebook.
%
He/She can add new statuses and delete old ones.

\item Each status has three properties, TEXT, DATE and LIKE, which represent the content of the
status, the publication date and the votes it receives from the owner's friends.
%
A person can click ``like'' if he likes the status of his friend, and then the vote count of
this status increments by 1.
%
A person can click ``like'' multiple times.

\item Persons and statuses are all treated as nodes in the social network.
\end{itemize}

Details are shown in Figure~\ref{fig:table}.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=.9\linewidth]{table.png}
\end{center}
\caption{Table of nodes and relationships in the social network database.}
\label{fig:table}
\end{figure*}

\subsection{Data Manipulation}
First, we perform simple operations such as insertion, deletion and relationship
construction.
%
We implement these using the Java APIs provided by Neo4j.

Insertion is easily realized by calling \verb+graphDb.createNode()+ to create a new node
in the database, either a new user or a new status.
%
Deletion is achieved by deleting all
corresponding relationships and then the node itself by calling \verb+node.delete()+.
%
Adding a friend is achieved by calling \verb+from.createRelationshipTo(to, relationship_type)+
to establish a relationship between two persons.
%
Liking a status uses the same method to establish a relationship between
a person and a specific status of his friend; at the same time, the vote count of the status
increments by 1.
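To make these operations concrete without requiring a running Neo4j instance, the sketch below mirrors them on a plain in-memory structure. The class and method names (\verb+MiniGraph+, \verb+createRelationship+, etc.) are our own hypothetical stand-ins for the Neo4j calls above, not the Neo4j API itself.

```java
import java.util.*;

// In-memory stand-in for the manipulation steps above (hypothetical names,
// not the Neo4j API): nodes carry properties, relationships are typed edges.
class MiniGraph {
    record Edge(int from, int to, String type) {}

    final Map<Integer, Map<String, Object>> nodes = new HashMap<>();
    final List<Edge> edges = new ArrayList<>();
    private int nextId = 0;

    // Insertion: create a node (a user or a status) with its properties.
    int createNode(Map<String, Object> props) {
        nodes.put(nextId, new HashMap<>(props));
        return nextId++;
    }

    // Deletion: remove all relationships of the node, then the node itself.
    void delete(int id) {
        edges.removeIf(e -> e.from() == id || e.to() == id);
        nodes.remove(id);
    }

    // Relationship construction, e.g. FRIEND between persons or LIKE from a
    // person to a status; a LIKE also increments the status's vote count.
    void createRelationship(int from, int to, String type) {
        edges.add(new Edge(from, to, type));
        if (type.equals("LIKE"))
            nodes.get(to).merge("LIKE", 1, (a, b) -> (Integer) a + (Integer) b);
    }
}
```

The point of the sketch is only the ordering constraint it encodes: relationships must be removed before their node, and a ``like'' both adds an edge and updates the LIKE property.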

We perform a series of operations to produce a small social network with 10 people,
each having about 10 statuses.
%
A number of friendships are established among several people.
%
Some statuses are marked as ``like'' by several people and the votes are stored in the
property of the corresponding status nodes.

\subsection{Basic Queries}
We use several built-in aggregation functions to perform queries on the database, one of
which fails.

\subsubsection{Count} 
As in standard SQL, the Cypher query language provides \verb+COUNT+ to count
the number of nodes or relationships based on certain criteria.

First, we count the number of friends of a certain person in the social network.
%
The query command is
\begin{verbatim}
START person=node(2)
MATCH (person)-[:FRIEND]-(friend)
RETURN person, count(friend)
\end{verbatim}
The result is
\begin{verbatim}
+----------------------------------------------------------+
| person                                   | count(friend) |
+----------------------------------------------------------+
| Node[2]{Name->"Liu, Bei",Gender->"Male"} | 6             |
+----------------------------------------------------------+
1 rows, 55 ms
\end{verbatim}

Then we use \verb+COUNT+ to count the number of a person's statuses liked by his friends,
i.e., those statuses whose votes are non-zero.
%
The query and result are listed as follows.
\begin{verbatim}
START person=node(2) MATCH (person)-[:STATUS]->(status)
WHERE status.LIKE > 0
RETURN person, count(status)

+----------------------------------------------------------+
| person                                   | count(status) |
+----------------------------------------------------------+
| Node[2]{Name->"Liu, Bei",Gender->"Male"} | 4             |
+----------------------------------------------------------+
1 rows, 8 ms
\end{verbatim}

\subsubsection{Summation and Average} 
To gauge the popularity of a person
in his friend circle, we can use the total number of ``likes'' his statuses receive from his friends.
%
It is natural to use \verb+SUM+ from the Cypher query language to achieve this goal.
%
The query is
\begin{verbatim}
START person=node(2)
MATCH (person)-[:STATUS]->(status)
RETURN person, SUM(status.LIKE)
\end{verbatim}
The output is
\begin{verbatim}
+-------------------------------------------------------------+
| person                                   | sum(status.LIKE) |
+-------------------------------------------------------------+
| Node[2]{Name->"Liu, Bei",Gender->"Male"} | 22               |
+-------------------------------------------------------------+
1 rows, 7 ms
\end{verbatim}

Besides, we may also be interested in the average number of ``likes'' of a person, which
can be obtained by simply replacing \verb+SUM+ with \verb+AVG+.
%
The result is
\begin{verbatim}
+-------------------------------------------------------------+
| person                                   | avg(status.LIKE) |
+-------------------------------------------------------------+
| Node[2]{Name->"Liu, Bei",Gender->"Male"} | 2.2              |
+-------------------------------------------------------------+
1 rows, 8 ms
\end{verbatim}

\subsubsection{Maximum and Minimum} 
Like other query languages, the Cypher query language also
supports \verb+MAX+ and \verb+MIN+ queries to obtain the maximal and minimal value
of a property.
%
We aim to find the maximal number of ``likes'' of a person using the following query.
\begin{verbatim}
START person=node(2)
MATCH (person)-[:STATUS]->(status)
RETURN person, MAX(status.LIKE)
\end{verbatim}
The output is
\begin{verbatim}
+-------------------------------------------------------------+
| person                                   | max(status.LIKE) |
+-------------------------------------------------------------+
| Node[2]{Name->"Liu, Bei",Gender->"Male"} | 10               |
+-------------------------------------------------------------+
1 rows, 53 ms
\end{verbatim}

\subsubsection{Complex Aggregate Query} 
So far, we have tested the basic aggregations using
the built-in Cypher query language.
%
However, the existing aggregate query functions cannot produce the correct results once
we require finer-grained results or impose more restrictions on the query results.
%
We show an example below.

Suppose now we start with a person $p$, and we want to know, for each of $p$'s friends, the
status with the maximal number of votes.
%
That is, we are interested in knowing the content of the most popular status of each person
befriended with $p$.
%
We may quickly come up with the following query
\begin{verbatim}
START person=node(2)
MATCH (person)-[:FRIEND]-(friend)-[:STATUS]->(status)
RETURN friend, MAX(status.LIKE)
\end{verbatim}
However, as shown below, the query only returns the number of votes of the most popular status of
each friend, rather than the actual content of that status.
\begin{verbatim}
+----------------------------------------------------------------------+
| friend                                            | max(status.LIKE) |
+----------------------------------------------------------------------+
| Node[11]{Name->"Diao, chan",Gender->"Female"}     | 0                |
| Node[10]{Name->"Xiaoqiao",Gender->"Female"}       | 8                |
| Node[8]{Name->"Huang, Yueying",Gender->"Female"}  | 2                |
| Node[3]{Name->"Guan, Yu",Gender->"Male"}          | 10               |
| Node[4]{Name->"Zhang, Fei",Gender->"Male"}        | 10               |
| Node[9]{Name->"Sun, Shangxiang",Gender->"Female"} | 0                |
+----------------------------------------------------------------------+
6 rows, 57 ms
\end{verbatim}
%
If we change the aggregation key to \verb+status.TEXT+, we still cannot obtain the results we want,
since the query then outputs all statuses instead of only the most popular ones.
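For instance, the following variant (a sketch) illustrates the problem: because the query groups by both \verb+friend+ and \verb+status.TEXT+, every status forms its own group, and all statuses are returned.
\begin{verbatim}
START person=node(2)
MATCH (person)-[:FRIEND]-(friend)-[:STATUS]->(status)
RETURN friend, status.TEXT, MAX(status.LIKE)
\end{verbatim}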

To our knowledge, there is no existing query in Neo4j that satisfies our requirement.
%
We believe there are other cases in which the query language will fail to produce
correct answers, especially in large databases with more complicated structures.
%
Therefore, we use pure Java to achieve our goal and implement an interface that is incorporated
into the native query language in Neo4j.

\subsection{Our approach}
As we demonstrate in the previous sections, the embedded cypher query language cannot 
handle some complex queries.
%
Therefore, we attempt to devise a new query approach to achieve the desirable results.

First, we define a new aggregate keyword \verb+MAXB+, which indicates the ``B'' version relative to the
original \verb+MAX+ in the cypher query language, regarded as the ``A'' version.
%
We modify the standard query procedure by writing an interface before 
we send a query command into the execution engine provided by Neo4j.
%
In this interface, if we do not find the specially defined keyword, a normal
query is conducted; otherwise the query is sent into an additional
parser function.
%
In this query parser, we parse the query command into three parts: the starting
node, the matching condition, and the return value.
%
We begin at the starting node, and use the matching condition to
decide how to traverse the graph.
%
By comparing the nodes and relationships in the matching condition with
those in the return command, we
can decide which parts to return and cache them during the traversal.
%
Finally, we return the actual content of the nodes (statuses in our experiments)
after we finish the traversal to the required depth.
%
An example is shown in Figure xxx.
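The control flow of the interface can be sketched as follows. This is a simplified, Java-like sketch; helper names such as \verb+clauseBetween+ and \verb+traverse+ are hypothetical, and the actual implementation performs the traversal through the Neo4j Java API.
\begin{verbatim}
Result runQuery(String query) {
    if (!query.contains("MAXB")) {
        // no special keyword found: conduct a normal cypher query
        return engine.execute(query);
    }
    // otherwise, parse the command into its three parts
    String start = clauseBetween(query, "START", "MATCH");
    String match = clauseBetween(query, "MATCH", "RETURN");
    String ret   = clauseAfter(query, "RETURN");

    // traverse from the starting node along the matching pattern,
    // caching for each intermediate node (e.g. friend) the status
    // with the largest LIKE value seen so far
    Map<Node, Node> best = new HashMap<Node, Node>();
    for (Path p : traverse(parseStart(start), parseMatch(match))) {
        Node friend = p.intermediateNode();
        Node status = p.endNode();
        if (!best.containsKey(friend)
                || likes(status) > likes(best.get(friend)))
            best.put(friend, status);
    }
    // return the actual content of the cached nodes
    return format(best, ret);
}
\end{verbatim}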



\section{Experiments}

First, we test the proposed \verb+MAXB+ keyword in the same simulated social network
used in the previous sections.
%
This time, we are again interested in the content of the most popular status of each person
befriended with a given person.
%
We change the original \verb+MAX+ to our \verb+MAXB+ and then send the modified query through
our query interface.
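The modified query is identical to the earlier one except for the aggregate keyword:
\begin{verbatim}
START person=node(2)
MATCH (person)-[:FRIEND]-(friend)-[:STATUS]->(status)
RETURN friend, MAXB(status.LIKE)
\end{verbatim}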
%
Then we have the following query results.
\begin{verbatim}
|   Node[3]{Name->"Guan, Yu",Gender->"Male"} -> Status 3 | 10
|   Node[10]{Name->"Xiaoqiao",Gender->"Female"} -> Status 4 | 8
|   Node[11]{Name->"Diao, chan",Gender->"Female"} -> Status 7 | 0
|   Node[9]{Name->"Sun, Shangxiang",Gender->"Female"} -> Status 2 | 0
|   Node[4]{Name->"Zhang, Fei",Gender->"Male"} -> Status 7 | 10
|   Node[8]{Name->"Huang, Yueying",Gender->"Female"} -> Status 11 | 2
\end{verbatim}

As we can see, we successfully obtain the content of the most popular status of all friends 
of a person.

It is noteworthy that we can similarly define new keywords \verb+MINB+,
\verb+SUMB+, \verb+AVGB+, etc., by implementing corresponding query processing
functions that return the content of the status rather than the aggregate
number.
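Adding such a variant amounts to registering one more branch in the query interface. A sketch (the processing function names are hypothetical):
\begin{verbatim}
if      (query.contains("MAXB")) return processMaxB(query);
else if (query.contains("MINB")) return processMinB(query);
else if (query.contains("SUMB")) return processSumB(query);
else                             return engine.execute(query);
\end{verbatim}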

Next, we test the proposed query keyword on a real dataset, WikiVote, which was
collected from Wikipedia.
%
Some facts about this dataset are as follows.
%
In Wikipedia, users can contribute to articles.
%
Among these users, a small portion of contributors are administrators, who have
additional rights to access technical features.
%
To become an administrator, a Request for Adminship
(RfA) is issued, and the Wikipedia community decides, via public discussion or a
vote, whom to promote to adminship.
%
The WikiVote dataset contains all administrator elections and vote history data. 
%
There are 2,794 elections with 103,663 total votes and 7,066 users participating in
the elections (either casting a vote or being voted on).
%
The statistics of this dataset are shown in Table~\ref{wikivote}.
\begin{table}
\begin{center}
\begin{tabular}{l|r}
\hline
Nodes & 7,115 \\
Edges & 103,689 \\
Number of triangles & 608,389 \\
Fraction of closed triangles & 0.1255 \\
Diameter (longest shortest path) & 7 \\
\hline
\end{tabular}
\end{center}
\caption{Statistics of the WikiVote dataset.}
\label{wikivote}
\end{table}

The structure of the WikiVote dataset used in our experiments has the following
characteristics.
\begin{itemize}
\item Each user has a unique ID represented by a number.
%
These IDs are not necessarily consecutive.
\item We consider the ID as the unique name of the user, and
import the entire database into Neo4j.
%
To simplify the problem, the genders of all users are set to male.
\item If one user votes for another user, they become friends in the network.
\item To test the new aggregate keyword, we add two statuses, ``User No.\ ID v1'' and
``User No.\ ID v2'', for each user, whose numbers of ``likes'' are set to the user's vote count
and 2$\times$ the vote count, respectively, as gathered from the database.
%
The number of votes a user receives is thus treated as the number of ``likes'' a status receives.
\end{itemize}
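The status construction step can be sketched as follows (plain Java with hypothetical helper names; the actual node creation goes through the Neo4j API):
\begin{verbatim}
// voteCount maps each user ID to the number of votes that user
// received in the election data
for (int id : userIds) {
    int v = voteCount.get(id);
    addStatus(id, "User No. " + id + " v1", v);      // LIKE = v
    addStatus(id, "User No. " + id + " v2", 2 * v);  // LIKE = 2*v
}
\end{verbatim}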

After constructing this large database, we want to know, for each user in Wikipedia, the status
with the maximal number of votes from other users.
%
More precisely, we are interested in the content of the most popular status of each
user voted on by a given single user.
% 
We use the proposed \verb+MAXB+ to execute the query.
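The query follows the same pattern as in the simulated network (the starting node index, which corresponds to the user with ID 30, depends on the import order and is elided here):
\begin{verbatim}
START person=node(...)
MATCH (person)-[:FRIEND]-(friend)-[:STATUS]->(status)
RETURN friend, MAXB(status.LIKE)
\end{verbatim}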
%
Part of the query result is shown in the following.
\begin{verbatim}
Node[1685]{Name->"584",Gender->"Male"} -> User No. 584 v2 | 10
Node[1691]{Name->"586",Gender->"Male"} -> User No. 586 v2 | 26
Node[1703]{Name->"590",Gender->"Male"} -> User No. 590 v2 | 16
Node[1745]{Name->"604",Gender->"Male"} -> User No. 604 v2 | 16
Node[1577]{Name->"611",Gender->"Male"} -> User No. 611 v1 | 0
Node[21167]{Name->"8283",Gender->"Male"} -> User No. 8283 v1 | 0
Node[65]{Name->"25",Gender->"Male"} -> User No. 25 v2 | 180
Node[11]{Name->"6",Gender->"Male"} -> User No. 6 v2 | 604
Node[23]{Name->"10",Gender->"Male"} -> User No. 10 v2 | 172
Node[35]{Name->"14",Gender->"Male"} -> User No. 14 v2 | 228
Node[41]{Name->"17",Gender->"Male"} -> User No. 17 v2 | 90
......
......
\end{verbatim}

In this query, the starting node is the node of the user with ID 30.
%
Apart from the content of the statuses, we also obtain the node indices
and the maximal number of votes for each status.
%
Through this experiment, we show that our defined aggregate keyword can handle
a large-scale graph database from the real world.

\section{Conclusion and Future work}

Despite the success of Neo4j in the graph database community, its ability
to handle different types of queries is relatively weak, even though it provides a graph
query language, the cypher query language.
%
Even for a simple social network, the cypher query language cannot always give us
the exact information we want.
%
Therefore, we attempt to improve the aggregate functionality, particularly the 
maximum aggregate, of the cypher query language.
%
Our approach defines a new aggregate keyword and imposes an interface
in front of the standard query execution engine.
%
The interface searches for our special keyword to decide whether to execute a normal query
or not.
%
If we find the keyword, our own parser is activated to parse the query string
into several parts, and then determines the starting node and the traversal path, as well
as the values to be returned.
%
A manual traversal is then performed to find the desired results based on the query
conditions.

Experiments on a small simulated social network and on real data from the WikiVote dataset
demonstrate the effectiveness of our approach in querying the desired content using
the newly defined aggregation keyword.

There are still some open problems to be solved.
%
First, to support more complicated aggregate queries, more new query keywords should
be defined and implemented.
%
Second, a more generic approach applicable to other graph
databases, rather than only Neo4j, would be desirable.
%
Third, a thorough performance evaluation of the query time is needed.
%
These could be the future work in this research direction.


{\small
\begin{thebibliography}{99}
\bibitem{shipman1981}D. W. Shipman. The Functional Data Model and the Data Language DAPLEX. ACM
Transactions on Database Systems (TODS), (6)1, 140-173, 1981.
\bibitem{hull1984}R. Hull and C. K. Yap. The Format Model: A Theory of database Organization.
Journal of the ACM (JACM), (31)3, 518-544, 1984.
\bibitem{kuper1993}G. M. Kuper and M. Y. Vardi. The Logical Data Model.
ACM Transactions on Database Systems (TODS), (18)3, 379-413, 1993.
\bibitem{levene1990}M. Levene and A. Poulovassilis. The  Hypernode  Model  and  its  Associated  Query
Language. In Proc. of the 5th Jerusalem Conference on Information technology, 520-530, 1990.
\bibitem{levene1994}A. Poulovassilis and M. Levene. A Nested-Graph Model for the Representation and
Manipulation of Complex Objects. ACM Transactions on Information Systems (TOIS), (12)1,
35-68, 1994.
\bibitem{Mainguenaud1992}M. Mainguenaud. Simatic XT: A Data Model to Deal with Multi-scaled Networks. Computer, Environment and Urban Systems, (16), 281-288, 1992.
\bibitem{Mainguenaud1995}M. Mainguenaud. Modelling the Network Component of Geographical Information Systems.
International Journal of Geographical Information Systems (IJGIS), (9)6, 575-593, 1995.
\bibitem{Graves1994}M. Graves, E. R. Bergeman, and C. B. Lawrence. Querying a Genome Database using
Graphs. In Proc. of the 3rd International Conference on Bioinformatics and Genome Research, 1994.
\bibitem{Graves1995}M. Graves, E. R. Bergeman, and C. B. Lawrence. Graph Database Systems for
Genomics. IEEE Engineering in Medicine and Biology. Special issue on Managing Data for
the Human Genome Project (11)6, 737-745, 1995.
\bibitem{frishman1998}D. Frishman, K. Heumann, A. Lesk, and H. Mewes. Comprehensive, Comprehensible, Distributed
and Intelligent Databases: Current Status. Bioinformatics, (14)7, 551-561, 1998.
\bibitem{gemis1993}M. Gemis, J. Paredaens, I. Thyssens, and J. V. den
Bussche. Good: A graph-oriented object database system.
In Proc. of the 1993 ACM SIGMOD, Washington, D.C., 505-510, 1993.
\bibitem{amann1992}B. Amann and M. Scholl. Gram: a graph data model and
query languages. In ECHT '92 Proc. of the ACM
conference on Hypertext, 201-211, 1992.
\bibitem{guting1994}R. H. G{\"u}ting. GraphDB: Modeling and Querying Graphs in Databases.
In VLDB '94 Proc. of the 20th International Conference on Very Large Data Bases, 297-308, 1994.
\bibitem{kiesel1995}N. Kiesel, A. Schuerr, and B. Westfechtel. Gras, a
graph-oriented engineering database system. Information Systems, 20(1), 21-51, 1995.
\bibitem{rdf}Resource Description Framework (RDF):
Concepts and Abstract Syntax. http://www.w3.org/TR/rdf-concepts/
\bibitem{dex2007}N. Mart{\'i}nez-Bazan, V. Munt{\'e}s-Mulero, S. G{\'o}mez-Villamor, J. Nin, M. S{\'a}nchez-Mart{\'i}nez, and
    J. Larriba-Pey. Dex: high-performance exploration on large graphs for information retrieval. In CIKM '07 Proc. of the Sixteenth ACM Conference on Conference on information and Knowledge Management, 573-582, 2007.
\bibitem{neo4j}Neo4j: The World's Leading Graph Database. http://neo4j.org/.
\bibitem{good1990}M. Gemis, J. Paredaens, I. Thyssens, and J. V. den
Bussche. Good: A graph-oriented object database system.
In Proc. of the 1993 ACM SIGMOD, 505-510, 1993.
\bibitem{Paredaens1995}J. Paredaens, P. Peelman, and L. Tanca. G-Log: a graph-based query language. IEEE Transactions on Knowledge and Data Engineering (TKDE), (7)3, 436-453, 1995.
\bibitem{sheng1999}L. Sheng, Z. Ozsoyoglu, and G. Ozsoyoglu. A graph
query language and its query processing. In ICDE '99 Proc. 15th International Conference on Data Engineering, 572-581, 1999.
\bibitem{hiddders2003}J. Hidders. Typing graph manipulation operations. In
ICDT '03 Proc. of 9th International Conference Database Theory, 394-409, 2003.
\bibitem{leser2005}U. Leser. A query language for biological networks.
Bioinformatics, 21(Suppl.\ 2), ii33-ii39, 2005.
\bibitem{he2008}H. He and A. K. Singh. Graphs-at-a-time: query
language and access methods for graph databases. In SIGMOD '08 Proc. of the 2008 ACM SIGMOD international conference on Management of data, 405-418, 2008.
\bibitem{sparql}SPARQL Query Language for RDF. http://www.w3.org/TR/rdf-sparql-query/.
\bibitem{dries2009}A. Dries, S. Nijssen, and L. De Raedt. A query
language for analyzing networks. In CIKM '09 Proc. of the 18th ACM conference on Information and knowledge management, 485-494, 2009.
\bibitem{dries10}A. Dries and S. Nijssen. Analyzing graph databases by aggregate queries.
In 	MLG '10 Proc. of the Eighth Workshop on Mining and Learning with Graphs, 37-45, 2010.
\bibitem{shasha2002}D. Shasha, J. T. L. Wang, and R. Giugno. Algorithmics and Applications of Tree and Graph Searching. In PODS '02 Proc. of the 21st ACM Symposium on Principles of Database Systems, 2002.
\bibitem{yan2004}X. Yan, P. S. Yu, and J. Han. Graph indexing: A Frequent Structure-based Approach, In SIGMOD '04: Proceedings of the 2004 ACM SIGMOD international conference on Management of data, 2004.
\bibitem{zhang2007}S. Zhang, M. Hu, and J. Yang. Treepi: A novel graph indexing method. In ICDE '07 Proc. of
IEEE 23rd International Conference on Data Engineering, 66-75, 2007.
\bibitem{zhao2007}P. Zhao, J. X. Yu, and P. S. Yu. Graph indexing: tree + delta $<=$ graph. In VLDB '07 Proc. of the 33rd international conference on Very large data bases, 938-949, 2007.
\bibitem{he2006}H. He and A. K. Singh. Closure-tree: An index structure for graph queries. In ICDE '06 Proc. of the 22nd International Conference on Data Engineering, 38, 2006.
\bibitem{cheng2007}J. Cheng, Y. Ke, W. Ng, and A. Lu. Fg-index: towards verification-free query processing on graph databases. In SIGMOD '07: Proc. of the 2007 ACM SIGMOD international conference on Management of data, 857-872, 2007.
\bibitem{williams2007}D. Williams, J. Huan, and W. Wang. Graph database indexing using structured graph decomposition. In ICDE '07 Proc. of IEEE 23rd International Conference on Data Engineering, 976-985, 2007.
\bibitem{jiang2007}H. Jiang, H. Wang, P. Yu, and S. Zhou. Gstring: A novel approach for efficient search in graph databases. In ICDE '07 Proc. of IEEE 23rd International Conference on Data Engineering, 566-575, 2007.
\bibitem{zou2008}L. Zou, L. Chen, J. X. Yu, and Y. Lu. A novel spectral coding in a large graph database. In EDBT '08 Proc. of the 11th International Conference on Extending Database Technology, 181-192, 2008.
\bibitem{shang2008}H. Shang, Y. Zhang, X. Lin, and J. X. Yu. Taming verification hardness: an efficient algorithm for testing subgraph isomorphism. In Proc. of the VLDB Endowment, 1(1), 364-375, 2008.
\bibitem{olap2008}C. Chen, X. Yan, F. Zhu, J. Han, and P. S. Yu. Graph OLAP: Towards Online Analytical Processing on Graphs. In ICDM '08 Proc. of Eighth IEEE International Conference on Data Mining, 103-112, 2008.
\end{thebibliography}
}

\end{document}
