
\documentclass[journal]{journal}
% \documentclass[10pt, conference, compsocconf]{IEEEtran}
% Add the compsocconf option for Computer Society conferences.
%
% If IEEEtran.cls has not been installed into the LaTeX system files,
% manually specify the path to it like:
% \documentclass[conference]{../sty/IEEEtran}

\usepackage{ulem}
\usepackage{ifpdf}
\usepackage{cite}
\usepackage{graphicx}
\usepackage{color}
\usepackage{listings}
\usepackage{xcolor}
\lstset{escapeinside={<@}{@>}}
\lstset{
    language=C++,
    %keywordstyle=\color{blue}\ttfamily\textbf,
    basicstyle=\footnotesize,% basic font setting
    stringstyle=\color{red}\rmfamily,
    morekeywords={Class,Method,list,string,in,Emit,id,pair,value,Do,For,Else,If,gather,sum,apply,scatter},
    commentstyle=\itshape\color{purple!40!black}
}

\usepackage{multirow}
\usepackage{pgfplots}
\usepackage{array}

\ifCLASSINFOpdf
\else
\fi

\usepackage[cmex10]{amsmath}
\usepackage{amsthm}
\usepackage{algorithmic}
\usepackage{array}

\usepackage[
pdfauthor={derajan},
pdftitle={How to do this},
pdfstartview=XYZ,
bookmarks=true,
colorlinks=true,
linkcolor=blue,
urlcolor=blue,
citecolor=blue,
pdftex,
linktocpage=true, % makes the page number as hyperlink in table of content
hyperindex=true
]{hyperref}

% *** SUBFIGURE PACKAGES ***
\usepackage[tight,footnotesize]{subfigure}


% *** PDF, URL AND HYPERLINK PACKAGES ***
\usepackage{url}
% \url{my_url_here}.

% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}

\begin{document}
% paper title
\title{Hiver: A Completely Decentralized, Parallel, Distributed File System}

% author names and affiliations
% use a multiple column layout for up to two different
\author{Weidong Zhang, Lei Zhang, Yifeng Chen }

\maketitle

\begin{abstract}
  The centralized network topology has become an obstacle to further growth of cluster scale. The key to overcoming this problem is to eliminate central nodes, since a central node is neither fault-tolerant nor scalable. The simplest way to implement a distributed architecture is to employ a hash algorithm: hashing can scatter requests evenly among cluster nodes. However, it does not always work well. In this work, an algorithm called Redirection Hash Table (RHT) is proposed to distribute file objects over a completely decentralized network topology. The algorithm takes both individual node workload and the cluster's future workload into consideration. Furthermore, the algorithm is adopted by a distributed file system for data placement and retrieval, although the technique is not limited to file systems; it is also applicable to other systems that demand resource management without any central node. Experimental results show that the implemented system, Hiver, reduces access latency and improves network utilization.
\end{abstract}

\begin{IEEEkeywords}
File systems; Data storage systems; Distributed algorithms; Distributed management; Distributed computing; Distributed processing; Storage management; Decentralized control;
\end{IEEEkeywords}

% For peer review papers, you can put extra information on the cover
% page as needed:
% \ifCLASSOPTIONpeerreview
% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
% \fi
%
% For peerreview papers, this IEEEtran command inserts a page break and
% creates the second title. It will be ignored for other modes.
\IEEEpeerreviewmaketitle

\section{Introduction}
% no \IEEEPARstart
\par During recent decades, network applications have been replicating data and services widely and rapidly, creating strong demand for high-volume, highly reliable data storage solutions. In addition, supercomputing hastens the need for high-performance, highly concurrent file systems. Distributed file systems are the mainstream solution to these strict demands of large-scale data storage and processing, owing to their irreplaceable capacity and throughput. A host of distributed file systems have been spawned to support these applications and services.
\par However, few studies have been done on completely decentralized distributed file systems. In a centralized cluster, the workload increases rapidly with the cluster's scale; once the load reaches the maximum capacity of the central directory node, the cluster's throughput stops growing. Even worse, the loss of the central node may bring the whole system to a sudden halt. Further progress of centralized distributed file systems is restrained by their network topology, since the central node becomes a single point of failure, a point of contention and a performance bottleneck. In a decentralized network there is no central node, so single points of failure and performance bottlenecks are no longer a factor. However, other issues that accompany decentralized networks must be solved, such as load balancing, resource management and consistency.
\par Several proposals, such as Ceph [9], Chord [1], Pastry [2] and CAN [3], address the load issues by distributing the directory information over a large number of nodes. P2P overlay networks based on DHTs [5] place and retrieve objects through a distributed application-level network, where each node stores part of the directory information in a hash table; together, the information on all nodes composes an overall hash table through which an arbitrary value can be obtained from its key. Unfortunately, non-negligible network latencies are introduced by the polylogarithmic number of overlay hops. The actual latency incurred by such queries is usually notably higher than that of looking up an object in a central directory [4]. Ceph proposed the CRUSH [8] algorithm, a pseudo-random data distribution algorithm that efficiently and robustly distributes object replicas across a heterogeneous, structured storage cluster [8]. CRUSH distributes objects across devices according to the pseudo-random algorithm and the devices' respective weights, which are derived from their capabilities. CRUSH partly offloads the directory node in at least two respects, device allocation and load balancing. However, metadata management in Ceph still depends on a centralized approach, merely decreasing the granularity to parts of the whole metadata set.
\par This work addresses the problems that accompany current decentralized systems. We are dedicated to developing a completely decentralized distributed file system in which no dedicated metadata node exists, no metadata query is mandatory before file access, and object location does not depend on relaying queries across peer nodes (as in DHTs [5]). Although much effort has been spent on improving the naive hash table [6] and on applying machine learning to improve the performance of central nodes, little effect has been achieved. We therefore abandoned the confines of the centralized architecture and designed a new, completely decentralized one.
\par The organization of the paper is as follows. Section \ref{sect_relatedwork} summarizes related work, including Ceph, P2P systems [7], CRUSH [8], the DHT algorithm and some centralized distributed file systems. Section \ref{sect_systemoverview} gives an overview of our system and algorithm. The detailed algorithm and policies are stated in Section \ref{sect_rht}. Section \ref{sect_datasafty} describes the reliability mechanisms. The last section, Section \ref{sect_evaluation}, evaluates Hiver and presents the experimental results.
\par \textbf{Contribution}. The contribution of this work can be summarized as follows:
\par a) An object locating algorithm (RHT) for completely decentralized networks is proposed. With this algorithm, object placement and retrieval can be implemented without any centralized element; furthermore, the object retrieval path is shortened from 2 RTTs (Round-Trip Times) in a centralized network to 1.5 RTTs in RHT.
\par b) The RHT algorithm integrates the advantages of the hash table and the centralized network: it not only retains the rapid placement of a hash algorithm, but can also dispatch every request to the currently best nodes to achieve load balance. The comparison between RHT and hash-based and centralized systems is shown in Table \uppercase\expandafter{\romannumeral3}.
\par c) A completely decentralized distributed file system is designed and implemented. Besides P2P overlay networks and Ceph, we provide another option for managing data over a completely decentralized network topology.

\section{Related Work}
\label{sect_relatedwork}
\par Aiming at different application scenarios, many distributed file systems have been developed, such as Lustre for supercomputing, HDFS for data-intensive computing, PVFS for Linux clusters, Panasas for object storage devices (OSDs), P2P systems for file sharing on the Internet, and Ceph for decentralized networks. Most of these distributed file systems are implemented on a centralized topology. Few of them are based on a decentralized topology; exceptions include Farsite [17], Ceph and P2P systems.
\par Farsite is a distributed file system with no centralized server. Using a Byzantine-fault-tolerant protocol [18], each member of a directory group stores a replica of the directory information, and each member takes part in processing the requests it receives. Our work is motivated by the same cause, but with a different application scenario, implementation approach and guarantee protocol.
\par The Hadoop distributed file system (HDFS) [20] is a distributed, scalable and portable file system implemented in Java. It attracts an extensive range of applications with the popularity and increasing proliferation of Hadoop. HDFS adopts a typical Master/Slave architecture. An HDFS cluster nominally has a single namenode (although redundancy options are available), which stores the metadata, and a cluster of datanodes, which store the data blocks. HDFS uses TCP/IP sockets for data communication, and HDFS clients use remote procedure calls (RPC) to exchange messages. The architecture of HDFS is shown in Figure~\ref{fig_hdfsarchitecture}.

\begin{figure}[!t]
\centering
  \includegraphics[scale=0.5]{images/HDFSArchitecture.pdf}
  \caption{HDFS Architecture [25]}
\label{fig_hdfsarchitecture}
\end{figure}

\par Lustre is a parallel distributed file system widely adopted in supercomputers for its high performance and open licensing. It has one or more metadata server (MDS) nodes, each with one or more metadata target (MDT) devices. Unlike block-based distributed file systems such as GPFS [21] and PanFS [22], where the metadata server controls all block allocation, the metadata server in Lustre is only involved in pathname and permission checks, partly avoiding I/O scalability bottlenecks on the metadata server. In our work, we go further and design a completely decentralized metadata system that thoroughly avoids both the I/O scalability bottleneck and the single point of failure.
\par PVFS [23] is another open-source parallel file system, jointly developed by Clemson University, Argonne National Laboratory and the Ohio Supercomputer Center. Its newest development branch is known as OrangeFS [19]. Features currently under development or planned include metadata and data redundancy, distributed directories, and so on [19]. Similar functionality appears in our design: our implementation also achieves distributed directories.
\par The Panasas [24] file system uses parallel and redundant access to object storage devices, distributed metadata management and consistent client caching to provide a scalable, fault-tolerant, high-performance distributed file system. Its distributed metadata management and consistent client caching are analogous to the distributed directory system and the client-side cluster-state caching proposed in our work.
\par Ceph is a general storage platform designed to manage objects, blocks and files in distributed systems. The absence of a central node distinguishes Ceph from other distributed file systems: every operation that would rely on a central node is replaced by direct data access. As a compromise, intermediate mapping procedures and a hierarchical structure are introduced into the system. More procedures and hierarchies usually imply higher complexity, more execution time and resource consumption, as well as a steeper learning curve for beginners. Since the debut of Ceph and the CRUSH algorithm, more and more industrial and academic communities have announced support for it. As of Linux kernel 2.6.34, Ceph has been merged into the kernel, marked as an experimental feature.
\par Ceph adopts a pseudo-random generation function to replace the centralized name node for file location. On receiving a file operation command, Ceph first maps objects into placement groups (PGs) using a simple hash function with an adjustable bit mask to control the number of PGs. The PGs are then assigned to OSDs using CRUSH (Controlled Replication Under Scalable Hashing). After these two mappings, Ceph finds the final storage location of the object. However, a few defects remain: the CRUSH algorithm cannot always route requests to the nodes currently in the best state, and it adopts a relatively complex generation function in the locating process. In Hiver, we simplify the placement and retrieval algorithm for a decentralized cluster.
\par The DHT algorithm is a class of decentralized distributed system, widely adopted for file placement and retrieval on P2P networks. It provides a lookup service similar to a hash table: as in a general hash table, Key/Value pairs record storage locations. Using Key/Value pairs and relayed routing between nearest neighbors, participants can locate files efficiently. DHTs widely adopt variants of consistent hashing. This technique uses a distance function $\delta$(k$_1$, k$_2$) [10] to calculate the distance between keys k$_1$ and k$_2$; this distance usually does not represent geographical distance or network latency. Each node is assigned a unique key called its identifier (ID) i$_x$. A node with ID i$_x$ owns every key k$_m$ for which i$_x$ is the closest ID, measured according to $\delta$(k$_m$, i$_x$). With this crucial property, a node removal or addition under consistent hashing only changes the set of keys owned by the nodes with adjacent IDs, leaving all other nodes unaffected.
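\par The ownership rule above can be sketched in a few lines. The following C++ fragment is a minimal consistent-hashing illustration, not code from any cited DHT: node IDs live on a ring of $2^{32}$ positions, and a key is owned by its closest successor on the ring. The \texttt{Ring} type, its methods and the node names are our own illustrative assumptions.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Minimal consistent-hashing sketch: a key position k is owned by the node
// whose ID is the first one >= k, wrapping around the ring; delta(k, i) is
// thus the clockwise distance from k to node ID i.
struct Ring {
    std::map<uint32_t, std::string> nodes;  // ring position -> node name

    void addNode(uint32_t id, const std::string& name) { nodes[id] = name; }

    const std::string& lookup(uint32_t keyPos) const {
        auto it = nodes.lower_bound(keyPos);      // first node ID >= keyPos
        if (it == nodes.end()) it = nodes.begin(); // wrap around the ring
        return it->second;
    }
};
```

Adding or removing one node only reassigns the keys between that node and its ring neighbor, which is the locality property the paragraph above relies on.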

\section{System Overview}
\label{sect_systemoverview}
\par The primary goals of our design are high concurrency, reliability, scalability and performance. To achieve these goals, a completely decentralized network topology is adopted for its extraordinary scalability; with it, the performance and reliability bottlenecks are eliminated as well. Nodes can join and leave the network at any time without system downtime. Furthermore, the theoretical cluster size is unlimited, and the theoretical maximum bandwidth can reach the sum of all cluster nodes' bandwidth, rather than being capped by the bandwidth of central nodes.
\par After the central nodes are eliminated, all nodes are equal. Every node takes part in processing requests with a share proportional to its capability. Object placement and retrieval no longer depend on a central node (which usually appears as a name node or directory server); instead, they rely entirely on the pseudo-random generation algorithm and the nodes' respective capabilities. Four components are crucial to the algorithm; the rest of this section briefly summarizes them.
\par \textbf{\_ClusterState}\text{---}Full knowledge of the cluster is usually required when a globally optimal choice must be made. \textit{\_ClusterState} is designed to cache the latest cluster state, which mainly consists of the member nodes' capabilities. To rapidly find the nodes with the best capability, every node holds one replica of the latest cluster state. \textit{\_ClusterState} contains a Key/Value table named \textit{\_nodeStateMap}, whose key identifies a node and whose value records the node's state. The best nodes can be found through a single pass over \textit{\_nodeStateMap}. To keep \textit{\_nodeStateMap} up to date, every node broadcasts its state periodically; when a broadcast message is received, \textit{\_nodeStateMap} is updated at once. The broadcast interval is configured as the heartbeat period. Through this periodic broadcast, the cluster can also perceive node removals and additions.
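\par A minimal C++ sketch of this component, assuming the NodeStatus fields listed in Section \ref{sect_clusterstate}; the exact types and the update/selection interfaces are our assumptions for illustration, not Hiver's actual API:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Assumed layout of a node's state record (see Section IV-A):
// <Starttime, Overall, CpuUsage, MemUsage, DiskUsage>.
struct NodeStatus {
    uint64_t starttime;
    double   overall;     // aggregate capability weight; higher is better
    double   cpuUsage, memUsage, diskUsage;
};

struct ClusterState {
    std::map<std::string, NodeStatus> nodeStateMap;  // key: node IP

    // Called whenever a heartbeat broadcast from `ip` is received;
    // inserting an unknown IP also models a node joining the cluster.
    void onBroadcast(const std::string& ip, const NodeStatus& s) {
        nodeStateMap[ip] = s;
    }

    // A single pass over _nodeStateMap finds the currently best node.
    std::string bestNode() const {
        std::string best; double bestW = -1.0;
        for (const auto& kv : nodeStateMap)
            if (kv.second.overall > bestW) { bestW = kv.second.overall; best = kv.first; }
        return best;
    }
};
```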
\par \textbf{Metadata Placement and Retrieval}\text{---}Instead of registering metadata in a centralized server, Hiver adopts a pseudo-random generation function to distribute metadata over the cluster, in a way significantly different from Ceph. Taking \_ClusterState and the filename as input, the generation function generates a list of target nodes; the first N nodes (N being the number of replicas, usually N $\geq$ 3) are selected to place the metadata. During retrieval, the inputs of the generation function are still the filename and the latest \_ClusterState, regardless of whether the state has changed. Section \ref{sect_redistributionproof} explains why the pseudo-random generation function can still find the target metadata nodes when \_ClusterState has changed. With this component, metadata can be distributed and retrieved evenly across the cluster.
\par \textbf{File Placement and Retrieval}\text{---}To speed up file placement, a Hiver client always submits write requests to the best nodes, picked through a single pass over \_nodeStateMap. Once the target nodes of metadata and file are known, Hiver can start the transmissions of metadata and file to their respective targets simultaneously. During file reading, the client first calculates the target nodes of the metadata and sends them the metadata request. Instead of returning the metadata, the metadata nodes forward the file content request to the file nodes, which then return the file blocks to the client directly. Figure~\ref{fig_hiverfileaccess} shows the file reading process.
\par \textbf{Redistribution}\text{---}Data redistribution guarantees that different executions of the pseudo-random generation algorithm resolve to node sets containing the same metadata, even when the cluster changes. A simple strategy is to adjust the location of metadata when nodes join or leave. To avoid massive data movement, Section \ref{sect_redistributionproof} shows that the data movement required by this adjustment is confined within a reasonable, limited extent.
\par In the present implementation of Hiver, we provide the server-side system, pdfs, and client-side software. The client-side software contains a command line tool, pdfsc, and a C++ API with which programmers can equip their own applications to operate pdfs. pdfs is similar to the server side of FTP: both run on the server side and provide file and metadata operations. Almost all file operations can be performed through pdfsc, such as pushing/pulling files, listing files, and so on. With the provided C++ API, programmers can connect to the server, operate on files and perform server management from their own programs. For professional programmers and ordinary users alike, Hiver is easy to start and use.

\section{Redirection Hash Table Algorithm (RHT)}
\label{sect_rht}
\par In a centralized network, object placement and retrieval both depend on the central server. Once the central server is removed, they instead depend on a pseudo-random generation algorithm and the devices' capabilities. Ceph proposed the CRUSH algorithm for object placement according to device capabilities, and P2P sharing systems use DHTs to distribute and retrieve resources. In Hiver, we propose the Redirection Hash Table algorithm, which the rest of this section describes in detail.

\subsection{\_ClusterState}
\label{sect_clusterstate}
In distributed systems, global information, such as full knowledge of the participants, is usually crucial when global decisions are made. In Hiver, when a peer node wants to know which nodes have the best capabilities, a state list of all nodes is required, so we design the \_ClusterState structure to cache the latest state of all nodes. The pseudo-random generation function takes \_ClusterState as one input, together with the filename, and outputs the target nodes of the metadata.
\par In Hiver, \_ClusterState collects and caches the state information of the cluster participants. Its core member is \_nodeStateMap, which maintains a table of node states indicating each node's current capability. Through the periodic broadcasting of node states, \_ClusterState preserves a fresh \_nodeStateMap. \_ClusterState is typically used as an input when globally optimal choices are made, such as selecting the best nodes. \_nodeStateMap is organized as a table of Key/Value pairs of the form $\langle$IP, NodeStatus$\rangle$. The structure of NodeStatus is as follows:
\par $\langle$Starttime, Overall, CpuUsage, MemUsage, DiskUsage$\rangle$.
\par In the current implementation, \_nodeStateMap is stored in a dictionary sorted by the Overall weights. Only 40 bytes are used to record a NodeStatus, so even in a cluster of 10,000 nodes, \_nodeStateMap stays under 400KB. Moreover, a simple test shows that a partial sort of 10,000 NodeStatus entries costs only $\sim$300 microseconds on a regular PC, while a round-trip time (RTT) usually costs several to hundreds of milliseconds. So even if the cluster grows severalfold, Hiver can still manipulate \_nodeStateMap efficiently. Section \ref{sect_rdhalgorithm} explains how to find the target metadata nodes using \_ClusterState and the generation algorithm.
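\par The partial-sort measurement above exploits the fact that only the top of the list needs ordering. A minimal C++ sketch, assuming a simplified entry layout (the \texttt{Entry} type and function name are illustrative, not Hiver's code):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Simplified node entry: IP plus the Overall capability weight.
struct Entry { std::string ip; double overall; };

// Returns the n entries with the highest Overall weights. std::partial_sort
// orders only the first n positions, in O(size * log n) time, which is why
// selecting a few best nodes is cheap even for large clusters.
std::vector<Entry> bestNodes(std::vector<Entry> v, size_t n) {
    n = std::min(n, v.size());
    std::partial_sort(v.begin(), v.begin() + n, v.end(),
        [](const Entry& a, const Entry& b) { return a.overall > b.overall; });
    v.resize(n);
    return v;
}
```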

\subsection{Redirection Hash Table Algorithm}
\label{sect_rdhalgorithm}
\par Compared with metadata operations, file operations are more likely to drive the cluster into imbalance. Therefore, we treat file content and metadata in two different ways.
\par Each time a client starts a file write request, it first selects the best nodes from \_ClusterState and then records the selected nodes in a metadata file, which is distributed across the cluster nodes by the pseudo-random generation algorithm. To retrieve a file, the client first finds the target nodes of the metadata through the pseudo-random generation algorithm, and then resolves the target nodes of the file content from the metadata file. The pseudo-code of the pseudo-random generation algorithm is shown in Figure~\ref{fig_pseudocode}, and Figure~\ref{fig_metadatanodesselection} demonstrates how the target metadata nodes are selected with it. The FNVHash1 algorithm [11] is used in this process.

\begin{figure}[!t]
\rule{8cm}{1pt}
\begin{lstlisting}
/*
Parameter: filename is an object name;
num is the number of wanted nodes;
nodeStatusList is the list of the nodes status.
*/
list RHT(list nodeStatusList, string filename,
      int num)
{
    // pair each node ID with its hash value
    list tempList, retList;
    for (int i = 0; i < nodeStatusList.size(); i++)
        tempList.insert(pair(FNVHash1(
            nodeStatusList[i].ID + filename),
            nodeStatusList[i].ID));
    SortList(tempList); // ascending by hash value
    for (int j = 0; j < num; j++)
        retList.insert(tempList[j].ID);
    return retList;
}
\end{lstlisting}
\rule{8cm}{1pt}
\caption{Pseudo-code of the pseudo-random generation algorithm}
\label{fig_pseudocode}
\end{figure}
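\par The FNVHash1 step in the pseudo-code above is the standard 32-bit FNV-1 function [11] (offset basis 2166136261, prime 16777619). Whether Hiver uses the 32-bit or 64-bit variant is not stated in the text, so the width here is an assumption:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// 32-bit FNV-1: start from the offset basis, then for each byte multiply by
// the FNV prime first and XOR the byte in afterwards. (FNV-1a reverses the
// multiply/XOR order.)
uint32_t fnv1_32(const std::string& s) {
    uint32_t h = 2166136261u;        // FNV-1 32-bit offset basis
    for (unsigned char c : s) {
        h *= 16777619u;              // FNV prime
        h ^= c;
    }
    return h;
}
```

In the RHT setting, hashing \texttt{ID + filename} gives every node a pseudo-random, filename-dependent rank, so sorting the hashes yields a deterministic but well-scattered candidate order.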

\par So far, the placement and retrieval of files and metadata can be achieved. One might argue that, when the pseudo-random generation function is used to distribute metadata, the metadata could also cause load imbalance, just as files do. However, compared with files, whose sizes vary widely, metadata records all have an almost identical small size and cost nearly the same to process. We therefore work under the assumption that imbalance is far more likely to be caused by files than by metadata.

\begin{figure}[!t]
\centering
  \includegraphics[scale=0.44]{images/MetadataNodesSelection.pdf}
  \caption{Metadata nodes selection process}
\label{fig_metadatanodesselection}
\end{figure}

\par To illustrate the difference between centralized file systems and Hiver in file access, we compare them as follows.
\par In a centralized file system, a file operation is carried out in the following steps, shown in Figure~\ref{fig_hdfsfileaccess}.
\begin{itemize}
  \item Step-1: The client queries the metadata from the Name-node;
  \item Step-2: The Name-node returns the metadata to the client;
  \item Step-3: The client reads the file blocks from the target nodes according to the metadata.
\end{itemize}

\begin{figure}[!t]
\centering
  \includegraphics[scale=0.6]{images/HDFSFileAccess.pdf}
  \caption{File accessing in centralized cluster (HDFS)}
\label{fig_hdfsfileaccess}
\end{figure}

In Hiver, a file access operation is conducted in the following steps; Figure~\ref{fig_hiverfileaccess} shows the file reading process.
\begin{itemize}
  \item Step-1: Calculate the target nodes of the metadata locally.
  \item Step-2: Send the access request to the metadata nodes.
  \item Step-3: Query the target file nodes on the metadata node.
  \item Step-4: Redirect the file access request from the metadata node to the file nodes.
  \item Step-5: Return the file blocks from the file nodes.
\end{itemize}
\par As shown in Step-4 above, the metadata node directly redirects the file content request to the file content nodes instead of returning the metadata, which shortens the access path. This is why our algorithm is named the \textbf{Redirection} Hash Table algorithm.

\begin{figure}[!t]
\centering
  \includegraphics[scale=0.6]{images/HiverFileAccess.pdf}
  \caption{File reading in Hiver}
\label{fig_hiverfileaccess}
\end{figure}

\subsection{Algorithm Performance Analysis}
\par \noindent\textbf{Time Complexity}
\par According to the pseudo-code in Figure~\ref{fig_pseudocode}, the time complexity of the generation function is $O(n\log n)$, where $n$ is the number of cluster nodes. On an intranet, a round-trip time (RTT) usually costs several to hundreds of milliseconds, whereas one execution of the RHT algorithm usually costs no more than a few hundred microseconds when the cluster size is around 10,000.
\par \noindent\textbf{Space complexity}
\par As described in Section \ref{sect_clusterstate}, \_nodeStateMap resides in every cluster node and client. On a single node, \_nodeStateMap costs 400KB when the cluster size is 10,000, so the total size of \_nodeStateMap across all cluster nodes is 4GB (10,000$\times$400KB).
\par\noindent \textbf{Communication cost}
\par Metadata accessing: a metadata operation costs 1 RTT, the same as in HDFS;
\par File accessing: a file operation costs 1.5 RTTs, less than the 2 RTTs in HDFS.

\subsection{Replica Placement}
\par A backup mechanism is a primitive measure to enhance the reliability and safety of a system and its data; accordingly, it is widely adopted, e.g., in Lustre and HDFS. Backups are used to restore the original data after a data loss event. In Hiver, the backup mechanism is likewise employed to ensure both data safety and system reliability: every datum always has at least 3 replicas whenever the cluster has more than 3 nodes.
\par To simplify replica placement, we do not take the physical structure of the nodes into consideration. In our current design there are no rack or data center concepts: all distances (or hop counts) between nodes are treated as equal, as are the distances from clients to cluster nodes, although they are not always equal in practice.
\par Two types of replica appear in Hiver: metadata replicas and file replicas. The placement of metadata replicas depends entirely on the pseudo-random generation function and \_ClusterState. With the filename as the other input, a list of target nodes is generated from \_ClusterState. We usually accept the first 3 nodes as the target nodes of the metadata; after the first transmission, the first node backs the metadata up to the other two nodes.
\par Compared with metadata backup, file backup emphasizes transmission speed because of the large file sizes. To accelerate transmission, a list of best nodes is selected to share the file writing load: through \_ClusterState, we obtain the nodes with the best capabilities, and usually the first 3 are chosen for the current file write. After the transmission to the first target node, that node starts the backup to the other two nodes.

\section{System Reliability, Data Safety and Consistency}
\label{sect_datasafty}
\subsection{Failure Detection and Data Safety}
\par To detect node failures, heartbeat broadcasts are introduced to synchronize the cluster state. Every node receives the state information broadcast by the others and assembles it into a full copy of the real-time cluster state, \_ClusterState. With this state collection and update mechanism, node departures and arrivals can be detected.
\par It is essential that every cluster node, as well as every Hiver client, preserve a fresh cluster state. With the global knowledge in \_ClusterState, all nodes can detect and process the related events in a timely fashion; otherwise, much deviation would creep into \_ClusterState, and faults such as requests being sent to lost nodes would become likely.
\par Even if deviation is introduced temporarily, e.g., when a few nodes are lost from the cluster, Hiver can still provide normal service using the remaining replicas. The probability of file loss is relatively low, and can be expressed as:

\begin{equation}
  P=\sum_{m=b}^{N}Q(k=m|T=\Delta t),
\end{equation}
where $Q(k=m|T=\Delta t)$ denotes the probability that $m$ nodes drop out simultaneously within $\Delta t$ seconds, and can be expressed as:
\begin{equation}
Q(k=m|T=\Delta t)=\frac{m}{N}\,p(T=\Delta t)^{m}.
\end{equation}

\par On the right side of formula (1), $m$ is no less than $b$, the number of backups. On the right side of formula (2), $N$ is the cluster size, $p(T=\Delta t)$ denotes the probability that a single node drops out during $\Delta t$ seconds, and $\frac{m}{N}$ represents the probability that all the replicas of some file are contained in the $m$ lost nodes.
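\par As a quick numerical illustration of formulas (1) and (2) (the figures here are assumptions chosen for illustration, not measurements): with $N=10{,}000$ nodes, $b=3$ replicas and a per-node drop-out probability $p(T=\Delta t)=10^{-3}$, the first term of the sum dominates,
\begin{equation*}
P \approx Q(k=3|T=\Delta t)=\frac{3}{10{,}000}\times\left(10^{-3}\right)^{3}=3\times10^{-13},
\end{equation*}
and every further term is smaller by a factor of roughly $p$, so under this model the loss probability is governed almost entirely by the $m=b$ term.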
\par To reduce the probability of permanent data loss, the recovery mechanism starts eliminating the deviations as soon as a failure is detected.
\par Besides the replica policy, Hiver also introduces other policies and mechanisms to keep data from corruption or destruction, such as ECC [12] and MD5 checksum verification [13].

\subsection{Redistribution on Cluster Changes and Data Movement}
\label{sect_redistributionproof}
\par In this section, we explain how Hiver can still retrieve metadata and files when the cluster changes.
\par If different \_ClusterState values are fed into the generation function, we may get different output node sets:
\par \textbf{S$_{origin}$ = RHT (\_ClusterState$^{t1}$, filename, howMany)}
\par \textbf{S$_{new}$ = RHT (\_ClusterState$^{t2}$, filename, howMany)}
\par \noindent i.e., S$_{origin} \neq S_{new}$. This situation may happen when the cluster changes; furthermore, when cluster nodes drop out, it will likely cause file reads to fail.
\par In order to find the target nodes of metadata through the same generation algorithm but with a different \_ClusterState, we introduce a simple adjustment mechanism, Data Redistribution.
\par When a new node joins, since subsequent queries are resolved through
\par \textbf{S$_{new}$=RHT (\_ClusterState$^{new}$, filename, howMany)},
\par\noindent so metadata redistribution over \{$S_{origin}, S_{new}$\} is required, i.e., if S$_{origin} \neq S_{new}$, the corresponding metadata should be migrated from S$_{origin}$ to S$_{new}$.
\par Similarly, when nodes drop out, both the lost file replicas and the lost metadata replicas need to be restored with a similar approach.
\par To elaborate the redistribution process, we describe it from two aspects: redistribution when a node joins and redistribution when a node drops out.
\par\noindent\textbf{1) Redistribution on node joining:}
\par As soon as the existing nodes detect the new member, they check all their local data for metadata that needs to be migrated to the newly joined node. The checking procedure and criteria are similar to the initial target-node finding process, described as follows.
\begin{itemize}
\item First, generate a candidate set using RHT (\_ClusterState, filename, howMany) for every local file, where RHT() is the generation function.
\item Then, check whether the new node is contained in the candidate set.
\item If so, migrate the metadata from the last original node in the candidate set to the new node;
\item If not, check the next metadata item, until all metadata has been checked. The procedure is shown in Figure~\ref{fig_hiverredistribution}.
\end{itemize}
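\par The checking loop above can be sketched as follows. This is a minimal illustration rather than Hiver's implementation: the generation function is passed in as a parameter (with \_ClusterState already bound), and names such as itemsToMigrate are hypothetical.

```cpp
#include <algorithm>
#include <functional>
#include <string>
#include <vector>

// Signature of the generation function RHT(_ClusterState, filename,
// howMany), with _ClusterState already bound so only the per-file
// arguments remain.
using GenFn = std::function<std::vector<int>(const std::string&, int)>;

// Returns the local metadata items that must migrate to the newly
// joined node: exactly those whose regenerated candidate set contains
// it. The migration itself (from the last original node in the
// candidate set to the new node) is left to the transfer layer.
std::vector<std::string> itemsToMigrate(
        const std::vector<std::string>& localItems,
        int newNode, int howMany, const GenFn& rht) {
    std::vector<std::string> moved;
    for (const auto& name : localItems) {
        std::vector<int> candidates = rht(name, howMany);
        if (std::find(candidates.begin(), candidates.end(), newNode)
                != candidates.end())
            moved.push_back(name);
        // Otherwise continue with the next metadata item.
    }
    return moved;
}
```

Because the generation function is a parameter, the redistribution logic can be exercised with any placement function of the same shape.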

\begin{figure}[!t]
\centering
  \includegraphics[scale=0.6]{images/HiverRedistribution.pdf}
  \caption{Metadata/File redistribution when node joins in cluster}
\label{fig_hiverredistribution}
\end{figure}

\par\noindent\textbf{2) Redistribution on node dropping out:}
\par On node removal, both the node's local metadata and its files need to be checked. The metadata checking procedure is similar to that of the node-addition process; the difference is checking whether the removed node is contained in the candidate set. If it is, another qualified node is found to hold the lost metadata replica.
\par Recovering file replicas first requires finding which replicas resided on the lost node. Therefore, a global search of the metadata is needed to check whether each recorded file was stored on the lost node. If so, a new file replica is transferred to a new target node; if not, the next metadata item is checked. The procedure is shown in Figure~\ref{fig_hiverredistribution}.
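\par The global metadata search can be sketched in the same spirit. MetaRecord and its fields are hypothetical names for whatever record layout Hiver's metadata actually uses.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical metadata record: a file name plus the nodes holding
// its replicas.
struct MetaRecord {
    std::string file;
    std::vector<int> replicaNodes;
};

// Scans all metadata records and returns the files that had a replica
// on the lost node; each of these needs a new replica placed on a
// fresh target node.
std::vector<std::string> filesToRestore(
        const std::vector<MetaRecord>& allMeta, int lostNode) {
    std::vector<std::string> out;
    for (const auto& rec : allMeta)
        if (std::find(rec.replicaNodes.begin(), rec.replicaNodes.end(),
                      lostNode) != rec.replicaNodes.end())
            out.push_back(rec.file);
    return out;
}
```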
\par To prevent massive movement of file and metadata replicas, we prove that Hiver keeps data movement within a limited extent, reaching the conclusions in Lemmas 1 and 2. We further conclude that the larger the cluster, the smaller the scale of data migration caused.

\newtheorem{mylemma}{Lemma}
\newtheorem{myproof}{Proof}
\begin{mylemma}
The moved files comprise only the files that resided on the lost node.
\end{mylemma}
\begin{proof}
According to the recovery policy, the redistribution module selects another best node from \_ClusterState to restore each lost replica; replicas on surviving nodes are left untouched.
\end{proof}
\begin{mylemma}
1) When a node joins, the amount of moved metadata is 3/n$^2$ of all the metadata, where n is the cluster size after the node joins and 3 is the number of backups; \\ 2) when a node drops out, the amount of moved metadata is 3/n$^2$ of all the metadata, where n is the cluster size before the node drops out and 3 is the number of backups.
\end{mylemma}
\begin{proof} As shown in Figure~\ref{fig_proof2}, the probability that a metadata item is mapped to any particular node of the cluster is 1/n, so the probability that a metadata item is mapped into the first three nodes is 3/n. According to the metadata placement and retrieval algorithm, metadata is contained in the first 3 nodes of the result list of the generation algorithm. Therefore, when the joining (or departing) node appears among the first three elements, the 1/n of the metadata mapped to it is moved to \uline{the new node} (or the 1/n of the metadata which resided on \uline{the lost node} must be moved), so 3/n is multiplied by 1/n.
\begin{figure}[!t]
\centering
  \includegraphics[scale=0.6]{images/proof2.pdf}
  \caption{Probability illustration for the proof of Lemma 2}
\label{fig_proof2}
\end{figure}
\end{proof}
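\par A quick numeric instance of Lemma 2: with $n=100$ and 3 backups, the migrated fraction is
\begin{equation*}
\frac{3}{n}\cdot\frac{1}{n}=\frac{3}{10000}=0.03\%
\end{equation*}
of all metadata, while doubling the cluster to $n=200$ cuts it to $0.0075\%$, consistent with the observation that larger clusters cause smaller migrations.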

\subsection{Data Consistency during Cluster Update}
\par When the cluster changes, some data must be migrated. During these processes, we adopt the Paxos algorithm [14] to keep the replicas on different nodes consistent. Paxos solves consensus in a network of unreliable participants whose communication medium may also experience failures.

\section{Evaluations}
\label{sect_evaluation}
\par In this section, a practical prototype and a simulation platform are both employed to evaluate the throughput, latency, and fault tolerance of Hiver. The experiments fall into two groups: data transmission and system reliability.
\par The data transmission experiments cover system throughput, network utilization, connection latency, and metadata access latency. The reliability experiments cover fault tolerance, failure recovery, and disaster recovery.
\par We use four Dell PowerEdge servers to carry out the experiments on the first version of Hiver. The cluster contains one PowerEdge R710 and two PowerEdge R410 servers, and the client runs on another PowerEdge R410. The servers are connected by a Cisco router with a bandwidth of 10 Gbps.
\par OMNeT++ [15] is a widely used discrete event simulator; on this simulation platform, we implement prototypes of both the centralized and the decentralized metadata and file systems.

\begin{table*}[h!]
\centering
 \caption{Probabilities of data loss when nodes drop out (backup number is 3)}
 \begin{tabular}{|c |c |c |c |c |c|}
 \hline
 \multirow{2}{*}{Failure Nodes Number} & \multicolumn{5}{c|}{Cluster Nodes Number} \\
 \cline{2-6}
 & 8 & 16 & 32 & 64 & 128 \\
 \hline
 1 & 0 & 0 & 0 & 0 & 0 \\
 \hline
 2 & 0 & 0 & 0 & 0 & 0 \\
 \hline
 3 & 2.98$\times10^{-3}$ & 2.98$\times10^{-4}$ & 3.36$\times10^{-5}$ & 4.00$\times10^{-6}$ & 4.88$\times10^{-7}$ \\
 \hline
 4 & 1.19$\times10^{-2}$ & 1.19$\times10^{-3}$ & 1.34$\times10^{-4}$ & 1.60$\times10^{-5}$ & 1.95$\times10^{-6}$ \\
 \hline
 8 & 1 & 1.60$\times10^{-2}$ & 1.88$\times10^{-3}$ & 2.24$\times10^{-4}$ & 2.73$\times10^{-5}$ \\
 \hline
 16 & 1 & 1 & 1.88$\times10^{-2}$ & 2.24$\times10^{-3}$ & 2.73$\times10^{-4}$ \\
 \hline
 \end{tabular}
\label{table_dataloss}
\end{table*}

\subsection{Delay of metadata operation}
\par In order to compare the delays of metadata operations, we simulate a network composed of 3 server nodes and 10 client nodes. In each network, the 10 clients submit requests at a rate of 10,000 requests/minute to the server(s). We record the delays of all the requests and calculate the average delay on each client.
\par As shown in Figure~\ref{fig_averagedelay}, each point on a line represents the average delay of 10,000 metadata operations. We find that the decentralized network has lower delay and better responsiveness.

\begin{figure}[!t]
\begin{tikzpicture}
\begin{axis}[
    title style={text width=6cm, align=center, font=\small},
    ymajorgrids,
    xmax=11,xmin=0,
    ymin=365,ymax=415,
    xlabel=\#Client,
    ylabel=Average Delay(ms),
    xtick={1,...,10},
    ytick={365, 370,..., 415},
    legend style={legend pos=north east} ]
\legend{centralized, decentralized}
\addplot coordinates {
    (1, 402) (2, 403) (3, 408) (4, 392) (5, 396) (6, 400) (7, 398) (8, 404) (9, 393) (10,395)
};
\addplot coordinates {
    (1, 388) (2, 391) (3, 387) (4, 383) (5, 397) (6, 384) (7, 386) (8, 388) (9, 392) (10, 381)
};
\end{axis}
\end{tikzpicture}
\caption{Average Latencies of Centralized and Decentralized Network}
\label{fig_averagedelay}
\end{figure}

\subsection{Throughput}
\par After eliminating central nodes, there is no single-point performance bottleneck. Theoretically, Hiver achieves higher throughput than centralized distributed file systems such as HDFS; the maximum throughput of Hiver can reach the sum of the throughput of all nodes. To support the claim that our architecture performs well on throughput, we test the throughput of Hiver on our experimental platform with different file sizes. Table \ref{table_throughput} shows the results.

\begin{table}[h!]
\centering
 \caption{Throughput of Hiver}
 \begin{tabular}{|c c c c c c c|}
 \hline
 Size & 1 KB & 64 KB & 1 MB & 64 MB & 1 GB & 2 GB \\
 \hline
 Speed & 30 Kbits/s & 558 Kbits/s & 8 Mbits/s & 93 Mbits/s & 187 Mbits/s & 187 Mbits/s \\
 \hline
 \end{tabular}
 \label{table_throughput}
\end{table}

\par Measuring with Iperf [16], we estimate the bandwidth between servers to be around 941.5 Mbits/s. The following tests on Hiver all use single-thread transmission without file striping. The results show that when the transferred file is large enough, the transmission rate of a single thread reaches 19.97\% of the bandwidth.

\subsection{Reliability}
\par In Hiver, if a node stops broadcasting for NODE\_OFFLINE\_TIME seconds (configured as 15 seconds in Hiver), it is marked as lost by the other nodes. Simultaneously, the other nodes start the data redistribution task to recover the lost data. But if three or more nodes (three being the configured number of replicas) are lost within NODE\_OFFLINE\_TIME seconds, the worst case may happen: all three replicas of a file or metadata item are lost at the same time, and that data is lost from Hiver forever. Shortening NODE\_OFFLINE\_TIME reduces the possibility of data loss, because node removal is discovered earlier. However, it raises the risk of mistaken redistribution, since a node may miss the normal broadcasts for other reasons, such as a transient network failure; such inappropriate redistribution also aggravates the workload of the system and the network. There is thus a tradeoff between data safety and redistribution performance. Beyond that, increasing the number of replicas is another feasible way to reduce the possibility of data loss.
\par We calculated the probabilities of data loss on node removal for several cluster sizes within one NODE\_OFFLINE\_TIME cycle. Table \ref{table_dataloss} shows the probability that a single file or metadata item is lost forever.
\par All the probabilities shown in Table \ref{table_dataloss} assume that $F$ nodes ($F$ is the value in the first column of Table \ref{table_dataloss}) are lost simultaneously in one NODE\_OFFLINE\_TIME cycle. With 3 replicas and cluster size $N$, the probability for $F<N$ can be calculated by
\begin{equation}
  p=\binom{F}{3}\frac{1}{N(N-1)(N-2)};
\end{equation}
when the failed nodes cannot leave any replica alive (e.g., $F=N$), $p=1$.
From the results, we can infer that a larger cluster size and more data replicas sharply reduce the possibility of data loss. Compared with HDFS, data reliability no longer depends on central nodes. All the metadata is distributed across different nodes, and the crash of any single node will neither break down the overall system nor lose all data.

\section{Conclusions}
\par We propose an object locating algorithm for completely decentralized networks and design a completely decentralized distributed file system called Hiver. The algorithm adopts a new pseudo-random generation scheme, with which Hiver always processes write requests on the currently best nodes, and it achieves this without querying any central node. Additionally, after removing the central nodes, the access path is shortened from 2 RTTs to 1.5 RTTs, so access latency is reduced as well.
\par A comparison between our system, a centralized system, and a naive hash-based system is shown in Table \ref{table_centralvsdecentral}:
\begin{table}[h!]
\centering
\caption{Comparisons between Hiver, centralized System and Naive hash-based System}
\begin{tabular}{|c|m{3cm}|m{3.3cm}|}
\hline
\multirow{2}{*}{} & \multicolumn{2}{c|}{\textbf{Comparison}} \\
\cline{2-3}
& \textbf{Centralized System} & \textbf{Naive hash-based System} \\
\hline
\textbf{Hiver} &
  \leavevmode\newline 1.) No centralized performance bottleneck and no single point of failure;
  \newline 2.) Higher reliability, scalability and security;
  \newline 3.) Shorter access path and lower access latency. &
  \leavevmode\newline 1.) Maximizes utilization of the best-performing nodes;
  \newline 2.) Reduces data migration. \\
\hline
\end{tabular}
\label{table_centralvsdecentral}
\end{table}

\par Briefly, besides the DHT and CRUSH algorithms, we provide another efficient option for solving the problem of object locating in a completely decentralized network.

\section{Future Work}
\par We will first implement the functions that are designed but not yet implemented, and make Hiver a fundamental infrastructure for other distributed systems. Although, through optimal data distribution and dynamic redistribution, the RHT algorithm and architecture have greatly improved scalability, performance, and reliability, our ultimate goal is to eliminate the metadata and access files on remote nodes directly.
\par With this goal, we are looking for a model that satisfies the following two conditions at the same time.

\newtheorem{myconclusion}{Condition}
\begin{myconclusion}
Requests are always delivered to the best nodes.
\end{myconclusion}
\begin{myconclusion}
By local calculation with the model, a file can be located wherever it is or will be stored. That is, file locating involves no queries, only local calculation.
\end{myconclusion}

\par Machine learning is the first model we took into consideration. However, one prerequisite for adopting it, that the data exhibit some inherent regularity, has stopped us from further research. Even so, we have not concluded that there is no regularity in file accessing; we look forward to more research on this issue.


\begin{thebibliography}{1}
\bibitem{IEEEhowto:kopka}
Stoica, I., Morris, R., Karger, D., Kaashoek, M. F., \& Balakrishnan, H. (2001). Chord: A scalable peer-to-peer lookup service for internet applications. Proc. 2001 ACM SIGCOMM Conference (Vol. 31, pp. 149--160).
\bibitem{1}
Rowstron, A., \& Peter Druschel. (2001). Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems. Middleware 2001. Springer Berlin Heidelberg.
\bibitem{}
Ratnasamy, S., Francis, P., Handley, M., Karp, R., \& Shenker, S. (2001). A scalable content-addressable network. Proc. 2001 ACM SIGCOMM Conference (pp. 161--172).
\bibitem{}
Hildrum, K., Kubiatowicz, J. D., Rao, S., \& Zhao, B. Y. (2002). Distributed object location in a dynamic network. In Proceedings of SPAA (pp. 41--52).
\bibitem{}
Urdaneta, G., Pierre, G., \& Steen, M. V. (2011). A survey of dht security techniques. Acm Computing Surveys, 43(2), 33-63.
\bibitem{}
Hash table [EB/OL]. http://en.wikipedia.org/wiki/Hash\_table.
\bibitem{}
Schollmeier, R. (2001). A definition of peer-to-peer networking for the classification of peer-to-peer architectures and applications. Peer-to-Peer Computing, IEEE International Conference on (pp. 101--102).
\bibitem{}
Weil, S. A., Brandt, S. A., Miller, E. L., \& Maltzahn, C. (2006). CRUSH: Controlled, scalable, decentralized placement of replicated data. SC 2006 Conference, Proceedings of the ACM/IEEE (pp. 31--31).
\bibitem{}
Weil, S. A., Brandt, S. A., Miller, E. L., Long, D. D. E., \& Maltzahn, C. (2006). Ceph: a scalable, high-performance distributed file system. Proceedings of the 7th symposium on Operating systems design and implementation (pp.307--320).
\bibitem{}
DHT [EB/OL]. http://en.wikipedia.org/wiki/DHT.
\bibitem{}
FNV Hash [EB/OL]. \url{http://www.isthe.com/chongo/tech/comp/fnv/index.html}.
\bibitem{}
MacWilliams, F. J., \& Sloane, N. J. A. (1977). The Theory of Error-Correcting Codes. North-Holland Publishing Co.
\bibitem{}
RFC 1321 – The MD5 Message-Digest Algorithm. Internet Engineering Task Force. April 1992. Retrieved 5 October 2013.
\bibitem{}
Lamport, L. (2006). Fast Paxos. Distributed Computing, 19(2), 79--103.
\bibitem{}
OMNeT++[EB/OL]. https://omnetpp.org/.
\bibitem{}
Iperf [EB/OL]. https://iperf.fr/.
\bibitem{}
Adya, A., Bolosky, W. J., Castro, M., Cermak, G., Chaiken, R., Douceur, J. R., et al. (2002). FARSITE: Federated, available, and reliable storage for an incompletely trusted environment. Proceedings of the 5th Symposium on Operating Systems Design and Implementation (Vol. 36, pp. 1--14). ACM.
\bibitem{}
Castro, M., \& Liskov, B. (1999). Practical Byzantine fault tolerance. Symposium on Operating Systems Design and Implementation (Vol.20, pp.173--186). USENIX Association.
\bibitem{}
Moore, M., Bonnie, D., et al. OrangeFS: Advancing PVFS.
\bibitem{}
Shvachko, K., Kuang, H., Radia, S., \& Chansler, R. (2010, May). The hadoop distributed file system. In 2010 IEEE 26th symposium on mass storage systems and technologies (MSST) (pp. 1-10). IEEE.
\bibitem{}
Schmuck, F. B., \& Haskin, R. L. (2002, January). GPFS: A Shared-Disk File System for Large Computing Clusters. In FAST (Vol. 2, pp. 231-244).
\bibitem{}
Nagle, D., Serenyi, D., \& Matthews, A. (2004, November). The panasas activescale storage cluster: Delivering scalable high bandwidth storage. In Proceedings of the 2004 ACM/IEEE conference on Supercomputing (p. 53). IEEE Computer Society.
\bibitem{}
Ross, R. B., \& Thakur, R. (2000, October). PVFS: A parallel file system for Linux clusters. In Proceedings of the 4th annual Linux Showcase and Conference (pp. 391-430).
\bibitem{}
Welch, B., Unangst, M., Abbasi, Z., Gibson, G. A., Mueller, B., Small, J., ... \& Zhou, B. (2008, February). Scalable Performance of the Panasas Parallel File System. In FAST (Vol. 8, pp. 1-17).
\bibitem{}
  HDFS Architecture[EB/OL]. \url{http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html}.

\end{thebibliography}


\begin{IEEEbiographynophoto}{Yifeng Chen, Weidong Zhang, Lei Zhang}
at HCST Key Lab, EECS\\
Peking University\\
Beijing 100871, China\\
\{cyf, zhangwd, lei\_z\}@pku.edu.cn
\end{IEEEbiographynophoto}

\end{document}

