\documentclass[10pt,psfig,letterpaper,twocolumn]{article}

\include{preamble}

\begin{document}
\bibliographystyle{acm} 

%%%%%%%%%%%%        Title     %%%%%%%%%%%%%%%%

\title{\fontfamily{phv}\selectfont{\huge{\bfseries{A Hybrid Distributed File System \\ with Partial Mounting}}}}
\author{
{\fontfamily{ptm}\selectfont{\large{\bfseries{Elena Apostol}}}}, \and
{\fontfamily{ptm}\selectfont{\large{\bfseries{Alexandru Radovici}}}}\\
}
\date{}
\maketitle

%%%%%%%%%%%%%%%%    Abstract    %%%%%%%%%%%%%%%%

\thispagestyle{empty}
\begin{abstract}
\par
Distributed file systems combine the storage of several networked machines into a single logical file system, trading off speed, capacity, and consistency.
\par
This article describes the architecture of NetFS, a hybrid distributed file system whose storage components, called fs\_nodes, can be either memory backed or disk backed, and its main advantages over a disk-based file system. NetFS supports partial mounting, dynamic addition and removal of nodes, and several write priority strategies.
\end{abstract}
\par
{\bf Keywords:}
distributed file system, memory storage, peer-to-peer, Gigabit Ethernet, partial mounting.

%%%%%%%%%%%%%%%%    Introduction    %%%%%%%%%%%%%%%%


\section*{\fontfamily{phv}\selectfont{\normalsize{\bfseries{INTRODUCTION}}}}
% The main objective of the present solution is to obtain 
\par
This paper presents a distributed file system architecture with several performance improvements over traditional distributed file systems.
\par
The file system we have developed has several features that make it very fast while also allowing it to store large data blocks.
\par
The main advantage of this system is that files can be stored in multiple places. NetFS is a hybrid file system composed of several parts that can use either local or network storage. The storage components of the file system are called fs\_nodes from now on. An fs\_node can work in one of two modes: memory backed or disk backed. Depending on the modes of the nodes involved, NetFS has a speed advantage over traditional disk-based file systems and a capacity advantage over in-memory file systems.
%Because we use the memory as storing device reads and write are very fast. 
\par
NetFS is scalable: system performance increases as new storage resources are added.
\par
% location transperancy
This file system offers location transparency. The system design allows individual data blocks of the same file to be distributed over the shared storage of different fs\_nodes. NetFS can be mounted over several storage spaces, but the user sees all of them as a single one, because all fs\_nodes share the same directory structure.
\par
% partially mounted AND dynamically add and remove nodes
The system can be partially mounted. This is extremely helpful because not all nodes are necessarily connected at a given time, and new nodes can be dynamically added to the file system. Because some parts of the file system can be memory based, a persistence mechanism must be implemented. This mechanism copies the data from memory to disk on unmount. Files held in memory can also be backed up to disk while the system is mounted, provided the average CPU load is below 2\%. When the system is remounted, a memory-based node restores its data from disk.
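As an illustration, the trigger for the background backup can be as simple as a load check. The function names below are our own sketch, not taken from the NetFS sources, and reading "2\% load" as a fraction of 0.02 is our assumption:

```cpp
#include <fstream>

// Illustrative sketch of the persistence policy: memory-backed data is
// flushed to disk on unmount, and may also be backed up while mounted
// when the CPU load is low enough.

// Decide whether a background backup may run now. The 2% threshold
// follows the policy in the text; interpreting it as a 0.02 fraction
// of the load figure is an assumption.
bool may_backup_now(double cpu_load) {
    return cpu_load < 0.02;
}

// Read the 1-minute load average from /proc/loadavg (Linux-specific).
double one_minute_load() {
    std::ifstream f("/proc/loadavg");
    double load = 0.0;
    f >> load;
    return load;
}
```

A caller would periodically combine the two: `if (may_backup_now(one_minute_load())) { /* copy memory pages to disk */ }`.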
\par
% priority mechanism
File creation conflicts may occur because nodes can be dynamically added to and removed from the file system. To resolve such conflicts, a priority mechanism was developed. It can be based on different criteria, depending on how the applications currently use the file system. If the applications need fast reads and writes of data, the speed priority is applied: when a new file must be created, depending on its size and on the available space of the fs\_nodes, memory storage is considered first, as it offers the highest read/write speed, followed by the other fs\_nodes such as SATA disks and USB flash drives. Other priority mechanisms are local priority, uniform distribution, and user-based priority. A more detailed study of these priority mechanisms is conducted later in this paper.
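A minimal sketch of the speed priority in C++; the `FsNode` fields and the function name are illustrative assumptions, not the actual NetFS data structures:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical fs_node descriptor (fields are our own invention).
struct FsNode {
    std::string name;
    uint64_t free_bytes;  // available space on this fs_node
    int speed_rank;       // 0 = fastest (memory), then SATA disk, USB flash, ...
};

// Speed priority: among the nodes with enough space for the new file,
// choose the one with the best (lowest) speed rank. Returns nullptr
// when no node can hold the file.
const FsNode* pick_node_by_speed(const std::vector<FsNode>& nodes,
                                 uint64_t file_size) {
    const FsNode* best = nullptr;
    for (const FsNode& n : nodes) {
        if (n.free_bytes < file_size) continue;     // not enough space
        if (!best || n.speed_rank < best->speed_rank) best = &n;
    }
    return best;
}
```

The other strategies (local priority, uniform distribution) would only change the comparison used to pick `best`.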


%%%%%%%%%%%%%%%%    Architecture    %%%%%%%%%%%%%%%%

\section*{\fontfamily{phv}\selectfont{\normalsize{\bfseries{ARCHITECTURE}}}}
\par
The high level architecture of the NetFS file system is depicted in Figure \ref{netfs_nodes}. It consists of:

\begin{figure}[!ht]
{\centering \resizebox*{3in}{2.5in}{\includegraphics{pictures/netfs_nodes}} \par}
\caption{\fontfamily{ptm}\selectfont{\normalsize{NetFS High Level Architecture}}}
\label{netfs_nodes}
\end{figure}

\begin{itemize}
\item \textit{File Manager} - its main purpose is to implement inode and superblock operations, and mount/unmount operations.
\item \textit{Partition Manager} - its main purpose is to obtain efficient concurrent data transfer between the nodes that compose the distributed architecture. The Partition Manager keeps information about the metadata structures, so it knows in which fs\_node a particular data block is stored.
\par
It also resolves file creation conflicts using the priority mechanism. If different fs\_nodes are mounted at different times, the file system could become inconsistent through directories or files created with the same names at the same paths. To avoid this inconsistency, the file with the most recent modification time is given priority and is allowed to keep the contested name. The other files competing for that name are transparently renamed by appending an underscore and a number, in an order established by their modification times.
\par
The partition manager also decides what storage device a write should be sent to.
There are several types of write strategies, depending on the user or application requirements:
\begin{itemize}
 \item speed - this write strategy focuses on the fastest available storage;
 \item local priority - this write strategy will always prefer local storage to network storage, if it is possible;
 \item uniformly distributed - this write strategy will make an effort to distribute the data uniformly among 
       available storage devices (memory, hard disk, flash disk, network);
 \item user specified write strategy - specified as a mount option.
\end{itemize}

\item \textit{RD/WR Module} - there can be several RD/WR modules active in any running instance of NetFS.  They are runtime loadable, and follow a standard interface in order to allow accessing different kinds of storage.  Possible types of RD/WR modules are:
\begin{itemize}
 \item Memory based.  This RD/WR module implements the file operations as memory copy operations.  Upon shutdown, it saves its
       current state to persistent storage.
 \item Disk based.  This RD/WR module implements plain reads and writes to a file that is on disk.
 \item Network based.  This RD/WR module communicates with the associated Socket Engine, in order to send read and write
       requests to the network.
\end{itemize}

\item \textit{Socket Engine} - its purpose is to communicate with remote nodes from the network.  It uses TCP connections for its communications, and it sends and receives asynchronous messages.  It also implements a rudimentary cache system.
\end{itemize}
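The underscore-and-number renaming convention used by the Partition Manager can be sketched as follows; the function name and the use of a `std::set` of taken names are illustrative:

```cpp
#include <set>
#include <string>

// Conflict renaming sketch: the newest file keeps the contested name;
// older files get "_1", "_2", ... appended, in modification-time order.
// `taken` holds the names already in use in the directory.
std::string resolve_name(const std::string& wanted,
                         std::set<std::string>& taken) {
    if (taken.insert(wanted).second) return wanted;  // name was free
    for (int i = 1; ; ++i) {
        std::string candidate = wanted + "_" + std::to_string(i);
        if (taken.insert(candidate).second) return candidate;
    }
}
```

Callers would invoke this for each conflicting file, ordered from most to least recently modified, so the first caller keeps the original name.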

An operation request proceeds as follows. The request arrives at the File Manager, which asks the Partition Manager for the metadata. If the request is a write, the Partition Manager also applies the write strategy, decides which storage device the write should go to, and returns the answer to the File Manager. For all other operations, it returns only the corresponding metadata structure and the storage device to which the operation is assigned. Finally, the File Manager transmits the operation to the corresponding RD/WR module.

%%%%%%%%%%%%%%%%    Implementation    %%%%%%%%%%%%%%%%

\section*{\fontfamily{phv}\selectfont{\normalsize{\bfseries{IMPLEMENTATION}}}}
\par
NetFS is composed of several C++ modules, as illustrated in Figure \ref{netfs_arch}.

\begin{figure}[!ht]
{\centering \resizebox*{3in}{2.5in}{\includegraphics{pictures/netfs_arch}} \par}
\caption{\fontfamily{ptm}\selectfont{\normalsize{NetFS Implementation Modules}}}
\label{netfs_arch}
\end{figure}

\subsection*{\fontfamily{phv}\selectfont{\normalsize{\bfseries{File Manager implementation}}}}

For the filesystem implementation, we have chosen to use FUSE (the Filesystem in Userspace program).  We have preferred this solution to a loadable kernel module solution for ease of debugging and portability, as FUSE is not Linux-specific.

The File Manager was implemented using the FUSE API.  We have implemented the operations for the superblock and inode.  For the inode structure we used the EXT3 inode template, but dropped some of the fields used for extra options.  This was done with memory-based storage in mind, so that metadata would not consume a large amount of memory.  The File Manager talks to the other modules in a request/response fashion:  each request needs a response.

For the actual execution of filesystem specific operations, the File Manager interfaces the various RD/WR modules.  The message format for the RD/WR module has the following structure:

\begin{itemize}
 \item {\bf tag} - the ID of the storage device;
 \item {\bf offset} - the offset for the operation;
 \item {\bf size} - the size of the next (data) field;
\item {\bf data} - the data to be written (only present for write requests).
\end{itemize}
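This message layout maps naturally onto a plain struct. The field widths below are our assumptions; the paper does not state them:

```cpp
#include <cstdint>
#include <vector>

// RD/WR request message as described above. The integer widths are
// assumptions, not taken from the NetFS sources.
struct RdWrRequest {
    uint32_t tag;               // ID of the storage device
    uint64_t offset;            // offset for the operation
    uint32_t size;              // size of the data field
    std::vector<uint8_t> data;  // payload, only filled for write requests
};
```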

\subsection*{\fontfamily{phv}\selectfont{\normalsize{\bfseries{Partition Manager implementation}}}}

The Partition Manager is implemented as a database for metadata.  On initialization, the database is loaded from a local file; on unmount, the file is updated with the latest version of the database.  The write strategy is set upon initialization by the File Manager.

\subsection*{\fontfamily{phv}\selectfont{\normalsize{\bfseries{RD/WR Module implementation}}}}

This module is implemented as a C++ library.  For each new storage device, the File Manager loads a new RD/WR object.  This object exposes the following methods:
% trebuie spus ca sunt 3 tipuri de astfel de obiecte, in functie de device-ul folosit (mem, disk, etc)
\begin{itemize}
 \item init - Receives 2 parameters:  an ID, which is the device ID, and a string which, depending on the type of module, is the permanence file for the RAM module, the disk support file for the disk module, or an IP address for the network module.
 \item read - Receives 3 parameters:  tag, offset and size.
 \item write - Receives 4 parameters:  tag, offset, size and the data to be written.
 \item flush - Receives 1 parameter:  the tag, which represents the device that needs to be flushed.
 \item mount - The new device becomes a part of NetFS.
 \item unmount - The device is disconnected from NetFS.
\end{itemize}
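The method list above can be expressed as an abstract C++ interface. The exact signatures are our assumptions, and the small memory-backed implementation is only for illustration:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Abstract RD/WR module interface following the method list above.
// Signatures are illustrative; the NetFS sources may differ.
class RdWrModule {
public:
    virtual ~RdWrModule() = default;
    // id: the device ID; spec: the permanence file (RAM module),
    // the disk support file (disk module), or an IP address (network module).
    virtual bool init(uint32_t id, const std::string& spec) = 0;
    virtual int  read(uint32_t tag, uint64_t offset, void* buf, size_t size) = 0;
    virtual int  write(uint32_t tag, uint64_t offset,
                       const void* data, size_t size) = 0;
    virtual void flush(uint32_t tag) = 0;
    virtual bool mount() = 0;    // the device becomes part of NetFS
    virtual void unmount() = 0;  // the device is disconnected from NetFS
};

// Minimal memory-backed implementation, for illustration only.
class MemoryModule : public RdWrModule {
public:
    bool init(uint32_t, const std::string&) override {
        store_.assign(4096, 0);  // fixed-size toy store
        return true;
    }
    int read(uint32_t, uint64_t offset, void* buf, size_t size) override {
        std::memcpy(buf, store_.data() + offset, size);
        return static_cast<int>(size);
    }
    int write(uint32_t, uint64_t offset, const void* data, size_t size) override {
        std::memcpy(store_.data() + offset, data, size);
        return static_cast<int>(size);
    }
    void flush(uint32_t) override {}          // a real module would persist state
    bool mount() override { return true; }
    void unmount() override {}
private:
    std::vector<uint8_t> store_;
};
```

The File Manager would hold one `RdWrModule*` per storage device and dispatch requests through this interface, regardless of whether the backing store is memory, disk, or the network.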

\subsection*{\fontfamily{phv}\selectfont{\normalsize{\bfseries{Socket Engine implementation}}}}

The Socket Engine handles communication between network nodes.  It uses TCP connections that are established on demand, when a remote node first needs to be contacted, and are only terminated upon shutdown.  The data received from the File Manager or from a remote Socket Engine is stored in circular buffers.

After a write command is issued with a certain tag, all other commands for the same tag are delayed until a response for that tag is received.

This module also implements a rudimentary cache:  a circular buffer holds recent write commands and their data.  If a read command arrives that can be satisfied by looking only in this write cache, no network request is made; the reply is copied (or assembled from the data of multiple writes) and the read is finalized.

Upon detecting a lost connection to a remote node, the Socket Engine keeps retrying the requests already in its buffers, for a certain time.  During the retries, further requests with a tag on the node that appears to be down are instantly answered with an error.  If the connection is re-established before the time is up, the pending operations are satisfied; if not, they are discarded and return errors to the issuing module.
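This per-node failure handling can be sketched as a small state object; the names and the plain-integer representation of time are our own simplification:

```cpp
#include <cstdint>

// Sketch of per-node failure handling: while a node is in its retry
// window, new requests fail fast, but pending requests are kept until
// the deadline passes or the connection comes back.
class NodeState {
public:
    void connection_lost(uint64_t now, uint64_t retry_window) {
        down_ = true;
        deadline_ = now + retry_window;
    }
    void connection_restored() { down_ = false; }

    // New requests to a down node are answered with an error at once.
    bool accepts_new_requests() const { return !down_; }

    // Pending requests survive until the retry deadline passes.
    bool keep_pending(uint64_t now) const {
        return !down_ || now < deadline_;
    }

private:
    bool down_ = false;
    uint64_t deadline_ = 0;
};
```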

%%%%%%%%%%%%%%%%    Evaluation    %%%%%%%%%%%%%%%%

\section*{\fontfamily{phv}\selectfont{\normalsize{\bfseries{EVALUATION}}}}

%%%%%%%%%%%%%%%%    Related work    %%%%%%%%%%%%%%%%

\section*{\fontfamily{phv}\selectfont{\normalsize{\bfseries{RELATED WORK}}}}

%%%%%%%%%%%%%%%%    Conclusion \& Further Work   %%%%%%%%%%%%%%%%

\section*{\fontfamily{phv}\selectfont{\normalsize{\bfseries{CONCLUSION \& FURTHER WORK}}}}


%\bibliographystyle{acm}
\begin{thebibliography}{99}

\bibitem{unionfs} Charles P. Wright, \textit{``Versatility and Unix Semantics in a Fan-Out Unification File System''}
\bibitem{os_prj} The OceanStore Project, UC Berkeley, \textit{http://oceanstore.cs.berkeley.edu/}
\bibitem{hFS} Zhihui Zhang, Kanad Ghose, \textit{``hFS: a hybrid file system prototype''}
\bibitem{MRAMFS} Nathan K. Edel, Deepa Tuteja, \textit{``MRAMFS: A compressing file system for non-volatile RAM''}



\end{thebibliography} 
%\include{bibliography}

\end{document}


