% Chapter 2

\chapter{Migrating the Data Layer to the Cloud} % Main chapter title

\label{Chapter2} % For referencing the chapter elsewhere, use \ref{Chapter2} 

\lhead{Chapter 2. \emph{Migrating the Data Layer to the Cloud}} % This is for the header on each page - perhaps a shortened title

%----------------------------------------------------------------------------------------

\section{Introduction}

Cloud computing has become increasingly popular in industry due to the clear advantage of reducing capital expenditure and transforming it into operational costs \citep{Reference24}. To take advantage of Cloud computing, an existing application may be moved to the Cloud (migration of the application to the Cloud) or designed from the beginning to use Cloud technologies (Cloud-native applications). Many applications, of course, are not ready to be moved to the Cloud because the environment is not yet mature enough for them, e.g. safety-critical software \citep{Reference25}. For others, such as embedded systems, migration may not make sense at all. Some software will be implemented specifically for the Cloud, while other systems must be adapted to become suitable for it. 

Applications are typically built using a three-layer architecture model consisting of a Presentation Layer, a Business Logic Layer, and a Data Layer \citep{Reference26}. \textbf{The Presentation Layer} handles the interaction between the application and its users, \textbf{the Business Layer} realizes the business logic, and \textbf{the Data Layer} is responsible for application data storage. The Data Layer is in turn subdivided into the \textbf{Data Access Layer (DAL)} and the \textbf{Database Layer (DBL)}. The DAL encapsulates the data access functionality, while the DBL is responsible for data persistence and data manipulation. Each application layer can be hosted using a different Cloud deployment model. The possible Cloud deployment models were studied in Chapter 1: Private, Public, Community, and Hybrid Cloud.

At this point it is possible to consider migrating the whole application stack or only one architectural layer instead of the entire application. For example, Google App Engine can be used for the Business Layer and Amazon Relational Database Service for the Data Layer. Furthermore, a set of architectural components from one or more layers can also be moved to the Cloud, and different deployment models (Private, Public, Community, and Hybrid Clouds) can be used, resulting in a partial migration of the application. 

In this thesis, we will focus on the migration of the application layers concerned with data storage: the Data Access Layer (DAL) and the Database Layer (DBL). We will also focus on the PaaS and SaaS levels.

%In this chapter, firstly, a survey of different types of migrations to the Cloud will be exposed. 

%----------------------------------------------------------------------------------------
\section{Migration Types}

There are different possibilities for Cloud-enabling an existing application. According to Andrikopoulos \citep{Reference27}, there are four migration types that Cloud-enable an application through adaptation:

\textbf{Type I:} Replace component(s) with Cloud offerings. This is the least invasive type of migration, where one or more (architectural) components are replaced by Cloud services. As a result, data and/or business logic have to be migrated to the Cloud service. A series of configuration, rewiring, and adaptation activities to cope with possible incompatibilities may be triggered as part of this migration. Using the Google App Engine Datastore in place of a local MySQL database is an example of this migration type.  

\textbf{Type II:} Partially migrate some of the application functionality to the Cloud. This type entails migrating one or more application layers, or a set of architectural components from one or more layers implementing a particular functionality to the Cloud. 

\textbf{Type III:} Migrate the whole software stack of the application to the Cloud. This is the classic example of migration to the Cloud, where, for example, the application is encapsulated in VMs and run on the Cloud.

\textbf{Type IV:} A complete migration of the application takes place. The application functionality is implemented as a composition of services running on the Cloud. As in the case of component replacement (Type I migration), ``cloudification'' requires the migration of data and business logic to the Cloud, in addition to any adaptive actions to address possible incompatibilities.

The assumption for each of these types is that in its initial state, the application is hosted on-premises in a non-Cloud environment, for instance, on a local server.


\section{The Data layer}

The Data Layer is responsible for the data storage of an application and is in turn subdivided into the Data Access Layer (DAL) and the Database Layer (DBL). The DAL is an abstraction layer encapsulating the data access functionality. The DBL is responsible for data persistence and data manipulation. The subdivision of the Data Layer leads to a four-layer application architecture. Figure \ref{fig:application_layers} shows the subdivision of the Data Layer into the Data Access Layer and the Database Layer.

\begin{figure}[htbp]
	\centering
		\includegraphics[width=0.8\textwidth]{Figures/application_layers.png}
		\rule{30em}{0.5pt}
	\caption[Subdivision of the Data Layer into Data Access Layer and Database Layer.]{Subdivision of the Data Layer into Data Access Layer and Database Layer.}
	\label{fig:application_layers}
\end{figure} 

The migration of the Data Layer to the Cloud comprises two main steps, to be distinguished for all types of migration: the migration of the DBL to the Cloud, and the adaptation of the DAL to enable Cloud data access. The migration of the Data Layer to the Cloud can lead to challenges such as incompatibilities with the database previously used, or the accidental disclosure of critical data, e.g. by moving the Data Layer to a Public Cloud. On the other hand, moving the Data Layer to the Cloud can be the best option when the database needs to scale and the effort required to solve the migration problems is worthwhile.

However, before deciding for or against the migration of the Data Layer to the Cloud, and how it will be implemented, it is necessary to know the main motivations for moving this layer to the Cloud, to consider the impact factors and issues involved, and to become familiar with some technical concepts that will be studied in the next sections.
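
The adaptation of the DAL stays manageable when data access is hidden behind an interface, so that only the DAL implementation changes while the business logic remains untouched. The following Python sketch illustrates the idea; the class and method names are invented for illustration, and in-memory dictionaries stand in for the real database connections:

```python
from abc import ABC, abstractmethod

class DataAccessLayer(ABC):
    """Abstract DAL: the business logic depends only on this interface."""

    @abstractmethod
    def get_customer(self, customer_id):
        ...

    @abstractmethod
    def save_customer(self, customer):
        ...

class LocalSqlDal(DataAccessLayer):
    """Original on-premises DBL (a dict stands in for a local MySQL connection)."""

    def __init__(self):
        self._rows = {}

    def get_customer(self, customer_id):
        return self._rows[customer_id]

    def save_customer(self, customer):
        self._rows[customer["id"]] = customer

class CloudDatastoreDal(DataAccessLayer):
    """Adapted DBL after a Type I migration (a dict stands in for a Cloud
    datastore client)."""

    def __init__(self):
        self._entities = {}

    def get_customer(self, customer_id):
        return self._entities[customer_id]

    def save_customer(self, customer):
        self._entities[customer["id"]] = customer

def load_profile(dal, customer_id):
    """Business logic: unchanged regardless of where the DBL lives."""
    return dal.get_customer(customer_id)
```

With this structure, migrating the DBL amounts to swapping the DAL implementation passed to the business logic; the call sites do not change.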

\section{Concepts involved in the migration of Data Layer}

In order to gain a clear understanding of the main challenges in migrating the Data Layer to the Cloud and how to perform the migration, the following subsections compare and study some of the key concepts involved in Cloud migration.  

\subsection{Scalability}

In general terms, scalability can be defined as the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth. The scalability of a system to growing demand is crucial to its long-term success \citep{Reference28}. From the computing systems point of view, scalability is the ability of a system to maintain its performance under an increased load of data, user requests, and so on, by adding hardware resources to address the increase. There are two different ways to enable scalability:

\textbf{Vertical scaling or scaling up:} Refers to maximizing the resources of a single unit to expand its ability to handle increasing load. This includes adding processing power and memory to the physical machine running the server. In general, all of these resources are contained within a single chassis or box with more than one CPU, and there is only one instance of the OS covering the processors, memory, and I/O components. Resources are added inside the box by inserting system boards into the system. In vertical systems memory is shared, meaning that all processors and all I/O connections have equal access to all memory, which appears to the user as one large chunk. Although scaling up may be relatively straightforward, this method suffers from several disadvantages; in particular, the cost of expansion increases exponentially \citep{Reference29}.

\begin{figure}[htbp]
	\centering
		\includegraphics[width=0.6\textwidth]{Figures/vertical_scaling.png}
		\rule{30em}{0.5pt}
	\caption[Example of vertical scaling by upgrading the physical machine.]{Example of vertical scaling by upgrading the physical machine.}
	\label{fig:vertical_scaling}
\end{figure}
 
\textbf{Horizontal scaling or scaling out:} Refers to increasing resources by adding more units to the system, i.e. adding more units of smaller capacity instead of a single unit of larger capacity. Requests for resources are then spread across multiple units, thus reducing the excess load on any single machine. Horizontal scalability is provided by network/cluster connectivity between systems. Resources are contained within “nodes”. Each node has its own processor and memory and one OS instance. Resources are increased by adding more nodes, not by adding more resources within a node \citep{Reference29}.

\begin{figure}[htbp] 
	\centering
		\includegraphics[width=0.8\textwidth]{Figures/horizantal_scaling.png}
		\rule{30em}{0.5pt}
	\caption[Example of horizontal scaling by adding more machines.]{Example of horizontal scaling by adding more machines.}
	\label{fig:horizantal_scaling}
\end{figure}

Having multiple units makes it possible to keep the system up even if some units go down, thus avoiding the “single point of failure” problem and increasing the availability of the system. Generally, the total cost incurred by multiple smaller machines is also less than the cost of a single larger unit, so horizontal scaling can be more cost-effective than vertical scaling. 

However, horizontal scaling has disadvantages as well. Increasing the number of units means that more resources need to be invested in their maintenance. Also, the code of the application itself needs to be modified to allow parallelism and distribution of work among the various units. In some cases this is not trivial, and scaling horizontally may be a tough task. 

Of these two strategies for scaling an application, horizontal scalability offers the most flexibility, but is considerably more complex. Horizontal data scaling can be performed along two dimensions. Functional scaling involves grouping data by function and spreading the functional groups across databases. Splitting data within functional areas across multiple databases, or sharding, adds the second dimension. The diagram in Figure \ref{fig:data_scaling} illustrates these horizontal data-scaling strategies.

\begin{figure}[htbp] 
	\centering
		\includegraphics[width=0.5\textwidth]{Figures/data_scaling.png}
		\rule{30em}{0.5pt}
	\caption[Horizontal data-scaling strategies.]{Horizontal data-scaling strategies.}
	\label{fig:data_scaling}
\end{figure}

As Figure \ref{fig:data_scaling} illustrates, both approaches to horizontal scaling can be applied at once. Users, products, and transactions can be in separate databases. Additionally, each functional area can be split across multiple databases for transactional capacity \citep{Reference30}.
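
The two horizontal data-scaling strategies can be sketched in a few lines of Python. In this hypothetical example, one database is kept per functional area (functional scaling) and each area is further split into hash-based shards (sharding); plain dictionaries stand in for real database connections, and all names are invented for illustration:

```python
import hashlib

# Functional scaling: one database per functional area.
databases = {"users": {}, "products": {}, "transactions": {}}

NUM_SHARDS = 4  # shards per functional area

def shard_for(key):
    """Deterministically map a record key to one shard within an area."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def put(area, key, value):
    # Sharding: route the record to the shard its key hashes to.
    databases[area].setdefault(shard_for(key), {})[key] = value

def get(area, key):
    # The same hash routes reads to the shard that holds the record.
    return databases[area][shard_for(key)][key]

put("users", "u42", {"name": "Ana"})
```

Because the routing function is deterministic, reads and writes for the same key always reach the same shard, while different keys spread the load across shards.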

The next table shows a comparison between vertical scaling and horizontal scaling:

\begin{table}[!hbt]
\begin{center}
\begin{tabular}{|p{7cm} |p{7cm} |}
\hline
\textbf{Vertical scaling} & \textbf{Horizontal scaling}\\
\hline
Uses specialty hardware. & Uses commodity hardware.\\
\hline
Bigger and higher performing hardware to scale. & Relies on using more machines instead of more powerful machines.\\
\hline
It is limited by the most powerful hardware available. & Scalability is virtually unlimited.\\ 
\hline
In the long term it is an expensive alternative. & Cheaper and more cost-effective.\\ 
\hline
\end{tabular}
\caption{Vertical scaling vs. Horizontal scaling}
\end{center}
\end{table}

\subsection{ACID properties}

ACID (Atomicity, Consistency, Isolation and Durability) is a set of properties that guarantee that database transactions are processed reliably. As signified by the acronym, ACID transactions provide the following guarantees \citep{Reference30}:

\textbf{Atomicity:} All of the operations in the transaction will complete, or none will. If one part of the transaction fails, the entire transaction fails, and the database state is left unchanged.

\textbf{Consistency:} The database will be in a consistent state when the transaction begins and ends. Any data written to the database must be valid according to all defined rules.
 
\textbf{Isolation:} The transaction will behave as if it is the only operation being performed upon the database.

\textbf{Durability:} Upon completion of the transaction, the operation will not be reversed. In a relational database, for instance, once a group of SQL statements executes, the results need to be stored permanently (even if the database crashes immediately thereafter). To defend against power loss, transactions (or their effects) must be recorded in non-volatile memory.
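
Atomicity in particular can be illustrated with Python's built-in \texttt{sqlite3} module: a money transfer either applies both updates or neither. The account names, amounts, and the \texttt{CHECK} constraint standing in for a business rule are invented for this example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts ("
    "  name TEXT PRIMARY KEY,"
    "  balance INTEGER CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Atomicity: either both updates apply, or neither does."""
    try:
        with conn:  # transaction: commits on success, rolls back on error
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src))
    except sqlite3.IntegrityError:
        pass  # CHECK violated: the whole transaction is rolled back

transfer(conn, "alice", "bob", 30)   # succeeds: alice 70, bob 30
transfer(conn, "alice", "bob", 500)  # would leave alice negative: rolled back
```

In the failing transfer, the credit to \texttt{bob} has already been executed when the debit violates the \texttt{CHECK} constraint; the rollback undoes it, leaving the database state unchanged, exactly as the atomicity guarantee requires.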

\subsection{CAP Theorem}

Eric Brewer, a professor at the University of California, Berkeley, conjectured that it is impossible for a distributed computer system to simultaneously provide all three of the following guarantees \citep{Reference31}:

\textbf{Consistency:} The client perceives that a set of operations has occurred all at once. All nodes, involved in the distributed system, see the same data at the same time.

\textbf{Availability:} Every operation must terminate in an intended response. Availability is a guarantee that every request receives a response about whether it was successful or failed.

\textbf{Partition tolerance:} Operations will complete, even if individual components are unavailable. It means that communication among the servers is not reliable, and the servers may be partitioned into multiple groups that cannot communicate with each other. 

For instance, a Web application can support at most two of these properties with any database design. Since any horizontal scaling strategy is based on data partitioning, designers are forced to decide between consistency and availability \citep{Reference30}.

\begin{figure}[htbp] 
	\centering
		\includegraphics[width=0.5\textwidth]{Figures/cap_theorem2.png}
		\rule{30em}{0.5pt}
	\caption[Different alternatives according to CAP Theorem.]{Different alternatives according to CAP Theorem.}
	\label{fig:cap_theorem}
\end{figure}

The three properties described above are all desirable. However, any real system must give up one of them, as Figure \ref{fig:cap_theorem} shows. Thus, there are three possible combinations.

\subsection{Strong and Eventual Consistency}

Brewer’s CAP theorem dictates that it is impossible to simultaneously achieve availability and ensure that users read the latest written version of a distributed database (consistency) in the presence of partial failure (partitions). As services are increasingly replicated to provide fault tolerance (ensuring that services remain online despite individual server failures) and capacity (to allow systems to scale with variable request rates), architects must face the tradeoffs between consistency and availability. In a dynamic, partitionable Internet, services requiring guaranteed low latency must often relax their expectations of data consistency towards models such as eventual consistency \citep{Reference32}.

On the one hand, the consistency property of transaction systems as defined in the ACID properties (atomicity, consistency, isolation, durability) is a different kind of consistency guarantee. In ACID, consistency relates to the guarantee that when a transaction is finished the database is in a consistent state; for example, when transferring money from one account to another the total amount held in both accounts should not change \citep{Reference33}. On the other hand, eventual consistency can be defined as either a property of the underlying storage system, or a behavior observed by a client application. Thus, there are two ways of looking at consistency. One is from the developer/client point of view: how they observe data updates. The second way is from the server side: how updates flow through the system and what guarantees systems can give with respect to updates.

\subsubsection{Client-Side Consistency}

In order to explain client-side consistency, consider the following components:

\begin{itemize}
\item \textbf{A storage system:} A large-scale, highly distributed storage system built to guarantee durability and availability.
\item \textbf{Process A:} This is a process that writes to and reads from the storage system. 
\item \textbf{Processes B and C:} These two processes are independent of process A and write to and read from the storage system.  
\end{itemize}

Client-side consistency has to do with how and when observers (in this case processes A, B, and C) see updates made to a data object in the storage system. Using the components described above, the following examples illustrate the different types of consistency after process A has made an update to a data object:

\textbf{Strong Consistency:} After the update completes, any access (by A, B, or C) will return the updated value. Figure \ref{fig:strong_consistency} shows that once process A updates the data object X, the other processes (B and C) read the last update.

\begin{figure}[htbp] 
	\centering
		\includegraphics[width=0.8\textwidth]{Figures/strong_consistency.png}
		\rule{30em}{0.5pt}
	\caption[Strong consistency.]{Strong consistency.}
	\label{fig:strong_consistency}
\end{figure}

\textbf{Weak Consistency:} The system does not guarantee that accesses will return the updated value. A number of conditions need to be met before the value will be returned. The period between the update and the moment when it is guaranteed that any observer will always see the updated value is dubbed the \textbf{inconsistency window}. 

\textbf{Eventual Consistency:} This is a specific form of weak consistency. The storage system guarantees that if no new 
updates are made to the object, eventually all accesses will return the last updated value. If no failures occur, the maximum size of the inconsistency window can be determined based on factors such as communication delays, the load on the system, and the number of replicas involved in the replication scheme.

Figure \ref{fig:waek_consistency} shows that once process A updates the data object X, there is an inconsistency window in which reading the last update is not guaranteed. However, after the inconsistency window, all processes will read the same value.

\begin{figure}[htbp] 
	\centering
		\includegraphics[width=0.8\textwidth]{Figures/waek_consistency.png}
		\rule{30em}{0.5pt}
	\caption[Weak/Eventual consistency.]{Weak/Eventual consistency.}
	\label{fig:waek_consistency}
\end{figure}


The eventual consistency model has a number of variations that are important to consider:

\textbf{Causal Consistency:} Causal consistency makes a distinction between events that are potentially causally related and those that are not. Consider a memory example. Suppose that process A writes a variable X. Then process B reads X and writes Y. Here the reading of X and the writing of Y are potentially causally related, because the computation of Y may have depended on the value of X read by B (i.e., the value written by A). On the other hand, if two processes spontaneously and simultaneously write two variables, these are not causally related and are subject to the normal eventual consistency rules. When there is a read followed later by a write, the two events are potentially causally related. Similarly, a read is causally related to the write that provided the data the read got. Operations that are not causally related are said to be concurrent \citep{Reference33}.

\textbf{Read-your-writes consistency:} This is an important model where process A, after it has updated a data item, always accesses the updated value and will never see an older value. This is a special case of the causal consistency model \citep{Reference33}.

\textbf{Session consistency:} This is a practical version of the previous model, where a process accesses the storage system in the context of a session. As long as the session exists, the system guarantees read-your-writes consistency. If the session terminates because of a certain failure scenario, a new session needs to be created and the guarantees do not overlap the sessions.

\textbf{Monotonic read consistency:} If a process has seen a particular value for the object, any subsequent accesses will never return any previous values \citep{Reference33}. 

\textbf{Monotonic write consistency:} In this case the system guarantees to serialize the writes by the same process. Systems that do not guarantee this level of consistency are notoriously hard to program \citep{Reference33}. 

From a practical point of view, these two properties (monotonic reads and read-your-writes) are the most desirable in an eventually consistent system, but not always required. They make it simpler for developers to build applications, while allowing the storage system to relax consistency and provide high availability.
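
The inconsistency window can be simulated with a toy replicated store in Python: a write is acknowledged after reaching one replica and only later propagated to the others, so a read served by a stale replica may miss the update. The class and method names here are invented for the sketch:

```python
import itertools

class EventuallyConsistentStore:
    """Toy replicated store: a write is acknowledged after reaching one
    replica; propagation to the other replicas happens later."""

    def __init__(self, n_replicas=3):
        self.replicas = [dict() for _ in range(n_replicas)]
        self._next = itertools.cycle(range(n_replicas))

    def write(self, key, value):
        self.replicas[0][key] = value  # acknowledged after one replica

    def read(self, key):
        # A read may be served by any replica (round-robin here).
        return self.replicas[next(self._next)].get(key)

    def propagate(self):
        """Anti-entropy step: closes the inconsistency window."""
        for replica in self.replicas[1:]:
            replica.update(self.replicas[0])

store = EventuallyConsistentStore()
store.write("x", 1)
# During the inconsistency window, reads may return 1 or None
# depending on which replica serves them.
store.propagate()
# After propagation, every replica returns the last written value.
```

Before \texttt{propagate()} runs, two consecutive reads can disagree; afterwards, every read returns the last written value, matching the eventual consistency guarantee described above.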

\subsubsection{Server-Side Consistency}

First, it is necessary to establish some definitions in order to understand what drives the different modes that a developer using the system can experience. 

\begin{itemize}
\item \textbf{N:} The number of nodes that store replicas of the data.
\item \textbf{W:} The number of replicas that need to acknowledge the receipt of the update before the update completes. 
\item \textbf{R:} The number of replicas that are contacted when a data object is accessed through a read operation. 
\end{itemize}

The differences between strong and eventual consistency can be analyzed through the following situations:

\textbf{Strong consistency:} If W+R $>$ N, strong consistency is guaranteed because the write set and the read set always overlap. For example, if N=2, W=2, and R=1, no matter from which replica the client reads, it will always get a consistent answer, since all nodes will have the same updates. In contrast, if N=2, W=1, and R=1, then W+R=N and consistency cannot be guaranteed, because the read could be served by a node that does not yet contain the last updates. The problem with configurations requiring a large W is that when the system cannot write to W nodes because of failures, the write operation has to fail, marking the unavailability of the system. With N=3, W=3, and only two nodes available, the system will have to fail the write. 
 
\textbf{Weak/Eventual consistency:} If W+R $\leq$ N, there is a possibility that the read set and the write set will not overlap, and the system is vulnerable to reading from nodes that have not yet received the updates. 

In distributed-storage systems that need to provide high performance and high availability, the number of replicas is in general high. Systems that need to serve very high read loads often replicate their data over hundreds of nodes, with R configured to 1 such that a single read will return a result. Systems that are concerned with consistency are set to W=N for updates, which may decrease the probability of the write succeeding. 

How to configure N, W, and R depends on what the common case is and which performance path needs to be optimized. With R=1 and N=W we optimize for the read case, and with W=1 and R=N we optimize for a very fast write \citep{Reference33}. 
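
The W+R quorum rule above can be expressed directly in code. The following Python sketch (the function name is invented for illustration) classifies an N/W/R configuration as strong or eventual and reproduces the examples from the text:

```python
def consistency_mode(n, w, r):
    """Classify a replication configuration by the W + R quorum rule."""
    if not (1 <= w <= n and 1 <= r <= n):
        raise ValueError("W and R must lie between 1 and N")
    # If W + R > N, every read quorum overlaps every write quorum,
    # so at least one contacted replica holds the latest update.
    return "strong" if w + r > n else "eventual"

# Read and write quorums always overlap: strong consistency.
assert consistency_mode(n=2, w=2, r=1) == "strong"
# W + R = N: a read may be served only by stale replicas.
assert consistency_mode(n=2, w=1, r=1) == "eventual"
# Read-optimized (R=1, W=N) and write-optimized (W=1, R=N) cases
# are both strong, but trade off write vs. read availability.
assert consistency_mode(n=3, w=3, r=1) == "strong"
assert consistency_mode(n=3, w=1, r=3) == "strong"
```
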

\subsection{NoSQL databases}

The origin of the NoSQL term is attributed to Johan Oskarsson, who used it in 2009 to name a conference about “open-source, distributed, non-relational databases”. Today, the term is used as an acronym for “Not only SQL”, which emphasizes that SQL-style querying is not the crucial objective of these data stores. Therefore, the term is used as an umbrella classification that includes a large number of immensely diverse data stores that are not based on the relational model, including some solutions designed for very specific applications such as graph storage. Even though there is no agreement on what exactly constitutes a NoSQL solution, the following set of characteristics is often attributed to them \citep{Reference35}:

\textbf{Simple and flexible non-relational data models.} NoSQL data stores offer flexible schemas or are sometimes completely schema-free and are designed to handle a wide variety of data structures. Current solution data models can be divided into four categories: \textbf{key-value stores} (e.g. Redis \citep{Reference36}), \textbf{document stores} (e.g. MongoDB \citep{Reference37}), \textbf{column-family stores} (e.g. the Cassandra database \citep{Reference38}), and \textbf{graph databases} (e.g. Neo4j \citep{Reference39}).
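
The flexibility of a schema-free data model can be illustrated with a small Python sketch in which a plain dictionary stands in for a document collection: two records in the same collection carry different sets of fields, something a fixed relational schema would only allow through NULL columns or schema changes. All names and values are invented for the example:

```python
# A plain dictionary stands in for a schema-free document collection.
users = {}

def insert(doc_id, document):
    users[doc_id] = document  # no schema is enforced on the document

# Two documents in the same collection with different structures.
insert("u1", {"name": "Ana", "email": "ana@example.org"})
insert("u2", {"name": "Ben", "tags": ["admin"], "last_login": "2014-05-01"})
```
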

\textbf{Ability to scale horizontally over many commodity servers.} Some data stores provide data scaling, while others are more concerned with read and/or write scaling.

\textbf{Provide high availability.} Many NoSQL data stores are aimed at highly distributed scenarios and consider partition tolerance unavoidable. Therefore, in order to provide high availability, these solutions choose to compromise consistency in favour of availability, resulting in AP (Available/Partition-tolerant) data stores, while most RDBMSs are CA (Consistent/Available).

\textbf{Typically, they do not support ACID transactions as provided by RDBMSs.} NoSQL data stores are sometimes referred to as BASE systems (Basically Available, Soft state, Eventually consistent). In this acronym, Basically Available means that the data store is available whenever it is accessed, even if parts of it are unavailable; Soft state highlights that it does not always need to be consistent and can tolerate inconsistency for a certain time period; and Eventually consistent emphasizes that after a certain time period, the data store comes to a consistent state. However, some NoSQL data stores, such as CouchDB \citep{Reference40} (a document store database), provide ACID compliance.

These characteristics make NoSQL data stores especially suitable for use as cloud data management systems. Indeed, many of the Database as a Service offerings available today, such as Amazon’s SimpleDB and DynamoDB, are considered to be NoSQL data stores. However, the lack of full ACID transaction support can be a major impediment to their adoption in many mission-critical systems. Furthermore, the use of low-level query languages, the lack of standardized interfaces, and the huge investments already made in SQL by enterprises are other barriers to the adoption of NoSQL data stores.

The following table summarizes the main differences between RDBMSs and NoSQL databases.

\begin{table}[!hbt]
\begin{center}
\begin{tabular}{|p{7cm} |p{7cm} |}
\hline
\textbf{RDBMS} & \textbf{NoSQL databases}\\
\hline
RDBMSs have a centrally managed architecture. & They follow a distributed architecture. \\
\hline
They are statically provisioned. & They are dynamically provisioned.\\
\hline
It is difficult to scale them. & They are easily scalable.\\ 
\hline
Static schema; can only handle structured data. & Flexible schema or schema-free; designed to handle a wide variety of data structures.\\ 
\hline
They provide SQL to query data. & They use APIs to query data (not as feature-rich as SQL). \\ 
\hline
ACID (Atomicity, Consistency, Isolation and Durability) compliant; the DBMS maintains consistency. & Follow BASE (Basically Available, Soft state, Eventually consistent); user accesses are guaranteed only at the single-key level.\\
\hline
\end{tabular}
\caption{Comparison of RDBMS and NoSQL databases}
\end{center}
\end{table} 

%\section{Available Cloud database systems}
%
%\textbf{Google Cloud SQL:} It is a MySQL database that lives in cloud of Google PaaS. It has all the capabilities and functionality of MySQL, with a few additional features and a few unsupported features which are detailed in the Cloud SQL documentation \citep{Reference43}. Google Cloud SQL is ideal for small to medium-sized applications. This database is currently available for Google App Engine applications that are written in Java, Python, PHP, and Go. It is also possible to access Google Cloud SQL using MySQL Client, and other administration and reporting tools that work with MySQL databases. For instance it is possible connect the database with external application that not running on Google App Engine through using the standard MySQL protocol. However there is a limitation based in the fact a instance must be located within the same region as the application that uses it. If the application is running on servers located in the US, the Google Cloud SQL instance must also be running on US servers. \citep{Reference44}. It is important to consider the data for a Google Cloud SQL instance is stored in zones in the United States, European Union, or Asia depending on the location that is selected when the instance is created. 
%
%Other important feature is that Google allows to users choose between synchronous or asynchronous replication for its data. In synchronous replication, updates are copied to multiple zones before returning to the client. This is great for reliability and availability in the event of major incidents, but makes writes slower. Asynchronous replication results in faster writes to the database because you do not have to wait for replication to finish. However, you might lose your latest updates in the unlikely event of a data center failure within a few seconds of updating the database. \citep{Reference43}.    
%
%\textbf{Google Cloud Datastore:} It is a NoSQL key-value and schemaless object data store for storing non-relational data. Despite not being a relational database, it supports atomic transactions. The Join operations are not supported. Unlike traditional relational databases, the Datastore uses a distributed architecture to automatically manage scaling to very large data sets. The Datastore holds data objects known as entities. An entity has one or more properties, named values of one of several supported data types. Datastore provides two approaches for accessing to the data: a JDO or JPA-based approach that encourages developers to build persistent classes and leave the details of persistence to the library, or a low-level API designed to provide access to the raw details of the storage layer. Each instance of Google Cloud Datastore is replicated across multiple data centers and it scales as the traffic increase. 
%
%One of the most important thing to consider is the API of Cloud Datastore offers two different sets of queries that allows the developers make a balance between Strong and Eventual consistency and to combine the benefits of both schemes. Using Ancestor queries, for instance, guarantee Strong consistency view for reading. On the other hand, read queries using No-Ancestor queries, like Global queries, will return an eventual consistency view of the data.
%
%Other important feature is the possibility to access an existing App Engine Datastore from an external application running on a different platform.
%
%\textbf{Amazon RDS:} It is a relational database service as a service (RDaaS) running over the Amazon infrastructure. Amazon RDS currently supports the MySQL, PostgreSQL, Oracle, and Microsoft SQL Server database engines. Each database engine has its own supported features, and each version of a database engine may include specific features. It means that a current relational database can be automatically migrated without facing incompatibilities. 
%
%The basic building block of Amazon RDS is the DB instance. It can contain multiple user-created databases, and uses access it by using the same tools and applications  as stand-alone database instance. 
%
%An important feature is that users can run a DB instance on a virtual private cloud using Amazon's Virtual Private Cloud (VPC) service. Users can select yours own IP address range, create subnets, and configure routing and access control lists and without additional cost. 
%
%Amazon RDS offers high availability with a primary instance and a synchronous secondary instance to use as backup when problems occur. But only in the case of MySQL, is possible associate more than one reading replicas to increase read scaling. In this case, Amazon RDS offers two types of replication: Synchronous replication running the DB Instance as a Multi-AZ deployment. It protects the latest update against unplanned outages. On the other hand, using Read Replicas, updates are applied to the other replicas after they occur on the source DB Instance (asynchronous replication) \citep{Reference45}.
%
%\textbf{Amazon DynamoDB:} DynamoDB is a NoSQL database service. Its data model is based in tables, items and attributes. DynamoDB tables do not have fixed schemas, and each item may have a different number of attributes. Multiple data types add richness to the data model. DynamoDB is mainly designed for provide high and fast scalability and predictable performance. There is no limit to the amount of data that is possible to store and the service automatically allocates more storage. When the users are creating a table, they specify how much request capacity is require. DynamoDB allocates resources to meet the performance requirements, and automatically partitions data over a number of servers to meet the request capacity. DynamoDB does not support complex relational queries (e.g. joins) or complex transactions.
%
%Amazon DynamoDB stores three geographically distributed replicas of each table to enable high availability and data durability. When reading data from Amazon DynamoDB, users can specify whether they want the read to be eventually consistent (by default) or strongly consistent for each read request within the application. For instance, GetItem operation returns a set of attributes for an item that matches the primary key but provides an eventually consistent read by default. If eventually consistent reads are not acceptable for the application, users must use the ConsistentRead parameter within the query. If the user uses eventually consistent reads will get twice the throughput in terms of reads per second \citep{Reference46}. 
%
%Furthermore, it is important consider that DynamoDB does not provide an official ORM persistence service, like JPA API, but there are some community contribution tools for different languages that provide an abstraction level in order to simplify the storing and retrieve data.
%
%\textbf{Amazon SimpleDB:} SimpleDB is other Amazon NoSQL database solution. Unlike DynamoDB, SimpleDB is suitable for smaller datasets. While in DynamoDB there are no limits on the request capacity or storage size for a given table, a table in Amazon SimpleDB has a strict storage limitation of 10 GB and is limited in the request capacity it can achieve (typically under 25 writes/second). Thus SimpleDB has scaling limitations. However, SimpleDB automatically indexes all item attributes and thus supports greater query functionality at the cost of performance and scale \citep{Reference47}. SimpleDB does not support ACID transactions and operations like Joins.
%
%As DynamoDB, SimpleDB stores multiple geographically distributed replicas of each domain to enable high availability and data durability and supports two read consistency options: eventually consistent reads and consistent reads and using combination of both \citep{Reference47}. Since a consistent read can potentially incur higher latency and lower read throughput it is best to use it only when an application scenario mandates that a read operation absolutely needs to read all writes that received a successful response prior to that read. For all other scenarios the default eventually consistent read will yield the best performance.
%
%\textbf{Table Store:} It is a NoSQL key-value database with similar features to DynamoDB and has a schemaless design, meaning that two entities in the same table can contain different collections of properties, and those properties can be of different types. However, while in Amazon DynamoDB the users can control how scalable the system should be by provisioning the desired throughput, in Tabla Store the throughput is controlled by the system. Other different is that DynamoDB does not have limitation in much data is stored in for each table. Table Store also does not impose any hard limits for the data per table, however the users are constrained by the size of storage account up to 200 TB. 
%
%Unlike the Google App Engine and Amazon Web Services NoSQL databases, which offer the possibility of Strong and Eventual consistency reads options, Table Store only offers Strong consistency.

%\textbf{Cloudant:} It is a NoSQL JSON document database offered as DBaaS. Unlike the previous database systems, Cloudant can run over different Cloud provider infrastructures, like Amazon and Windows Azure, which are selected when the user is creating an account. It allows users to move its data	layer	from one hosting provider or location to another at any time. Cloudant implements a multi-master replication architecture. This means that all replicas of the database can be read from—and written to (all replicas are masters). 

%Other important feature is different management of data consistency. Unlike DynamoDB or Cloud Datastore, Cloudant uses the Quorum-based	clustering that enables a tune-able Eventual Consistency. It means that users can specify the number of copies of data that they want to store (for high availability), and how many of them must be written to disk or how many must match in order to consider that data safely written or consistent. The quorum	values can be	changed	in	order	to tune performance and	consistency	in a partitioned	environment. By default, Cloudant optimizes for availability \citep{Reference48}. 

%\textbt{MongoDB:} 

   
\section{Motivations to migrate the Data Layer to the Cloud}

The migration of the total or partial application stack to the Cloud means that the organization is driven by some of the following motivations:

\textbf{Scalability:} Hosting the database in a cloud infrastructure means that its capacity can easily be scaled out or up by simply adding new hardware and paying for what is used.

\textbf{Availability:} The cloud infrastructure is based on server hardware that usually runs in different data centers distributed around the world, leveraging fault-tolerant components and systems management; if components fail, they can be exchanged quickly and easily.

\textbf{Accessibility:} Data hosted in the cloud can be accessed from almost any device and from almost anywhere in the world.

\textbf{Disaster recovery:} Companies no longer need complex disaster recovery plans to prevent the loss of information and equipment. 

%\textbf{Performance:} Due to auto-scaling capacity, performance can be controlled, and adapted as required by only adding the hardware that is needed.

\textbf{Reduced cost:} Beyond improving the underlying technology capacity and availability, there are also compelling business reasons. Managing scalability in an on-premise datacenter can be more expensive than renting hardware in the cloud.

\section{Questions involved in the migration of the Data Layer}

The set of benefits that can be obtained when the system data storage is relocated in the cloud is well known, and there are several motivations that push the migration of this layer. However, despite the benefits that Cloud computing can offer, there are several critical issues, like data security and privacy, data consistency, and performance, that need to be considered and solved, mainly when the application Data Layer is involved in a total or partial migration of an application to the Cloud. A typical case is when an application intends to use the benefits of the Cloud in order to obtain the high availability and scalability needed to address an increased data load or number of user requests. In this case, the application has to deal with a replicated environment and give up the traditional strong consistency scheme on its databases (according to the constraints of the CAP Theorem). This work focuses on solving the problems related to issues beyond security and privacy; it supports the decision to migrate the Data Layer to the Cloud and explains how to perform the migration in order to leverage the main benefits of this new environment.

The following questions try to find answers to issues present in the migration beyond confidentiality issues like security and privacy, which represent an undisputed problem when making the decision to relocate the database in the Cloud:

\textbf{\textit{1.	What kinds of applications can benefit from migrating the Data Layer to the Cloud? How can these applications be identified? Is the migration process actually necessary?}}%The current application load data and the required level of availability are critical enough to appeal the benefits of the Cloud? What are the problems that a current on-premise application cannot cope?}
 
\textbf{\textit{2.	Beyond security and privacy issues, what are the other critical issues involved in the migration of the Data Layer that need to be solved or minimized? What are the challenges to be faced in adapting the Data Layer to the new environment?}}

\textit{\textbf{3.	What patterns or reusable solutions can be used to address the different issues and challenges?}} 

%\textit{3.	How provide scalability to attend increased data load? What type of scalability is suitable? What kinds of database represent the best options to achieve the scalability? How much complex would be the replacement of the original database to another with different paradigm (SQL/NoSQL)?}

\textit{\textbf{4.	How much would the use of a scalable database impact the level of consistency? How can the different trade-offs be managed when eventual consistency replaces traditional strong consistency? How can the application be adapted to benefit from the new form of consistency?}}

%\textit{5.	How to identify the technologies that can be used to allocate the database in the Cloud? }

%\textbf{\textit{5.	How to estimate the cost-benefit relationship associated with migration of the Data Layer?}}
  
The following sections discuss in detail the challenges in addressing these questions.

\section{Suitable applications for migrating its Data Layer}

Moving one or all application layers to the Cloud is not simply a matter of lifting and shifting to a different platform. Instead, each application must be evaluated to determine how suitable it is for the Cloud operation environment. This evaluation process is often called a \textbf{Cloud Suitability Assessment}. Assessing applications and workloads for cloud readiness allows organizations to determine which applications and data can – and cannot – be readily moved to a cloud environment, and which delivery models (public, private, or hybrid) are appropriate.

Microsoft \citep{Reference53} identifies the following types of applications to be considered for migration to the Cloud:

\begin{enumerate}
\item Highly-scalable Web sites.
\item Enterprise applications.
\item Business intelligence and data warehouse applications.
\item Social or customer-oriented applications.
\item Social (online) games.
\item Mobile applications.
\item High performance or parallel computing applications.
\end{enumerate}

Furthermore, Cunningham \citep{Reference52} specifies the following four general scenarios in which an application is suitable for migration to the Cloud:

\begin{enumerate}
\item The application is used only in predefined periods.
\item The rapid increase in the need for resources cannot be compensated by buying new hardware.
\item The application load can be anticipated, e.g., for seasonal businesses, allowing to optimize resource utilization.
\item In case of unanticipated load, the load increases without prior indication.
\end{enumerate}

Armbrust \citep{Reference24} identifies the following types of applications as drivers for Cloud computing:

\begin{enumerate}
\item Mobile interactive applications.
\item Parallel batch processing.
\item Business analytics as a special case of batch processing.
\item Extension of computationally-intensive desktop applications.
\end{enumerate}

On the other hand, Spinola \citep{Reference55} identifies a set of bad candidates for the Cloud:

\begin{enumerate}
\item Applications that involve extremely sensitive data, particularly where there is a regulatory or legal risk involved in any disclosure. These will at minimum require special treatment if they are to be run in a cloud service.

\item Applications that require access to very intensive data workloads (for example, loading the database onto the cloud may be costly).

\item Any performance-sensitive application (i.e., one that is very likely to create performance problems if it is to run on a public cloud).
\end{enumerate}



In the case of moving the Data Layer, one of the most important things to consider is the level of data sensitivity. Applications that deal primarily with non-restricted data are more appropriate candidates for deployment in the Public Cloud. Applications that process private or proprietary data, on the other hand, are highly sensitive and therefore more compatible with a secure, Private Cloud operation. Although data sensitivity is an undeniably critical point that can determine whether or not an application should be moved to the cloud, there are certain other points that can tell us whether an application is suitable for the Cloud environment:

\textbf{Level of scalability:} Applications with significant variations in computing resource usage, such as storage capacity and processing power, are often primary candidates for migrating their Data Layer to a Cloud environment. This is due to the fact that Cloud computing allows users to acquire resources on demand as conditions warrant, rather than acquiring and maintaining a steady state of resources capable of supporting the periodic maximum load. At this point it is important to consider that working with a large database in the Cloud may be costly: moving large amounts of data into the cloud can be time-consuming and expensive in terms of bandwidth, so if this is a requirement, then you must decide whether it is worthwhile \citep{Reference42}.

\textbf{Elasticity:} Elasticity is one of the most important advantages of Cloud computing. With a cloud database optimized to automatically take advantage of additional resources when needed, companies can ride out dramatic fluctuations in processing demand without a drop in quality of service or an emergency for the database administrator. Applications with these fluctuations are typical candidates for allocating their database in the Cloud.

\textbf{Level of accepted consistency:} As seen in the previous sections, horizontal scalability and high availability are some of the cloud-oriented benefits. However, horizontal scalability results in a replicated environment, and according to the CAP Theorem the level of database consistency is then in dispute. Any application that wants to obtain high availability and horizontal scalability should first consider the possibility of living in an environment of eventual consistency. Applications like social networks and collaborative apps, in which data consistency is not a critical issue, are the best candidates to benefit from the high availability and scalability available in the Cloud.    

\textbf{On-premise cost vs. Cloud cost:} While there will be operational costs to run the Data Layer in the Cloud, like subscription costs, they must be weighed against the support costs of the current operation. Existing operating costs to consider in the evaluation include acquisition and maintenance of the operating environment and operating infrastructure, personnel costs associated with developing and maintaining the operating environment, training employees to operate and support it, monitoring costs, and maintenance costs such as licensing and hardware/firmware upgrades. The Cloud Suitability Assessment of Amazon Web Services (AWS) \citep{Reference41} shows a list of IT infrastructure items that represent the total cost of operating in a local environment.

\begin{table}
\begin{center}
\begin{tabular}{|p{7cm}|p{3cm}|p{3cm}|}
\hline
\textbf{IT infrastructure} & \textbf{On-premise} & \textbf{Cloud provider}\\
\hline
\hline
Server Hardware & - & -\\
\hline
Network Hardware & - & -\\
\hline
Hardware Maintenance & - & -\\
\hline
Software OS & - & -\\
\hline
Power and Cooling & - & -\\
\hline
Administration & - & -\\
\hline
Storage & - & -\\
\hline
Bandwidth & - & -\\
\hline
Resource Management Software & - & -\\
\hline
24x7 Support & - & -\\
\hline
\hline
\textbf{Total}& - & -\\
\hline
\end{tabular}
\caption{On-premise vs. Cloud cost calculation}
\end{center}
\end{table}

The table above shows how it is possible to compare the direct costs of the IT infrastructure, but it ignores the many indirect economic benefits of cloud computing, including high availability, reliability, scalability, flexibility, reduced time-to-market, and many other cloud-oriented benefits.
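To make the comparison of the table concrete, the following sketch fills the two cost columns with purely hypothetical monthly figures (none of them are real provider prices) and totals each side, as a direct-cost comparison of this kind would be carried out:

```python
# Sketch of a direct-cost comparison between on-premise and Cloud hosting,
# following the cost categories of the table above. All figures are
# hypothetical placeholders, not real provider prices.

ON_PREMISE = {
    "server_hardware": 900, "network_hardware": 200, "hardware_maintenance": 150,
    "software_os": 100, "power_and_cooling": 250, "administration": 1200,
    "storage": 300, "bandwidth": 180, "management_software": 120, "support_24x7": 400,
}

CLOUD = {
    # Most on-premise categories are bundled into the provider subscription fee.
    "subscription": 1600, "bandwidth": 220, "support_24x7": 300,
}

def monthly_total(costs):
    """Sum the monthly direct costs of one hosting option."""
    return sum(costs.values())

def cheaper_option(on_premise, cloud):
    """Return which option has the lower direct monthly cost."""
    return "cloud" if monthly_total(cloud) < monthly_total(on_premise) else "on-premise"
```

As the text notes, such a calculation captures only direct costs; the indirect benefits (availability, elasticity, time-to-market) fall outside the totals.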

%\section {Present challenges in the migration of the data layer}
\section {Challenges and issues present in the migration of the Data Layer}

At this point we consider that the decision to migrate the Data Layer to a public cloud has already been taken. Security and confidentiality concerns with respect to data migration are considered among the main issues impeding the further adoption of Cloud computing in industry and research \citep{Reference27}. However, this work focuses primarily on solving the challenges that are present in the process of adapting the Data Access layer and moving the Database layer to the new environment. The challenges and issues that need to be studied and addressed during the migration process are listed below: %These challenges are all related to each other and the main propose to address with they is enabling a transparent access for the Business Layer:

\textbf{Choosing the suitable data hosting:} The first step in the migration of the Data Layer is the decision for a specific data hosting solution in the Cloud. There may be several issues to consider before determining the most suitable Cloud data hosting. 

\textbf{Enable transparent access of the Business layer to the data:} In the process of adapting the Data Layer, one of the most important things is to provide transparent access of the Business layer to the data. This requires the implementation of additional functionality in the Data Access layer, because the Business Layer does not know to which type of Cloud data store or data service a request is forwarded, and the Data Access layer has to deal with the transformation of requests.

In order to provide transparent access of the Business layer to the data, depending on the cloud data hosting that has been chosen, some of the following challenges may have to be addressed:  

\begin{itemize}
\item\textbf{Incompatibilities and missing features:} Incompatibilities in the Database layer refer to inconsistencies between the functionality of the traditional database used before migration and the characteristics of an equivalent database hosted in the Cloud. For instance, Google App Engine Datastore (a NoSQL database) is incompatible with relational databases (like MySQL) because the Google Query Language supports only a subset of the functionality offered by SQL; for instance, joins are not supported. Another missing feature could be the lack of support for ACID transactions. Thus, an application making use of such functionality cannot have its Database Layer moved to the Cloud without an impact on its architecture \citep{Reference27}. 
\item\textbf{Incompatibilities of database schema:} These refer to incompatibilities with respect to the semantics of the database schema and/or the database name (for instance, when comparing Oracle with Microsoft SQL Server), or incompatibilities between data types that are not supported by the target data store of the migration (for instance, mapping BOOLEAN to BIT or CHAR) \citep{Reference27}.
\item\textbf{Possibility to work with two or more kinds of databases:} It is possible that in the Cloud environment more than one sort of database may be used to replace the traditional database and fulfill the application requirements. 
\item\textbf{Difference in granularity between traditional and Cloud data store APIs:} The adoption of a SQL Cloud solution allows interaction on a fine-granular level, e.g., by using SQL after migrating the traditionally hosted database to an Amazon EC2 instance. However, other solutions like Amazon SimpleDB provide a service interface to interact with the Cloud data store. The data store becomes a data service, which in turn requires interaction on the level of the service interface, which is more coarse-grained compared to the interaction when using SQL, for instance \citep{Reference27}. This has an impact on the DAL and implies a challenge in how to interact with a data store encapsulated behind a service API.     
\end{itemize}

\textbf{Enable loose coupling between the Business Layer and Data Layer:} Another important aspect of providing transparent access of the Business layer to the data is the loose coupling between the Data Layer and the Business layer. This means that the Cloud data stores or data services in use can be changed without affecting the Business Layer. Loose coupling means that application components make few assumptions about each other regarding the format of exchanged data or the communication channels used.
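The loose coupling described above can be sketched with a repository abstraction: the Business Layer depends only on an interface, so the backing store can be swapped without touching business code. The class and method names below are illustrative only, not part of any provider API:

```python
# Sketch of loose coupling between the Business Layer and the Data Access
# Layer: business code programs against an abstract repository, so the
# backing store (relational, NoSQL, or a Cloud data service) is replaceable.

from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    """Interface the Business Layer programs against."""
    @abstractmethod
    def get(self, customer_id): ...
    @abstractmethod
    def save(self, customer_id, record): ...

class InMemorySqlLikeRepository(CustomerRepository):
    """Stand-in for the traditional relational Data Access Layer."""
    def __init__(self):
        self._rows = {}
    def get(self, customer_id):
        return self._rows.get(customer_id)
    def save(self, customer_id, record):
        self._rows[customer_id] = record

class CloudServiceRepository(CustomerRepository):
    """Stand-in for a coarse-grained Cloud data service API."""
    def __init__(self, service_client):
        self._client = service_client  # e.g. a provider SDK client (hypothetical)
    def get(self, customer_id):
        return self._client.get_item(key=customer_id)
    def save(self, customer_id, record):
        self._client.put_item(key=customer_id, value=record)

def business_logic(repo: CustomerRepository, customer_id):
    """Business Layer code: unaware of which store backs the repository."""
    record = repo.get(customer_id) or {"visits": 0}
    record["visits"] += 1
    repo.save(customer_id, record)
    return record["visits"]
```

Because `business_logic` only sees the interface, migrating from the in-memory stand-in to a Cloud data service requires no change to the Business Layer.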

\textbf{Ensure scalability and elasticity:} Scalability and elasticity are two of the main benefits provided by Cloud computing. However, beyond the high scalability and flexibility features offered by the cloud provider, the components of an on-premise application must be adapted in order to exploit these cloud benefits.

\textbf{Ensure the level of data consistency required by the application:} One of the most important things to consider is how to deal with the trade-off between eventual consistency, high availability, and latency. If a strong consistency model is used in the non-cloud version of the application but a good degree of availability must be provided, then it is possible that the application must be adapted to handle a weaker consistency model.
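The trade-off above can be made concrete with a minimal simulation of a replicated store: a write lands on the primary first and reaches replicas only after an explicit propagation step, so an eventually consistent read may return stale data. This is an illustrative sketch, not the semantics of any particular provider:

```python
# Minimal simulation of strong vs. eventual reads in a replicated store.
# Writes go to the primary; replicas lag until propagate() runs, so an
# eventually consistent read may observe stale (here: missing) data.

class ReplicatedStore:
    def __init__(self, replicas=3):
        self.primary = {}
        self.replicas = [dict() for _ in range(replicas)]

    def write(self, key, value):
        self.primary[key] = value  # replicas are not updated yet

    def propagate(self):
        """Asynchronous replication step, modeled as an explicit call."""
        for replica in self.replicas:
            replica.update(self.primary)

    def read(self, key, consistent=False):
        if consistent:
            return self.primary.get(key)   # strong: always latest, higher cost
        return self.replicas[0].get(key)   # eventual: may be stale
```

An application adapted to eventual consistency must tolerate the stale read between `write` and `propagate`; one that cannot must request the (costlier) consistent read.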

\section{Addressing the issues and challenges}

\subsection{Choosing the suitable data hosting}

%http://www.stormdb.com/content/7-questions-to-consider-when-choosing-cloud-database
%http://www.tomsitpro.com/articles/cloud_database-cassandra-sql_azure-relational_databases-amazon_rds,2-345.html
%http://www.itbusinessedge.com/slideshows/how-to-choose-the-right-cloud-database-seven-considerations-09.html


Considering the functional and non-functional requirements, it is possible to know what kinds of Cloud databases will be the most suitable for our application. %Taking into account the features of the different Cloud databases hosting studied in previous sections it is possible to select what could be the most suitable ones. 
 
According to AWS, there may be several dimensions that influence the choice of the appropriate Cloud data hosting. In order to select the correct data hosting, it is necessary to make a careful trade-off among the following dimensions: \textbf{\textit{cost, durability, query-ability, availability, latency, performance (response time), relational support (SQL joins), size of stored objects (large, small), accessibility, read-heavy vs. write-heavy, update frequency, cache-ability, consistency (strict, eventual), and transience (short-lived)}}. By carefully weighing these dimensions, it is possible to decide which data hosting solutions are most suitable.

%Strauch and Kopp [] provide a taxonomy for Cloud data hosting solutions, in which are considered six distinguishing properties: Application Layer, Deployment Model, Location, Service Model, Data Store Type and Compatibility. However this taxonomy does not consider other properties like kind of consistency (strict or eventual) or full ACID support. For instance,
One technique that can be applied in order to select the appropriate database in the Cloud is mapping the most important database features to the different data hosting solutions. The following table shows a variety of features and the different cloud database solutions that may support them:
 
\begin{table}[!ht]
    \begin{tabular}{|p{4cm}|p{10cm}|}
		\hline
		\textbf{Feature} & \textbf{Cloud Database Solutions}\\ \hline 
    \hline
    Full ACID transactions. & Google Cloud SQL, Amazon RDS, MySQL\\ \hline 
		Full SQL support. & Google Cloud SQL, Amazon RDS, MySQL\\ \hline
		High scalability. & Google Cloud Datastore, Amazon SimpleDB, Amazon DynamoDB, Azure Table Storage, Cassandra, MongoDB\\ \hline
		High availability. & Google Cloud Datastore, Amazon SimpleDB, Amazon DynamoDB, Cassandra\\ \hline
		Strong consistency. & Google Cloud SQL, Amazon RDS, MySQL, Azure Table Storage, MongoDB\\ \hline
		Eventual consistency. & Google Cloud Datastore, Amazon SimpleDB, DynamoDB, Cassandra\\ \hline
		Handling big data. & Google Cloud Datastore, Amazon DynamoDB, Azure Table Storage, Cassandra\\ \hline
    \end{tabular}
\caption{Mapping of database features to Cloud database solutions}
\end{table}

The table above shows a number of database solutions that could be selected to replace the on-premise database. Cassandra \citep{Reference37}, MongoDB \citep{Reference38}, and MySQL differ from the other databases because they are not native cloud database solutions. Instead, these databases can be provided as services by different cloud providers. For instance, MongoDB is offered by a number of cloud providers such as MongoLab \citep{Reference57}, Codename: BlueMix \citep{Reference56}, Rackspace Cloud Services \citep{Reference17}, etc. Amazon RDS \citep{Reference45} and Google Cloud SQL \citep{Reference44} support most of the features of MySQL, with some unsupported functionalities \citep{Reference43}. There are also several Cloud providers offering MySQL-based services \citep{Reference58}, such as Codename: BlueMix. Cassandra, as a NoSQL database, is designed to handle large amounts of data across many commodity servers \citep{Reference37} and is available as a service from a number of cloud providers such as Instaclustr \citep{Reference59}.
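The feature-mapping technique of the table can be sketched mechanically: encode the table as sets and intersect the candidate sets of every required feature to obtain a shortlist. The matrix below mirrors the table above and is illustrative rather than exhaustive:

```python
# Sketch of the feature-mapping selection technique: intersect, per required
# feature, the set of Cloud databases supporting it. Mirrors the table above.

FEATURE_MATRIX = {
    "full_acid": {"Google Cloud SQL", "Amazon RDS", "MySQL"},
    "full_sql": {"Google Cloud SQL", "Amazon RDS", "MySQL"},
    "high_scalability": {"Google Cloud Datastore", "Amazon SimpleDB",
                         "Amazon DynamoDB", "Azure Table Storage",
                         "Cassandra", "MongoDB"},
    "high_availability": {"Google Cloud Datastore", "Amazon SimpleDB",
                          "Amazon DynamoDB", "Cassandra"},
    "strong_consistency": {"Google Cloud SQL", "Amazon RDS", "MySQL",
                           "Azure Table Storage", "MongoDB"},
    "eventual_consistency": {"Google Cloud Datastore", "Amazon SimpleDB",
                             "Amazon DynamoDB", "Cassandra"},
    "big_data": {"Google Cloud Datastore", "Amazon DynamoDB",
                 "Azure Table Storage", "Cassandra"},
}

def candidate_databases(required_features):
    """Return the databases supporting every required feature."""
    candidates = None
    for feature in required_features:
        supported = FEATURE_MATRIX[feature]
        candidates = supported if candidates is None else candidates & supported
    return candidates or set()
```

An empty result (e.g. requiring both full ACID transactions and big-data handling) signals the trade-off discussed above: no single solution in the table covers both, so requirements must be relaxed or the Data Layer split.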

%At the time of choosing a Cloud data hosting may be some questions to consider that can help to select the suitable solution:
When the Cloud data hosting must be selected, it may be useful to take the following questions into account:

\begin{enumerate}
	\item \textbf{What is the API or query language offered by the Cloud database?} - Although Cloud NoSQL databases do not support traditional SQL, some of them provide a SQL-like language for querying, such as CQL (Cassandra Query Language), which has many similarities to SQL but some fundamental differences; for instance, it does not support operations such as JOINs \citep{Reference60}. Other databases such as Google Cloud Datastore also support a SQL-like query language, but without support for JOINs or for filtering data based on the results of a subquery \citep{Reference61}. Despite the efforts of some NoSQL databases to offer support for SQL, no NoSQL database offers full support for it. Instead, NoSQL solutions provide a service API to query data.
	
	\item \textbf{Can the Cloud database manage the application workloads?} - At this point it is important to consider the frequency of the read and write operations performed by the application. These profiles are often called \textbf{read-heavy} and \textbf{write-heavy}. If the application is read-heavy, any database offering replication will be suitable, as reads can be distributed across the replicas. Sharding, on the other hand, refers to dividing the dataset up over multiple database servers. If the application is write-heavy, the suitable database must provide sharding, as writes can then be distributed over multiple servers. Databases such as MongoDB and Cassandra are designed to offer both replication and sharding \citep{Reference62}.
	
	\item \textbf{Can the Cloud database scale out?} - If the main goal of the migration of the Data Layer is to exploit the scalability and elasticity advantages of the Cloud, then the suitable database should be a NoSQL one, like Google Cloud Datastore, Amazon DynamoDB, Cassandra, or MongoDB, which offer support for read-heavy and write-heavy workloads and are specially designed to scale out. However, SQL solutions can also scale out through master-slave replication, as MySQL does \citep{Reference62}. For instance, using Amazon RDS, which offers support for MySQL databases, it is possible to associate more than one read replica to increase read scaling \citep{Reference45}. This might be fine for read scalability, but it does not solve the write scalability problem. 
	
	\item \textbf{What is the level of consistency offered by the Cloud database?} - The level of data consistency is another important consideration. Any SQL database is a suitable solution if the application requires strong consistency. However, most of the NoSQL solutions have different configuration parameters to ensure varying degrees of consistency. For instance, SimpleDB, DynamoDB, and Google Cloud Datastore provide different sets of queries that allow developers to balance Strong and Eventual consistency and to combine the benefits of both schemes. Nevertheless, latency could be a big problem when using strong consistency in a highly replicated environment. According to Amazon, DynamoDB users who opt for eventually consistent reads will get twice the throughput in terms of reads per second \citep{Reference46}.
\end{enumerate}
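The sharding mechanism referred to in question 2 for write-heavy workloads can be sketched with simple hash-based partitioning: each key is deterministically mapped to one of several servers, so writes spread across shards instead of hitting a single instance. This is illustrative only; real systems such as Cassandra use richer schemes like consistent hashing:

```python
# Sketch of hash-based sharding for write-heavy scaling: keys are hashed
# to one of several shard dictionaries standing in for database servers.

import hashlib

class ShardedStore:
    def __init__(self, num_shards=4):
        self.shards = [dict() for _ in range(num_shards)]

    def _shard_for(self, key):
        # Deterministic hash so the same key always routes to the same shard.
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.shards[int(digest, 16) % len(self.shards)]

    def put(self, key, value):
        self._shard_for(key)[key] = value

    def get(self, key):
        return self._shard_for(key).get(key)
```

A drawback of this naive modulo scheme is that changing `num_shards` remaps almost every key, which is precisely the problem consistent hashing addresses.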

In summary, the following steps may be followed in order to choose the right Cloud data hosting:

\begin{enumerate}
	\item Evaluate the size of the on-premise database: determine what hardware is required, and how much storage and how many instances will be needed after migration.
	\item Evaluate the database workloads: is the application write-heavy, read-heavy, or both?
	\item Ensure that the selected database provides the scalability required.
	\item Evaluate the level of consistency required by the application.
	\item If the decision is to opt for the same type of database, then consider whether there are incompatibilities between the query languages or whether the cloud database does not support all the features of the on-premise database.
	
\end{enumerate}
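Steps 2 and 3 above can be sketched as a small decision helper that classifies the workload from measured operation rates and maps it to the scaling mechanism discussed earlier. The threshold ratio is a hypothetical choice, not a standard value:

```python
# Helper for steps 2-3 of the checklist: classify the workload and suggest
# a scaling mechanism. The heavy_ratio threshold is hypothetical.

def classify_workload(reads_per_sec, writes_per_sec, heavy_ratio=3.0):
    """Label a workload read-heavy, write-heavy, or mixed."""
    if writes_per_sec == 0 or reads_per_sec / writes_per_sec >= heavy_ratio:
        return "read-heavy"
    if reads_per_sec == 0 or writes_per_sec / reads_per_sec >= heavy_ratio:
        return "write-heavy"
    return "mixed"

def scaling_strategy(workload):
    """Map the workload class to the scaling mechanism discussed above."""
    return {
        "read-heavy": "replication (distribute reads across replicas)",
        "write-heavy": "sharding (distribute writes across servers)",
        "mixed": "replication plus sharding (e.g. MongoDB, Cassandra)",
    }[workload]
```

For example, an application measured at 900 reads/s against 100 writes/s would be classified read-heavy, pointing toward a replication-based solution.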
  

\subsection{Providing transparent data access to the Cloud Database layer}

Providing transparent data access to the Database layer is one of the most important steps in the process of adapting the Data Access layer. 

Depending on the cloud data hosting that has been chosen, there may be some challenges that must be addressed. For instance, problems such as incompatibilities between the on-premise database and the new Cloud storage system, or missing features, stem from the fact that a database system of a different paradigm has been chosen. However, various patterns exist in Computer Science that provide reusable solutions for facing recurring challenges. Thus, it is possible to use and combine these patterns in order to address the different challenges.   

\subsubsection{Dealing with incompatibilities and missing features}

The challenges of migrating the Data Layer to the Cloud, such as adapting the Data Access layer and providing missing functionality, have been identified by various research projects in collaboration with industry partners, and also through literature research focusing on available reports from companies that have already migrated their application Database layer to the Cloud. Strauch and Andrikopoulos \citep{Reference50} provide different functional and non-functional patterns that are useful to address these challenges:

\textbf{Data Store Functionality Extension:} This pattern adds missing functionality to a Cloud data store; for instance, the Cloud data store might not support data joins. To avoid forcing the Business layer to implement this functionality, a component implements the required functionality as an extension of the data store, either by offering additional functionality or by adapting one or more of the existing functionalities offered by the data store. The extension component is placed within the Cloud infrastructure of the Cloud data storage. A low distance (in terms of network performance) ensures low latency between the extension and the data store.

The additional or extended functionality code has to be wrapped into an application, which can be hosted in the Cloud. This application accesses the Cloud provider's data store via the API supplied by the provider. The code in the Data Access layer has to be adjusted accordingly: each data access call using the required functionality has to be replaced by a call to the component implementing the corresponding data store functionality extension.

The main benefit of applying this pattern is that no adjustment or modification of the Business Layer is required.
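As a minimal illustration of such an extension component, the following sketch emulates a join that the Cloud data store is assumed not to support natively. The store is modelled as plain in-memory maps, and all names (\texttt{JoinExtension}, the order and customer records) are hypothetical; a real extension would call the Cloud provider's API instead.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * Illustrative extension component adding a join, which the underlying
 * Cloud data store is assumed not to support natively. The store is
 * modelled as in-memory maps; in practice the component would call the
 * provider's API.
 */
public class JoinExtension {

    /** Joins order records with customer records on the customer id. */
    public static List<String> joinOrdersWithCustomers(
            Map<String, String> customersById,       // customerId -> name
            Map<String, String> orderToCustomerId) { // orderId -> customerId
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, String> order : orderToCustomerId.entrySet()) {
            String customerName = customersById.get(order.getValue());
            if (customerName != null) {              // inner-join semantics
                result.add(order.getKey() + " -> " + customerName);
            }
        }
        return result;
    }
}
```

The Data Access layer would then call \texttt{joinOrdersWithCustomers} instead of issuing a (unsupported) join query to the data store.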

\begin{figure}[htbp] 
	\centering
		\includegraphics[width=0.5\textwidth]{Figures/Data_Store_Functionality_Extension.png}
		\rule{30em}{0.5pt}
	\caption[Example of Data Store Functionality Extension and Emulator of Stored Procedures patterns.]{Example of Data Store Functionality Extension and Emulator of Stored Procedures patterns.}
	\label{fig:data_store_functionality_ext_pattern}
\end{figure}

\textbf{Emulator of Stored Procedures:} This pattern is a special case of the Data Store Functionality Extension pattern, where an extension component is built outside the data store, containing a set of predefined groups of commands to be executed by the data store. Such a set of commands is called a stored procedure. Typical uses for stored procedures include data validation or access control mechanisms. However, a Cloud data store does not inherently support stored procedures as most traditional data stores do.

Figure \ref{fig:data_store_functionality_ext_pattern} shows an example of the application of both the Data Store Functionality Extension and the Emulator of Stored Procedures patterns.

On the other hand, beyond the missing features, there might be incompatibilities with respect to the semantics of the database schema, as well as data types that are not supported by the target data store of the migration. These incompatibilities between source and target data store can be overcome by converting between them in the Data Access layer in order to achieve transparency, or through an application converter following the patterns described above. For instance, the Oracle database does not support the Boolean or Bit data types. Thus, while migrating to Oracle, these data types have to be converted to either a single-digit numeric or a single-character data type \citep{Reference51}.
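As a small illustration of such a conversion in the Data Access layer, the following sketch maps Boolean values to a single-character representation on write and back on read. The class and method names are hypothetical.

```java
/**
 * Illustrative Data Access layer converter for a target store that,
 * like Oracle, has no native Boolean type. Booleans are mapped to a
 * single-character column ('Y'/'N') on write and back on read.
 */
public class BooleanTypeConverter {

    /** Conversion applied before writing to the target data store. */
    public static String toStoredChar(boolean value) {
        return value ? "Y" : "N";
    }

    /** Conversion applied after reading from the target data store. */
    public static boolean fromStoredChar(String stored) {
        if ("Y".equals(stored)) return true;
        if ("N".equals(stored)) return false;
        throw new IllegalArgumentException("Unexpected stored value: " + stored);
    }
}
```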

\subsubsection{Working with two or more kinds of databases}

Christoph Fehling, in his book ``Cloud Computing Patterns'' \citep{Reference52}, describes a very useful pattern called ``Data Access Component'', which isolates the complexity of the Data Access layer while enabling different functionalities, such as additional data consistency and the possibility to work with different storage offerings.

The solution provided by the Data Access Component pattern ensures that access to different data sources is integrated by a \textbf{data access component}. This component coordinates data manipulation if different storage offerings are used. In case a storage offering is replaced, or a cloud provider changes the interface of a storage offering, the data access component is the only component that has to be adjusted, thus ensuring loose coupling between the rest of the application and the used cloud offerings.

\begin{figure}[htbp] 
	\centering
		\includegraphics[width=0.5\textwidth]{Figures/DataAccessComponent.png}
		\rule{30em}{0.5pt}
	\caption[Example of deployment of Data Access Component pattern.]{Example of deployment of Data Access Component pattern.}
	\label{fig:DataAccessComponent}
\end{figure}

The Data Access Component pattern hides the complexity of accessing different storage offerings (different interfaces, interaction protocols, authentication methods, etc.) behind a single data access component used by the other application components. This ensures that the cloud providers among which data storage is distributed can be hidden from the other application components, giving them a unified data access behavior. If a cloud provider that stores data is exchanged for a different one, the application components to be adjusted can be easily identified.
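A minimal sketch of a data access component is shown below. The two storage offerings are modelled as in-memory maps, and all names are hypothetical; the point is that both offerings are accessed only through this one component, so exchanging a provider affects only this class.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative Data Access Component: the single place where two
 * storage offerings are accessed. The "offerings" are stand-in
 * in-memory maps; swapping a provider only changes this class.
 */
public class DataAccessComponent {

    // Stand-ins for a relational offering and a key-value offering.
    private final Map<String, String> relationalStore = new HashMap<>();
    private final Map<String, String> keyValueStore = new HashMap<>();

    /** Strongly consistent business data goes to the relational offering. */
    public void storeCustomer(String id, String record) {
        relationalStore.put(id, record);
    }

    /** Non-critical data (e.g. logging) goes to the key-value offering. */
    public void storeLogEntry(String id, String entry) {
        keyValueStore.put(id, entry);
    }

    public String readCustomer(String id) {
        return relationalStore.get(id);
    }

    public String readLogEntry(String id) {
        return keyValueStore.get(id);
    }
}
```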

\subsubsection{Enabling a finer-grained access API for the Business Layer}

When the database is moved to the Cloud, it is possible to identify two different types of interaction between the Cloud database and the rest of the application. On the one hand, there is direct interaction with a database hosted on the Cloud as a data store, which takes place on a fine granular level, e.g. by using SQL commands. The second interaction type relies on a service interface to interact with the Cloud database, as in the case of e.g. MongoDB, which becomes a data service. This requires interaction on the level of the service interface, which is more coarse grained compared to the interaction when using SQL, for instance.

In order to provide a better level of granularity, the data structure supported by the Data Access Component needs to be adjusted. First, the data elements and their structure have to be extensible, to support additional data elements and to extend existing data elements with additional data fields. Second, configured or new data elements have to be queryable using generic functionality. The extensibility of data elements is realized by a certain data structure, where each data element is associated with a list of arbitrary data elements. This list may either be filled directly with data values or may be used as a pointer to other data elements that shall be associated with the extended data element. For example, if an application handles the children of a school, and the result of a test not commonly made by schools must be stored with the data elements representing children, one of the data fields may be used for it. If the test should instead be modeled as a different data element containing more information, for example, when a child took it, the test result can also be modeled as a separate data element referenced in a field.

To increase comprehensibility, the interfaces of the Data Access Component must provide specialized, application-specific functions. These functions can, for example, be used to specifically query the children data elements in the above example, which significantly eases interaction with the interface. However, if the data elements provided by the Data Access Component are extended, new data fields and new data elements cannot be respected by the specialized functions defined for an application. Therefore, the Data Access Component should also provide generic functions to access arbitrary data elements. These generic functions should at least allow creating, reading, updating, and deleting data elements; they are therefore called CRUD functions. Using these functions, data elements may be accessed via a unique identifier, which is passed to the operations as a parameter. Arbitrary data elements provided by the data access component can, therefore, be queried and manipulated using the generic functions if no specialized functions exist for this purpose.
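The extensible data elements and the combination of generic CRUD functions with a specialized function can be sketched as follows. All names are hypothetical and an in-memory map stands in for the storage offering:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative extensible data element and generic CRUD functions.
 * Each element carries a map of additional fields, so new data (e.g.
 * an uncommon test result for a child) can be attached without
 * changing the schema.
 */
public class GenericDataAccess {

    /** A data element: unique id plus arbitrary extension fields. */
    public static class DataElement {
        public final String id;
        public final Map<String, String> fields = new HashMap<>();
        public DataElement(String id) { this.id = id; }
    }

    private final Map<String, DataElement> store = new HashMap<>();

    // Generic CRUD functions usable for arbitrary data elements.
    public void create(DataElement e)  { store.put(e.id, e); }
    public DataElement read(String id) { return store.get(id); }
    public void update(DataElement e)  { store.put(e.id, e); }
    public void delete(String id)      { store.remove(id); }

    /** Specialized, application-specific function (school example). */
    public String readTestResult(String childId) {
        DataElement child = read(childId);
        return child == null ? null : child.fields.get("testResult");
    }
}
```

New kinds of data elements remain reachable through the generic CRUD functions even when no specialized function has been defined for them yet.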

\begin{figure} 
	\centering
		\includegraphics[width=0.99\textwidth]{Figures/DataACInterfaceAndStracture.png}
		\rule{0em}{0.5pt}
	\caption[Data access component interface and data structure.]{Data access component interface and data structure.}
	\label{fig:DataACInterfaceAndStracture}
\end{figure}

By combining specialized and generic functions in this way, a wider query interface is obtained.


\subsection{Enabling loose coupling between the Business Layer and Data Layer}

In traditional applications built without using any Cloud technology there is in general a tight coupling of the Business Layer with the Database Layer via the Data Access Layer, which implies that the Business Layer is aware of the location of the data and the data store it is interacting with. Thus, a loose coupling between the Data Layer and Business Layer must be ensured.
%Thus, in the process of enable a transparent access to the Business layer, a loose coupling between the Data Layer and Business Layer must be ensured.

According to Fehling \citep{Reference52}, to ensure loose coupling between components, these must respect the following degrees of autonomy:

\begin{enumerate}
	\item \textbf{Platform autonomy:} Communication partners may be implemented in different programming languages and are executed by different execution environments.
	\item \textbf{Reference autonomy:} They must be unaware of the concrete address of each other and also of the number of communication partners with which they interact.
	\item \textbf{Time autonomy:} Communication partners can exchange information even if one of them is temporarily unavailable.
	\item \textbf{Format autonomy:} When data is sent over a remote connection, it has to be serialized into an exchange format by the sender and de-serialized by the receiver.
\end{enumerate}

In order to enable loose coupling between the Business Layer and the Database Layer, it is possible to implement some patterns described by Fehling in his book ``Cloud Computing Patterns'' \citep{Reference52}. Fehling describes the pattern called \textbf{Three-Tier Cloud Application}, in which the application stack is separated into three tiers: presentation logic, business logic, and data handling. The interesting part is that the three tiers maintain a loosely coupled relationship, and each tier can scale independently of the others. In the architecture proposed by the Three-Tier Cloud Application pattern, the Data Layer (or data handling tier) is accessed by the Business Layer through a \textbf{message queue}, provided by a \textbf{message-oriented middleware} implementation, to ensure loose coupling between these two layers. The Data Layer is comprised of storage offerings accessed by an application component implementing the Data Access Component pattern. The Data Access Component interacts with the used storage offerings that are obtained from the cloud provider and provides data using message queues provided by the message-oriented middleware.
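The queue-based interaction between the two tiers can be sketched as follows. A \texttt{BlockingQueue} stands in for the message-oriented middleware, and the serialized message shown is only indicative; a real deployment would use a message broker and a standardized format such as JSON.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Minimal sketch of queue-based decoupling: the Business Layer puts
 * serialized request messages on a queue, and the Data Access
 * Component consumes them. A BlockingQueue stands in for the
 * message-oriented middleware.
 */
public class QueueDecouplingSketch {

    /** Queue standing in for the message-oriented middleware. */
    private final BlockingQueue<String> requestQueue = new ArrayBlockingQueue<>(10);

    /** Business Layer side: enqueue a serialized request and continue. */
    public void sendRequest(String serializedMessage) throws InterruptedException {
        requestQueue.put(serializedMessage);
    }

    /** Data Layer side: retrieve the next message when an instance is free. */
    public String receiveRequest() throws InterruptedException {
        return requestQueue.take();
    }
}
```

Because the queue stores messages until they are retrieved, neither side needs to know the other's address, number of instances, or availability.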

\begin{figure} 
	\centering
		\includegraphics[width=0.99\textwidth]{Figures/loose_coupling.png}
		\rule{0em}{0.5pt}
	\caption[Loose coupling architecture between the Business Layer and Data Layer.]{Loose coupling architecture between the Business Layer and Data Layer.}
	\label{fig:loose_coupling}
\end{figure}

The message-oriented middleware ensures asynchronous, message-based communication between the Data Layer and the Business Layer by using message queues, which can store messages until they are retrieved by the receiver.

In the communication between any two layers or components, for example, when a sender puts a message on a message queue and the receiver retrieves it from the same queue, in between these two access points the message-oriented middleware handles the complexity of addressing, availability of communication partners, and message format. To ensure loosely coupled communication, a standardized serialization format, such as the Extensible Markup Language (XML) or the JavaScript Object Notation (JSON), should therefore be used to transfer data, relying on intermediary functionality that only has to be configured rather than implemented individually.

Additionally, the Three-Tier Cloud Application pattern suggests the use of the \textbf{elastic queue} component. If the workload experienced by the Data Access Component has reached a certain limit, the elastic queue provisions a new instance of the Data Access Component. An elastic queue monitors queues provided by a message-oriented middleware. The number of required component instances is determined from the number and type of messages contained in the monitored queue, utilization information of the scaled application component (e.g. the Data Access Component), and environmental information about the elastic infrastructure or elastic platform.
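The scaling rule of an elastic queue can be sketched as follows; the number of messages handled per instance is a hypothetical configuration value, and a real elastic queue would also consider utilization and environmental information as described above.

```java
/**
 * Illustrative scaling rule of an elastic queue: the number of Data
 * Access Component instances is derived from the depth of the
 * monitored queue.
 */
public class ElasticQueueScaler {

    /** Messages one component instance is assumed to handle at a time. */
    private static final int MESSAGES_PER_INSTANCE = 100;

    /** At least one instance; more as the backlog grows. */
    public static int requiredInstances(int queuedMessages) {
        int instances = (queuedMessages + MESSAGES_PER_INSTANCE - 1)
                / MESSAGES_PER_INSTANCE;
        return Math.max(1, instances);
    }
}
```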

On the other hand, an \textbf{enterprise service bus (ESB)} can be used as an intermediary to ensure loose coupling in a Service-Oriented Architecture (SOA). \textbf{Web Services} are one way to realize the application components of a distributed application, such as the Data Layer. An ESB acts as a broker between service consumers and service providers, enabling the above-mentioned separation of concerns.

\subsection{Enabling scalability and elasticity}

Scalability and elasticity are two of the main benefits provided by Cloud computing. However, even though the cloud provider offers high scalability and flexibility features, the components of an on-premise application must be adapted in order to exploit these cloud benefits.

In a horizontal scaling approach, the number of independent IT resources, such as servers, is increased if an application requires more processing power, storage, etc. The Data Layer in this case, therefore, has to be designed to run on multiple independent resources. In addition to being horizontally scalable, the cloud Data Layer also needs to be elastic. Elasticity specifically focuses on the dynamic addition and removal of IT resources to adjust performance quickly if the workload changes. This ability is essential to exploit the pay-per-use cloud property.

Scalability and elasticity are related features; however, in order to enable both of them, two architectural concepts must be implemented in the adaptation of the Data Layer. On the one hand, in order to enable scalability, the Data Layer has to be a \textbf{loosely coupled component}. As shown in the previous section, loose coupling can be ensured by applying the Loose Coupling pattern by Fehling \citep{Reference52}. Besides being necessary between the Data Layer and the Business Layer, loose coupling between these components also ensures that they may be instantiated multiple times in order to scale them out. However, some of these components may need to maintain an \textbf{internal state}. This state may, for example, reflect the list of items that a user of a Web shop has added to his shopping basket. As every request of a client could possibly be handled by a different server (a different instance), a server may be unaware of previous interactions with a client, hindering it from producing correct results. Thus, all instances need to share a common internal state in order to deal with this kind of conflict. In order to share the internal state, it is replicated among all component instances. A component that maintains an internal state among its different instances is called a \textbf{stateful component}. On the other hand, the most significant factor complicating the addition and removal of component instances is precisely this internal state: it hinders elasticity, and in case of failure the internal state may be lost. Therefore, it is necessary to remove the internal state and seek other alternatives to ensure elasticity.

\begin{figure} 
	\centering
		\includegraphics[width=0.8\textwidth]{Figures/stateless_data_layer.png}
		\rule{0em}{0.5pt}
	\caption[Stateless Data Layer.]{Stateless Data Layer.}
	\label{fig:stateless_data_layer}
\end{figure}

In order to ensure the elasticity of the Data Layer, Fehling \citep{Reference52} provides the \textbf{Stateless Component pattern}. The solution provided by this pattern is to implement components in such a fashion that they do not have an internal state. Instead, their state and configuration is stored externally in storage offerings or provided to the component with each request (for instance, by sending the client session). An identifier (ID) may be associated with requests to retrieve the required information from the external storage. Figure \ref{fig:stateless_data_layer} shows how the Data Layer might be implemented as a stateless component.

The Stateless Component pattern adopts the best practices of Web applications, which maintain the \textbf{session state} on the client side and send it with every request of a client to the Web application. This interaction style allows every request to be handled by an arbitrary server (an arbitrary instance of the component).

Since the stateless component instances do not have an internal state, no data is lost if an individual instance fails. This statelessness significantly increases the capability of the Data Layer to scale out, because multiple instances of the Data Layer can share a common external state and, thus, act as if they all had the same internal state.

The following Java code shows an example implementation of the Stateless Component pattern. In the code, a small \textbf{Shopping Cart web service} is implemented as a \textbf{stateless session bean} with a method called \textbf{addToCart}. When this stateless web service runs on a server, different instances of the web service can be available to serve requests from different clients, because the service does not maintain any state related to a particular client. Instead, when a client adds an item to the cart, it sends its session (which contains the current state of its shopping cart) with the request. Thus, any instance of the web service can use this session to serve the client request.


\begin{lstlisting}

// Server-Side.

import java.util.ArrayList;
import java.util.List;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import javax.xml.ws.WebServiceContext;
import javax.xml.ws.WebServiceException;
import javax.xml.ws.handler.MessageContext;

@Stateless
@WebService(serviceName="ShoppingCart")
public class ShoppingCart {
   
   @Resource    
   private WebServiceContext wsContext;  
   
   @WebMethod(operationName="addToCart")
   public int addToCart(Item item) {
      // Get the HTTP session sent along with the client request.
      MessageContext mc = wsContext.getMessageContext();   
      HttpSession session = ((HttpServletRequest)
            mc.get(MessageContext.SERVLET_REQUEST)).getSession(); 

      if (session == null)
         throw new WebServiceException("No HTTP Session found");
      
      // Get the shopping cart of the client.
      List<Item> cart = (List<Item>) session.getAttribute("ClientCart");
      
      if (cart == null)
         cart = new ArrayList<Item>();

      cart.add(item);
      
      // Save the updated cart in the session (since the same
      // "ClientCart" name is used, the old cart object is replaced).
      session.setAttribute("ClientCart", cart);

      // Return the number of items in the cart.
      return cart.size();
   }
}
\end{lstlisting}
 
\subsection{Ensuring the level of data consistency required}

One step in the process of adapting the Data Access layer is to ensure that the level of consistency required by the application is achieved. Thus, if the data store offering only enables eventual consistency, but the application requires a certain level of data consistency, the Data Access layer must ensure the provision of a stronger consistency to the Business layer. On the other hand, there are many applications for which eventual consistency is fine. Moreover, in many cases an application may require different levels of data consistency in order to obtain better performance. In the following, some use cases are presented in which both strong and eventual consistency are required.

\textbf{Web Shop:} A typical Web shop, running on an on-premise server, stores different kinds of data and needs to migrate its Data Layer to the Cloud. The system manages customer profiles and credit card information, data about the products sold, product inventory data, and records on the preferences of users (e.g., “users who bought this item also bought this other thing”) as well as logging information. The customers' credit card information and the price and stock of the items must be handled carefully; this information should be accessible under strong consistency guarantees. On the other hand, preferences and logging information could even be lost without any serious damage (e.g., if the system crashes and cached data was not made persistent); this information can be accessed under eventual consistency.

\textbf{Collaborative Editing:} Collaborative editing allows people to work simultaneously on the same document or source base. The main functionality of such a system is to detect conflicts during editing and to track the history of changes. This kind of system can only work with strong consistency. If the system detects a conflict, the user is usually required to resolve it. Only after resolving the conflict is the user able to submit the change, as long as no other conflict was generated in the meantime \citep{Reference64}.

Existing patterns and techniques can be applied in order to ensure a stronger consistency level if the data store offering only enables eventual consistency.

\textbf{A - Using the Data Access Component pattern:}

The Data Access Component pattern described in the previous section can also enable \textbf{client-centric consistency} in addition to the consistency assured by the provider. The data access component may be used to provide a different consistency behavior. If the data access component can access stored data in a transactional context, it can ensure strict consistency of the integrated data. In the case of an eventually consistent data store, the data access component can enable client-side consistency levels. In order to assure client-side consistency, the data handling uses \textbf{versions of data elements} and \textbf{histories of operations} executed by clients.

The consistency levels that can be realized by the Data Access Component are:

\begin{itemize}
\item\textbf{Monotonic Reads:} One client will never read data that is older than what it has read before.
\item\textbf{Read-your-writes:} One client will immediately see data alterations performed by it.
\item\textbf{Monotonic Writes:} Write operations of one client are executed in the order they were issued.
\end{itemize}

In the case of Monotonic Reads and Read-your-writes, the consistency levels can be realized by data access components using \textbf{version identifiers} associated with each data element. Upon every write of a data element, this version identifier is increased. The data access component then knows the last version accessed by a client and can drop any results of read and write operations that are too old.

In the case of Monotonic Writes, client-side consistency can be ensured by storing the \textbf{unique identifiers} of the client's operations in an operation history. If a data access component retrieves an operation that should be executed, but the data to be updated does not reflect all previously executed operations in the history, the operation must wait.
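The version-identifier check for Monotonic Reads can be sketched as follows (class and method names are hypothetical): the component records the highest version identifier delivered to each client and drops replica results that are older.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative Monotonic Reads check: the Data Access Component tracks
 * the last version identifier each client has seen and discards
 * replica results that are older.
 */
public class MonotonicReadGuard {

    // clientId -> highest version identifier already delivered.
    private final Map<String, Long> lastSeenVersion = new HashMap<>();

    /**
     * Returns true if a read result with the given version may be
     * delivered to the client (and records the version); returns false
     * if the result is older than what the client has already read.
     */
    public boolean deliver(String clientId, long version) {
        long lastSeen = lastSeenVersion.getOrDefault(clientId, -1L);
        if (version < lastSeen) {
            return false; // stale replica result: drop it
        }
        lastSeenVersion.put(clientId, version);
        return true;
    }
}
```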

\begin{figure}[htbp] 
	\centering
		\includegraphics[width=0.6\textwidth]{Figures/consistency_balancer.png}
		\rule{30em}{0.5pt}
	\caption[Enabling Client-centric consistency with the Data Access Component.]{Clients can decide on every read for consistent data or the native eventual consistency.}
	\label{fig:consistency_balancer}
\end{figure}

%In the case that the Data Access Component must scale (there are several instances of the Data Access Component), the all instances must be stateful components. Stateful component means the all components instances should contain the same internal state. Thus, the instances maintain the identifiers (version identifiers or unique identifiers) internally. If an identifiers must be increased, the Data Access Component instances do so in an ACID transaction. All intances need to have consistent information about the version last seen by a client or knowledge about the operation history of a client.

In case the Data Access Component must scale (there are several instances of the Data Access Component), all instances must be stateless components and store the version identifiers and operation identifiers in a strictly consistent storage offering that is accessed by all data access component instances. If an identifier must be increased, the Data Access Component instances do so in an ACID transaction. All instances need to have consistent information about the version last seen by a client, or knowledge about the operation history of a client.

Using client-centric consistency, clients can decide on every read whether they would like to retrieve consistent data (the version identifier and operation identifier are accessed) or whether eventually consistent data is sufficient (only the eventually consistent storage offering is accessed), as shown in Figure \ref{fig:consistency_balancer}. Today, most Cloud data stores that provide eventual consistency by default also provide methods that allow obtaining a strongly consistent view of the data store.

\textbf{B - Using the Data Abstractor pattern:}

The Data Access Component covers some approaches to ensure a stronger consistency when it is needed. Ensuring data consistency on the application level can, however, void the benefits introduced by eventually consistent storage offerings regarding performance and availability.

A data abstractor reads eventually consistent data and provides it in an abstracted, approximated, or summarized form to users and other application components. Additional consistency checks during read and write operations are not required as the abstraction reduces the impact of inconsistent data.

\begin{figure} 
	\centering
		\includegraphics[width=0.2\textwidth]{Figures/data_abstractor_example.png}
		\rule{0em}{0.5pt}
	\caption[Example of data abstraction.]{Example of data abstraction.}
	\label{fig:data_abstractor_example}
\end{figure}

This pattern is not applicable in all cases. Figure \ref{fig:data_abstractor_example} depicts an example which may be suitable for a data abstractor.

Figure \ref{fig:data_abstractor_example} depicts a progress bar approximation. Consider, for example, a logistics center where workers pick items from a large storage and prepare them for packaging. If a worker has picked up an item, this status is stored in the eventually consistent storage. The application approximates the number of prepared items, the active workers, and the overall number of concurrent packing processes into an approximated progress bar for each order.

Instead of providing consistent numbers, approximations and tendencies are provided that can be interpreted by humans much more easily, thereby preserving the beneficial effects of eventually consistent storage offerings.
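As an illustration, the progress approximation in the logistics example could be computed as follows. Rounding down to the nearest 10\% hides small inconsistencies in the underlying counts; the names and the rounding granularity are hypothetical.

```java
/**
 * Illustrative Data Abstractor: instead of exact (and possibly
 * inconsistent) numbers read from the eventually consistent store,
 * an approximated progress value is reported for an order.
 */
public class ProgressAbstractor {

    /**
     * Approximates order progress as a rough percentage, rounded down
     * to the nearest 10% so that small inconsistencies in the
     * underlying counts are not visible to the user.
     */
    public static int approximateProgress(int itemsPrepared, int itemsTotal) {
        if (itemsTotal <= 0) return 0;
        int percent = (itemsPrepared * 100) / itemsTotal;
        return Math.min(100, (percent / 10) * 10);
    }
}
```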
 

\section{Data Layer migration checklist}

Having studied, analyzed and addressed the different issues and challenges involved in the migration of the Data Layer, it is necessary to provide a series of steps that help ensure a successful migration.

The next table summarizes the main steps to follow during the Data Layer migration process.

\begin{center}
\begin{longtable}{|p{5cm}|p{9cm}|}
\caption{Data Layer migration process}\\
\hline
\multicolumn{1}{|c|}{\textbf{Migration procedures}} & \multicolumn{1}{c|}{\textbf{Details}}\\
\hline
\endfirsthead
\multicolumn{2}{c}
{\textit{Continued from previous page}} \\
\hline
\multicolumn{1}{|c|}{\textbf{Migration procedures}} & \multicolumn{1}{c|}{\textbf{Details}}\\
\hline
\endhead
\hline \multicolumn{2}{r}{\textit{Continued on next page}} \\
\endfoot
\hline
\endlastfoot
\textbf{1.}	Evaluate the size of the on-premise database and the functional and non-functional requirements. 
& Determine what hardware is required, and how much storage and instances will be needed after migration according to the current database size and the application requirement. \\ \hline

\textbf{2.}	According to the previous assessment, select the suitable data hosting. 
& Usually, the database selected corresponds to the same type of database used in the on-premise application. However, in specific cases it may be necessary to choose a different kind of database. For instance, if ACID transactions are a critical requirement for certain processes, then an SQL database will be the suitable solution; however, other processes in the same application may require a high degree of availability and low latency, for which a database that supports eventual consistency will be appropriate. A polyglot database may be another solution in particular cases. \\ \hline

\textbf{3.}	Provide transparent data access to the Cloud Database layer.
& 
Depending on the chosen database, some of the following steps may be required: 

\begin{enumerate}
	\item If the chosen database has a different model (relational, document-oriented, key-value, etc.), migrate the traditional model to the new model.
	\item Analyze the Cloud database and determine the differences with the legacy database (missing features, data type incompatibilities, etc.). Then, implement or emulate the missing functionalities: the \textbf{Data Store Functionality Extension} and \textbf{Emulator of Stored Procedures} patterns can be used.
	\item In case the Cloud database is composed of different databases, e.g. a relational database and key-value storages, it is necessary to hide the complexity of accessing the data. We can follow the \textbf{Data Access Component pattern}.
	\item Enable a transparent access API for the Business layer: If SQL is not supported by the Cloud database, it is necessary to implement \textbf{specific and generic functions} in order to provide a less coarse-grained API to the Business layer. The \textbf{Data Access Component pattern} provides techniques to implement these functions.
\end{enumerate} \\ \hline

\textbf{4.} Enable loose coupling between the Data Layer and Business Layer.
& So that the Business layer does not need to be aware of the location of the Data Layer, it is necessary to implement the Data Layer as a loosely coupled component. To ensure loose coupling we can use a \textbf{message-oriented middleware} implementation between the two layers, using a standardized serialization format, such as XML or JSON, to transfer data. It is also possible to implement a \textbf{web service} that contains the Data Layer functionality, since an enterprise service bus (ESB) can then be used as an intermediary to ensure loose coupling.\\ \hline

\textbf{5.}	Ensure scalability and elasticity.
& Since scalability and elasticity are two of the main benefits of the Cloud, the Data Layer must be adapted to exploit them. By applying the \textbf{Loose Coupling pattern}, scalability is guaranteed; by implementing the \textbf{Stateless Component pattern}, elasticity is optimized. \\ \hline

%\textbf{6.}	Evaluate the level of data consistency required by the application and the kind of data consistency provided by the selected database.
%& Analyze all processes that retrieve data from the database to determine the impact of a possible eventual consistency. Also identify the level of data consistency ensured by the offering data storage.\\ \hline

\textbf{6.}	Ensure the required level of data consistency.
& If the Cloud database provides only eventual consistency but the application requires stronger guarantees, we can implement \textbf{Client-centric consistency} in the \textbf{Data Access Component} to provide additional data consistency.
We can also implement the \textbf{Data Abstractor pattern}, which hides data inconsistencies by presenting abstractions of the data. \\ \hline

\textbf{7.}	Verify the migration.
& Build different scripts and test cases that send queries to the Cloud Data Layer in order to verify the new implementation. \\ \hline

\end{longtable}
\end{center}
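The \textbf{Data Access Component pattern} used in several of the steps above can be illustrated with a minimal sketch. All class and method names below are hypothetical, not from any real Cloud API: a relational store and a key-value store stand in for the heterogeneous Database Layer, and a single facade exposes \emph{Specific} functions for well-known business queries and \emph{Generic} functions for unstructured data, so the Business Layer never sees that the data is split across stores.

```python
# Hypothetical sketch of the Data Access Component pattern: one facade
# hides the fact that data lives in two different stores. All names
# are illustrative.

class RelationalStore:
    """Stands in for an SQL database; here backed by a dict of rows."""
    def __init__(self):
        self.rows = {}                  # primary key -> row dict

    def select(self, key):
        return self.rows.get(key)

    def insert(self, key, row):
        self.rows[key] = row


class KeyValueStore:
    """Stands in for a Cloud key-value storage (e.g. a blob store)."""
    def __init__(self):
        self.blobs = {}

    def get(self, key):
        return self.blobs.get(key)

    def put(self, key, value):
        self.blobs[key] = value


class DataAccessComponent:
    """Single entry point for the Business Layer: it decides which
    backing store serves each request, so callers never see the split."""
    def __init__(self):
        self._sql = RelationalStore()
        self._kv = KeyValueStore()

    # "Specific" functions: well-known business queries.
    def get_user_profile(self, user_id):
        return self._sql.select(user_id)

    def save_user_profile(self, user_id, profile):
        self._sql.insert(user_id, profile)

    # "Generic" functions: untyped access for large/unstructured data.
    def get_blob(self, key):
        return self._kv.get(key)

    def put_blob(self, key, value):
        self._kv.put(key, value)


dac = DataAccessComponent()
dac.save_user_profile("u1", {"name": "Alice"})
dac.put_blob("u1/avatar", b"\x89PNG...")
print(dac.get_user_profile("u1")["name"])   # -> Alice
```

Because the Business Layer only ever calls the facade, the backing stores can later be replaced by real Cloud offerings without touching the callers.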
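The loose-coupling step can likewise be sketched in a few lines. As a hedged illustration only, an in-memory queue stands in for a real message-oriented middleware, and JSON is the standardized serialization format; the operation names and payload layout are assumptions for the example.

```python
# Minimal sketch of loose coupling via message-oriented middleware:
# the Business Layer publishes serialized requests without knowing
# where the Data Layer runs. An in-memory queue stands in for a real
# message broker.
import json
import queue

broker = queue.Queue()              # stand-in for a message broker

def publish_request(operation, payload):
    """Business Layer side: serialize the request and publish it."""
    broker.put(json.dumps({"op": operation, "payload": payload}))

def consume_request(store):
    """Data Layer side: consume, deserialize, and dispatch."""
    message = json.loads(broker.get())
    if message["op"] == "save":
        store[message["payload"]["key"]] = message["payload"]["value"]
    return message["op"]

data_store = {}
publish_request("save", {"key": "order-42", "value": "pending"})
consume_request(data_store)
print(data_store["order-42"])       # -> pending
```

Since the only contract between the layers is the message format, the Data Layer can be relocated, replicated, or scaled out behind the broker without any change on the Business Layer side.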
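Finally, the Client-centric consistency mentioned in the consistency step can be sketched as a read-your-writes wrapper inside the Data Access Component. The simulation of replication lag below is a toy model, and all names are illustrative: the accessor caches each client's own writes and falls back to them while the eventually consistent store is still stale.

```python
# Hedged sketch of client-centric (read-your-writes) consistency layered
# on an eventually consistent store. The lag model is a toy assumption.
import time

class EventuallyConsistentStore:
    """Simulates replication lag: writes become visible after a delay."""
    def __init__(self, lag=0.05):
        self._visible = {}
        self._pending = {}          # key -> (value, visible_at)
        self._lag = lag

    def write(self, key, value):
        self._pending[key] = (value, time.monotonic() + self._lag)

    def read(self, key):
        entry = self._pending.get(key)
        if entry and time.monotonic() >= entry[1]:
            self._visible[key] = entry[0]
            del self._pending[key]
        return self._visible.get(key)

class ReadYourWritesAccessor:
    """Per-client wrapper: reads fall back to the client's own writes."""
    def __init__(self, store):
        self._store = store
        self._own_writes = {}

    def write(self, key, value):
        self._store.write(key, value)
        self._own_writes[key] = value

    def read(self, key):
        value = self._store.read(key)
        # The replica may still be stale; prefer the client's own write.
        return value if value is not None else self._own_writes.get(key)

store = EventuallyConsistentStore()
client = ReadYourWritesAccessor(store)
client.write("profile", "v2")
print(client.read("profile"))       # -> v2, even before replication completes
print(store.read("profile"))        # other clients may still see None here
```

The writing client always observes its own update, while other clients keep the weaker eventual guarantee of the underlying store.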

\section{Summary of research questions and answers}

\begin{center}
\begin{longtable}{|p{5cm}|p{9cm}|}
\caption{Research questions and answers}\\
\hline
\multicolumn{1}{|c|}{\textbf{Question}} & \multicolumn{1}{c|}{\textbf{Answer}}\\
\hline
\endfirsthead
\multicolumn{2}{c}
{\textit{Continued from previous page}} \\
\hline
\multicolumn{1}{|c|}{\textbf{Question}} & \multicolumn{1}{c|}{\textbf{Answer}}\\
\hline
\endhead
\hline \multicolumn{2}{r}{\textit{Continued on next page}} \\
\endfoot
\hline
\endlastfoot
\textbf{1.}	What kinds of applications can benefit from migrating the Data Layer to the Cloud? 
& The following kinds of applications are good candidates: 
\begin{enumerate}
	\item Highly-scalable Web sites.
	\item Social or customer-oriented applications.
	\item Mobile applications.
	\item Applications whose rapidly increasing need for resources cannot be met by buying new hardware, e.g. under unanticipated load that grows without prior indication.
\end{enumerate}

On the other hand, bad candidates for the Cloud are:

\begin{enumerate}
	\item Applications that involve extremely sensitive data.
	\item Performance-sensitive applications.
	\item Applications with very data-intensive workloads (for example, loading the database into the Cloud may be costly).
\end{enumerate}
 
 \\ \hline

\textbf{2.}	Beyond security and privacy, what other critical issues involved in migrating the Data Layer need to be solved or minimized? 
& The issues that need to be addressed are:
\begin{enumerate}
	\item Choosing suitable data hosting.
	\item Enabling transparent access of the Business Layer to the data.
	\item Enabling loose coupling between the Business Layer and the Data Layer.
	\item Ensuring scalability and elasticity.
	\item Ensuring the level of data consistency required by the application.
\end{enumerate}
 \\ \hline

\textbf{3.}	What patterns or reusable solutions can be used to address the different issues and challenges? 
& There are a number of patterns that can be used to address these issues: 
\begin{enumerate}
	\item The Data Store Functionality Extension and Emulator of Stored Procedures patterns are used to implement or emulate missing functionality.
	\item The Data Access Component pattern is used to provide transparent access to the Data Layer, hiding the complexity of the Database Layer. 
	\item The Loose coupling and Stateless component patterns are used to ensure scalability and elasticity.
	\item The Data Access Component and Data Abstractor patterns can be used to ensure the required level of consistency.
\end{enumerate}
 \\ \hline

\textbf{4.}	How to manage the trade-offs when eventual consistency replaces traditional strong consistency? 
& Whether to adopt a weaker level of consistency to reduce latency and improve performance depends on the application requirements. If strong consistency is required in particular processes or modules, it is impossible to relax this requirement in those parts of the system. For example, user account management should be accessible under strong consistency, while other processes, such as browsing news in a social network, may work under eventual consistency. The trade-off between strong and eventual consistency can be summarized as follows:

\textbf{Strong consistency:} all users see the latest version of the data, versus higher latency, lower performance, and lower availability.   

\textbf{Eventual consistency:} low latency, high availability, and fast response times, versus no guarantee that all users see the latest version of the data; data may be lost if a failure occurs during replication.  

\\ \hline

%\textbf{5.}	How to estimate the cost-benefit relationship associated with migration of the Data Layer? &  \\ \hline


\end{longtable}
\end{center}
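The strong-versus-eventual trade-off discussed in question 4 can be made concrete with a toy replicated store. The latency model and the replica count below are illustrative assumptions, not a real protocol: a "strong" read contacts every replica and returns the newest version (fresh, but slower), while an "eventual" read contacts a single replica (fast, but possibly stale).

```python
# Illustrative sketch of the strong vs. eventual consistency trade-off
# on a tiny replicated store. The latency model (1 abstract unit per
# replica contacted) is hypothetical.

class Replica:
    def __init__(self):
        self.version = 0
        self.value = None

REPLICAS = [Replica() for _ in range(3)]
READ_COST = 1                       # latency units per replica contacted

def write(value, reached=2):
    """A write that only reaches some replicas before 'returning'."""
    for replica in REPLICAS[:reached]:
        replica.version += 1
        replica.value = value

def strong_read():
    """Contact all replicas, return the freshest value. Cost: N units."""
    freshest = max(REPLICAS, key=lambda r: r.version)
    return freshest.value, READ_COST * len(REPLICAS)

def eventual_read(index=2):
    """Contact one replica. Cost: 1 unit, but the value may be stale."""
    return REPLICAS[index].value, READ_COST

write("v1", reached=2)              # replica 2 has not replicated yet
print(strong_read())                # -> ('v1', 3): fresh, but 3x latency
print(eventual_read())              # -> (None, 1): fast, but stale
```

The example shows in miniature exactly the terms of the table above: the strong read pays latency proportional to the number of replicas to guarantee freshness, while the eventual read is cheap but can return data that predates the last write.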




%----------------------------------------------------------------------------------------
