\documentclass{vldb}
\input{pkg}

% Include information below and uncomment for camera ready
\vldbTitle{WHTAP: Dual Consistent Snapshot based Concurrency Control for Hybrid Transaction and Analysis Processing Systems}
\vldbAuthors{Liang Li}
\vldbVolume{13}
\vldbNumber{2}
\vldbYear{2019}
\vldbDOI{https://doi.org/TBD}

\begin{document}
\title{WHTAP: Dual Consistent Snapshot based Concurrency Control for Hybrid Transaction and Analysis Processing Systems}
\numberofauthors{8}
\author{
\alignauthor
Liang Li\\
       \affaddr{Institute for Computer Science and Technology}\\
       \affaddr{Northeastern University}\\
       \affaddr{Shenyang, China}\\
       \email{liliang@stumail.neu.edu.cn}
% 2nd. author
\alignauthor
G.K.M. Tobin\\
       \affaddr{Institute for Clarity in Documentation}\\
       \affaddr{P.O. Box 1212}\\
       \affaddr{Dublin, Ohio 43017-6221}\\
       \email{webmaster@marysville-ohio.com}
% 3rd. author
\alignauthor Lars Th{\Large{\sf{\o}}}rv{$\ddot{\mbox{a}}$}ld\\
       \affaddr{The Th{\large{\sf{\o}}}rv{$\ddot{\mbox{a}}$}ld Group}\\
       \affaddr{1 Th{\large{\sf{\o}}}rv{$\ddot{\mbox{a}}$}ld Circle}\\
       \affaddr{Hekla, Iceland}\\
       \email{larst@affiliation.org}
\and  % use '\and' if you need 'another row' of author names
% 4th. author
\alignauthor Lawrence P. Leipuner\\
       \affaddr{Brookhaven Laboratories}\\
       \affaddr{Brookhaven National Lab}\\
       \affaddr{P.O. Box 5000}\\
       \email{lleipuner@researchlabs.org}
% 5th. author
\alignauthor Sean Fogarty\\
       \affaddr{NASA Ames Research Center}\\
       \affaddr{Moffett Field}\\
       \affaddr{California 94035}\\
       \email{fogartys@amesres.org}
% 6th. author
\alignauthor Charles Palmer\\
       \affaddr{Palmer Research Laboratories}\\
       \affaddr{8600 Datapoint Drive}\\
       \affaddr{San Antonio, Texas 78229}\\
       \email{cpalmer@prl.com}
}
%\additionalauthors{Additional authors: John Smith (The Th{\o}rv\"{a}ld Group, {\texttt{jsmith@affiliation.org}}), Julius P.~Kumquat
%(The \raggedright{Kumquat} Consortium, {\small \texttt{jpkumquat@consortium.net}}), and Ahmet Sacan (Drexel University, {\small \texttt{ahmetdevel@gmail.com}})}
%\date{30 July 1999}


\maketitle


\begin{abstract}
The abstract for your paper for the PVLDB Journal submission.
The template and the example document are based on the ACM SIG Proceedings  templates. This file is part of a package for preparing the submissions for review. These files are in the camera-ready format, but they do not contain the full copyright note.
Note that after the notification of acceptance, there will be an updated style file for the camera-ready submission containing the copyright note.
\end{abstract}
\begin{CJK*}{UTF8}{gbsn}
\section{Introduction}
%Recent years, 伴随内存和CPU等mordern hardware的快速发展, 内存数据库成为一个很hot的领域.
%最明显的特点是, the biggest bottleneck of disk io is gone.
%然而, 据[]了解, 锁开销占据的百分比很大, 是一个新的bottleneck.
%面对morden hardware \textbf{多核，大内存，协处理器，高速网络} 的 , main memory database , 设计, 有些并发协议不能满足性能要求.
%比如说:
%经典的两阶段锁，在高吞吐量下，锁的开销占整个事务处理时间的百分比很大
%最近几年关于，轻量并发锁的研究有很多,...vll, ill 等等.
%其中notable的一个算法是kunren的 VLL 算法, 相对与传统的两阶段锁有很大的改进, 不仅可以避免死锁, 还可以高效运行在heavy work contention的场景.
%
%
%
%\subsection{Observation}
%
%然而, 基于单版本的锁协议, 读和写之间有很明显的冲突, 也就是说读会被写操作明显阻塞.
%which is not good enough for following applications:
%
%\begin{itemize}
%  \item HTAP workload: like Gartner says,
%  \item consistent checkpoint:
%\end{itemize}
%\TODO{ find paper, MVCC 内存开销比较大}
%\subsection{Motivation}
%To both support maintaining a snapshot view of data and quick transaction control, the present answer is MVCC.
%多版本MVCC 能专门针对查询操作优化, 提高吞吐量, 然而, MVCC的内存开销相对single version的算法有所提高.
%大量的程序都是 read-intensive的, 我们倾向于集成vll和mvcc的机制提出一个专门针对读优化的并发算法.
%此外, 针对多版本开销大的问题,我们必须针对性的提出优化方针.
%MVCC的缺点: 1, 内存开销大; 2, malloc耗时.
%MVCC is not the appreciate answer.
%
%\subsection{Contribution}
%
%In this paper, we invent a concurrent control protocol based two version locking, named 2VLL.
%\liliang{As the best of knowledge, this is the first work tries to use two version of data to deal with HTAP workloads.}
%\begin{enumerate}
%	\item 针对读优化的vll多版本协议.
%	\item 进一步提出了ovll算法.
%	\item 新算法, 可以很方便的执行快速增量式检查点算法, 这和第一篇文章连起来了.
%	\item 大量的实验.
%\end{enumerate}
%\subsection{Organization}


%Traditionally, database processing has been broadly classified into two categories: online transaction processing (OLTP) and online analytical processing (OLAP). OLTP systems preceded the emergence of relational database management systems (RDBMSs). OLAP, which was enabled by the arrival of RDBMSs and SQL, and enhancements to them, has gained even more attention in the last decade or so with the emergence of column stores and Big Data technologies like Map/Reduce, Hadoop and Spark. Data generated by OLTP systems are periodically moved in a batched fashion into OLAP systems for analytical processing.  
%\liliang{Efficiently handling both OLTP and OLAP workloads is difficult}
%because they require different algorithms and data structures.
%A common approach for handling such hybrid workloads is to keep
%a separate data warehouse for OLAP isolated from the OLTP system. 
%Data warehouse systems are optimized for read-only analytical workloads and are periodically refreshed through a batch job
%containing the latest data updates. This provides both good performance isolation between the two workloads, and the ability to tune
%each system independently. \liliang{The downsides, however, are that the data analyzed is possibly stale and that there is additional overhead of interfacing with multiple systems [45].}
%Traditional analytical systems often provide business insights too late. Financial institutions want to address potential fraud, not days or weeks later. 
%In the last few years, increasingly organizations want to be able to base their decisions on the latest set of raw data and the real-time analytics derived from them.
%\textbf{Freshness is very IMPORTANT.}


Traditionally, database processing has been broadly classified into two categories: 
online transaction processing (OLTP) and online analytical processing (OLAP).
\liliang{Efficiently handling both OLTP and OLAP workloads is difficult.}
A common approach for handling such hybrid workloads is to keep
a separate data warehouse for OLAP isolated from the OLTP system. 
Data generated by OLTP systems are periodically moved in a batched fashion into OLAP systems for analytical processing.
This provides both good performance isolation between the two workloads, and the ability to tune
each system independently. \liliang{The downsides, however, are that the data analyzed is possibly stale and that there is additional overhead of interfacing with multiple systems [45].}

Traditional analytical systems often provide business insights too late. Financial institutions want to address potential fraud, not days or weeks later. 
In the last few years, increasingly organizations want to be able to base their decisions on the latest set of raw data and the real-time analytics derived from them.
\textbf{Data freshness is critically important.}

In recent years, there has been tremendous interest in developing HTAP systems.
Gartner coined the term “Hybrid Transactional and Analytical Processing” (\textbf{HTAP}) to describe this new type of database~\cite{bib-gartner1}\cite{bib-gartner2}. Another term used to describe this type of processing is \textbf{Operational Analytics}~\cite{bohm2016operational}. Both terms indicate that insight and decision-making take place instantaneously with a transaction.

%HTAP为企业提供的新功能包括：

%\begin{itemize}
%	\item 能够根据潜在客户的搜索实时调整价格或实时更新在线产品目录
%	\item 作为销售人员，创建模拟以确定要销售给特定客户的产品版本
%	\item 作为销售经理，随着季度末的临近，实时监控您的团队
%	\item 根据对事件的反应计划货物运输，例如罢工或暴风雪
%\end{itemize}

Gartner initially outlined four key HTAP benefits:
\begin{itemize}
	\item \textbf{Architectural and technical complexity.}
%	In traditional approaches, data must be extracted from the operational database, transformed and loaded into the analytical database, which requires the adoption of database replication; extraction, transformation and loading (ETL) tools; enterprise service buses (ESBs); message-oriented middleware (MOM); and other integration tools.
	In HTAP, data does not need to move from operational databases to separate data warehouses or data marts to support analytics.
	\item \textbf{Analytic latency.} 
%	In a classic setting, it can take hours, days or even weeks from the moment data is generated by the transaction processing application to when it can be used for analytics.
%	Although this is adequate for certain types of analytics, and even processes, it may be suboptimal for others. For example, being able to perform financial consolidation at any point in the month can enable a CFO to better evaluate the business impact of economic trends and take early corrective actions.
	The transactional data of HTAP applications is readily available for analytics as soon as it is created.
	\item \textbf{Synchronization.} 
%	{If analytical and transactional data storage is separated, when business users want to "drill down" from a point-in-time aggregate into the details of the source data in operational database, in many cases they find the source of data "out of synch" because of the analytic latency. 
	In HTAP, drill-down from analytic aggregates always points to the "fresh" HTAP application data.
	\item \textbf{Data duplication.}
%	In traditional architecture, multiple copies of the same data must be administered, monitored, managed and kept consistent, which may lead to inaccuracies, timing differences, and inconsistency. 
	In HTAP, the need to create multiple copies of the same data is eliminated (or at least reduced).
\end{itemize}








% Elliott also points to faster querying, strong data compression and flexibility to make changes among HTAP's benefits.

%An example is the airline industry [59], 
%where analytics of flight bookings is offered as a service to travel agents and airline companies.
New database technologies, including in-memory computing (IMC), have emerged in recent years that support hybrid workloads within one database instance. These systems take advantage of new hardware such as solid-state drives (SSDs) and the falling cost of RAM.
They allow transactional and analytical processing to execute at the same time and on the same data [2]. Consequently, real-time processing and reporting are possible while the transactions occur.
To run the combined OLTP\&OLAP workload in one single system,
multi-version concurrency control is currently the most common approach for
supporting transactions in mixed workloads.
MVCC allows a high degree of parallelism because readers do not block writers.
The core principle is straightforward: when a tuple is updated,
a new physical version of the tuple is created and stored alongside
the old one in a version chain, so that the old version remains
available to readers that are still allowed to see it.
Timestamps ensure that each transaction accesses only the most recent
version that existed when it entered the system.
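The version-chain lookup described above can be sketched as follows (a minimal Python sketch with illustrative names; locking and garbage collection are omitted):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Version:
    value: int
    begin_ts: int                     # timestamp at which this version was created
    prev: Optional["Version"] = None  # next-older version in the chain

def install(head: Optional[Version], value: int, ts: int) -> Version:
    # an update prepends a new physical version; the old one stays readable
    return Version(value, ts, prev=head)

def read(head: Optional[Version], snapshot_ts: int) -> Optional[int]:
    # walk the chain to the newest version that existed at snapshot_ts
    v = head
    while v is not None and v.begin_ts > snapshot_ts:
        v = v.prev
    return v.value if v is not None else None

# three updates at timestamps 1, 5, 9 build a chain of three versions
head = None
for ts, val in [(1, 10), (5, 20), (9, 30)]:
    head = install(head, val, ts)
```

A transaction that entered the system at timestamp 6 reads the value written at timestamp 5; this per-item chain walk, with its timestamp comparisons and pointer chasing, is exactly the cost that hurts scan-heavy OLAP queries.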


%To overcome this problem, several alternatives have been re-
%cently introduced which target such hybrid workloads (e.g., SAP
%HANA [17], HyPer [28, 40], SQL Server [30], MemSQL [52], Or-
%acle [29], etc.). \liliang{However, a big challenge for these systems is the
%	performance impact that workloads have on each other.} A recent
%study by Psaroudakis et al. [47] analyzed cross-workload interfer-
%ence in HANA and HyPer and found that the maximum attainable
%OLTP throughput, when co-scheduled with a large analytical work-
%load, is reduced by at least three and five times respectively.



\textbf{Limitations.} In MVCC implementations that rely on a single execution engine,
\liliang{all transactions, no matter whether they are short running OLTP
transactions or scan-heavy OLAP queries, are treated equally and
are executed on the same (versioned) database. }While this form
of processing unifies transaction management, it also
has unpleasant downsides under HTAP workloads:
\begin{itemize}
\item First and foremost, scan-heavy OLAP queries suffer heavily when they have to
deal with a large number of version chains [16]. During a scan,
version chains must be traversed to locate the most recent version
of each item that is visible to the transaction. This involves expensive
timestamp comparisons as well as random accesses when walking
the version chains. Since column scans typically take significantly
more time than short-running transactions, which touch
only a few entries, a large number of OLTP transactions can perform
updates in parallel and create such version chains.
\item Apart from this, these version chains must be garbage collected from time to
time to remove versions that are no longer visible to any transaction
in the system. Garbage collection is typically done by a separate
thread, which frequently traverses these chains to locate and delete
outdated versions [12, 25, 26]. This thread has to be managed and
synchronized with transaction processing, consuming precious
system resources.
\end{itemize}

Interestingly, a large number of database systems, including
major players like PostgreSQL [18], Microsoft Hekaton [6], SAP
HANA [7], HyPer [16], MemSQL [1], MySQL [2], NuoDB [3], and
Peloton [4] currently implement a form of multi-version concurrency control (MVCC) [5, 14, 24] to manage their transactions.
An HTAP workload, however, consists of transactions of inherently different natures and does not fit uniform processing in a single execution engine that treats all incoming transactions the same way. Unfortunately, many state-of-the-art MVCC systems [1, 2, 4, 6, 16, 18]
implement some variant of such a processing model.



As discussed above, treating OLTP and OLAP queries identically is a poor fit for HTAP.
To the best of our knowledge, the prevailing approach is for the system to classify queries by
type and execute them on separate replicas within a single system.
There are two main methods for synchronizing the OLTP and OLAP replicas:
(1) Recording and merging delta snapshots.
	Recent work such as AIM~\cite{Braun2015Analytics} and BatchDB~\cite{Makreshanski2017BatchDB} employs the
	delta snapshot method to synchronize the data: extra memory space records the effects of transactions, a delta snapshot is generated over a certain interval, and the snapshot is then merged into the OLAP dataset. Notably, the snapshot in BatchDB is a special set of in-memory logs. In industry, the most typical system is SAP HANA,
	which holds a write-optimized delta store to collect insert and delete operations and merges it into a read-optimized, immutable main store.
(2) Copy-on-write. The fork system call can produce a virtual snapshot of the OLTP data to serve OLAP tasks,
but fork forces the OLTP and OLAP data to share the same physical organization, which precludes workload-specific optimizations. Moreover, fork granularity is too coarse:
performance is heavily influenced by dataset size, and the approach is not very flexible. Anker~\cite{Sharma2017Accelerating} proposes a fine-grained, system-level virtual snapshot system call, but it relies on a column-storage schema.

	

Delta snapshots are clearly more flexible, but using them raises two challenges.
On the one hand, recording delta snapshots while affecting OLTP performance as little as possible is difficult;
on the other hand, merging snapshots naively requires blocking OLAP, and avoiding that blocking is the second challenge.

As far as we know, present systems in both academia and industry do not achieve wait-free execution of OLTP and OLAP.
This paper presents WHTAP, an alternative design of a database
engine architecture that handles hybrid workloads with guarantees for performance, data freshness, consistency, and isolation.
To accommodate both OLAP and OLTP, WHTAP primarily relies on replication, with
a primary replica dedicated to OLTP workloads and a secondary
replica dedicated to OLAP workloads. This allows workload-specific
optimizations for each replica and physical isolation of
the resources dedicated to each workload.
WHTAP not only guarantees the performance of OLTP and OLAP, but also ensures that both are executed in a wait-free style.


In summary, this work makes the following contributions:
\begin{itemize}
	\item \textbf{Dual snapshot.} We introduce a dual snapshot structure into the storage engine to ensure the freshness and wait-free operation of WHTAP.
	\item \textbf{LSM-like query.} To let analytical queries see the latest data, we propose an LSM-like query algorithm that lets analytical transactions run in wait-free mode while querying the latest data.
	\item \textbf{High-performance Wait-Free HTAP (WHTAP) system.} We develop a prototype and open-source the code on GitHub~\footnote{\url{https://github.com/bombehub/HTAP}}. The system guarantees serializability for OLTP and snapshot isolation for OLAP. Compared with the traditional single-system approach, our system achieves similar OLTP performance while improving OLAP performance by several orders of magnitude.
\end{itemize}

The paper is organized as follows. \secref{sec:overview} presents the background and a high-level overview of HTAP systems.
\secref{sec:componets} gives more detail on each component of the WHTAP system.
\secref{sec:state} then walks through the life cycle of the WHTAP system and the details of its algorithms.
\secref{sec:choice} discusses several design choices in real-world systems.
\secref{sec:exp} evaluates the system experimentally with the YCSB and TPC-C benchmarks.
Related work is given in \secref{sec:rw}, and
\secref{sec:conclusion} and \secref{sec:fw} draw conclusions and outline future work.



\section{Overview}\label{sec:overview}
First, in this section, we present a high-level view of HTAP systems.
Recent research has proposed two different architectures
to handle the mixed workload, shown in \figref{fig:fork} and \figref{fig:delta}.

%\begin{figure}[htb]
%	\centering
%	\begin{minipage}[t]{0.23\textwidth}
%		\centering
%		\includegraphics[width=\textwidth]{fig/2method-fork.pdf}
%		\caption{fork}
%		\label{fig:fork}
%	\end{minipage}
%	\hspace{0.1cm}
%	\begin{minipage}[t]{0.23\textwidth}
%		\centering
%		\includegraphics[width=\textwidth]{fig/2method-delta.pdf}
%		\caption{delta.}
%		\label{fig:delta}
%	\end{minipage}
%\caption{2 kind of method for HTAP systems.}\label{fig:2method}
%\end{figure}

\begin{figure}
	\centering
	\includegraphics[width=0.4\textwidth]{fig/2method-fork.pdf}
	\caption{Fork-based data propagation.}\label{fig:fork}
\end{figure}


\begin{figure}
	\centering
	\includegraphics[width=0.4\textwidth]{fig/2method-delta.pdf}
	\caption{Delta-based data propagation.}\label{fig:delta}
\end{figure}


\textbf{\textit{Complete Snapshot}} is the mechanism employed by most modern operating systems to
efficiently manage the initially shared memory state of
parent and child processes after a fork system call.
Systems such as HyPer~\cite{kemper2011hyper} use this OS mechanism to manage different
snapshots of their database: inserts, updates, and deletes are processed by
the parent process on the most current version of the data,
while analytical query processing happens in the child process(es)
on older snapshots.
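On a Unix system, the copy-on-write behavior that HyPer exploits can be observed directly (a minimal Python sketch; a real system snapshots an entire in-memory database this way, not a single dict):

```python
import os

data = {"x": 1}
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # child: its address space is a copy-on-write snapshot taken at fork time
    os.close(r)
    os.write(w, str(data["x"]).encode())
    os._exit(0)
# parent: keeps processing updates on the most current version of the data
os.close(w)
data["x"] = 2
snapshot_value = int(os.read(r, 16))  # the child still sees the old value
os.waitpid(pid, 0)
```

The parent's update after the fork never reaches the child, so the child's analytical view stays consistent at no copying cost until a page is actually modified.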

\textbf{\textit{Delta Snapshot}} is a mechanism proposed by Krueger
et al. [25]. The idea is to accumulate all incoming inserts, updates, and deletes in
one data structure (called the delta) and to process analytical
queries on a separate structure (the main). Periodically, the
delta records are applied to the main, an operation referred to as the
merge.
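A minimal sketch of the delta/main organization (illustrative names; real systems use a write-optimized delta and a compressed, read-optimized main rather than plain dicts):

```python
class DeltaMain:
    """Krueger-style delta/main organization (sketch)."""
    def __init__(self):
        self.main = {}    # read-optimized structure scanned by analytical queries
        self.delta = {}   # write-optimized structure absorbing Insert/Update/Delete

    def put(self, key, value):
        self.delta[key] = value       # OLTP writes touch only the delta

    def query(self, key):
        return self.main.get(key)     # analytics read the (older) main only

    def merge(self):
        # periodically apply the accumulated delta records to the main
        self.main.update(self.delta)
        self.delta.clear()

dm = DeltaMain()
dm.put("a", 1)
stale = dm.query("a")     # written but not yet merged: analytics do not see it
dm.merge()
fresh = dm.query("a")     # visible after the merge
```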
%If response time for Put is critical, we can maintain
%two deltas, one for Insert/Update/Delete and one for records currently
%being merged, and atomically switch their reference at the
%starting point of a merge. This approach also guarantees
%snapshot isolation for the analytical queries as they work on
%a slightly outdated, but consistent version of the data.



\subsection{Design Goals}\label{sec:goals}
We identify the key requirements for engines that aim to handle hybrid transactional and analytical processing (HTAP) workloads.
\begin{enumerate}
	\item \textbf{Single system with a uniform interface.} To provide a transparent user experience, the HTAP system should expose a single table rather than requiring the user to explicitly choose between OLTP and OLAP data, even though the data is internally kept in two physically isolated copies. Exposing two table engines, \eg as MemSQL and SQL Server do, is not what we want.
	\item \textbf{Data freshness.} Transactional data should be readily available for analytics as quickly as possible. OLTP should periodically and frequently merge transaction snapshot data into the OLAP replica, so that OLAP runs on the latest snapshot version as much as possible. This ensures the effectiveness of ad-hoc analytics.
	\item \textbf{High throughput for OLTP and OLAP.} For a mixed OLTP and OLAP load, we need the best possible performance of both. The OLTP workload requires a high-throughput concurrency protocol; for the OLAP workload, we need to separate the OLTP and OLAP data so that long queries operate on static snapshot data.
	\item \textbf{Wait-free OLTP and wait-free OLAP.} \cite{Salles.12}\cite{Cao.13} indicate that taking delta snapshots (of OLTP data) can block the system. Since OLTP tasks are latency-sensitive, our system must reduce such latency spikes. On the other hand, merging delta snapshots destroys the read-only property of the OLAP data, so the corresponding OLAP queries may be blocked; merging delta snapshots without blocking OLAP query execution is therefore another big challenge.
	\item \textbf{Small memory footprint.} Although the price of memory is dropping rapidly, it
	is still relatively expensive to store many OLTP applications entirely in main memory, and main memory remains a limited resource. Therefore, complete multi-versioning (where database updates do not overwrite previous values but add another version
	in a new place in memory) is likely too expensive in terms of memory for many applications.
	Ideally, the checkpointing process should require minimal additional memory.
	We therefore abandon multi-version concurrency control and adopt a state-of-the-art single-version protocol, TicToc.
\end{enumerate}  


\subsection{Dual Snapshot Based Architecture}
In this part, we give a high-level overview of our system.

%我们采用增量快照的方式处理htap负载。具体来说，就是周期性的生成oltp的数据增量快照，然后定期合并到olap的存储引擎内。
We use dual delta snapshots to handle the mixed workload.
Specifically, transaction data is collected in delta snapshots and periodically merged into the OLAP storage engine,
as shown in \figref{fig:toplevel}.

\begin{figure}[htb]
	\centering
	\includegraphics[width=0.45\textwidth]{fig/toplevel.eps}
	\caption{Dual Snapshot.}\label{fig:toplevel}
\end{figure}


Transactions and analytical queries are handled in isolation.
In other words, two data replicas process the OLTP and OLAP requests, respectively.
To guarantee system performance, we employ the dual snapshot model, which flexibly ensures that OLTP and OLAP run wait-free.
The dual snapshot model that separates OLTP from OLAP guarantees serializability for OLTP and snapshot isolation for OLAP.


At a high level, the system alternates between two cycles.
Assume that in period one, transaction data is recorded into the delta snapshot in delta1
while delta2 is merged into the OLAP data store.
In the next period, delta1 and delta2 exchange roles:
the delta snapshot is recorded in delta2 while delta1 is merged into the OLAP data store.
The shorter the cycle, the fresher the system's OLAP data.
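The alternating cycle can be sketched as follows (a simplified Python sketch with illustrative names; in the real system the frozen delta is merged concurrently during the period rather than at the swap point):

```python
class DualDelta:
    """Alternating dual-delta sketch: one delta records, the other is merged."""
    def __init__(self):
        self.deltas = [{}, {}]   # delta1 and delta2
        self.active = 0          # index of the delta currently recording

    def record(self, key, value):
        # OLTP writes of the current period land in the active delta
        self.deltas[self.active][key] = value

    def end_period(self, olap_store):
        # swap roles: the previously active delta freezes and is merged,
        # while new writes flow into the other delta
        frozen = self.active
        self.active = 1 - self.active
        olap_store.update(self.deltas[frozen])
        self.deltas[frozen].clear()

olap = {}
dd = DualDelta()
dd.record("john", 1)      # period one: delta1 records
dd.end_period(olap)       # swap: delta1 merged into OLAP, delta2 now records
dd.record("john", 2)      # period two: delta2 records
```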

The challenge is how to record the delta snapshot without blocking OLTP and, at the same time, merge the delta snapshot without blocking OLAP.
As we know, taking a delta snapshot can cause performance loss and latency spikes, and merging a delta snapshot can break OLAP's read-only property.

To overcome these challenges, we propose an architecture, named WHTAP, which has six components in all, as shown in \figref{fig:toplevel}.
%为了确保完成以上五个功能，系统一共可以划分为6个部分，对应于图上所标示的编号：
\begin{enumerate}
	\item Storage Engine. The OLTP and OLAP data are stored in two replicas; in addition, we need some extra space to record the delta data. This module is primarily responsible for the organization and storage of data.
	\item Transaction Concurrency Control. How to schedule the execution logic of the OLTP workload and guarantee performance.
	\item Delta Snapshot. To ensure data freshness, the data produced by running transactions must be propagated to the OLAP store. Taking the snapshot poses three difficulties: (1) ensuring a consistent serial order between OLTP execution and the delta snapshot; (2) losing as little performance as possible; (3) avoiding noticeable latency spikes.
	\item Compact Snapshot. The dual snapshot structure must periodically merge the frozen snapshot data into the OLAP data. The difficulty is merging the snapshot without blocking the normal execution of OLAP queries.
	\item Query Execution. OLAP data is updated periodically from delta snapshot data; the challenge is ensuring high performance for OLAP queries and guaranteeing snapshot isolation while retaining the wait-free property.
	\item State Controller. The WHTAP system maintains a state controller to make sure queries run correctly and wait-free, and to merge snapshots at a suitable time.
\end{enumerate}

\section{WHTAP Components}\label{sec:componets}
In this section and the next, we discuss the above components in detail.
\subsection{Storage Engine}
As \secref{sec:goals} states, on the one hand, the system must expose one single schema;
on the other, it should maintain two data replicas, one for OLTP and one for OLAP.
Besides, it employs two extra memory regions to record the delta snapshots.

The storage engine is organized at the tuple level.
Each logical tuple contains two static in-memory rows, used to store the OLTP and OLAP data, respectively.
Besides, it holds two copies of dynamic data, Delta1 and Delta2,
which record the increments alternately.
Dynamic allocation saves memory; static allocation can also be used for performance reasons.
As in TicToc~\cite{yu2016tictoc:}, for each TP tuple we maintain two counters representing the read and write timestamps.

Compared with a single-version engine, our memory footprint is between two and four times larger.
For example, \figref{fig:engine} shows a user information table. For each user, such as "john", we maintain two rows of data, one for TP and one for AP analysis. Once a user updates John's information, we modify the data (recording it in the TP row and the corresponding delta row).
For external data access, we expose one piece of logical data,
which satisfies the design goal of a single external schema.
Note that the AP data is older than the TP data.
\begin{figure}[htb]
	\centering
	\includegraphics[width=0.4\textwidth]{fig/2v-layout.pdf}\\
	\caption{Table Storage Engine.}
	\label{fig:engine}
\end{figure}

Many indexes have been designed to organize in-memory row data, such as the Adaptive Radix Tree (ART)~\cite{Leis2013The}\cite{Leis2016The}, BwTree~\cite{Levandoski2013The}, Masstree~\cite{Mao2012Cache}, and SkipList~\cite{Pugh1990Skip}.
\cite{xxx} (ICDE18) gives a thorough performance evaluation of these state-of-the-art indexes.
Index choice is beyond the scope of this paper.
For fast and simple table access, we use a hash index or a memory-optimized B+-tree to organize tuples.
The OLTP, OLAP, and delta data all share the same index.




\subsection{Transactions and Concurrency Control}\label{alg:oltpcc}
To handle HTAP workloads, transactions can be categorized into \textbf{updates} (OLTP workload) and \textbf{queries} (OLAP workload).

\liliang{For the update transactions}, the most popular protocol guaranteeing serializability
is two-phase locking.
However, as shown in the DBx1000~\cite{DBLP:journals/pvldb/YuBPDS14} project,
two-phase locking is not well suited to modern database systems.
It turns out that timestamp-ordering protocols, including OCC and MVCC, are much better for main-memory database systems.

Component 2 (\figref{fig:toplevel}) uses an OCC-based protocol to schedule the OLTP workload.
First, we describe the basic OCC protocol.
With OCC, the DBMS tracks the read/write set of each transaction and stores all of its write
operations in a private workspace [28].
Then, when a transaction commits, the system determines whether that transaction's read set
overlaps with the write set of any concurrent transaction. If no overlap exists, the DBMS applies the changes from the transaction's workspace to the database; otherwise, the transaction is aborted and restarted. The advantage of this approach for main-memory
DBMSs is that transactions write their updates to shared memory only at commit time, so the contention period is short [42].
This feature combines well with snapshot recording: each write operation in the write phase is accompanied synchronously by a \textbf{write\_delta} operation.
That is why we choose OCC as WHTAP's transaction execution protocol.
The disadvantage is that the duration of the write phase roughly doubles, which may cause concurrent transactions to fail validation and thus degrade system performance; this remains to be verified experimentally.
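A minimal sketch of this OCC commit path with the paired write\_delta (all names are illustrative; real validation also locks the write set and handles concurrent validators):

```python
def occ_commit(db, delta, txn):
    """db maps key -> (version, value); txn carries read and write sets.

    txn = {"reads": {key: version_seen}, "writes": {key: new_value}}
    """
    # validation phase: every read must still see the version it observed
    for key, seen in txn["reads"].items():
        if db.get(key, (0, None))[0] != seen:
            return False                      # conflict: abort and restart
    # write phase: each install is mirrored synchronously into the delta
    for key, value in txn["writes"].items():
        version = db.get(key, (0, None))[0] + 1
        db[key] = (version, value)
        delta[key] = value                    # the write_delta operation
    return True

db = {"x": (1, 10)}
delta = {}
ok = occ_commit(db, delta, {"reads": {"x": 1}, "writes": {"x": 11}})
```

A second transaction that read the old version and then tries to commit fails validation and must restart, illustrating the abort path.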

Our baseline protocol is TicToc~\cite{yu2016tictoc:}, a concurrency control algorithm that achieves higher concurrency than state-of-the-art T/O
schemes and completely eliminates the timestamp allocation bottleneck. The key contribution of TicToc is a technique called data-driven timestamp management: instead of assigning timestamps to each transaction independently of the data it accesses, TicToc embeds the necessary timestamp information in each tuple, enabling each transaction to compute a valid commit timestamp after it has run, right before it commits. This approach has two benefits.
First, each transaction infers its timestamp from metadata associated with each tuple it reads or writes. There is no centralized timestamp allocator, and concurrent transactions accessing disjoint data do not communicate, eliminating the timestamp allocation bottleneck.
Second, by determining timestamps lazily at commit time, TicToc finds a logical-time order that enforces serializability even among transactions that overlap in physical time and would cause aborts in other T/O-based protocols. In essence, TicToc allows commit timestamps to move forward in time to uncover more concurrency than existing schemes without violating serializability.
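The commit-timestamp rule can be sketched as follows (a simplified sketch that shows only the timestamp computation; TicToc's read-timestamp extension and validation steps are omitted):

```python
def tictoc_commit_ts(read_set, write_set):
    """read_set and write_set are lists of (wts, rts) pairs of accessed tuples.

    The commit timestamp must be no smaller than the wts of every version
    read, and strictly larger than the rts of every tuple written.
    """
    ts = 0
    for wts, _rts in read_set:
        ts = max(ts, wts)
    for _wts, rts in write_set:
        ts = max(ts, rts + 1)
    return ts

# a transaction that read a version with wts=3 and writes a tuple with rts=4
# commits at logical time 5, with no centralized timestamp allocation
commit_ts = tictoc_commit_ts([(3, 5)], [(2, 4)])
```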

Another popular state-of-the-art OCC protocol is Silo.
\liliang{Note that our algorithms can be adapted to multi-version algorithms.}
However, because of MVCC's high memory overhead, we do not adopt it in the WHTAP system.

\liliang{For the query transactions}, we can query the AP replica (a snapshot) directly; more detail can be found in \secref{sec:query}.

\subsection{Delta Snapshot}\label{sec:delta}
%我们的并发算法主体采用的是tictoc算法，但事务执行的同时，我们需要配合着记录增量快照。
%下文，我们首先介绍目前比较主流的几个快照算法，尤其是增量快照算法。然后，结合tictoc和双快照模型结合的情况。
The main body of our concurrency algorithm is TicToc, but we must record delta snapshots in conjunction with transaction execution. In this part, we first introduce the most popular main-memory snapshot algorithms, especially delta snapshot algorithms, and then combine TicToc with the dual snapshot model.

\subsubsection{State of the Art}
In recent work~\cite{myposter}\cite{2018arXiv181004915L}, our database group compared the state-of-the-art main-memory snapshot algorithms,
including Naive Snapshot, Copy-on-Update/fork, Wait-Free Zigzag, Wait-Free Pingpong, Hourglass, and Piggyback.
Pingpong~\cite{Cao.13} and Hourglass~\cite{myposter} are used to generate delta snapshots;
both perform well, and generating a delta snapshot does not lead to latency spikes.
The main idea of Pingpong and Hourglass is to use a replica to record the delta snapshot and a role switch to avoid latency spikes.

In this paper, we combine the idea of TicToc with Pingpong (PP) or Hourglass (HG):
each write instruction (which appears only in the write phase) writes to the particular data location indicated by the pointer, as in Pingpong or Hourglass.

\subsubsection{Consistency}
The above algorithms are all based on a physically consistent state; in other words, the snapshot can only be captured when the system is in a consistent state.
Unfortunately, in OLTP applications, maintaining a consistent state requires blocking newly arriving transactions until all running transactions have finished.
To overcome this problem, CALC and our recent work~\cite{2018arXiv181004915L} employ a virtual snapshot to take snapshots without blocking running transactions.

Both Pingpong and Hourglass can be combined with the virtual snapshot: once the pointers are swapped, only the behavior of newly arriving transactions changes, while currently running transactions are unaffected.
Moreover, since TicToc accesses OLTP data only during write phases, WHTAP only needs to determine which cycle the time point at the beginning of the write phase belongs to (odd or even cycle).
In other words, the pointer swap only influences transactions that enter their write phase after the swap time point.

\begin{algorithm}[htbp]
	\caption{ Transaction Execution Thread}
	\label{alg:oltp}
	\KwIn{Transaction \textit{T}}	
	\textit{Read Phase} \\
	\If{Validation Passed}{
		pointer = p\_update \\
		\For{each request in T.writeset}{
			write(index(request.key)) \\
			malloc\_delta(pointer) \\
			write\_delta(index(request.key), pointer) \\
			KeySet.insert(request.key) \\
			index(request.key).wts = index(request.key).rts = commit\_ts \\
			unlock(index(request.key)) \\
		}
	}
\end{algorithm}	

Algorithm \ref{alg:oltp} presents the details of transaction concurrency control and the delta-recording process. The read and validation phases are the same as in the TicToc algorithm; details about lines 1 and 2 can be found in \cite{yu2016tictoc:}.
The difference lies in the write phase.
Once it steps into the write phase, the transaction first decides which cycle it belongs to, and thus whether the delta should be recorded in delta~1 or delta~2.
The thread submits the changes in the writeset to the OLTP data (line 5) and to the corresponding periodic delta snapshot (lines 6 and 7), respectively.
Because the storage engine shares the index structure, we need an additional KeySet to record the keys modified in the current cycle (line 8).
If we maintained a separate index structure for the delta, the KeySet would not be essential.
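To make the write phase concrete, the following is a minimal Python sketch of Algorithm \ref{alg:oltp}'s write phase. All names (e.g. \texttt{WriteContext}, \texttt{commit\_writes}) are hypothetical, and TicToc's validation and per-tuple locking are elided; the sketch only shows the dual write to the OLTP store and the delta replica currently selected by \texttt{p\_update}, plus the KeySet bookkeeping.

```python
# Sketch (hypothetical names): the write phase of Algorithm 1.
# Validation and per-tuple locking from TicToc are elided; we only show
# the dual write to the OLTP store and the current delta replica.

class WriteContext:
    def __init__(self):
        self.oltp = {}            # primary OLTP store: key -> (value, wts)
        self.delta = [{}, {}]     # dual delta snapshots: delta1, delta2
        self.p_update = 0         # which delta currently receives writes
        self.key_set = set()      # keys modified in the current cycle

    def commit_writes(self, write_set, commit_ts):
        pointer = self.p_update   # decide the cycle once, at write-phase entry
        for key, value in write_set.items():
            self.oltp[key] = (value, commit_ts)            # write OLTP data
            self.delta[pointer][key] = (value, commit_ts)  # record the delta
            self.key_set.add(key)                          # track for compaction


ctx = WriteContext()
ctx.commit_writes({3: "a", 5: "b"}, commit_ts=17)  # a transaction commits keys 3 and 5
```

Note that the pointer is read once at write-phase entry, mirroring the paper's rule that a transaction's entire write phase belongs to a single cycle.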


\begin{figure}[htb]
	\centering
	\includegraphics[width=0.45\textwidth]{fig/overview.pdf}
	\caption{Delta Snapshot.}\label{fig:overview}
\end{figure}

Memory must be allocated dynamically before each delta-record operation.
However, malloc is time-consuming; for performance, we can make delta~1 and delta~2 statically allocated.

Note that the time point of delta snapshot generation is not reflected in Algorithm \ref{alg:oltp}: it is the point at which all transactions that entered the write phase in the previous cycle have committed.
As shown in \figref{fig:overview}, there are two transactions $T_1$ and $T_2$; $T_1$'s writeset is \{3, 5\} (marked in yellow) and $T_2$'s is \{6, 8\}.
When a period ends, the role switch is triggered while $T_1$ is in its write phase but $T_2$ is not.
$T_1$'s write phase therefore always writes to delta~2.
Details can be found in \secref{sec:example}.


\subsection{Compact Snapshot}
\secref{sec:delta} describes how to generate delta snapshots alternately.
To ensure OLAP data freshness, we can periodically compact the frozen delta snapshot into the OLAP store.
Note that the entire OLAP dataset is read-only except during the compaction operation.
The difficulty, therefore, is how to compact the delta snapshot without blocking the execution of OLAP queries while maintaining performance as high as possible.

\subsubsection{LSM-like}
To ensure that writes do not block reads, the classic approach is MVCC, as mentioned earlier.
Intuitively, we could design the OLAP engine as an MVCC model and compact the delta snapshot into the OLAP engine as multi-version chains.
However, on the one hand, MVCC's memory overhead is large;
on the other hand, overly long version chains degrade query performance.

In this paper, we use an LSM-like approach to organize the data.
By analogy, the frozen delta snapshot can be regarded as the memtable, while the OLAP data can be regarded as the disk data.
Once the delta snapshot is frozen, the system compacts it into the OLAP data;
a query transaction first searches the delta and, if the result is not found, searches the OLAP data.
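A minimal sketch of this delta-first lookup (hypothetical names; the real engine works on indexed tuples rather than Python dicts):

```python
def query_delta_first(key, delta, olap):
    # LSM-style lookup: the frozen delta plays the role of the memtable,
    # the OLAP replica the role of the on-disk data.
    if key in delta:
        return delta[key]
    return olap.get(key)          # fall back to the OLAP store


frozen_delta = {5: "new"}         # freshly frozen updates of the last cycle
olap_store = {5: "old", 8: "x"}   # the (staler) OLAP replica
```

A lookup of key 5 returns the fresher delta value, while key 8 falls through to the OLAP store.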

To keep the system wait-free,
a system running the \textbf{WHTAP} algorithm cycles through five states, or phases:
\begin{enumerate}
	\item  the \textit{NORMAL} phase, in which no delta is being taken or merged,
	\item  the \textit{FROZEN} phase, immediately preceding a virtual
	point of consistency. At the end of this phase, the delta snapshot is generated,
	\item  the \textit{WAITING} phase, immediately following a virtual point
	of consistency but before the delta snapshot compaction has started. Note that we do not compact right after the snapshot is taken; the WAITING phase is essential for maintaining OLAP's wait-free property,
	\item  the \textit{COMPACTION} phase, during which a background thread
	compacts the delta snapshot into the OLAP replica and deletes
	the snapshot versions once compaction finishes,
	\item  the \textit{COMPLETE} phase, which immediately follows the COMPACTION phase.
	In this phase, the compacted delta snapshot is no longer needed and can be recycled, making garbage collection efficient. Although we free data in batches here, we could also free each data item during the COMPACTION phase as soon as it is compacted.
\end{enumerate}




\begin{algorithm}[htb]
	\caption{ Query Execution Thread}
	\label{alg:olap}
	\KwIn{Query \textit{Q}}	
	start\_state = State \\
	\If{ $ start\_state = NORMAL \| FROZEN \| COMPLETE $ }{
		fetch\_and\_add(query\_static\_counter) \\
		\For{each request in Q}{
			\textbf{Query\_Static}(index(request.key)) \\
		}
		fetch\_and\_sub(query\_static\_counter) \\
	}
	\Else {
		\If{ $start\_state = WAITING \| COMPACTION$ }{
			fetch\_and\_add(query\_delta\_counter) \\
			\For{each request in Q}{
				\textbf{Query\_DeltaFirst}(index(request.key)) \\
			}
			fetch\_and\_sub(query\_delta\_counter) \\
	}}
\end{algorithm}


\subsection{Query Execution}\label{sec:query}
In \secref{alg:oltpcc}, we discussed how to handle the OLTP workload; in this part, we describe how the OLAP workload runs.

Once a query request is accepted, the system detects which state phase it begins in.
For performance reasons, we execute the LSM-like query strategy only in the WAITING and COMPACTION phases;
in the NORMAL, FROZEN, and COMPLETE phases, we query the OLAP data directly.

As shown in Algorithm \ref{alg:olap}, the Query\_Static function (line 5) and the Query\_DeltaFirst function (line 11) represent the two query strategies.

The two counters, query\_static\_counter and query\_delta\_counter, are essential for the state controller to detect when all in-flight queries of each kind have finished.

Looking up the delta first is slower than querying the OLAP data directly.
Fortunately, Query\_DeltaFirst is relatively rare,
because the WAITING and COMPACTION phases are short.
The next section elaborates.
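The routing logic of Algorithm \ref{alg:olap} can be sketched as follows (hypothetical names; the real counters would use hardware fetch-and-add, emulated here with a lock):

```python
import threading

class QueryRouter:
    # Sketch of Algorithm 2: queries that start in NORMAL/FROZEN/COMPLETE
    # read the OLAP replica directly; queries that start in WAITING or
    # COMPACTION take the delta-first (LSM-like) path.  The two counters
    # let the state controller wait for in-flight queries to drain.
    def __init__(self):
        self.state = "NORMAL"
        self.query_static_counter = 0
        self.query_delta_counter = 0
        self._lock = threading.Lock()  # stands in for fetch_and_add/sub

    def run_query(self, requests, query_static, query_delta_first):
        start_state = self.state       # latch the state once, on entry
        if start_state in ("NORMAL", "FROZEN", "COMPLETE"):
            with self._lock:
                self.query_static_counter += 1
            results = [query_static(r) for r in requests]
            with self._lock:
                self.query_static_counter -= 1
        else:                          # WAITING or COMPACTION
            with self._lock:
                self.query_delta_counter += 1
            results = [query_delta_first(r) for r in requests]
            with self._lock:
                self.query_delta_counter -= 1
        return results


router = QueryRouter()
direct = router.run_query([1, 2], lambda k: ("olap", k), lambda k: ("delta", k))
router.state = "WAITING"
lsm = router.run_query([1], lambda k: ("olap", k), lambda k: ("delta", k))
```

Latching the state once at entry matters: a query keeps its strategy (and its counter) even if the controller changes phase while the query runs.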

\section{State Controller}\label{sec:state}


\subsection{NORMAL phase}

In the NORMAL phase, every transaction that reaches its write phase
writes both to the TP data and to the delta snapshot.
The duration of the NORMAL phase is a key factor influencing the freshness of OLAP queries.

Long-running OLAP queries can execute their requests on the AP data directly.

\subsection{FROZEN phase}

When the NORMAL phase finishes, the FROZEN phase follows.
To freeze the delta snapshot, the first step is to swap the \textit{p\_update} and \textit{p\_delta} pointers,
in other words, to exchange the roles of the dual snapshots.
However, transactions that started their write phase in the NORMAL phase keep running and ignore the role change.
Once those transactions finish, the FROZEN phase ends.
At the end boundary of the FROZEN phase, the delta snapshot is frozen: no OLTP transaction may write to this dataset anymore.
At the begin boundary, all new OLTP transactions already write their data to the other snapshot.

We need a global counter to detect when all active transactions have finished.
This affects performance: a global counter becomes a bottleneck on a multi-core system.
Since main-memory OLTP transactions are usually short, we assume each transaction takes less than 1\,$\mu$s and simply wait for 1\,$\mu$s instead; note that this duration is directly tunable.
The corresponding code is line 6 of Algorithm \ref{alg:state}.

\subsection{WAITING phase}
The FROZEN phase lasts until no transaction from the NORMAL phase is still
running, that is, until all transactions that
started during the NORMAL phase have completed. At this point,
the system transitions to the WAITING phase.
All transactions that have committed before this point are recorded in the delta snapshot.

At the beginning of the WAITING phase,
the frozen delta snapshot has been generated.
We cannot yet compact the snapshot into the OLAP data,
because some active OLAP queries are still running;
we must let those queries finish, since they read from the OLAP data, which must not be written to meanwhile.
Once those queries finish, the WAITING phase ends.

Queries that begin in the WAITING phase query the delta first: since the memtable has been generated,
we can read fresher data from the delta; moreover, this preserves the wait-free property.


\subsection{COMPACTION phase}

The WAITING phase lasts until all queries that began
before it have completed.
The system then transitions into the COMPACTION phase. Transaction write behavior remains the same during the whole period.

Once this phase begins, a background thread is spawned;
it traverses the delta data sequentially according to the KeySet and compacts it into the OLAP data set.
Query transactions still follow the LSM-like approach: first, we query the frozen delta, and if the result is found, we return it directly; otherwise, we continue the search in the OLAP data.
We can even add a Bloom filter on the frozen delta.
Note that because an OLAP query looks up the delta first,
and never touches the OLAP data if it finds the result there,
the compaction work can proceed in a lock-free manner without causing any blocking.




\subsection{COMPLETE phase}
Once the compaction has completed, the system transitions into the COMPLETE phase.
Transaction write behavior reverts to that of the NORMAL phase.
Once all queries that began during the WAITING and COMPACTION phases have
committed, the system transitions back into the NORMAL phase
and awaits the next signal to freeze the delta snapshot.

Queries that begin in this phase can read the latest data directly from the OLAP store.
Since the frozen delta snapshot is now useless, we can delete it.


\textbf{Algorithm Code.} As shown in Algorithm \ref{alg:state}, the system alternates between the five phases.
We can control the duration of the NORMAL phase before entering the FROZEN phase; if it is too long, the OLAP data becomes stale and data freshness suffers.

Once the system enters the FROZEN phase, what we need to do is
exchange the roles of the dual snapshots and wait for transactions that are still in their write phase to finish.
Note that in line 6 of the code, for performance reasons, we can simply wait for a fixed period, such as 1\,$\mu$s, instead of counting, to ensure that all write-phase transactions have completed.
For the duration of the WAITING phase, we use a counter to determine whether all transactions that queried the OLAP data directly have committed.
The Compact function (line 10) merges data with the help of the KeySet. In the COMPLETE phase, the main job is to free the useless data and wait for the LSM-like queries to finish.


	
\begin{algorithm}[htb]
	\caption{ State Controller and Compaction Work}
	\label{alg:state}
	\While{true}{
		State = NORMAL \\
Wait for the delta snapshot frozen signal \\
		State = FROZEN \\
		Swap(\textit{p\_update}, \textit{p\_delta}) \\
		Wait Until Last Period Transaction Finished \\
		State = WAITING \\
		wait until query\_static\_counter = 0 \\
		State = COMPACTION	\\
		\xxx{Compact}(p\_delta, AP) \\
		State = COMPLETE \\
Garbage\_Collection(p\_delta) \\
		wait until Query\_delta\_counter = 0 \\
	}
\end{algorithm}
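For illustration, one cycle of the controller in Algorithm \ref{alg:state} can be sketched in Python (hypothetical names; \texttt{compact} and \texttt{gc} stand for the Compact and garbage-collection steps, and the 1\,$\mu$s waits are literal sleeps here):

```python
import threading
import time
from types import SimpleNamespace

def run_one_cycle(ctx, frozen_signal, compact, gc):
    # Sketch of one iteration of Algorithm 3's controller loop.
    ctx.state = "NORMAL"
    frozen_signal.wait()                       # wait for the frozen signal
    ctx.state = "FROZEN"
    ctx.p_update, ctx.p_delta = ctx.p_delta, ctx.p_update  # swap snapshot roles
    time.sleep(1e-6)                           # wait out last-period writers (~1us)
    ctx.state = "WAITING"
    while ctx.query_static_counter > 0:        # drain direct-OLAP queries
        time.sleep(1e-6)
    ctx.state = "COMPACTION"
    compact(ctx.p_delta)                       # merge frozen delta into OLAP
    ctx.state = "COMPLETE"
    gc(ctx.p_delta)                            # recycle the frozen delta
    while ctx.query_delta_counter > 0:         # drain delta-first queries
        time.sleep(1e-6)


ctx = SimpleNamespace(state="NORMAL", p_update=0, p_delta=1,
                      query_static_counter=0, query_delta_counter=0)
signal = threading.Event()
signal.set()                                   # pretend the freeze was triggered
run_one_cycle(ctx, signal, compact=lambda d: None, gc=lambda d: None)
```

The counters drained here are exactly the ones incremented by the query thread of Algorithm \ref{alg:olap}.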




\subsection{Running example}\label{sec:example}

\begin{figure}[htb]
	\centering
	\includegraphics[width=0.48\textwidth]{fig/example.eps}
	\caption{Running Example of WHTAP.}\label{fig:example}
\end{figure}

To make the system clearer, \figref{fig:example} gives a running example of the WHTAP algorithm.
It shows all five phases of a cycle and several types of running transactions.

As shown, green represents the transactions that successfully enter the write phase within the current cycle;
write transactions of the previous cycle are shown in yellow.
After the system triggers the frozen signal (at time $ t_1 $), new transactions entering the write phase (green) record their deltas into the other data copy.
Previously unfinished transactions continue to execute; once they finish, the FROZEN phase is over.



At time $ t_2 $, the system has obtained a stable delta snapshot of the previous cycle. Next, we can compact this delta snapshot into the OLAP store.
Unfortunately, some active OLAP queries ($ Q_1, Q_2 $) have not yet committed; compacting the delta snapshot immediately might overwrite the data they read and cause query errors.
Therefore, we need a waiting phase until the blue query transactions in the figure complete.
Queries started during the WAITING phase must query the delta snapshot first,
to ensure that the COMPACTION phase can merge in a lock-free way.
Once the active queries have committed, the system can start the compact operation. The yellow rectangle in the figure represents the COMPACT operation, which merges the delta data of the last cycle into the OLAP store.
Similarly, queries started during this stage still proceed in the LSM-like way.
Once this phase is over, the delta data of the previous cycle is useless, and we can naturally delete it.
At the same time, we wait for the LSM-style queries to finish.
We then proceed to the NORMAL phase of the next cycle and wait for the frozen signal to trigger.



\section{Design Choice}\label{sec:choice}
\subsection{Static vs. Dynamic}
As discussed in \secref{sec:overview}, the storage engine needs two replicas and a dual snapshot.
Whether the dual snapshot is allocated statically or dynamically is a trade-off.
To save memory, we can employ a dynamic memory allocation strategy;
however, malloc and free are time-consuming in a high-throughput scenario
and significantly hurt system performance,
because malloc greatly lengthens the write phase and thus raises the abort rate, further degrading performance.
On the other hand, static memory allocation does not introduce these problems, but its memory footprint is higher.
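The trade-off can be made concrete with a small sketch (hypothetical classes, not our engine's code): the dynamic variant allocates inside the write phase, while the static variant pays one up-front allocation and only copies bytes on the critical path.

```python
class DynamicDelta:
    # Dynamic strategy: allocate each delta record on demand.
    # Saves memory, but the allocation sits inside the write phase.
    def __init__(self):
        self.records = {}

    def write(self, key, value):
        self.records[key] = bytearray(value)   # allocation on the critical path


class StaticDelta:
    # Static strategy: preallocate the whole delta once.
    # Larger footprint, but the write phase only copies bytes.
    def __init__(self, capacity, record_size):
        self.pool = bytearray(capacity * record_size)  # one up-front allocation
        self.record_size = record_size
        self.slots = {}                                # key -> slot number
        self.next_slot = 0

    def write(self, key, value):
        if key not in self.slots:
            self.slots[key] = self.next_slot
            self.next_slot += 1
        off = self.slots[key] * self.record_size
        self.pool[off:off + len(value)] = value        # copy, no allocation


static = StaticDelta(capacity=4, record_size=8)
static.write(1, b"abc")
static.write(2, b"defgh")
static.write(1, b"xyz")    # an overwrite reuses the same slot
```

Overwrites reuse slots, so the static pool's size is bounded by the number of distinct keys modified per cycle rather than the number of writes.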

Besides, the baseline of our dual snapshot scheme is the Pingpong algorithm, which can be replaced by the Hourglass algorithm;
if so, we directly adopt the three-partition memory scheme.
For space reasons, we do not discuss the details here; the reader is referred to~\cite{myposter}.
\subsection{Storage Engine}
For simplicity, our storage engine omits many details:
OLTP, OLAP, and the dual snapshots all use row data and share a common index.

In fact, in a real-world product, OLTP storage is best designed as a row-major store, while OLAP storage is better designed as a column-major engine. Nor should the index be shared: we could design a separate index for each part,
especially for the dual snapshot. With its own index, the KeySet can be removed, and we can improve performance with special index structures such as Bloom filters, which decrease the latency of LSM-style queries, as shown in recent work \cite{}.

\subsection{Single Version vs. Multi Version}
As stated in \secref{sec:goals}, an HTAP system must save as much memory as possible, since memory is expensive;
that is why we build on a single-version concurrency control protocol.
This does not mean the system cannot run with a multi-version concurrency control protocol:
as long as the concurrency control is based on OCC, such as xxx, xxxx,
it can be integrated with the WHTAP system, provided the memory is sufficient.
\subsection{Sync write vs. Deterministic}

To keep the OLTP transaction and the delta snapshot linearizable,
we run the write phase on both of them synchronously.
This lengthens the write phase and increases the abort rate.
Actually, we could process transaction execution and delta recording asynchronously by employing a deterministic concurrency control method, as the Calvin~\cite{bibid} system does.

\subsection{Log vs. Snapshot}
In the delta snapshot, we record the real data;
alternatively, one can record the transaction log of the cycle into the delta, as the recent system BatchDB does.
In our view, replaying the log is not as good as merging data, since it can be slow.
Moreover, a big advantage of the snapshot approach is that it can be combined directly with query processing.


\section{Experimental Study}\label{sec:exp}

\begin{figure*}[htbp]
	\centering
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-oltp-medium.pdf}
%		\vspace{-9mm}
		\caption{ycsb-oltp-medium.}
		\label{fig:ycsb1}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-oltp-medium-abort.pdf}
		\caption{ycsb-oltp-medium abort rate.}
		\label{fig:ycsb1:abort}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-oltp-high.pdf}
		\caption{ycsb-oltp-high.}
		\label{fig:ycsb2}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-oltp-high-abort.pdf}
		\caption{ycsb-oltp-high abort rate.}
		\label{fig:ycsb2:abort}
	\end{minipage}
\end{figure*}

\begin{figure*}[htbp]
	\centering
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-oltp-rw.pdf}
		\caption{ycsb-oltp-rw}
		\label{fig:ycsb:rw}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-oltp-rw-abort.pdf}
		\caption{ycsb-oltp-rw abort rate.}
		\label{fig:ycsb:rw:abort}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-oltp-zipf.pdf}
		\caption{ycsb-oltp-zipf.}
		\label{fig:ycsb:zipf}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-oltp-zipf-abort.pdf}
		\caption{ycsb-oltp-zipf abort rate.}
		\label{fig:ycsb:zipf:abort}
	\end{minipage}
\end{figure*}

\begin{figure*}[htbp]
	\centering
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-oltp-len.pdf}\\
		\caption{ycsb-oltp-len}
		\label{fig:ycsb:len}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-oltp-len-abort.pdf}\\
		\caption{ycsb-oltp-len abort rate.}
		\label{fig:ycsb:len:abort}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-olap-short.pdf}\\
		\caption{ycsb-olap-short.}
		\label{fig:ycsb:olap}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-olap.pdf}\\
		\caption{ycsb-olap.}
		\label{fig:ycsb:olap2}
	\end{minipage}
\end{figure*}

\begin{figure*}[htbp]
	\centering
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/ycsb-olap-len.pdf}\\
		\caption{ycsb-olap-len.}
		\label{fig:ycsb:olap:len}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/fixap-tp.pdf}\\
		\caption{fixap tp performance.}
		\label{fig:ycsb1:htap1}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/fixap-ap.pdf}\\
		\caption{fixap ap performance.}
		\label{fig:ycsb1:htap2}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.23\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/fixap-abort.pdf}\\
		\caption{fixap abort rate.}
		\label{fig:ycsb1:htap3}
	\end{minipage}
\end{figure*}

\begin{figure*}[htbp]
	\centering
	\begin{minipage}[t]{0.32\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/fixtp-tp.pdf}\\
		\caption{fixtp tp performance.}
		\label{fig:ycsb1:htap4}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.32\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/fixtp-ap.pdf}\\
		\caption{fixtp ap performance.}
		\label{fig:ycsb1:htap5}
	\end{minipage}
	\hspace{0.1cm}
	\begin{minipage}[t]{0.32\textwidth}
		\centering
		\includegraphics[width=\textwidth]{exp/fixtp-abort.pdf}\\
		\caption{fixtp abort rate.}
		\label{fig:ycsb1:htap6}
	\end{minipage}
\end{figure*}


We now present our evaluation of the WHTAP algorithm. For
these experiments, we use the DBx1000 OLTP DBMS [1]. This
is a multi-threaded, shared-everything system that stores all data in
DRAM in a row-oriented manner with hash table indexes.
DBx1000 includes a pluggable lock manager that supports different concurrency control schemes.

To integrate our WHTAP algorithm into the DBx1000 system,
we first modify the storage engine.
Then we modify the concurrent processing of OLTP transactions, separate the OLAP component from the OLTP component,
and add the state controller and the LSM-like OLAP query methods.

This allows us to compare five
approaches all within the same system:
\begin{itemize}
	\item TICTOC : Time traveling OCC with all optimizations.
	\item SILO : Silo OCC [35].
	\item HEKATON : Hekaton MVCC [24].
	\item MVCC : Basic MVCC.
	\item WHTAP : our proposed HTAP system.
\end{itemize}


All of the experiments run on a high-end HP G8 server equipped with two E5-2620 CPU sockets, each with 20 physical cores, 512\,GB of memory, and a 1\,TB hard disk drive.

\subsection{YCSB Benchmark}
\textbf{YCSB:} The Yahoo! Cloud Serving Benchmark is a collection
of workloads that are representative of large-scale services created
by Internet-based companies [8]. For all of the YCSB experiments
in this paper, we used a $\sim$20\,GB YCSB database containing a single table with 20 million records.
Each YCSB tuple has a single primary key column and then 10 additional columns each with 100
bytes of randomly generated string data. The DBMS creates a single hash index for the primary key.
Each transaction in the YCSB workload by default accesses 16 records in the database. 
Each access can be either a read or an update. 
The transactions do not perform any computation in their program logic. 
All of the queries are independent from each other; 
that is, the input of one query does not depend on the output of a previous query. 
The records accessed in YCSB follow a Zipfian distribution that is controlled by
a parameter $ \theta $ that affects the level of contention in the benchmark [18].
When $ \theta=0 $, all tuples are accessed with the same frequency.
But when $ \theta=0.6 $ or $ \theta=0.8 $, a hotspot of 10\% of the tuples in the database is accessed by $\sim$40\% and $\sim$60\% of all transactions, respectively.
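The effect of $\theta$ can be illustrated with a small sampler sketch (this is not YCSB's actual generator, which uses a more efficient rejection-based method): weights are proportional to $1/\mathrm{rank}^{\theta}$, so $\theta=0$ is uniform and larger $\theta$ concentrates accesses on a hotspot.

```python
import random

def zipf_sampler(n, theta, rng=random.Random(42)):
    # Build the CDF of a Zipfian distribution over keys 0..n-1 with
    # weight 1 / rank^theta; theta=0 degenerates to a uniform choice.
    weights = [1.0 / (rank ** theta) for rank in range(1, n + 1)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)

    def sample():
        u = rng.random()
        for key, c in enumerate(cdf):   # a binary search would be faster
            if u <= c:
                return key
        return n - 1

    return sample


sample = zipf_sampler(1000, theta=0.8)
keys = [sample() for _ in range(5000)]
hot_fraction = sum(k < 100 for k in keys) / 5000   # share hitting hottest 10%
```

With $\theta=0.8$, the hottest 10\% of the keys draw far more than their uniform 10\% share of the accesses in this sketch.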

%Benchmark for imdb  FCS paper.



\subsubsection{OLTP-only workload}
First, we follow the workload of the TicToc paper
to validate whether the performance of our algorithm degrades significantly under an OLTP-only workload.
For the YCSB benchmark, two groups of tests were conducted.
\begin{itemize}
	\item \textbf{Medium Contention:} 16 queries per transaction (90\% reads
	and 10\% writes) with a hotspot of 10\% of tuples that are accessed by $\sim$60\% of all queries ($\theta=0.8$).
	\item \textbf{High Contention:} 16 queries per transaction (50\% reads and
	50\% writes) with a hotspot of 10\% of tuples that are accessed
	by $\sim$75\% of all queries ($\theta=0.9$).
\end{itemize}


\figref{fig:ycsb1} shows the throughput of the five algorithms under Medium Contention from 1 to 32 threads.
The throughput of all five algorithms increases with the number of threads; all scale well.
WHTAP's performance is slightly lower than TICTOC's,
because each write phase lasts longer;
at 32 threads, WHTAP is about 10\% slower than TICTOC.
MVCC is the worst.
\figref{fig:ycsb1:abort} shows the corresponding transaction abort rates.
TICTOC and WHTAP have the lowest abort rates,
while HEKATON has the highest.


\figref{fig:ycsb2} shows transaction throughput under the High Contention workload.
Compared with \figref{fig:ycsb1},
the overall throughput of all algorithms drops to a distinctly lower level.
The most significant difference is that the performance of MVCC and HEKATON no longer increases with the number of threads; their scalability is poor under high contention.
Silo, TICTOC, and WHTAP still scale well, and WHTAP's performance is almost the same as TICTOC's, or even better.
\figref{fig:ycsb2:abort} shows the corresponding abort rates; here the advantage of TICTOC and WHTAP in abort rate is less pronounced.

\figref{fig:ycsb:rw} shows performance under the YCSB workload with different read/write ratios.
As the read ratio increases, MVCC and HEKATON perform better and better; in the fully read-only case they beat TICTOC by 10\% and WHTAP by 20\%.
The OCC-based algorithms are less affected by the read/write ratio.
MVCC is better suited to read-intensive scenarios and not well suited to write-intensive ones.
\figref{fig:ycsb:rw:abort} shows the corresponding abort rates; the more read operations, the lower the abort rate.

\figref{fig:ycsb:zipf} shows the impact of data contention on database performance.
When workload contention is low, all algorithms perform similarly;
once contention increases, performance degrades severely.
WHTAP is best suited for scenarios with high data contention.
\figref{fig:ycsb:zipf:abort} shows the corresponding abort rates, with the y-axis in log scale;
HEKATON consistently has the highest abort rate.

\figref{fig:ycsb:len} shows the effect of transaction length on YCSB performance.
Note that the vertical axis counts operations, not transactions.
When the transaction length is between 4 and 8, performance is close to ideal;
as the transaction length grows beyond 8, performance begins to degrade.
This is reflected in \figref{fig:ycsb:len:abort}: the abort rate increases as transactions get longer.

Combining the above results, we draw the following findings:
\begin{itemize}
	\item Under low data contention, WHTAP's performance drops slightly compared with TICTOC and Silo;
	with 32 physical threads, the drop is about 10\%.
	\item MVCC is a read-optimized database design; under write-intensive workloads, OCC performs better. An HTAP scenario can be regarded as containing both write-intensive and read-intensive workloads, so we handle the two separately.
	\item Under severe data contention, Silo, TICTOC, and WHTAP all perform well, with almost no difference among them.
	\item Both TICTOC and WHTAP maintain low abort rates.
\end{itemize}

In summary, our algorithm runs well under OLTP workloads,
especially in application scenarios with high data contention, such as the ``Double Eleven'' flash-sale scenario with hot data.

\subsubsection{OLAP-only workload}
In this section, we test a pure OLAP scenario.
For the YCSB benchmark, two groups of tests were conducted.
\begin{itemize}
	\item \textbf{Short Read-Only Queries:} 2 queries per transaction with a uniform
	access distribution ($\theta=0$).
	\item \textbf{Long Running Read-Only Queries:} 48 queries per transaction with a uniform
	access distribution ($\theta=0$).
\end{itemize}

\figref{fig:ycsb:olap} shows throughput for a large number of short queries. We can see that Silo and TICTOC perform well.
The performance of MVCC and HEKATON is poor due to their centralized timestamp-allocation bottleneck.
Unfortunately, since our algorithm keeps statistics for the two types of queries, it needs two global counters, which also greatly limits its performance here.

As \figref{fig:ycsb:olap2} shows, under long analytical transactions WHTAP's queries have a clear advantage: relative to the query time, the overhead of counter contention becomes negligible.
Similarly, HEKATON and MVCC perform relatively well here, better than Silo and TICTOC.
WHTAP is better than HEKATON and MVCC because WHTAP does not need to traverse version chains during queries.

\figref{fig:ycsb:olap:len} shows the effect of transaction length on performance; the vertical axis counts operations, not transactions.
Once the transaction length reaches a certain level, throughput rises to its limit.
This is because short transactions are dominated by global-counter contention;
once transactions are long enough,
the performance difference caused by the global counters becomes negligible.


Combining the above results, we draw the following findings:
\begin{itemize}
	\item MVCC, HEKATON, and WHTAP are not suitable for large numbers of small read-only transactions, because the global counters are too costly.
	\item WHTAP is well suited to snapshot-based read-only transactions.
\end{itemize}

\subsubsection{HTAP workload}
This part of the experiment tests mixed OLTP/OLAP transactions, in two parts.
In the first part, we fix the number of OLAP threads and test the impact of the number of OLTP threads on performance.
In the second part, we fix the number of OLTP threads and test the effect of the number of OLAP threads on performance.

\textbf{Fix OLAP threads.}
We fix the number of OLAP threads at 8 and the length of each OLAP query at 48.

\figref{fig:ycsb1:htap1} shows OLTP performance when the number of OLAP threads is fixed;
the horizontal axis is the number of OLTP threads, from 1 to 32.
The results are similar to those of \figref{fig:ycsb2},
which shows that the OLAP workload has little impact on WHTAP's OLTP performance.

\figref{fig:ycsb1:htap2} shows OLAP performance with 8 OLAP threads as the number of OLTP threads grows from 1 to 32.
WHTAP's OLAP throughput is 2--3 times that of the other algorithms; increasing the number of OLTP threads has little impact on its OLAP performance, which barely declines as OLTP threads increase.
In contrast, the traditional concurrency control algorithms decline considerably as the OLTP load grows. In particular, even though MVCC and HEKATON favor read operations, their OLAP performance remains poor because of version-chain traversal.

\figref{fig:ycsb1:htap3} shows the effect of the number of OLTP threads on the abort rate. As you can see, our algorithm has the lowest abort rate, lower even than TICTOC's, because WHTAP's OLAP queries execute on a snapshot and never need to abort. TICTOC's read-only queries, by contrast, can still trigger unnecessary aborts; WHTAP avoids this unnecessary performance loss.

\textbf{Fix OLTP threads.}
\figref{fig:ycsb1:htap4} shows OLTP performance when the number of OLTP threads is fixed; the horizontal axis is the number of OLAP threads, from 1 to 32.
As the OLAP load increases, OLTP performance decreases accordingly;
the single-version OCC-based algorithms decline relatively less.
MVCC degrades significantly, while the OCC schemes degrade only slightly.
WHTAP's OLTP performance is slightly worse than TICTOC's.


\figref{fig:ycsb1:htap5} shows OLAP performance when the number of OLTP threads is fixed.
We can see that WHTAP's performance increases significantly, mainly because TP and AP are processed separately.
With 32 AP threads, WHTAP delivers four times the performance of TICTOC and six times that of HEKATON.



\figref{fig:ycsb1:htap6} shows the number of aborted transactions when the number of OLTP threads is fixed.
As the AP load increases, the number of aborted transactions in WHTAP trends downward, because OLAP transactions always commit and never abort,
whereas read-only queries in the other algorithms can still cause aborts.


Combining the above results, we draw the following findings:
\begin{itemize}
	\item When OLTP and OLAP workloads coexist, WHTAP performs best: its TP performance is close to TICTOC's, and its AP performance holds an absolute advantage.
	\item In WHTAP, the TP and AP workloads have minimal impact on each other's performance.
\end{itemize}

\subsection{TPC-C validation}
\textbf{TPC-C:} This benchmark is the current industry standard for
evaluating the performance of OLTP systems [40]. It consists of
nine tables that simulate a warehouse-centric order processing application. All of the transactions in TPC-C provide a WAREHOUSE id as an input parameter for the transaction, which is the ancestral
foreign key for all tables except ITEM. For a concurrency control
algorithm that requires data partitioning (i.e., H-STORE ), TPC-C is
partitioned based on this warehouse id.
Only two (Payment and NewOrder) out of the five transactions in TPC-C are modeled in our simulation. Since these two comprise
88\% of the total TPC-C workload, this is a good approximation.
Our version of TPC-C is a ``good faith'' implementation, although
we omit the ``thinking time'' for worker threads. Each worker issues
transactions without pausing; this mitigates the need to increase the
size of the database with the number of concurrent transactions.

This setup is similar to the TPC-C workloads used in the Hekaton and HyPer evaluations.

\section{Related Work}\label{sec:rw}
\subsection{HTAP}

Hybrid transactional/analytical processing (HTAP) is
a term coined by Gartner~\cite{bib-gartner1}\cite{bib-gartner2} Inc.
As defined by Gartner, HTAP is an emerging application architecture that ``breaks the wall'' between transaction processing and analytics, enabling more informed decision making in ``business real time''.
In recent years, three surveys~\cite{bohm2016operational}\cite{ozcan2017hybrid}\cite{bibid} have discussed this topic.
HTAP emphasizes two main points: data freshness and a unified data representation.

How to design an HTAP system is an open problem.
(1) At the top level, one straightforward method is to use a single system to process the mixed workload.
Neumann et al.~\cite{neumann2015fast} proposed a novel MVCC implementation within the HyPer~\cite{kemper2011hyper} system,
which updates in place and stores prior versions as before-image deltas,
enabling both efficient scan execution and the fine-grained serializability validation needed for fast processing of point-access transactions.
From the NoSQL side, Pilman et al.~\cite{pilman2017fast} demonstrated how scans can be implemented efficiently on a key-value store (KV store), enabling more complex analytics
on large and distributed KV stores.
(2) The second method is to fork a copy of the main database to serve as the OLAP dataset;
HyPer~\cite{bibid} and SwingDB~\cite{bibid} take this approach.
(3) The third method is to generate a delta snapshot and periodically merge it into a second replica;
BatchDB and SAP HANA take this approach.
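The fork-based approach (2) can be illustrated in miniature: on POSIX systems, fork gives the child process a copy-on-write image of the parent's address space, so an analytical reader in the child sees a frozen, consistent state while the transactional side keeps updating in the parent. This is only a toy sketch of the idea (POSIX-only), not any cited system's code:

```python
import os

# Toy fork-based snapshot: the child inherits a copy-on-write
# image of the parent's memory at the instant of the fork.
data = {"balance": 100}

pid = os.fork()
if pid == 0:
    # Child (the "OLAP" side): sees the state as of the fork,
    # regardless of what the parent does afterwards.
    ok = data["balance"] == 100
    os._exit(0 if ok else 1)
else:
    data["balance"] = 200            # "OLTP" update in the parent
    _, status = os.waitpid(pid, 0)
    assert status == 0               # child saw the pre-update snapshot
    assert data["balance"] == 200    # parent sees its own update
```

Real systems in this category refine the same mechanism, e.g., with finer-grained kernel support to reduce the cost of the fork itself.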




Since the 2000s, a large number of databases have offered both TP and AP processing capabilities.
Typical systems designed with HTAP as an explicit goal include HANA and HyPer.
Some systems simply develop two storage engines, used separately for OLTP and OLAP, such as MemSQL and SQL Server;
others use a hybrid storage engine, like HyBase.

\subsection{Concurrency Control}
%Evaluations of OLTP systems demonstrate the research value of concurrency control; the bottlenecks are heavy lock overhead, timestamp allocation limiting scalability, the space cost of multi-versioning, and so on.
%The essence of the problem is resolving contention between physical threads and reducing unnecessary overhead.

To achieve better transaction execution performance,
concurrency control~\cite{bernstein1987rrency} has been studied for many years and is still a key factor to improve.
As we all know, with the development of multi-core hardware, locks should be used judiciously, since they are heavyweight~\cite{ren2012lightweight}.
The study in \cite{harizopoulos2008oltp} found that useful transaction work accounts for only about 19\% of the total time overhead.
Yu~\etal~\cite{DBLP:journals/pvldb/YuBPDS14} showed that, at present, no single concurrency control scheme can scale to 1000 cores without performance loss.

Two-phase locking (2PL) was the first provably correct method
of ensuring the correct execution of concurrent transactions in a
database system [6, 12].
Its drawbacks are that it can cause severe data contention and may lead to deadlocks; it suits short transactions under low contention, such as YCSB.
[12] presented a Lightweight Intent Lock (LIL), which
maintains a set of lightweight counters in a global lock table.
However, this proposal does not co-locate the counters with
the raw data (to improve cache locality), and if a transaction does not acquire all of its locks immediately, the thread
blocks, waiting to receive a message from another released
transaction thread.
The VLL centralized lock manager uses per-tuple 2PL to remove contention bottlenecks [36].
Johnson et al. identified latch contention on high-level intention
locks as a scalability bottleneck in multi-core databases [17]. They
proposed Speculative Lock Inheritance (SLI), a technique to reduce
the number of contended latch acquisitions. SLI effectively amortizes the cost of contended latch acquisitions across a batch of transactions by passing hot locks from transaction to transaction without
requiring calls to the lock manager.
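The two 2PL rules (acquire locks only while growing; never acquire again after the first release) can be sketched with a minimal lock table. This is an illustrative toy under strict 2PL, where all locks are held until commit, and not the implementation of any cited system:

```python
class TwoPhaseLocking:
    """Toy strict-2PL transaction: all locks held until commit."""

    def __init__(self, lock_table):
        self.lock_table = lock_table   # key -> set of (txn, mode)
        self.held = set()
        self.shrinking = False

    def lock(self, key, mode):
        # Rule 2: once shrinking, no further acquisitions.
        assert not self.shrinking, "2PL: no acquisition after first release"
        holders = self.lock_table.setdefault(key, set())
        for txn, m in holders:
            # Rule 1: conflicting locks (any pair involving X) cannot coexist.
            if txn is not self and "X" in (m, mode):
                return False   # caller must wait or abort
        holders.add((self, mode))
        self.held.add(key)
        return True

    def commit(self):
        # Entering the shrinking phase: release everything at once.
        self.shrinking = True
        for key in self.held:
            self.lock_table[key] = {
                (t, m) for t, m in self.lock_table[key] if t is not self
            }
        self.held.clear()

table = {}
t1, t2 = TwoPhaseLocking(table), TwoPhaseLocking(table)
print(t1.lock("a", "S"))   # True: shared lock granted
print(t2.lock("a", "S"))   # True: S locks are compatible
print(t2.lock("a", "X"))   # False: conflicts with t1's S lock
t1.commit()
print(t2.lock("a", "X"))   # True: t1 has released its lock
```

The `return False` branch is exactly where real systems diverge: blocking there gives deadlock-prone 2PL variants such as DL\_DETECT, while aborting gives NO\_WAIT-style variants.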


H-Store~\cite{kallman2008h-store:} and its commercial successor VoltDB
employ an extreme form of partitioning, treating each
partition as a separate logical database even when partitions are collocated on the same physical node. Transactions local to a single partition run without locking at
all, and multi-partition transactions are executed via the
use of whole-partition locks.

The original OCC algorithm was proposed in 1981 [22], but it
has only recently been adopted in high-performance OLTP DBMSs.
By contrast, MVCC, the other T/O-based algorithm, has been used
in DBMSs for several decades (e.g., Oracle, Postgres).
Many algorithms have been proposed to refine and improve the
original OCC algorithm [8, 17, 26, 31]. The first OCC derivatives from the 1980s dealt with improving transaction validation
for single-threaded systems with limited memory [17, 31]

Multi-version concurrency control (MVCC) [3] is a popular design choice for today’s on-disk databases [5, 48]. While MVCC is
less dominant for in-memory databases, recent research has led to
several new in-memory MVCC schemes including Hekaton~\cite{diaconu2013hekaton}\cite{larson2011high-performance},
HyPer~\cite{neumann2015fast}, Bohm~\cite{faleiro2015rethinking}, Deuteronomy~\cite{levandoski2015high}\cite{levandoski2015multi-version}, ERMIA~\cite{kim2016ermia:}, and Cicada~\cite{lim2017cicada:}.
MVCC reduces conflicts between transactions by using multiple
copies (versions) of a record; a transaction can use an earlier version of a record even after the record has been updated by a concurrent writer. MVCC is an effective design for read-intensive
workloads [31, 44].
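The visibility rule at the heart of MVCC, namely that a reader with timestamp $t$ sees the newest version whose write timestamp is at most $t$, can be sketched as follows. The structures here are hypothetical illustrations, not those of any system cited above:

```python
import itertools

class Record:
    """A record with its version chain ordered newest-first."""

    def __init__(self, value, wts):
        self.versions = [(wts, value)]  # list of (write_ts, value)

    def install(self, value, wts):
        # A committed writer prepends a new version.
        self.versions.insert(0, (wts, value))

    def read(self, ts):
        # A reader sees the newest version with write_ts <= ts,
        # even if a concurrent writer installed a later version.
        for wts, value in self.versions:
            if wts <= ts:
                return value
        raise KeyError("no visible version")

clock = itertools.count(1)

r = Record("v0", next(clock))   # write_ts = 1
reader_ts = next(clock)         # a reader starts at ts = 2
r.install("v1", next(clock))    # a concurrent writer commits at ts = 3

print(r.read(reader_ts))     # -> v0: the reader's snapshot is preserved
print(r.read(next(clock)))   # -> v1: a later reader sees the update
```

The loop over `self.versions` is the version-chain traversal referred to elsewhere in this paper: it is what makes MVCC reads robust to concurrent writers, and also what makes long analytical scans pay a per-record cost.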





\subsection{Frequent Snapshot}
To obtain a consistent snapshot, Salles~\etal~\cite{Salles.12} present an evaluation comparing several state-of-the-art snapshot algorithms.
It concludes that Naive Snapshot is good for high-throughput workloads over small datasets, while Copy On Update is better for a wider range of cases.
The SwingDB and HyPer work modifies the Linux kernel to support a fine-grained fork-like system call for generating snapshots.

ZIGZAG is an algorithm developed for MMO game scenarios, but it only suits small datasets and is not a general algorithm.
PINGPONG, HOURGLASS, and PIGGYBACK all use a form of pointer swapping to generate snapshots.
Moreover, PINGPONG and HOURGLASS generate delta snapshots, so they can be used in HTAP systems.

However, all of these snapshot algorithms depend on a physically consistent time point. In more general cases, such as OLTP, establishing a physically consistent state in a running system requires blocking the system. The recent CALC work~\cite{ren2016low-overhead} introduces a virtual-snapshot idea to solve this problem, and both PINGPONG and HOURGLASS can be integrated with it.
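The Copy On Update idea that the evaluation above favors for general workloads can be sketched as follows. This is a simplified single-threaded illustration with invented names; real implementations hook the update path and handle concurrency:

```python
class CopyOnUpdateStore:
    """Pages are copied aside on first update after a snapshot."""

    def __init__(self, pages):
        self.pages = list(pages)      # live data seen by OLTP
        self.snapshot_pages = {}      # page index -> pre-snapshot copy
        self.snapshot_active = False

    def take_snapshot(self):
        # Taking a snapshot is O(1): no data is copied up front.
        self.snapshot_pages = {}
        self.snapshot_active = True

    def write(self, i, value):
        # First write to a page since the snapshot saves the old copy;
        # later writes to the same page pay nothing extra.
        if self.snapshot_active and i not in self.snapshot_pages:
            self.snapshot_pages[i] = self.pages[i]
        self.pages[i] = value

    def snapshot_read(self, i):
        # Snapshot readers see the pre-snapshot copy if the page changed.
        return self.snapshot_pages.get(i, self.pages[i])

store = CopyOnUpdateStore(["a", "b", "c"])
store.take_snapshot()
store.write(0, "a2")               # page 0 copied aside, then updated
print(store.pages[0])              # -> a2  (live readers see the new value)
print(store.snapshot_read(0))      # -> a   (snapshot readers see the old one)
print(store.snapshot_read(1))      # -> b   (unchanged page, no copy made)
```

The copy cost is proportional to the pages actually updated while the snapshot is live, which is why Copy On Update beats the naive full copy for most workloads.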





\section{Conclusion}\label{sec:conclusion}
This paper designs and implements a wait-free, query-optimized concurrency control mechanism based on two-version data and dual consistent snapshots.
The system mainly targets situations that contain both OLTP and OLAP workloads.
Its advantages are that it not only delivers close-to-ideal OLTP and OLAP performance, but is also wait-free and can respond to users promptly,
while guaranteeing data freshness and exposing a single unified interface.

Based on this algorithm design, we validated its high throughput on the YCSB and TPC-C benchmarks. Compared with the best current concurrency control algorithms, it roughly matches TICTOC on OLTP performance and achieves about four times its OLAP performance.
We strongly recommend our algorithm to developers for scenarios with high data contention that also require real-time analytical processing, such as Double Eleven flash sales.


\section{Future Work}\label{sec:fw}
One direction is to build a filter on top of the delta snapshot, similar to a recent SIGMOD paper.
In addition, the OLAP store could be designed and optimized on top of a columnar database.

\textbf{ACKNOWLEDGMENTS.}
The authors would like to thank xxx and
the anonymous reviewers. Guoren Wang is the corresponding
author of this paper. Lei Chen is supported by the Hong Kong
RGC GRF Project 16214716, National Grand Fundamental
Research 973 Program of China under Grant 2014CB340303,
the National Science Foundation of China (NSFC) under Grant No.
61729201, Science and Technology Planning Project of Guangdong
Province, China, No. 2015B010110006, Webank Collaboration
Research Project, and Microsoft Research Asia Collaborative
Research Grant. Guoren Wang is supported by the NSFC (Grant
No. U1401256, 61732003, 61332006 and 61729201). Gang Wu
is supported by the NSFC (Grant No. 61370154). Ye Yuan is
supported by the NSFC (Grant No. 61572119 and 61622202) and
the Fundamental Research Funds for the Central Universities (Grant
No. N150402005).
\end{CJK*}
\bibliographystyle{abbrv}
\bibliography{vldb}
\end{document}
