\setlength{\parindent}{0pt}

\large \textbf{Dear reviewers:}

We appreciate your constructive comments, which have been very helpful in improving
the quality of the paper. We have made a substantial effort to address all of your comments by
improving our query engine, making various performance comparisons, re-conducting experiments on new GPUs, and clearly presenting our
study insights. We believe that we have addressed all of your concerns and comments.
This cover letter outlines how we have addressed each of your comments, with pointers to where
you can find the updated content in the revised paper.
\\

\large \textbf{Summary}\\

The revised paper is organized to address the following questions:
\begin{itemize}

\item Where does time go when processing warehousing queries on GPUs? (Section 4.1)
\item How do existing software optimization techniques affect query performance on GPUs? (Section 4.2 - 4.4)
\item Under what conditions will GPUs significantly outperform CPUs for warehousing queries? (Section 5.1)
%\item When programming warehousing queries on GPUs, which programming model is more suitable and more supportive, CUDA or OpenCL?
\item How do different GPU hardware platforms and their supporting systems affect query performance? (Section 5.2)

\item How does the advancement of GPU hardware affect query performance? (Section 6)

\end{itemize}


\large \textbf{Responding to Reviewer 1}:\\

\textbf{Weak point 1:}
\textit{
The proposed question and the undertaken study do not match - the question asks why not adopting the GPU for data warehousing queries but the study is about the query performance on the GPU only. There should be a performance comparison between the top-of-line multicore CPU and GPU. \\ 
}

\textbf{Answer:}

Thank you for the comment.
In the revised paper,
we have comprehensively compared the performance of SSBM queries on GPUs with their performance on a modern multi-core CPU.
The hardware platforms are an NVIDIA GTX 680 GPU and an Intel Core i7-3770K CPU.
Our comparisons are based on the following two kinds of performance numbers.
\begin{itemize}
\item GPU performance is measured using our CUDA engine.
%The performance comparisons between different programming models and between NVIDIA GPU and AMD GPU
%are discussed in Section \ref{sec:cudaopencl} and Section \ref{sec:nvidiaamd}.
\item CPU performance is, for each query, the better of two systems: the most recently
released MonetDB, which we believe represents the state-of-the-art high-performance query
execution engine on CPUs, and our OpenCL query engine specially optimized for the Intel CPU.
\end{itemize}

We have made the following findings.

\begin{itemize}
\item The GPU query engine outperforms the CPU query engine for all SSBM queries.
However, the speedup varies significantly depending on query characteristics and system setups.
\item The key to obtaining high query execution performance on GPUs is to place the data
in pinned memory, where 4.5-6.5X speedups can be observed for certain queries.
When data reside in pageable memory, the speedups are only 1.2-2.6X for all SSBM queries.
\item GPUs deliver limited speedups (around 2X) for queries that are 1) dominated by selection operations,
or 2) dominated by random accesses to dimension tables caused by high join selectivities and
projected columns from dimension tables.
\end{itemize}

We believe these comparison results reflect the limited performance advantage of GPUs over CPUs
when processing complex warehousing workloads.
We conclude that this is one of the reasons why GPUs have not been adopted in warehousing systems.\\

The details are in Section 5.1. \\

\textbf{Weak point 2:}
\textit{
The experimental study offers little new results. It mainly shows that either
data transfer or kernel execution dominates the overall performance, depending on data and query characteristics.\\
}

\textbf{Answer:}

In this revision, our experimental study has been extended to cover the following three topics.

\begin{itemize}
\item As answered in weak point 1, we have examined the performance differences between GPU and CPU for processing warehousing queries.
\item We have investigated how the current GPU hardware and GPU programming models affect warehousing query performance.
\item We have further examined how the OpenCL query engine designed for GPU architectures performs compared to MonetDB when executing on CPU.
\end{itemize}

We believe these new results not only answer the question of why GPUs have not been utilized in warehousing systems, but also
give directions for adopting GPUs in the most suitable way.\\

The details are in Section 5.\\

\textbf{Weak point 3:}
\textit{
The experimental methodology needs to be described more clearly and better justified.
For example, why not use GTX680 with pinned memory as default? \\
}

\textbf{Answer:}

Thank you for pointing this out.
In the revised paper, when studying the behaviors of warehousing queries on GPUs,
we have conducted the experiments on the NVIDIA GTX 680 with pinned memory
to make our findings valid for the newest GPUs.
In our performance comparisons and modeling, we also use the performance of SSBM queries
on the GTX 680 with pinned memory as the baseline.\\

\textbf{Detail 1:}
\textit{
Naming of the query engine. The first paragraph of Section 2 
says "The engine is actually an automatic translator from SQL to CUDA programs written in C language".
Based on this sentence, I gather that the query engine is a C program that takes SQL statements as input and automatically generates CUDA programs to run on the GPU.
Then the second paragraph of Section 3.1 says "Our GPU query engine is implemented with CUDA...".
This sentence somewhat contradicts the previous one in the naming - the previous sentence says the
query engine is a translator in C and this sentence says the query engine is the actual CUDA code (produced from the translator?).\\
}

\textbf{Answer:}

Thank you for pointing out this confusion.
In the revised paper, we have clarified the architecture of our query engine
and removed the confusion. Here is the revised description from the paper. \\

Our query engine comprises an SQL parser, a query optimizer and an execution engine.
The parser and the optimizer share the same code with YSmart.
The execution engine consists of a code generator and pre-implemented query operators written in CUDA/OpenCL.
The code generator can generate either CUDA driver programs or OpenCL driver programs,
which are compiled and linked with the pre-implemented operators. \\

The details are in Section 2.1.\\

\textbf{Detail 2:}
\textit{
Description of the selection implementation. I understand that the system adopts the column-based storage format. Thus I guess the "projected columns" referred to in the paper are those columns that appear in the result. However, the last sentence in the second paragraph of Section 2.2. really puzzles me: "The second step is to sequentially scan the filter ... when the element in the filter indicates true for the corresponding predicate". What is "the filter"? Is it the predicate evaluation result generated from the first step? If so, every element in the result satisfies the predicates. Do you mean that you use the predicate evaluation result to find corresponding elements in the "projected columns" to form the final result? \\
}

\textbf{Answer:}

The projected columns refer to the columns that appear in the selection results.
The filter is a 0-1 vector generated in the first step of the selection,
which is used to form the final selection results.
We have clarified the selection implementation in Section 2.2.
Here is the revised description in the paper.\\

The selection's first step sequentially scans all the columns referenced in the predicates and evaluates the predicates,
storing the result in a 0-1 vector.
The second step uses this vector to filter the projected columns. \\
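The two steps can be sketched in plain C as follows. This is a sequential simplification for illustration only: in our engine each step runs as a parallel kernel, and the function and variable names here are hypothetical.

```c
/* Sequential sketch of the two-step selection: evaluate a predicate
 * col < value into a 0-1 filter vector, then use the filter to copy
 * matching elements of a projected column into the result. */
static int select_lt(const int *pred_col, const int *proj_col, int n,
                     int value, char *filter, int *out)
{
    /* Step 1: scan the predicate column and fill the 0-1 filter vector. */
    for (int i = 0; i < n; i++)
        filter[i] = (char)(pred_col[i] < value);

    /* Step 2: use the filter to select elements of the projected column. */
    int k = 0;
    for (int i = 0; i < n; i++)
        if (filter[i])
            out[k++] = proj_col[i];
    return k; /* number of result tuples */
}
```

For a predicate col < 3 over the values (5, 1, 7, 2), the filter becomes (0, 1, 0, 1), so only the second and fourth elements of each projected column are emitted.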


\textbf{Detail 3:}
\textit{
Citation of hash conflict avoidance. The last sentence of the third paragraph in Section 2.2 says "we can avoid hash conflicts by making the size of hash table twice the cardinality of the dimension table theorectically". I take that "theorectically" is meant for "avoiding hash conflict". If so, it would be helpful to cite the theory here.\\
}

\textbf{Answer:} 

The following citation has been added when we describe the implementation of the join operator in Section 2.2.

R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, 1995.\\


\textbf{Detail 4:}
\textit{
Use of pageable host memory instead of pinned memory. Table 3 show that transfer bandwidths with pinned memory are slightly higher than with pageable memory. Any particular reasons for the choice of using pageable memory instead of pinned memory?\\
}

\textbf{Answer:}

In the revised paper, when studying the behaviors of warehousing queries on GPUs,
we have conducted all the experiments on the GTX 680 with pinned memory
to make our findings valid for the newest GPUs.\\

\textbf{Detail 5:}
\textit{
Workload setup (Section 3.2.1). \\\\
The first paragraph of 3.2.1 says "when studying factor A, only factor A in the default workload will be changed. All other factors remain the same". This setup sounds good in general; however, in some cases multiple factors are not independent from each other. In particular, the number of selection predicates may affect query selectivity.
I assume that the paper uses conjunction of predicates, but both conjunction and disjunction of predicates may affect selectivity.
Then the question would be how to keep the default 10\% query selectivity when increasing the number of predicates?
If 10\% means selectivity of each predicate, then the query selectivity or the number of result tuples will get much fewer with the increase of number of predicates, assuming conjunctive, mutually independent predicates.\\\\ 
Table 1. Number of projected columns. Are the one column from each table in a join the join attributes? Similarly, is the one projected column in aggregation or sorting the groupby/order-by column? What does the 1\% selectivity of an aggregation query mean, on average there are 100 tuples in each group? 
3.2.1. the second paragraph.\\\\
Does "1 predicate with a min comparison" mean inequality predicate (less than or greater than)? \\\\
"1 column will be projected by the selection operator". Does it mean that only one column will appear in the query result? Is this column also the one that is involved in the selection predicate?\\ 
}

\textbf{Answer:}

Thanks for the detailed comments.\\

1) Independent factors

For the selectivities and the number of predicates, we agree with you that they depend on each other.
In our study, we use conjunctions of predicates. When studying the number of predicates, we keep each predicate's selectivity at 10\%.
We have clarified this when describing the workload setup.\\

2) Number of projected columns

In our study, the projected columns are not included in the selection predicates, join attributes, group-by columns or order-by columns.
By the selectivity of an aggregation, we mean the number of result tuples divided by the number of input tuples.
We have clarified this when describing the workload setup. \\

3) "1 predicate with a min comparison"

We mean \textit{less than} here. When there is one column in the predicate, the predicate has the form \textit{col$<$value}. \\

We have re-described the setup for the workloads used when studying the behavior
of single warehousing operators.
Due to the limited space, we have moved the study of single operators to our technical
report. Here is the link to the report:

{http://www.cse.ohio-state.edu/$\sim$yuanyu/report.html} \\

\textbf{Detail 6:}
\textit{
Figure 2 results.\\\\ 
The second paragraph of 3.2.2 states that "The figure shows the execution time of a kernel is corelated to the number of actual memory transactions". First of all, as both the kernel execution time and the number of memory transactions are broken down by components, it is hard to figure out the total kernel execution time or the total number of memory transactions. Second, every operator type has only one pair of kernel execution-number of memory transactions results, the comparison can only be done between different types of operators, e.g., selection versus join, which is not only about the correlation of the two factors under study. If there had been results for a single type of operator, e.g., selection, with the direct comparison of total kernel execution time versus total number of memory transactions, the claim would have been better supported. \\ 
}

\textbf{Answer:}

Thank you for the suggestion.

To illustrate the relation between kernel execution time and the total number of memory transactions,
we use the selection operator as an example and present a direct comparison of the total kernel execution time and the total number of memory transactions.
Due to limited space, we have also put this part in our technical report. \\

\textbf{Detail 7:}
\textit{
Evidence of memory-bounded query performance.\\\\ 
Section 5.1.\\\\ 
The second paragraph says that "As we have already discussed, the performance of data warehousing queries on GPUs are mainly bounded GPU device memory accesses". However, the time breakdowns in all figures are by transfer/kernel execution and/or kernel components. There is no direct comparison on the GPU memory access time versus the total execution time. \\
}

\textbf{Answer:}

We use the relation between kernel execution time and the number of memory transactions as evidence that the performance of warehousing queries on GPUs is
bounded by GPU device memory accesses.
We further verify this through our cost model in the paper. \\

\textbf{Detail 8:}
\textit{
Assumption of projected columns in a join query\\\\ 
Section 5.2. The second paragraph. "we assume the join keys ... are not projected by the join operator". Do you mean that the join attributes do not appear in the join query result? Does this assumption hold for the join and the SSBM queries under study? \\
}

\textbf{Answer:}

What we mean in the paper is that the join attributes do not appear in the join query result.
This is true for all the SSBM queries.
We have clarified this when describing the join model in the second paragraph of Section 6.2. \\

\textbf{Detail 9:}
\textit{
Setup of GTX 680.\\\\ 
Section 3.1 says the GTX 580 is connected to the host through a PCI-e 2.0 bus. Since GTX 680 supports PCI-e 3.0, is the GTX 680 connected through a PCI-e 3.0 bus to the host? The results in Section 6 say that the two types of cards have the same transfer bandwidth using pageable memory.\\
}

\textbf{Answer:}

In our experiments, each GPU is connected to the host through a PCIe 3.0 bus when experiments are conducted on that GPU.
We have clarified this when describing the hardware platform in Section 3.2.1 of the revised paper.\\

We have also explained why the PCIe bandwidth is higher for pinned host memory
in the second paragraph of Section 3.3.2. Here is the revised description.\\

The reason is that data in pinned memory can be transferred directly by the GPU DMA engine.
For pageable memory, however, data must first be copied to a pinned DMA buffer and then
transferred by the GPU DMA engine. \\
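To make the extra copy concrete, the following plain-C sketch simulates the two transfer paths by counting copy operations. This is not real CUDA code; the staging-buffer size and all names are our own illustrative assumptions.

```c
#include <string.h>

enum { STAGING_BYTES = 4096 };     /* stand-in for the pinned DMA buffer */
static char staging[STAGING_BYTES];

/* Pageable path: every chunk is copied twice -- once by the CPU into the
 * pinned staging buffer, then by the DMA engine into device memory. */
static int copies_pageable(const char *src, char *device, size_t nbytes)
{
    int copies = 0;
    for (size_t off = 0; off < nbytes; off += STAGING_BYTES) {
        size_t chunk = nbytes - off < STAGING_BYTES ? nbytes - off
                                                    : STAGING_BYTES;
        memcpy(staging, src + off, chunk);    /* copy 1: pageable -> pinned */
        memcpy(device + off, staging, chunk); /* copy 2: pinned -> device   */
        copies += 2;
    }
    return copies;
}

/* Pinned path: the DMA engine reads the pinned host buffer directly. */
static int copies_pinned(const char *src, char *device, size_t nbytes)
{
    memcpy(device, src, nbytes);              /* single DMA-style copy */
    return 1;
}
```

Per byte transferred, the pageable path performs twice the copy work, which is consistent with the near-doubling of effective bandwidth we observe when the host memory is pinned.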


\textbf{Detail 10:}
\textit{
Actual and Modeled performance.\\\\ 
I am really confused with the modeled and actual performance results. Section 5.1, the first paragraph says that the model is for query execution time, assuming data are already in the host memory. This seems to say the host-to-device transfer time is excluded. However, Tt the data transfer time is included in the model. Do the results in Figure 10 include transfer time? If so, the transfer time might be the bulk in the total time, as shown in Figure 4. What about results in Figures 11-14 - do they include PCI-e transfer time?\\\\ 
Figure 13 is very encouraging - using GTX 680, which has PCI-e 3.0, increases the query performance by 30-70\% with pinned memory.
In contrast, Figure 12 shows that using pageable meory, the performance on GTX480/580/680 is all similar. The question is then why not use pinned memory on GTX680 as default?\\\\
Figure 14. Is the "Base" measured performance on GTX580 with pageable memory? \\ 
}

\textbf{Answer:}

1) Model

When evaluating the model, we assume that the data are available in the host memory. Data need to be transferred to GPU device memory for processing.
In the original paper, we included both the PCIe transfer time and the kernel execution time for Figures 11-14.
To make this clear, in the revised paper, we have separated the estimation of the PCIe transfer time from the estimation of the kernel execution time
when evaluating our model in Section 6.3.
When examining the impacts of GPU internal architectures on query performance, we focus on the kernel execution, which is determined by
the GPU internal architectures (see the corresponding figure in the revised paper).\\

2) Why not use pinned memory as default?

In the revised paper, we use the GTX 680 with pinned memory as the base performance
for prediction and for studying query behaviors on GPUs.\\

3) Figure 14.

In the original paper, the Base was the performance on GTX 580 with pageable memory.
In the revised paper, the Base is the performance on GTX 680 with pinned memory (Figure 16). We have made it clear. \\


\large \textbf{Responding to Reviewer 2:} \\

\textbf{Weak point 1:}
\textit{
I understand the focus of this work is on the interaction between the host and GPU, but it is still disappointing that I/O is completely ignored in this study. It is all the more so, because the subject of this study is large-scale OLAP queries. It would be interesting to see how all things are balanced in data transfer from disk to the host memory and to the GPU memory. \\
}

\textbf{Answer:}

We agree with you that I/O is a very important issue in warehousing systems.
The reason we do not study I/O in this paper is that the GPUDirect technique,
which can load data directly from I/O devices into
GPU device memory, is not yet supported by the I/O devices.
We will study how to make GPUs directly process
data stored in permanent storage media when support for GPUDirect is available.
We have added this discussion at the end of the revised paper.\\

\textbf{Weak point 2:}
\textit{
Not all the findings are insightful, and some of them are exaggerated. For example, I don't agree that invisible joins accelerate the kernel execution significantly. In fact, I don't feel like I have learned a lot about using GPU for OLAP queries. \\
}

\textbf{Answer:}

In the revision process, we have conducted extensive experiments, including the performance
comparison between GPU and CPU, and among different GPU environments.
We believe our new findings are useful and insightful for seeking the most suitable way to adopt GPUs in warehousing systems.
The details are in Section 5. \\


For the invisible join technique,
we agree with you that it is not effective for most SSBM queries.
However, invisible join is particularly effective at accelerating queries dominated by random device memory accesses (e.g., Q3.1).
As reported in the paper, this type of query is the one that can least utilize the GPU's processing capabilities,
which is also confirmed when comparing the performance of the GPU against the CPU.
We also realize that invisible join is not free, because
this technique requires data reorganization in both the dimension tables and
the fact table.
Nevertheless, as this is a performance study paper, we include and carefully study invisible join in order to present a comprehensive picture of which kinds of queries can benefit from different software
optimization techniques. \\

We would also like to highlight that
we expect this paper to disseminate useful knowledge to the database community.
Having implemented a fully functional OLAP engine on GPUs using various hardware platforms and software optimization techniques,
we believe our experience will help other system builders understand the real GPU advantages for complex workloads, and
the support and limitations of software environments for warehousing workloads. \\

\textbf{Detail 1:}
\textit{
The cost model (Section 5.1) focuses on the kernel execution cost. As Figure 6 shows, however, this is not a dominant factor in the overall cost.\\
}

\textbf{Answer:}

We estimate both the PCIe transfer time and the kernel execution time for a given query.
When evaluating the model, we have separated the PCIe transfer and kernel execution costs in Section 6.3.\\

\textbf{Detail 2:}
\textit{
Section 6 states that the PCIe transfer bandwidth doubles when the host memory is pinned. Not all the VLDB audience are familiar with this. Provide more details on pinning the host memory and elaborate on how it doubles the bandwidth. \\
}

\textbf{Answer:}

Thank you for pointing out the confusion.
In the revised paper, we have described the reason why the PCIe transfer bandwidth is higher
when the memory is pinned. Here is the description.\\

The PCIe transfer bandwidth becomes higher when the host memory is pinned.
For example, the PCIe transfer bandwidth of the NVIDIA GTX 680 almost doubles
when the host memory is pinned.
The reason is that data in pinned memory can be transferred directly by the GPU DMA engine.
For pageable memory, however, data must first be copied to a pinned DMA buffer and then
transferred by the GPU DMA engine. \\

The details are in Section 3.3.2.\\

\textbf{Detail 3:}
\textit{
I believe Figure 7 does not include any I/O time, but it may be necessary to clarify that, because a preceding paragraph states that the fact table is stored in multiple disk copies. \\
}

\textbf{Answer:}

You are right.
The figure does not include I/O time. We have clarified this in the revised paper,
in the fourth paragraph of Section 4.2. \\

\textbf{Detail 4:}
\textit{
There are a few missing words and grammatical errors. \\
}

\textbf{Answer:}
Thank you for the comment. We have made a strong effort to improve the readability of the paper in the revision process.\\

\large \textbf{Responding to Reviewer 3:} \\

\textbf{Weak point 1:}
\textit{
Bad Exposition. \\
}

\textbf{Answer:}

Thank you for pointing this out. In the revision process, we have made a strong effort to
present our findings and improve the quality of the paper. \\

\textbf{Weak point 2:}
\textit{
No Novelty in approach or conclusions. \\
}

\textbf{Answer:}

In the revision process, we have conducted a larger scope of performance studies to identify
the most suitable way to utilize GPUs in warehousing systems.
We have made the following new findings.

\begin{itemize}
\item The GPU query engine outperforms the CPU query engine for all SSBM queries.
However, the speedup varies significantly depending on query characteristics and system setups.
\item The key to obtaining high query execution performance on GPUs is to place the data
in pinned memory, where 4.5-6.5X speedups can be observed for certain queries.
When data reside in pageable memory, the speedups are only 1.2-2.6X for all SSBM queries.
\item GPUs deliver limited speedups (around 2X) for queries that are 1) dominated by selection operations,
or 2) dominated by random accesses to dimension tables caused by high join selectivities and
projected columns from dimension tables.
\item From both the performance and the programming perspective,
CUDA is more suitable and supportive for processing warehousing queries.
\item Without using pinned memory, the NVIDIA OpenCL query engine
can achieve performance similar to the CUDA engine.
However, NVIDIA's OpenCL implementation does not yet support pinned host memory well.
\item The performance slowdown when porting NVIDIA CUDA (on the GTX 680) to AMD OpenCL (on the HD 7970)
is not caused by differences in hardware efficiency (PCIe transfer time or kernel execution),
but by AMD's OpenCL implementation of GPU memory management.
\item The major obstacle to OpenCL portability is not the performance slowdown of GPU kernel executions but
subtle differences among vendor implementations of the OpenCL specification.
\item Porting the OpenCL query engine from GPUs to CPUs can work well by changing each thread's memory access pattern
and the thread configurations.
\item MonetDB outperforms the OpenCL query engine for selection-dominated queries
and join-dominated queries with low selectivities.
\item The OpenCL query engine has comparable or better performance for join-dominated queries
with high selectivities.
\end{itemize}

The details are in Section 5.\\

We believe these new findings not only explain why GPUs have not been adopted in warehousing systems,
but also suggest two R\&D directions for adopting GPUs in the most suitable way.
First, a CPU/GPU hybrid query engine can maximize the efficiency of the hardware combination by task scheduling either at the query level or at the operator level.
Second, GPUs should run query engines for real-time business intelligence analytics
on main-memory database systems, with minimal interference with transactions executed on CPUs.\\

We present these conclusions in the revised paper.


\textbf{Weak point 3:}
\textit{
Complete lack of GPU-specific implementation details. \\
}
\textbf{Detail 1:}
\textit{
Authors seem to have completely skipped over GPU implementation: how many threads are used, how are they mapped, what kind of memories were used: device, local, texture? were caches optimized? how are columns of a multidimensional CUBE laid etc. What kind of work CPU does? \\ 
}

\textbf{Answer:}

Thank you for pointing this out. \\ 

\textbf{About CPU Work.}
In our query engine, the code executed on the CPU is responsible for allocating and releasing GPU device memory,
transferring data between the host memory and the GPU
device memory, and launching the GPU kernels.\\

\textbf{About Use of GPU Memory.} Our engine utilizes both device memory and
local shared memory. For selection, only device memory
is utilized. For join and aggregation, the hash table is placed in local
shared memory when it is smaller than the local shared memory size.
For sort, all the keys are sorted and merged in local shared memory.

\textbf{About Data Layout.} Each column is stored contiguously in GPU device memory,
using the Array-Of-Structures (AOS) format.
The Structure-Of-Arrays (SOA) format, which can provide
coalesced accesses when scanning irregular data,
does not provide
performance benefits for our workloads, because accesses to irregular data (string data from dimension tables)
are dominated by random accesses during join operations.

\textbf{About GPU Thread Configurations.} The thread block size is configured to be 256 and
the largest number of thread blocks is configured to be 2048. Each thread
processes a set of elements from the input data based on its global thread ID.
For example, the thread with global ID 0 processes the elements with indices 0, 2048*256,
2*2048*256, and so on, until the end of the data.
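The mapping from a thread's global ID to the elements it processes can be sketched as a small C helper. This is a host-side illustration only: the constants mirror the configuration above, while the function name is hypothetical.

```c
#define BLOCK_SIZE 256
#define MAX_BLOCKS 2048
/* With at most MAX_BLOCKS * BLOCK_SIZE threads, each thread strides over
 * the input by the total thread count (524288). */
#define STRIDE ((long)MAX_BLOCKS * BLOCK_SIZE)

/* Write into idx[] the indices of the elements handled by the thread with
 * the given global ID; return how many elements it handles. */
static int thread_indices(long global_id, long n, long *idx)
{
    int count = 0;
    for (long i = global_id; i < n; i += STRIDE)
        idx[count++] = i;
    return count;
}
```

Thread 0 therefore touches indices 0, 524288 (= 2048*256), 1048576, and so on; within each iteration, neighboring threads access neighboring elements, which keeps device memory accesses coalesced.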

\textbf{About GPU Thread Output.}
Our engine avoids synchronization among threads when they write to the same memory region at the same time.
This is achieved by first letting each thread count the number of results it will generate,
and then performing a prefix sum on the counts.
In this way, each thread knows its starting position in the output region and can
write to the region without synchronization. \\
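The count-then-prefix-sum scheme can be simulated sequentially in plain C. The names are hypothetical and the thread count is scaled down; on the GPU each phase is a separate kernel and the prefix sum itself runs in parallel.

```c
#define NTHREADS 4   /* scaled-down stand-in for the GPU thread count */

/* Compact all elements below `threshold` from in[] into out[], using the
 * synchronization-free scheme: count per thread, prefix-sum the counts,
 * then write into disjoint output regions. Returns the result count. */
static int compact(const int *in, int n, int threshold, int *out)
{
    int counts[NTHREADS], starts[NTHREADS];

    /* Phase 1: each "thread" counts the matches in its strided partition. */
    for (int t = 0; t < NTHREADS; t++) {
        counts[t] = 0;
        for (int i = t; i < n; i += NTHREADS)
            if (in[i] < threshold)
                counts[t]++;
    }

    /* Phase 2: an exclusive prefix sum yields each thread's write offset. */
    int total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        starts[t] = total;
        total += counts[t];
    }

    /* Phase 3: threads write to disjoint regions -- no synchronization. */
    for (int t = 0; t < NTHREADS; t++) {
        int pos = starts[t];
        for (int i = t; i < n; i += NTHREADS)
            if (in[i] < threshold)
                out[pos++] = in[i];
    }
    return total;
}
```

Because the write regions computed in phase 2 are disjoint by construction, no atomic operations or barriers are needed in phase 3.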


\textbf{Detail 2:}
\textit{
SSMB queries are just referred by number. Am I supposed to read [23] to understand their execution behavior? This needs to fixed by providing a table summarizing all the queries. \\
}

\textbf{Answer:}

In the revised paper, we have summarized the characteristics of SSBM queries
in a table in Section 3.1.\\

\textbf{Detail 3:}
\textit{
NVIDIA GPUs are SIMT machines not SIMD machines. SIMD usually refers to short-vector data parallelism such as Intel AVX2.\\
}

\textbf{Answer:}

Thank you for pointing this out.
In the revised paper, we do not use these terms. \\ 

\textbf{Detail 4:}
\textit{
No novelty (see the novelty box). \\
The two components of GPU performance: PCI bandwidth and kernel performance have been known for ages: as soon as CUDA came into picture. This is not a novel problem. 
I would have preferred new implementation of hash (e.g., using Cuckoo Hash), rather than just using what other people have proposed. There are a lot of unsolved issues, e.g., memory layout optimizations, how to optimize ORDER BY queries with multiple attributes (similar to 3.1).. Either these optimizations don't exist or authors have not described them. \\
}

\textbf{Answer:}

Thank you for your suggestions.
We have implemented Cuckoo hashing in our query engine.
However, we find that it does not outperform chained hashing, because it needs more key comparisons
than chained hashing when there is no match for the key in the hash table, and the join selectivities
of the SSBM queries are not high. \\

For the memory layout optimizations, we have studied whether our engine can benefit from changing the layout format
from Array-Of-Structures (AOS) to Structure-Of-Arrays (SOA) for the irregular data.
We find that changing to SOA does not provide performance benefits for the SSBM workloads,
because accesses to irregular data are dominated by random accesses. \\

We have not applied special optimizations for ORDER BY queries with multiple attributes because, in the SSBM workloads,
the ORDER BY operation comes after aggregation, which makes the number of tuples to be sorted very small.
In this case, the total time of the ORDER BY operation is negligible compared to join or selection.\\


\textbf{Detail 5:}
\textit{
There are a lot of numbers, but they don't provide any insight. The cost model is OK, but again it is not novel. I would have liked to see how it would have impacted query partitioning across CPUs and GPUs. \\
}

\textbf{Answer:}

Thank you for the comment.
We agree with you that query partitioning across CPUs and GPUs is necessary and
is one of the proper ways to utilize GPUs in warehousing systems.
In this paper, we have obtained two kinds of performance comparison results that are useful for workload partitioning.
First, we have identified which kinds of queries are and are not suitable for GPU processing.
Second, we have studied the performance of the OpenCL query engine designed for GPUs when running on a CPU.
Combining the model and our findings to partition workloads among CPUs and GPUs is our future work. \\

\textbf{Detail 6:}
\textit{
No reference to other ROLAP implementations that used compressed data. \\ 
}

\textbf{Answer:}

We have added the following citation in the revised paper.\\

D. J. Abadi, S. Madden, and M. Ferreira. Integrating
compression and execution in column-oriented
database systems. In SIGMOD Conference, pages
671–682, 2006. \\



\textbf{Detail 7:}
\textit{
Why didn't you use OpenCL? Are your conclusions valid for ATI GPUs?\\
}

\textbf{Answer:}

Thank you for the comment.
We have implemented our query engine using OpenCL and studied the query performance
on the recent AMD GPU HD 7970. \\

We compared the performance of warehousing queries on the NVIDIA GTX 680 and on the AMD HD 7970.
We find that they have similar PCIe transfer overheads and kernel execution times.
However, considering the overall performance, the NVIDIA GPU outperforms the AMD GPU because of
the differences in their implementations of the programming models (Section 5.3). \\

Based on our studies, we are confident that our conclusions are also valid for AMD GPUs. \\

\textbf{Detail 8:}
\textit{
No discussion of the underlying runtime- what is MapReduce doing here (Section 2)?? How is MapReduce mapped on the GPUs? \\
}

\textbf{Answer:}

We do not use MapReduce here.
We have clarified the architecture of our query engine in the revised paper.
The following is the revised description of our engine.\\

Our query engine comprises an SQL parser, a query optimizer and an execution engine.
The parser and the optimizer share the same code with YSmart.
The execution engine consists of a code generator and pre-implemented query operators written in CUDA/OpenCL.
The code generator can generate either CUDA driver programs or OpenCL driver programs,
which are compiled and linked with the pre-implemented operators. \\

The details are in Section 2.1.

