%\documentclass[10pt,conference,letterpaper]{IEEEtran}
\documentclass{acm_proc_article-sp}
\usepackage{times}
%\usepackage[english]{algorithm2e}
\usepackage{algorithm}
\usepackage{algpseudocode}
%\usepackage[named]{algo}
%\algref{<algorithm>}{<line>}
\newtheorem{theorem}{Theorem}
%\newcounter{Observation}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{Observation}[theorem]{Observation}
\def\candidate{{\cal C}}
\def\comment#1{}
\usepackage{graphicx}
\input{psfig}


\pagestyle{empty}


\begin{document}

% ****************** TITLE ****************************************

\title{Feedback}




\maketitle



\section{Reviewer 3}

\emph{Overall, the idea is nice and plausible. The presentation was
also good in general. However, there are several points where the
authors can improve on:}
\begin{enumerate}
         \item \emph{In section 3.3, the distributions of the lengths and slopes of
the lines in a cluster are assumed to be 1-dimensional Gaussian
Distributions. However, this assumption has never been justified.
The authors should justify it through analysis or at least through
experiments.}\\
\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[width=3.8cm,height=3.8cm]{gaussian1.eps} &
\includegraphics[width=3.8cm,height=3.8cm]{gaussian2.eps} \\
(a) Spot dataset & (b) Power dataset
\end{tabular}
\caption{Slope and length distribution \label{fig:gau}}
\end{figure}
         \textbf{Feedback}: In many real-life applications, the system operates in
several states, and in each state it exhibits stable behavior. Each
observation of a state can be regarded as the stable behavior plus
some error. Since observational error in an experiment is commonly
described by a Gaussian distribution, we use Gaussian distributions
here to describe the lengths and slopes of the segment lines. We
conducted experiments to verify this assumption; the results are
shown in Fig.~\ref{fig:gau}. For each of the Spot and Power
datasets, we randomly select one large cluster, since a large
cluster contains more lines and therefore illustrates the
distribution more clearly. As the figure shows, both length and
slope are well approximated by Gaussian distributions.
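As a side note, the kind of check behind Fig.~\ref{fig:gau} can be sketched in a few lines of Python. The data below are synthetic stand-ins for the slopes of one cluster (not the actual Spot/Power measurements); the check fits a 1-D Gaussian and compares the empirical $1\sigma$/$2\sigma$ coverage with the theoretical 68.3\%/95.4\%.

```python
import random
import statistics

def gaussian_fit_check(samples, tol=0.06):
    """Fit a 1-D Gaussian (mean, stdev) to the samples and compare the
    empirical 1-sigma and 2-sigma coverage with the theoretical
    values (~68.3% and ~95.4%)."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)

    def within(k):
        return sum(abs(x - mu) <= k * sigma for x in samples) / len(samples)

    return abs(within(1) - 0.683) <= tol and abs(within(2) - 0.954) <= tol

random.seed(0)
# Synthetic stand-in for the slopes of lines in one cluster.
slopes = [random.gauss(0.5, 0.1) for _ in range(2000)]
print(gaussian_fit_check(slopes))  # True

# Uniform data fails the 1-sigma coverage check.
flat = [random.uniform(0.0, 1.0) for _ in range(2000)]
print(gaussian_fit_check(flat))    # False
```

This is only a coarse coverage test; the figures above show the actual histograms against the fitted densities.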
         \item \emph{In Algorithm 1, indexes for the loops seem not correct. For
example, in lines 2 and 5, the initial indexes should be 3 and 2
respectively. Also it seems that line 11 always evaluated false
because $\delta_t(i)$ is initialized to 1 in line 4. The algorithms
should be cleaned up.}\\
         \textbf{Feedback}: We have revised the algorithm as follows. The
         initialization of $\delta_1(i)$ is added in line 2. Lines
         11--12 now handle the case in which the segment starts from
         the beginning of the time series ($t=d$), using
         $\pi_ib_i(L)$ in place of $\delta_{t-d}(j)$.
%\label{app:algorithm}
\begin{algorithm}
\caption{Detect\_state\_sequence}\label{al:viterbi}
\begin{algorithmic}[1]
\State \textbf{Input} $\varepsilon_r$: maximal error threshold of
line approximation
 \State Initialize $\delta_1(i)=0$ ($1\leq i\leq K$)
 \For{$t\gets 2, n$}
    \For{$i\gets 1,K$}
        \State $\delta_t(i)=0$
        \For{$d\gets 2,t$}
            \State $L=BestLine(t-d+1,t)$
            \If{$Err(L)>\varepsilon_r$}
                \State Break
            \Else
                \If{$t==d$} \Comment{segment starts at the beginning}
                    \State $temp=\pi_ib_i(L)$, $j^*=0$
                \Else
                    \State $j^*=\arg\max\limits_{j}\delta_{t-d}(j)\cdot a_{ji}$
                    \State $temp=\delta_{t-d}(j^*)\cdot a_{j^*i}\cdot b_i(L)$
                \EndIf
                \If{$temp>\delta_t(i)$}
                    \State $\delta_t(i)=temp$
                    \State $prev_d(t)=t-d$
                    \State $prev_s(t)=j^*$
                \EndIf
            \EndIf
        \EndFor
    \EndFor
\EndFor
\State Obtain the maximal optimal probability $\delta_n(i^*)$, where
\[\delta_n(i^*)\geq \delta_n(j),\quad \forall j\neq i^*\]
\State Obtain the state sequence by backtracking the sequence of $prev_s$
\State Obtain the line sequence by backtracking the sequence of $prev_d$
\end{algorithmic}
\end{algorithm}
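For reference, the dynamic program above can be sketched in Python. Here \texttt{best\_line} is a least-squares stand-in for $BestLine$/$Err$, and \texttt{emit} is a placeholder for the emission probability $b_i(L)$; this is an illustrative sketch, not the implementation used in the paper.

```python
import math

def best_line(y, s, t):
    """Least-squares line over y[s..t] (inclusive); returns
    (slope, intercept, max_abs_error). A stand-in for BestLine/Err."""
    xs = range(s, t + 1)
    n = t - s + 1
    mx, my = (s + t) / 2.0, sum(y[s:t + 1]) / n
    denom = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y[x] - my) for x in xs) / denom
    icpt = my - slope * mx
    err = max(abs(y[x] - (slope * x + icpt)) for x in xs)
    return slope, icpt, err

def detect_state_sequence(y, K, pi, a, emit, eps_r):
    """Segment-level Viterbi DP: delta[t][i] is the best probability
    of explaining y[0..t] with a segment ending at t in state i."""
    n = len(y)
    delta = [[0.0] * K for _ in range(n)]
    prev = [[None] * K for _ in range(n)]  # (end of prev segment, prev state)
    for t in range(1, n):
        for i in range(K):
            for d in range(2, t + 2):      # segment covers y[t-d+1..t]
                slope, _, err = best_line(y, t - d + 1, t)
                if err > eps_r:
                    break                  # longer segments only fit worse
                L = (slope, d)             # observed (slope, length)
                s = t - d                  # last index before the segment
                if s < 0:                  # segment starts at the beginning
                    temp, j_star = pi[i] * emit(i, L), None
                else:
                    j_star = max(range(K), key=lambda j: delta[s][j] * a[j][i])
                    temp = delta[s][j_star] * a[j_star][i] * emit(i, L)
                if temp > delta[t][i]:
                    delta[t][i] = temp
                    prev[t][i] = (s, j_star)
    # Backtrack from the best final state.
    i = max(range(K), key=lambda k: delta[n - 1][k])
    states, t = [], n - 1
    while prev[t][i] is not None:
        states.append(i)
        s, j = prev[t][i]
        if j is None:
            break
        t, i = s, j
    return list(reversed(states))

# Toy example: state 0 means "up" (mean slope +1), state 1 "down" (-1).
mu = [1.0, -1.0]
emit = lambda i, L: math.exp(-(L[0] - mu[i]) ** 2)
y = [0, 1, 2, 3, 4, 3, 2, 1, 0]
print(detect_state_sequence(y, 2, [0.5, 0.5],
                            [[0.6, 0.4], [0.4, 0.6]], emit, 0.1))
# [0, 1]: an up segment followed by a down segment
```

The self-transition bias in the toy transition matrix (0.6 on the diagonal) makes the DP prefer one long segment per state over several short ones.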
         \item \emph{More importantly, it would have been nice if the
         evaluation included binary predication accuracy for predicting up or down instead of just giving relative error on the slopes. It would give more tangible idea about the performance because one can compare the accuracy with a natural baseline with the expected accuracy of 50\% (e.g., random, or always up; i.e., no learning cases). If the accuracy of the proposed model is better than 50\% with meaningful margin, it may be significant in some domains like stock
         prediction.}\\
         \textbf{Feedback}: We conducted the binary trend prediction
         experiments suggested by the reviewer. For each testing
         time series, we predict up or down with three approaches:
         Rand, Regression, and pHMM. The Rand approach predicts up
         or down with probability 50\% each. The Regression approach
         first fits a linear regression to the time series in the
         current time window, and then predicts with the fitted
         line. For each dataset, we pick 500 time points and predict
         the trend of the next 10, 20, 30, 40, and 50 steps.
         Fig.~\ref{fig:predt} shows the average accuracy over the
         500 time points. pHMM is clearly more accurate than the
         other two approaches.
\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=3cm]{predt_power_binary.eps} &
\includegraphics[height=3cm]{predt_spot_binary.eps} \\
(a) Power dataset & (b) Spot dataset
\end{tabular}
\caption{Accuracy of binary trend prediction\label{fig:predt}}
\end{figure}
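The two baselines can be sketched as follows (predicting the sign of the fitted slope is our reading of ``making prediction with it''):

```python
import random

def predict_rand(rng):
    """Rand baseline: up (+1) or down (-1) with probability 50% each."""
    return 1 if rng.random() < 0.5 else -1

def predict_regression(window):
    """Regression baseline: fit a least-squares line to the current
    window and predict the sign of its slope (+1 up, -1 down).
    Only the numerator of the slope is needed for the sign."""
    n = len(window)
    mx, my = (n - 1) / 2.0, sum(window) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(window))
    return 1 if num >= 0 else -1

rng = random.Random(0)
print(predict_rand(rng) in (1, -1))                    # True
print(predict_regression([1.0, 1.2, 1.1, 1.5, 1.7]))   # 1 (up)
print(predict_regression([1.7, 1.5, 1.1, 1.2, 1.0]))   # -1 (down)
```

pHMM instead predicts with the slope of the most likely next state, as described in the paper.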
         \item \emph{In page 9, right column, 3rd paragraph, line 1. $\epsilon_r$
should be $\epsilon_c$ 2) there is no explanation distinguishing
Minimal GC and Average GC in Table 2.}\\
\textbf{Feedback}: We have changed $\epsilon_r$ to $\epsilon_c$ as
pointed out by the reviewer. We have also added the following
explanation of Minimal $GC$ and Average $GC$ in Table 2: For each
state of ``French Franc'', we compute its pattern-based correlation
($GC$) with every state in the other five time series. Table 2
reports both the minimal and the average $GC$. For example, in the
first row, Minimal $GC$ is the smallest $GC$ computed over all state
pairs in which one state is from ``French Franc'' and the other is
from ``Australian Dollar''; Average $GC$ is the average of the $GC$s
over all such state pairs.
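In other words, Minimal $GC$ and Average $GC$ are the min and mean over all cross-currency state pairs. A sketch with a hypothetical $GC$ table (the values and state counts below are invented for illustration; \texttt{gc} stands in for the pattern-based correlation from the paper):

```python
def gc_summary(states_a, states_b, gc):
    """Minimal and average pattern-based correlation (GC) over all
    state pairs (s, t) with s from one currency, t from the other."""
    vals = [gc(s, t) for s in states_a for t in states_b]
    return min(vals), sum(vals) / len(vals)

# Hypothetical GC values between two states of "French Franc" and
# three states of "Australian Dollar".
table = {(0, 0): 0.9, (0, 1): 0.7, (0, 2): 0.8,
         (1, 0): 0.6, (1, 1): 0.95, (1, 2): 0.75}
mn, avg = gc_summary([0, 1], [0, 1, 2], lambda s, t: table[(s, t)])
print(mn)   # 0.6 (Minimal GC)
print(avg)  # mean of the six values (Average GC)
```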
\end{enumerate}



\end{document}
