\begin{table*}
	\centering
		\begin{tabular}{|p{2cm}|p{2cm}|p{1.5cm}|p{2cm}|p{3.5cm}|p{3.5cm}|}
\hline
%{\footnotesize
Application & Goal & Opportunity & Value & Search Cost & Source of uncertainty \\
\hline
Marriage market %TODO: \cite{who?} 
& Maximize lifetime happiness & Date & Lifetime utility & Time spent, being alone while searching  & Uncertainty regarding the potential spouse's character\\
\hline
Job market %\cite{who?} 
& Optimize lifetime assets & Job interview & Offered salary and perks & Time spent, unemployment while searching  & Uncertainty about potential employers\\
\hline	
Product purchase %\cite{who?} 
& Minimize overall expense  & Store  &  Product price  & Time, communication and transportation expenses & Uncertainty regarding asked prices\\
\hline	
%Real estate agent & maximize profit of agent  & apartment  &  commission  & Time,Communication expenses or transportation costs &  Uncertainty of the apartment's price and buyer's willingness to buy it \\

%\hline	
		\end{tabular}
	%	\vspace{-5pt}
	\caption{Mapping applications to the sequential search problem}
%	\vspace{-10pt}
	\label{tab:MappingApplications}
\end{table*}
\section{The Search Model}  \label{sec:model}   

As the underlying framework for this research, we consider the canonical sequential search problem described by Weitzman \cite{Weitzman1979}, to which a broad class of search problems can be mapped.  In this problem, a searcher is given a number of available opportunities $B=\{B_1,...,B_n\}$ (e.g., to buy a product), out of which she can choose only one. The value $v_i$ to the searcher of each opportunity $B_i$ (e.g., expense, reward, utility) is unknown; only its probability distribution function, denoted $f_i(v)$, is known to the searcher.  The true value $v_i$ of opportunity $B_i$ can be obtained, but only by paying a fee, denoted $c_i$, possibly different for each opportunity.  Once the searcher decides to terminate her search (or once she has uncovered the values of all opportunities), she chooses, from the opportunities whose values were obtained, the one with the minimum or maximum value (depending on whether values represent costs or benefits).  A strategy $s$ is thus a mapping from a world state $W=(q,B'\subseteq B)$ to an opportunity $B_i\in B'$ whose value should be obtained next, where $q$ is the best (either maximum or minimum) value obtained by the searcher so far and $B'$ is the set of opportunities whose values are still unknown ($B_i=\emptyset$ if the search is to be terminated at this point).  The optimal sequential search strategy $s^*$ is the one that minimizes (or maximizes) the expected sum of the costs incurred in the search and the value of the opportunity chosen when the process terminates. 

The search problem as so formulated applies to a variety of real-world search situations.  For example, consider the case of looking for a used car.  Ads posted by prospective sellers may reveal little and leave the buyer with only a general sense of the true value and qualities of the car.  The actual value of the car may be obtained only through a test drive or an inspection, but these incur a cost (possibly varying according to the car make, model, location, and such).  The goal of the searcher is not necessarily to end up with the most highly valued car, since finding that one car may incur substantial overall cost (e.g., inspecting all cars).  Instead, most car buyers will consider the trade off between the costs associated with further search and the marginal benefit of a better-valued opportunity.  Table \ref{tab:MappingApplications} provides mappings of other common search applications to the model. As it suggests, a large portion of our daily routine may be seen as executing costly search processes. 
From the time we wake up, we search our closet for an outfit for the day, open the refrigerator and choose what to cook, search for a parking space, and so on. In all these examples the search itself takes time, resulting in a trade-off between the benefit of further improving the quality of the result and the cost of the additional search.


While the problem is common, the nature of its optimal solution is non-intuitive.  A simplified version of an example from Weitzman \cite{Weitzman1979} may be used to illustrate.  It deals with two possible investments. The benefits of each are uncertain and can only be known if a preliminary analysis is conducted.  If funds are limited, then no more than one investment would actually be carried out.  Table \ref{table:example} summarizes the relevant information for decision-making: investment $\alpha$ yields a total benefit of 100 with probability .5 and of 55 with probability .5, while investment $\omega$ yields a benefit of 240 with probability .2 and no benefit with probability .8.  Preliminary analysis costs 15 for the $\alpha$ investment and 20 for $\omega$.  
\begin{table}
	\centering
		\begin{tabular}{|c|c|c|}
		\hline
			Project & $\alpha$ & $\omega$\\
			\hline
			Cost & 15 &20\\
			Rewards & (0.5,100) , (0.5,55) & (0.2,240) , (0.8,0)\\
			\hline
		\end{tabular}
	\caption{Information for Simplified Example.} %\vspace{-10pt}
	\label{table:example}
\end{table}

The problem is to find a sequential search strategy which maximizes expected value. 
Carrying out a preliminary analysis for either investment on its own is better than researching neither. The expected value of researching $\alpha$ is $-15 + [.5(100) + .5(55)] = 62.5$, whereas for $\omega$ it is $-20+[.2(240) + .8(0)] = 28$. Thus, at least one investment should be explored further. The next logical question is which alternative should be researched first? 
When reasoning about which alternative to explore first, one may notice that by any of the standard economic criteria, $\alpha$ dominates $\omega$. Investment $\alpha$ has a lower cost, higher expected reward, greater minimum reward and less variance. Consequently, most people would guess that $\alpha$ should be researched first \cite{Weitzman1979}.  %They would probably be reacting to the fact that the expected value of $\alpha$ is so much higher than $\omega$.  However, there is a crucial difference between the value of a investment and the order in which it should be researched. %$\alpha$ is worth more in the sense that the expected value of an optimal investment without it is lower than without $\omega$. 
However, and somewhat paradoxically, it turns out that the optimal sequential strategy is to check $\omega$ first and if its payoff turns out to be zero then to develop $\alpha$.  This is shown in the decision tree in Figure \ref{fig:tree}.


\begin{figure}
	\centering
		\includegraphics[scale=0.4]{tree.pdf}
		%\vspace{-10pt}
		\caption{Solution to simplified example %TODO: *** fix figure - add terminations at the different branches***
		}
	\label{fig:tree}
	%\vspace{-10pt}
\end{figure}

Suppose $\alpha$ is researched first. If the payoff turns out to be 55, it would then be worthwhile to research $\omega$, because the expected value of that continuation, $-20+[.2(240) +.8(55)] =72$, is greater than 55, the value at that point of not developing $\omega$. If the payoff turns out to be 100, it would likewise be economical to develop $\omega$, since $-20 + [.2(240) + .8(100)] = 108$ is better than 100. The expected value of an optimal policy beginning with $\alpha$ is therefore $-15 +[.5(108) + .5(72)] =75$. A similar calculation shows that the expected value of an optimal policy which starts by developing $\omega$ is $-20 + [.2(240) + .8(-15 + [.5(100) + .5(55)])] = 78$.
Thus, the optimal policy for this example has the counter-intuitive property that $\omega$ is researched first. 
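The backward calculations above can be checked mechanically. The following sketch (ours, not part of the original example) evaluates any research order for the two projects of Table \ref{table:example} by recursing over the possible outcomes:

```python
# Exhaustive check of the backward calculations for the two-project
# example (alpha: cost 15, rewards (.5,100),(.5,55); omega: cost 20,
# rewards (.2,240),(.8,0)).

def expected_value(order, projects, best=0.0):
    """Expected value of researching projects in `order`, continuing
    after each observation only when it pays off in expectation."""
    if not order:
        return best
    cost, rewards = projects[order[0]]
    total = -cost
    for p, reward in rewards:
        new_best = max(best, reward)
        total += p * max(new_best,
                         expected_value(order[1:], projects, new_best))
    return total

projects = {"alpha": (15, [(0.5, 100), (0.5, 55)]),
            "omega": (20, [(0.2, 240), (0.8, 0)])}
print(expected_value(["alpha", "omega"], projects))  # 75.0
print(expected_value(["omega", "alpha"], projects))  # 78.0
```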


It is quite simple to compute the optimal solution to the sequential search problem \cite{Weitzman1979}.  The solution is based on setting a reservation value (a threshold) denoted $r_i$ for each opportunity $B_i$.  For the expected cost minimization version of the problem, the reservation value to be used should satisfy Equation \ref{eq:optimal}(a):
\begin{equation} \label{eq:optimal}
(a)\hspace{5pt}c_{B_i}=\int_{x=-\infty}^{r_i}\hspace{-5pt}{(r_i-x)f_i(x)dx} \hspace{10pt};\hspace{10pt} (b)\hspace{5pt} c_{B_i}=\int_{x=r_i}^{\infty}\hspace{-5pt}{(x-r_i)f_i(x)dx}.
\end{equation}
Intuitively, $r_i$ is the value where the searcher is precisely
indifferent: the expected marginal benefit from obtaining the value of the opportunity exactly equals the cost of obtaining that additional value.  The searcher should always choose to obtain the value of the opportunity associated with the minimum reservation value and terminate the search once the minimum value obtained so far is less than the minimum reservation value of any of the remaining opportunities.  For revenue maximization, the reservation value should satisfy equation \ref{eq:optimal}(b) and values should be obtained according to descending reservation values, until a value greater than any of the reservation values of the remaining opportunities is found.
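As an illustration of Equation \ref{eq:optimal}(b), the reservation values for the investments of Table \ref{table:example} can be computed numerically; the bisection solver and its bounds below are our own illustrative choices, not part of the model:

```python
# Reservation values from Eq. (1b): c = E[max(X - r, 0)], where X is the
# discrete reward of the investment.  Solved by bisection, since the
# right-hand side is decreasing in r.

def reservation_value(cost, rewards, lo=-1e6, hi=1e6, tol=1e-9):
    def surplus(r):                      # E[max(X - r, 0)]
        return sum(p * max(x - r, 0.0) for p, x in rewards)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if surplus(mid) > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

r_alpha = reservation_value(15, [(0.5, 100), (0.5, 55)])
r_omega = reservation_value(20, [(0.2, 240), (0.8, 0)])
print(r_alpha, r_omega)   # ~70 and ~140: omega's reservation value is
                          # higher, so it should be researched first
```

This reproduces the counter-intuitive ordering of the example: although $\alpha$ dominates $\omega$ by the standard economic criteria, $\omega$ has the higher reservation value and is opened first.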

%TODO: for the journal, explain in depth the solution and its mayopic nature.

%The optimal strategy, $S^*$, is thus the sequence of reservation values $\{r_1,...,r_n\}$. 

One important and non-intuitive property of the above solution is that the reservation value calculated for each opportunity does not depend on the number and properties of the other opportunities, but rather on the distribution of the value of the specific opportunity and the cost of evaluating it.  

Sequential search problems provide a good, and important, arena for investigating whether restructuring the problem is preferred over supplying the optimal search strategy to the decision-maker.  As evidenced in the results section (and in prior literature \cite{chalamish2008programming,Rosenfeld09}), both people and the agents they program fail to follow the optimal strategy when engaged in sequential search.  Supplying the optimal solution to the searcher may require extensive argumentation and effort because of the counter-intuitive nature of the optimal solution, in particular its myopic nature and the fact that it often favors risky opportunities \cite{Weitzman1979}.  A possible way to persuade a person that this is the optimal strategy is by giving her the optimality proof, but that proof is relatively complex and requires a strong background in mathematics and search theory.  A possible way to persuade an agent that this is the optimal strategy is by calculating the expected value of every other sequence of decisions and comparing it with the expected outcome of the optimal strategy. 
%%TODO: for the journal version:
%However, unlike the strategy itself, which is trivial to calculate, the expected value calculation is complex *** Dudi: should I give the equation for this? It is a very complex equation and requires a lot of explanations ***. 
However, the number of possible decision sequences for which the expected outcome would need to be calculated is infinite in the case of continuous value distributions, and exponential (combinatorial) for discrete probability distribution functions.
For $N$ opportunities there are $O(N!)$ possible orderings.  Since each opportunity can take any of $|V|$ possible values, each ordering admits up to ${|V|}^N$ distinct contingent strategies, so the total number of possible strategies is $O(N!\cdot{|V|}^N)$.
%In this case the strategy cannot even be expressed as even its representation is exponential (unless we use the set of reservation values to represent the strategy).
Thus, both these methods for proving optimality have substantial overhead.  In contrast, the problem restructuring approach can improve performance without requiring such complex persuasion.


\section{Agent-based Methodology} \label{sec:methodology}

We used computer agents rather than people to test the effectiveness of the general approach of restructuring the problem space and of the heuristics for manipulating the choices presented for several  
reasons.    First, from a methodological perspective, this approach enables the  
evaluation to be carried out over thousands of different search problems, substantially improving the statistical quality of the results.  Even more importantly, it eliminates people's computational and memory limitations as possible causes of inefficiency.  The inefficiency of the agents' search is fully attributable to their designs, and results from the agent designers' limited knowledge of how to reason effectively in search-based environments.

Second, from an applications perspective, the ability to improve the performance of agents  for search-related applications and tasks, especially in eCommerce, could significantly affect future markets.   The importance and role of such agents have been growing rapidly.  Many search tasks are delegated to agents that are designed and controlled by their users (e.g., comparison shopping).  Many of these agents use non-optimal strategies.  Once programmed, their search strategy cannot be changed externally, but it can be influenced by restructuring of the search problem.

Finally, the results from agent-based evaluations may be useful in predicting the way the proposed heuristics would affect people's search.  Some prior research has shown close similarities between a computer agent's decisions and the decisions made by people in similar settings \cite{Rosenfeld09}, in particular in search-based settings \cite{chalamish2008programming}. 

\section{Heuristics} \label{sec:heuristics}

In this section, we define the three problem-restructuring heuristics used in our investigations: Information Hiding, Mean Manipulation and Adaptive Learner.   These heuristics differ primarily in whether they adapt to a searcher's strategy.  The first two heuristics do not adapt; they assume no prior information about the searcher is available and apply a fixed set of manipulation rules.  The third heuristic uses information from a searcher's prior searches to classify it and decide which of the other two heuristics to use.  

For a manipulation heuristic to be considered successful, it must not only improve average overall agent performance, but also avoid significantly harming the performance of any individual agent.  
Notably, removing an opportunity is risky: if the removed opportunity is eventually needed, its absence will degrade performance rather than improve it.

In principle, the most effective heuristic for the economic search problem would be to disclose, at each step, only the opportunity whose value should be obtained next; this would give full control over the searcher.  This heuristic is not applicable in our case, because it supplies the searcher with new information (an opportunity) at each stage, whereas in our setting the searcher must receive the full set of alternatives initially.  Moreover, even under such incremental disclosure, there is no guarantee that the searcher will actually stop where we want it to stop.  For these reasons we did not include this heuristic in our research. 
%\TODO(done above) in the journal version discuss the issue of just giving people the right answer by disclosing on each step the box that needs to be open next. Say this is not applicable for our case because in that method the user doesn't get the full set of alternatives initially, but rather receives a new information (ne boxes) on each stage of the first. Even with that method, there is no guarantee he will actually stop where we want him to stop.


\subsection{The Information Hiding Heuristic}  

This heuristic removes from the search problem opportunities for which the probability that their value will need to be obtained according to the optimal strategy $s^*$ is less than a pre-set threshold $\alpha$.  By removing these opportunities, we prevent the searcher from choosing them early in the search, yielding a search strategy that is better aligned with the optimal strategy in the early, and more influential, stages of the search.  While the removal of alternatives is likely to worsen the performance of fully rational agents (ones that use the optimal strategy), the expected performance decrease  is small; the use of the threshold guarantees that the probability is relatively small that these removed opportunities are actually required in the optimal search.  

Formally, for each opportunity $B_i$ we calculate its reservation value, $r_i$, according to Equation \ref{eq:optimal}.  The probability of needing to obtain the value of opportunity $B_i$ according to the optimal strategy, denoted $P_{i}$, is given by $P_i=\prod_{r_j\leq r_i} P(v_j \geq r_i)$ for the cost minimization version of the problem, and $P_i=\prod_{r_j\leq r_i} P(v_j \leq r_i)$ for its revenue maximization version.  The heuristic omits from the problem every opportunity $B_i$ ($i\leq n$) for which $P_i\leq\alpha$.  
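A sketch of this computation for the cost-minimization version is given below. We assume discrete value distributions given as (probability, value) pairs and take the product over $j\neq i$; both the representation and that reading of the product are our own choices:

```python
# Information Hiding sketch (cost-minimization version): remove
# opportunities whose probability of being queried under the optimal
# strategy, P_i = prod_{r_j <= r_i, j != i} P(v_j >= r_i), falls below
# a threshold alpha.

def p_reached(i, res_values, dists):
    """Probability that opportunity i's value is obtained under s*."""
    p = 1.0
    for j, (r_j, dist) in enumerate(zip(res_values, dists)):
        if j != i and r_j <= res_values[i]:
            p *= sum(q for q, v in dist if v >= res_values[i])
    return p

def hide(opportunities, res_values, dists, alpha):
    """Return the restructured problem: only opportunities with P_i > alpha."""
    return [b for i, b in enumerate(opportunities)
            if p_reached(i, res_values, dists) > alpha]

res_values = [5.0, 10.0]                        # r_1 < r_2: B_1 is queried first
dists = [[(0.3, 12.0), (0.7, 4.0)], [(1.0, 20.0)]]
print(hide(["B1", "B2"], res_values, dists, 0.5))  # ['B1']: B_2 is rarely reached
```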

%TODO: for the journal paper. update the formula - make it more complex in a sinse that it's not just the probability we're going to reach this box but also that we'll use the value of the box later on.  This is very complex to calculate, so maybe we'll just give the mathematical formulation and not run the actual experiments.
%DUDI: please check:
A possible refinement of the information hiding heuristic would remove the opportunities whose values have a low probability of actually being used, rather than those with a low probability of being reached according to the optimal sequence.  This approach, however, is risky and sensitive, because it might remove opportunities that are essential.  For example, assume the searcher must choose between two opportunities whose benefits are uncertain, with only the benefit distributions known.  Opportunity $B_1$ yields a benefit of 10 with probability .5 and of $\epsilon$ with probability .5, while opportunity $B_2$ yields a benefit of 9 with probability .9 and no benefit with probability .1.  The cost of each opportunity is 1.  According to the optimal sequence, $B_1$ is examined before $B_2$ (although $B_2$ has the higher expected value), and the probability of reaching and using the value of $B_2$ is lower than that of $B_1$.  Following the suggested refinement, $B_2$ would therefore be removed, even though it is essential.    


\subsection{The Mean Manipulation Heuristic} \label{sec:heuristic2}

This heuristic addresses the problem that people tend to overemphasize mean values, reasoning about this one feature of a distribution rather than the distribution more fully.  Their search strategies typically choose to obtain the value of the opportunity for which the difference between its expected net value and the best value obtained so far is maximal.  We denote this strategy ``naive mean-based greedy search''.  Formally, denoting the mean of opportunity $B_i$ by $\mu_i$, searchers using the naive mean-based greedy strategy calculate for each opportunity the value $w_i=\mu_i-c_i$ (in the revenue maximization version) or $w_i=\mu_i+c_i$ (in the cost minimization version) and choose to obtain the value of opportunity $B_i=\arg\max_{B_j}\{w_j \mid B_j\in B' \wedge w_j\geq v\}$ (or  $\arg\min_{B_j}\{w_j \mid B_j\in B' \wedge w_j\leq v\}$  in the cost minimization version), where $v$ is the best value obtained so far.  

The heuristic restructures the problem such that $w_i$ of each opportunity $B_i$ in the restructured problem equals the reservation value $r_i$ calculated for that opportunity in the original problem.  This ensures that the choices made by agents that use the naive mean-based greedy search strategy for the restructured problem are fully aligned with those of the optimal strategy for the original problem. The restructuring is based on assigning to each opportunity $B_i$ ($0<i\leq n$) a revised probability distribution function $f_i^{'}$ such that $w^{'}_i=r_i$, where $r_i$ is the reservation value calculated according to Equation \ref{eq:optimal}.  The manipulation of $f_i$ is simple, as it only requires allocating a large mass of probability around $\mu_i^{'}$ (the desired mean, satisfying $w_i^{'}=r_i$).  The remaining probability can be distributed along the interval such that $\mu_i^{'}$ does not change.
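A minimal sketch of this restructuring for the revenue-maximization version, where the target mean is $\mu_i'=r_i+c_i$, is shown below. The three-point shape and the `mass`/`spread` parameters are illustrative choices of ours, not prescribed by the heuristic:

```python
# Mean Manipulation sketch: build a revised discrete distribution whose
# mean mu' = r + c, so that the naive greedy score w' = mu' - c equals
# the reservation value r (revenue-maximization version).

def manipulate(r, c, mass=0.9, spread=1.0):
    """Place a large probability mass at mu' and split the remainder
    symmetrically around it, leaving the mean unchanged."""
    mu = r + c
    rest = (1.0 - mass) / 2.0
    return [(mass, mu), (rest, mu - spread), (rest, mu + spread)]

dist = manipulate(r=70.0, c=15.0)
mean = sum(p * v for p, v in dist)
print(mean)   # ~85.0, hence w' = mean - c = 70.0 = r
```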



\subsection{The Adaptive Learner Heuristic}  \label{sec:heuristic3}

The adaptive learner heuristic attempts to classify the strategy of a searcher and uses this classification to determine the best problem restructuring method to apply.  For this purpose we need a representative strategy for each strategy class, one that has all the typical characteristics of strategies in the class.  The heuristic thus requires the development of agents, denoted ``class-representing agents''.  Each class-representing agent employs the representative strategy of its class.  The measure of similarity between any searcher's strategy and a given class is the relative distance between its performance and the performance of the class-representing agent for that class over the same set of problems.  The searcher is classified as belonging to the class for which the relative performance distance to its representing agent is minimal, and below a threshold $\gamma$.  Otherwise, it is classified as belonging to a default class.   Once the searcher is classified, the restructuring heuristic for its class can be applied.  The use of the threshold $\gamma$ assures that for any agent that cannot be accurately classified, a default manipulation heuristic is used, one that guarantees no substantial degradation in the performance of the agent.  The adaptive learner heuristic is given in Algorithm \ref{alg:1}.  


The basic requirements for the default manipulation heuristic are to improve overall average agent performance and, even more importantly, to guarantee minimal degradation of any individual agent's performance.



\begin{algorithm}
\begin{algorithmic}[1]
\caption{Adaptive Learner}  \label{alg:1}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}

\REQUIRE 
$O$ - Set of prior problem instances.\newline
$S$ - Set of strategies.\\
$Threshold$ - classification threshold.\\

\ENSURE
$s^*$ - the classification strategy for the searching agent ($null$ if not classified or no previous data).\\

\STATE Initialization: $d_s\leftarrow 0$ $\forall s\in S$

\FOR {every $o\in O$}

\FOR {every $s\in S$}

\STATE $d_s\leftarrow d_s + \frac{\|Performance_{agent}(o)-Performance_{s}(o)\|}{Performance_{s}(o)}$ 

\ENDFOR

\ENDFOR

\IF {$\min_s\{d_s\}\leq Threshold$}
\RETURN $\arg\min_s(d_s)$
\ELSE {

\RETURN {$null$}}

\ENDIF

\end{algorithmic}
\end{algorithm}


The algorithm receives as an input the results of prior searches and a set $S$ of strategy classes.  The function $Performance_{s}(o)$ returns the performance of the class-representing agent $s\in S$ given the problem instance $o$ (where $Performance_{agent}(o)$ is the performance of the searcher being classified based on $o$).  The algorithm returns the strategy $s^*$ to which the agent is classified (or $null$, if none of the distance measures are below the threshold set).  Based on the strategy returned we apply the manipulation heuristic which is most suitable for this strategy type (or the default manipulation if  $null$ is returned).
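Algorithm \ref{alg:1} can be transcribed directly; in the sketch below the performance records are plain dictionaries, a simplification of ours rather than the paper's interface:

```python
# A direct transcription of Algorithm 1 (Adaptive Learner): classify a
# searcher by the relative distance between its performance and that of
# each class-representing agent over shared prior problem instances.

def classify(agent_perf, class_perf, threshold):
    """agent_perf: {instance: value}.  class_perf: {strategy: {instance: value}}.
    Returns the closest strategy class, or None if no distance is below
    the threshold."""
    distances = {}
    for s, perf in class_perf.items():
        distances[s] = sum(abs(agent_perf[o] - perf[o]) / perf[o]
                           for o in agent_perf)
    best = min(distances, key=distances.get)
    return best if distances[best] <= threshold else None

agent = {1: 10.0, 2: 20.0}                     # searcher's past performance
classes = {"optimal": {1: 9.0, 2: 18.0},       # class-representing agents
           "greedy": {1: 30.0, 2: 60.0}}
print(classify(agent, classes, 0.5))   # optimal
print(classify(agent, classes, 0.1))   # None: agent falls to the default class
```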

The results we report in this paper are based on two strategy classes: optimal strategy and naive mean-based greedy search strategy.\footnote{We use the optimal strategy class as a means for representing the class of strategies that are better off without applying problem restructuring.  The optimal strategy is part of this class, though, as reported in the evaluation section, none of the agents we evaluated actually used the optimal strategy.}   For a searcher that cannot be classified as one of these two strategies, we use the information hiding manipulation.  To produce the functionality $Performance_{s}(o)$ required in Algorithm \ref{alg:1}, we developed the following two agents:
\begin{itemize}
%\vspace{-5pt}
\item {\emph {Optimal Agent}}. This agent follows Weitzman's optimal solution \cite{Weitzman1979}.
\item {\emph{Mean-based Greedy Agent}}. This agent follows the naive mean-based greedy search strategy described in Subsection \ref{sec:heuristic2}.
\end{itemize}

The more observations of prior searcher behavior that the adaptive heuristic's algorithm is given, the better the classification it can produce, and consequently the better the searcher's performance is likely to be after  the appropriate manipulation is applied.   Obviously, an even greater improvement in performance could be obtained if the heuristic had access to the searcher (i.e., the agent) rather than just the records describing prior searches.  If direct access to the agent is allowed, a straightforward improvement of the method would be to execute the agent on the problem as restructured by each of the different methods, and to classify it according to the results. 


%TODO for the journal paper - test what are the results iw we actually apply the manipulation techniques directly on the problem faced so far and let the agents deal with them and see what's best for it.

\subsection{Developed Agents}
For evaluation purposes we developed three types of agents: an optimal agent, a random agent and an agent that follows a mean-based strategy (pure mean agent).
\subsubsection{Optimal Agent}
Based on Weitzman's solution for Pandora's problem \cite{Weitzman1979} we constructed the optimal strategy.
The reservation value of each server is calculated according to:

\begin{equation}
r_i=c_i+\int_{y=0}^{r_i} yf(y)dy + r_i \int_{y=r_i}^{\infty} f(y)dy
\end{equation}

Now since $\int_{y=r_i}^{\infty} f(y)dy=1-F(r_i)$ we obtain:
\begin{equation}
r_iF(r_i)=c_i+\int_{y=0}^{r_i} yf(y)dy
\end{equation}

Using integration by parts we eventually obtain:
\begin{equation}
c_i=\int_{y=0}^{r_i} F(y)dy
\end{equation}

Now notice that for the multi-rectangular distribution function we have $F(x)=\sum_{i=1}^{j-1}P_i+P_j(x-x_{j-1})/(x_j-x_{j-1})$, where $j$ is the rectangle that contains $x$ and each rectangle $i$ is defined over the interval $(x_{i-1},x_i)$. Therefore we obtain:

\begin{align}
c_i&=\int_{y=0}^{r_i} \Big(\sum_{i=1}^{j-1}P_i+\frac{P_j(y-x_{j-1})}{x_j-x_{j-1}}\Big)dy\\ \nonumber
&= \sum_{k=1}^{j-1}\int_{y=x_{k-1}}^{x_k}\Big(\sum_{i=1}^{k-1}P_i+\frac{P_k(y-x_{k-1})}{x_k-x_{k-1}}\Big)dy+ \int_{y=x_{j-1}}^{r_i}\Big(\sum_{i=1}^{j-1}P_i+\frac{P_j(y-x_{j-1})}{x_j-x_{j-1}}\Big)dy\\ \nonumber
&= \sum_{k=1}^{j-1}\Big((x_k-x_{k-1})\sum_{i=1}^{k-1}P_i+\frac{P_k(x_k-x_{k-1})^2}{2(x_{k}-x_{k-1})}\Big)+ (r_i-x_{j-1})\sum_{i=1}^{j-1}P_i+\frac{P_j(r_i-x_{j-1})^2}{2(x_{j}-x_{j-1})}\\ \nonumber
&= \sum_{k=1}^{j-1}\Big((x_k-x_{k-1})\sum_{i=1}^{k-1}P_i+\frac{P_k(x_k-x_{k-1})}{2}\Big)+ (r_i-x_{j-1})\sum_{i=1}^{j-1}P_i+\frac{P_j(r_i-x_{j-1})^2}{2(x_{j}-x_{j-1})}
\end{align}

From the above equation we can extract $r_i$ which is the reservation value of this server.
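The extraction of $r_i$ can be done numerically: $G(r)=\int_0^r F(y)dy$ is piecewise quadratic and increasing, so one can scan the rectangles and solve a quadratic in closed form. The sketch below is our own implementation of this idea:

```python
# Extract r_i from c_i = \int_0^{r_i} F(y) dy for a multi-rectangular
# distribution with bounds xs = [x_0, ..., x_n] and masses ps = [P_1, ..., P_n].

import math

def reservation_value(cost, xs, ps):
    cum_p, g = 0.0, 0.0                  # F and G at the current segment start
    for k in range(len(ps)):
        w = xs[k + 1] - xs[k]
        g_end = g + cum_p * w + ps[k] * w / 2.0
        if g_end >= cost:                # r_i lies inside this segment
            # solve a*t^2 + b*t = cost - g for t = r_i - x_k
            a, b = ps[k] / (2.0 * w), cum_p
            t = ((-b + math.sqrt(b * b + 4 * a * (cost - g))) / (2 * a)
                 if a > 0 else (cost - g) / b)
            return xs[k] + t
        cum_p += ps[k]
        g = g_end
    raise ValueError("cost exceeds the integral of F over the support")

# Uniform distribution on [0, 10] with query cost 0.2: r solves r^2/20 = 0.2
print(reservation_value(0.2, [0.0, 10.0], [1.0]))   # ~2.0
```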


 
\section{Evaluation} \label{sec:Experimental Design}

To evaluate the three heuristics and our hypothesis that agents' performance in search-based domains can be improved by restructuring the search problem, we used a search domain called ``job-assignment''.
  Job-assignment is a classic server-assignment problem in a distributed setting that can be mapped to the general search problem discussed in this paper.  
  The problem considers the assignment of a computational job for execution to a server chosen from a set of homogeneous servers.  
  The servers differ in the length of their job queue. Only the distribution of each server's queue length is known.  
  To learn the actual queue length of a server, it must be queried, an action that takes some time (server-dependent).    
  The job can eventually be assigned only to one of the servers that were
  queried (a server cannot be assigned unless it is first queried).  
  Because it is a sequential search problem it is impossible to use parallel
  querying. The goal is to find a querying strategy that minimizes the overall time until the job starts executing.  
  The mapping of this problem to the sequential search problem (in its cost minimization variant) is straightforward: each server represents an opportunity where its queue length is its true value and the querying time is the cost of obtaining the value of that opportunity.
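Under this mapping, the optimal querying process can be sketched as follows (our own illustration of the reservation-value rule for the cost-minimization variant; the reservation values are assumed to be precomputed, e.g., as in the previous section):

```python
# Job-assignment search under the optimal rule: query servers in
# ascending reservation-value order, stopping once the best queue found
# is below every remaining reservation value.  Each server is a
# (query_time, reservation_value, true_queue_length) triple.

def assign_job(servers):
    total_query_time, best_queue = 0.0, float("inf")
    for q_time, r, queue in sorted(servers, key=lambda s: s[1]):
        if best_queue <= r:        # stopping rule: no server is worth querying
            break
        total_query_time += q_time
        best_queue = min(best_queue, queue)
    return total_query_time + best_queue   # overall time until execution

# The second server is never queried: the first queue found (3) is
# already shorter than the second server's reservation value (8).
print(assign_job([(1.0, 5.0, 3.0), (2.0, 8.0, 1.0)]))   # 4.0
```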

%For the journal: mention also that a server cannot be assigned unless it is first queried. Also, explain that it is obvuiys that it is impossible to use parallel querying. (DONE ABOVE)



%Formally we use S=$\{S_1,...,S_n\}$ to denote the set of $N$ servers, the querying time for server $i$ is denoted $C_i$, and the actual time it takes for a job to wait in server $i$'s queue for execution is $T_i$. The goal is to minimize the overall time until execution, calculated as: $\sum_{(i=1)}^k C_i+\min(T_1..T_k)$.



\subsection{Agent Development}

The evaluation used agents designed by computer science students in a core Operating Systems course.  While this group does not represent human searchers in general, it fairly represents future agent developers who are likely to design the search logic for eCommerce and other computer-aided domains.  As part of her regular course assignment, each student created an agent that receives as input a list of servers, their waiting-time distributions and querying costs (times); queries the servers (via a proxy program) to learn their associated waiting times; and then chooses one of them for executing a (dummy) program.  %While the primary goal of the exercise was to test several programming aspects, 
The students' grade in the assignment was correlated with their agent's performance, i.e., the time it takes until the program is executed on one of the servers.  As part of their assignment, students provided documentation that described the algorithm used for managing the search for a server.  

An external proxy program was used to facilitate communication with the different servers.  The main functionality of the proxy was to randomly draw a server's waiting time, based on its distribution, if queried, and to calculate the overall time elapsed from the beginning of the search until the program is assigned to a server and starts executing (i.e., after waiting in the server's queue).  
To simplify the search problem representation, distributions were formed as multi-rectangular distribution functions.  In a multi-rectangular distribution function, the interval is divided into sub-intervals $\{x_0,\ldots,x_n\}$ and the probability distribution is given by $f(x) =\frac{P_i}{x_i-x_{i-1}}$ for  $x_{i-1}<x<x_i$ and $f(x)=0$ otherwise, where $\sum_{i=1}^n P_i=1$. The benefit of using a multi-rectangular distribution function is its simplicity and modularity, in the sense that any distribution function can be closely approximated by it with a small number of rectangles.  
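The proxy's random draw from such a distribution amounts to inverse-CDF sampling; the sketch below is our own reconstruction (the `rng` parameter is an addition for testability):

```python
# Inverse-CDF sampling from a multi-rectangular distribution with bounds
# xs = [x_0, ..., x_n] and masses ps = [P_1, ..., P_n].

import random

def draw(xs, ps, rng=random):
    u, cum = rng.random(), 0.0
    for k, p in enumerate(ps):
        if u <= cum + p:                 # u falls inside rectangle k+1
            frac = (u - cum) / p         # uniform position within it
            return xs[k] + frac * (xs[k + 1] - xs[k])
        cum += p
    return xs[-1]                        # guard against rounding at u ~ 1

random.seed(0)
sample = draw([0.0, 1.0, 3.0], [0.5, 0.5])
print(0.0 <= sample <= 3.0)   # True: samples stay within the support
```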


\subsection{Analysis Methodology}

The agents that the students developed were executed on a set of problems with the full set of search choices and no restructuring.  The problems in the set varied in their characteristics (e.g., number of opportunities, characteristic of distribution functions, querying costs).  Each agent was then run on each problem restructured according to the different restructuring heuristics.  The performance of the agents was logged.  In parallel, the class-representing optimal and mean-based greedy agents were executed over the same problem set.  The results obtained by the students' agents on the non-manipulated problem set were compared with their results on the restructured problems.  The results of the optimal agent were used as a baseline for evaluating the improvement achieved by each of the restructuring heuristics.  The results of the students' agents were also used for improving the characterization of the different agents and their division into clusters.
  
A similar analysis was performed using the average rankings of the different agents in place of their average performance.
Results were tested for statistical significance using a t-test (with $\alpha=0.05$), whenever applicable. 

The designs of the students' agents were also analyzed to identify a set of common search strategy characteristics.  We then looked for common features among agents that performed similarly. 

\subsection{Performance Measures}

The evaluation of the different heuristics used two complementary measures: (1) the relative decrease in the time until the job is executed; and (2) the relative reduction in search inefficiency. Formally, we denote the expected time until execution for the optimal search strategy by $t_{opt}$ and the time until execution for an agent on the manipulated and non-manipulated problem by $t_{man}$ and $t_{\neg man}$, respectively.  The first measure, calculated as $\frac{t_{\neg man}-t_{man}}{t_{\neg man}}$, relates directly to the time saved.  It depends on the problem set, because $t_{\neg man}$ can vary widely.  The second measure, calculated as $\frac{t_{\neg man}-t_{man}}{t_{\neg man}-t_{opt}}$, takes into account that the search time using either the manipulated or original data is bounded from below by the performance of the optimal agent.  It thus highlights the efficiency of the heuristic in improving performance. 
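The two measures can be written as small helper functions (a sketch; the function names are ours). The second is expressed here via the fraction of the original inefficiency that remains, whose complement is the reduction reported in the results:

```python
def time_saved(t_not_man, t_man):
    """Measure (1): relative decrease in time until execution."""
    return (t_not_man - t_man) / t_not_man

def remaining_inefficiency(t_not_man, t_man, t_opt):
    # Fraction of the original search inefficiency that remains after
    # manipulation (0 = optimal performance, 1 = no improvement).
    return (t_man - t_opt) / (t_not_man - t_opt)

def inefficiency_reduction(t_not_man, t_man, t_opt):
    """Measure (2): relative reduction in search inefficiency."""
    return 1.0 - remaining_inefficiency(t_not_man, t_man, t_opt)
```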

For each of  the two measures, the average over the entire set of problem instances and across all agents was calculated from both social and individual perspectives. For the social perspective, we calculated the relative improvement in both measures over the aggregated times obtained for all agents in all problems.  For the individual perspective, we calculated the average of individual improvements for both measures.  For each evaluated heuristic the maximum decrease in individual average performance was also identified, because an important requirement for a successful heuristic is that it does not substantially worsen any of the agents' individual performance. 

%Social: Relative improvement in overall performance and overhead (2) - this is social welfare. Average relative performance in individual average saving in overhead (cross agent) - this is the average individual welfare (3).

\section{Results and Analysis} \label{sec:results}

Seventy-six agents, each designed by a different student, were used to evaluate the heuristics.  The test set used for evaluation consisted of 5000 problems, generated with a random number of servers in the range $(2,20)$, querying costs drawn uniformly from the range $(1,100)$, and a multi-rectangular distribution function for each server, generated by randomly setting a width and probability for each rectangle and then normalizing to the interval $(0,1000)$.  
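A generator for such problem instances might look as follows (a sketch under the stated parameters; the number of rectangles per distribution and the tuple layout are our assumptions):

```python
import random

def generate_problem(rng=random, n_rects=4):
    """Generate one test problem: a list of (server_id, query_cost,
    breaks, probs) tuples describing each server's waiting-time
    distribution as a multi-rectangular density on (0, 1000)."""
    n_servers = rng.randint(2, 20)
    problem = []
    for sid in range(n_servers):
        cost = rng.uniform(1, 100)
        # Random rectangle widths, normalized so breakpoints span (0, 1000).
        widths = [rng.random() for _ in range(n_rects)]
        scale = 1000.0 / sum(widths)
        breaks = [0.0]
        for w in widths:
            breaks.append(breaks[-1] + w * scale)
        # Random rectangle probabilities, normalized to sum to 1.
        probs = [rng.random() for _ in range(n_rects)]
        total = sum(probs)
        probs = [p / total for p in probs]
        problem.append((sid, cost, breaks, probs))
    return problem
```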

In this section we present the main analysis carried out over these agents using this problem set.  The results using two other problem sets are given in Subsection \ref{sec:differentSet}.  


\subsection{Agent Strategy}

The strategies students used reveal several characteristics along which agent designs vary when the designers are programmers who are not search experts.    
Figure \ref{fig:mapping} presents the result of a manual classification of the agents according to the search strategy characteristics reflected in their documentation. The columns ``mean'', ``variance'', and ``median'' indicate that the agent strategy takes into consideration, to some extent, the expected value, the variance, or the median of each server, respectively.  The column ``cost'' indicates consideration of the time cost of querying servers.  The column ``random'' indicates the use of some kind of randomness in the agent's decision-making process.  The column ``subset'' indicates a preliminary selection of servers for querying.  The column ``acc. cost'' indicates the inclusion of the cost incurred so far (i.e., ``sunk cost'') in the decision-making process.  Finally, the ``probability'' column indicates the use of the probability of finding a server with a lower waiting time than the minimum found so far in the agent's considerations.

Our analysis of the agents using program documentation (and occasionally the code itself) revealed several problem features commonly used in their search strategies, including the expected value (in 41 of the agents), variance (in 6 of the agents), and the median (in 2 of the agents) of each server.  Additional factors used in some designs were the time cost of querying servers (in 37 of the agents), randomness in the decision-making process (in 11 of the agents), a preliminary selection of servers for querying (in 57 of the agents), the inclusion of the cost incurred so far (i.e., ``sunk cost'') in the decision-making process (in 4 of the agents) and the use of the probability of finding a server with a lower waiting time than the minimum found so far (in 2 of the agents).


\begin{figure}
	\centering
		\includegraphics[scale=0.5]{mappingAll.pdf}
\vspace{-20pt}		\caption{Strategy characteristics}
\vspace{-15pt}
	\label{fig:mapping}
\end{figure}

Several interesting observations may be made based on these characteristics.  First, many of the agents use the mean waiting time of a server as a parameter that directly influences the search strategy, even though the optimal strategy is not directly affected by means (see Section \ref{sec:model}). Second, a substantial number of agents (39 of 76) do not take the cost of search into account in their strategy.  One possible explanation for this phenomenon is that the designers of these strategies considered cost to be of very little importance in comparison to the mean waiting times.  Interestingly, several students (11 of 76) use randomization in their search strategy, even though, as explained in Section \ref{sec:model}, randomness is not useful for these problems (and plays no role in the optimal strategy).

The average performance of the different agents on the 5000 test problem instances is given in Figure \ref{fig:compare all}.  The vertical axis represents the average overall time until execution and the horizontal axis is the agent id.  The two horizontal lines in the figure represent the performance of an agent searching according to the optimal strategy and of an agent using the random selection rule ``query a random number of randomly chosen servers and assign the job to the server with the lowest waiting time found''.  As can be seen from the figure, none of the students' agent strategies reached the performance of the optimal strategy.  The average overall time obtained by the agents is 445.77, while that of the optimal agent is 223.1.  Furthermore, many of the strategies (41 out of 76) did even worse than a random agent.

We attempted to identify clusters of agents, based on agent performance and characteristics of agent design, as identified by our analysis of agent designs.   The following clusters emerged from this assessment:
\begin{itemize}
\item The naive mean-based greedy search strategy and its variants (e.g., agents 3-7).
\item Mean-based approaches that involve preliminary filtering of servers according to means and costs (e.g., agents 15-17).
\item A variation of the naive mean-based greedy search strategy that also takes the variance of each server as a factor (e.g., agents 22-23).
\item Querying the two servers with the lowest expected queue length and assigning the job to the one with the minimum value found (e.g., agents 24-27).
\item Assigning the job to the first/last/random server (e.g., agents 42-71). %TODO: for the journal discuss the fact that these strategies can be classified as ``satifiers''
\end{itemize}
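As an illustration, the first cluster's naive mean-based greedy strategy can be sketched roughly as follows (one plausible variant; the exact stopping rule differed across student agents):

```python
def mean_greedy(servers, query):
    """Naive mean-based greedy search (sketch).

    servers: list of (server_id, expected_wait, query_cost) tuples.
    query:   callable returning the actual waiting time of a server.
    Returns the chosen server and its revealed waiting time.
    """
    order = sorted(servers, key=lambda s: s[1])  # ascending expected wait
    best_id, best_val = None, float("inf")
    for i, (sid, mean, cost) in enumerate(order):
        val = query(sid)
        if val < best_val:
            best_id, best_val = sid, val
        # Stop once the best value found beats the next server's
        # expected waiting time plus its querying cost.
        if i + 1 < len(order):
            nxt_mean, nxt_cost = order[i + 1][1], order[i + 1][2]
            if best_val <= nxt_mean + nxt_cost:
                break
    return best_id, best_val
```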

For many agents, similarities in performance could not be explained by resemblances among the strategies themselves.  Although in some cases agents used different variants of the same basic strategy, the differences among the variants apparently resulted in substantial differences in performance.  The most interesting and unique strategies deployed include: (a) summing the costs of the servers already queried and, based on this sum, deciding whether to continue to the next server or to assign the job to the server with the lowest execution time found so far; and (b) taking the 10\% of the servers with the highest variance and querying them one by one until the actual value of one of them is less than that of the server with the minimal expected value.  

There was one major distinction between agent designs that led us to separate out a group of agents.  Although many of the agents used search strategies that took into account information obtained during the search, a significant number did not follow any sequential decision-making rule, but rather queried only one server chosen arbitrarily.  While any selection rule was considered legitimate for the students' assignment, strategies of the latter type are not true search strategies.  Because these strategies are simple for the adaptive learner to identify (they choose a single server according to a simple pattern) and a simple problem-restructuring method could easily improve their behavior (provide only one choice: the server with the minimum sum of expected waiting time and querying time), we removed the results for these agents (agents 32-75) from the main analyses given in this paper.   Had they been included in the analysis, the improvement in the average overall performance of the adaptive agent reported in the following subsections would have been substantially greater.



\begin{figure}
	\centering
		\includegraphics[scale=0.4]{original_all.pdf}
%\vspace{-10pt}
		\caption{Agent performance without any manipulation (to make the results easier to follow, agent IDs are ordered by performance)}
%	\vspace{-10pt}
	\label{fig:compare all}
\end{figure}



In addition to the manual classification, we used automated classification: we gave the agents a set of problems to solve and, based on their performance, identified distinctive behaviors.  Figure \ref{fig:rank} compares agent performance measured by average time with performance measured by average rank.  The time-based measure is the average time each agent took over the problem set; for the rank-based measure, on every problem in the set the agents were ranked according to their performance, and the average rank of each agent was then calculated.  In general, the two measures are correlated; however, there are three places, marked in Figure \ref{fig:rank}, where they are inconsistent.  We examined the strategies at these three locations manually to understand the reason for the inconsistency.  The most interesting inconsistency is the first one, since it occurs in the region of relatively well-performing agents.  Both the agents of the first cluster and the agents ranked around it are mean-based; the difference between them is the stopping rule.  The agents of the first cluster terminate the search if the queried value of the current server is smaller than the next server's expected value plus its querying time, whereas the agents around the cluster terminate the search if the minimal value found so far is smaller than the next server's expected value plus its querying time.  Evaluated by average time, the agents of the first cluster appear to perform the same as the classic pure mean-based agents; the rank-based measure, however, makes it noticeable that they actually did worse.  
The average-time measure failed to distinguish the first cluster from the agents around it because, over the relatively large number of problems (5000), the average difference between them was small.  The agents of the first cluster, however, consistently received lower ranks than the surrounding agents, which is why the difference is noticeable under the rank-based measure.  
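The rank-based measure can be computed as in the following sketch (ties are broken arbitrarily here for simplicity):

```python
def average_ranks(times):
    """Rank-wise measure: times[agent] is the list of that agent's times,
    one entry per problem; returns each agent's mean rank (1 = fastest)."""
    agents = list(times)
    n_problems = len(next(iter(times.values())))
    rank_sums = {a: 0.0 for a in agents}
    for k in range(n_problems):
        # Rank agents on problem k by their time (ties broken arbitrarily).
        order = sorted(agents, key=lambda a: times[a][k])
        for r, a in enumerate(order, start=1):
            rank_sums[a] += r
    return {a: rank_sums[a] / n_problems for a in agents}
```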

The second cluster also has a relatively lower ranking score compared to its average-time score. 
The common denominator between the second cluster and the agents around it is the search depth: both query at most three servers.  The only difference is that the agents of the second cluster always query three servers, whereas the agents around it first check whether the expected value of the next server is lower than the minimal actual value already revealed.  The difference in average time between the second cluster and the agents around it is relatively small, so based on this measure alone one might assume that their strategies are similar; the rank-based measure, however, makes it noticeable that the strategies differ.  The average-time measure did not pick up the difference because, on average, querying an additional server whose expected value is similar to that of a server already queried does not change the result drastically, whereas even a small improvement in time changes the rank.    

The third cluster likewise has a relatively lower ranking score compared to its average-time score. 
The agents around the third cluster pick a server at random and assign the job to it, whereas the agents of the third cluster pick the server with the maximum querying time and assign the job to it.  It is hard to tell why the students behind the third cluster chose this strategy, but it is clear why, measured by average time, it appears equivalent to the random-picking strategy, while measured by rank these agents did worst.

These cases illustrate how sensitive the evaluation is to the choice of performance measure: the time-based and rank-based measures capture complementary aspects of agent behavior, and relying on either one alone can mask real differences between strategies.

\begin{figure}
	\centering
		\includegraphics[scale=0.4]{rank.pdf}
%\vspace{-10pt}
		\caption{Time-based vs.\ rank-based performance measures}
%	\vspace{-10pt}
	\label{fig:rank}
\end{figure}

\subsection{Analysis of Information Hiding}
The threshold $\alpha$ used for removing alternatives from the problem instance is a key parameter affecting the ``Information Hiding'' heuristic.  Figure \ref{fig:alphaChange} depicts the average time until execution (over all agents, for the 5000 problem instances) for different threshold values.  For comparison purposes, it also shows the average performance on the non-manipulated set of problems, which corresponds to $\alpha=0$ (the horizontal line).   The shape of the curve has an intuitive explanation.  For small threshold values, increasing the threshold improves agent performance, as it further reduces the possible deviation from the optimal sequence.  As the threshold grows, however, so does the probability that the additionally removed opportunities are ones the optimal strategy would have examined.  
%TODO for journal - talk and present the theoretical bound that we get here.

As the graph in Figure \ref{fig:alphaChange} shows, the optimal threshold is $\alpha=10\%$, for which an average time of 299.33 is obtained. 
%TODO for the journal, put a graph with the bound (Avshalom sent by mail).
This graph also shows that for a large interval of threshold values around this point (in particular for $3\%\leq \alpha \leq 30\%$) the performance level is similar.  Thus, the improvement is not extremely sensitive to the exact value; relatively good performance may be achieved even if the $\alpha$ value used is not exactly the one that yields the minimum average time.  In fact, any threshold below $\alpha=55\%$ results in improved performance in comparison to the performance obtained without this manipulation heuristic (i.e., with $\alpha=0$).  
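One plausible reading of the heuristic, consistent with the description above though the exact removal rule is implementation-specific, is to hide every server whose estimated probability of being queried under the optimal sequence falls below the threshold:

```python
def hide_information(servers, query_prob, alpha=0.10):
    """Information-hiding restructuring (sketch): query_prob[s] is an
    offline estimate of the probability that server s is queried under
    the optimal strategy; servers below the alpha threshold are hidden."""
    return [s for s in servers if query_prob[s] >= alpha]
```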

Figure \ref{fig:information hiding} depicts the average reduction in search inefficiency (over the 5000 problem instances) of each agent with $\alpha=10\%$.  As the figure shows, this heuristic decreased the search inefficiency of 24 of the 32 agents.  The maximum improvement was obtained by agent 32 (80.18\%).  The average reduction (individual welfare) is 14.49\% and the overall reduction (social welfare) is 5.52\%.   The downside of this heuristic is that it increases the overhead of some of the agents' searches.  The highest increase  in the overhead of any agent was, however, minimal and equals 12.1\% (for agent 1), corresponding to an increase of 0.57\% in its average time until its job starts executing.


%Naturally, this heuristic is most effective for agents which strategy misplaces the querying sequence in comparison to the optimal one (in terms of the search sequence) since it removes all options that the probability of querying them according to the optimal sequence is small (e.g., agents 24-32 *** please check again ***).   
The main advantages of this strategy are that it improves the performance of most agents and that even in  cases in which an individual agent's performance degrades, the degradation is relatively small.  Thus, the heuristic is a good candidate for use as a default problem-restructuring heuristic whenever there is no information about searcher strategy or an agent cannot be classified accurately.


\begin{figure}
	\centering
		\includegraphics[scale=0.13]{differant_alphas.pdf}
	%	\vspace{-10pt}
		\caption{The effect of $\alpha$ in ``information hiding'' over average performance (cross-agents)}
	%	\vspace{-10pt}
			\label{fig:alphaChange}
\end{figure}

\begin{figure}
	\centering
		\includegraphics[scale=0.4]{original_information_Hiding.pdf}
%				\vspace{-10pt}
				\caption{Average reduction in the search inefficiency of information hiding for $\alpha=10\%$}		%\vspace{-10pt}
	\label{fig:information hiding}
\end{figure}

\begin{figure}
	\centering
		\includegraphics[scale=0.4]{original_information_manipulation.pdf}
%		\vspace{-10pt}
				\caption{Average reduction in the search inefficiency of Mean Manipulation}
	\label{fig:information manipulation}
\end{figure}

\begin{figure}
	\centering
		\includegraphics[scale=0.4]{original_adaptive.pdf}
		%\vspace{-10pt}		
		\caption{Average reduction in the search inefficiency of Adaptive Learner}
%				\vspace{-10pt}
	\label{fig:adaptive learner}
\end{figure}



\subsection{Analysis of Mean Manipulation}

Figure \ref{fig:information manipulation} depicts the average reduction in search inefficiency, using the same standard problem set, of each agent when using the ``Mean Manipulation'' heuristic.  With this heuristic, seven agents almost fully eliminated their search overhead.  These agents use variants of the naive mean-based greedy search strategy.  Other agents (e.g., 4, 7, 25) also benefited from this heuristic and substantially reduced the overhead associated with their inefficient search; all of them incorporated mean-based considerations, to some extent, in their search strategy.

This heuristic has a significant downside, however.  Ten agents did worse with the mean manipulation heuristic, 5 of them substantially worse. The search overhead of these agents, in comparison to optimal search, increased by 50-250\%.  With this heuristic the overall inefficiency (social welfare) actually increased by 2.5\% even though the average overhead decreased by 1.3\%, a classical case of Simpson's paradox \cite{Simpson:1951}.  The substantial increase in search overhead for some of the agents makes this heuristic inappropriate for general use.  It is, however,  very useful when incorporated into an adaptive mechanism that attempts to identify those agents that use mean-based strategies and applies this manipulation method on their input.



\subsection{Analysis of Adaptive Learner}

The adaptive learner offers the best of both worlds.  Figure \ref{fig:adaptive learner} depicts the average reduction in search inefficiency, using the standard problem set, of each agent when using the adaptive learner heuristic with $\gamma=10\%$.    For agents that were adversely affected by the mean-manipulation heuristic, the information-hiding manipulation was used instead, improving their performance.  Overall, the inefficiency (social welfare) decreased by 37.4\%.  The average reduction in individual inefficiency (individual welfare) is 39\%.  Out of the 32 agents, 5 slightly worsened their performance (a maximum 8\% increase in inefficiency, equivalent to a 1.3\% increase in that agent's expected waiting time).
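The per-agent dispatch at the heart of the adaptive learner can be sketched as follows (a simplification; the actual classifier and the precise role of $\gamma$ are not spelled out here):

```python
def pick_heuristic(history, default="information_hiding"):
    """Choose, per agent, the restructuring heuristic with the lowest
    average logged time over that agent's prior searches; fall back to
    information hiding when no history is available."""
    scored = {h: sum(ts) / len(ts) for h, ts in history.items() if ts}
    if not scored:
        return default
    return min(scored, key=scored.get)
```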


\subsection{Evaluation with Different Problem Sets} \label{sec:differentSet}

To show that the results were not due to a fortunate choice of problem instance characteristics, we repeated the evaluation with other distributions of queue lengths and different querying costs. Two additional problem sets were used: 
\begin{itemize}
\item Increased possible querying time (denoted ``Inc Quer''): same as the original set of problems, except that the querying time was taken from an interval that was three times as large (resulting in an increased ratio between querying time and possible waiting times in queue).
\item Increased queue time variance (denoted ``Inc Var''): same as the original set of problems, except that the possible waiting time interval was increased from 1,000 to 10,000 (resulting in a substantially increased variance in server waiting time in queue).
\end{itemize}
Each of these problem sets also contains 5000 different problems.

Table \ref{table:performance} presents the results obtained for the new problem sets in comparison to the original set.  As can be seen from the table, the improvement obtained from the different heuristics is consistent with the one obtained using the original set.


\begin{table}[ht]
	%		\vspace{-10pt}
	\centering
		\begin{tabular}{|p{4cm}|c|c|c|}
		\hline
			& Original	& Inc. Var	& Inc. Quer.\\
			\hline
Non-Manipulated & 322.3	& 2449.4	& 461.9\\\hline
Adaptive & 285.2 & 2203.0 & 388.1\\\hline
Optimal	& 223.1	& 1349.7 & 332.3\\\hline
Average individual improvement	& 10.2\%	& 9.7\%	& 10.5\%\\\hline
Overall (social) improvement	& 11.5\%	& 10.1\%	& 16.0\%\\\hline
Maximum individual performance decrease	& 2.2\%	& 2.0\%	& 1.9\%\\\hline
Average individual inefficiency reduction	& 39.0\%	& 29.0\%	& 46.5\%\\\hline
Overall inefficiency reduction &	37.4\%	& 22.4\%	& 56.9\%\\
\hline
		\end{tabular}
%			\vspace{-10pt}
	\caption{Performance for different classes of problems}
%	\vspace{-15pt}
	\label{table:performance}
\end{table}



\section{Related Work} \label{sec:related}

%****************** People have a problem in decision making


People are boundedly rational \cite{simon1972theories}, unlike computer agents, which may deploy rational strategies and are significantly less computationally bounded.  People cannot be relied upon to exhibit optimal behavior \cite{Rabin1998}.  Furthermore, they often tend not to use the optimal strategy even when one is provided \cite{Kahneman2000}. Their decision-making may be influenced by selective search, the tendency to gather facts that support certain conclusions while disregarding facts that support others \cite{blackhart2005individual}, and by selective perception, the screening-out of information that one does not think is important \cite{drake1993processing}.  Others \cite{thaler2008nudge} have attributed people's difficulty in decision-making to the conflict between a ``reflective system'' (e.g., involved in decisions about which college to attend, where to travel, and, in most circumstances, whether or not to get married) and an ``automatic system'' (e.g., the one that leads to smiling upon seeing a puppy, getting nervous during air turbulence, and ducking when a ball is thrown at you).


%****************** How to handle this problem.

Over the years a variety of work has addressed the challenge of improving people's decision-making, mostly by developing decision support systems to assist users in gathering, merging, analyzing, and using information to assess risks and make recommendations in situations that may require tremendous amounts of the users' time and attention \cite{376412}.  Recently, several approaches have been proposed that attempt to reconstruct the decision-making problem \cite{thaler2008nudge} instead of attempting to change people's decision-making strategies directly.  This prior work focused on psychological aspects of human decision-making, and does not involve any learning or adaptation. Furthermore, none of this prior work dealt with a sequential decision-making process.


%****************** Search Theory

The search model discussed in this paper, which considers an optimal stopping rule for individuals engaged in costly search (i.e., ones for which there is a search cost)  builds on economic search theory,\footnote{A  literature review of search theory may be found elsewhere \cite{McMillan94Rothschild}.} and in particular its sequential search model \cite{McMillan94Rothschild}.  While search theory is a rich research field, its focus is on the theoretical aspects of the optimal search strategy and it does not address the non-optimality of search strategies used by people or rationally-bounded agents. 



%****************** People programming agents

A range of research in multi-agent systems has examined people's use of agents designed to represent them and act on their behalf. For example, Kasbah \cite{chavez1996} is a virtual marketplace on the Web where people create autonomous agents to buy and sell goods on their behalf.  Various research has involved programming agents in the decision-theoretic framework of the Colored-Trails game \cite{Grosz2004}, in which agents had to reason about other agents' personalities in environments where agents are uncertain about each other's resources. In the Trading Agent Competition (TAC) \cite{tac}, agents are used to capture people's strategies.  Work involving people who design agents provides some evidence that people fail to build in the optimal strategy \cite{chalamish2008programming}, in particular in search-based environments \cite{Rosenfeld09}.  This work has not, however, provided methods for improving the performance of such agents through problem restructuring of any sort.  

%*********************************
\section{Discussion and Conclusions} \label{sec:conclusions}

The results reported in Section \ref{sec:results} are encouraging and provide a proof of concept for the possibility of substantially improving agent performance in sequential search by restructuring the problem space.  The extensive evaluation reveals that even with no prior information regarding an agent's strategy, a heuristic such as information hiding produces a substantial improvement in average performance while limiting the potential degradation of individual performance.  With even limited information about the prior search behavior of an agent, heuristics such as the adaptive learner can further improve overall performance and lower even further the possible decrease in individual agent performance.  These results were consistent across three different classes of search environments in extensive evaluations involving a large number of agents, each designed by a different person, and a large number of problems within each class.

Restructuring of the problem space is applicable in settings for which the optimal choice cannot be revealed but rather an optimal sequential exploration should be devised, and the optimal exploration strategy cannot be provided directly to the decision-maker or the decision-maker cannot easily be convinced of its optimality.  Instead, we can only control the information the decision-maker obtains in the problem. 

The problem restructuring technique has great potential for market designers (who also have the domain-specific information that can lead to more intelligent restructuring heuristics).  Consider, for example, large-scale Internet websites like \url{expedia.com} or \url{autotrader.com}.  These websites attempt to attract as many users as possible to increase their revenue from advertisements. Every listing for a flight or a car on these websites is an opportunity that needs to be explored further to realize its true value to the user.  The welfare of users, or of the agents they use, can thus be substantially improved by manipulating the listings.

The heuristic that provides the best performance is the adaptive learner.  % TODO: for journal: One important characteristic of this heuristic is its ability to quickly classify the strategy of the searcher, based on a relatively small number of records describing prior search attempts *** last sentence is still conditional in the graph Avshalom needs to produce ***.  
As our ability to recognize and differentiate additional strategy clusters and produce appropriate choice manipulations for them improves, we expect the performance improvement obtained by applying the adaptive strategy to increase even further.  The adaptive heuristic's architecture is modular, allowing its augmentation using the new manipulation heuristics to be straightforward.


The research reported in this paper is, to the best of our knowledge, the first to attempt to restructure the decision-making problem in order to improve performance in a sequential decision-making setting.  The fact that the searcher faces a sequence of decisions, while all manipulations of the choices take place before the process begins, substantially increases the complexity of designing such heuristics.  The search strategy used by the searcher is more complex in this setting because her decisions are also affected by the temporal nature of the problem and by the new data obtained sequentially.  The challenge faced by the manipulation designer is thus substantially greater than in one-shot decision processes, for which many simple and highly efficient manipulation techniques can be designed.  For example, if the searcher is limited to obtaining the value of only one opportunity overall, the simplest and most efficient manipulation is to remove all opportunities other than the optimal one. 


%The specific underlying application to which the heuristics presented directly apply, sequential search, is a core process in many real-life applications.  Its existence both in physical and virtual worlds suggests that the methods presented in this paper have the potential to affect both agents designed by programmers for various search tasks and humans performing search in physical domains.  While the latter contribution is not proven experimentally, it is supported by recent research showing that agents can be used to reliably reflect people's decision-making strategies \cite{chalamish2008programming}.  Furthermore, the use of the (potentially non-computationally bounded) programmed agent in search is equivalent to the use of standard computer programs for data processing (e.g, spreadsheets) by people for overcoming memory and computational bounds in search.  Overall, the use of agent-based methodology for this research has many advantages research-wise, as detailed in Section \ref{sec:methodology} and is recommended for future research in this field.   *** For example, the availability of the search strategy code enabled a better understanding of the strategy, even in cases where it documentation was a bit vague or high-level.  ***

A natural extension of this work involves developing heuristics that will choose the manipulation method to be applied not only based on agent classification but also based on problem instance characteristics.  This, of course, requires a more refined analysis in the agent level.  



