\section{Problem Definition}

In this section, we formally define the problem and the goals we are trying to
achieve.  We also discuss the threat model, i.e., which components of the system
are trusted and which are untrusted.

As mentioned earlier, a client asks a server to perform some computation
$f()$ over an input $x$. We assume that the client also specifies an upper
bound $t$ on the time the computation is allowed to take, and proposes an
amount of money $m$ that it is willing to pay for the computation.

The server may either perform the computation or decline the request. If the
server accepts, it attempts to compute the output $y = f(x)$ for at most
time $t$. If the computation does not finish within the specified time $t$,
the server sets the output to a placeholder value indicating that the
computation exceeded its allocated time. In this discussion, we assume that
$y = -1$ signifies this condition.

The server then returns the result $y$ to the client, and the client pays $m$
amount of money to the server.
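The server's bounded execution can be sketched as follows. This is a minimal
illustration only: the function \texttt{f()} here is a hypothetical stand-in
for the agreed-upon computation, and the names \texttt{run\_bounded} and
\texttt{TIMEOUT\_SENTINEL} are ours, not part of the protocol.

```python
import multiprocessing

TIMEOUT_SENTINEL = -1  # placeholder value: computation exceeded time t

def f(x):
    # Hypothetical stand-in for the computation agreed on by both parties.
    return sum(i * i for i in range(x))

def run_bounded(x, t):
    """Attempt y = f(x) for at most t seconds; return -1 on timeout."""
    with multiprocessing.Pool(processes=1) as pool:
        pending = pool.apply_async(f, (x,))
        try:
            return pending.get(timeout=t)
        except multiprocessing.TimeoutError:
            # Leaving the with-block terminates the worker process.
            return TIMEOUT_SENTINEL
```

In this sketch the worker process is simply killed when the budget runs out;
a real server would additionally record evidence that the full time $t$ was
spent, since that evidence is what the fairness goals below rely on.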

\subsection{Goals}

Our primary goal is to achieve fairness in the above process, i.e., neither
party can cheat to gain an advantage over the other.

From the server's perspective: if the server agrees to perform the computation
requested by the client, then it either calculates the desired result $f(x)$
within time $t$, or spends time $t$ trying to calculate the result without
finishing. In either case, having spent its computation time in this manner,
the server gets paid the amount $m$.

From the client's perspective: if the client pays the amount $m$, then it
either gets the desired result, or a proof that the time $t$ it specified
was not enough for the computation to finish.
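The two conditions can be condensed into a single settlement predicate. The
sketch below uses our own hypothetical names (\texttt{fair\_to\_pay},
\texttt{spent}); $y = -1$ is the timeout placeholder defined earlier.

```python
TIMEOUT_SENTINEL = -1  # placeholder value: computation exceeded time t

def fair_to_pay(y, spent, t):
    """The payment m is owed iff the server either produced a result,
    or provably spent the full time budget t before giving up."""
    if y != TIMEOUT_SENTINEL:
        return True        # client received the desired result f(x)
    return spent >= t      # client received a proof that t was too small
```

Note that the third case, where the server returns the placeholder without
having spent time $t$, owes no payment: this is exactly the cheating the
fairness goal rules out.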

We also desire correctness of the results, as well as privacy of the input
$x$ and the output $y$ of the computation.

\subsection{Adversary Model}

We assume that the client and the server can be physically located anywhere,
and they communicate with each other over a network. The network is
unreliable: there is no guarantee that data sent over the network will be
delivered unmodified, or delivered to the other party at all.

We also assume that both the client machine and the server machine are
controlled by potentially malicious operators. The operators can disconnect
their machines from the network at any time, potentially to gain an advantage
over the other party.

The operating system and other software components on both the client side and
the server side are also untrusted. 

However, we assume that the actual computation function $f()$ is trusted by
both parties. \todo{explain this. a predetermined set of computation functions}

\todo{correctness of the actual code in f()}

We also assume that the trust guarantees provided by the TPM manufacturer
hold. For example, we consider complex hardware-level attacks against the TPM
chip out of scope for this paper.

Also, we assume that the CPU manufacturer is trusted. 

We also assume that the high-bandwidth memory bus between the CPU and memory
cannot be monitored by an attacker. This assumption might not hold against
determined attackers, but we consider such attacks out of the scope of our
work.

We assume that the bitcoin network behaves honestly, i.e., more than half of
the computational power in the bitcoin network is controlled by honest nodes.
Given the total computational power of the bitcoin network, subverting it is
practically infeasible.
