\documentclass[11pt]{article}
\usepackage[draft,hylinks,notitlepage,full]{boaz}
\usepackage{subfigure}
\title{A Survey of the Game ``Lights Out!''}
\author{Jiajin Yu}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Macros for this paper

% BKMRK 0
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{document}




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% BEGIN BODY of Document


\begin{DOCheader}


  \begin{abstract}
    ``Lights Out!'' is an electronic game played on a $5\times 5$ grid where each
    cell is equipped with a button and an indicator light. Pressing a cell
    toggles its light and the lights of its rectilinearly adjacent
    neighbors. Given some initial configuration of lights, the goal of the game
    is to switch them all off. It is easy to see that this game generalizes to
    an arbitrary graph. In this paper, we survey the research on the game ``Lights
    Out!''. In particular, we give two proofs of the realizability of ``all-on to
    all-off'', study completely solvable graphs, and survey some optimization
    problems, such as finding the minimal number of presses needed to turn off
    all the lights and maximizing the number of off vertices when it is
    impossible to turn them all off.
  \end{abstract}
  \DOCkeywords{Combinatorial Game Theory, Linear Algebra, Graph Theory, ``Lights
    Out''}

\end{DOCheader}

\section{Introduction} \label{sec:intro} Consider an interesting one-player
combinatorial game named ``Lights Out''. The game is played on a $5\times 5$
grid where each cell is equipped with a button and an indicator light. Pressing
a cell toggles its light and the lights of its rectilinearly adjacent neighbors
(switching \texttt{ON} if it was \texttt{OFF}, and vice versa). Given some
initial pattern of lights, the goal of the game is to switch them all \texttt{OFF}
by pressing a set of buttons. Obviously, the game can be played on a grid of any
size, and it naturally generalizes to any graph $G$.

This problem was first introduced by Sutner in his paper\cite{Sut89}, where he
proved that every graph can be turned from all on to all off. Since then, both
simpler proofs\cite{Car96,CHK99,GKZ95} and variations of the
game\cite{Sut90,BaR96,DoW01,EES04} have been studied. The techniques used to
solve the problem include linear algebra, graph theory, and cellular automata.
After the ``all on to all off'' problem was settled, the study has focused on
completely solvable graphs \cite{GoK97,AmS96}, the minimal number of presses
needed to turn off all vertices\cite{AmS96,CFF99}, and the maximal number of
vertices that can be turned off\cite{Gol00}.

In this paper, we first give several proofs of all-on to all-off in Section
\ref{sec:allontoalloff}, then we extensively study how to characterize the
completely solvable instances of different graph classes in Section
\ref{sec:compsolve}. In Section \ref{sec:optimize}, we show that it is
$\NP$-hard to find the minimal number of presses needed to turn off all vertices
starting from all on. Besides the result for general graphs, we also study
several special graph classes.

\section{All-on to All-off is Realizable} \label{sec:allontoalloff} First, we
give a formal definition of the game ``Lights Out''. Given an undirected graph
$G=(V,E)$, we assign each vertex $v$ a state $c_v \in \set{0,1}$. The initial
states of all vertices are called the \emph{initial configuration} of the graph
$G$. It is also natural to define the \emph{configuration} of the graph as the
set $C \subseteq V$ of all \texttt{ON} vertices. Sometimes it is more convenient
to view the configuration as a vector $\vec{c}$ of length $\abs{V}$, where $c_i$
is the state of vertex $v_i$. An \emph{activation} on $G$ chooses a vertex $v$
and flips the states of all vertices in its closed neighborhood $N[v] \eqdef
\set{v} \union \cset{u}{(u,v)\in E}$ (\ie, $c_u \gets 1-c_u$ for each $u \in
N[v]$). An activation thus changes the configuration of the graph $G$. If, after
a finite number of activations, all vertices can reach state $0$, we call the
initial configuration \emph{solvable}. It is easy to observe that the order of
activations does not matter and that activating a vertex more than once is never
necessary. So in order to solve the game, we only need to find a subset $X$ of
the vertices $V$ such that activating every vertex in $X$ turns off all vertices
in $G$. This subset $X$ is called the \emph{activation set}, and the game is the
$\sigma^+$-game introduced by Sutner\cite{Sut90}. There are two special
configurations, namely \emph{all on} and \emph{all off}, in which every state is
on or off, respectively.
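To make these definitions concrete, here is a minimal sketch in Python (the helper names are our own, not from any of the surveyed papers) that stores a configuration as a 0/1 vector and applies an activation set over $GF(2)$; on the path $P_3$, the activation set $\set{v_2}$ alone turns all-on into all-off.

```python
def closed_neighborhood(adj, v):
    """N[v]: the vertex v together with its neighbors."""
    return {v} | adj[v]

def apply_activation_set(adj, config, activation_set):
    """Flip c_u for every u in N[v], for every v in the activation set.

    The order of activations does not matter: vertex u ends up flipped
    exactly |N[u] ∩ X| times, so its state changes iff that count is odd.
    """
    config = list(config)  # leave the caller's configuration untouched
    for v in activation_set:
        for u in closed_neighborhood(adj, v):
            config[u] ^= 1  # addition over GF(2)
    return config

# The path P_3 (vertices 0 - 1 - 2): activating the middle vertex
# toggles all three states, so {1} solves the all-on configuration.
adj_p3 = {0: {1}, 1: {0, 2}, 2: {1}}
assert apply_activation_set(adj_p3, [1, 1, 1], {1}) == [0, 0, 0]
```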

By this definition, the original ``Lights Out'' is played on the $5\times 5$
grid graph. Our first task is to find out what kinds of graphs are
\emph{solvable} with the all-on initial configuration. The following theorem is
the first non-trivial result of this line of study, and it seems quite amazing
at first glance.
\begin{theorem} \label{thm:allon}
  All undirected graphs can be turned from all on to all off.
\end{theorem}

\subsection{An Inductive Proof via Graph Theory} \label{ssc:induction} Let us
first give a proof of Theorem \ref{thm:allon} using graph theory. This proof
synthesizes the ones given by Cowen \etal \cite{CHK99} and by Eriksson
\etal\cite{EES04}.
\begin{proof}
  We prove this theorem by induction on $k$, the number of vertices in graph $G$. 

  \textbf{Basis:} The statement is true for $k=1$: if $k=1$, there is only one
  vertex in $G$; we select this vertex, flip its state, and we are done.

  \textbf{Induction Step:} For each $k\geq 1$, assume that every graph with $k$
  vertices can be turned from all on to all off. We use this assumption to prove
  that the theorem is true for graphs with $k+1$ vertices. Let $G$ be a graph
  with $k+1$ vertices. For each $v\in G$, the graph $G-\set{v}$ is a graph with
  $k$ vertices. By the induction hypothesis, it has an activation set $X_v$
  which turns all vertices in $G-\set{v}$ off. We first try the activation set
  $X_v$ of $G-\set{v}$ on $G$ for each $v\in G$. If one of them also turns all
  vertices in $G$ off, we are done.

  Otherwise, if $k+1$ is even, then $k$ is odd. For each $v\in G$, we have an
  activation set $X_v$ of $G-\set{v}$; we apply this activation set on $G$. Note
  that $X_v$ changes the states of all vertices in $G$ except $v$: it turns off
  every vertex of $G-\set{v}$, and since by assumption it does not solve $G$,
  the state of $v$ is left unchanged. After applying the activation set $X_v$
  for every vertex of $G$, the state of each $v\in G$ has changed an odd number
  of times ($k$ times), which means every vertex ends up off.

  If $k+1$ is odd, then by the handshake lemma at least one vertex $v$ has even
  degree. In this case, the procedure to turn off all vertices has two stages.
  The first stage selects vertex $v$, which turns all the vertices in $N[v]$
  off. In the second stage, for each $u\in G-N[v]$, we apply the activation set
  $X_u$ that turns all vertices in $G-\set{u}$ off. After all the activations
  $X_u$ are done, the vertices in $G-N[v]$ are off and the vertices in $N[v]$
  remain off. To see this, note that $k+1$ is odd and $\abs{N[v]}$ is odd since
  $\deg(v)$ is even, so $\abs{G-N[v]}$ is even: each vertex $w \in G-N[v]$ is
  toggled an odd number of times (once for each $X_u$ with $u \neq w$), while
  each vertex in $N[v]$ is toggled an even number of times (once for each of the
  $\abs{G-N[v]}$ sets).
\end{proof}
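The theorem can also be checked exhaustively on small graphs. The sketch below (a brute-force test of ours, independent of the inductive construction) enumerates every graph on up to four vertices and confirms that some activation set $X$ satisfies $\abs{N[u]\intersect X}$ odd for every vertex $u$, which is exactly the all-on to all-off condition.

```python
from itertools import combinations

def all_on_solvable(n, edges):
    """True iff some X toggles every vertex an odd number of times,
    i.e. |N[u] ∩ X| is odd for all u (all-on goes to all-off)."""
    nbrs = [{u} for u in range(n)]          # closed neighborhoods N[u]
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    for mask in range(1 << n):              # every candidate set X
        X = {v for v in range(n) if mask >> v & 1}
        if all(len(nbrs[u] & X) % 2 == 1 for u in range(n)):
            return True
    return False

# Every graph on 1..4 vertices, i.e. every subset of the possible edges.
for n in range(1, 5):
    pairs = list(combinations(range(n), 2))
    for emask in range(1 << len(pairs)):
        edges = [p for i, p in enumerate(pairs) if emask >> i & 1]
        assert all_on_solvable(n, edges)
```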
\subsection{An Algebraic Proof via Linear Algebra} \label{ssc:algebra} In this
section, we give a more general result obtained by Dodis \etal\cite{DoW01}.
Given a graph $G=(V,E)$, every vertex $v$ has a type $b_v$: type $b_v = 0$ means
that selecting this vertex only changes the states of its open neighborhood
$N(v)\eqdef \cset{u}{(u,v) \in E}$, while $b_v = 1$ means that selecting this
vertex affects its closed neighborhood. The adjacency matrix $A$ is now defined
as follows:
\begin{equation} \label{eq:generaladj}
  A_{u,v}=
  \begin{cases}
    1 & (u, v) \in E \\
    b_v & u = v \\
    0 & \text{otherwise}
  \end{cases}
\end{equation}
We can see this is a generalization of the $\sigma^+$-game, since every vertex
is of type $b_v=1$ in the $\sigma^+$-game.  Let us rewrite the problem of
finding an activation set $X$ using some linear algebra. Let $\vec{x}$ be the
characteristic vector of $X$, such that $x_{v_i}=1$ if and only if $v_i\in X$,
and let $\vec{b}$ be the binary vector representing the types of vertices in the
graph, where $b_i = 1$ if and only if the $i$th vertex $v_i$ is of type $b_{v_i}
= 1$. Then the state of a vertex $v_i$ after applying the activation set $X$ is
\begin{equation}
   c_{v_i} + \sum_{u\in N(v_i)\intersect X}x_u + b_{v_i}x_{v_i}
\end{equation}
where $c_{v_i}$ is the initial state of $v_i$ and the addition is over
$GF(2)$. The intuition behind this formula is that the final state of a vertex
is determined by its initial state, the activations on its open neighborhood,
and the activation on itself if its type is $b_v=1$. For the whole graph, we
have the following linear expression:
\begin{equation} \label{eq:stmatrix}
  A\vec{x} + \vec{c}=\vec{c'}
\end{equation}
In the above formula, the $i$th row of $A\vec{x}$ equals $\sum_{u\in
  N(v_i)\intersect X}x_u + b_{v_i}x_{v_i}$, while $\vec{c}$ represents the
initial configuration of the graph. Thus, $\vec{c'}$ represents the final
configuration of the graph after applying the activation set $\vec{x}$. Now we
are going to prove a stronger version of Theorem \ref{thm:allon}.

\begin{theorem} \label{thm:allonb} Given a graph $G=(V,E)$ with vertex types
  $\vec{b}$, the initial configuration $\vec{c}=\vec{b}$ can always be turned to
  all off. In particular, the linear equation $A\vec{x} + \vec{c} = \vec{0}$
  always has a solution if $\vec{c} = \vec{b}$.
\end{theorem}
To see that this generalizes Theorem \ref{thm:allon}, note that in the
$\sigma^+$-game all vertices have type $b_v = 1$. Therefore the binary vector
$\vec{c} = \vec{b} = \vec{1}$, and Theorem \ref{thm:allonb} says $A\vec{x} =
\vec{1}$ always has a solution if all vertices are of type $b_v=1$ and the
initial configuration is all on. We give a proof generalizing the one given
in a manuscript by Goldwasser \etal\cite{GKZ95}.

\begin{proof}
  First we claim that every vector $\vec{x}$ in the kernel of the adjacency
  matrix $A$ has an even number of 1's at positions whose corresponding vertex
  is of type $b_v=1$. To see this, let $X$ be the set of vertices with $x_v=1$
  and consider, for $v \in X$, the equation $\sum_{w\in N(v)\intersect X} x_w +
  x_vb_v = 0$: if $b_v = 1$ then $x_vb_v = 1$, so $N(v) \intersect X$ must have
  odd size; if $b_v = 0$, then $N(v) \intersect X$ has even size. Hence, in the
  subgraph of $G$ induced by $X$, the vertices of odd degree are exactly those
  of type $b_v=1$, and by the handshake lemma their number is even.
  Now consider the value of $\vec{b}\cdot \vec{x}^T$ for $\vec{x} \in
  \ker(A)$. By the above claim, there is an even number of terms with $b_i = x_i
  = 1$ to sum up, so $\vec{b}\cdot \vec{x}^T = 0$ for all $\vec{x} \in
  \ker(A)$. Therefore, $\vec{b}$ is orthogonal to the kernel of $A$, which is
  the same as $\ker(A^T)$ since $A$ is a symmetric matrix. Therefore, $\vec{b}$
  is in the range of $A$ and $A\vec{x}=\vec{b}$ is solvable.
\end{proof}

\begin{corollary}
  Given a graph $G=(V,E)$ in the $\sigma^+$-game, the following linear equation
  always has a solution:
  \begin{equation}
    \label{eq:alloff}
    A\vec{x} = \vec{1}
  \end{equation}
\end{corollary}
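As a concrete check of the corollary, the sketch below builds the closed-neighborhood matrix of the $5\times 5$ grid, stores each row as a Python integer bitmask (a representation choice of ours), runs Gaussian elimination over $GF(2)$, and verifies that $A\vec{x}=\vec{1}$ indeed has a solution.

```python
def grid_sigma_plus_matrix(m, n):
    """Row u of A as a bitmask: bit v is set iff v is in N[u]."""
    idx = lambda i, j: i * n + j
    rows = []
    for i in range(m):
        for j in range(n):
            mask = 1 << idx(i, j)                      # the vertex itself
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < m and 0 <= j + dj < n:
                    mask |= 1 << idx(i + di, j + dj)   # grid neighbors
            rows.append(mask)
    return rows

def solve_gf2(rows, rhs):
    """Gauss-Jordan elimination over GF(2) for a square system.

    Returns one solution as a bitmask (free variables set to 0),
    or None if the system is inconsistent."""
    nvars = len(rows)
    aug = [r | (b << nvars) for r, b in zip(rows, rhs)]  # rhs as top bit
    pivot_row_of_col = {}
    r = 0
    for col in range(nvars):
        for i in range(r, len(aug)):
            if aug[i] >> col & 1:
                aug[i], aug[r] = aug[r], aug[i]
                for k in range(len(aug)):
                    if k != r and aug[k] >> col & 1:
                        aug[k] ^= aug[r]
                pivot_row_of_col[col] = r
                r += 1
                break
    if any(row == 1 << nvars for row in aug):   # a row reading 0 = 1
        return None
    x = 0
    for col, row in pivot_row_of_col.items():
        if aug[row] >> nvars & 1:
            x |= 1 << col
    return x

A = grid_sigma_plus_matrix(5, 5)
x = solve_gf2(A, [1] * 25)
assert x is not None                      # all-on is solvable on the 5x5 board
# Check A x = 1: every row overlaps the activation set an odd number of times.
assert all(bin(row & x).count("1") % 2 == 1 for row in A)
```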

\paragraph{Interpretation of Gaussian elimination on graphs}
\label{para:interpre}
From Theorem \ref{thm:allonb}, we know that in Equation \ref{eq:stmatrix}, if
$\vec{c'}=\vec{0}$ and $\vec{c}=\vec{b}$, there always exists a solution
$\vec{x}$. Usually, we use Gaussian elimination to find this solution. Gaussian
elimination manipulates the matrix $A$, and this observation suggests that we
may interpret Gaussian elimination as a graph manipulation and read off the
desired activation set. Based on this idea, Arya \etal{} \cite{Ary02} give the
following algorithm.

The algorithm to find an activation set consists of two phases. In Phase 1,
given a graph $G=(V,E)$ with the initial configuration $\vec{c}=\vec{b}$, we
repeatedly select a vertex $v$ with state on (\ie, $c_v=1$); this toggles all
the states in $N(v)$. In our algorithm, when the state of a vertex changes, its
button type $b_v$ also changes. Then we delete every edge connecting two
vertices in $N(v)$ and add edges between unconnected vertices in $N(v)$ (\ie, we
complement the edges within $N(v)$). After that, we delete $v$ and all its
incident edges from the graph and search for a vertex with state on
again. Phase 1 ends when all remaining vertices have state off. In Phase 2, we
start with an empty activation set and process the vertices deleted in Phase 1
in reverse order. For each vertex $v$, we first restore $v$ and all its incident
edges to the current graph, then toggle all states in $N(v)$. If an even number
of vertices in $N(v)$ belongs to the activation set, we add $v$ to the
activation set. When the original graph $G$ is restored, we have an activation
set that turns off all vertices.

It is easy to see that deletion and restoration are complementary, so at the end
we reconstruct the original graph $G$ with its original configuration. We claim
that during the restoration, the current activation set can always turn off all
vertices in the current graph. We prove this claim by induction. At the
beginning of Phase 2, all vertices in the current graph have state off and the
activation set is empty, so the claim is true for the base case. Suppose the
current graph is $F$ with an activation set $X$ that can turn off all vertices
in $F$. We restore $v$ and all its incident edges to $F$ to obtain a graph
$H$. By restoring $v$, we change the states and button types of the vertices in
$N(v)$ and invert the neighborhood relation within $N(v)$. If the size of $N(v)
\intersect X$ is even, we add $v$ to the activation set.

Now, in the new graph $H$, let us consider the final states of $N[v]$ under the
current activation set.  In the following discussion, we use $p$ to denote the
parity of $\abs{N(v) \intersect X}$ ($p=0$ when the size is even and $p=1$ when
it is odd), and all additions are modulo 2.  When we talk about the parity of a
vertex $v$, we mean the parity of the number of selected vertices which change
the state of $v$. First, let us look at $v$: the parity of $v$ is $p$,
contributed by the neighbors of $v$, plus $1-p$ contributed by itself (remember
that $v$ has state on when it is restored, so its selection always changes its
own state). Therefore the state of $v$ always changes to off. The neighbors of
$v$ need the opposite parity compared with their old parity in $F$, since
restoring $v$ changes their states. Consider a neighbor of $v$ not in the
activation set: the inversion of the neighborhood relation contributes $p$ to
this unselected vertex, so its parity increases by $(1-p)+p = 1$, where $1-p$
comes from $v$. For a vertex $z \in N(v)$ in the activation set, the
neighborhood inversion contributes only $p-1$ to $z$, but the restoration of $v$
changes the button type of $z$. Therefore, the parity of $z$ increases by
$(1-p)+(p-1)+1=1$. Hence, all the vertices in $H$ can be turned off by the new
activation set. When we restore the whole graph, we obtain the activation set
that turns off all vertices.

\subsection{Historical Review}
The problem of ``Lights Out!'' was first studied by Klaus Sutner. In his first
paper\cite{Sut89} on this game, he used cellular automata to study the game and
gave the result that any graph can be turned from all on to all off. The proof
uses complex notation from cellular automata. Later, Caro gave a much simpler
proof based on linear algebra\cite{Car96}. Another linear algebra proof is given
in a manuscript by Goldwasser \etal \cite{GKZ95}. The more general result
(Theorem \ref{thm:allonb}) appears in the paper of Dodis \etal\cite{DoW01}. In
his paper\cite{Sut89}, Sutner pointed out that no graph-theoretic proof was
known; later, Cowen \etal\cite{CHK99} gave an elegant proof of the theorem
using combinatorial techniques from graph theory. In the paper of Eriksson
\etal\cite{EES04}, they also give a similar graph-theoretic proof, but it is
first proved for a restricted class of directed graphs and then generalized to
undirected graphs. The graph-theoretic proof used in this paper is a synthesis
of those two methods.

\section{Completely Solvable Graphs} \label{sec:compsolve} From the above
discussion, we know that every graph can be turned from all on to all off in a
$\sigma^+$-game. It is natural to ask for which graphs all the configurations
are solvable (\ie, can be turned all off). Recalling Equation
\ref{eq:stmatrix}, the complete solvability problem can be formulated by the
following theorem.


\begin{theorem} \label{thm:null0} 
  A graph $G$ is completely solvable if and only if the adjacency matrix $A$ has
  nullity 0.
\end{theorem}

\begin{proof}
  From Equation \ref{eq:stmatrix}, an initial configuration $\vec{c}$ is
  solvable (\ie, $\vec{c'}=\vec{0}$ is reachable) if and only if the linear
  equation $A\vec{x}=\vec{c}$ has a solution. Hence all $2^n$ initial
  configurations are solvable if and only if the column space of $A$ has
  dimension $n$, which holds if and only if $A$ has nullity 0.
\end{proof}
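Theorem \ref{thm:null0} is easy to test numerically. The sketch below (a bitmask rank computation of ours, not taken from the surveyed papers) computes the nullity of the closed-neighborhood matrix of small grids: the $3\times 3$ board is completely solvable, while the classic $5\times 5$ board has nullity 2, so only $2^{25-2}$ of its $2^{25}$ configurations are solvable.

```python
def gf2_rank(rows):
    """Rank over GF(2); rows are integer bitmasks (the xor-basis method)."""
    pivots = {}                     # highest set bit -> reduced row
    rank = 0
    for row in rows:
        while row:
            top = row.bit_length() - 1
            if top in pivots:
                row ^= pivots[top]  # eliminate the leading bit
            else:
                pivots[top] = row
                rank += 1
                break
    return rank

def grid_nullity(m, n):
    """Nullity of the sigma+ (closed-neighborhood) matrix of an m x n grid."""
    idx = lambda i, j: i * n + j
    rows = []
    for i in range(m):
        for j in range(n):
            mask = 1 << idx(i, j)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < m and 0 <= j + dj < n:
                    mask |= 1 << idx(i + di, j + dj)
            rows.append(mask)
    return m * n - gf2_rank(rows)

assert grid_nullity(3, 3) == 0   # completely solvable
assert grid_nullity(4, 4) == 4
assert grid_nullity(5, 5) == 2   # the classic board: 2^23 solvable configs
```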

Let us first study how to characterize the grid graphs that are completely
solvable, since they have a clean structure on which we can use linear algebra
techniques.
\subsection{Grid Graph}
A grid graph $G$ has $m\times n$ vertices arranged in $m$ rows and $n$
columns. We denote the vertex $v$ in the $i$th row and $j$th column by the pair
$(i,j)$. The vertices connected to $v$ are $\set{(i-1, j), (i+1, j), (i,j-1),
  (i,j+1)}$, whenever they are well defined; see Figure \ref{fig:grid} for an
example.

\begin{figure}
  \centering
  \includegraphics{grid}
  \caption{An example of a $5\times 5$ grid}
  \label{fig:grid}
\end{figure}

From Theorem \ref{thm:null0}, we know that a graph is completely solvable if and
only if the adjacency matrix $A$ is invertible. So we analyze the kernel of the
adjacency matrix to find out what kinds of grid graphs are completely
solvable. The analysis studies properties of the \emph{Fibonacci polynomials},
since they have a close connection to the structure of the kernel of the
adjacency matrix. We will first study a simpler version, the $\sigma$-game.

\subsubsection{$\sigma$-game}
\label{sss:open}
The $\sigma$-game introduced by Sutner \cite{Sut90} is similar to the
$\sigma^+$-game except that the selection of a vertex only changes the states of
its open neighborhood $N(v) \eqdef \cset{u}{(u,v)\in E}$. So the adjacency
matrix $A$ defined on $G$ for open neighborhoods is the special case $b_v=0$ of
the adjacency matrix defined in Equation \ref{eq:generaladj}:
\begin{equation}\label{eq:opena}
  A_{i,j} = 
  \begin{cases}
    1 & (i,j) \in E \\
    0 & \text{otherwise}
  \end{cases}
\end{equation}
The following propositions and theorems in this section are based on the paper
by Barua and Ramakrishnan\cite{BaR96}.

Given the adjacency matrix $A$ of a grid graph of size $m\times n$, if a vector
$\vec{x}$ of length $mn$ satisfies the equation $A\vec{x}=\vec{0}$, we split
$\vec{x}$ into $n$ vectors $(\vec{x_1}, \vec{x_2}, \cdots, \vec{x_n})$, each of
length $m$. We claim that they satisfy the following recursion.

\begin{proposition} \label{prop:splitx}
  \begin{equation} \label{eq:recx}
      \vec{x_{i+1}} = B_m\vec{x_i} + \vec{x_{i-1}} \quad 1\leq i \leq n 
  \end{equation}
  where $\vec{x_0} = \vec{x_{n+1}} = 0$ and $B_m$ is an $m\times m$ binary
  matrix defined as follows:
  \begin{equation*}
    B_{i,j} = 
    \begin{cases}
      1 & \abs{i-j} = 1 \\
      0 & \text{otherwise}
    \end{cases}
  \end{equation*}
\end{proposition}

\begin{proof}
  The result becomes clear if we rewrite the adjacency matrix and the vector
  $\vec{x}$ in block form. The adjacency matrix $A$ of size $mn \times mn$
  defined in Equation \ref{eq:opena} is the same as
  \begin{equation*}
    A = 
    \begin{bmatrix}
      B_m & I & 0 & 0 & 0 & \cdots & 0 \\
      I & B_m & I & 0 & 0 & \cdots & 0 \\
      0 & I & B_m & I & 0 & \cdots & 0 \\
      \hdotsfor[2]{7} \\
      0 & 0 & 0 & 0 & 0 & I & B_m 
    \end{bmatrix}
  \end{equation*}
  where $I$ is the identity matrix of size $m\times m$. If we also split
  $\vec{x}$ into $n$ smaller vectors, the equation $A\vec{x} = \vec{0}$ can be
  viewed as the following equation system
  \begin{eqnarray*}
    B_m\vec{x_1} + \vec{x_2} &=& 0 \\
    \vec{x_1} + B_m\vec{x_2} + \vec{x_3} &=& 0 \\
    \vdots \\
    \vec{x_{n-1}} + B_m\vec{x_n} &=& 0 
  \end{eqnarray*}
  Along with the definition $\vec{x_0} = \vec{x_{n+1}} = 0$, the recursion
  follows.
\end{proof}

Note that the recursion in Equation \ref{eq:recx} follows a special pattern. In
order to analyze the kernel of $A$, we have to study some properties of the
recursion. Therefore, we define a family of polynomials, first used in the study
of this game by Goldwasser \etal\cite{GKZ95}, as follows (note that we shift the
index of the polynomials by one in this definition).

\begin{definition}[Fibonacci Polynomials] \label{def:fib}
  $f_i(x)$ is the $i$th Fibonacci polynomial defined over $GF(2)$ by
  \begin{equation} \label{fibeqn}
    f_n(x) = xf_{n-1}(x) + f_{n-2}(x) \quad n\geq 2, \qquad f_0(x) = 1, \quad f_1(x) = x
  \end{equation}
\end{definition}
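Over $GF(2)$ these polynomials are conveniently represented as integer bitmasks, bit $i$ holding the coefficient of $x^i$ (a representation choice of ours); addition is xor and multiplication by $x$ is a left shift. Coefficients cancel modulo 2, so for instance $f_3 = x\cdot(x^2+1)+x = x^3$.

```python
def fib_poly(n):
    """The n-th Fibonacci polynomial over GF(2) as a coefficient bitmask.

    f_0 = 1, f_1 = x, and f_n = x*f_{n-1} + f_{n-2}, where addition is
    xor and multiplication by x is a left shift."""
    f_prev, f_cur = 1, 0b10        # f_0 = 1, f_1 = x
    if n == 0:
        return f_prev
    for _ in range(n - 1):
        f_prev, f_cur = f_cur, (f_cur << 1) ^ f_prev
    return f_cur

assert fib_poly(2) == 0b101       # x^2 + 1
assert fib_poly(3) == 0b1000      # x^3: the x terms cancel mod 2
assert fib_poly(5) == 0b100010    # x^5 + x
```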

Along with Definition \ref{def:fib}, we can rewrite the recursion in Equation
\ref{eq:recx} using the Fibonacci polynomials.
\begin{proposition}\label{prop:xf}
  \[\vec{x_{i}} = f_{i-1}(B_m)\vec{x_1}, \quad 1 \leq i \leq n+1\] where
  $f_i(B_m)$ is a polynomial in the matrix
  variable $B_m$.
\end{proposition}

\begin{proof}
  We prove this statement by induction on $i$.

  \textbf{Basis: } When $i = 1$, 
  \[\vec{x_1} = 1 \cdot \vec{x_1} = f_{0}(B_m)\vec{x_1}\]
  
  \textbf{Induction step: } Assume $\vec{x_j} = f_{j-1}(B_m)\vec{x_1}$ holds for
  all $j \leq k$, where $k \geq 1$; let us prove the statement for $i = k+1$
  (for $k=1$, recall that $\vec{x_0} = 0$, so $\vec{x_2} = B_m\vec{x_1} =
  f_1(B_m)\vec{x_1}$). According to the recursion in Equation \ref{eq:recx}, we
  have
  \begin{eqnarray*}
    \vec{x_{k+1}} &=& B_m\vec{x_k} + \vec{x_{k-1}} \\
    &=& B_m(f_{k-1}(B_m)\vec{x_1}) + f_{k-2}(B_m)\vec{x_1} \quad (\text{by the induction
    hypothesis}) \\
    &=&(B_mf_{k-1}(B_m) + f_{k-2}(B_m))\vec{x_1} \\
    &=&f_{k}(B_m)\vec{x_1} 
  \end{eqnarray*}
\end{proof}


\begin{lemma} \label{lem:samenull} The adjacency matrix $A$, $f_{n}(B_m)$, and
  $f_{m}(B_n)$ have the same nullity.
\end{lemma}

\begin{proof}
  If a vector $\vec{x}$ satisfies the equation $A\vec{x} = 0$, it can be split
  into $n$ small vectors, and using the recursion in Equation \ref{eq:recx}, we
  have $\vec{x_{n+1}} = f_{n}(B_m)\vec{x_1} = 0$. Therefore, $\vec{x_1}$ is in
  the kernel of $f_{n}(B_m)$; moreover, $\vec{x_1}$ determines $\vec{x}$, since
  $\vec{x_i} = f_{i-1}(B_m)\vec{x_1}$ by Proposition \ref{prop:xf}. On the other
  hand, if a vector $\vec{x'_1}$ of length $m$ is in the kernel of $f_{n}(B_m)$,
  the vector sequence $(f_0(B_m)\vec{x'_1}, f_1(B_m)\vec{x'_1}, \ldots,
  f_{n-1}(B_m)\vec{x'_1})$ is a vector in the kernel of the adjacency matrix
  $A$. To see this, note
  \begin{eqnarray*}
    f_{i+1}(B_m)\vec{x'_1} &=& (B_mf_i(B_m) + f_{i-1}(B_m))\vec{x'_1} \\
    &=& B_m(f_i(B_m)\vec{x'_1}) + f_{i-1}(B_m)\vec{x'_1}
  \end{eqnarray*}
  which follows the recursion in Equation \ref{eq:recx}. Therefore $A$ has the
  same nullity as $f_{n}(B_m)$. By symmetry, we can also obtain the result that
  $A$ has the same nullity as $f_m(B_n)$.
\end{proof}

Another property of the Fibonacci polynomials will be useful in proving the main
theorem of this section.

\begin{lemma}\label{lem:fibreq}
  For $n > m$, the Fibonacci polynomials satisfy the following recursion:
  \begin{equation} \label{eq:fibreq}
    f_n(x) = f_{n-m}(x)f_m(x) + f_{n-m-1}(x)f_{m-1}(x) 
  \end{equation}
\end{lemma}

\begin{proof}
  We prove this recursion by induction. 

  \textbf{Basis: }If $n = m+1$, 
  \begin{eqnarray*}
    f_{n-m}(x)f_m(x) + f_{n-m-1}(x)f_{m-1}(x) &=& f_1(x)f_m(x) +
    f_0(x)f_{m-1}(x) \\ 
    &=& xf_m(x) + f_{m-1}(x) \\
    &=& f_n(x)
  \end{eqnarray*}

  \textbf{Induction Step: }Assume the equation is true whenever $n \leq m+k$, $k \geq
  1$. We shall prove it is also true in the case $n=m+k+1$.
  \begin{eqnarray*}
    f_{n}(x) &=& f_{m+k+1}(x) \\
    &=&xf_{m+k}(x) + f_{m+k-1}(x) \\
    &=&x(f_{m}(x)f_k(x) + f_{m-1}(x)f_{k-1}(x)) + (f_{m-1}(x)f_k(x) +
    f_{m-2}(x)f_{k-1}(x)) \\
    &=&f_k(x)(xf_{m}(x)+f_{m-1}(x)) + f_{k-1}(x)(xf_{m-1}(x)+f_{m-2}(x)) \\
    &=&f_k(x)f_{m+1}(x) + f_{k-1}(x)f_{m}(x) \\
    &=&(xf_k(x)+f_{k-1}(x))f_{m}(x) + f_{k}(x)f_{m-1}(x) \\
    &=&f_{k+1}(x)f_{m}(x) + f_{k}(x)f_{m-1}(x)
  \end{eqnarray*}
  which is exactly Equation \ref{eq:fibreq} for $n = m+k+1$.
\end{proof}

The following lemma is also useful in the proof of main theorem. 

\begin{lemma} \label{lem:chp} $f_{m}(x)$ is the characteristic polynomial of
  $B_m$ and $f_{m-1}(B_m)$ is invertible.
\end{lemma}

\begin{proof}
  Let $\chiup_{m}(x)$ be the characteristic polynomial of $B_m$. Over $GF(2)$,
  \begin{eqnarray*}
    \chiup_{m}(x) &=& \det
    \begin{bmatrix}
      x & 1 & 0 & 0 & \cdots & 0 & 0 \\
      1 & x & 1 & 0 & \cdots & 0 & 0 \\
      0 & 1 & x & 1 & \cdots & 0 & 0 \\
      \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
      0 & 0 & 0 & 0 & \cdots & x & 1 \\
      0 & 0 & 0 & 0 & \cdots & 1 & x
    \end{bmatrix} \\
    &=& x\chiup_{m-1}(x) + \chiup_{m-2}(x)
  \end{eqnarray*}
  by expansion along the first row, and we have $\chiup_{0}(x)=1$ and
  $\chiup_{1}(x) = x$. Note that this recursion is the same as the one defining
  the Fibonacci polynomials in Equation \ref{fibeqn}. Therefore $f_{m}(x)$ is
  the characteristic polynomial of $B_m$.

  We prove that $f_{m-1}(B_m)$ is invertible by induction on $m$.
  
  \textbf{Basis: } When $m=1$, $f_0(B_1) = I$, which is clearly invertible.

  \textbf{Induction step: }Assuming $f_{k-1}(B_k)$ is invertible for some
  $k\geq 1$, we now prove $f_k(B_{k+1})$ is also invertible. By Lemma
  \ref{lem:samenull}, the dimension of $\ker(f_{k}(B_{k+1}))$ is the same as the
  dimension of $\ker(f_{k+1}(B_{k}))$. By invoking Equation \ref{eq:fibreq} with
  $n = k+1$ and $m = k$,
  \[f_{k+1}(B_{k}) = B_{k}f_{k}(B_k) + f_{k-1}(B_k)\] Since $f_{k}(x)$ is the
  characteristic polynomial of $B_k$, by the Cayley-Hamilton theorem
  $f_k(B_k) = 0$. Thus $f_{k+1}(B_k) = f_{k-1}(B_k)$, which is invertible by the
  induction hypothesis. So $f_{k}(B_{k+1})$ is also invertible.
\end{proof}

\begin{theorem} 
  Given a grid graph $G$ of size $m\times n$, the nullity of the adjacency matrix $A$
  is $\gcd(n+1, m+1)-1$. In particular, $G$ is completely solvable in the
  $\sigma$-game if and only if $n+1$ and $m+1$ are relatively prime.
\end{theorem}

\begin{proof}
  From Lemma \ref{lem:samenull}, it is known that the adjacency matrix $A$ and
  $f_{n}(B_m)$ have the same nullity.  So here we focus on the nullity of
  $f_n(B_m)$; let $\delta$ denote this nullity. From Lemma \ref{lem:fibreq}, we
  have
  \[f_{n}(B_m) = f_{n-m}(B_m)f_m(B_m) + f_{n-m-1}(B_m)f_{m-1}(B_m)\] The
  Cayley-Hamilton theorem states that if $\chiup(x)$ is the characteristic
  polynomial of a matrix $B$, then $\chiup(B) = 0$. By Lemma \ref{lem:chp},
  $f_m(x)$ is the characteristic polynomial of $B_m$, therefore
  $f_m(B_m)=0$. Since $f_{m-1}(B_m)$ is invertible, the nullity of $f_n(B_m)$ is
  equal to the nullity of $f_{n-m-1}(B_m)$. First suppose $n+1 = q(m+1)$; then
  we have the following equation system:
  \begin{eqnarray*}
    f_{n}(B_m) &=& f_{n-m}(B_m)f_{m}(B_m) + f_{n-m-1}(B_m)f_{m-1}(B_m) \\
    f_{n-m-1}(B_m) &=& f_{n-2m-1}(B_m)f_m(B_m) + f_{n-2m-2}(B_m)f_{m-1}(B_m) \\
    & \vdots & \\
    f_{n-(q-2)(m+1)}(B_m)&=&f_{n-(q-1)m-(q-2)}(B_m)f_m(B_m) + f_{n-(q-1)m-(q-1)}(B_m)f_{m-1}(B_m) 
  \end{eqnarray*}
  By the above argument, the nullity of $f_{n}(B_m)$ is the same as that of
  $f_{n-m-1}(B_m)$, which is the same as that of $f_{n-2m-2}(B_m)$, and so
  on. At the final step, we have $\delta =
  \dim(\ker(f_{n-(q-1)m-(q-1)}(B_m)))$. A simple calculation shows that
  $n-(q-1)m-(q-1)=m$, which means $\delta$ equals the nullity of
  $f_m(B_m)$. Applying the Cayley-Hamilton theorem again, we have $f_m(B_m) =
  0$. Therefore,
  \[\delta = \dim(\ker(f_m(B_m))) = m\]
  and indeed $\gcd(n+1, m+1) - 1 = (m+1) - 1 = m$ in this case.
  
  On the other hand, if $m+1$ does not divide $n+1$, we run the Euclidean
  algorithm:
  \begin{eqnarray*}
    n+1 &=& q_1(m+1) + r_1 \\
    m+1 &=& q_2r_1 + r_2 \\
    &\vdots & \\
    r_k &=& q_{k+2}r_{k+1} + r_{k+2} 
  \end{eqnarray*}
  where $r_{k+2} = 0$ and $r_{k+1} = \gcd(n+1, m+1)$. By repeating the reduction
  above until the index drops below $m$, we have
  \[
    \dim(\ker(f_{n}(B_m))) = \dim(\ker(f_{r_1-1}(B_m)))
  \]
  From Lemma \ref{lem:samenull}, we know that $f_{r_1-1}(B_m)$ has the same
  nullity as $f_m(B_{r_1-1})$. Now we use the procedure described above again,
  with $m$ in place of $n$ and $r_1-1$ in place of $m$, and obtain that
  $f_{r_2-1}(B_{r_1-1})$ has the same nullity as $f_{r_1-1}(B_m)$. Continuing by
  induction, the nullity $\delta$ of $f_n(B_m)$ equals the nullity of
  $f_{r_{k+1}-1}(B_{r_k-1})$. As $r_{k+1}$ divides $r_k$, the divisible case
  above (together with Lemma \ref{lem:samenull}) gives
  \[ \delta = r_{k+1} - 1 = \gcd(n+1, m+1)- 1\]
\end{proof}
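The theorem is easy to confirm numerically. The sketch below (an independent cross-check of ours, not the machinery of the proof) computes the nullity of the open-neighborhood matrix of an $m\times n$ grid by $GF(2)$ rank and compares it with $\gcd(n+1, m+1)-1$ for all small grids.

```python
from math import gcd

def gf2_rank(rows):
    """Rank over GF(2); rows are integer bitmasks."""
    pivots = {}
    rank = 0
    for row in rows:
        while row:
            top = row.bit_length() - 1
            if top in pivots:
                row ^= pivots[top]
            else:
                pivots[top] = row
                rank += 1
                break
    return rank

def sigma_nullity(m, n):
    """Nullity of the open-neighborhood (sigma-game) matrix of an m x n grid."""
    idx = lambda i, j: i * n + j
    rows = []
    for i in range(m):
        for j in range(n):
            mask = 0                       # open neighborhood: no diagonal 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < m and 0 <= j + dj < n:
                    mask |= 1 << idx(i + di, j + dj)
            rows.append(mask)
    return m * n - gf2_rank(rows)

for m in range(1, 7):
    for n in range(1, 7):
        assert sigma_nullity(m, n) == gcd(n + 1, m + 1) - 1
```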

\subsubsection{$\sigma^+$-game}
\label{sss:close}
As defined in Section \ref{sec:allontoalloff}, in a $\sigma^+$-game the
selection of a vertex affects its closed neighborhood. So the definitions of the
adjacency matrix $A$ and the auxiliary matrix $B_m$ change.  The definition of
$B_m$ in the $\sigma^+$-game is
\begin{equation*}
  B_{i,j} = 
  \begin{cases}
    1 & \abs{i-j} \leq 1 \\
    0 & \text{otherwise}
  \end{cases}
\end{equation*}

While most propositions and theorems about the $\sigma$-game carry over to the
$\sigma^+$-game, Lemma \ref{lem:chp} cannot be applied again: in the
$\sigma^+$-game, $f_m(x)$ is no longer the characteristic polynomial of
$B_m$. This exception is the main difficulty in finding the kernel of
$f_n(B_m)$, since we cannot apply the Cayley-Hamilton theorem
directly. Instead, we have the following lemma giving a property of $B_m$.

\begin{lemma} \label{lem:minpol}
  $f_{m}(x+1)$ is the minimal polynomial of the matrix $B_m$.
\end{lemma}

\begin{proof}
  We first prove that $f_m(x+1)$ is the characteristic polynomial of $B_m$.
  By definition, over $GF(2)$ the characteristic polynomial of $B_m$ is
  \begin{eqnarray*}
    \chiup_{B_m}(x) &=& \det
    \begin{bmatrix}
      1+x & 1 & 0 & 0 & \cdots & 0 & 0 \\
      1 & 1+x & 1 & 0 & \cdots & 0 & 0 \\
      0 & 1 & 1+x & 1 & \cdots & 0 & 0 \\
      \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
      0 & 0 & 0 & 0 & \cdots & 1+x & 1 \\
      0 & 0 & 0 & 0 & \cdots & 1 & 1+x
    \end{bmatrix} \\
    &=&(1+x)\chiup_{B_{m-1}}(x) + \chiup_{B_{m-2}}(x) \\
    &=&f_m(x+1)
  \end{eqnarray*}
  since this recursion and the initial values $\chiup_{B_0}(x)=1$,
  $\chiup_{B_1}(x)=1+x$ match those of $f_m(x+1)$. Invoking the Cayley-Hamilton
  theorem, $\chiup_{B_m}(B_m) = 0$. Next, let us prove that the minimal
  polynomial of $B_m$ cannot have degree less than $m$. Let $\vec{e_1} = (1, 0,
  \cdots, 0)$ be a standard basis vector of $GF(2)^m$. We claim $\vec{e_1},
  B_m\vec{e_1}, B_m^2\vec{e_1}, \cdots, B_m^{m-1}\vec{e_1}$ form a basis of the
  vector space $GF(2)^m$. To see this, note that for $1\leq i \leq m$, the
  vector $B_m^{i-1}\vec{e_1}$ has position $i$ equal to 1 and all positions
  greater than $i$ equal to 0, so these $m$ vectors are linearly
  independent. Therefore, the minimal polynomial of $B_m$ cannot have degree
  less than $m$, and so $\chiup_{B_m}(x) = f_m(x+1)$ is the minimal polynomial
  of $B_m$.
\end{proof}

\begin{theorem}
  Let $r$ be the degree of the greatest common divisor of $f_{n}(x+1)$ and
  $f_{m}(x)$. Then the kernel of the linear map $\sigma^+$ has dimension $r$, so
  $2^{mn-r}$ configurations can be turned off. The grid $P_{m,n}$ is completely
  solvable in the $\sigma^+$-game if and only if $\gcd(f_{n}(x+1), f_{m}(x))=1$.
\end{theorem}

\begin{proof}
  By Lemma \ref{lem:minpol}, $f_{m}(x+1)$ is the minimal polynomial of the
  matrix $B_m$. Let $P(x) = \gcd(f_{m}(x+1), f_{n}(x))$. By B\'ezout's
  identity there exist polynomials $u(x)$ and $v(x)$ such that \[P(x) =
  f_{m}(x+1)u(x) + f_{n}(x)v(x).\] Substituting $B_m$ for $x$ and using
  $f_{m}(B_m + I) = 0$, we see that $P(B_m) = f_n(B_m)v(B_m)$, and therefore
  $\ker(f_{n}(B_m)) \subseteq \ker(P(B_m))$. On the other hand, $P(x)$
  divides $f_n(x)$, so $\ker(P(B_m)) \subseteq \ker(f_{n}(B_m))$. We
  conclude that the kernel of $f_{n}(B_m)$ is equal to the kernel of
  $P(B_m)$. In particular, when $\gcd(f_{m}(x+1), f_{n}(x)) = 1$ we have
  $P(B_m) = I$, so the nullity of $f_{n}(B_m)$ is $\dim(\ker(I)) = 0$ and
  the $\sigma^+$-game on the $m\times n$ grid is completely solvable.
  Finally, since the minimal polynomial of $B_m$ equals its characteristic
  polynomial, the Primary Decomposition Theorem shows that the kernel of
  $P(B_m)$ has dimension equal to the degree $r$ of $P(x)$. This completes
  the proof.
\end{proof}
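The dimension count in the theorem can be checked numerically. The sketch
below is our own illustration, not code from the surveyed papers (the helper
names are ours): it builds the $\sigma^+$ move matrix of the $m\times n$ grid
over $F_2$ as integer bitmasks and computes its nullity by Gaussian
elimination. For the classical $5\times 5$ board the nullity is known to be
2, so exactly $2^{23}$ of the $2^{25}$ configurations are solvable.

```python
# Sketch: nullity of the sigma^+ move matrix of an m x n grid over GF(2).
# Helper names (grid_matrix, gf2_rank, nullity) are ours, not the paper's.

def grid_matrix(m, n):
    """One row per cell, as an integer bitmask of its closed neighborhood."""
    idx = lambda i, j: i * n + j
    rows = []
    for i in range(m):
        for j in range(n):
            r = 1 << idx(i, j)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < m and 0 <= b < n:
                    r |= 1 << idx(a, b)
            rows.append(r)
    return rows

def gf2_rank(rows):
    """Rank over GF(2): eliminate using the lowest set bit of each pivot row."""
    rank, rows = 0, list(rows)
    while rows:
        r = rows.pop()
        if r == 0:
            continue
        rank += 1
        low = r & -r
        rows = [x ^ r if x & low else x for x in rows]
    return rank

def nullity(m, n):
    return m * n - gf2_rank(grid_matrix(m, n))

print(nullity(5, 5))  # 2: exactly 2^23 of the 2^25 configurations are solvable
```

For instance, `nullity(4, 4)` gives 4 and `nullity(3, 3)` gives 0, matching
the known kernel dimensions of the small grids.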



\subsection{Several APR Graph Classes}
Although we have a nice characterization of completely solvable grid graphs,
it is hard to generalize the technique to other kinds of graphs: the
argument above relies heavily on the special structure of the grid, and
without that structure we cannot derive most of the propositions of the
previous section. In this section we instead use graph theory to analyze
several graph classes. Consider a graph $G=(V,E)$ with an initial
configuration $C$. In order to turn some vertex $v$ off when it is initially
on (\ie, $v \in C$), we must select an odd number of vertices in the closed
neighborhood of $v$. More precisely, if $X$ is an activation set solving the
game, then
\begin{equation*}
  \abs{N[v] \intersect X} \equiv
  \begin{cases}
    1 \bmod 2 & \text{if $v \in C$} \\
    0 \bmod 2 & \text{if $v \notin C$}
  \end{cases}
\end{equation*}
If $X \subseteq V$ is an activation set that turns all vertices off when the
initial configuration is $C$, we call $X$ a \emph{$C$-parity set}. If a set
$S \subseteq V$ satisfies $\abs{S\intersect N[v]} \equiv 0 \bmod 2$ for all
$v\in V$, we call it an \emph{all-even parity set}.

\begin{theorem}
  A graph is completely solvable, or equivalently \emph{all parity
  realizable} (\emph{APR}), if and only if its only all-even parity set is
  $S=\emptyset$.
\end{theorem}

\begin{proof}
  If $S$ is an all-even parity set, then a simple calculation shows $A\vec{s}
  = \vec{0}$, where $A$ is the adjacency matrix and $\vec{s}$ is the
  characteristic vector of $S$. If $S$ is not empty, then $\vec{s} \neq
  \vec{0}$ and the nullity of $A$ is not 0; thus, by Theorem
  \ref{thm:null0}, the underlying graph is not completely solvable. In the
  other direction, if the graph is completely solvable, the only vector
  satisfying $A\vec{s}=\vec{0}$ is $\vec{0}$, which leads to the conclusion
  that the only all-even parity set is $\emptyset$.
\end{proof}
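The equivalence used in this proof is easy to test by brute force. The
sketch below (our own code; the helper names are ours) enumerates all-even
parity sets directly from the definition:

```python
# Sketch: enumerate the all-even parity sets of a small graph directly
# from the definition |N[v] ∩ S| ≡ 0 (mod 2); names are ours.
from itertools import combinations

def all_even_parity_sets(n, edges):
    nbhd = [{v} for v in range(n)]          # closed neighborhoods N[v]
    for u, v in edges:
        nbhd[u].add(v)
        nbhd[v].add(u)
    found = []
    for k in range(n + 1):
        for S in map(set, combinations(range(n), k)):
            if all(len(nbhd[v] & S) % 2 == 0 for v in range(n)):
                found.append(S)
    return found

path = lambda n: [(i, i + 1) for i in range(n - 1)]
print(all_even_parity_sets(4, path(4)))  # only the empty set: P_4 is APR
print(all_even_parity_sets(5, path(5)))  # also {v_1,v_2,v_4,v_5}: P_5 is not
```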

\begin{corollary} \label{cor:allodd}
  A graph all of whose vertices have odd degree is not completely solvable.
\end{corollary}

\begin{proof}
  The set $V$ is an all-even parity set: every vertex $v$ has $\abs{N[v]
  \intersect V} = \deg(v) + 1$, which is even.
\end{proof}

We now present several graph classes studied in the paper of Amin and
Slater\cite{AmS96}.
\subsubsection{Path}
\label{sss:path}
\emph{Path} $P_n$ is a graph $G=(V,E)$ of $n$ vertices $V=\set{v_1, v_2, \ldots,
  v_n}$ and $n-1$ edges $E=\cset{(v_i, v_{i+1})}{1\leq i < n}$.
\begin{proposition} \label{prop:path}
  $P_n$ is completely solvable if and only if $n \neq 3k+2$ for every
  integer $k \geq 0$.
\end{proposition}
\begin{proof}
  We first note the trivial cases: $P_1$ is APR and $P_2$ is not. Suppose
  $P_n$ is not APR and let $S$ be a nonempty all-even parity set. We first
  prove that the end vertices $v_1, v_n$ always belong to $S$. Assume $v_1
  \notin S$; since $\abs{N[v_1] \intersect S}$ must be even, this forces
  $v_2 \notin S$. The same deduction propagates along the path and leads to
  the conclusion that no vertex is in $S$, a contradiction.

  Next consider two adjacent vertices $v_i, v_{i+1}$, $1<i<n-1$. At least
  one of them must belong to $S$: if neither is in $S$, then $v_{i-1}$ and
  $v_{i+2}$ cannot be in $S$ either, and this again propagates to an empty
  all-even parity set. In conclusion, if $v_i \notin S$ for some $1<i<n$,
  then both $v_{i-1}$ and $v_{i+1}$ are in $S$.

  For $n>2$, $V$ itself cannot be an all-even parity set of $P_n$, since
  every internal vertex would have $\abs{N[v] \intersect V}=3$. Thus the
  all-even parity set $S$ is properly contained in $V$. As shown above, if
  an internal vertex is not in $S$, both of its neighbors are in $S$, so no
  two vertices of $V-S$ are adjacent. Also, three consecutive vertices
  $v_{i-1}, v_i, v_{i+1}$ cannot all belong to $S$, since then
  $\abs{N[v_i]\intersect S}=3$. Moreover, $S$ cannot contain a vertex $v$
  whose neighbors are both outside $S$, since that would give
  $\abs{N[v]\intersect S}=1$. Hence, writing $\abs{V-S} = k$, the set $S$
  consists of $k+1$ copies of $P_2$: the path starts and ends with a $P_2$
  inside $S$, and each vertex of $V-S$ lies between two such $P_2$'s. In
  conclusion, $n=2(k+1)+k = 3k+2$.
\end{proof}
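The proposition agrees with a direct rank computation over $F_2$: $P_n$ is
APR exactly when its closed-neighborhood matrix is nonsingular. A small
self-contained check (our own sketch, not from the surveyed papers):

```python
# Sketch (ours): P_n is APR iff its closed-neighborhood matrix has full
# rank over GF(2); the proposition predicts full rank iff n % 3 != 2.

def path_is_apr(n):
    rows = []
    for v in range(n):                    # row v = bitmask of N[v]
        r = 1 << v
        if v > 0:
            r |= 1 << (v - 1)
        if v < n - 1:
            r |= 1 << (v + 1)
        rows.append(r)
    rank = 0
    while rows:                           # GF(2) elimination on bitmasks
        r = rows.pop()
        if r:
            rank += 1
            low = r & -r
            rows = [x ^ r if x & low else x for x in rows]
    return rank == n

assert all(path_is_apr(n) == (n % 3 != 2) for n in range(1, 30))
```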

\subsubsection{Spider}
A \emph{spider} $T = (v, P_1, P_2, \ldots, P_n)$ is a tree consisting of a
root $v$ joined to $n$ disjoint paths (\emph{legs}) $P_1, P_2, \ldots, P_n$,
where $P_i = (v_{i_1}, v_{i_2}, \ldots)$ and the root $v$ is adjacent to the
vertices $v_{i_1}$, $1\leq i \leq n$.
\label{sss:spider}
\begin{proposition}
  Let $T = (v, P_1, P_2, \ldots, P_k)$, $k > 2$, be a spider and let $t_i =
  \abs{\cset{j}{\abs{P_j} \equiv i \bmod 3}}$ for $0 \leq i \leq 2$. Then
  $T$ is not completely solvable if and only if either $t_2 \geq 2$, or
  $t_2 = 0$ and $t_1$ is odd.
\end{proposition}

\begin{proof}
  Let $S$ be a nonempty all-even parity set of the spider $T$. If the root
  $v$ is not in $S$, then $S_i = S \intersect P_i$ is an all-even parity set
  of the path $P_i$, since $N[u]\intersect S_i = N[u]\intersect S$ for every
  vertex $u$ of $P_i$. By Proposition \ref{prop:path}, $S_i$ can be nonempty
  only if $\abs{P_i} = 3k+2$, $k\geq 0$, and in the argument of Proposition
  \ref{prop:path} the ends of non-APR paths lie in the all-even parity
  set. Since the parity of $v$ must be even, the number of nonempty $S_i$
  must be even, and since $S$ is nonempty it is at least two. Therefore
  $t_2 \geq 2$.

  On the other hand, suppose $v \in S$. Given a leg $P_i=(u_{i_1}, u_{i_2},
  \ldots, u_{i_n})$, there are two cases. First suppose $u_{i_1} \in
  S$. Since $S$ is an all-even parity set, $u_{i_2}$ cannot be in $S$, and
  the subpath $P'_i = (u_{i_3}, \ldots, u_{i_n})$ of $P_i$ must carry a
  nonempty all-even parity set. Then the length of $P'_i$ must be $3k+2$ and
  the length of $P_i$ must be $3k+4=3k' + 1$. Since the parity of $v$ must
  be even, the number of legs of this kind must be odd. In the second case,
  $u_{i_1} \notin S$; an argument similar to the first case shows that
  $P'_i=(u_{i_2}, \ldots, u_{i_n})$ is not APR and $P_i$ has length
  $3k$. Every leg falls into one of these two cases, so there is no leg of
  length $3k+2$, that is, $t_2=0$, and $t_1$ (the number of legs of the
  first kind) is odd.

  Conversely, if $t_2 \geq 2$, pick two legs of length $\equiv 2 \bmod 3$,
  say $P_1$ and $P_2$, and let $S_1, S_2$ be their nonempty all-even parity
  sets. The union $S_1 \union S_2$ is an all-even parity set of the whole
  spider, since the end of each $P_i$ adjacent to the root lies in $S_i$ and
  exactly two such ends are connected to the root $v$. If $t_2=0$ and $t_1$
  is odd, then for every leg $P_i$ of length $3k+1$ we take $S_i =
  \set{u_{i_1}, u_{i_3}, u_{i_4}, u_{i_6}, u_{i_7}, \ldots}$, and for every
  leg of length $3k$ we take $S_i = \set{u_{i_2}, u_{i_3}, u_{i_5},
  u_{i_6}, \ldots}$; in both cases pairs of chosen vertices alternate with
  single skipped vertices with period three. Adding the root $v$, the union
  of these sets is an all-even parity set of the spider: in particular,
  $\abs{N[v]\intersect S} = 1 + t_1$ is even. This completes the whole
  proof.
\end{proof}
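Small cases of the proposition can be checked exhaustively. The sketch below
(our own code; brute force, so only tiny spiders are feasible) tests legs of
lengths $(2,2,3)$ (two legs $\equiv 2 \bmod 3$, not solvable), $(1,1,1)$
($t_2=0$, $t_1=3$ odd, not solvable), and $(1,1,3)$ ($t_2=0$, $t_1=2$ even,
solvable):

```python
# Sketch (ours): brute-force APR test for spiders by searching for a
# nonempty all-even parity set.
from itertools import combinations

def spider_is_apr(leg_lengths):
    nbhd = [{0}]                          # vertex 0 is the root
    for length in leg_lengths:            # attach each leg as a path
        prev = 0
        for _ in range(length):
            v = len(nbhd)
            nbhd.append({v, prev})
            nbhd[prev].add(v)
            prev = v
    n = len(nbhd)
    for k in range(1, n + 1):
        for S in map(set, combinations(range(n), k)):
            if all(len(nbhd[v] & S) % 2 == 0 for v in range(n)):
                return False              # nonempty all-even parity set
    return True

print(spider_is_apr([2, 2, 3]))  # False: two legs of length 2 mod 3
print(spider_is_apr([1, 1, 1]))  # False: t_2 = 0 and t_1 = 3 is odd
print(spider_is_apr([1, 1, 3]))  # True:  t_2 = 0 and t_1 = 2 is even
```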

It is difficult to directly characterize the non-APR instances of the
following structures, so our next approach uses composition: we first find
some basic non-APR instances of those structures and then use induction to
obtain all the non-APR instances.


\subsubsection{Caterpillars}
\label{sss:caterpillars}
A \emph{caterpillar} $T$ is a tree consisting of a path, the \emph{spine}
$(u_1, u_2, \ldots, u_k)$, with end vertices attached to the spine: each
vertex $u_i$ of the spine has end vertices $(v_{i_1}, v_{i_2}, \ldots,
v_{i_n})$ connected to it. The only caterpillars with all vertices of odd
degree are illustrated in Figure \ref{fig:cater}: $T_1$, a single spine
vertex with an odd number of end vertices attached, and $T_{2,j}$, in which
the two end vertices of the spine have an even number of end vertices
attached and every internal vertex of the spine has an odd number of end
vertices attached.

\begin{figure}
  \centering
  \includegraphics[scale=0.8]{t1}%
  \hspace{.5cm}
  \includegraphics[scale=0.8]{t2j}
  \caption{Caterpillars with all vertices of odd degree}
  \label{fig:cater}
\end{figure}

Given two caterpillars $T_{c_1}$ and $T_{c_2}$ with spines $(u_1, u_2,
\ldots, u_m)$ and $(v_1, v_2, \ldots, v_n)$, $T_{c_1}\circ x \circ T_{c_2}$
denotes a caterpillar with spine $(u_1, u_2, \ldots, u_m, x, v_1, v_2,
\ldots, v_n)$, where the number of end vertices connected to $x$ is
arbitrary and the end vertices connected to each $u_i$ and $v_i$ do not
change.

\begin{proposition}
  The set $T_c^*$ of all non-APR caterpillars is defined by
  \begin{enumerate}
  \item each tree of type $T_1$ or $T_{2,j}$ is in $T_c^*$, and
  \item if $T_{c_1}$ and $T_{c_2}$ are in $T_c^*$, then so is $T_{c_1} \circ x
    \circ T_{c_2}$.
  \end{enumerate}
\end{proposition}

\begin{proof}
  We observe that every caterpillar of type $T_1$ or $T_{2,j}$ has all
  vertices of odd degree, so by Corollary \ref{cor:allodd} those
  caterpillars are non-APR. If $T_{c_1}$ and $T_{c_2}$ are non-APR
  caterpillars, let $S_1$ and $S_2$ be their nonempty all-even parity sets,
  respectively. Note that the ends of the spine always lie in the all-even
  parity set: in the base case, the all-even parity set $S$ of a caterpillar
  of type $T_1$ or $T_{2,j}$ is the whole vertex set, so in particular it
  contains the ends of the spine. Also observe that if the ends of the
  spines of $T_{c_1}$ and $T_{c_2}$ are in their all-even parity sets, then
  the caterpillar constructed by the operation $T_{c_1}\circ x \circ
  T_{c_2}$ has the all-even parity set $S=S_1 \union S_2$, and again both
  ends of its spine are in $S$. Hence, by induction, all the caterpillars
  constructed by the operation are non-APR.

  Now we prove that no other caterpillar is non-APR. Suppose a caterpillar
  $T_c$ is not APR and its type is neither $T_1$ nor $T_{2,j}$; let $S$ be a
  nonempty all-even parity set. First note that a vertex $v$ of the spine
  belongs to $S$ if and only if the end vertices attached to $v$ are in $S$
  (for an end vertex $w$ attached to $v$, $N[w]=\set{w,v}$ must have an even
  intersection with $S$). We also note that $S$ cannot be $V$, since then
  all vertices would have odd degree and $T_c$ would be of type $T_1$ or
  $T_{2,j}$, a contradiction. Thus at least one vertex $u_i$ of the spine,
  together with its attached end vertices, does not belong to $S$. This
  vertex cannot be an end of the spine: if, say, $u_1 \notin S$, then either
  $u_2 \in S$, which gives $\abs{N[u_1] \intersect S} = 1$, or $u_2 \notin
  S$, which propagates along the spine and forces $S = \emptyset$; both
  cases are contradictions. Thus $u_i$ splits the spine into two parts,
  giving two separate caterpillars $T_{c_1}$ and $T_{c_2}$. Additionally,
  $S_j = S\intersect V_{c_j}$ is an all-even parity set of $T_{c_j}$: the
  only vertices affected by the split are $u_{i-1}$ and $u_{i+1}$, and since
  $u_i$ is not in $S$, the sizes of their intersections do not change. Hence
  $T_c$ is obtained as $T_{c_1} \circ u_i \circ T_{c_2}$.
\end{proof}

\subsubsection{APR Trees} \label{sss:tree} The following two definitions
come from the paper of Amin and Slater\cite{AmS96}. For an activation set
$X \subseteq V(G)$, we define $ODS(X) \eqdef C$, where $C$ is the initial
configuration that $X$ turns off. By the argument in Section
\ref{sec:compsolve}, a graph is completely solvable if and only if the
correspondence between initial configurations and activation sets is
one-to-one. Let $(T_i, x_i)$ be a tree $T_i$ with a designated vertex
$x_i$. We use $X_i$ to denote the activation set for the initial
configuration $C=\set{x_i}$, namely $ODS(X_i) = \set{x_i}$.

\begin{definition}
  Assume $T_1$ and $T_2$ are APR and that for $(T_1, x_1)$ we have $x_1
  \notin X_1$. (Note that for $(T_2, x_2)$, $x_2$ may or may not be in
  $X_2$.) The tree $T^* = T_1 \union T_2 + x_1x_2$, where
  $V(T^*)=V(T_1)\union V(T_2)$ and $E(T^*)=E(T_1) \union E(T_2) \union
  \set{(x_1, x_2)}$, is said to be obtained from $(T_1,x_1)$ and $(T_2,
  x_2)$ by a \emph{TYPE 1 operation}.
\end{definition}

\begin{definition}
  Given $(T_1, x_1), (T_2, x_2), \ldots, (T_{2k}, x_{2k})$, where each $T_i$
  is APR and $x_i \in X_i$, the tree $T^{**} = T_1\union T_2\union \cdots
  \union T_{2k} + v + x_1v + \cdots + x_{2k}v$, where $V(T^{**}) = V(T_1)
  \union V(T_2) \union \cdots \union V(T_{2k}) \union \set{v}$ and
  $E(T^{**}) = E(T_1) \union E(T_2) \union \cdots \union E(T_{2k}) \union
  \set{(v,x_1), (v, x_2), \ldots, (v, x_{2k})}$, is said to be obtained from
  $(T_1, x_1), (T_2, x_2), \ldots, (T_{2k}, x_{2k})$ by a \emph{TYPE 2
  operation}.
\end{definition}

\begin{proposition}
  If $T_1$ and $T_2$ are APR, then tree $T^*$ obtained from $(T_1, x_1),
  (T_2,x_2)$ by a TYPE 1 operation is APR.
\end{proposition}

\begin{proof}
  Suppose $T^*$ is not APR; let $S$ be a nonempty all-even parity set and
  $S_i = S\intersect V(T_i)$. If $\abs{S\intersect \set{x_1, x_2}} \leq 1$,
  then at least one $S_i$ is a nonempty all-even parity set of $T_i$, a
  contradiction. If $S\intersect \set{x_1, x_2} = \set{x_1, x_2}$, then
  $S_1$ is an activation set that turns off all vertices when only $x_1$ is
  initially on. Recall that by the definition of the TYPE 1 operation there
  is an activation set $X_1$ with $ODS(X_1)=\set{x_1}$ and $x_1 \notin
  X_1$. Since $x_1 \in S_1$ we have $S_1 \neq X_1$, so the initial
  configuration $C=\set{x_1}$ has two different activation sets. Thus $T_1$
  is not APR, a contradiction.
\end{proof}

\begin{proposition}
  If $T_1, T_2, \ldots, T_{2k}, k\geq 1$ are APR and $T^{**}$ is the tree
  obtained from $T_1, T_2, \ldots, T_{2k}$ by a TYPE 2 operation, $T^{**}$ is
  also APR.
\end{proposition}

\begin{proof}
  Assume $T^{**}$ is not APR; let $S$ be a nonempty all-even parity set and
  $S_i = S\intersect V(T_i)$. If the vertex $v$ of the TYPE 2 operation is
  not in $S$, then at least one $S_i$ is nonempty and is an all-even parity
  set of $T_i$, a contradiction. If $v \in S$, then since $N[v]\intersect S$
  has even size, only an odd number of the $x_i$ are in $S$; in particular
  we can pick some $x_i \notin S$. Then $S_i$ is an activation set that
  turns off the initial configuration $C=\set{x_i}$. Recall that in the
  definition of the TYPE 2 operation, $X_i$ is an activation set that turns
  off $\set{x_i}$ with $x_i \in X_i$. Thus $X_i \neq S_i$, which gives two
  different activation sets for one initial configuration. Hence $T_i$ is
  not APR, a contradiction.
\end{proof}
\begin{theorem}
  A tree $T$ is completely solvable if and only if $T$ is $K_1$ or $T$ can
  be obtained from a set of completely solvable trees by TYPE 1 and TYPE 2
  operations.
\end{theorem}
The ``if'' part follows directly from the two propositions above. The ``only
if'' part is quite involved, so we refer the interested reader to the paper
by Amin and Slater\cite{AmS96}.

\subsection{Historical Reviews}
Goldwasser \etal first studied and gave the recursion of the vector sequence
$(\vec{x_1}, \vec{x_2}, \cdots, \vec{x_n})$\cite{GKZ95}, which forms an
activation set on a grid graph with closed neighborhoods. In that paper they
also used Fibonacci polynomials to obtain Proposition \ref{prop:xf} and
pointed out the link between the kernel of the adjacency matrix $A$ and the
kernel of $f_n(B_m)$. Later, in another paper\cite{GoK97}, Goldwasser \etal
gave a characterization of completely solvable grid graphs; there they also
studied the divisibility properties of Fibonacci polynomials in order to
analyze the kernel of $f_n(B_m)$. Klostermeyer later gave a survey
\cite{Klo01} of the study of ``Lights Out!''. In one of Sutner's
papers\cite{Sut00}, he first studied the divisibility properties of
Fibonacci polynomials and gave results for both the $\sigma$-game and the
$\sigma^+$-game. Another paper, by Barua and Ramakrishnan \cite{BaR96},
gives the characterization of the $\sigma$-game but only partial results for
the $\sigma^+$-game. The proof used in Section \ref{sss:open} is basically
from \cite{BaR96}, while the result on the $\sigma^+$-game in Section
\ref{sss:close} is from \cite{GoK97}. The results on several classes of APR
graphs are from \cite{AmS92} and \cite{AmS96}.


\section{Optimization Problems} \label{sec:optimize}
\subsection{Minimize Odd Parity Set}
In Section \ref{sec:allontoalloff}, we proved that every undirected graph
can be turned from all on to all off by some activation set $X$. It is
natural to ask how to find the smallest such activation set; we study this
problem in this section.
\subsubsection{$\NP$-Completeness on General Graphs}
Although an activation set turning all vertices off from all on always
exists for every graph, it is hard to minimize the size of such a set when
the target is a general graph. The following definition and proof are from a
paper by Sutner\cite{Sut88}, although there they were not studied in the
context of ``Lights Out!''.

\begin{definition}
  BOUND-ALLOFF=$\{(G=(V,E),k)\mid$ $G$ has an activation set of size at most
  $k$ that turns it from all on to all off$\}$
\end{definition}

\begin{theorem} \label{thm:alloffnpc}
  BOUND-ALLOFF is $\NP$-Complete.
\end{theorem}

\begin{proof}\cite{Sut88}
  It is obvious that the problem is in $\NP$: given a graph $G=(V,E)$ and an
  activation set $X$, first check $\abs{X} \leq k$, then activate every
  vertex in $X$ on $G$ and verify that the states of all vertices of $G$ are
  off.

  To prove the $\NP$-hardness of the problem, we reduce 3SAT to it. An
  instance of 3SAT is a boolean formula in 3-conjunctive normal form,
  \[\phi = \varphi_1 \land \varphi_2 \land \cdots \land \varphi_m\] where
  each clause $\varphi_i$ is a disjunction $z_{i,1} \lor z_{i,2} \lor
  z_{i,3}$ and the $z_{i,j}$ are literals over the variable set $X =
  \set{x_1, x_2, \ldots, x_n}$. To construct a reduction from 3SAT to
  BOUND-ALLOFF, we use gadgets to simulate the variables and clauses of the
  formula. For each variable, we have a gadget of two connected vertices
  $x_i$ and $\bar{x_i}$. For each clause, the clause gadget $H$ has three
  $a$-vertices and seven $b$-vertices; see Figure \ref{fig:gadget} for an
  example. The seven vertices in the box, the $b$-vertices, form a complete
  graph $K_7$ (the edges between them are omitted in the figure). The other
  three vertices, the $a$-vertices, each represent one literal of the
  clause. The connections between $a$-vertices and $b$-vertices are all
  shown in the figure, and each $a$-vertex is also connected to its
  corresponding variable vertex.

  Now we demonstrate why this construction works: we show that $\phi$ is
  satisfiable if and only if $G$ has an activation set of size $n+m$. First
  suppose we are given such an activation set $X$. We construct a satisfying
  assignment $\alpha$ by setting $\alpha(x_i) = 1$ if and only if $x_i \in
  X$. The assignment is consistent, since either $x_i$ or $\bar{x_i}$ must
  be in the activation set, but not both. Since there is one $b$-vertex
  connected only to the other $b$-vertices, at least one $b$-vertex of each
  clause gadget must be in the activation set $X$ for that vertex to be
  turned off. By the definition of the problem, the activation set $X$ has
  size at most $k=n+m$; since $\abs{X \intersect \set{x_1, x_2, \ldots, x_n,
  \bar{x_1}, \bar{x_2}, \ldots, \bar{x_n}}} = n$ and there are $m$ clauses
  in all, exactly one $b_{i, j}$ of each clause gadget appears in the
  activation set. So if every $a_{i, j}$ is turned off, at least one
  $a_{i,j}$ must be covered through the edge to its corresponding variable
  vertex. Therefore each clause is true and the assignment $\alpha$
  satisfies $\phi$.

  On the other hand, if we have a satisfying assignment for $\phi$, then for
  each variable $x_i$ we select the vertex $x_i$ into the activation set if
  $x_i$ is set to true in the assignment, and the vertex $\bar{x_i}$
  otherwise. For each clause gadget, since the assignment satisfies the
  clause, at least one $a$-vertex is covered by a vertex in the activation
  set. If exactly one $a$-vertex is covered, we select the $b$-vertex that
  connects to the remaining two $a$-vertices; if two $a$-vertices are
  covered, we select the $b$-vertex connected to the remaining $a$-vertex;
  and if all three $a$-vertices are covered, we select the $b$-vertex that
  connects only to $b$-vertices. This shows we have an activation set of
  size $m+n$.
\end{proof}
\begin{figure}
  \centering
  \includegraphics{gadget}
  \caption{A component of the graph used to embed 3SAT}
  \label{fig:gadget}
\end{figure}


\subsubsection{Minimal Activation Set for Several Graph Classes}
Although it is $\NP$-hard to find a minimal activation set turning all
vertices off from all on in a general graph, for some graphs with special
structure it is still possible to find the minimal activation set by
exploiting the properties of that class of graphs. Given a graph $G=(V,E)$,
we use $a(G)$ to denote the size of such a minimal set. Recall from Section
\ref{sec:compsolve} that if $\vec{x}$ is the binary vector representing an
activation set $X$ that turns all vertices off from all on, then $\vec{x}$
is a solution of the equation $A\vec{x}=\vec{1}$. To satisfy this equation,
$X$ must meet the following requirement:
\begin{equation} \label{eq:oddall}
  \abs{N[v] \intersect X} \equiv 1 \bmod 2, \quad \forall v \in V
\end{equation}
This condition on $X$ is important, and we will use it repeatedly in the
following arguments. First we give results for several special graph classes
from the paper by Colon \etal\cite{CFF99}.

\paragraph{Path} Let us first study the simplest kind of graph, namely
\emph{path}.
\begin{proposition}
  For a path $P_n=(v_1, v_2, \ldots, v_n)$, $a(P_n) = \ceil{\frac{n}{3}}$.
\end{proposition}

\begin{proof}
  Obviously $X$ cannot be the empty set. Also note that two adjacent
  vertices $v_i, v_{i+1}$ cannot both be in $X$: every vertex of $X$ must
  have an even number of neighbors in $X$, so $v_{i-1}$ and $v_{i+2}$ would
  also have to be in $X$, and this propagates until the end vertices are in
  $X$ with only one neighbor in $X$, a contradiction. Therefore, any two
  consecutive vertices of $X$ are separated by vertices not in $X$. A single
  vertex between two vertices of $X$ does not work, since it would have an
  even number (two) of closed neighbors in $X$; more than two vertices in
  between do not work either, since a middle vertex would have no closed
  neighbor in $X$. We conclude that in an activation set of a path,
  consecutive vertices of $X$ are separated by exactly two vertices not in
  $X$.

  Thus, for a path on $n = 3k+1$ vertices, we put the first vertex in $X$
  and then every third vertex until the path ends. For $n = 3k$, we instead
  start from the second vertex. In the case of $n=3k+2$, both choices work
  and give activation sets of the same size. The activation set constructed
  by this approach is unique (in the case of $n=3k+2$, the two sets are
  mirror images of each other), and therefore for a path $P$ on $n$
  vertices, $a(P) = \ceil{\frac{n}{3}}$.
\end{proof}
\paragraph{Cycle} The argument for cycles is basically the same as for
paths.
\begin{proposition}
  Given a cycle $C$ of $n$ vertices, we have
  \begin{equation*}
    a(C) =
    \begin{cases}
      \frac{n}{3} & \textrm{if } n \equiv 0 \bmod 3 \\
      n & \textrm{otherwise}
    \end{cases}
  \end{equation*}
\end{proposition}

\begin{proof}
  Observe that $X=V$ is always an activation set, since every vertex has
  $\abs{N[v]\intersect V}=3$. To find a smaller activation set, note that if
  $X \neq V$, the same argument as for paths shows that no two adjacent
  vertices are both in $X$ and that consecutive vertices of $X$ are
  separated by exactly two vertices not in $X$. Going around the cycle, this
  is possible only when $n \equiv 0 \bmod 3$, in which case we obtain an
  activation set of size $\frac{n}{3}$. Otherwise, the unique activation set
  is $V$ itself, of size $n$.
\end{proof}
\paragraph{Complete Bipartite Graph} A complete bipartite graph $K_{m,n}$ is
a graph whose vertex set is partitioned into two parts $V_m$ and $V_n$ of
sizes $m$ and $n$, respectively; every vertex of $V_m$ is adjacent to all
vertices of $V_n$ and to no vertex of $V_m$, and symmetrically for the
vertices of $V_n$.
\begin{proposition}
  For a complete bipartite graph $K_{m,n}$,
  \begin{equation*}
    a(K_{m,n}) =
    \begin{cases}
      \textrm{the smallest odd element of } \set{m,n} & \textrm{if at least
        one of } m,n \textrm{ is odd} \\
      m+n & \textrm{otherwise}
    \end{cases}
  \end{equation*}
\end{proposition}

\begin{proof}
  Note that all vertices $v$ in the same part have sets $N(v) \intersect X$
  of the same size, where $N(v)$ is the open neighborhood of $v$, since
  there is no edge between vertices of the same part. In order for every
  vertex to have $\abs{N[v] \intersect X}$ odd, each part must therefore be
  either entirely contained in $X$ or disjoint from $X$. When $m$ is odd, we
  can take $V_m$ as the activation set: every vertex of $V_n$ is then
  adjacent to an odd number of vertices of $X$, and every vertex $v\in V_m$
  has $N[v]\intersect X = \set{v}$. The same argument applies to $V_n$. If
  both $m$ and $n$ are odd, we choose the smaller part. If both are even,
  neither part alone works, so $X$ must be $V$, and indeed
  $\abs{N[v]\intersect V}$ is odd for every $v\in V$.
\end{proof}
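The three closed forms above can be verified exhaustively on small instances.
The sketch below is our own code (the helper names are ours); the search is
exponential, so it is feasible only for small $n$:

```python
# Sketch (ours): smallest activation set X with |N[v] ∩ X| odd for all v,
# by exhaustive search; checked against the three closed forms above.
from itertools import combinations
from math import ceil

def min_activation(nbhd):
    n = len(nbhd)
    for k in range(1, n + 1):
        for X in map(set, combinations(range(n), k)):
            if all(len(nbhd[v] & X) % 2 == 1 for v in range(n)):
                return k                  # some X of size k works

def path_nbhd(n):
    return [{v} | ({v - 1} if v else set()) | ({v + 1} if v < n - 1 else set())
            for v in range(n)]

def cycle_nbhd(n):
    return [{v, (v - 1) % n, (v + 1) % n} for v in range(n)]

def kmn_nbhd(m, n):
    A, B = set(range(m)), set(range(m, m + n))
    return [{v} | B for v in A] + [{v} | A for v in B]

assert all(min_activation(path_nbhd(n)) == ceil(n / 3) for n in range(1, 11))
assert min_activation(cycle_nbhd(6)) == 2 and min_activation(cycle_nbhd(5)) == 5
assert min_activation(kmn_nbhd(2, 3)) == 3 and min_activation(kmn_nbhd(2, 2)) == 4
```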

\paragraph{Series-Parallel Graph} Now we introduce a large graph class that
was studied in the paper of Amin and Slater\cite{AmS92}. A series-parallel
graph is a graph $G$ with two special vertices, the left and right terminals
$(u,v)$. We give the formal definition below; please refer to Figure
\ref{fig:sp} for illustrations of the following operations.
\begin{definition}[Series-parallel graph]
  A series-parallel graph is a graph $G$ with two designated vertices $u$,
  $v$ as its \emph{terminals}. Its instances are defined recursively as
  follows.
  \begin{enumerate}
  \item $K_2$ is a series-parallel graph with its two vertices, namely $u$
    and $v$, as its terminals.
  \item The \emph{Series 1} composition of $G_1$ with terminals $(u_1, v_1)$
    and $G_2$ with terminals $(u_2, v_2)$ identifies $v_1$ with $u_2$ and
    constructs a graph $G$ with terminals $(u_1, v_2)$.
  \item The \emph{Series 2} composition of $G_1$ with terminals $(u_1, v_1)$
    and $G_2$ with terminals $(u_2, v_2)$ identifies $v_1$ with $u_2$ and
    constructs a graph $G$ with terminals $(u_1, v_1=u_2)$.
  \item The \emph{Parallel} composition of $G_1$ with terminals $(u_1, v_1)$
    and $G_2$ with terminals $(u_2, v_2)$ identifies the terminals pairwise
    and constructs a graph $G$ with terminals $(u_1=u_2, v_1=v_2)$.
  \end{enumerate}
\end{definition}

\begin{figure}
  \centering 
  \subfigure[Series 1 Composition]{
    \label{fig:subfig:s1} %% label for first subfigure 
    \includegraphics{s1}} 
  \subfigure[Series 2 Composition]{
    \label{fig:subfig:s2} %% label for second subfigure 
    \includegraphics{s2}}
  \subfigure[Parallel Composition]{
    \label{fig:subfig:p}
    \includegraphics{p}}
  \caption{Composition operation defined in series-parallel graph}
  \label{fig:sp} %% label for entire figure 
\end{figure}

Observe that the operations used in the composition of a series-parallel
graph can be represented by a binary tree. In fact, given a series-parallel
graph $G$, there exist linear-time algorithms \cite{Val78, VTL82} to analyze
its structure and produce this binary tree, the \emph{parsing tree} of $G$.

To find the minimal activation set $X$ of a series-parallel graph $G$
obtained from $G_1$ and $G_2$ by one of the three operations above, consider
$X_1 = X \intersect V(G_1)$ and $X_2 = X\intersect V(G_2)$. If $G$ comes
from a Series 1 composition, then either both $X_1$ and $X_2$ contain $v_1 =
u_2$ or neither does, while the parities of $\abs{X_1 \intersect N[v_1]}$
and $\abs{X_2 \intersect N[u_2]}$ depend on whether $v_1 \in X$ or not. This
leads to the following characterization of a series-parallel graph.

Given $G=(V,E)$ with terminals $(u,v)$, let $N_{ab,xy}$ be the size of a
minimal set $X\subseteq V$ such that $\abs{N[w]\intersect X}$ is odd for all
$w\in V-\set{u,v}$, where $a\in \set{1,0}$ with $a=1$ if and only if $u\in
X$; $b$ is defined similarly for $v$; $x\in \set{D,E}$ with $x=D$ if and
only if $\abs{N[u]\intersect X}$ is odd; and $y$ is defined similarly for
$v$. We now give the recursions for each operation.

Consider the case where $(G, (u_1, v_2))$ is obtained from $(G_1, (u_1,
v_1))$ and $(G_2, (u_2, v_2))$ by a Series 1 composition (see Figure
\ref{fig:subfig:s1}). $N_{ab,xy}$ satisfies the following recursion.
\begin{equation*}
  \begin{split}
    N_{ab,xy}(G)=\min(& N_{a1,xD}(G_1) + N_{1b,Dy}(G_2), N_{a1,xE}(G_1) +
    N_{1b,Ey}(G_2),\\
    & N_{a0,xD}(G_1) + N_{0b,Ey}(G_2), N_{a0,xE}(G_1) + N_{0b,Dy}(G_2))
  \end{split}
\end{equation*}
To see why Series 1 has this recursion, note that $u_1 \in X \Leftrightarrow
u_1 \in X_1$, and $N[u_1] \intersect X$ has odd size if and only if $N[u_1]
\intersect X_1$ has odd size; a similar argument applies to $v_2$. Now
consider $v_1$ and $u_2$, which are identified: $v_1\in X \Leftrightarrow
u_2 \in X$, so the value of $b$ for $G_1$ and of $a$ for $G_2$ are always
the same. Regarding the parities of $v_1$ and $u_2$, the merged vertex must
satisfy the odd-parity condition in $G$, and it is counted in both $N[v_1]
\intersect X_1$ and $N[u_2] \intersect X_2$ when it is in $X$; hence, if
$v_1 \in X$, the two parities must be the same, and if $v_1 \notin X$, the
two parities must be different.

Consider the case where $(G, (u_1, v_1=u_2))$ is obtained from $(G_1, (u_1,
v_1))$ and $(G_2, (u_2, v_2))$ by a Series 2 composition (see Figure
\ref{fig:subfig:s2}). $N_{ab,xy}$ satisfies the following recursions.

\begin{equation*}
  \begin{split}
    N_{a1,xD}(G)=\min(& N_{a1,xD}(G_1) + N_{10,DD}(G_2), N_{a1,xD}(G_1) +
    N_{11,DD}(G_2),\\ 
    & N_{a1,xE}(G_1) + N_{10,ED}(G_2), N_{a1,xE}(G_1) + N_{11,ED}(G_2)) \\ \\
    N_{a1,xE}(G)=\min(& N_{a1,xD}(G_1) + N_{10,ED}(G_2), N_{a1,xD}(G_1) +
    N_{11,ED}(G_2),\\
    & N_{a1,xE}(G_1) + N_{10,DD}(G_2), N_{a1,xE}(G_1) + N_{11,DD}(G_2)) \\ \\
    N_{a0,xD}(G)=\min(& N_{a0,xD}(G_1) + N_{00,ED}(G_2), N_{a0,xD}(G_1) +
    N_{01,ED}(G_2),\\
    & N_{a0,xE}(G_1) + N_{00,DD}(G_2), N_{a0,xE}(G_1) + N_{01,DD}(G_2)) \\ \\
    N_{a0,xE}(G)=\min(& N_{a0,xD}(G_1) + N_{00,DD}(G_2), N_{a0,xD}(G_1) +
    N_{01,DD}(G_2),\\
    & N_{a0,xE}(G_1) + N_{00,ED}(G_2), N_{a0,xE}(G_1) + N_{01,ED}(G_2)) \\ \\
  \end{split}
\end{equation*}
Here we explain the first of these four recursions; the others follow by
similar arguments. We have $u_1 \in X \Leftrightarrow u_1 \in X_1$, and the
same holds for the parity of $u_1$; therefore the values of $a$ and $x$ in
$G$ are always the same as in $G_1$. If $v_1 = u_2 \in X$ (\ie, $b=1$ in
$G$) and the size of $N[v_1]\intersect X$ is odd (\ie, $y=D$ in $G$), then
the two sides must have parities of the same kind: when $N[v_1] \intersect
X_1$ has odd size, $N[u_2] \intersect X_2$ must also have odd size, and when
the former is even, the latter must be even. This explains the parities of
$v_1$ and $u_2$ in $G_1$ and $G_2$ in the first equation. We also need to
consider whether $v_2$ is in $X_2$ or not, which explains the four cases of
the first equation.

In the case of the parallel composition (see Figure \ref{fig:subfig:p}), the
analysis is basically the same, but every configuration $N_{ab,xy}$ has its
own subcases: both terminals of each subgraph are merged into the terminals
of the resulting graph, so there are 16 recursions in total. We give three
of them without detailed explanation.

\begin{equation*}
  \begin{split}
    N_{00,DE}(G)=\min(& N_{00,DD}(G_1) + N_{00,ED}(G_2), N_{00,DE}(G_1) +
    N_{00,EE}(G_2),\\ 
    & N_{00,ED}(G_1) + N_{00,DD}(G_2), N_{00,EE}(G_1) + N_{00,DE}(G_2)) \\ \\
    N_{01,DD}(G)=\min(& N_{01,DD}(G_1) + N_{01,ED}(G_2), N_{01,DE}(G_1) +
    N_{01,EE}(G_2),\\
    & N_{01,ED}(G_1) + N_{01,DD}(G_2), N_{01,ED}(G_1) + N_{01,DE}(G_2)) \\ \\
    N_{11,ED}(G)=\min(& N_{11,DD}(G_1) + N_{11,ED}(G_2), N_{11,DE}(G_1) +
    N_{11,EE}(G_2),\\
    & N_{11,ED}(G_1) + N_{11,DD}(G_2), N_{11,EE}(G_1) + N_{11,DE}(G_2)) \\ \\
  \end{split}
\end{equation*}

\subsection{Bounds for Off State}
\subsubsection{Maximizing Off Switches (MOS)}
Recall from the discussion in Section \ref{sec:compsolve} that for some classes
of graphs, not every initial configuration is solvable. This motivates another
question, first posed by Goldwasser \etal \cite{Gol00}, who also proved the
$\NP$-completeness of the problem.

\begin{definition}[Maximizing Off Switches (MOS)]
  Given a graph $G=(V,E)$ with some initial configuration, find the maximum
  number of vertices that can be turned off. In the decision version, we are
  also given an integer $k$ and ask whether at least $k$ vertices can be turned
  off.
\end{definition}


\begin{theorem} \label{thm:mos}
  Maximizing Off Switches is $\NP$-complete.
\end{theorem}

\begin{proof}
  It is easy to see that the decision version of this problem is in $\NP$: the
  certificate is an activation set $X$, whose size is polynomial in the size of
  the instance, and we verify it by applying the activation set to the given
  graph and checking whether at least $k$ vertices are off.

  To prove $\NP$-hardness, we use a reduction from MAX-3SAT to MOS. The
  MAX-3SAT instances we use here are restricted so that each variable appears
  in exactly five clauses.

  First, we describe the reduction. Given a MAX-3SAT instance $\phi$ with $n$
  variables and $m$ clauses, we first create seven pairs of adjacent vertices
  $(c_1, e_1), (c_2, e_2), \ldots, (c_7, e_7)$ for each clause, where the
  vertices $\set{c_i}$ and $\set{e_i}$ are called \emph{clause} and \emph{extra}
  vertices, respectively. Then for each clause $u\lor v\lor w$, we create four
  variable vertices $u, v, w, x$ and connect them into a complete graph
  $K_4$. We then connect $u,v,w$ to $c_1$; $u,v$ to $c_2$; $u, w$ to $c_3$;
  $v,w$ to $c_4$; $u$ to $c_5$; $v$ to $c_6$; and $w$ to $c_7$. We call these
  18 vertices the \emph{clause component} of the corresponding clause. Then for
  a variable appearing in clauses $U_1, U_2, \ldots, U_5$, we create a
  \emph{truth setting component} for each pair $U_i, U_j$ with $i \neq j$. This
  truth setting component contains five pairs of adjacent vertices, and we
  connect $u_i$ and $u_j$, the literals over this variable in clauses $U_i$ and
  $U_j$, to five of the vertices in the truth setting component. See Figure
  \ref{fig:mos} for an example. All extra vertices are initially off. If $u_i$
  and $u_j$ are complementary literals of the same variable, the vertices they
  connect to in the truth setting component are initially on; otherwise, those
  vertices are initially off. All other vertices are initially on. In Figure
  \ref{fig:mos}, the shaded vertices are off. Now we count the vertices of the
  constructed graph. Each clause component has $4+7\times 2 = 18$
  vertices. Each variable has ${5\choose 2}$ truth setting components of 10
  vertices each. Recall that each variable appears in exactly 5 clauses, so
  $n=\frac{3m}{5}$. Therefore, the total number of vertices is $18m +
  {5\choose 2}10n = 18m + 60m = 78m$.

  We claim that $\phi$ has $c$ or more satisfiable clauses if and only if at
  least $f=15c + 11(m-c) + 60m$ vertices of $G$ can be turned off by some
  activation set, starting from the initial configuration specified above.
  
  Suppose $\alpha$ is an assignment for $\phi$ satisfying at least $c$
  clauses. First, we include in the activation set the variable vertices set to
  true by $\alpha$. Then in each satisfied clause component, at least one
  variable vertex is in the activation set, so four of the seven clause
  vertices can be turned off. If several variables of a clause are true, the
  number of off clause vertices is unaffected by the selection of those
  ``true'' variable vertices, and we can always turn all variable vertices off
  by also selecting the vertex $x$. Selecting a variable vertex also turns off
  its corresponding truth setting vertices. For each unsatisfied clause, we
  select $x$, which turns off all variable vertices. In conclusion, each
  satisfied clause contributes 15 off vertices and each unsatisfied clause
  contributes 11, and all truth setting vertices can always be turned
  off. Therefore, at least $f=15c + 11(m-c) + 60m$ vertices are off.

  Now suppose at least $f$ vertices of $G$ can be turned off. For each clause
  in which some variable vertices are selected, at most 15 vertices can be
  turned off: multiple selections of variable vertices do not change the number
  of off clause vertices, and selections of clause or extra vertices cannot
  increase the number of off vertices. For each clause in which no variable
  vertex is selected, 11 vertices can be turned off by selecting $x$, which
  turns off all variable vertices. We assign ``true'' to the variables
  corresponding to the selected variable vertices. This assignment is always
  consistent, since we cannot have both $u$ and its negation $\bar{u}$
  selected: although this would gain 4 off vertices in the clause component
  containing $\bar{u}$, it would turn on 5 vertices in the truth setting
  component of $u$ and $\bar{u}$. In conclusion, the assignment satisfies at
  least $c$ clauses of $\phi$.
\end{proof}

\begin{figure}[t]
  \centering
    \includegraphics[scale=0.8]{mos}
  \caption{Reduction from MAX-3SAT to MOS; connections from variable vertices
    to clause vertices are not shown}
  \label{fig:mos}
\end{figure}

\paragraph{Hard Approximation of MOS}
From the discussion in Section \ref{sec:allontoalloff}, we know that for any
graph there is an activation set that changes the state of every vertex. Thus,
given a graph $G$ with some initial configuration, if more than half of the
vertices are on, we can apply this activation set for the all-on configuration,
after which more than half of the vertices are off. If the initial
configuration of $G$ already has at least half of the vertices off, we do
nothing. Observe this is an algorithm with performance ratio
$2-\floor{\frac{2}{n}}$. Now we state a corollary of Theorem \ref{thm:mos}.

\begin{corollary}
  There exists a fixed $\varepsilon > 0$ such that no polynomial time algorithm
  for Maximizing Off Switches can have a performance ratio less than
  $1+\varepsilon$ unless $\P=\NP$.
\end{corollary}

\begin{proof}
  Observe that the reduction used in the proof of Theorem \ref{thm:mos} is a
  \emph{gap-preserving reduction}. A \emph{gap} problem with parameters
  $(c,\rho)$ asks to decide, for an instance of the problem, whether the
  optimal value satisfies $OPT \geq c$ or $OPT \leq \rho$. For example, the gap
  version of MAX-3SAT is to decide whether a 3-CNF formula is satisfiable
  (\ie, $c=1$) or at most a $\delta$ fraction of its clauses can be satisfied
  (\ie, $\rho=\delta$). A gap-preserving reduction is a reduction from one gap
  problem to another that preserves the gap. To see that our reduction is gap
  preserving, note that for an instance $I$ of MAX-3SAT with $m$ clauses, if
  $I$ is satisfiable then we can turn $75m$ vertices off in the reduction
  graph, since each clause component always has 3 vertices that cannot be
  turned off and the graph obtained from the reduction has $78m$ vertices. If
  at most a $\delta$ fraction of the clauses of $I$ is satisfiable, then a
  little calculation shows we can turn at most $71m+4\delta m$ vertices
  off. Since the gap version of MAX-3SAT is $\NP$-hard, the gap version of MOS
  is also $\NP$-hard. Thus, from \cite{ArL96}, we conclude that it is
  $\NP$-hard to approximate MOS within ratio $1+\varepsilon$ for some constant
  $\varepsilon$.
\end{proof}
\subsubsection{Fixed-Parameter Problem}
We already know that it is hard to maximize the number of \texttt{OFF} vertices
for general graphs. However, there is a polynomial time algorithm to decide
whether a graph can be turned to at least $N-c$ off vertices from any initial
configuration, where $N$ is the number of vertices in the graph and $c$ is a
constant.

\begin{definition}[MOS-$c$] \cite{Gol00}
  Given a graph $G=(V,E)$, decide whether, for every initial configuration, at
  least $N-c$ vertices can be turned off, where $N=\abs{V}$ and $c$ is a
  constant.
\end{definition}

To solve this problem, we use some techniques from coding theory. Given a graph
$G=(V,E)$, as argued in Theorem \ref{thm:null0}, the kernel of the adjacency
matrix $A$ is orthogonal to the range of $A$. Let $\vec{y_1}, \vec{y_2}, \cdots,
\vec{y_k}$ be vectors of length $n=\abs{V}$ forming a basis of the kernel of
$A$. We define a $k\times n$ matrix $H$ as follows:
\begin{equation*}
  H= 
  \begin{bmatrix}
    \vec{y_1} \\
    \vec{y_2} \\
    \vdots \\
    \vec{y_k}
  \end{bmatrix}
\end{equation*}
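In practice, such a kernel basis can be computed by Gaussian elimination over $F_2$. The following is a minimal sketch of our own (the function name and the $K_2$ example are illustrative assumptions, not from the surveyed papers):

```python
def gf2_nullspace(M):
    # return a basis of the kernel of M over GF(2), one vector per free column
    m, n = len(M), len(M[0])
    A = [row[:] for row in M]          # work on a copy
    pivots = {}                        # column -> pivot row
    r = 0
    for c in range(n):
        p = next((i for i in range(r, m) if A[i][c]), None)
        if p is None:
            continue                   # free column
        A[r], A[p] = A[p], A[r]
        for i in range(m):
            if i != r and A[i][c]:
                A[i] = [a ^ b for a, b in zip(A[i], A[r])]
        pivots[c] = r
        r += 1
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [0] * n
        v[f] = 1
        for c, row in pivots.items():
            v[c] = A[row][f]           # back-substitute the free variable
        basis.append(v)
    return basis

# Lights Out on K_2: pressing either vertex toggles both lights
H = gf2_nullspace([[1, 1], [1, 1]])   # rows of H give the k x n matrix above
```

For $K_2$ the kernel is spanned by $(1,1)$, so $H=[1\ 1]$ and $H\vec{x}=\vec{0}$ exactly when the two lights agree, matching the fact that only the all-on and all-off configurations of $K_2$ are solvable.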

As we know, every solvable configuration $\vec{x}$ is in the range of $A$,
which implies $H\vec{x} = \vec{0}$. In the language of coding theory, the
solvable configurations form a \emph{linear code} $\cal{C}$, a subspace of
$F^n_2$. Each \emph{codeword} $c \in \cal{C}$ is an initial configuration that
can be turned to all off by a series of activations of vertices of $G$. The
matrix $H$ is called a \emph{parity check matrix} for the code $\cal{C}$ and
has the following property:
\begin{equation*}
  Hc = 0 \quad \forall c \in \cal{C}
\end{equation*}
For every \emph{word} $w \in F^n_2$, the vector $s=Hw \in F^k_2$ is called the
\emph{syndrome} of $w$.

\begin{proposition}
  The configurations that can be obtained from an initial configuration $w$
  form the coset $w+\cal{C}$ of $\cal{C}$.
\end{proposition}

\begin{proof}
  The code $\cal{C}$ is the range of the adjacency matrix $A$, so for every
  codeword $c \in \cal{C}$ there is a vector $\vec{x}$ representing an
  activation set that turns off all vertices from initial configuration $c$:
  \[A\vec{x} = c\] For every word $u$ in the coset $w+\cal{C}$, there exists a
  codeword $c \in \cal{C}$ such that $u = w + c = w + A\vec{x}$. Therefore, the
  configuration $u$ can be obtained from the initial configuration $w$ by
  applying the activation set $\vec{x}$.
\end{proof}

Observe that every word $v$ in the coset $w+\cal{C}$ has the same syndrome:
there exists a codeword $c$ such that $v = w+c$, so $Hv=H(w+c)=Hw=s$. The
\emph{weight} of a vector in $F^n_2$ is the number of 1's in it. The
\emph{weight} of a coset is defined as the minimum weight of any vector in the
coset. We now claim the following proposition.

\begin{proposition}
  The weight of a coset is equal to the smallest number of columns of the
  parity check matrix $H$ whose sum is the syndrome $s$ of the coset.
\end{proposition}

\begin{proof}
  Recall that we are working over $F_2$, so every word is a binary
  vector. Therefore, the syndrome $s=Hw$ is simply the sum of the columns of
  $H$ indexed by the positions of the 1's in $w$; in particular, a word of
  weight $t$ with syndrome $s$ corresponds to $t$ columns of $H$ summing to
  $s$, and vice versa. Since the words with syndrome $s$ are exactly the words
  of the coset $w+\cal{C}$, the smallest number of columns of $H$ summing to
  $s$ equals the weight of the coset.
\end{proof}

Now let us return to the MOS-$c$ problem. We want every initial configuration
to be turnable to at least $N-c$ off vertices. The configurations reachable
from $w$ form the coset $w+\cal{C}$, so this happens exactly when every coset
contains a vector with at most $c$ 1's, \ie, when every coset has weight at
most $c$. Since every coset has a corresponding syndrome $s$, it suffices to
check each syndrome: if every syndrome $s \in F^k_2$ can be written as the sum
of at most $c$ columns of $H$, then every initial configuration can be turned
to at least $N-c$ off vertices.
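The syndrome check just described can be written as a brute-force sketch: enumerate all sums of at most $c$ columns of the matrix $H$ and test whether they cover $F^k_2$. The names and the $K_2$ example below are our own, and this direct enumeration is only efficient when $k$ is small; it illustrates the criterion rather than the full algorithm of \cite{Gol00}.

```python
from itertools import combinations

def mos_c(H, c):
    # MOS-c criterion: can every syndrome in F_2^k be written as a sum
    # (over GF(2)) of at most c columns of the matrix H?
    k, n = len(H), len(H[0])
    cols = [tuple(H[i][j] for i in range(k)) for j in range(n)]
    reachable = {(0,) * k}             # the empty sum gives the zero syndrome
    for size in range(1, c + 1):
        for subset in combinations(range(n), size):
            s = (0,) * k
            for j in subset:
                s = tuple(a ^ b for a, b in zip(s, cols[j]))
            reachable.add(s)
    return len(reachable) == 2 ** k
```

For the $K_2$ example, $H = [1\ 1]$: every syndrome in $F^1_2$ is a sum of at most one column, so `mos_c([[1, 1]], 1)` returns `True`, meaning at least $N-1$ lights can always be turned off.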

\subsubsection{Lower Bounds for Tree}
\label{sss:lbtree}
Observe that for any configuration of a graph $G$ with $n$ vertices, we can
always make at least $\ceil{\frac{n}{2}}$ vertices off. To achieve this, first
check whether more than half of the vertices are on in the initial
configuration. If so, apply the activation set that turns off all vertices from
the all-on configuration. This activation set changes the state of every
vertex, so if more than half of the vertices were on initially, more than half
are off afterwards. Otherwise, at most $\floor{\frac{n}{2}}$ vertices are on,
and we are done by doing nothing.
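The flip-everything argument above can be made concrete. A minimal sketch, where all names are our own and `M` is the closed-neighborhood matrix of the path $P_3$ (pressing a vertex toggles itself and its neighbors):

```python
def gf2_solve(M, b):
    # Gaussian elimination over GF(2); returns one solution x of M x = b (or None)
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, n) if A[i][c]), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        for i in range(n):
            if i != r and A[i][c]:
                A[i] = [u ^ v for u, v in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    if any(row[n] for row in A[r:]):
        return None
    x = [0] * n
    for i, c in enumerate(pivots):
        x[c] = A[i][n]
    return x

def half_off(M, config):
    # if more than half the lights are on, apply the all-on activation set
    # (which flips every light); otherwise do nothing
    n = len(M)
    if sum(config) <= n // 2:
        return config
    x = gf2_solve(M, [1] * n)  # always solvable: all-on to all-off is realizable
    out = config[:]
    for v in range(n):
        if x[v]:
            for u in range(n):
                out[u] ^= M[v][u]
    return out

# Lights Out matrix (closed neighborhoods) of the path P3
M = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]
```

For example, `half_off(M, [1, 1, 0])` presses the middle vertex (the all-on solution for $P_3$), after which only one of the three lights remains on.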

Although we have this bound for general graphs, we seek tighter bounds for
special graph classes. In a recent paper\cite{WaW07} by Wang and Wu, they show
a tight bound for trees. Roughly speaking, given a tree $T$ with any initial
configuration, there is always an activation set leaving at most
$\floor{\frac{l}{2}}$ vertices on, where $l$ is the number of leaves. First, we
introduce some notation to simplify the following argument. Denote the set
$\set{1,2, \ldots, n}$ by $[n]$. For a matrix $M$ of size $m\times n$, if $I$
and $J$ are subsets of $[m]$ and $[n]$, we use $M(I,J)$ to denote the
submatrix indexed by the row set $I$ and column set $J$. The \emph{range} of an
$m\times n$ matrix $M$ is $R(M) \eqdef \cset{\vec{y}}{\vec{y}=\vec{x}M,
  \vec{x}\in F^m_2}$, so $R(M)\subseteq F^n_2$. Note that for a symmetric
matrix, $\vec{x}M = M\vec{x}$ (identifying row and column vectors). Define the
function $wt : F^n_2 \to \set{0,1,\ldots,n}$ mapping a vector to its weight,
the number of its nonzero entries. The \emph{covering radius} of a matrix $M$
is
\begin{equation*}
  \rho(M) \eqdef \max_{\vec{x} \in F^n_2}\min_{\vec{y}\in R(M)}wt(\vec{x}-\vec{y})
\end{equation*}
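For small matrices, $\rho(M)$ can be computed directly from this definition by enumerating $R(M)$ and all words. This is an illustrative brute force of our own (exponential in $m$ and $n$; the function name is an assumption):

```python
from itertools import product

def covering_radius(M):
    # rho(M): the largest Hamming distance from any word in F_2^n to R(M)
    m, n = len(M), len(M[0])
    rng = {tuple(sum(x[i] * M[i][j] for i in range(m)) % 2 for j in range(n))
           for x in product((0, 1), repeat=m)}  # R(M) = { xM : x in F_2^m }
    wt = sum  # weight of a 0/1 tuple = number of 1's
    return max(min(wt(tuple(a ^ b for a, b in zip(w, y))) for y in rng)
               for w in product((0, 1), repeat=n))
```

For instance, $M = [1\ 1]$ has range $\{00, 11\}$, so every word is within distance 1 of the range and $\rho(M)=1$; a matrix of full rank over $F_2$ has $\rho(M)=0$.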
To obtain the result of Wang and Wu, let us first prove the following lemma
from \cite{WaW07}.

\begin{lemma} \label{lem:fullcolr} 
  Let $A$ be an $m\times n$ matrix over $F_2$. Let $I$ and $J$ be disjoint
  subsets of $[n]$ such that $A([m], I\union\set{j})$ has full column rank for
  each $j\in J$. Then for any $\vec{c} \in F^n_2$, there exists $\vec{y}$ in the
  range of $A$ for which $wt(\vec{c} - \vec{y}) \leq n - \abs{I} -
  \ceil{\frac{\abs{J}}{2}}$. In particular, $\rho(A) \leq n - \abs{I} -
  \ceil{\frac{\abs{J}}{2}}$.
\end{lemma}
 
\begin{proof}
  Let $r = \abs{I}$ and $s= \abs{J}$. Observe that, after permuting columns so
  that those indexed by $I$ come first, followed by those indexed by $J$, the
  matrix $A$ is row equivalent to
  \begin{equation*}
    B=
    \begin{bmatrix}
      I_r & E & F \\
      0 & D & G
    \end{bmatrix}
  \end{equation*}
  where $D$ is an $(m-r)\times s$ matrix and all of its columns are nonzero.
  Indeed, if some column of $D$ were zero, the corresponding column $j \in J$
  of $B$ would lie in the span of the first $r$ columns, contradicting the
  assumption that $A([m], I\union\set{j})$ has full column rank.
  
  For any $\vec{c_s} \in F^s_2$, we claim there exists $\vec{y} \in F^{m-r}_2$
  such that $wt(\vec{c_s}-\vec{y}D) \leq \floor{\frac{s}{2}}$. To prove this,
  observe that $D'$, the row-reduced echelon form of $D$, has column indices
  $0 = I_0 < I_1 < \cdots < I_t = s$ such that $D'_{i, j}=1$ for $I_{i-1} < j
  \leq I_i$, and $D'_{k,j} = 0$ for $k > i$ and all $I_{i-1} < j \leq I_i$. It
  looks like

  \[D'=\bordermatrix{
    & 1 & 2 & \cdots & I_1 & I_1 + 1 & \cdots & I_2 & \cdots & I_t \cr
    & 1 & 1 & \cdots & 1 & * & * & * & * & *\cr
    & 0 & 0 & 0 & 0 & 1 & \cdots & 1 & * & * \cr
    & 0 & 0 & 0 & 0 & 0 & 0 & * & * & *\cr
    & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 1 & 1 \cr
    & * & * & * & * & * & * & * & * & * 
  }\]

  We prove the claim by induction on $t$. When $t=0$, it is trivially
  true. When $t>0$, by the induction hypothesis we have a linear combination of
  the rows \[D'(\set{1}, [s]), D'(\set{2}, [s]), \cdots, D'(\set{t-1}, [s])\]
  such that $wt(\vec{c_{I_{t-1}}}-\vec{y_{I_{t-1}}}D) \leq
  \floor{\frac{I_{t-1}}{2}}$ for all $\vec{c_{I_{t-1}}} \in
  F_2^{I_{t-1}}$. Adding row $D'(\set{t}, [s])$ to the linear combination does
  not affect the first $I_{t-1}$ columns. If the difference restricted to the
  columns indexed by $\set{I_{t-1}+1, \ldots, I_t}$ has more 1's than 0's, we
  add row $D'(\set{t}, [s])$, which flips all of these positions; otherwise, we
  are done. In either case at most $\floor{\frac{I_t - I_{t-1}}{2}}$ of these
  columns remain 1, so the total weight is at most $\floor{\frac{I_{t-1}}{2}} +
  \floor{\frac{I_t - I_{t-1}}{2}} \leq \floor{\frac{I_t}{2}}$.
  

  Now for any $\vec{c} = (\vec{c_r}, \vec{c_s}, \vec{c_{n-r-s}})$, take the
  vector $\vec{x} = (\vec{c_r}, \vec{y})$, where $\vec{y}$ is given by the
  claim applied to $\vec{c_s} - \vec{c_r}E$. Then
  \begin{eqnarray*}
    \vec{x}B &=& (\vec{c_r}, \vec{y})
    \begin{bmatrix}
      I_r & E & F \\
      0 & D & G
    \end{bmatrix} \\
    &=& (\vec{c_r}, \vec{c_r}E + \vec{y}D, \vec{c_r}F + \vec{y}G)
  \end{eqnarray*}
  We can now bound $wt(\vec{x}B-\vec{c})$:
  \begin{eqnarray*}
    wt(\vec{x}B-\vec{c}) &=& wt((\vec{c_r}, \vec{c_r}E + \vec{y}D,
    \vec{c_r}F + \vec{y}G) - (\vec{c_r}, \vec{c_s}, \vec{c_{n-r-s}})) \\
    &=& wt(\vec{0},  \vec{c_r}E + \vec{y}D - \vec{c_s}, 
    \vec{c_r}F + \vec{y}G- \vec{c_{n-r-s}})\\
    &\leq & 0 + \floor{\frac{s}{2}} + (n - r - s) \\
    &=& n - r - \ceil{\frac{s}{2}}
  \end{eqnarray*}
\end{proof}

Now we are able to prove the main theorem from Wang and Wu\cite{WaW07}.
\begin{theorem}
  Let $T$ be a tree with $l$ leaves. For any initial configuration of the tree
  $T$, there is an activation set leaving at most $\floor{\frac{l}{2}}$
  vertices on, and these vertices are leaves.
\end{theorem}

\begin{proof}
  Let $A$ be the adjacency matrix of the tree $T$ with $n$ vertices. In order
  to prove the theorem, we need to verify that the adjacency matrix has the
  property required by Lemma \ref{lem:fullcolr}. Let $I$ be the set of column
  indices corresponding to non-leaf vertices and $J$ be the set of column
  indices corresponding to leaf vertices. Then $I\intersect J = \emptyset$ and
  both $I$ and $J$ are subsets of $[n]$. Let $r$ be the size of $I$. To see
  that the submatrix $A([n], I\union \set{j})$ has full column rank for all
  $j\in J$, note that row and column permutations do not change the rank (\ie,
  the column rank) of a matrix. So we permute the rows and columns of $A([n],
  I\union \set{j})$ such that the new matrix
  \[\bordermatrix{
      & v_1 & v_2 & \cdots & v_{r+1} \cr
    u_1 & 1 & * & \cdots & * \cr
    u_2 & 0 & 1 &\cdots & * \cr
    \vdots & \vdots & \vdots & \ddots & \vdots \cr
    u_{r+1} & 0 & 0 & \cdots & 1 \cr
   \vdots & * & * & * & * \cr
   u_n & * & * & * & * 
  }\]
  has the property that $v_i$ is either the parent of or at the same depth as
  vertex $v_{i+1}$ for $i\in [r]$, and vertex $u_i$ is always a child of
  vertex $v_i$, except that $u_{r+1}$ is the same vertex as $v_{r+1}$. This
  permutation makes the first $r+1$ rows of the new matrix upper triangular,
  since for any $i>j$, $(u_i, v_j)$ cannot be an edge of a tree. Therefore,
  every submatrix $A([n], I\union \set{j})$ has full column rank for $j\in
  J$. Applying Lemma \ref{lem:fullcolr}, we get
  \begin{eqnarray*}
    \rho(A) & \leq & n - \abs{I} - \ceil{\frac{\abs{J}}{2}} \\
    &=& n - r  - \ceil{\frac{l}{2}} \\
    &=& \floor{\frac{l}{2}}
  \end{eqnarray*}
  Hence, for any initial configuration $\vec{c}$, there is always an activation
  set $\vec{x}$ after which at most $\floor{\frac{l}{2}}$ vertices are on.
\end{proof}
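As a tiny sanity check of the theorem (our own example, not from \cite{WaW07}), one can brute-force the star $K_{1,3}$, a tree with $l = 3$ leaves, and confirm that from every initial configuration some activation set leaves at most $\floor{3/2} = 1$ vertices on:

```python
from itertools import product

# closed-neighborhood matrix of the star K_{1,3}: center 0, leaves 1, 2, 3
n, edges = 4, [(0, 1), (0, 2), (0, 3)]
M = [[int(i == j) for j in range(n)] for i in range(n)]
for u, v in edges:
    M[u][v] = M[v][u] = 1

def apply_set(config, x):
    # apply activation set x: pressing v toggles the closed neighborhood of v
    out = list(config)
    for v in range(n):
        if x[v]:
            for u in range(n):
                out[u] ^= M[v][u]
    return out

# worst case, over initial configurations, of the best achievable on-count
worst = max(min(sum(apply_set(c, x)) for x in product((0, 1), repeat=n))
            for c in product((0, 1), repeat=n))
```

Here `worst` evaluates to 1, matching the bound $\floor{l/2}=1$; the bound is attained because a configuration with an odd number of lights on can never reach all-off on this star.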
\subsection{Historical Review}
Sutner first proved that it is $\NP$-hard to find a minimal activation set that
turns off all vertices starting from all on\cite{Sut88}. Later, several special
classes of graphs were studied. Conlon \etal{} give several results on the size
of the minimal activation set in their paper\cite{CFF99}. Among all graph
classes, trees have been studied the most. The approach described in this paper
is from Amin and Slater\cite{AmS96}. Other methods were studied by Galvin,
Dawes\cite{Daw89}, and Chen \etal\cite{CLW04}. Their methods first enumerate
all feasible solutions for a tree and then use linear algebra techniques to
find the minimum. These methods apply only to trees, while the method of Amin
and Slater can be used on a broader class of graphs. In one paper, Goldwasser
and Klostermeyer\cite{Gol00} ask the question of maximizing off switches from
some initial configuration and prove that it is $\NP$-hard both to find an
optimal solution and to obtain a $(1+\varepsilon)$-approximation. In the same
paper, they also use techniques from coding theory to obtain a polynomial time
algorithm for the fixed-parameter version of MOS, and give a tighter bound for
grid graphs.




% END BODY of document
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Bibligoraphy
% \newpage


\bibliographystyle{abbrv}
\bibliography{thesis}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%



%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Appendices:
%\newpage
%
%\appendix
%\ifnum\hylinks=1 \phantomsection \fi
%\addcontentsline{toc}{section}{Appendices}
%
%\section{Sample Appendix}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
