The next automaton we present is an extension of a bounded
two-dimensional Turing machine: the \emph{two-dimensional sgraffito automaton},
abbreviated as 2SA. This type of automaton was
introduced in 2012 by D. Pr{\r{u}\v{s}}a and F. Mr{\'{a}}z
in~\cite{prusa2012new}.
The idea of the 2SA is based on the visual arts technique named sgraffito, in
which a design is scratched into a multi-layered surface in such a way that the
color of the lower layers becomes visible. A 2SA may visit each cell only a
finite number of times. To enforce this, every symbol of the working alphabet is
assigned a number from $\mathbb{N}$, called its \emph{weight}, and whenever the
automaton rewrites a pixel, the weight of the new symbol must be strictly
smaller than the weight of the old one. Formally, a 2SA can be defined as follows:
\begin{definition} A \emph{deterministic two-dimensional sgraffito automaton}
abbreviated as 2DSA is a 7-tuple $M = (Q, \Sigma, \Gamma, \delta, q_0, F, \mu)$,
where
\begin{compactitem}
	\item $Q$ is a finite non-empty set of states,
	\item $\Sigma$ is a finite set of input symbols,
	\item $\Gamma$ is the working alphabet containing $\Sigma$,
	\item $q_0 \in Q$ is the initial state,
	\item $F \subseteq Q$ is the set of accepting states,
	\item
	$\delta: (Q \setminus F) \times (\Gamma \cup \mathcal{S}) \rightarrow Q \times (\Gamma \cup \mathcal{S}) \times \mathcal{H}$
	is the transition function, where $\mathcal{H} = \{R, L, D, U, N\}$ is a set of
	directions and $\mathcal{S} = \{\vdash, \dashv, \top, \bot, \#\}$ is a set of
	border symbols with $\Gamma \cap \mathcal{S} = \emptyset$,
	\item $\mu: \Gamma \rightarrow \mathbb{N}$ is a weight function.
\end{compactitem}
\end{definition}
$M$ is bounded, that is, whenever it scans a symbol from $\mathcal{S}$, it
immediately moves back to the nearest field of the picture $p \in \Sigma^{*,*}$
without changing the border symbol. For example, if $M$ reads $\vdash$, it has
reached the left border of the input picture, so it has to move one field to
the right and may change its state. An input picture $p$ of size $(m, n)$ is
bordered as follows:
\begin{center}
$\hat{p} =$ 
\begin{tabular}{|ccccc|}
\hline
\#       & $\top$      & \dots    & $\top$    & \#       \tabularnewline
$\vdash$ & $p(1, 1)$   & \dots    & $p(1, n)$ & $\dashv$ \tabularnewline
$\vdots$ & $\vdots$    & $\ddots$ & $\vdots$  & $\vdots$ \tabularnewline
$\vdash$ & $p(m, 1)$   & \dots    & $p(m, n)$ & $\dashv$ \tabularnewline
\#       & $\bot$      & \dots    & $\bot$    & \#       \tabularnewline
\hline
\end{tabular}
\end{center}
Furthermore, $M$ is
\emph{weight-reducing}. That means, for all $z, z' \in Q$, $d \in \mathcal{H}$,
and $a, a' \in \Gamma$, if $\delta(z, a) = (z', a', d)$, then
$\mu(a') < \mu(a)$. $M$ accepts an input picture if its computation reaches an
accepting state.
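The weight-reducing condition can be made concrete with a small check. The sketch below represents $\delta$ and $\mu$ as Python dictionaries; this encoding and the toy instance are our own illustrative choices, not notation from~\cite{prusa2012new}.

```python
# Sketch: verifying the weight-reducing condition mu(a') < mu(a).
# delta maps (state, read symbol) to (new state, written symbol, direction);
# mu maps working-alphabet symbols to their weights. Border symbols are
# never rewritten, so pairs involving them are skipped.

def is_weight_reducing(delta, mu):
    for (q, a), (q2, a2, d) in delta.items():
        if a in mu and a2 in mu and mu[a2] >= mu[a]:
            return False  # some transition does not strictly reduce the weight
    return True

# Toy instance: rewriting 0 or 1 by x1 reduces the weight, as required.
mu = {'0': 3, '1': 2, 'x1': 1}
delta_ok = {('q0', '0'): ('q1', 'x1', 'R'), ('q0', '1'): ('q1', 'x1', 'R')}
delta_bad = {('q0', 'x1'): ('q0', 'x1', 'U')}  # rewrites x1 by itself

print(is_weight_reducing(delta_ok, mu))   # True
print(is_weight_reducing(delta_bad, mu))  # False
```

Since every rewrite strictly decreases a weight bounded below by $0$, each cell can be rewritten at most $|\Gamma|$ times, which is the formal reason behind the finite-visit property.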

Next we give an example of a 2DSA. Let $L_{p1p}$ (Lemma 6.4
in~\cite{prusa2012new}) be the picture language over $\Sigma = \{0, 1, 2\}$
consisting of all pictures of the form $P = Q_1 \hcat C \hcat Q_2$, where
\begin{compactitem}
  \item $Q_1$ and $Q_2$ are non-empty square pictures over $\{0, 1\}$, and
  \item $C$ is a column over $\{2\}$.
\end{compactitem}
It is easy to see that each $p \in L_{p1p}$ has size $(n, 2n + 1)$ for some
$n \geq 1$. Based on this observation we can construct a 2DSA $M$ such that
$L(M) = L_{p1p}$.
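As a sanity check on the language definition, membership in $L_{p1p}$ can be tested directly. The sketch below models a picture as a list of equal-length strings; this encoding is our own choice.

```python
def in_Lp1p(p):
    """Test whether picture p (a list of equal-length strings over
    {'0','1','2'}) lies in L_p1p: two n-by-n squares over {0, 1}
    separated by a single column of 2's."""
    m = len(p)
    if m < 1 or any(len(row) != 2 * m + 1 for row in p):
        return False  # size must be (n, 2n + 1)
    for row in p:
        if row[m] != '2':
            return False  # middle column must contain only 2's
        if any(ch not in '01' for ch in row[:m] + row[m + 1:]):
            return False  # Q1 and Q2 must be over {0, 1}
    return True

print(in_Lp1p(['021']))             # True
print(in_Lp1p(['01210', '10201']))  # True
print(in_Lp1p(['011']))             # False: middle symbol is not 2
```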
\begin{example}
$M = (\{q_0, \ldots, q_{12}, q_e\}, \{0, 1, 2\}, \{0, 1, 2, x_1, \ldots, x_5\}, \delta, q_0, \{q_e\}, \mu)$\\ 
$\mu(0) > \mu(1) > \mu(2) > \mu(x_1) > \mu(x_2) > \mu(x_3) > \mu(x_4) > \mu(x_5) = 0$
\begin{center}
\begin{tabular}{@{}c|@{}c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{}c@{}}
$\delta$ & 0,1                  & 2                 & $\vdash$             & $\top$                & $\dashv$             & $\bot$                & $x_i (i\in\{1,2,3,4\})$  \tabularnewline
\hline
$q_0$    & ($q_1$,$R$,$x_1$)    & -                 & -                    & -                     & -                    & -                     & -                        \tabularnewline
\hline
$q_1$    & ($q_0$,$D$,$x_1$)    & ($q_2$,$D$,$x_1$) & -                    & -                     & -                    & -                     & -                        \tabularnewline
\hline
$q_2$    & -                    & -                 & -                    & -                     & -                    & ($q_3$,$U$,$\bot$)    & -                        \tabularnewline
\hline
$q_3$    & -                    & ($q_3$,$U$,$x_1$) & -                    & ($q_4$,$D$,$\top$)    & -                    & -                     & ($q_3$,$U$,$x_{i+1}$)    \tabularnewline
\hline
$q_4$    & -                    & -                 & -                    & -                     & -                    & -                     & ($q_5$,$R$,$x_{i+1}$)    \tabularnewline
\hline
$q_5$    & ($q_6$,$R$,$x_1$)    & -                 & -                    & -                     & -                    & -                     & -                        \tabularnewline
\hline
$q_6$    & ($q_5$,$D$,$x_1$)    & -                 & -                    & -                     & ($q_7$,$L$,$\dashv$) & -                     & -                        \tabularnewline
\hline
$q_7$    & -                    & -                 & -                    & -                     & -                    & -                     & ($q_8$,$D$,$x_{i+1}$)    \tabularnewline
\hline
$q_8$    & -                    & -                 & -                    & -                     & -                    & ($q_9$,$U$,$\bot$)    & -                        \tabularnewline
\hline
$q_9$    & ($q_9$,$U$,$x_1$)    & -                 & -                    & ($q_{10}$,$D$,$\top$) & -                    & -                     & ($q_9$,$U$,$x_{i+1}$)    \tabularnewline
\hline
$q_{10}$  & -                   & -                 & -                    & -                     & -                    & -                     & ($q_{11}$,$L$,$x_{i+1}$) \tabularnewline
\hline
$q_{11}$ & ($q_{11}$,$D$,$x_1$) & -                 & ($q_e$,$R$,$\vdash$) & -                     & -                    & ($q_{12}$,$U$,$\bot$) & ($q_{11}$,$D$,$x_{i+1}$) \tabularnewline
\hline
$q_{12}$ & -                    & -                 & -                    & -                     & -                    & -                     & ($q_9$,$L$,$x_{i+1}$)    \tabularnewline
\end{tabular}
\end{center}
\end{example}
As stated in the proof of Lemma 6.4 in~\cite{prusa2012new}, each computation of
$M$ consists of two phases. In the first phase $M$ checks whether the input has
size $(n, 2n + 1)$ and whether the middle column contains only 2's. While
reading the middle column, the 2's are scratched off. In the second phase $M$
checks that no further 2 occurs. This implies $L(M) = L_{p1p}$. The two phases
are illustrated below.
\begin{center}
\begin{tabular}{cc}
\includegraphics[width = 6cm]{img/example_sgraffito_automaton1.png} &
\includegraphics[width = 6cm]{img/example_sgraffito_automaton2.png} \tabularnewline 
(a) The first phase. & (b) The second phase. 
\end{tabular}
\end{center}
In detail, $M$ moves along the main diagonal, alternately moving one field to
the right and one field down, until it reads a 2. At this point $M$ assumes it
has reached the middle column, checks that the symbol below is a border symbol,
and changes to state $q_3$. Now $M$ walks up this column until it reaches the
top border symbol. There $M$ moves one cell down and changes to state $q_4$,
which is only used to step one field to the right and change to state $q_5$.
From here the automaton again walks along a diagonal, moving one field right
and one field down, until it reads the right border symbol and changes to state
$q_7$. Now $M$ checks whether it has reached the bottom right corner; if so,
the second phase starts in state $q_9$.

In the second phase the complete input picture is scanned to verify that no
pixel contains the symbol 2. Since $M$ has already scratched off the 2's in the
middle column, it can then accept. That means $M$ rejects the input picture if
it reads a 2 in phase two; the rejection happens simply because no transition
is defined for the symbol 2 in this phase. To accept, $M$ moves up and down
over the picture, column by column, until it reads the left border symbol in
state $q_{11}$ and changes to the accepting state $q_e$.
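The two phases can also be checked by simulating $M$ directly. The sketch below transcribes the transition table into Python; the dictionary encoding, the symbol names such as `top` for $\top$, and the symbol-class lookup are our own choices. Reaching an undefined transition means $M$ halts and rejects.

```python
# Sketch: simulating the example 2DSA M. Cell symbols: '0', '1', '2',
# 'x1'..'x5'; border symbols: 'left', 'right', 'top', 'bot', '#'.
# TABLE maps (state, symbol class) to (new state, direction, written
# symbol); '+1' means "write x_{i+1} when reading x_i", and None leaves
# a border symbol unchanged.

DIRS = {'R': (0, 1), 'L': (0, -1), 'D': (1, 0), 'U': (-1, 0)}
TABLE = {
    ('q0', '0/1'): ('q1', 'R', 'x1'),
    ('q1', '0/1'): ('q0', 'D', 'x1'),   ('q1', '2'): ('q2', 'D', 'x1'),
    ('q2', 'bot'): ('q3', 'U', None),
    ('q3', '2'): ('q3', 'U', 'x1'),     ('q3', 'top'): ('q4', 'D', None),
    ('q3', 'x'): ('q3', 'U', '+1'),
    ('q4', 'x'): ('q5', 'R', '+1'),
    ('q5', '0/1'): ('q6', 'R', 'x1'),
    ('q6', '0/1'): ('q5', 'D', 'x1'),   ('q6', 'right'): ('q7', 'L', None),
    ('q7', 'x'): ('q8', 'D', '+1'),
    ('q8', 'bot'): ('q9', 'U', None),
    ('q9', '0/1'): ('q9', 'U', 'x1'),   ('q9', 'top'): ('q10', 'D', None),
    ('q9', 'x'): ('q9', 'U', '+1'),
    ('q10', 'x'): ('q11', 'L', '+1'),
    ('q11', '0/1'): ('q11', 'D', 'x1'), ('q11', 'left'): ('qe', 'R', None),
    ('q11', 'bot'): ('q12', 'U', None), ('q11', 'x'): ('q11', 'D', '+1'),
    ('q12', 'x'): ('q9', 'L', '+1'),
}

def accepts(picture):
    """Run M on picture (a list of equal-length strings over {'0','1','2'})."""
    n = len(picture[0])
    grid = [['#'] + ['top'] * n + ['#']]
    grid += [['left'] + list(row) + ['right'] for row in picture]
    grid += [['#'] + ['bot'] * n + ['#']]
    q, r, c = 'q0', 1, 1
    while q != 'qe':
        a = grid[r][c]
        cls = '0/1' if a in ('0', '1') else ('x' if a[0] == 'x' else a)
        if (q, cls) not in TABLE:
            return False  # undefined transition: M halts and rejects
        q, d, w = TABLE[(q, cls)]
        if w == '+1':
            w = 'x' + str(int(a[1:]) + 1)  # x_i is scratched to x_{i+1}
        if w is not None:
            grid[r][c] = w
        r, c = r + DIRS[d][0], c + DIRS[d][1]
    return True

print(accepts(['021']))             # True
print(accepts(['01210', '10201']))  # True
print(accepts(['011']))             # False
```

The weight-reducing property guarantees that this loop terminates: each interior cell is rewritten at most four times before it carries $x_5$.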

We now turn to the closure properties of 2SAs. The following two theorems
were proved by D. Pr{\r{u}\v{s}}a and F. Mr{\'{a}}z
in~\cite{prusa2012sgraffito}.
\begin{theorem}
$\familyOf{2NSA}$ and $\familyOf{2DSA}$ are closed under union,
intersection, rotation, and mirroring. Moreover, $\familyOf{2NSA}$ is closed
under projection and concatenation, while $\familyOf{2DSA}$ is closed under
complement.
\end{theorem}
To prove this theorem, combinations of two 2SAs were used, for example a
sequential composition of two automata for the intersection.
\begin{theorem}
$\familyOf{2SA}$ is not closed under complement.
$\familyOf{2DSA}$ is not closed under concatenation and projection.
\end{theorem}
\begin{proof}
To show that $\familyOf{2SA}$ is not closed under complement, it was proved
that the language $L_{dup} = \{p \hcat p \mid p \in \{0, 1\}^{*,*} \text{ and }
l_1(p) = l_2(p) \text{ and } l_1(p) \geq 1\}$ is not in $\familyOf{2SA}$, while
its complement $\bar{L}_{dup}$ is in $\familyOf{2SA}$. A similar proof shows
the non-closure of $\familyOf{2DSA}$ under concatenation: since
$L_{dup} \not\in \familyOf{2SA}$, it follows that
$L_{dup} \not\in \familyOf{2DSA}$, and because $\familyOf{2DSA}$ is closed
under complement, also $\bar{L}_{dup} \not\in \familyOf{2DSA}$. The language
$L_{dup}$ thus yields the non-closure of $\familyOf{2DSA}$ under horizontal
concatenation. The non-closure under vertical concatenation follows with
\begin{align*}
P_1 \vcat P_2 = \left(\left(\left(P_2^R \hcat P_1^R\right)^R\right)^R\right)^R,
\end{align*}
where $P_1$ and $P_2$ are arbitrary pictures and $\cdot^R$ denotes rotation by
$90^\circ$ clockwise; the identity carries over to picture languages.
With $\pi(L) = \bar{L}_{dup}$ and $L \in \familyOf{2DSA}$, the non-closure of
$\familyOf{2DSA}$ under projection follows, where
\begin{align*}
&L = L_1 \cup L_2, \quad \Sigma_1 = \{0, 1\}, \quad \Sigma_2 = \{\bar{0}, \bar{1}\}, \quad \Sigma = \Sigma_1 \cup \Sigma_2,\\
&\pi : \Sigma \rightarrow \Sigma_1, \quad \pi(0) = \pi(\bar{0}) = 0, \quad \pi(1) = \pi(\bar{1}) = 1,\\
L_1 = \{&p \in \Sigma^{*,*} \mid p = Q_1 \hcat Q_2 \wedge l_1(Q_1) = l_2(Q_1) \wedge l_1(Q_2) = l_2(Q_2) \wedge{}\\
&\exists i, j\, (1 \leq i \leq l_1(Q_2),\, 1 \leq j \leq l_2(Q_2))\,[Q_2(i, j) \in \Sigma_2 \wedge \pi(Q_2(i, j)) \neq \pi(Q_1(i, j))]\},\\
L_2 = \{&p \in \Sigma^{*,*} \mid l_2(p) \neq 2 \cdot l_1(p)\}.
\end{align*}
\end{proof}
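The rotation identity used in the proof can be checked on concrete pictures. In the sketch below, pictures are lists of equal-length strings and $\cdot^R$ is read as clockwise rotation by $90^\circ$; this reading of the notation is our own assumption.

```python
def rot(p):
    """Rotate a picture (list of equal-length strings) 90 degrees clockwise."""
    return [''.join(row[c] for row in reversed(p)) for c in range(len(p[0]))]

def hcat(p, q):
    """Horizontal concatenation; p and q must have the same number of rows."""
    return [a + b for a, b in zip(p, q)]

def vcat(p, q):
    """Vertical concatenation; p and q must have the same number of columns."""
    return p + q

# Check P1 (.) P2 = (((P2^R (|) P1^R)^R)^R)^R on a small example.
p1, p2 = ['ab', 'cd'], ['ef', 'gh']
lhs = vcat(p1, p2)
rhs = rot(rot(rot(hcat(rot(p2), rot(p1)))))
print(lhs == rhs)  # True
```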
Next, we relate the language families of 2SAs to those of other automata models.
\begin{theorem}
$\familyOf{4FA} \subseteq \familyOf{2DSA}$.
\end{theorem}
For the proof of this theorem, a graph is defined that represents all
configurations of a 4FA $M$ operating on a given input picture $\hat{p}$.
Vertices are triples consisting of the current state and the current position
in $\hat{p}$, and edges represent the possible transitions of $M$ from the
given state on the symbol at the given position. A 4FA accepts an input
picture $p$ iff a vertex with a final state can be reached from the initial
vertex. In the proof of this theorem in~\cite{prusa2012sgraffito}, a
depth-first search procedure on the configuration graph is given that decides
this. Because the number of outgoing edges of each vertex is bounded by a
constant, the given algorithm can also be performed by a 2DSA.
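The reachability test at the heart of this argument can be sketched as a plain depth-first search. The toy 4FA below (which scans the first row and accepts if it consists only of 1's) and the convention that cells outside the picture read `'#'` are our own simplifications; the actual construction in~\cite{prusa2012sgraffito} works on the bordered picture $\hat{p}$.

```python
def reaches_final(picture, delta4, q0, finals):
    """Depth-first search over the configurations (state, row, col) of a
    4FA on a fixed picture. delta4 maps (state, symbol) to a list of
    (state, (drow, dcol)) moves; cells outside the picture read '#'."""
    m, n = len(picture), len(picture[0])
    sym = lambda r, c: picture[r][c] if 0 <= r < m and 0 <= c < n else '#'
    stack, seen = [(q0, 0, 0)], set()
    while stack:
        q, r, c = stack.pop()
        if q in finals:
            return True  # a vertex with a final state is reachable
        if (q, r, c) in seen:
            continue
        seen.add((q, r, c))
        for q2, (dr, dc) in delta4.get((q, sym(r, c)), []):
            stack.append((q2, r + dr, c + dc))
    return False

# Toy 4FA: move right over 1's in the first row, accept at the border.
delta4 = {('s', '1'): [('s', (0, 1))], ('s', '#'): [('f', (0, 0))]}
print(reaches_final(['11', '00'], delta4, 's', {'f'}))  # True
print(reaches_final(['10', '11'], delta4, 's', {'f'}))  # False
```

The `seen` set plays the role of the bounded revisits of a sgraffito automaton: each configuration is expanded at most once, so the search visits every cell only a constant number of times per state.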

In~\cite{prusa2013comparing} the following two theorems were proved. 
\begin{theorem}
$\familyOf{2DMA(1)} \subset \familyOf{2DSA}$
\end{theorem}
\begin{theorem}
$\familyOf{2NMA(1)} \not\subset \familyOf{2NSA}$
\end{theorem}
Further work on 2SAs can be found
in~\cite{prusa2012sgraffito},~\cite{prusa2013comparing} and~\cite{prusa2013new}.
\label{sgraffito}