\documentclass[10pt]{article}
\usepackage{protocol}
\usepackage{minted}

\title{Homework II}
\author{Norbert Tremurici - 11907086}
\begin{document}
\maketitle

\FloatBarrier % Leave the FloatBarriers in place.
\section{Delay Insensitive Codes}

\subsection{(a)}

In \cite{Ver88} the requirements of a delay-insensitive code are listed formally as follows:

\begin{itemize}
    \item the pair $(I, C)$ is a code, where $I$ is the set of track indices and $C$ is a set of subsets of $I$; $|I|$ is the code length and $|C|$ the code size (note: $|C| \leq 2^{|I|}$)
    \item a code word $x \in C$ has size (weight) $w(x)$ (in marble units)
    \item each code word $x \subseteq I$ has a characteristic function mapping $I$ to $\{0, 1\}$
\end{itemize}

Then the delay-insensitivity property is encoded as follows:

$$
\forall x, y \in C : x \subseteq y \implies x = y
$$

This expresses that for any two code words, neither is properly contained in the other.
This fits in nicely with our intuition, as we have already discussed the problem the receiver has in deciding when it has completely received a code word transmitted by the sender.
One obvious example is a one-hot encoding, where one can easily infer that having multiple bits set implies that a transmission is not yet complete.
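This intuition can be made concrete with a small sketch (the subset helper below is our own, not part of \cite{Ver88}):

\begin{minted}{python3}
def contains(x, y):
    # x is a subset of y: every set bit of x is also set in y
    return all(b == '1' for a, b in zip(x, y) if a == '1')

one_hot = ['0001', '0010', '0100', '1000']
# no one-hot word is contained in another, so the code is delay-insensitive
assert not any(contains(x, y)
               for x in one_hot for y in one_hot if x != y)
# plain binary is not: 0101 is contained in 0111
assert contains('0101', '0111')
\end{minted}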

\subsection{(b)}

\begin{align*}
A = \{ & 00011, 00110, 01010, 01101, 10010, 10101, 11001, 11100\} \\ 
B = \{ & 10101, 01000, 00100, 00010 \} \\
C = \{ & 0010001, 0010010, 0010100, 0011000, 0001001, 0001010, 0000101, 0000110, \\
       & 1000001, 1000010, 1000100, 1001000, 0100001, 0100010, 0100100, 0101000 \} \\
D = \{ & 001, 011, 101, 110 \}\\
E = \{ & 110001, 110010, 110100, 111000, 011010, 011100, 100011, 100101, \\
       & 101001, 101010, 101100, 010011, 010101, 010110, 011001, 100110\} \\
F = \{ & 00100, 10001, 01010, 11001, 00011, 11000 \} \\
G = \{ & 1110, 0111, 1011, 1101\} \\
H = \{ & 0001, 0010, 0100, 1000\} \\
I = \{ & 0000 100, 0001 011, 0010 011, 0011 010, 0100 011, 0101 010, 0110 010, 0111 001, \\
       & 1000 011, 1001 010, 1010 010, 1011 001, 1100 010, 1101 001, 1110 001, 1111 000\} \\
J = \{ & 1110000, 0011000, 0001110, 0000011, 1010100, 1000001, 0101010, 0000110 \}
\end{align*}

To check whether our given codes are delay-insensitive, we need to check the delay-insensitivity property we just listed.
We could devise a circuit which does this for us: if we compute $z = x \land y$ for two distinct code words, then the delay-insensitivity property is violated iff $z = x \lor z = y$.

Because it is easier to verify a short script that does this work for us than to manually check all pairs, the following code was written to solve this task:

\begin{minted}{python3}
#!/usr/bin/env python3

def check_code(name, code):
    print(f'code {name} is {code}')
    combine = lambda x, y: ''.join([a if a == b else '0' for a, b in zip(x, y)])
    check_validity = lambda x, y, z: z != x and z != y
    result = True
    for i, x in enumerate(code):
        for j, y in enumerate(code[i + 1:]):
            z = combine(x, y)
            valid = check_validity(x, y, z)
            result = result and valid
            if not valid:
                print(f'z={z} in x={x} or y={y}, not valid')
    print(f'code {name} is{"" if result else " not"} valid')
    print()
\end{minted}

This yields the following output, which already constitutes a solution:

\begin{minted}{text}
code A is ['00011', '00110', '01010', '01101', '10010', '10101', '11001', '11100']
code A is valid

code B is ['10101', '01000', '00100', '00010']
z=00100 in x=10101 or y=00100, not valid
code B is not valid

code C is ['0010001', '0010010', '0010100', '0011000', '0001001', '0001010', '0000101', '0000110', '1000001', '1000010', '1000100', '1001000', '0100001', '0100010', '0100100', '0101000']
code C is valid

code D is ['001', '011', '101', '110']
z=001 in x=001 or y=011, not valid
z=001 in x=001 or y=101, not valid
code D is not valid

code E is ['110001', '110010', '110100', '111000', '011010', '011100', '100011', '100101', '101001', '101010', '101100', '010011', '010101', '010110', '011001', '100110']
code E is valid

code F is ['00100', '10001', '01010', '11001', '00011', '11000']
z=10001 in x=10001 or y=11001, not valid
z=11000 in x=11001 or y=11000, not valid
code F is not valid

code G is ['1110', '0111', '1011', '1101']
code G is valid

code H is ['0001', '0010', '0100', '1000']
code H is valid

code I is ['0000100', '0001011', '0010011', '0011010', '0100011', '0101010', '0110010', '0111001', '1000011', '1001010', '1010010', '1011001', '1100010', '1101001', '1110001', '1111000']
code I is valid

code J is ['1110000', '0011000', '0001110', '0000011', '1010100', '1000001', '0101010', '0000110']
z=0000110 in x=0001110 or y=0000110, not valid
code J is not valid
\end{minted}

As for the code words we could remove to make the code valid again, for codes B and J we remove the only violating word ($B.00100$ and $J.0000110$).
For code D, we can remove the single code word which caused two different violations ($D.001$).
For code F, two different words are contained in the same word: we could either remove both contained words ($F.10001$ and $F.11000$), or, more efficiently, remove the single containing word ($F.11001$).
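As a sanity check, we can re-run the containment test on code F with $F.11001$ removed (the helper below is ours, simplified from the script in (b)):

\begin{minted}{python3}
def contains(x, y):
    # x is a subset of y: every set bit of x is also set in y
    return all(b == '1' for a, b in zip(x, y) if a == '1')

# code F from (b) with the containing word 11001 removed
F_fixed = ['00100', '10001', '01010', '00011', '11000']
violations = [(x, y) for i, x in enumerate(F_fixed)
              for y in F_fixed[i + 1:]
              if contains(x, y) or contains(y, x)]
print(violations)  # []
\end{minted}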

\subsection{(c)}

We can infer a lower bound on the Hamming distance of a delay-insensitive code relatively easily.
The following statement is for codes with words of fixed length $n$ (which we were considering in our examples before), where $d(x, y)$ denotes the Hamming distance:

$$
\forall x, y \in C, x \neq y : d(x \land y, x) > 0 \land d(x \land y, y) > 0 \iff x \not \subset y \land y \not \subset x
$$

That is, if the Hamming distance from each code word to $x \land y$ is greater than 0, then the delay-insensitivity property holds.
The proof of this property is fairly simple.
Assume the opposite for the first direction:

$$
\exists x, y \in C, x \neq y : d(x \land y, x) > 0 \land d(x \land y, y) > 0 \land ((x \subset y) \lor (y \subset x))
$$

The statement $(x \subset y) \lor (y \subset x)$ implies that $(x \land y = x) \lor (x \land y = y)$.
But either of these statements implies that one of the Hamming distances must be 0, as equal operands yield a Hamming distance of 0.
Thus $d(x \land y, x) = 0 \lor d(x \land y, y) = 0$.
This is impossible, because by assumption both Hamming distances are greater than 0.

An identical proof by contradiction can be constructed for the other direction.
If neither is a subset of the other, then the Hamming distances must naturally be greater than 0, so assuming that either Hamming distance is 0 leads to a contradiction.
Thus, we can infer that pairs of delay-insensitive code words have Hamming distances (between themselves and their combination) greater than 0, validating our statement above.

Additionally, we can argue the following between any two code words:

$$
\forall x, y \in C, x \neq y : x \not \subset y \land y \not \subset x \Rightarrow d(x, y) \geq 2
$$

That is, the Hamming distance cannot be less than two.
For a proof, consider the opposite.
Hamming distances cannot be negative, a Hamming distance of 0 implies that $x = y$ (obviously violating our delay-insensitivity property), and a Hamming distance of 1 means that either $x \subset y$ or $y \subset x$.
Only with a Hamming distance of at least 2 can we have two separate bit positions set differently, differentiating the code words.
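We can spot-check this bound on the valid code G from (b) (the helper below is ours):

\begin{minted}{python3}
def hamming(x, y):
    # number of positions at which the two words differ
    return sum(a != b for a, b in zip(x, y))

G = ['1110', '0111', '1011', '1101']
distances = [hamming(x, y) for i, x in enumerate(G) for y in G[i + 1:]]
print(min(distances))  # 2
\end{minted}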

\subsection{(d)}

To create the encoder and decoder circuits, first we need to define a mapping.
We list all states and their meaning, grouped by their semantic significance, by considering the values of the concatenated word $X.t X.f Y.t Y.f$ and their mappings.

First we consider valid words, which have exactly one of $X.t$/$X.f$ and exactly one of $Y.t$/$Y.f$ set, plus the all-zero NULL word:

\begin{itemize}
    \item $X.t\ X.f\ Y.t\ Y.f \to X\ Y \to H$
    \item $0000 \to \text{ZERO} \to 0000$
    \item $0101 \to 00 \to 0001$
    \item $0110 \to 01 \to 0010$
    \item $1001 \to 10 \to 0100$
    \item $1010 \to 11 \to 1000$
\end{itemize}

The following words are invalid/not under consideration, as both true and false rails are set simultaneously:

\begin{itemize}
    \item $X.t\ X.f\ Y.t\ Y.f$
    \item $0011$
    \item $0111$
    \item $1011$
    \item $1100$
    \item $1101$
    \item $1110$
    \item $1111$
\end{itemize}

The following words encode hold states, where the circuit keeps its old value until a new, valid data word has arrived:

\begin{itemize}
    \item $X.t\ X.f\ Y.t\ Y.f$
    \item $1000$
    \item $0100$
    \item $0010$
    \item $0001$
\end{itemize}

Now we have considered all possible $16$ words.

If $H[i]$ denotes bit $i$ of the resulting word (an element of the H code), we can read off the conditions necessary for our one-hot encoding directly and express them like this:

\begin{itemize}
    \item $H[3] = X.t \land \neg X.f \land Y.t \land \neg Y.f$
    \item $H[2] = X.t \land \neg X.f \land \neg Y.t \land Y.f$
    \item $H[1] = \neg X.t \land X.f \land Y.t \land \neg Y.f$
    \item $H[0] = \neg X.t \land X.f \land \neg Y.t \land Y.f$
\end{itemize}
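The conditions can be expressed as a small behavioural model (this sketch is ours and covers only the combinational part; the C-gate completion behaviour of the actual encoder is omitted):

\begin{minted}{python3}
def encode(xt, xf, yt, yf):
    # one condition per one-hot output bit, read off the list above
    h3 = xt and not xf and yt and not yf
    h2 = xt and not xf and not yt and yf
    h1 = not xt and xf and yt and not yf
    h0 = not xt and xf and not yt and yf
    return (int(h3), int(h2), int(h1), int(h0))

print(encode(1, 0, 1, 0))  # dual-rail 11 -> one-hot (1, 0, 0, 0)
print(encode(0, 0, 0, 0))  # NULL word   -> (0, 0, 0, 0)
\end{minted}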

Using this information, we can construct an encoder and decoder circuit as can be seen in Figure~\ref{fig:encdec}.

\begin{figure}[htb]
	\centering
        \includegraphics[width=.6\textwidth]{graphics/encdec.pdf}
	\caption{Dual-rail 2-bit encoder/decoder setup}
	\label{fig:encdec}
\end{figure}

Do note that in order to disallow incomplete NULL transitions (requiring all signals to go low before the next transition can occur), we use C-gates instead of normal AND gates for the encoder.
For the decoder we are assuming that no illegal states, where more than one bit is set, will be reached.

\subsection{(e)}

One simple measure is the efficiency (or lack of wastefulness) of a code, which we can calculate as the number of code words divided by the number of possible combinations of a code with words of a fixed length.
For example, if a code has 4 words but uses 4-bit code words, then only 4 out of all possible 16 combinations are being used.

\begin{itemize}
    \item code A has efficiency $0.25$ with 8 words against 32 possible combinations
    \item code C has efficiency $0.125$ with 16 words against 128 possible combinations
    \item code E has efficiency $0.25$ with 16 words against 64 possible combinations
    \item code G has efficiency $0.25$ with 4 words against 16 possible combinations
    \item code H has efficiency $0.25$ with 4 words against 16 possible combinations
    \item code I has efficiency $0.125$ with 16 words against 128 possible combinations
\end{itemize}
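The numbers above follow directly from the definition (the helper is ours; the code sets are copied from (b)):

\begin{minted}{python3}
def efficiency(code):
    # number of code words over all 2^n combinations of n-bit words
    return len(code) / 2 ** len(code[0])

A = ['00011', '00110', '01010', '01101', '10010', '10101', '11001', '11100']
G = ['1110', '0111', '1011', '1101']
print(efficiency(A), efficiency(G))  # 0.25 0.25
\end{minted}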

Clearly, we can rule out codes C and I.
For the remaining codes, if we wanted more code words, we could choose either code E, A or G/H (with descending number of code words), depending on the number of words needed.
If we wanted to target power consumption, we can target codes with states where few bits are pulled to high.
Another important property is simplicity of the code with respect to comprehension and circuit complexity of the encoder/decoder.

Because we need at least 4 bits of data, it is clear that we need 16 code words to encode all possible states, so we simply choose the most efficient of the codes, code E.

\FloatBarrier % Leave the FloatBarriers in place.
\section{Asynchronous Pipelines}

\subsection{(a)}

Literature: \cite{Sut89}

\begin{figure}[htb]
	\centering
	\begin{tikztimingtable}[timing/e/background/.style={fill=gray},xscale=1.1]
		$req_{in}/x_0$                       & LHHLLLHHHHLLLLLHHHHHHHHHHHHHHHHHHH\\
		$ack_{out}/x_1$                      & LLHHHLLLHHHLLLLLLLLLLHHHHHHHHHHHHH\\
		$x_2$                                & LLLHHHLLLHHHHHHHHHHLLLHHHHHHHHHHHH\\
		$\overline{x_2}$                     & HHHHLLLHHHLLLLLLLLLLHHHLLLLLLLLLLL\\
		$x_3$                                & LLLLHHHLLLLLLLLLLHHHLLLLLHHHHHHHHH\\
		$\overline{x_3}$                     & HHHHHLLLHHHHHHHHHHLLLHHHHHLLLLLLLL\\
		$x_4/req_{out}$                      & LLLLLHHHHHHHHHHLLLHHHHHLLLLHHHHHHH\\
		$\overline{x_4}$                     & HHHHHHLLLLLLLLLLHHHLLLLLHHHHLLLLLL\\
		$ack_{in}/x_5$                       & LLLLLLLLLLLLLHHHLLLLLHHHHLLLHHHHHH\\
                $\overline{ack_{in}}/\overline{x_5}$ & HHHHHHHHHHHHHHLLLHHHHHLLLLHHHLLLLL\\
		\extracode
		\begin{pgfonlayer}{background}
			\begin{scope}[semitransparent ,semithick]
				\vertlines[gray,opacity=0.3]{1.05,2.05,...,34.05}
			\end{scope}
		\end{pgfonlayer}
	\end{tikztimingtable}
	\caption{\label{fig:pipelines_timing} Behavior of the Muller pipeline}
\end{figure}

Figure~\ref{fig:pipelines_timing} shows the behavior of the Muller pipeline with the given input signals.

If the $req_{in}$ signal were to change at that time, then the transition would be considered invalid because the pipeline has not yet acknowledged the rising transition.
The rising transition on $req_{in}$ would appear as a glitch that goes unnoticed by the pipeline.

These results were verified with the help of a python script that simulates the Muller pipeline:

\newpage

\begin{minted}{python3}
#!/usr/bin/env python3

def print_trace(trace):
    print(''.join(['H' if v else 'L' for v in trace]))

def c_gate(a, b, state):
    # Muller C-element: the output switches only when both inputs agree
    # on the new value; otherwise the old state is kept
    return not state if (a == (not state) and b == (not state)) else state

def c_gate2(a, b, state):
    # equivalent majority-gate formulation of the C-element
    return (a and b) or (state and (a or b))

def muller():
    x0 = [False, True, True, False, False, False, True, True, True, True, False, False, False, False, False, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True]
    x5 = [False, False, False, False, False, False, False, False, False, False, False, False, False, True, True, True, False, False, False, False, False, True, True, True, True, False, False, False, True, True, True, True, True, True]

    elements = len(x0)
    x1 = [False] * elements
    x2 = [False] * elements
    x2n = [True] * elements
    x3 = [False] * elements
    x3n = [True] * elements
    x4 = [False] * elements
    x4n = [True] * elements
    x5n = [True] * elements

    for i in range(len(x0) - 1):
        x2n[i + 1] = not x2[i]
        x3n[i + 1] = not x3[i]
        x4n[i + 1] = not x4[i]
        x5n[i + 1] = not x5[i]
        x1[i + 1] = c_gate(x0[i], x2n[i], x1[i])
        x2[i + 1] = c_gate(x1[i], x3n[i], x2[i])
        x3[i + 1] = c_gate(x2[i], x4n[i], x3[i])
        x4[i + 1] = c_gate(x3[i], x5n[i], x4[i])

    for trace in [x0, x1, x2, x2n, x3, x3n, x4, x4n, x5, x5n]:
        print_trace(trace)

if __name__ == '__main__':
    muller()
\end{minted}

\subsection{(b)}

The basic operation principle of the Muller pipeline is that it propagates transition events from stage to stage.
If we refer to a stage as the respective C-gate that stores the state of the stage, then we could say a transition from stage $i - 1$ is propagated to stage $i$ iff stage $i$ had its last transition acknowledged by stage $i + 1$.

An empty pipeline, like the initial states we typically consider for micropipelines, has all C-gates set to the same state (if the state is $0$, then the pipeline is waiting for the first transition to $1$ and vice-versa), i.e. $\forall i : x_{i - 1} = x_i$.
Empty means there was also no request yet, so $req_{in} = ack_{out} = x_1$.
Similarly we have $req_{out} = ack_{in}$.

A full pipeline is one where the transition events are not being consumed by acknowledgement signals at the end of the pipeline, so the state differences keep piling up until the pipeline becomes full.
Thus a full pipeline is characterized by C-gates which are set to alternating states, i.e. $\forall i : x_{i - 1} \not = x_i$.
If we take full to mean that no more transitions can be stored, but additionally that a currently unstorable transition tries to enter the pipeline, then we have $req_{in} \not = ack_{out} = x_1$.
Similarly we have $req_{out} \not = ack_{in}$.

For the pipeline in which only one entry is contained, we need to consider the initial state.
If the first transition was a transition to $1$ and the pipeline was initialized to $0$, then all C-gates change their state from $0$ to $1$ and vice-versa.
What is different, however, is that whereas the empty pipeline has $ack_{in}$ identical to $req_{out}$, now that an event has bubbled through the pipeline and has not yet been consumed, we have $ack_{in} \not = req_{out}$ (until it is consumed).
If no further transition enters the pipeline until the output is acknowledged, then the pipeline will look like an empty pipeline again, only differently initialized (now with all C-gates assuming a value of $1$ if previously initialized with a value of $0$).
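These characterizations can be phrased as a count of state differences between neighbouring C-gates (the helper name and occupancy notion are our own illustration):

\begin{minted}{python3}
def occupancy(stages):
    # number of i with x_{i-1} != x_i, i.e. stored transition events
    return sum(stages[i - 1] != stages[i] for i in range(1, len(stages)))

print(occupancy([0, 0, 0, 0]))  # 0: empty pipeline
print(occupancy([1, 0, 1, 0]))  # 3: full pipeline (all neighbours differ)
print(occupancy([1, 1, 0, 0]))  # 1: exactly one entry
\end{minted}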

\subsection{(c)}

If we assume active-high latches, then they can only capture data when the enable signal is high.
In the case of the first latch: when $ack_{out} = x_{1}$ is high.
That enable signal goes high when $\overline{x_2}$ is high and a request comes in, i.e. when $req_{in} = x_0$ goes high.

We could say latch $i$ (starting at 1) captures when $x_i = 1$ (which we call the capture window).
The opening of that capture window is an event $x_i =\ \uparrow$ characterized by $x_i = 0 \land x_{i - 1} =\ \uparrow \land \bar x_{i + 1} =\ \uparrow$.
The other event is the closure of that capture window, $x_i =\ \downarrow$, which is characterized by $x_i = 1 \land x_{i - 1} =\ \downarrow \land \bar x_{i + 1} =\ \downarrow$.

In order to capture data at latch $i$, the data input needs to stay valid for as long as $x_i$ is high.
If we say this window has length $C$, the time of closure is $t_c$, the latch has an internal delay until capture of $\tau$, and we change the data input at time $t_d$, then we need to uphold the following inequality:

$$
t_d < t_c - \tau
$$

Notice that we could change the data while the latch is disabled because the latch will capture the value when enabled, so the window length plays no role, though we still need to respect the borders of the previous data transmission.
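Assuming the quantities just defined, the constraint is a one-liner (this check is our own illustration, not part of the design):

\begin{minted}{python3}
def data_change_ok(t_d, t_c, tau):
    # the data input must settle at least tau before the window closes
    return t_d < t_c - tau

print(data_change_ok(t_d=3.0, t_c=10.0, tau=2.0))  # True
print(data_change_ok(t_d=9.0, t_c=10.0, tau=2.0))  # False
\end{minted}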

\begin{figure}[htb]
	\centering
	\begin{tikztimingtable}[timing/e/background/.style={fill=gray},xscale=1.1]
		$req_{in}/x_0$                        & LHHLLLHHHHLLLLLHHHHHHHHHHHHHHHHHHH\\
		$ack_{out}/x_1$                       & LLHHHLLLHHHLLLLLLLLLLHHHHHHHHHHHHH\\
                $D_{in}$                              & 10d{A} 2u 10d{B} 2u 44d{C}\\
                $D_1$                                 & 6u 12d{A} 26d{B} 24d{C}\\
		$x_2$                                 & LLLHHHLLLHHHHHHHHHHLLLHHHHHHHHHHHH\\
		$\overline{x_2}$                      & HHHHLLLHHHLLLLLLLLLLHHHLLLLLLLLLLL\\
                $D_2$                                 & 8u 12d{A} 26d{B} 22d{C}\\
		$x_3$                                 & LLLLHHHLLLLLLLLLLHHHLLLLLHHHHHHHHH\\
		$\overline{x_3}$                      & HHHHHLLLHHHHHHHHHHLLLHHHHHLLLLLLLL\\
                $D_3$                                 & 10u 26d{A} 16d{B} 16d{C}\\
		$x_4/req_{out}$                       & LLLLLHHHHHHHHHHLLLHHHHHLLLLHHHHHHH\\
                $\overline{x_4}/\overline{req_{out}}$ & HHHHHHLLLLLLLLLLHHHLLLLLHHHHLLLLLL\\
                $D_{out}$                             & 12u 26d{A} 18d{B} 12d{C}\\
		$ack_{in}/x_5$                        & LLLLLLLLLLLLLHHHLLLLLHHHHLLLHHHHHH\\
                $\overline{ack_{in}}/\overline{x_5}$  & HHHHHHHHHHHHHHLLLHHHHHLLLLHHHLLLLL\\
		\extracode
		\begin{pgfonlayer}{background}
                    \begin{scope}[semitransparent ,semithick]
                            \vertlines[gray,opacity=0.3]{1.05,2.05,...,34.05}
                    \end{scope}
		\end{pgfonlayer}
	\end{tikztimingtable}
	\caption{\label{fig:muller_data_path} Behavior of the Muller pipeline with a data path}
\end{figure}

We have copied and annotated the previous control path of the Muller pipeline in Figure~\ref{fig:muller_data_path}.

As we can see, we have managed to transmit three data words.
If a data word has $d$ bits, then we have managed to transmit $3d$ bits.
If we constantly pass in new data and acknowledge it as soon as possible, we reach a steady state where we transmit $d$ bits every 6 divisions, with a division being the unit of time in our timing diagram.

Similarly to before, if we have an empty pipeline, then the latches all have the same value, so the minimum is $d$ bits.
If we have a full pipeline, then the enable signals alternate, enabling two latches and disabling two latches.
Thus the maximum number of given bits that can be stored in the pipeline at a time is $2d$ bits.

Because only rising transitions of each C-gate cause the latches to become enabled, we need to return to zero before being able to transmit another data word.
This is the characteristic of a 4-phase protocol.

\FloatBarrier % Leave the FloatBarriers in place.
\section{GALS}

\subsection{First Configuration (architecture 1, PC 1)}

If we consider the first configuration (architecture 1, PC 1), the working principle is as follows.
A system is clocked using a PC, where the PC takes pause requests from either of the other systems.
As soon as the last active cycle of the PC completes, the grant will be revoked from the oscillating part of the PC circuit and the request will go through.
As soon as the request goes through, the flip-flop is triggered, capturing the value on the short bus and at the same time the acknowledge signal goes high.
Note that the valid data already has to be on the bus by this time.
The PC has no way to determine which of those systems got the grant if both want to request a transmission, but the bus arbiter does.
So the correct transmission of a system to another system entails that a system requests access to the bus, gets a grant by the bus arbiter, then puts data on the bus and requests the PC of the receiving system to stop.
Once the PC has acknowledged, transmission is complete and the grants can be revoked again.

\begin{figure}[htb]
    \centering
    \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
        $req_{busB}$    & LHHHHHHHHHHLLLLL \\
        $ack_{busB}$    & LLHHHHHHHHHHLLLL \\
        $bus_{data}$    & ZZZ 16d{valid} ZZZZZ \\
        $req_{B2A}$     & LLLLHHHHHHHLLLLL \\
        $ack$           & LLLLLLHHHHHHLLLL \\
        $clk_A$         & HLHLHLLLLLLLLHLH \\
  	\extracode
  	\begin{pgfonlayer}{background}
  		\begin{scope}[semitransparent ,semithick]
                    \vertlines[gray,opacity=0.3]{1.05,2.05,...,16.05}
  		\end{scope}
  	\end{pgfonlayer}
    \end{tikztimingtable}%	
    \caption{\label{fig:a1p1-transmission} Transmission (architecture 1, PC 1) from system B to system A}
\end{figure}

Figure~\ref{fig:a1p1-transmission} shows the timing diagram for a successful transmission from system B to system A.
Note that system B keeps the grant for longer than it might need, stopping system A until it decides to revoke the grant again.
As can be seen, $ack_{busB}$ goes high only after $req_{busB}$ and similarly $ack$ goes high only after $req_{B2A}$.
Once the pause clock request has been acknowledged, we have captured the value and so system B is free to initiate the deassertion part of the protocol.
In our example, B deasserts $req_{busB}$, $bus_{data}$ and $req_{B2A}$ at the same time, after which we get the low transitions of the acknowledge signals.

We could have potential metastability problems if we don't take care to put data on the bus at the right time.
For this reason, we should put valid data on the bus and only then request the PC to stop, then we would need no synchronizers.
Other than that, the synchronous islands receive asynchronous signals for the bus system ($req_{bus}$ and $ack_{bus}$) as well as for the pausible clock system ($req_{X2Y}$ and $ack$).
If they aren't synchronized internally, then we would need to make sure that these signals are properly synchronized.

For an architecture with $n$ systems, we would need $n$ pausible clocks, each getting a $req$ signal from $n - 1$ other systems.
Every PC would need an $ack$ signal that is combined into one $ack$ signal at the end, making for $n + 1$ signals.
We would need $n$ buffers for the shared bus and $n$ $req$ and $ack$ signals for the bus arbiter.

In total we get $req_{PC} + ack_{PC} + req_{bus} + ack_{bus} = n \cdot (n - 1) + (n + 1) + n + n = n^2 + 2n + 1$ request/acknowledge wires for the asynchronous interconnect.
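Evaluating the count for small $n$ confirms the closed form $(n + 1)^2$ (the function below is our own tally of the signals listed above):

\begin{minted}{python3}
def wires_a1p1(n):
    req_pc = n * (n - 1)  # each PC takes requests from the n - 1 others
    ack_pc = n + 1        # n PC acks plus the combined ack signal
    req_bus = n           # one bus request per system
    ack_bus = n           # one bus acknowledge per system
    return req_pc + ack_pc + req_bus + ack_bus

print([wires_a1p1(n) for n in (2, 3, 4)])  # [9, 16, 25]
\end{minted}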

This configuration has two potential deadlock problems.
The bus is in a ``deadlock'' if a (malicious or faulty) system fails to revoke its grant to the bus, thus preventing further transmissions.
Each system can be in a ``deadlock'' if another (malicious or faulty) system requests a transmission, pauses its respective clock and does not revoke its grant to the PC to resume further operation of the system.

\subsection{Second Configuration (architecture 1, PC 2)}

The operating principle for the second configuration (architecture 1, PC 2) is very similar, but the PC operates internally on a 2-phase protocol.
In this case, when a request enters the PC, initially the request signal $req_2$ will be identical to the acknowledgement signal $ack_2$.
The request will then get converted by the 4-phase converter to a 2-phase request, so $req_2$ will become different from $ack_2$.
As soon as any currently active clock cycle is complete, the mutex will grant the request, which will yield a transition for the toggle FF and buffer FF.
The buffer FF will capture the value currently on the bus, safely sampling the data; after a delay of $\Delta$, $ack_2$ becomes equal to $req_2$ again, the request to the mutex is revoked, and the oscillating part of the circuit can continue its operation.
Previously it was important whether the requesting system keeps the request signal to the PC high, but now every positive transition will initiate a capture protocol that will keep the clock paused for a fixed length of time until operation of the oscillating part of the circuit is resumed.

\begin{figure}[htb]
    \centering
    \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
        $req_{busB}$    & LHHHHHHHHHHHHHHLLLLL \\
        $ack_{busB}$    & LLHHHHHHHHHHHHHHLLLL \\
        $bus_{data}$    & ZZZ 24d{valid} ZZZZZ \\
        $req_{B2A}$     & LLLLHHHHHHHHHHHLLLLL \\
        $req_2$         & LLLLLHHHHHHHHHHHHHHH \\
        $ack$           & LLLLLLLLLLHHHHHHLLLL \\
        $ack_2$         & LLLLLLLLLHHHHHHHHHHH \\
        $clk_A$         & HLHLHLLLLLHLHLHLHLHL \\
  	\extracode
  	\begin{pgfonlayer}{background}
  		\begin{scope}[semitransparent ,semithick]
                    \vertlines[gray,opacity=0.3]{1.05,2.05,...,20.05}
  		\end{scope}
  	\end{pgfonlayer}
    \end{tikztimingtable}%	
    \caption{\label{fig:a1p2-transmission} Transmission (architecture 1, PC 2) from system B to system A}
\end{figure}

Figure~\ref{fig:a1p2-transmission} shows the timing diagram for a successful transmission from system B to system A.
Do note that in this example system B also keeps the grant for longer than it might need, but after a certain capture delay the acknowledge signal goes low by itself, resuming system A before system B revokes the grant.
This can be seen by looking at the new 2-phase signals $req_2$ and $ack_2$.
Another indicator is that $clk_A$ has restarted before $req_{B2A}$ went low.
How the $ack$ signal is generated is a consequence of the phase converter; it would be logical for $ack$ to go low only after $req_{B2A}$ has transitioned to low.

By the same line of reasoning as before, there is no potential metastability issue for the buffer FF if we respect the aforementioned protocol of getting bus access first before sending a request to the PC.
Also as before, the synchronous systems need to make sure that none of the incoming/outgoing request or acknowledge signals go unsynchronized.

We get the same result, except for the added 2-phase signals, $req_{PC} + ack_{PC} + req_{bus} + ack_{bus} + req_2 + ack_2 = n \cdot (n - 1) + (n + 1) + n + n + n + n = n^2 + 4n + 1$ request/acknowledge wires for the asynchronous interconnect.
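The same tally with the two extra 2-phase signals per PC (again our own check of the formula):

\begin{minted}{python3}
def wires_a1p2(n):
    # first configuration's count plus one req_2 and one ack_2 per PC
    return n * (n - 1) + (n + 1) + n + n + n + n

print([wires_a1p2(n) for n in (2, 3, 4)])  # [13, 22, 33]
\end{minted}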

Similar to before, in this system the bus can be in a ``deadlock'' if a system fails to revoke its grant to the bus, thus preventing further transmissions.
But this time, failure to remove the grant to the PC does not cause the recipient system to stop, as the respective part of the PC circuit resumes itself and only stops again for the next event.

\subsection{Third Configuration (architecture 2, PC 1)}

In this configuration, our islands are locally synchronous but globally asynchronous, as this configuration basically uses the asynchronous wrappers of the paper with a similar title.
Again, we have a decoupled bus part of the system and PC part of the system, where a system needs to acquire both grants to the bus and PC for a valid transmission.
We know that $(pen, ta)$ are used for a 2-phase handshake protocol for each island, whereas all other handshakes are 4-phase.

When a system receives or transmits data, its port is enabled and the respective protocol is performed.
Considering a transmission of system B to system A, we similarly have that first access to the bus is requested, then access to the PC of system A.
It's the job of the decoder to use the address on the bus and the outgoing requests to generate the acknowledgement signals to the transmitter and the request signals to the receiver.
Note that knowing the address of the receiving system is enough, even to route the acknowledgement signal back, as the decoder can simply mask all other acknowledgements.
The $req_{pi}$, $ack_{pi}$, $req_{po}$ and $ack_{po}$ signals are used to determine when it is valid to deassert the request of an input or output.
The rest of the operation is similar to before: a system writes to the bus once it has exclusive access in the form of a grant, then it tries to stop the PC of the receiving system in order to change its input data.

For a complete picture of a transmission, we need to consider quite a lot more signals this time around.

\begin{figure}[htb]
    \centering
    \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
        $pen_{B}$       & LHHHHHHHHHHHHHHHHHHHHHHHHHHH \\
        $ta_{B}$        & LLLLLLLLLLLLLLLLLLLLLLLLLHHH \\
        $clk_B$         & LHLHLHLHLHLHLHLHLLLLLLLLLHLH \\
        $req_{busB}$    & LLHHHHHHHHHHHHHHHHHHHLLLLLLL \\
        $ack_{busB}$    & LLLHHHHHHHHHHHHHHHHHHHLLLLLL \\
        $bus_{data}$    & ZZZZ 34d{valid} ZZZZZZZ \\
        $bus_{address}$ & ZZZZ 34d{A} ZZZZZZZ \\
        $req_{outB}$    & LLLLLHHHHHHHHHHHHHHLLLLLLLLL \\
        $ack_{outB}$    & LLLLLLLLLLLLLLLLHHHHHHLLLLLL \\
        $req_{poB}$     & LLLLLLLLLLLLLLLLLHHHHHHLLLLL \\
        $ack_{poB}$     & LLLLLLLLLLLLLLLLLLHHHHHHLLLL \\
        $pen_{A}$       & LLLLLLLLLLHHHHHHHHHHHHHHHHHH \\
        $ta_{A}$        & LLLLLLLLLLLLLLLLLLLHHHHHHHHH \\
        $req_{inA}$     & LLLLLLLLLLLHHHHHHHHHLLLLLLLL \\
        $ack_{inA}$     & LLLLLLLLLLLLLLLHHHHHHLLLLLLL \\
        $req_{piA}$     & LLLLLLLLLLLLHHHHHHHHHLLLLLLL \\
        $ack_{piA}$     & LLLLLLLLLLLLLLHHHHHHHHLLLLLL \\
        $clk_A$         & HLHLHLHLHLHLHLLLLLLLLLLHLHLH \\
  	\extracode
  	\begin{pgfonlayer}{background}
  		\begin{scope}[semitransparent ,semithick]
                    \vertlines[gray,opacity=0.3]{1.05,2.05,...,28.05}
  		\end{scope}
  	\end{pgfonlayer}
    \end{tikztimingtable}%	
    \caption{\label{fig:a2p1-transmission} Transmission (architecture 2, PC 1) from system B to system A}
\end{figure}

Figure~\ref{fig:a2p1-transmission} shows the timing diagram for a successful transmission from system B to system A.
The exchange is basically a combination of both the data input and data output transmission protocols as they were given in the timing diagram.

In this example, system A becomes ready to receive data only some (unpredictable) time after system B has already reserved the bus for a transmission to system A.
Once the wrapper of system A receives the request, its $req_{inA}$ line goes up and the data input protocol is executed.
Once the currently active clock cycle finishes, the oscillating part of the PC is stopped and the data is received.
After successful transmission, the deassertion part of the data input protocol is initiated.
Finally, $ack_{inA}$ signals to the outside world that the reception process is finished, at which point $ack_{outB}$ goes high and the wrapper of system B executes its deassertion part of the data output protocol.
Also, as $ack_{piA}$ goes low again, the grant to the PC is revoked and the oscillating part of the circuit resumes its operation.
As $ack_{outB}$ goes low again, the other grant to the bus can be revoked and the bus is freed for further transmissions again.

Note that, additionally, the clock of B, the transmitting system, needs to be paused in this architecture (if nothing else is done), because the asynchronous wrapper is effectively decoupled from the synchronous part inside.
The transmitting system needs some way to infer that a transmission has completed: either it synchronizes the $ta$ signal (which incurs an unfortunate performance penalty), or it simply pauses the clock of its own synchronous part.
To stop the clock of the transmitting system, $req_{po}$ of the wrapper can try to acquire a PC grant from its own system.
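As a quick cross-check of this pausing behaviour, the H/L traces of Figure~\ref{fig:a2p1-transmission} can be checked mechanically. The following is an informal sketch (the trace strings are copied from the diagram source), verifying that the wrapper of B requests its own PC grant only after the output handshake is acknowledged, and that $clk_B$ stays flat while the grant is held:

```python
# Traces copied from the timing diagram (architecture 2, PC 1), one character
# per time slot: H = high, L = low.
clk_B   = "LHLHLHLHLHLHLHLHLLLLLLLLLHLH"
ack_out = "LLLLLLLLLLLLLLLLHHHHHHLLLLLL"
req_po  = "LLLLLLLLLLLLLLLLLHHHHHHLLLLL"
ack_po  = "LLLLLLLLLLLLLLLLLLHHHHHHLLLL"

rise = lambda t: t.index("H")  # slot of the first rising edge

# Slots during which the PC grant (ack_po) is held high:
grant = range(rise(ack_po), rise(ack_po) + ack_po[rise(ack_po):].index("L"))

assert rise(ack_out) < rise(req_po)         # pause requested only after ack_out
assert all(clk_B[i] == "L" for i in grant)  # clock frozen while grant is held
print("clock pausing consistent with the diagram")
```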

The data on the bus should have had more than enough time to become stable before the input FF at the receiving end is triggered.
Because all other parts of the interconnect circuit are designed to function correctly asynchronously, there are no metastability problems for the interconnect circuit.
The only remaining task is to make sure that no metastability occurs between a wrapper and its contained system, as the contained system operates synchronously.
Any of the incoming signals to the synchronous parts of the circuit could potentially need synchronization, such as $ta$.

Now on to our analysis of the overhead of this wrapper interconnect approach.
To accommodate $n$ nodes, it is clear that we are going to need $n$ systems, each with a pausible clock wrapped in its own wrapper ($n$ pausible clocks, $n$ wrappers).
The decoder gets two $req$ and $ack$ signals (input and output ports) for every system, resulting in $2n + 2n$ signals.
We need $n$ buffers for the shared bus and again one $req$ and $ack$ signal per system for the bus arbitration, resulting in $n + n$ signals.
Internally, each wrapper needs a $pen$, $ta$, two $req$ and two $ack$ (input and output ports) signals, resulting in $n + n + 2n + 2n$ signals.

We do not count the signals inside the decoder, as it is a black box; however, since it must route request/acknowledge signals between all systems, it is likely to incur considerable overhead as well (it might be implemented as a memory, a crossbar, or a similar interconnect).

Thus we can state the asynchronous interconnect wire requirements to be:

$$
\begin{aligned}
    &req_{in} + ack_{in} + req_{out} + ack_{out} + \\
    &req_{bus} + ack_{bus} + \\
    &pen + ta + req_{pi} + ack_{pi} + req_{po} + ack_{po} =\\
    &2n + 2n + n + n + n + n + 2n + 2n = 12n\\
\end{aligned}
$$

As we can see, there is no quadratic term this time around, because every system connects to the decoder instead of connecting to each other.
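The tally above can be double-checked with a few lines of code (a sketch; the per-port multiplicities are exactly those listed in the text):

```python
def wires_arch2_pc1(n):
    """Interconnect wire count for n systems (architecture 2, PC 1)."""
    decoder = 2 * n + 2 * n          # req/ack for input and output ports
    bus     = n + n                  # req/ack for bus arbitration
    wrapper = n + n + 2 * n + 2 * n  # pen, ta, two req, two ack per wrapper
    return decoder + bus + wrapper

print(wires_arch2_pc1(1))  # 12, i.e. 12n wires in total, linear in n
```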

The system suffers from the same problems as the first configuration, namely the bus can ``deadlock'' if a system reserves a bus grant forever.
However, as the $req_{in}$ signal is managed by the asynchronous wrapper, this time around a system can't pause a receiving system.

\subsection{Fourth Configuration (architecture 2, PC 2)}

Similar to the difference between our first and second configuration, semantically not much changes between the third and the fourth configuration.
The exchange happens just as before, only this time the acknowledge signal of the PC circuit of a system does not wait until the incoming request signal is deasserted; instead, the delay element coupled with the toggle FF generates the acknowledge signal independently and revokes the PC grant independently as well.
We can see the new 2-phase signals $req_2$ and $ack_2$ which are responsible for any semantic differences to the other PC setup.

As the clock of the receiving system can resume before the input protocol has finished, the receiving system must also wait for $ta$ before it begins a transmission of its own.
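The semantic difference between the two PC variants mirrors the general difference between 4-phase (return-to-zero) and 2-phase (transition) signalling. A minimal sketch of the two handshake disciplines (generic signal names, not tied to the figure):

```python
def four_phase(words):
    """4-phase handshake: req and ack both return to zero after every word."""
    events = []
    for w in words:
        events += [f"data={w}", "req+", "ack+", "req-", "ack-"]
    return events

def two_phase(words):
    """2-phase handshake: every transition of req/ack is itself an event."""
    events, req, ack = [], 0, 0
    for w in words:
        req ^= 1
        events += [f"data={w}", f"req->{req}"]
        ack ^= 1
        events += [f"ack->{ack}"]
    return events

# 2-phase needs roughly half the signalling edges per transferred word:
print(len(four_phase([1, 2])), len(two_phase([1, 2])))  # 10 6
```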

\begin{figure}[htb]
    \centering
    \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
        $pen_{B}$       & LHHHHHHHHHHHHHHHHHHHHHHHHHHH \\
        $ta_{B}$        & LLLLLLLLLLLLLLLLLLLLLLLLLHHH \\
        $clk_B$         & LHLHLHLHLHLHLHLHLHLLLLLLLHLH \\
        $req_{busB}$    & LLHHHHHHHHHHHHHHHHHHHLLLLLLL \\
        $ack_{busB}$    & LLLHHHHHHHHHHHHHHHHHHHLLLLLL \\
        $bus_{data}$    & ZZZZ 34d{valid} ZZZZZZZ \\
        $bus_{address}$ & ZZZZ 34d{A} ZZZZZZZ \\
        $req_{outB}$    & LLLLHHHHHHHHHHHHHHHHLLLLLLLL \\
        $ack_{outB}$    & LLLLLLLLLLLLLLLLLLHHHHHLLLLL \\
        $req_{poB}$     & LLLLLLLLLLLLLLLLLLLHHHHHLLLL \\
        $ack_{poB}$     & LLLLLLLLLLLLLLLLLLLLHHHHHLLL \\
        $pen_{A}$       & LLLLLLLLLLHHHHHHHHHHHHHHHHHH \\
        $ta_{A}$        & LLLLLLLLLLLLLLLLLLLLLLLLHHHH \\
        $req_{inA}$     & LLLLLLLLLLLHHHHHHHHHHLLLLLLL \\
        $ack_{inA}$     & LLLLLLLLLLLLLLLLLHHHHHLLLLLL \\
        $req_{piA}$     & LLLLLLLLLLLLHHHHHHHHHHLLLLLL \\
        $ack_{piA}$     & LLLLLLLLLLLLLLLLHHHHHHHLLLLL \\
        $req_2$         & LLLLLLLLLLLLLHHHHHHHHHHHHHHH \\
        $ack_2$         & LLLLLLLLLLLLLLLHHHHHHHHHHHHH \\
        $clk_A$         & HLHLHLHLHLHLHLLLHLHLHLHLHLHL \\
  	\extracode
  	\begin{pgfonlayer}{background}
  		\begin{scope}[semitransparent ,semithick]
                    \vertlines[gray,opacity=0.3]{1.05,2.05,...,28.05}
  		\end{scope}
  	\end{pgfonlayer}
    \end{tikztimingtable}%	
    \caption{\label{fig:a2p2-transmission} Transmission (architecture 2, PC 2) from system B to system A}
\end{figure}

Figure~\ref{fig:a2p2-transmission} shows the timing diagram for a successful transmission from system B to system A.

Our metastability considerations here are identical to the third configuration.

Our overhead analysis is identical except for the introduced 2-phase signals $req_2$ and $ack_2$, of which we need one for every system:

$$
\begin{aligned}
    &req_{in} + ack_{in} + req_{out} + ack_{out} + \\
    &req_{bus} + ack_{bus} + \\
    &pen + ta + req_{pi} + ack_{pi} + req_{po} + ack_{po} +\\
    &req_2 + ack_2 =\\
    &2n + 2n + n + n + n + n + 2n + 2n + n + n = 14n\\
\end{aligned}
$$

As before, the bus can ``deadlock'' if misused, but this time around a system cannot pause another system indefinitely by keeping its PC grant.

\FloatBarrier % Leave the FloatBarriers in place.
\section{Asynchronous Protocols}

Use the following naming convention:
\begin{itemize}
	\item Request signal: $req$
	\item Acknowledgment signal: $ack$
	\item Data signals for bundled data protocols: $d_1$ (MSB), $d_0$ (LSB)
	\item Data signals for 4 phase Dual Rail: $d_0.t$, $d_1.t$ (true rails) $d_0.f$, $d_1.f$ (false rails)
	\item Data signals for LEDR: $d_0.d$, $d_1.d$ (data rails) $d_0.p$, $d_1.p$ (parity rails)
	\item Meaningless signals (for bundled data protocols): $X$
\end{itemize}
Please do not change the order of the individual signal traces in the waveforms. 
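As an aside, the LEDR discipline used in several of the waveforms below can be sketched in code, assuming the common convention that the parity rail is chosen so that $d \oplus p$ equals the alternating phase (1 = odd). Under this convention exactly one rail toggles per transmitted word. This is a generic sketch of the encoding rule, not a reproduction of the exact waveforms:

```python
def ledr_encode(bits, first_phase=1):
    """Encode a bit stream in LEDR: per word, set d to the data value and p
    so that d XOR p equals the alternating phase (assumed convention)."""
    phase, out = first_phase, []
    for b in bits:
        out.append((b, b ^ phase))  # (data rail d, parity rail p)
        phase ^= 1
    return out

words = ledr_encode([1, 0, 1, 1])
# Exactly one rail changes between consecutive words (delay insensitivity):
for (d0, p0), (d1, p1) in zip(words, words[1:]):
    assert (d0 != d1) + (p0 != p1) == 1
print(words)
```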

\begin{figure}[htb]
    \centering
    \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
    $d_0.d$ & LLLHHHHHHHHHHLLLLLLLLLLLLLLLLLLL\\
    $d_1.d$ & HHHHHHHHHHHHHHHLLLLLLLLLLHHHHHHH\\
    $d_0.p$ & HHHHLLLLLLLLLLLLLLLLLLLLLLLLLLLL\\
    $ack$ & LLLLLLLLLLHHHHHHHLLLLLLLLLLLHHHH\\
    $d_1.p$ & LLLLLLLLLLLLLLLLLLLLLLLLLLHHHHHH\\
    \extracode
    \begin{pgfonlayer}{background}
      \begin{scope}[semitransparent ,semithick]
        \vertlines[gray,opacity=0.3]{1.05,2.05,...,32.05}
      \end{scope}
    \end{pgfonlayer}
    \end{tikztimingtable}%
    \caption{\label{protocols02:fig:wf1} Waveform 1 using data sequence 1 (11, 00, 01) and protocol 3 (LEDR, $\phi = \text{odd}$)}
\end{figure}

\begin{figure}[htb]
    \centering
    \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
    $req$ & LLLLLLLHHHHLLLLLLHHHLLLLLLHHHLLL\\
    $ack$ & LLLLLLLLLHHHLLLLLLLHHHLLLLLLHHHL\\
    $d_1$ & LLLHHHHHHHHHHHHLLLLLLLLLLLLLLLLL\\
    $X$ & LLLLLLLLHHHHHHHLLLLLLLLHHHHHHHHH\\
    $d_0$ & LLLLLHHHHHHHHLLLLLLLHHHHHHHHHHHH\\
    \extracode
    \begin{pgfonlayer}{background}
      \begin{scope}[semitransparent ,semithick]
        \vertlines[gray,opacity=0.3]{1,2,...,31}
      \end{scope}
    \end{pgfonlayer}
    \end{tikztimingtable}
    \caption{\label{protocols02:fig:wf2} Waveform 2 using data sequence 1 (11, 00, 01) and protocol 1 (BD, 4-phase)}
\end{figure}

\begin{figure}[htb]
    \centering
    \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
    $ack$  & HHHHHHHHHLLLLLLLLLLHHHHHHHHHLLLL \\
    $d_0.p$  & HHHHHHHHHHHHHHHLLLLLLLLLLLLLLLLL \\
    $d_0.d$  & HHHHHHLLLLLLLLLLLLLLLLLLLLHHHHHH \\
    $d_1.p$  & LLLLLLLLLLLLLLLHHHHHHHHHHHHHHHHH \\
    $d_1.d$  & LLLLHHHHHHHHHHHHHHHHHHHHLLLLLLLL \\
    \extracode
    \begin{pgfonlayer}{background}
      \begin{scope}[semitransparent ,semithick]
        \vertlines[gray,opacity=0.3]{1,2,...,31}
      \end{scope}
    \end{pgfonlayer}
    \end{tikztimingtable}
    \caption{\label{protocols02:fig:wf3} Waveform 3 using data sequence 3 (01, 01, 10) and protocol 3 (LEDR, $\phi = \text{odd}$)}
\end{figure}

\begin{figure}[htb]
    \centering
    \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
    $X$ & LLLHHHHHHHLLLLLHHHHHLLLLLLHHLLLL \\
    $d_1$ & LLLLLLLLLLLLLHHHHHHHHHHHHHHHHHHH \\
    $req$ & HHHHLLLLLLLLLLLLLHHHHHHLLLLLLLLL \\
    $d_0$ & HHHHHHHHHHHHHHHHHHHHHHLLLLLLLLLL \\
    $ack$ & HHHHHHHHLLLLLLLLLLLLHHHHHHHHHLLL \\
    \extracode
    \begin{pgfonlayer}{background}
      \begin{scope}[semitransparent ,semithick]
        \vertlines[gray,opacity=0.3]{1.05,2.05,...,32.05}
      \end{scope}
    \end{pgfonlayer}
    \end{tikztimingtable}%	
    \caption{\label{protocols02:fig:wf4} Waveform 4 using data sequence 2 (01, 11, 10) and protocol 1 (BD, 2-phase)}
\end{figure}

\begin{figure}[htb]
  \centering
  \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
    $A$ & LLLLLLHHHHLLLLLHHHHHLLLLLLLLLLLL \\
    $B$ & LLLLLLLLHHHHHHHHHHHHHHHHHLLLLLLH \\
    $C$ & HHHHLLLLLLHHHHHHHHHHHHHHHHHHHHHH \\
    $D$ & LLLLLLLHHHHHHHHHHHHHHHLLLLLLHHHH \\
    $E$ & HHHHHHHHHHHHHLLLLLLLHHHHHHHHHHHH \\
    \extracode
    \begin{pgfonlayer}{background}
      \begin{scope}[semitransparent ,semithick]
        \vertlines[gray,opacity=0.3]{1.05,2.05,...,32.05}
      \end{scope}
    \end{pgfonlayer}
  \end{tikztimingtable}%	
  \caption{\label{protocols02:fig:wf5} Waveform 5 using none of the sequences, as its rule violations exclude every sequence under every protocol}
\end{figure}

\begin{figure}[htb]
  \centering
  \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
    $d_1.t$ & LLLLLLLLLLLLLLLHHHHHLLLLLHHLLLLL \\
    $ack$ & LLLLLLHHHHHHLLLLLLHHHHLLLLHHHLLL \\
    $d_1.f$ & LLLHHHHHLLLLLLLLLLLLLLLLLLLLLLLL \\
    $d_0.f$ & LLLLLLLLLLLLLLLLLLLLLLLLHHHLLLLL \\
    $d_0.t$ & LLLHHHHHHHLLLLLLLHHLLLLLLLLLLLLL \\
    \extracode
    \begin{pgfonlayer}{background}
      \begin{scope}[semitransparent ,semithick]
        \vertlines[gray,opacity=0.3]{1,2,...,31}
      \end{scope}
    \end{pgfonlayer}
  \end{tikztimingtable}
    \caption{\label{protocols02:fig:wf6} Waveform 6 using data sequence 2 (01, 11, 10) and protocol 2 (NCL)}
\end{figure}
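For comparison, the return-to-zero dual-rail encoding behind the NCL waveforms can be sketched as follows (a generic sketch with a hypothetical helper name; 2-bit words with an all-zero spacer after each codeword):

```python
def dual_rail_4phase(words):
    """Map each 2-bit word (d1, d0) to rails (d1.t, d1.f, d0.t, d0.f):
    exactly one rail per bit is high, and after every codeword all rails
    return to zero (the spacer)."""
    frames = []
    for d1, d0 in words:
        frames.append((d1, 1 - d1, d0, 1 - d0))  # codeword frame
        frames.append((0, 0, 0, 0))              # return-to-zero spacer
    return frames

# data sequence 2 (01, 11, 10):
for frame in dual_rail_4phase([(0, 1), (1, 1), (1, 0)]):
    print(frame)
```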

\begin{figure}[htb]
  \centering
  \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
    $A$ & LLLLLHHHHLLLLLHHHHLLLLLHHHHLLLLL \\
    $B$ & HHHHLLLLLLLLLLLHHHHHHHHHHHHHHHHH \\
    $C$ & LLLLLLLLLLLHHHHHHHHHHHHHHHHHHHLL \\
    $D$ & HHHLLLHHHHHHHHHHHHHHHLLLLLLLLLLL \\
    $E$ & LLLLLLLLLLLLLLLLHHHHLLLLLHHHHLLL \\
    \extracode
    \begin{pgfonlayer}{background}
      \begin{scope}[semitransparent ,semithick]
        \vertlines[gray,opacity=0.3]{1,2,...,31}
      \end{scope}
    \end{pgfonlayer}
  \end{tikztimingtable}
  \caption{\label{protocols02:fig:wf7} Waveform 7 using none of the sequences, as its rule violations exclude every sequence under every protocol}
\end{figure}

\begin{figure}[htb]
  \centering
  \begin{tikztimingtable} [timing/e/background/.style={fill=gray}]
    $d_1.t$ & LLLLHHHHHHLLLLLLLLLLLLLLLLLLLLLL \\
    $d_0.t$ & LLLLLHHHLLLLLLLLLLLLLLLLHHHHLLLL \\
    $d_1.f$ & LLLLLLLLLLLLLLLHHHHLLLLLHHHHHLLL \\
    $d_0.f$ & LLLLLLLLLLLLLHHHHHLLLLLLLLLLLLLL \\
    $ack$ & LLLLLLHHHHHLLLLLLHHHLLLLLLHHHHLL \\
    \extracode
    \begin{pgfonlayer}{background}
      \begin{scope}[semitransparent ,semithick]
        \vertlines[gray,opacity=0.3]{1.05,2.05,...,32.05}
      \end{scope}
    \end{pgfonlayer}
  \end{tikztimingtable}%	
    \caption{\label{protocols02:fig:wf8} Waveform 8 using data sequence 1 (11, 00, 01) and protocol 2 (NCL)}
\end{figure}


\FloatBarrier
\phantomsection
\addcontentsline{toc}{section}{References}
\bibliographystyle{plain}
\bibliography{refs}

\end{document}
