\documentclass[10pt]{article}
\usepackage{protocol}
\title{Homework I}
\author{Norbert Tremurici - 11907086}
\begin{document}
\maketitle

\FloatBarrier % Leave the FloatBarriers in place.
\section{Hazards}

\subsection{Exercise (a)}

From the sub-circuit with the boolean formula $(A \land \neg D) \lor (\neg C \land D)$ we can create a KV map as follows:

\begin{figure}[h!]
	\begin{center}
		\kvunitlength=10mm
                \karnaughmap{3}{$X$}{{$A$}{$D$}{$C$}}{0111 0100}{
		\textcolor{green}{
			\put(2,1.5){\oval(1.9,0.9)[]}}
		\textcolor{red}{
			\put(0.9,0.5){\oval(1.9,0.9)[]}}
		%\textcolor{black}{
			%\put(0.5,1){\oval(0.9,1.9)[]}}
		}
	\end{center}
	\caption{KV map expansion for X}
	\label{fig:haz02:kv_map}
\end{figure}

SIC hazards are hazards triggered by a single input change.

Now the static 1 hazards can be read off the KV map by looking at adjacent groupings which do not cover each other: such a boundary represents a transition from one high state into another high state during which a SIC could temporarily cause the output to go low.

There is one such boundary, so the static 1 hazard is given by $A \bar C D \leftrightarrow A \bar C \bar D$.

Because we have a sum of products implementation, we only consider static 1 hazards; to find static 0 hazards in a KV map we would need a product of sums implementation.
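This boundary check can be automated; the following sketch (the encoding of the cover as `TERMS` is an assumption of this illustration) enumerates all single-input changes between 1-points of the sum-of-products cover and reports those not covered by a single product term:

```python
from itertools import product

# Product terms of X = (A and not D) or (not C and D); variable order (A, C, D).
TERMS = [{"A": 1, "D": 0}, {"C": 0, "D": 1}]

def covers(term, assignment):
    """True if the product term evaluates to 1 under the assignment."""
    return all(assignment[v] == pol for v, pol in term.items())

def f(assignment):
    return any(covers(t, assignment) for t in TERMS)

# A static 1 hazard exists for a single-input change between two 1-points
# that no single product term covers on both sides.
hazards = []
for bits in product((0, 1), repeat=3):
    a = dict(zip("ACD", bits))
    for v in "ACD":
        b = dict(a)
        b[v] = 1 - a[v]
        if f(a) and f(b) and not any(covers(t, a) and covers(t, b) for t in TERMS):
            hazards.append((tuple(sorted(a.items())), v))

print(hazards)
```

The only reported pair (in both directions) is the single-input change on $D$ at $A = 1$, $C = 0$, matching the boundary found in the KV map.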

\subsection{Exercise (b)}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%             9-value logic table 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Now we want to generate the table of Kung values for an XOR gate.
Our result likely depends on the specific implementation of the XOR gate, so we will derive as simple a circuit as possible.

We begin from the four-gate NAND implementation of $C = A \not \equiv B$.

$$
\begin{aligned}
C &= \neg (\neg (A \land \neg (A \land B)) \land \neg (B \land \neg (A \land B))) \\
&= ((A \land \neg (A \land B)) \lor (B \land \neg (A \land B))) \\
&= (A \lor B) \land \neg (A \land B) \\
&= (A \lor B) \land (\neg A \lor \neg B)
\end{aligned}
$$

From this we can already infer that this gate can produce hazards, as both $A$ and $B$ appear in negated and non-negated form and these results are later combined.
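As a quick sanity check (a small enumeration, not part of the derivation), the four-gate NAND form and the final product-of-sums form can be compared against XOR over all input combinations:

```python
from itertools import product

def nand(x, y):
    return 1 - (x & y)

for A, B in product((0, 1), repeat=2):
    n1 = nand(A, B)                             # inner NAND shared by both branches
    c_nand = nand(nand(A, n1), nand(B, n1))     # four-gate NAND implementation
    c_pos = (A | B) & ((1 - A) | (1 - B))       # (A or B) and (not A or not B)
    assert c_nand == c_pos == (A ^ B)
print("all four input combinations agree")
```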

\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|}
\hline
$\text{xor}$ & 0 & 1 & $\downarrow$ & $\uparrow$ & S1 & S0 & D+ & D- & * \\
\hline
\hline
 0 &             0 & 1 & $\downarrow$ & $\uparrow$   & S1 & S0 & D+ & D- & * \\ \hline
 1 &             1 & 0 & $\uparrow$   & $\downarrow$ & S0 & S1 & D- & D+ & * \\ \hline
 $\downarrow$ &  $\downarrow$ & $\uparrow$   & S0 & S1 & D+ & D- & S1 & S0 & * \\ \hline
 $\uparrow$ &    $\uparrow$   & $\downarrow$ & S1 & S0 & D- & D+ & S0 & S1 & * \\ \hline
 S1 &            S1 & S0 & D+ & D- & S0 & S1 & D- & D+ & * \\ \hline
 S0 &            S0 & S1 & D- & D+ & S1 & S0 & D+ & D- & * \\ \hline
 D+ &            D+ & D- & S1 & S0 & D- & D+ & S0 & S1 & * \\ \hline
 D- &            D- & D+ & S0 & S1 & D+ & D- & S1 & S0 & * \\ \hline
 *  &            * & * & * & * & * & * & * & * & * \\ \hline
\end{tabular}
\end{center}

Naturally, the table is symmetric.
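This symmetry, together with the identity behaviour of the first row, can be verified by transcribing the table (the `R`/`F` encoding of the rising and falling arrows is an assumption made for plain-text convenience):

```python
# Kung-value XOR table transcribed from above; "R"/"F" stand for the rising
# and falling arrows, "*" for the unknown value.
VALS = ["0", "1", "F", "R", "S1", "S0", "D+", "D-", "*"]
ROWS = [
    "0  1  F  R  S1 S0 D+ D- *",
    "1  0  R  F  S0 S1 D- D+ *",
    "F  R  S0 S1 D+ D- S1 S0 *",
    "R  F  S1 S0 D- D+ S0 S1 *",
    "S1 S0 D+ D- S0 S1 D- D+ *",
    "S0 S1 D- D+ S1 S0 D+ D- *",
    "D+ D- S1 S0 D- D+ S0 S1 *",
    "D- D+ S0 S1 D+ D- S1 S0 *",
    "*  *  *  *  *  *  *  *  *",
]
XOR = {(a, b): out for a, row in zip(VALS, ROWS)
       for b, out in zip(VALS, row.split())}

# The table is symmetric and the 0 row acts as the identity.
assert all(XOR[a, b] == XOR[b, a] for a in VALS for b in VALS)
assert all(XOR["0", b] == b for b in VALS)
```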

We know that O is potentially a glitch producer, so there can be dynamic hazards.

\begin{itemize}
    \item $Y = D+ = \neg D-$ could arise if $\neg (S1 \not \equiv \uparrow)$, so for example $A = 1, B = 0, C = 0, D = \uparrow$
    \item $Y = D- = \neg D+$ could arise if $\neg (S1 \not \equiv \downarrow)$, so for example $A = 1, B = 0, C = 0, D = \downarrow$
    \item $Y = S1 = \neg S0$ could arise if $\neg (\downarrow \not \equiv \downarrow)$, so for example $A = 0, B = 0, C = 0, D = \downarrow$
    \item $Y = S0 = \neg S1$ could arise if $\neg (\uparrow \not \equiv \downarrow)$, so for example $A = 1, B = 0, C = 1, D = \downarrow$
\end{itemize}

\subsection{Exercise (c)}

First we extract the formula using a system of equations:

$$
\begin{aligned}
    M &= B \lor D \\
    K &= A \land \neg D \\
    P &= M \lor K \\
    N &= \neg A \lor D \\
    S &= N \land \neg C \\
    Z &= R = P \land S
\end{aligned}
$$

Now we can resolve the equation:

$$
\begin{aligned}
    Z &= R = P \land S = (M \lor K) \land (N \land \neg C) \\
    &= ((B \lor D) \lor (A \land \neg D)) \land ((\neg A \lor D) \land \neg C) \\
\end{aligned}
$$

We can never derive $B \land \neg B$ or $C \land \neg C$, so we set $B = 0$ and $C = 0$:

$$
Z = (D \lor (A \land \neg D)) \land (\neg A \lor D)
$$

For $D = 0$ we have $Z = A \land \neg A$, so there is an S0 hazard here.
For $A = 1$ we have $Z = (D \lor \neg D) \land D$, so there is also a D- hazard here.
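The S0 hazard can be illustrated with a sketch that models a late inverter by letting $A$ and $\neg A$ take inconsistent values during a transition (this two-rail modelling is an assumption of the illustration):

```python
def Z(A, nA, D):
    # Z with B = C = 0; nA models the (possibly stale) output of A's inverter.
    nD = 1 - D
    return (D | (A & nD)) & (nA | D)

# Steady states for D = 0: Z is 0 for both values of A ...
assert Z(0, 1, 0) == 0 and Z(1, 0, 0) == 0
# ... but while A rises and its inverter still drives the old value
# (momentarily A = nA = 1), Z glitches high: the S0 hazard.
assert Z(1, 1, 0) == 1
```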

\subsection{Exercise (d)}

We have already learned that inputs Y and Z can have static and dynamic hazards, whereas X can have a static 1 hazard.
We have also seen that the problematic inputs are A and D, so we consider only transitions in which one of those inputs changes.

First we look at A transitions:

\begin{enumerate}
    \item $0101 \rightarrow 1101$ (R remains at 1, A is masked, no glitch)
    \item $1101 \rightarrow 0101$ (R remains at 1, A is masked, no glitch)
\end{enumerate}

Then at D transitions:

\begin{enumerate}
    \item $1101 \rightarrow 1100$ (O has a S1 hazard, thus Q has a S0 hazard, but X is S1, Y is S0, Z is 0, but because Y switches later, T gets set to 1!)
    \item $1100 \rightarrow 1101$ (O has a S1 hazard, thus Q has a S0 hazard, this time Z is 1 so T gets set correctly)
    \item $0101 \rightarrow 0100$ (O goes from 1 to 0 and Q from 1 to 0 ordinarily, X is 0, Y is 1, Z is 1, T gets set correctly)
    \item $0110 \rightarrow 0111$ (O and Q remain at 0, X, Y and Z are 0, T gets reset to 0)
\end{enumerate}

So we have identified the problematic transition $1100 \rightarrow 1101$.
We also know the glitch at Q is caused by the glitch at O, so we will focus on preventing these effects in O.
The glitch is caused when K does not have the updated value of D but L does, as KL transitions from $01 \rightarrow 10$.
To prevent this case we insert a delay element at the D input of L with delay $\Delta$ equal to the delay of the inverter input to K.

From the KV map of input X (which is output O), we can see that we can add a redundant term $(A \land \neg C)$ to fix the problem:

$$
\begin{aligned}
    O &= (A \land \neg D) \lor (\neg C \land D) \\
    &= (A \land \neg D) \lor (\neg C \land D) \lor (A \land \neg C)
\end{aligned}
$$
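That the redundant (consensus) term closes the gap can be checked with a stale-inverter model for $D$ (again a sketch under the two-rail modelling assumption):

```python
def O(A, C, D, nD, redundant=False):
    # O = (A and not D) or (not C and D), optionally with the consensus term;
    # nD models a lagging inverter on D.
    out = (A & nD) | ((1 - C) & D)
    if redundant:
        out |= A & (1 - C)
    return out

A, C = 1, 0
# During a D transition the inverter may briefly agree with D (D = nD = 0):
assert O(A, C, 0, 0) == 0                   # the original cover glitches low
assert O(A, C, 0, 0, redundant=True) == 1   # the consensus term holds O at 1
```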

\subsection{Exercise (e)}

We restrict ourselves again to D transitions, because there are no A transitions:

\begin{enumerate}
    \item $1001 \rightarrow 1000$ (X has a S1 hazard, Y has a D+ hazard, P has a S1 hazard, so Z has a D- hazard, this is pretty bad!)
    \item $1010 \rightarrow 1011$ (same as the first, but S and thus R and thus Z remain at 0 instead, this transition is fine)
    \item $1000 \rightarrow 1001$ (same as the first, but Y and Z are flipped instead)
\end{enumerate}

We can re-use the previously proposed circuit modification to prevent the glitch at O.
But additionally we need to prevent glitches at P.
To achieve this, we could use a delay element at the D input of M equal to the delay of the inverter at the D input of K.

\FloatBarrier % Leave the FloatBarriers in place.
\newpage
\section{Clock Domain Crossing}

\subsection{Exercise (a)}

To solve the problem of clock domain crossing for a 32-bit data word of a slower system to a faster system, we can use an asynchronous FIFO for example.
Here we apply a design we learned in another course, Advanced FPGA Design; the asynchronous FIFO we will use can be found in Steve Kilts' book ``Advanced FPGA Design: Architecture, Implementation, and Optimization'' (Wiley-IEEE Press, 2007).
The design for our case can be viewed in Figure~\ref{fig:async-fifo}.

The idea here is that because we want to synchronize a data word, we cannot simply use synchronizers for the entire data word, as this could lead to inconsistency.
So instead, we will use a FIFO with its read and write ports in separate domains.
The pointers are compared to generate the full flag on the write side and the empty flag on the read side, but only after taking appropriate measures against metastability.
The read address (which is generated as an address into a RAM and wraps around) is converted to a Gray code, which then passes through an n-stage synchronizer (actually $m$ parallel n-stage synchronizers, where $m = \log_2 (d)$, with $d$ being the FIFO depth).
We do the same for the write address.

Because a Gray code changes only a single bit per step, we avoid the consistency issues here.
The synchronizers introduce latency, but this is acceptable: in the worst case they pessimistically assert a full or empty signal that is no longer true, so the system will simply wait.

\begin{figure}[h]
    \centering
    \includegraphics[height=.5\textheight]{graphics/async-fifo.pdf}
    \caption{Proposed circuit for the transmission}
    \label{fig:async-fifo}
\end{figure}

\begin{figure}[h]
    \centering
    \includegraphics[width=\textwidth]{graphics/async-fifo-wave.png}
    \caption{Timing diagram showing back-to-back transmission, assuming a FIFO depth of 2}
    \label{fig:async-fifo-timing}
\end{figure}

Figure~\ref{fig:async-fifo-timing} shows the timing diagram for back-to-back transmission, assuming a FIFO depth of 2 and two synchronizer stages.
Notice that although the read of A had already completed, full still went high.
This can happen because the updated value of the read pointer is delayed on its way to the write domain by the synchronizers.

The throughput of the system is determined by how often System A can write without being interrupted by a full FIFO.

In our given example with two synchronizers, assuming all the other logic and memory can operate in their respective clock domain given the timing constraints, we can see that for every two cycles of writing, we have three cycles of waiting.
So the throughput is $\frac{2\ \text{words}}{5\ \text{cycles}} = 0.4\ \frac{\text{words}}{\text{cycle}}$.
One cycle of System A operating at 368 MHz takes about 2.7 ns, so this is approximately $0.4\ \frac{\text{words}}{\text{cycle}} \cdot \frac{1\ \text{cycle}}{2.7\ \text{ns}} \approx 0.148\ \frac{\text{words}}{\text{ns}}$.
This situation can be improved by choosing an appropriate FIFO depth compared to the number of synchronizer stages.

If the FIFO depth is high enough, then the system can reach a steady state with a throughput as high as System A can produce data (at a rate of 368 MHz), as the faster system will always be able to consume the data fast enough, preventing the FIFO from becoming full.
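The throughput estimate can be reproduced numerically (a sketch; the two-writes-per-three-waits pattern is read off the timing diagram above):

```python
# Write-side clock of System A and the write/wait pattern from the timing diagram.
f_A = 368e6                       # Hz
write_cycles, wait_cycles = 2, 3

words_per_cycle = write_cycles / (write_cycles + wait_cycles)
words_per_ns = words_per_cycle * f_A / 1e9

print(f"{words_per_cycle:.1f} words/cycle, {words_per_ns:.3f} words/ns")
```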

As for the MTBU, metastability can still occur inside the n-stage synchronizers, although it depends on the number of stages.
We also need to consider both directions, from domain A to domain B and vice-versa.
We can specify a generic formula from a system X to a system Y as follows:

$$
\begin{aligned}
    MTBU_{X \to Y}(n, f_X, f_Y) &= \frac{1}{\lambda_{dat} T_0 f_{clk}} \exp\left (\frac{\sum^n_{i=1} t_{res,i}}{\tau_C}\right) \\
    &= \frac{1}{\min\{f_X, f_Y\} T_0 f_Y} \exp\left (n \cdot \left(\frac{1}{f_Y} - t_{CO} - t_{SU}\right) \cdot \frac{1}{\tau_C}\right) \\
\end{aligned}
$$

In the case of domain A to domain B, we have:

$$
\begin{aligned}
    MTBU_{A \to B}(n, f_A, f_B) &= \frac{1}{\min\{f_A, f_B\} T_0 f_B} \exp\left (n \cdot \left(\frac{1}{f_B} - t_{CO} - t_{SU}\right) \cdot \frac{1}{\tau_C}\right) \\
    &= \frac{1}{368 \cdot 10^6 \cdot 85 \cdot 10^{-12} \cdot 460 \cdot 10^6} \exp\left (n \cdot \left(\frac{1}{460\cdot10^6} - 400 \cdot 10^{-12} - 300 \cdot 10^{-12}\right) \cdot \frac{1}{90 \cdot 10^{-12}}\right) \\
    &= \frac{1}{368 \cdot 460 \cdot 85} \exp\left (n \cdot 1473.91 \cdot 10^{-12} \cdot \frac{1}{90 \cdot 10^{-12}}\right) \\
    &= \frac{1}{368 \cdot 460 \cdot 85} \exp\left (16.38 n\right) \\
\end{aligned}
$$

Similarly for domain B to domain A, where the receiving clock is the slower clock A, so the resolution time per stage is larger:

$$
\begin{aligned}
    MTBU_{B \to A}(n, f_B, f_A) &= \frac{1}{368 \cdot 10^6 \cdot 85 \cdot 10^{-12} \cdot 368 \cdot 10^6} \exp\left (n \cdot \left(\frac{1}{368\cdot10^6} - 400 \cdot 10^{-12} - 300 \cdot 10^{-12}\right) \cdot \frac{1}{90 \cdot 10^{-12}}\right) \\
    &= \frac{1}{368^2 \cdot 85} \exp\left (n \cdot 2017.39 \cdot 10^{-12} \cdot \frac{1}{90 \cdot 10^{-12}}\right) \\
    &= \frac{1}{368^2 \cdot 85} \exp\left (22.42 n\right) \\
\end{aligned}
$$

So if we choose the stages to be 2 for example, we get $MTBU_{A \to B}(2, 368\cdot10^6, 460\cdot10^6) \approx 11.7 \cdot 10^6\ \text{s} \approx 135\ \text{days}$ and $MTBU_{B \to A}(2, 460\cdot10^6, 368\cdot10^6) \approx 2.6 \cdot 10^{12}\ \text{s}$, i.e.\ tens of thousands of years.

To get the actual MTBU, we need to consider these two parallel paths as a whole system; the upset rates add, so the combined MTBU is dominated by the smaller value:

$$
\begin{aligned}
    MTBU &= \frac{1}{\frac{1}{MTBU_{A \to B}(2, 368 \cdot 10^6, 460 \cdot 10^6)} + \frac{1}{MTBU_{B \to A}(2, 460 \cdot 10^6, 368 \cdot 10^6)}} \\
    &\approx 11.7 \cdot 10^6\ \text{s} \approx 135\ \text{days} \\
\end{aligned}
$$
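The generic formula can also be evaluated directly; the sketch below plugs in the exercise parameters ($T_0 = 85\,$ps, $\tau_C = 90\,$ps, $t_{CO} = 400\,$ps, $t_{SU} = 300\,$ps) and lets the receiving-side clock determine both the resolution time and $f_{clk}$:

```python
from math import exp

T0, tau = 85e-12, 90e-12        # metastability window and resolution time constant
t_co, t_su = 400e-12, 300e-12

def mtbu(n, f_x, f_y):
    """MTBU for data from domain X sampled by an n-stage synchronizer in domain Y."""
    t_res = 1 / f_y - t_co - t_su            # resolution time per stage
    return exp(n * t_res / tau) / (min(f_x, f_y) * T0 * f_y)

ab = mtbu(2, 368e6, 460e6)       # A -> B, sampled with the 460 MHz clock
ba = mtbu(2, 460e6, 368e6)       # B -> A, sampled with the 368 MHz clock
total = 1 / (1 / ab + 1 / ba)    # upset rates of both paths add up

print(f"A->B: {ab:.3g} s, B->A: {ba:.3g} s, combined: {total / 86400:.1f} days")
```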


\subsection{Exercise (b)}

Now the write side is faster than the read side.
If the FIFO depth is again high enough, the system will still reach a steady state where system B continuously reads, whereas A is occasionally going to stall to wait for B, but the throughput will remain the same.
The MTBUs of the two directions are simply swapped.

\subsection{Exercise (c)}

For the chosen solution, the question doesn't apply, because the generated full and empty flags are synchronous signals within their respective clock domains (the write side uses the full signal generated on the write side and the read side uses the empty signal generated on the read side).
For this reason, we will instead consider the generic system depicted in Figure~\ref{fig:generic-handshake}.

\begin{figure}[h]
    \centering
    \includegraphics[width=.6\textwidth]{graphics/generic-handshake.pdf}
    \caption{Generic handshake system}
    \label{fig:generic-handshake}
\end{figure}

For this system, the requests are generated using clock A, whereas acknowledges are generated using clock B.
Both are synchronous within their respective domain.
Even if both clocks initially start at the same point (if $\Delta = 0$), edge $i$ of clock B will run ahead of edge $i$ of clock A by a value we will call $\delta_i$.

The ratio of 368 MHz to 460 MHz determines how many cycles of each clock pass until the pattern repeats; from their prime factorizations we can determine that 4 cycles of clock A equal 5 cycles of clock B.
Thus after 4 cycles of clock A the pattern repeats, so we determine all possible phase shifts within this period.
A cycle for clock A takes $T_A \approx 2717\ \text{ps}$, whereas a cycle for clock B takes $T_B \approx 2174\ \text{ps}$.
Their difference is approximately $\delta = T_A - T_B \approx 543\ \text{ps}$.

\begin{itemize}
    \item $\delta_0 = 0 \cdot \delta - \Delta = -\Delta$
    \item $\delta_1 = 1 \cdot \delta - \Delta = 543\ \text{ps} - \Delta$
    \item $\delta_2 = 2 \cdot \delta - \Delta = 1086\ \text{ps} - \Delta$
    \item $\delta_3 = 3 \cdot \delta - \Delta = 1629\ \text{ps} - \Delta$
\end{itemize}

To guarantee reliable transmission, we need to make sure that the acknowledge signal does not go high during the setup time of system A (before a rising edge).

We must also ensure that the resolution time is positive.
We can formulate all this by saying:

$$
\begin{aligned}
    \delta &> \Delta \geq 0 \\
    t_{res} &> 0 \\
    t_{res} &= \delta - \Delta - t_{CO} - t_{SU} \\
\end{aligned}
$$

For the phase shift $\delta_1$ we would get $t_{res} = 543 - \Delta - 400 - 300\ \text{ps} = -157 - \Delta\ \text{ps} < 0$, meaning data launched at that edge arrives too late for the corresponding clock A edge and is only captured one edge later; the binding constraint therefore comes from $\delta_2$, where $t_{res} = 1086 - \Delta - 400 - 300\ \text{ps} = 386 - \Delta\ \text{ps}$, so we can infer that $0 \leq \Delta < 386\ \text{ps}$.
If we can ensure this constraint, we can even remove the synchronizers.
Calculating the MTBU is meaningless, because for this system with correlated clocks either the delay is correctly set up to avoid upsets, or it is not.
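The admissible range for $\Delta$ can also be found by brute force over the four phase shifts (a sketch; edges with a negative margin are captured one clock A edge later and therefore do not constrain $\Delta$ here):

```python
T_A, T_B = 1 / 368e6, 1 / 460e6
delta = (T_A - T_B) * 1e12           # per-cycle drift in ps (about 543 ps)
t_co, t_su = 400, 300                # ps

shifts = [i * delta for i in range(4)]        # delta_i for Delta = 0
margins = [s - t_co - t_su for s in shifts]   # resolution margin per edge pair

# Only the positive margins constrain Delta; the tightest one bounds it.
max_Delta = min(m for m in margins if m > 0)
print(f"0 <= Delta < {max_Delta:.0f} ps")
```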

\FloatBarrier % Leave the FloatBarriers in place.
\newpage
\section{Waiting Synchronizers}

\subsection{Exercise (a)}

We will consider all the MTBU values ($MTBU_i$ for synchronizer $i$), delays and transistor counts separately and compare them.

\subsubsection{Synchronizer 1}

Synchronizer 1 is just a 2-stage synchronizer, so we have double the resolution time:

$$
\begin{aligned}
    t_{res,i} &= 1/f_{clk} - t_{CO} - t_{COMB} - t_{SU} \\
    MTBU_1 &= \frac{1}{\lambda_{dat} \cdot T_0 \cdot f_{clk}} \exp \left ( \frac{2 t_{res,i}}{\tau} \right ) \\
\end{aligned}
$$

Let's define the delay $\delta$ as the time between an event on $async_{in}$ and a positive edge of $FF_1$.

To get the minimum delay, we can try to time $async_{in}$ in the best possible way such that it can be captured by $FF_1$.
Usually we would respect the setup time, but as we can disregard metastability, we choose the moment for the event just before a positive clock edge arrives at $FF_1$, so effectively $\delta = 0$.
Now it simply takes two more cycles to propagate the result captured by $FF_1$ to $FF_{sys}$, so the minimum delay is $3 T_{clk}$.

To get the maximum delay, we can assume that we have just missed the capture window.
Usually we would respect the hold time, but in this case we choose the moment just after a positive clock edge, so effectively $\delta = T_{clk}$.
Only next cycle will the value be captured and then again two more cycles will be necessary for the value to propagate.
Thus the maximum delay is $4 T_{clk}$.

Finally, if we assume a uniform distribution for the event, the average lies between those two values, at $3.5 T_{clk}$.

As for the area, if we choose the implementation of our D flip-flops to be the standard double-latch implementation using transmission gates, each flip-flop requires 4 transmission gates and 4 inverters, disregarding the inverter for the negated FF output.
If we consider that a transmission gate requires two transistors, that an inverter costs two transistors and that we need one additional inverter for the whole circuit because the transmission gates require both $clk$ and $\neg clk$, we end up with the following formula for an n-stage synchronizer:

$$
\text{area}(n) = (n+1) \cdot ((4 \cdot 2) + (4 \cdot 2)) + 2 = (n+1) \cdot 16 + 2
$$

Plugging in $n = 2$ yields $\text{area}(n) = 3 \cdot 16 + 2 = 50$ transistors.

\subsubsection{Synchronizer 2}

Synchronizer 2 has a clock divider, so the resolution times for the respective FFs and thus the MTBU change as follows:

$$
\begin{aligned}
    t_{res,sys} &= 1/f_{clk} - t_{CO} - t_{COMB} - t_{SU} \\
    t_{res,1} &= 2/f_{clk} - t_{CO} - t_{COMB} - t_{SU} \\
    &= t_{res,2} \\
    MTBU_2 &= \frac{1}{\lambda_{dat} \cdot T_0 \cdot f_{clk}} \exp \left ( \frac{t_{res,2} + t_{res,sys}}{\tau} \right ) \\
\end{aligned}
$$

While the frequency is halved for $FF_2$, the data rate is halved for $FF_{sys}$, so they still share the same denominator.

Here the delay considerations get more interesting.

Like before we assume the best case to be an event just before the positive edge, so $\delta = 0$ and the event gets immediately captured.
After two cycles (due to the clock divider) it propagates to $FF_2$ and after another cycle, it propagates to $FF_{sys}$.
So our minimum delay is $3 T_{clk}$.

As for the worst case, we might just miss the capture window, resulting in two clock cycles waiting time for $FF_1$ to capture again.
After two more cycles the value will have propagated to $FF_2$ and after another cycle to $FF_{sys}$.
So our maximum delay is $5 T_{clk}$.

Assuming again a uniform distribution for the event, the average delay would be $4 T_{clk}$.

We re-use the flip-flop implementation we listed before, we only need to consider the additional clock divider.
For this clock divider, we could use a simple counting circuit that stores a 1-bit integer, counts up by 1 (wrapping around on overflow) and only outputs high if the bit is high.
This could be achieved by feeding back the inverted output of this counting flip-flop to the input.
Thus we would need an additional D flip-flop and inverter for the clock divider.
Our general area formula with a single clock divider becomes:

$$
\text{area}(n) = (n+1) \cdot ((4 \cdot 2) + (4 \cdot 2)) + 16 + 2 + 2 = (n+1) \cdot 16 + 20
$$

Plugging in $n = 2$ yields $\text{area}(n) = 3 \cdot 16 + 20 = 68$ transistors.

\subsubsection{Synchronizer 3}

Synchronizer 3 acts like a 2-stage synchronizer, but $FF_1$ and $FF_2$ alternate with each clock cycle.
We assume at this stage that the enable signals are low active, as this would actually improve the situation because it would mask the FF that is currently capturing a value, prolonging the resolution time and thereby combatting metastability.
The captured value of either $FF_1$ or $FF_2$ becomes active at the D input of $FF_{sys}$ only after the circuit has alternated the selected FF once and back, so there is a clock cycle of delay until the value of the FF that captured the event is seen.

In this case, we buy another clock period of time until the FF has to resolve, which we could express as follows:

$$
\begin{aligned}
    t_{res,sys} &= 1/f_{clk} - t_{CO} - t_{COMB} - t_{SU} \\
    t_{res,1} &= 2/f_{clk} - t_{CO} - t_{COMB} - t_{SU} \\
    &= t_{res,2} \\
    MTBU_3 &= \frac{1}{\lambda_{dat} \cdot T_0 \cdot f_{clk}} \exp \left ( \frac{\min\{t_{res,1}, t_{res,2}\} + t_{res,sys}}{\tau} \right ) \\
\end{aligned}
$$

Similarly to synchronizer 2, we have double the clock period as a base for the resolution time of $FF_1$ and $FF_2$.

For the delays, we repeat the procedure.

In the best case, the signal arrives just before capture and the FF that captures it will also be enabled for the capture by $FF_{sys}$.
In the worst case, the signal arrives just after capture, but when it is captured, the FF that captured it will be enabled for the capture by $FF_{sys}$.
But we also need to take into account that a FF value is masked due to the alternation procedure, so we add an additional clock cycle.
The minimum delay is just $2 T_{clk}$ and the maximum delay is $3 T_{clk}$, giving us an average of $2.5 T_{clk}$.

For the area, we will consider our flip-flop implementation, but we need to take care to handle resets and enables as well.
For the enables, we can use a NAND gate for the $en$ signals by using the $clk$ as a second input.
Similarly for reset, we can implement a synchronous reset by adding two AND gates and an inverter, using $clk \land rst$ for the reset FF and $clk \land \neg rst$ for the set FF.
We know that an AND gate could be implemented as a NAND gate with an additional inverter and a NAND gate requires four transistors.
The buffer can be built using four transistors.

Thus our total result amounts to $\text{area} = 4 \cdot 16 + 6 \cdot 2 + 2 \cdot 4 + 2 \cdot 4 + 2 = 94$ transistors (FFs + ANDs + NANDs + buffers + clock inverter), by far the highest result.

\subsubsection{Comparison}

As we can see in Table~\ref{table:comparison}, the resolution times are relatively similar.
We get a slightly higher resolution time for synchronizers 2 and 3.
The delays are best for synchronizer 3 and worst for 2.
But their complexity gets progressively higher as they end up using more and more transistors.
If we dropped the ideal assumptions we made for this example, evaluating the latter two synchronizers would become more challenging.

\begin{table}[h!]
\centering
\begin{tabular}{|c||c|c|c|c|c|}
\hline
    synchronizer & $t_{res,total}$ & min. $\Delta$ & avg. $\Delta$ & max. $\Delta$ & transistors \\
\hline
\hline
    1 & $2 \cdot t_{res,i} $ & $3 T_{clk}$ & $3.5 T_{clk}$ & $4 T_{clk}$ & 50 \\ \hline
    2 & $t_{res,2} + t_{res,sys}$ & $3 T_{clk}$ & $4 T_{clk}$ & $5 T_{clk}$ & 68 \\ \hline
    3 & $\min\{t_{res,1}, t_{res,2}\} + t_{res,sys}$ & $2 T_{clk}$ & $2.5 T_{clk}$ & $3 T_{clk}$ & 94 \\ \hline
\end{tabular}
\caption{Comparison of the three synchronizer implementations}
\label{table:comparison}
\end{table}
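The transistor counts in Table~\ref{table:comparison} follow directly from the area formulas derived above, transcribed here as a sketch:

```python
def area_sync1(n):
    # (n+1) flip-flops at 16 transistors each plus the shared clock inverter.
    return (n + 1) * 16 + 2

def area_sync2(n):
    # Additionally one clock-divider flip-flop (16), its feedback inverter (2)
    # and one more clock inverter (2).
    return (n + 1) * 16 + 20

# Synchronizer 3: 4 FFs, 2 AND gates (6 transistors each), 2 NAND gates (4),
# 2 buffers (4) and the clock inverter.
area_sync3 = 4 * 16 + 2 * 6 + 2 * 4 + 2 * 4 + 2

print(area_sync1(2), area_sync2(2), area_sync3)   # 50 68 94
```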

\FloatBarrier % Leave the FloatBarriers in place.
\newpage
\section{Non-ideal clock}

\subsection{Exercise (a)}

To calculate the maximum frequency, we can consider the condition $t_{res} > 0$:

$$
t_{res} = 1/f - t_{comb} - t_{clk2out} - t_{su}
$$

If we leave $f$ undetermined, we can try to calculate the maximum frequency:

$$
\begin{aligned}
1/f_{max} - t_{comb} - t_{clk2out} - t_{su} &> 0 \\
1/f_{max} & > t_{comb} + t_{clk2out} + t_{su} \\
1/(t_{comb} + t_{clk2out} + t_{su}) & > f_{max} \\
\end{aligned}
$$

Now we have to work with the parameter ranges.
We assume worst-case conditions, namely the longest combinatorial path with the longest possible gate and wire delays and the highest $t_{clk2out}$ and $t_{su}$.

The longest combinatorial path runs through two AND gates and an XOR gate, with four wire delays, which leads to these worst possible times:

$$
\begin{aligned}
t_{comb} &= (2 \cdot 69 + 67.9 + 4 \cdot 147)\ \text{ps} = 793.9\ \text{ps} \\
t_{clk2out} &= 495.2\ \text{ps} \\
t_{su} &= 48.7\ \text{ps} \\
\end{aligned}
$$

$$
f_{max} < 1/(t_{comb} + t_{clk2out} + t_{su}) \approx 747.50\ \text{MHz}
$$

Because none of the gate delays or wire delays is smaller than $t_h$, there are no hold violations in this circuit.
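The worst-case numbers can be reproduced as follows (a sketch using the delay values from above: 69 ps per AND gate, 67.9 ps for the XOR and 147 ps per wire):

```python
t_and, t_xor, t_wire = 69e-12, 67.9e-12, 147e-12
t_clk2out, t_su = 495.2e-12, 48.7e-12

t_comb = 2 * t_and + t_xor + 4 * t_wire   # two AND gates, one XOR, four wires
f_max = 1 / (t_comb + t_clk2out + t_su)

print(f"t_comb = {t_comb * 1e12:.1f} ps, f_max = {f_max / 1e6:.2f} MHz")
```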

\subsection{Exercise (b)}

We already chose $f_{max}$ as the largest value that does not lead to a setup violation; if the clock jitters, we enter the problematic range where the resolution time is less than 0.
To account for this, we need to choose a value that, scaled to $105\%$, still stays below our calculated maximum frequency, so we could formulate this problem in this way:

$$
\begin{aligned}
f_{max} \cdot 1.05 &< 747.50\ \text{MHz} \\
f_{max} &< 711.90\ \text{MHz} \\
\end{aligned}
$$

As we can see, there is a significant drop in our $f_{max}$!

Assuming that a jitter of $\pm x$ means $f_{clk} \in [f_{clk} \cdot (1 - x); f_{clk} \cdot (1 + x)]$, we can make this more generic in the following way:

$$
t_{res} = \frac{1}{(1 + x)f_{max}} - t_{comb} - t_{clk2out} - t_{su} > 0
$$

That is, we assume the worst possible effect of jitter, which is to make the clock faster and reduce the clock period, making setup violations more likely.
This leads to:

$$
\frac{1}{(1 + x)(t_{comb} + t_{clk2out} + t_{su})} > f_{max}
$$

As for the hold time, we require that the inputs remain stable after a positive clock edge for the duration of $t_h$, so the hold time violation constraints do not actually depend on the frequency, but the delays of the circuit between any two registers.

We can still give the following equation however:

$$
t_h < t_{clk2out} + t_{comb} + \sum_{\text{w} \in \min\{\text{path}\}} t_{w}
$$

Where we sum over the wire delays of the shortest path in the circuit from a register to another.

\subsection{Exercise (c)}

This might cause any two registers to have a much shorter resolution time.
If the delays are allowed to vary freely up to a maximum value of 200 ps, then we must assume it possible that one register has its clock edge 200 ps later while the next register has its clock edge 200 ps earlier, giving us 400 ps less time.

Again we consider the resolution time to check for a setup violation:

$$
t_{res}(i, j) = 1/f - t_{clk2out} - t_{comb} - t_{su} - \delta(i, j) > 0
$$

This time we included a function $\delta(i, j)$, which describes the phase relation between registers $i$ and $j$.
We know $\forall i, j : -400\ \text{ps} \leq \delta(i, j) \leq 400\ \text{ps}$.

The max. frequency changes as follows:

$$
\begin{aligned}
    1/f_{max} - t_{clk2out} - t_{comb} - t_{su} - 400 \cdot 10^{-12} &> 0 \\
    \frac{1}{t_{clk2out} + t_{comb} + t_{su} + 400 \cdot 10^{-12}} &> f_{max} \\
\end{aligned}
$$

Which yields a new max. frequency of $f_{max} < 575.44\ \text{MHz}$, which is much lower!
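Both derating effects can be applied to the base value numerically (a sketch; the 1337.8 ps worst-case path delay is the sum computed in Exercise (a)):

```python
t_path = 1337.8e-12     # t_comb + t_clk2out + t_su from Exercise (a)
jitter = 0.05           # +/- 5 % clock jitter
skew = 400e-12          # worst-case dynamic phase offset between two registers

f_base = 1 / t_path
f_jitter = 1 / ((1 + jitter) * t_path)   # same as f_base / 1.05
f_skew = 1 / (t_path + skew)

for name, f in (("base", f_base), ("jitter", f_jitter), ("skew", f_skew)):
    print(f"{name}: {f / 1e6:.2f} MHz")
```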

This change also affects the hold time constraint in the same way:

$$
t_h < t_{clk2out} + t_{comb} + \sum_{\text{w} \in \min\{\text{path}\}} t_{w} - \max\{\delta(i, j)\}
$$

\subsection{Exercise (d)}

For this case, we can combine the effects we have observed.
Again we consider the resolution time to check for a setup violation, only this time we include a margin for the maximum frequency to take into account the clock jitter.

$$
t_{res}(i, j) = \frac{1}{(1 + x)f_{max}} - t_{clk2out} - t_{comb} - t_{su} - \delta(i, j) > 0
$$

Luckily for us, the clock jitter does not affect the hold time $t_h$, so we have the same situation as before:

$$
t_h < t_{clk2out} + t_{comb} + \sum_{\text{w} \in \min\{\text{path}\}} t_{w} - \max\{\delta(i, j)\}
$$

\subsection{Exercise (e)}

We assumed that the clock jitter takes effect at the root of the clock tree; otherwise we would have needed to consider individual jitter values for each register, which could be expressed as a dynamic delay term between two registers (on top of the constant delays we have in the circuit).

In the worst case, jitter could cause a clock edge for register $i$ to arrive later and for register $j$ to arrive earlier, thus changing the phase relation.
We would need to introduce an additional delay term that not only depends on values $i$ and $j$, but also on the specific clock edge of the source clock, to characterize the dynamic nature of the clock jitter.
If we call this extra delay term $\varphi(i, j, n)$ for clock edge $n$, our hold time constraint might look something like this:

$$
t_h < t_{clk2out} + t_{comb} + \sum_{\text{w} \in \min\{\text{path}\}} t_{w} - \max\{\delta(i, j)\} - \max\{\varphi(i, j, n)\}
$$

\end{document}
