\documentclass[../generics]{subfiles}

\begin{document}

\chapter{Basic Operation}\label{rqm basic operation}

\lettrine{C}{onsider the problem} of \index{generic signature query}generic signature queries and \index{minimal requirement}minimization. Ultimately, we must be able to look at the requirements appearing in generic signatures and protocol declarations, and make inferences in accordance with the \index{derived requirement}derived requirements formalism. We use the proper noun \IndexDefinition{requirement machine}``The Requirement Machine'' to mean the compiler component responsible for this, while \index{requirement machine}\emph{a} requirement machine---a common noun---is an \emph{instance} of a specific data structure that encodes a fixed set of generic requirements in a form amenable to this kind of automated reasoning.

In this chapter, we will see that the generic signatures and protocols declared by a Swift program define a directed acyclic graph of requirement machines. The global singleton \IndexDefinition{rewrite context}\emph{rewrite context} manages the lazy construction of this graph. Each requirement machine is a \index{string rewrite system}\emph{string rewrite system}, whose \emph{rewrite rules} are formed by taking the union of the rules of its successors in this graph, together with rules lowered from the machine's own generic requirements. After we develop the high-level picture in this chapter, we delve into the workings of an individual machine in subsequent chapters; Chapter~\ref{monoids} develops the theory of string rewrite systems, and Chapter~\ref{symbols terms rules} describes how requirements map to \index{rewrite rule}rewrite rules.

\smallskip

Let's go. We begin by observing that both entry points into The Requirement Machine build new requirement machines:
\begin{itemize}
\item To answer a \index{generic signature query}generic signature query, we construct a new requirement machine from the minimal requirements of a generic signature; we call this a \IndexDefinition{query machine}\emph{query machine}. We then consult the machine's \emph{property map}: a description of all conformance, superclass, layout and concrete type requirements imposed on each type parameter.

Query machines are owned by the rewrite context, where they are the values in a table having generic signatures as keys. This table is populated on demand, and every query machine remains live for the duration of the compilation session.

\item To build a new generic signature, we construct a new requirement machine from an initial set of requirements; we call this a \IndexDefinition{minimization machine}\emph{minimization machine}. We read off the minimal requirements from the string rewrite system after construction.

Minimization machines have a temporary lifetime, scoped to one of the requests for building a generic signature. We previously covered these requests in Chapter~\ref{building generic signatures}; recall from Figures~\ref{inferred generic signature request figure} and~\ref{abstract generic signature request figure} that requirement minimization was the final step in the process, after requirement desugaring.
\end{itemize}

\begin{figure}\captionabove{Building a query machine}\label{rqm flowchart generic signature}
\begin{center}
\begin{tikzpicture}[node distance=0.75cm]
\node (MinimalRequirements) [data] {Minimal requirements};
\node (ImportedRules) [data, right=of MinimalRequirements] {Imported rules};
\node (RuleBuilder) [stage, below=of MinimalRequirements] {\vphantom{p}Rule builder};
\node (Completion) [stage, below=of RuleBuilder] {Completion};
\node (PropertyMap) [stage, below=of Completion] {Property map construction};
\node (RequirementMachine) [data, below=of PropertyMap] {Requirement machine};

\draw [arrow] (MinimalRequirements) -- (RuleBuilder);
\draw [arrow] (ImportedRules) |- (RuleBuilder);
\draw [arrow] (RuleBuilder) -- (Completion);
\draw [arrow] (Completion) -- (PropertyMap);
\draw [arrow] (PropertyMap.east) -- ++ (0.5, 0) |- (Completion);

\draw [arrow] (PropertyMap) -- (RequirementMachine);

\end{tikzpicture}
\end{center}
\end{figure}

\begin{figure}\captionabove{Building a minimization machine}\label{rqm flowchart generic signature minimization}
\begin{center}
\begin{tikzpicture}[node distance=0.75cm]
\node (DesugaredRequirements) [data] {Desugared requirements};
\node (ImportedRules) [data, right=of DesugaredRequirements] {Imported rules};
\node (RuleBuilder) [stage, below=of DesugaredRequirements] {\vphantom{p}Rule builder};
\node (Completion) [stage, below=of RuleBuilder] {Completion};
\node (PropertyMap) [stage, below=of Completion] {Property map construction};

\node (Minimization) [stage, below=of PropertyMap] {Rewrite system minimization};

\node (RequirementMachine) [data, below=of Minimization] {Requirement machine};
\node (RequirementBuilder) [stage, right=of Minimization] {Requirement builder};
\node (MinimalRequirements) [data, below=of RequirementBuilder] {Minimal requirements};

\draw [arrow] (DesugaredRequirements) -- (RuleBuilder);
\draw [arrow] (ImportedRules) |- (RuleBuilder);
\draw [arrow] (RuleBuilder) -- (Completion);
\draw [arrow] (Completion) -- (PropertyMap);
\draw [arrow] (PropertyMap.east) -- ++ (0.5, 0) |- (Completion);

\draw [arrow] (PropertyMap) -- (Minimization);
\draw [arrow] (Minimization) -- (RequirementMachine);
\draw [arrow] (Minimization) -- (RequirementBuilder);
\draw [arrow] (RequirementBuilder) -- (MinimalRequirements);

\end{tikzpicture}
\end{center}
\end{figure}

\paragraph{Query machines.}
Figure~\ref{rqm flowchart generic signature} shows how a query machine is constructed from the minimal requirements of a generic signature:
\begin{enumerate}
\item The \index{rule builder}\emph{rule builder} lowers the generic signature's minimal requirements to \emph{rewrite rules}; these become the \IndexDefinition{local rule}\emph{local rules} of our new requirement machine.

\item The rule builder also collects rewrite rules from those protocols referenced by this signature; we call these the \IndexDefinition{imported rule}\emph{imported rules}.

\item We run the \index{completion}\emph{completion procedure} (Chapter~\ref{completion}) with our local and imported rules. Completion introduces new local rules which are ``consequences'' of other rules. This gives us a \index{convergent rewrite system}\emph{convergent rewrite system}.

\item We construct the property map data structure, to be used for generic signature queries (Chapter~\ref{propertymap}). 

\item Property map construction may also introduce new local rules, in which case we go back to Step~3; completion and property map construction are iterated until no more rules are added.
\end{enumerate}
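The iteration between Steps 3--5 amounts to a fixed-point loop. The following minimal model is illustrative only; the two callbacks stand in for the real completion and property map passes, and the rule count stands in for the rewrite system's state:

```cpp
#include <cassert>
#include <functional>

// A toy model of the completion / property-map fixed point: each pass may
// add rules; we iterate until the rule count stabilizes.
int runToFixedPoint(int ruleCount,
                    const std::function<int(int)> &completion,
                    const std::function<int(int)> &buildPropertyMap) {
  for (;;) {
    ruleCount = completion(ruleCount);
    int after = buildPropertyMap(ruleCount);
    if (after == ruleCount)
      return after;        // no new rules were added; we are done
    ruleCount = after;     // property map added rules; run completion again
  }
}
```

In the real implementation, the state being iterated is of course the rewrite system itself, not a count; the point is only that both passes run again whenever either one introduces a new local rule.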

The construction of rewrite rules from requirements in Step~1 only looks at canonical types, so it does not depend on type sugar in any way. For this reason, two generic signatures that only differ by sugar will share a query machine; that is, the rewrite context uses \index{canonical generic signature}canonical generic signatures as the keys in its lookup table.

Step~2 is where the directed acyclic graph of requirement machines comes in; each machine has some set of \emph{protocol dependencies} which determine its successors in the graph. We will see shortly that the successors of query and minimization machines are \index{protocol machine}\emph{protocol machines}.

\paragraph{Cycle detection.}
Property map construction may need to look up \index{type witness}type witnesses of \index{conformance}conformances. This lookup may recursively perform generic signature queries, for example as part of type substitution or associated type inference. This introduces the possibility that we might query a generic signature whose query machine is in the process of being constructed. This is not supported; we guard against re-entrant construction by checking if a query machine is complete before we perform queries against it. If the query machine is incomplete, it is currently being constructed further up in the call stack, so the source program has a circular dependency which cannot be resolved. It is very difficult to hit this in practice, so for simplicity of implementation we report a fatal error which ends compilation. The lazy construction of query machines resembles how the \index{request evaluator}request evaluator evaluates requests (Section~\ref{request evaluator}), but it is a hand-coded cache that does not actually use the request evaluator. The re-entrant construction of a query machine is the equivalent of a request cycle.
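To make the guard concrete, here is a minimal sketch; the names are hypothetical, not the compiler's actual interfaces. The key idea is that a machine becomes visible in the cache before its construction finishes, so a re-entrant query can observe an incomplete machine:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// A machine is "complete" once construction has finished.
struct Machine { bool complete = false; };

class RewriteContext {
  std::map<std::string, Machine> machines;

public:
  // Returns true if queries against `sig` are currently allowed.
  bool canQuery(const std::string &sig) const {
    auto it = machines.find(sig);
    return it != machines.end() && it->second.complete;
  }

  // `body` stands in for completion and property map construction; any
  // query it performs against `sig` itself is a request cycle.
  void construct(const std::string &sig,
                 const std::function<void()> &body = [] {}) {
    Machine &m = machines[sig];  // visible, but incomplete, during build
    body();
    m.complete = true;
  }
};
```

A query that finds an incomplete machine corresponds to the circular dependency described above, and the real implementation reports a fatal error at that point.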

\paragraph{Minimization machines.}
The process of building a minimization machine, shown in Figure~\ref{rqm flowchart generic signature minimization}, differs from query machine construction in a few respects:
\begin{enumerate}
\item The rule builder receives desugared requirements (Section~\ref{requirement desugaring}).
\item Completion records \emph{rewrite loops}, which describe relations between rewrite rules---in particular, a loop will encode if a rewrite rule is redundant because it is a consequence of existing rules.
\item Property map construction records \index{conflicting requirement}conflicting requirements, to be diagnosed if this generic signature has a source location.
\item After completion and property map construction, \emph{homotopy reduction} processes rewrite loops to find a minimal subset of rewrite rules which completely describes the rewrite system. This is the topic of Chapter~\ref{rqm minimization}.
\item The \index{requirement builder}\emph{requirement builder} converts the minimal set of rewrite rules into minimal requirements. This is explained in Section~\ref{requirement builder}.
\end{enumerate}

\paragraph{An optimization.}
We compute minimal requirements as a side-effect of building the string rewrite system, so we could simply discard the minimization machine after we're done. However, this can result in duplication of work. For example, we build a generic signature when computing the interface type of a generic function, and then we issue queries against this signature when we type check the function's body. Instead of creating a query machine from the minimal requirements output by the minimization machine, we can instead ``install'' the minimization machine in the rewrite context; the minimization machine \emph{becomes} the query machine associated with the new generic signature.

\begin{center}
\begin{tikzpicture}
\node [matrix, row sep=4em, column sep=4em] {
\node (U) [data, align=center, inner sep=0.6em, outer sep=0.4em] {User-written\\requirements};&
\node (R) [stage, align=center, inner sep=0.6em, outer sep=0.4em] {Minimization\\machine};&
\node (M1) [data, align=center, inner sep=0.6em, outer sep=0.4em] {Minimal\\requirements};\\
\node (M2) [data, align=center, inner sep=0.6em, outer sep=0.4em] {Minimal\\requirements};&
\node (Q) [stage, align=center, inner sep=0.6em, outer sep=0.4em] {Query\\machine};\\
};
\draw [->,thick] (U.east) -- (R.west);
\draw [->,thick] (R.east) -- (M1.west);
\draw [->,thick] (M2.east) -- (Q.west);
\draw [thick, double equal sign distance] (R.south) -- (Q.north);
\end{tikzpicture}
\end{center}

This all hinges on the minimization machine's string rewrite system having the same observable behavior as the query machine. There is a certain notion of \emph{equivalence} of string rewrite systems that we will define precisely later, but essentially, we must ensure that a query machine built from minimal requirements is equivalent to the minimization machine that produced those minimal requirements in the first place.

This equivalence almost always holds, except in four situations where the final list of minimal requirements does not completely describe all consequences of the input requirements:
\begin{enumerate}
\item User-written requirements that mention non-existent type parameters are dropped. In this case, a new query machine will not include the corresponding rewrite rules from the old minimization machine.

\item When two requirements are in conflict such that both cannot be simultaneously satisfied, both requirements are dropped. A new query machine will, again, not include the corresponding rewrite rules.

\item As we will see in Section~\ref{word problem}, we may fail to construct a string rewrite system if the user-written requirements are too complex to reason about. In this case the minimization machine will include all rewrite rules that were added up to the point of failure, but the set of minimal requirements will be empty.

\item If a conformance requirement is made redundant by a same-type requirement that fixes a type parameter to a concrete type (such as $\FormalReq{T == S}$ and $\ConfReq{T}{P}$ where \texttt{S} is a concrete type and $\ConfReq{S}{P}$ is a concrete conformance), the rewrite system cannot be reused for technical reasons; we will talk about this in Chapter~\ref{concrete conformances}.
\end{enumerate}

The first three only occur with invalid code, and are accompanied by diagnostics. The fourth is not an error, just a rare edge case where our optimization cannot be performed. All four conditions are checked during minimization, and recorded in the form of a flags field. We cannot install the minimization machine if any of these flags are set; doing so would associate a generic signature with a minimization machine that contains rewrite rules not explained by the generic signature itself, which would confuse subsequent generic signature queries. In the event that some unforeseen scenario beyond the above causes equivalence to fail, the \IndexFlag{disable-requirement-machine-reuse}\texttt{-disable-requirement-machine-reuse} frontend flag forces minimization machines to be discarded immediately after use, instead of being installed.
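A minimal sketch of such a flags field follows. The enumerator names are illustrative, chosen to mirror the four situations above; they are not the compiler's actual flags:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical error flags recorded during minimization; each bit
// corresponds to one of the four equivalence-breaking situations.
enum : uint8_t {
  InvalidTypeParameters   = 1 << 0,  // requirements on nonexistent parameters
  ConflictingRequirements = 1 << 1,  // mutually unsatisfiable requirements
  CompletionFailed        = 1 << 2,  // rewrite system too complex to build
  ConcreteConformance     = 1 << 3,  // conformance subsumed by concrete type
};

// The minimization machine may only be installed as a query machine when
// no flag is set.
bool canInstallMachine(uint8_t errorFlags) {
  return errorFlags == 0;
}
```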


\begin{example}
The compiler builds several requirement machines while type checking the code below:
\begin{Verbatim}
protocol Three {
  associatedtype A
  associatedtype B
  associatedtype C
}

struct ThreeOne<T: Three> where T.A == T.B, T.A == T.C, T.B == T.C {}
struct ThreeTwo<U: Three> where U.A == U.B, U.A == U.C {}
\end{Verbatim}

While type checking the protocol declaration itself, we build a query machine for the generic signature \verb|<Self where Self: Three>|. We then create a minimization machine to compute the generic signature of \texttt{ThreeOne}. The minimization machine receives the user-written requirements. Any two of the three same-type requirements imply the third; minimization outputs a signature containing the first and third:
\begin{quote}
\begin{verbatim}
<T where T.[Three]A == T.[Three]B, T.[Three]B == T.[Three]C>
\end{verbatim}
\end{quote}
We install the minimization machine for \texttt{ThreeOne} in the rewrite context, so it becomes the query machine for the above generic signature. Next, we create a minimization machine for \texttt{ThreeTwo}. The requirements of \texttt{ThreeTwo} happen to define the same generic signature as \texttt{ThreeOne}, up to canonical type equality:
\begin{quote}
\begin{verbatim}
<U where U.[Three]A == U.[Three]B, U.[Three]B == U.[Three]C>
\end{verbatim}
\end{quote}
Since a query machine for this signature already exists, the second minimization machine is discarded. If we pass the \texttt{-debug-requirement-machine=timers} frontend flag, we can watch this happen in real time:
\begin{Verbatim}[fontsize=\footnotesize,numbers=none]
$ swiftc three.swift -Xfrontend -debug-requirement-machine=timers
+ started RequirementSignatureRequest [ Three ]
+ finished RequirementSignatureRequest in 96us: [ Three ]
+ started getRequirementMachine() <τ_0_0 where τ_0_0 : Three>
+ finished getRequirementMachine() in 44us: <τ_0_0 where τ_0_0 : Three>
+ started InferredGenericSignatureRequest @ three.swift:7:16
+ finished InferredGenericSignatureRequest in 52us:
  <T where T : Three, T.A == T.B, T.B == T.C>
+ started InferredGenericSignatureRequest @ three.swift:8:16
+ finished InferredGenericSignatureRequest in 37us:
  <U where U : Three, U.A == U.B, U.B == U.C>
\end{Verbatim}
\end{example}
We'll say more about debugging flags in Section~\ref{rqm debugging flags}.

\paragraph{Protocols.}
To reason about the derived requirements and valid type parameters of a generic signature, we must consider not only the generic parameters and minimal requirements of the generic signature itself, but also the associated type declarations and minimal requirements of each protocol referenced by our signature. This set of \emph{protocol dependencies} is a property of a generic signature that we will make precise in Section~\ref{protocol component}.

The associated types and requirements of each protocol define a set of rewrite rules. We may need to incorporate the rewrite rules of a large number of protocols into a single requirement machine. Similarly, any two requirement machines that import the same protocols will also have many rules in common; for example, any machine with a conformance to \texttt{Collection} will contain rewrite rules for the associated types and requirements of \texttt{Collection}, \texttt{Sequence} and \texttt{IteratorProtocol}.

A third kind of requirement machine exists solely to share rules across machines. Naively, we would find ourselves repeatedly lowering a protocol to rewrite rules in every machine that references this protocol. Instead, we build a string rewrite system just to hold the protocol's rewrite rules---we call this a \IndexDefinition{protocol machine}\emph{protocol machine}. A protocol machine is lazily constructed and owned by the rewrite context, similar to a query machine, except that the initial requirements of a protocol machine are the minimal requirements of a protocol's requirement signature (there is one more complication here, in that a protocol machine may describe more than one protocol; we will expound on this in the next section).

When building any kind of requirement machine, the rule builder triggers the lazy construction of protocol machines (protocol machines can import rules from other protocol machines, too). A query machine for a protocol generic signature \verb|<Self where Self: P>| is particularly easy to build: we build protocol machines for \texttt{P} and all of its dependencies if necessary, import their rules, then add a single local rule for $\ConfReq{Self}{P}$.

\paragraph{Protocol minimization.}
We have query and minimization machines for existing and new generic signatures, and protocol machines for existing requirement signatures. The remaining case is the \emph{protocol minimization} machine, for building a new protocol requirement signature. A protocol minimization machine is constructed in the same way as a minimization machine, and its lifetime is scoped to a request. The same optimization applies: in the absence of error conditions, the protocol minimization machine can be installed in the rewrite context, where it becomes a long-lived protocol machine.

\paragraph{Summary.}
Requirement machines come in four varieties:
\begin{enumerate}
\item \textbf{Query}, built from an existing generic signature, used to answer generic signature queries.
\item \textbf{Minimization}, built from user-written requirements of a generic declaration, used for building a new generic signature.
\item \textbf{Protocol}, built from an existing protocol requirement signature, used for sharing rules across requirement machines.
\item \textbf{Protocol minimization}, built from user-written requirements of a protocol, used for building a new protocol requirement signature.
\end{enumerate}

This list represents all four combinations of ``domain'' and ``purpose.'' In cases (1) and~(2), the central entity is a generic signature; in (3) and~(4), it is a protocol requirement signature. In cases (1) and~(3), we start with the minimal requirements of an existing entity; in cases (2) and~(4), we take user-written requirements and perform minimization to build a new entity of the desired sort.

\section{Protocol Components}\label{protocol component}

Now we will give a full account of how imported rules work in the Requirement Machine. We recall the definition of the protocol dependency graph from Section~\ref{recursive conformances}: we take the \index{vertex}vertex set of all protocol declarations, and the \index{edge}edge set of all \index{associated conformance requirement}associated conformance requirements, with the endpoints defined as follows for an arbitrary edge $\ConfReq{Self.A}{P}_\texttt{Q}$:
\begin{align*}
\Src(\ConfReq{Self.A}{P}_\texttt{Q})&=\protosym{Q},\\
\Dst(\ConfReq{Self.A}{P}_\texttt{Q})&=\protosym{P}.
\end{align*}
We will show that the protocols that can appear in the \index{derived requirement}derivations of a given generic signature are precisely those reachable from an initial set in the \index{protocol dependency graph}protocol dependency graph. We first consider dependencies between protocols. Along the way, we also introduce the \emph{protocol component graph} to get around complications caused by circular dependencies.

\begin{definition}
A protocol $\protosym{P}$ \emph{depends on} a protocol $\protosym{Q}$ (or has a \emph{protocol dependency} on~$\protosym{Q}$) if we can derive a conformance to $\protosym{Q}$ from the protocol generic signature $G_\texttt{P}$; that is, if $G_\texttt{P}\vDash\ConfReq{T}{Q}$ for some type parameter \texttt{T}. We write $\protosym{P}\prec\protosym{Q}$ if this relationship holds. The \emph{protocol dependency set} of a protocol $\protosym{P}$ is then the set of all protocols $\protosym{Q}$ such that $\protosym{P}\prec\protosym{Q}$.
\end{definition}

Lemma~\ref{subst lemma} implies that $\prec$ is a \index{transitive relation}transitive relation; that is, if $\protosym{P}\prec\protosym{Q}$ and $\protosym{Q}\prec\protosym{R}$, then $\protosym{P}\prec\protosym{R}$. In fact, $\prec$ is the \index{reachability relation}reachability relation in the protocol dependency graph. Before we can prove this, we need a technical result.

\smallskip

Let $f$ be the canonical projection from the \index{conformance path graph}conformance path graph of $G_\texttt{P}$ into the protocol dependency graph. Recall that $f$ maps an \index{abstract conformance}abstract conformance $\ConfReq{T}{P}$ to the protocol $\protosym{P}$; we previously used $f$ to study recursive conformances. Not only is $f$ a \index{graph homomorphism}graph homomorphism, but it is also a \IndexDefinition{covering map}\emph{covering map}, because if we take some abstract conformance $\ConfReq{T}{P}$, the set of \index{edge}edges with \index{source vertex}source $\ConfReq{T}{P}$ is in one-to-one correspondence with the set of edges with source $f(\ConfReq{T}{P})=\protosym{P}$. In fact, by definition both sets of edges are indexed by the \index{associated conformance requirement}associated conformance requirements defined in $\protosym{P}$.

The theory is explained in \cite{godsil2001algebraic}, but here is a quick example to help with the intuition. We can wind the infinite ray around a cycle of length four, infinitely many times, mapping each integer $n\mapsto n \bmod 4$; every vertex has a single successor in both graphs:
\begin{center}
\begin{tikzpicture}
\node (a1) [] at (0,4) {0};
\node (a2) [] at (0,3) {1};
\node (a3) [] at (0,2) {2};
\node (a4) [] at (0,1) {3};
\node (a5) [] at (1,1) {4};
\node (a6) [] at (1,2) {5};
\node (a7) [] at (1,3) {6};
\node (a8) [] at (1,4) {7};
\node (a9) [] at (2,4) {8};
\node (a10) [] at (2,3) {$\ldots$};

\draw [arrow] (a1) edge [] (a2);
\draw [arrow] (a2) edge [] (a3);
\draw [arrow] (a3) edge [] (a4);
\draw [arrow] (a4) edge [bend right] (a5);
\draw [arrow] (a5) edge [] (a6);
\draw [arrow] (a6) edge [] (a7);
\draw [arrow] (a7) edge [] (a8);
\draw [arrow] (a8) edge [bend left] (a9);
\draw [arrow] (a9) edge [] (a10);

\node (b1) at (8,4) {0};
\node (b2) at (6.5,2.5) {1};
\node (b3) at (8,1) {2};
\node (b4) at (9.5,2.5) {3};

\draw [arrow] (b1) edge [bend right] (b2);
\draw [arrow] (b2) edge [bend right] (b3);
\draw [arrow] (b3) edge [bend right] (b4);
\draw [arrow] (b4) edge [bend right] (b1);

\node (c1) at (3.5,2.5) {};
\node (c2) at (5.5,2.5) {};

\path [arrow] (c1) edge [below] node {\tiny{$f$}} (c2);

\end{tikzpicture}
\end{center}
A key property of a covering map is that a path in the \index{range}range \index{path lifting}lifts to a path in the \index{domain}domain, ``backwards'' through $f$. We use this fact in the proof of the next proposition.

\begin{proposition}
If $\protosym{P}$ and $\protosym{Q}$ are any two protocols, then $\protosym{P}\prec\protosym{Q}$ if and only if there exists a path from $\protosym{P}$ to $\protosym{Q}$ in the protocol dependency graph.
\end{proposition}
\begin{proof}
First, suppose $\protosym{P}\prec\protosym{Q}$. Then, there is a type parameter \texttt{T} such that $G_\texttt{P}\vDash\ConfReq{T}{Q}$. By Theorem~\ref{conformance paths theorem}, there exists a conformance path for $\ConfReq{T}{Q}$. This conformance path defines a path in the protocol dependency graph from \protosym{P} to $\protosym{Q}$. Now, suppose that we have a path from $\protosym{P}$ to $\protosym{Q}$ in the protocol dependency graph. Every path originating at $\protosym{P}$ in the protocol dependency graph lifts to a path originating at $\ConfReq{Self}{P}$ in the conformance path graph of $G_\texttt{P}$. By Algorithm~\ref{invertconformancepath}, this lifted conformance path defines a derived conformance requirement $\ConfReq{T}{Q}$ in $G_\texttt{P}$, showing that $\protosym{P}\prec\protosym{Q}$.
\end{proof}

\begin{listing}\captionabove{Protocol component demonstration}\label{protocol component listing}
\begin{Verbatim}
protocol Top {
  associatedtype A: Foo
  associatedtype B: Bar
}

protocol Foo {
  associatedtype A: Bar
  associatedtype B: Baz
}

protocol Bar {
  associatedtype A: Foo
  associatedtype B: Fiz
}

protocol Baz {
  associatedtype A: Bot
}

protocol Fiz {
  associatedtype A: Bot
}

protocol Bot {}
\end{Verbatim}
\end{listing}

\begin{wrapfigure}[12]{r}{3.6cm}
\begin{tikzpicture}[x=1cm, y=1.3cm]
\node (Top) [protocol, outer sep=0.05cm] at (1, -0) {\protosym{Top}};
\node (Foo) [protocol, outer sep=0.05cm] at (0, -1) {\protosym{Foo}};
\node (Bar) [protocol, outer sep=0.05cm] at (2, -1) {\protosym{Bar}};
\node (Baz) [protocol, outer sep=0.05cm] at (0, -2) {\protosym{Baz}};
\node (Fiz) [protocol, outer sep=0.05cm] at (2, -2) {\protosym{Fiz}};
\node (Bot) [protocol, outer sep=0.05cm] at (1, -3) {\protosym{Bot}};

\path [->] (Top) edge [bend right] (Foo)
           (Top) edge [bend left] (Bar)
           (Foo) edge [bend left] (Bar)
           (Bar) edge [bend left] (Foo)
           (Foo) edge (Baz)
           (Bar) edge (Fiz)
           (Baz) edge [bend right] (Bot)
           (Fiz) edge [bend left] (Bot);
  \begin{scope}[on background layer]
    \node[dashed, draw=darkgray, fit=(Foo) (Bar), inner sep=0.4em] {};
  \end{scope}
\end{tikzpicture}
\end{wrapfigure}

\paragraph{Recursive conformances.}
We realized in Section~\ref{recursive conformances} that recursive conformance requirements create cycles in the protocol dependency graph. A cycle appears in the protocol dependency graph for the declarations of Listing~\ref{protocol component listing}, shown on the right. Each of $\protosym{Foo}$ and~$\protosym{Bar}$ points at the other via two mutually-recursive associated conformance requirements. Based on what's been described so far, we cannot build one protocol machine without first building the other: a circular dependency.

We solve this by grouping protocols into \emph{protocol components}, such that in our example, $\protosym{Foo}$ and $\protosym{Bar}$ belong to the same component. A protocol machine describes an entire protocol component, so the local rules of a protocol machine may include requirements from multiple protocols. We will see that if we consider dependencies between protocol \emph{components} as opposed to \emph{protocols}, we get a directed acyclic graph. To make all this precise, we step back to consider directed graphs in the abstract.

\paragraph{Strongly connected components.}
Suppose $(V, E)$ is any \index{directed graph}directed graph, and $\prec$ is the \index{reachability relation}reachability relation, so $x\prec y$ if there is a path with source $x$ and destination $y$. We say that $x$ and $y$ are \IndexDefinition{strongly connected component}\index{SCC|see{strongly connected component}}\emph{strongly connected} if both $x\prec y$ and $y\prec x$ are true; we denote this relation by $x\equiv y$ in the following. This is an \index{equivalence relation}equivalence relation:
\begin{itemize}
\item For all $x\in V$, we have $x\prec x$ via the trivial path at $x$, and thus $x\equiv x$.
\item If $x\equiv y$, then $y\equiv x$, because the definition of $\equiv$ is symmetric.
\item If $x\equiv y$ and $y\equiv z$, then in particular $x\prec y$ and $y\prec z$, so $x\prec z$, because $\prec$ is transitive. By the same argument, we also have $y\prec x$ and $z\prec y$, hence $z\prec x$. Therefore, $x\equiv z$.
\end{itemize}

The \index{equivalence class}equivalence classes of $\equiv$ are the \emph{strongly connected components} of $(V, E)$. We give this set the structure of a directed graph: an edge joins two components if and only if there is an edge joining some vertex in the first component with some vertex in the second component. The reachability relation of $(V, E)$ is well-defined on strongly connected components; if $x_1\equiv x_2$ and $y_1\equiv y_2$, then $x_1\prec y_1$ if and only if $x_2\prec y_2$. A \index{DAG|see {directed acyclic graph}}\IndexDefinition{directed acyclic graph}\emph{directed acyclic graph} (or DAG) is a graph without cycles, that is, without non-trivial paths having the same source and destination vertex. In a directed acyclic graph, each vertex is in a strongly connected component by itself. Conversely, the graph of strongly connected components is itself acyclic, because a cycle is always contained entirely within a single strongly connected component.
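As a small illustration of these definitions (and not the compiler's implementation), the following sketch computes reachability by depth-first search on an adjacency-list graph, and tests strong connectivity as mutual reachability:

```cpp
#include <cassert>
#include <set>
#include <vector>

// Vertices are integers; g[v] lists the successors of v.
using Graph = std::vector<std::vector<int>>;

// Does a path (possibly trivial) exist from x to y?
bool reaches(const Graph &g, int x, int y) {
  std::set<int> seen;
  std::vector<int> stack{x};
  while (!stack.empty()) {
    int v = stack.back();
    stack.pop_back();
    if (v == y)
      return true;                 // found a path; x == y is the trivial path
    if (!seen.insert(v).second)
      continue;                    // already explored this vertex
    for (int w : g[v])
      stack.push_back(w);
  }
  return false;
}

// x and y are strongly connected when each reaches the other.
bool stronglyConnected(const Graph &g, int x, int y) {
  return reaches(g, x, y) && reaches(g, y, x);
}
```

For example, encoding the dependency graph of Listing~\ref{protocol component listing} with vertices numbered in declaration order places \texttt{Foo} and \texttt{Bar} in one strongly connected component, while \texttt{Top} reaches \texttt{Foo} without \texttt{Foo} reaching \texttt{Top}.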

\begin{wrapfigure}{l}{3.5cm}
\begin{tikzpicture}[x=1cm, y=1.3cm]
\node (Top) [protocol, outer sep=0.05cm] at (1, -0) {\protosym{Top}};
\node (FooBar) [protocol, outer sep=0.05cm] at (1, -1) {\protosym{Foo}, \protosym{Bar}};
\node (Baz) [protocol, outer sep=0.05cm] at (0, -2) {\protosym{Baz}};
\node (Fiz) [protocol, outer sep=0.05cm] at (2, -2) {\protosym{Fiz}};
\node (Bot) [protocol, outer sep=0.05cm] at (1, -3) {\protosym{Bot}};

\path [->] (Top) edge (FooBar)
           (FooBar) edge [bend right] (Baz)
           (FooBar) edge [bend left] (Fiz)
           (Baz) edge [bend right] (Bot)
           (Fiz) edge [bend left] (Bot);
\end{tikzpicture}
\end{wrapfigure}

The protocol component graph of Listing~\ref{protocol component listing} is shown on the left. With the exception of $\protosym{Foo}$ and $\protosym{Bar}$, which have collapsed to a single vertex, every protocol is in a protocol component by itself. The protocol component graph is always acyclic. However, it is still not a tree or a forest; as we see in our example, we have two distinct paths with source $\protosym{Top}$ and destination~$\protosym{Bot}$, so components can share ``children.'' To ensure we do not import duplicate rules, we must be careful to only visit every downstream protocol machine once, and take only the \emph{local} rules from each. (Every imported rule was local in its original protocol machine, so we import it exactly once from there, never ``transitively.'')
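The import strategy just described can be sketched as a depth-first walk with a visited set; the types and names here are hypothetical stand-ins for the real data structures:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// A protocol machine holds its own local rules and points at the machines
// it depends on in the protocol component graph.
struct ProtocolMachine {
  std::string name;
  std::vector<std::string> localRules;
  std::vector<const ProtocolMachine *> dependencies;
};

// Visit each downstream machine at most once, taking only its local rules,
// so a shared "child" contributes its rules exactly once.
void collectImportedRules(const ProtocolMachine *m,
                          std::set<const ProtocolMachine *> &visited,
                          std::vector<std::string> &rules) {
  if (!visited.insert(m).second)
    return;  // this machine's local rules were already imported
  for (const auto *dep : m->dependencies)
    collectImportedRules(dep, visited, rules);
  rules.insert(rules.end(), m->localRules.begin(), m->localRules.end());
}
```

In the diamond formed by \texttt{Top}, \texttt{Baz}, \texttt{Fiz} and \texttt{Bot}, this walk imports \texttt{Bot}'s rules once, even though two paths lead to its machine.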

A \IndexTwoFlag{debug-requirement-machine}{protocol-dependencies}debugging flag prints out each connected component as it was formed. Let's try with our example:
\begin{Verbatim}[fontsize=\footnotesize,numbers=none]
$ swiftc protocols.swift -Xfrontend -debug-requirement-machine=protocol-dependencies
Connected component: [Bot]
Connected component: [Fiz]
Connected component: [Baz]
Connected component: [Bar, Foo]
Connected component: [Top]
\end{Verbatim}

The
\IndexTwoFlag{debug-requirement-machine}{timers}\texttt{-debug-requirement-machine=timers} frontend flag also logs the construction of protocol components. The indentation follows the recursive construction of protocol machines:
\begin{Verbatim}[fontsize=\footnotesize,numbers=none]
+ started RequirementSignatureRequest [ Top ]
| + started RequirementSignatureRequest [ Bar Foo ]
| | + started RequirementSignatureRequest [ Fiz ]
| | | + started RequirementSignatureRequest [ Bot ]
| | | + finished RequirementSignatureRequest in 46us: [ Bot ]
| | + finished RequirementSignatureRequest in 81us: [ Fiz ]
| | + started RequirementSignatureRequest [ Baz ]
| | + finished RequirementSignatureRequest in 21us: [ Baz ]
| + finished RequirementSignatureRequest in 179us: [ Bar Foo ]
+ finished RequirementSignatureRequest in 211us: [ Top ]
\end{Verbatim}

\paragraph{Tarjan's algorithm.} We use \IndexDefinition{Tarjan's algorithm}an algorithm invented by \index{Robert Tarjan}Robert Tarjan \cite{tarjan} to discover the strongly connected components of the protocol dependency graph. Specifically, we number the strongly connected components, assign a component~ID to each vertex, and build a table mapping each component~ID to the list of vertices it contains.

We do not wish to build the entire input graph up-front, because in our case this would force us to deserialize every protocol in every imported module first. The key property of Tarjan's algorithm is that it is incremental. When asked to produce the component~ID for a given vertex, we check if the vertex has already been assigned a component~ID, in which case we return it immediately. If one has not yet been assigned, the vertex must belong to a component distinct from all others discovered so far; we visit all vertices reachable from the original vertex and form connected components along the way.

Tarjan's algorithm requires that the input graph be specified in the form of a function mapping a given vertex to its successors (recall that if $v\in V$, then $w\in V$ is a \index{successor}\emph{successor} of $v$ if there exists an edge $e\in E$ with $\Src(e)=v$ and $\Dst(e)=w$). In our case, the successors of a protocol are obtained by evaluating the \IndexDefinition{protocol dependencies request}\Request{protocol dependencies request}, which is implemented as follows:
\begin{itemize}
\item If the protocol is declared inside the main module, we evaluate the \index{structural requirements request}\Request{structural requirements request} and collect the protocols appearing on the right-hand side of any conformance requirements written in source.

\item If the protocol is part of a \index{serialized module}serialized module, we evaluate the \index{requirement signature request}\Request{requirement signature request} and collect the protocols appearing on the right-hand side of the protocol's minimal requirements.
\end{itemize}
%Note that if some user-written conformance requirement $\ConfReq{Self.U}{Q}_\texttt{P}$ is redundant, the first set will contain \texttt{Q}, while the second might not.  This ``extra'' edge will not change the strongly connected components of the graph, because \texttt{P} must already be reachable from \texttt{Q} if this requirement is redundant.

\newcommand{\Number}[1]{\texttt{NUMBER}(#1)}
\newcommand{\Lowlink}[1]{\texttt{LOWLINK}(#1)}
\newcommand{\OnStack}[1]{\texttt{ONSTACK}(#1)}

To form the strongly connected component of a vertex $v$, Tarjan's algorithm performs a \index{depth-first search}\emph{depth-first search}: given $v$, we look for edges originating from $v$ that lead to unvisited vertices, and recursively visit each of those successor vertices, and so on, exploring the entire subgraph reachable from there before moving on to the next successor. (Contrast this with the \index{breadth-first search}\emph{breadth-first search} for finding a conformance path in Section~\ref{finding conformance paths}, where we visit all vertices at a certain level before exploring deeper.)

\begin{wrapfigure}[11]{r}{2.9cm}
\begin{tikzpicture}[x=1cm, y=1.3cm]
\node (Top) [protocol, outer sep=0.05cm] at (1, -0) {1};
\node (Foo) [protocol, outer sep=0.05cm] at (0, -1) {2};
\node (Bar) [protocol, outer sep=0.05cm] at (2, -1) {3};
\node (Baz) [protocol, outer sep=0.05cm] at (0, -2) {6};
\node (Fiz) [protocol, outer sep=0.05cm] at (2, -2) {4};
\node (Bot) [protocol, outer sep=0.05cm] at (1, -3) {5};

\path [->] (Top) edge [bend right] (Foo)
           (Top) edge [bend left] (Bar)
           (Foo) edge [bend left] (Bar)
           (Bar) edge [bend left] (Foo)
           (Foo) edge (Baz)
           (Bar) edge (Fiz)
           (Baz) edge [bend right] (Bot)
           (Fiz) edge [bend left] (Bot);
\end{tikzpicture}
\end{wrapfigure}

We number vertices in the order they are visited, by assigning the current value of a counter to each visited vertex $v$, prior to visiting the successors of $v$. We denote the value assigned to a vertex $v$ by $\Number{v}$. Our traversal can determine if a vertex has been previously visited by checking if $\Number{v}$ is set.

We return to the protocol dependency graph from Listing~\ref{protocol component listing}. A depth-first search originating from $\protosym{Top}$ produces the numbering of vertices shown on the right, where the first vertex is assigned the value 1. Here, the entire graph was ultimately reachable from 1; more generally, we get a numbering of the subgraph reachable from the initial vertex.

When looking at a vertex $v$, we consider each edge $e\in E$ with $\Src(e)=v$. Suppose that $\Dst(e)$ is some other vertex~$w$; Tarjan considers the state of $\Number{w}$, and classifies the edge $e$ into one of four kinds: tree edges, ignored edges, fronds, and cross-links.

If $\Number{w}$ has not yet been set, we say that $e$ is a \emph{tree edge}; the tree edges define a \emph{spanning tree} of the subgraph explored by the search. Otherwise, $w$ was visited earlier by the traversal, and we have one of the three remaining kinds. If $\Number{w}\geq\Number{v}$ (with equality corresponding to an edge from $v$ to itself, which we allow), any path from $v$ to $w$ must pass through a common ancestor of $v$ and $w$ in the spanning tree. We say that $e$ is an \emph{ignored edge}, because $e$ cannot give rise to any new strongly connected components. The final two kinds arise when $\Number{w}<\Number{v}$. We say that $e$ is a \emph{frond} if $w$ is also an ancestor of $v$ in the spanning tree; otherwise, $e$ is a \emph{cross-link}. (While the distinction between the final two kinds plays a role in Tarjan's correctness proof, we'll see that the algorithm handles fronds and cross-links uniformly.)

\begin{wrapfigure}[11]{l}{2.9cm}
\begin{tikzpicture}[x=1cm, y=1.3cm]
\node (Top) [protocol, outer sep=0.05cm] at (1, -0) {1};
\node (Foo) [protocol, outer sep=0.05cm] at (0, -1) {2};
\node (Bar) [protocol, outer sep=0.05cm] at (2, -1) {3};
\node (Baz) [protocol, outer sep=0.05cm] at (0, -2) {6};
\node (Fiz) [protocol, outer sep=0.05cm] at (2, -2) {4};
\node (Bot) [protocol, outer sep=0.05cm] at (1, -3) {5};

\path [->, thick]
           (Top) edge [bend right] (Foo)
           (Foo) edge [bend left] (Bar)
           (Foo) edge (Baz)
           (Bar) edge (Fiz)
           (Fiz) edge [bend left] (Bot);

\path [->, dashed] (Top) edge [bend left] (Bar);

\path [->>] (Baz) edge [bend right] (Bot);

\path [->] (Bar) edge [bend left] (Foo);
\end{tikzpicture}
\end{wrapfigure}

If the given graph is a \index{forest}forest, then every edge is a tree edge; we visit a subgraph that is a spanning tree of itself. If the original graph is a \index{directed acyclic graph}directed \emph{acyclic} graph, then all edges are either tree edges or ignored edges. In either case, no two distinct vertices can be strongly connected. Tarjan's insight was that we can understand the non-trivial strongly connected components of our graph by looking at the fronds and cross-links. The taxonomy of edges in our running example is illustrated on the left. The tree edges are drawn with a thick arrow (\begin{tikzpicture}
\node (a) at (0, 0) {};
\node (b) at (1, 0) {};

\path [->, thick] (a) edge (b);
\end{tikzpicture}), and we also have one edge of each of the remaining kinds: an ignored edge
(\begin{tikzpicture}
\node (a) at (0, 0) {};
\node (b) at (1, 0) {};

\path [->, dashed] (a) edge (b);
\end{tikzpicture}), a frond
(\begin{tikzpicture}
\node (a) at (0, 0) {};
\node (b) at (1, 0) {};

\path [->] (a) edge (b);
\end{tikzpicture}), and a cross-link
(\begin{tikzpicture}
\node (a) at (0, 0) {};
\node (b) at (1, 0) {};

\path [->>] (a) edge (b);
\end{tikzpicture}).

The correctness of the algorithm hinges on a key property of depth-first search: any two strongly connected vertices have a common ancestor in the spanning tree, so a strongly connected component is a \index{subtree}\emph{subtree} of the spanning tree. We call the root of this subtree the root of the strongly connected component. To identify the roots of strongly connected components, we associate another integer with each vertex~$v$, denoted $\Lowlink{v}$. This value is defined such that if $\Lowlink{v}=\Number{v}$, then $v$ is the root of a strongly connected component. In our example, $\Lowlink{v}=\Number{v}$ for all $v$ with the exception of ``$3$''; we have $\Lowlink{3}=\Number{2}$, since ``$2$'' and ``$3$'' are in the same strongly connected component having ``$2$'' as the root.

We repeatedly update $\Lowlink{v}$ while visiting the successors of $v$, and check if $\Lowlink{v}=\Number{v}$ afterwards. If so, we create a new strongly connected component. We maintain a pushdown \index{stack}stack such that each strongly connected component consists of some number of vertices from the top of the stack.

We push each vertex onto the stack before visiting its successors, but we only pop vertices from the stack when forming a new strongly connected component; a vertex remains on the stack until we visit its component's root. Thus, while the pushes and pops appear unbalanced, every vertex that is pushed is eventually popped.

The algorithm also needs a cheap way to check if a vertex is currently on the stack. We associate one more piece of state with each vertex $v$, a single bit denoted $\OnStack{v}$, to be set when $v$ is pushed and cleared when $v$ is popped.

\begin{algorithm}[Tarjan's algorithm]\label{tarjan}
Takes a vertex $v$ as input, together with some means of enumerating the successor vertices of $v$. Builds two maps: to each vertex, we assign a component ID, and to each component ID, we assign a list of member vertices. Upon completion, $v$ has a component ID assigned. Maintains a stack, and two global incrementing counters: the next unused vertex ID, and the next component ID.
\begin{enumerate}
\item (Memoize) If $v$ already has a component ID assigned, return this component ID, which can be used to look up its members.
\item (Invariant) Otherwise, ensure that $\Number{v}$ has not been set. If it is set but $v$ does not have a component ID, we performed an invalid re-entrant call (see below).
\item (Visit) Set $\Number{v}$ to the next vertex ID. Set $\Lowlink{v}\leftarrow\Number{v}$. Set $\OnStack{v}\leftarrow \textrm{true}$. Push $v$ on the stack.
\item (Successors) For each successor $w$ of $v$:
\begin{enumerate}
\item If $\Number{w}$ is not set, we have a tree edge. Recursively invoke the algorithm on $w$, and set $\Lowlink{v}\leftarrow\min(\Lowlink{v}, \Lowlink{w})$.
\item If $\Number{w}\geq\Number{v}$, we have an ignored edge; do nothing.
\item Otherwise, $\Number{w}<\Number{v}$, so we have a frond or cross-link. If $\OnStack{w}$ is true, set $\Lowlink{v}\leftarrow\min(\Lowlink{v},\Number{w})$.
\end{enumerate}
\item (Not root?) If $\Lowlink{v}\neq\Number{v}$, return.
\item (Root) Otherwise, $v$ is the root of a new strongly connected component; allocate the next unused component ID for it.
\item (Add vertex) Pop a vertex $v^\prime$ from the stack (which must be non-empty at this point) and clear $\OnStack{v^\prime}$. Add $v^\prime$ to the new component, and associate the component ID with $v^\prime$.
\item (Repeat) If $v^\prime\neq v$, go back to Step~7. Otherwise, return.
\end{enumerate}
\end{algorithm}
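To make the control flow concrete, here is a compact sketch of the algorithm in C++. All names and types here are illustrative rather than taken from the compiler: vertices are plain integers, and the \texttt{successors} callback stands in for the protocol dependencies request.
\begin{Verbatim}[fontsize=\footnotesize,numbers=none]
#include <algorithm>
#include <cassert>
#include <functional>
#include <map>
#include <vector>

struct Tarjan {
  std::function<std::vector<int>(int)> successors;

  std::map<int, int> number, lowlink, componentOf;  // per-vertex state
  std::map<int, bool> onStack;
  std::vector<int> stack;
  std::map<int, std::vector<int>> members;          // component ID -> vertices
  int nextNumber = 1, nextComponent = 0;

  // Returns the component ID of v, computing it on demand.
  int componentID(int v) {
    if (componentOf.count(v))                       // Step 1: memoize
      return componentOf[v];
    assert(!number.count(v) && "invalid re-entrant call");  // Step 2
    visit(v);
    return componentOf[v];
  }

  void visit(int v) {
    number[v] = lowlink[v] = nextNumber++;          // Step 3
    onStack[v] = true;
    stack.push_back(v);

    for (int w : successors(v)) {                   // Step 4
      if (!number.count(w)) {                       // (a) tree edge
        visit(w);
        lowlink[v] = std::min(lowlink[v], lowlink[w]);
      } else if (number[w] < number[v] && onStack[w]) {
        // (c) frond or cross-link into a component still being formed
        lowlink[v] = std::min(lowlink[v], number[w]);
      }
      // (b) otherwise: ignored edge, do nothing
    }

    if (lowlink[v] != number[v])                    // Step 5: not a root
      return;
    int id = nextComponent++;                       // Step 6: new component
    int w;
    do {                                            // Steps 7--8
      w = stack.back();
      stack.pop_back();
      onStack[w] = false;
      componentOf[w] = id;
      members[id].push_back(w);
    } while (w != v);
  }
};
\end{Verbatim}
Note how the \texttt{do}/\texttt{while} loop pops vertices until it pops $v$ itself, so vertices pushed by enclosing recursive calls remain on the stack for their own components.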
After the outermost recursive call returns, the stack will always be empty. Note that while this algorithm is recursive, it is not re-entrant; in particular, the act of getting the successors of a vertex must not trigger the computation of the same strongly connected components. This is enforced in Step~2. In our case, this can happen because getting the successors of a protocol performs type resolution; in practice this should be extremely difficult to hit, so for simplicity we report a fatal error and exit the compiler instead of attempting to recover.

\paragraph{Protocol components.} The rewrite context lazily populates a mapping from protocol declarations to \emph{protocol nodes}. A protocol is a vertex in our graph; the protocol node data structure for $p$ stores $\Number{p}$, $\Lowlink{p}$, $\OnStack{p}$ and a component ID for $p$. A second table maps each component ID to a \emph{protocol component}, which is a list of protocol declarations together with a requirement machine. This requirement machine is either a protocol requirement machine (when the requirement signatures of the component's protocols already exist) or a protocol minimization requirement machine (when those requirement signatures must still be built). There is a second level of indirection here: the requirement machine is lazily constructed when first requested, not when the component is initially formed by Algorithm~\ref{tarjan}.

\paragraph{Generic signatures.}
The idea of a protocol having a dependency relationship on another protocol generalizes to a generic signature depending on a protocol. The protocol dependencies of a generic signature are those protocols that may appear in a derivation from the signature.

\begin{definition}
A generic signature $G$ has a \IndexDefinition{protocol dependency}\emph{protocol dependency} on a protocol $\protosym{P}$, denoted \index{$\prec$}\index{$\prec$!z@\igobble|seealso{protocol dependency}}$G\prec\protosym{P}$, if we can derive a conformance to $\protosym{P}$ from $G$; that is, $G\vDash\ConfReq{T}{P}$. The \emph{protocol dependency set} of a generic signature $G$ is the set of all protocols $\protosym{P}$ such that $G\prec\protosym{P}$.

\end{definition}

This new form of $\prec$ is not a binary relation by our prior definition, because the two operands come from different sets. However, it is still transitive in the sense that $G\prec\protosym{P}$ and $\protosym{P}\prec\protosym{Q}$ together imply $G\prec\protosym{Q}$. Note that the two definitions of $\prec$ are related via the \index{protocol generic signature}protocol generic signature: $G_\texttt{P}\prec\protosym{Q}$ if and only if $\protosym{P}\prec\protosym{Q}$.
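To sketch why transitivity holds (glossing over details of the derived requirements formalism from earlier chapters): if $G\prec\protosym{P}$, we have a derivation of $G\vDash\ConfReq{T}{P}$ for some type parameter \texttt{T}; and if $\protosym{P}\prec\protosym{Q}$, the protocol generic signature gives us $G_\texttt{P}\vDash\ConfReq{Self.U}{Q}$ for some type parameter \texttt{Self.U} (possibly \texttt{Self} itself). Substituting \texttt{T} for \texttt{Self} throughout the second derivation yields $G\vDash\ConfReq{T.U}{Q}$, and therefore $G\prec\protosym{Q}$.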

Now, consider the protocols that appear on the right-hand sides of the \emph{minimal} (that is, explicit) conformance requirements of our generic signature $G$; call these the \emph{successors} of $G$, by analogy with the successors of a protocol in the protocol dependency graph. If we consider all protocols reachable via paths originating from the successors of~$G$, we arrive at the complete set of protocol dependencies of~$G$.

We end this section with the algorithm to collect imported rules for a new requirement machine. We find all protocols reachable from an initial set, compute their strongly connected components, collect the requirement machines for each component, and finally, \index{imported rule}collect the local rules from each requirement machine.
\begin{algorithm}[Importing rules from protocol components]\label{importing rules}
Takes a list of protocols that are the immediate successors of a generic signature or protocol. Outputs a list of imported rules.
\begin{enumerate}
\item (Initialize) Initialize a worklist and add all given protocols to the worklist in any order. Initialize \texttt{S} to an empty set of visited protocols. Initialize \texttt{M} to an empty set of requirement machines (compared by pointer equality).
\item (Check) If the worklist is empty, go to Step~8.
\item (Next) Otherwise, remove the next protocol $p$ from the worklist. If $p\in\texttt{S}$, go back to Step~2, otherwise set $\texttt{S}\leftarrow\texttt{S}\cup\{p\}$.
\item (Component) Use Algorithm~\ref{tarjan} to compute the component ID for $p$.
\item (Machine) Let $m$ be the requirement machine for this component, creating it first if necessary. If $m\not\in\texttt{M}$, set $\texttt{M}\leftarrow\texttt{M}\cup\{m\}$.
\item (Successors) Add each successor of $p$ to the worklist.
\item (Loop) Go back to Step~2.
\item (Collect) Return the concatenation of the local rules from each $m\in\texttt{M}$.
\end{enumerate}
\end{algorithm}
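The worklist traversal above can be sketched in C++ as follows. Everything here is illustrative: protocols are integers, a requirement machine is reduced to its list of local rules (strings), and the hypothetical \texttt{componentOf} and \texttt{localRules} maps stand in for Algorithm~\ref{tarjan} and the lazily-constructed per-component machines.
\begin{Verbatim}[fontsize=\footnotesize,numbers=none]
#include <cassert>
#include <deque>
#include <map>
#include <set>
#include <string>
#include <vector>

std::vector<std::string>
importRules(const std::vector<int> &initial,
            const std::map<int, std::vector<int>> &successors,
            const std::map<int, int> &componentOf,
            const std::map<int, std::vector<std::string>> &localRules) {
  std::deque<int> worklist(initial.begin(), initial.end());  // Step 1
  std::set<int> S;                    // visited protocols
  std::set<int> M;                    // requirement machines seen so far
  std::vector<int> order;             // M, in first-seen order

  while (!worklist.empty()) {         // Step 2
    int p = worklist.front();         // Step 3
    worklist.pop_front();
    if (!S.insert(p).second)
      continue;
    int m = componentOf.at(p);        // Step 4
    if (M.insert(m).second)           // Step 5: record each machine once
      order.push_back(m);
    for (int q : successors.at(p))    // Step 6
      worklist.push_back(q);
  }                                   // Step 7: loop

  std::vector<std::string> imported;  // Step 8: concatenate local rules
  for (int m : order)
    imported.insert(imported.end(), localRules.at(m).begin(),
                    localRules.at(m).end());
  return imported;
}
\end{Verbatim}
Because each component's machine is recorded in \texttt{M} at most once, every local rule is imported exactly once, even when a component such as $\protosym{Bot}$'s is reachable along multiple paths.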

We will encounter the protocol dependency graph one last time when we introduce the Knuth-Bendix completion procedure in Chapter~\ref{completion}. We will show that the rewrite rules we construct have a certain structure that enables an optimization where completion does not need to check for \emph{overlaps} between pairs of imported rules.

A protocol component is always reasoned about as an indivisible unit; for example, in Chapter~\ref{rqm minimization} we will see that requirement minimization must consider all protocols in a component simultaneously to get correct results.

\section{Debugging Flags}\label{rqm debugging flags}

We've already seen two examples of the \IndexFlag{debug-requirement-machine}\texttt{-debug-requirement-machine} flag:
\begin{quote}
\begin{verbatim}
-debug-requirement-machine=timers
-debug-requirement-machine=protocol-dependencies
\end{verbatim}
\end{quote}
More generally, the argument to this flag is a comma-separated list of options, where the possible options are the following:
\begin{itemize}
\item \texttt{timers}: Chapter~\ref{rqm basic operation}.
\item \texttt{protocol-dependencies}: Section~\ref{protocol component}.
\item \texttt{simplify}: Section~\ref{term reduction}.
\item \texttt{add}, \texttt{completion}: Chapter~\ref{completion}.
\item \texttt{concrete-unification}, \texttt{conflicting-rules}, \texttt{property-map}: Chapter~\ref{propertymap}.
\item \texttt{concretize-nested-types}, \texttt{conditional-requirements}: Section~\ref{rqm type witnesses}.
\item \texttt{concrete-contraction}: Section~\ref{concrete contraction}.
\item \texttt{homotopy-reduction}, \texttt{homotopy-reduction-detail},\\
\texttt{propagate-requirement-ids}: Section~\ref{homotopy reduction}.
\item \texttt{minimal-conformances}, \texttt{minimal-conformances-detail}: Section~\ref{minimal conformances}.
\item \texttt{minimization},  \texttt{redundant-rules}, \texttt{redundant-rules-detail},\\
\texttt{split-concrete-equiv-class}: Chapter~\ref{requirement builder}.
\end{itemize}

Two more debugging flags are defined. The \IndexFlag{analyze-requirement-machine}\texttt{-analyze-requirement-machine} flag dumps a variety of \index{histogram}histograms maintained by the rewrite context after the compilation session ends. These are mostly intended for performance tuning of various data structures:
\begin{itemize}
\item A count of the unique symbols allocated, by kind (Section~\ref{rqm symbols}).
\item A count of the number of terms allocated, by length (Section~\ref{building terms}).
\item Statistics about the rule trie (Section~\ref{term reduction}) and property map trie (Chapter~\ref{propertymap}).
\item Statistics about the minimal conformances algorithm (Section~\ref{minimal conformances}).
\end{itemize}

The \IndexFlag{dump-requirement-machine}\texttt{-dump-requirement-machine} flag prints each requirement machine before and after \index{completion}completion. The printed representation includes a list of rewrite rules, the property map, and all \index{rewrite loop}rewrite loops. The output will begin to make sense after Chapter~\ref{symbols terms rules}.

\section{Source Code Reference}\label{rqm basic operation source ref}

Key source files:
\begin{itemize}
\item \SourceFile{lib/AST/RequirementMachine/}
\end{itemize}
The Requirement Machine implementation is private to \texttt{lib/AST/}. The remainder of the compiler interacts with it indirectly, through the generic signature query methods on \texttt{GenericSignature} (Section~\ref{genericsigsourceref}) and the various requests for building new generic signatures (Section~\ref{buildinggensigsourceref}).

\subsection*{The Rewrite Context}

Key source files:
\begin{itemize}
\item \SourceFile{lib/AST/RequirementMachine/RewriteContext.h}
\item \SourceFile{lib/AST/RequirementMachine/RewriteContext.cpp}
\end{itemize}

\IndexSource{rewrite context}
\apiref{rewriting::RewriteContext}{class}
A singleton which constructs requirement machines and uniquely-allocates symbols and terms. The \texttt{ASTContext::getRewriteContext()} method returns the unique instance of the rewrite context.
\begin{itemize}
\item \texttt{getRequirementMachine(CanGenericSignature)} returns a requirement machine for the given generic signature, creating one first if necessary.
\IndexSource{protocol component}
\IndexSource{strongly connected component}
\item \texttt{getRequirementMachine(ProtocolDecl *)} returns a requirement machine for the protocol component which contains the given protocol, creating one first if necessary.
\IndexSource{protocol dependency graph}
\item \texttt{getProtocolComponentImpl()} is a private helper which returns the strongly connected component of the protocol dependency graph which contains the given protocol.
\IndexSource{Tarjan's algorithm}
\item \texttt{getProtocolComponentRec()} actually implements Tarjan's algorithm for computing strongly connected components.
\item \texttt{isRecursivelyConstructingRequirementMachine()} has two overloads, taking a generic signature or protocol component. They return true if we are currently constructing a requirement machine for this generic signature or protocol component. This is used as a cycle-breaking hack in associated type inference, since otherwise re-entrant construction triggers an assertion in the compiler.
\item \texttt{installRequirementMachine()} has two overloads, taking either a generic signature minimization or a protocol component minimization requirement machine. They record it as the canonical requirement machine for the generic signature or protocol component that was just built.
\end{itemize}

\index{generic signature}
\apiref{GenericSignature}{class}
See also Section~\ref{genericsigsourceref}.
\begin{itemize}
\item \texttt{getRequirementMachine()} returns the requirement machine for this generic signature, by asking the rewrite context to produce one and then caching the result in an instance variable of the \texttt{GenericSignature} instance itself to speed up subsequent access. This method is used by the implementation of generic signature queries; apart from those, there should be no reason to reach inside the requirement machine instance yourself.
\end{itemize}

\IndexSource{protocol dependencies request}
\apiref{ProtocolDependenciesRequest}{class}
A request evaluator request which collects all protocols referenced from a given protocol's associated conformance requirements.

\apiref{ProtocolDecl}{class}
See also Section~\ref{genericdeclsourceref}.
\begin{itemize}
\item \texttt{getProtocolDependencies()} evaluates the \texttt{ProtocolDependenciesRequest}.
\end{itemize}

\IndexSource{requirement machine}
\apiref{rewriting::RequirementMachine}{class}
Represents a list of generic requirements in a manner amenable to automated reasoning.
\begin{itemize}
\item \texttt{initWithGenericSignature()} constructs a requirement machine from the requirements of an existing generic signature.
\item \texttt{initWithWrittenRequirements()} constructs a requirement machine from user-written requirements from which to compute the minimal requirements of a new generic signature.
\item \texttt{initWithProtocolSignatureRequirements()} constructs a requirement machine from the requirements of existing requirement signatures of each protocol in a protocol component.
\item \texttt{initWithProtocolWrittenRequirements()} constructs a requirement machine for computing the minimal requirements of a new requirement signature for each protocol in a protocol component.
\item \texttt{verify()} checks various invariants, called after construction.
\item \texttt{dump()} dumps the \index{rewrite system}rewrite system and \index{property map}property map of this requirement machine.
\end{itemize}

\subsection*{Requests}

Key source files:
\begin{itemize}
\item \SourceFile{lib/AST/RequirementMachine/RequirementMachineRequests.cpp}
\end{itemize}

\index{inferred generic signature request}
\apiref{InferredGenericSignatureRequest::evaluate}{method}
Evaluation function for building a generic signature from requirements written in source. Constructs a generic signature minimization requirement machine.

\index{abstract generic signature request}
\apiref{AbstractGenericSignatureRequest::evaluate}{method}
Evaluation function for building a generic signature from a list of generic parameters and requirements.

\index{requirement signature request}
\apiref{RequirementSignatureRequest::evaluate}{method}
Evaluation function for building the requirement signature of a protocol. Either deserializes the requirement signature, or constructs a protocol component minimization requirement machine.

\subsection*{Debugging}

Key source files:
\begin{itemize}
\item \SourceFile{lib/AST/RequirementMachine/Debug.h}
\item \SourceFile{lib/AST/RequirementMachine/Histogram.h}
\end{itemize}

\apiref{rewriting::RewriteContext}{class}
\begin{itemize}
\item \texttt{RewriteContext()} splits the string value of the \IndexFlag{debug-requirement-machine}\texttt{-debug-requirement-machine} flag into comma-separated tokens, and maps each token to an element of the \texttt{DebugFlags} enum.
\item \texttt{getDebugFlags()} returns the \texttt{DebugFlags} enum.
\item \texttt{beginTimer()} starts a timer identified by the given string, and logs a message.
\item \texttt{endTimer()} ends a timer identified by the given string, and logs a message. Must be paired with a preceding call to \texttt{beginTimer()}.
\end{itemize}

\apiref{rewriting::DebugFlags}{enum class}

An option set of debugging flags which control various forms of debug output printed by the Requirement Machine.

\IndexFlag{analyze-requirement-machine}%
\apiref{rewriting::Histogram}{class}
A utility class for collating points into a histogram. Each histogram has a fixed number of buckets, together with an initial value, and an ``overflow'' bucket. Points are mapped to buckets by subtracting the initial value and comparing against the total number of buckets. If the result exceeds the total number of buckets, it is recorded in an ``overflow'' bucket. Various histograms in the \texttt{RequirementMachine} instance are printed when the compiler exits if the \texttt{-analyze-requirement-machine} frontend flag was passed in.
\begin{itemize}
\item \texttt{Histogram(unsigned size, unsigned start)} creates a new histogram.
\item \texttt{add(unsigned)} records a point in the histogram.
\item \texttt{dump()} prints the histogram as ASCII art.
\end{itemize}
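The bucketing scheme can be sketched as follows. This is an illustrative reimplementation, not the compiler's actual \texttt{rewriting::Histogram}; in particular, we assume recorded values are no smaller than the initial value.
\begin{Verbatim}[fontsize=\footnotesize,numbers=none]
#include <cassert>
#include <vector>

class Histogram {
  unsigned start;
  std::vector<unsigned> buckets;  // the final entry is the overflow bucket
public:
  Histogram(unsigned size, unsigned start)
      : start(start), buckets(size + 1, 0) {}

  void add(unsigned value) {
    unsigned index = value - start;     // subtract the initial value
    if (index >= buckets.size() - 1)
      index = buckets.size() - 1;       // past the end: overflow bucket
    ++buckets[index];
  }

  unsigned count(unsigned bucket) const { return buckets[bucket]; }
  unsigned overflowCount() const { return buckets.back(); }
};
\end{Verbatim}
For example, a \texttt{Histogram(4, 1)} has buckets for the values 1 through 4, and anything larger lands in the overflow bucket.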

\end{document}