\section{A Fixed Point Model} 

Our main result establishes that for any choice of reaction rates $k>0$, a
network with a single terminal-linkage class admits a strictly positive
solution pair $(c,v)$ that satisfies the laws \eqref{fb} and \eqref{mak}.  We
show this by defining a related mapping with positive fixed points, and
establishing an equivalence between these positive fixed points and positive
solutions of the equations.

We define a mapping in terms of a linearly constrained optimization problem.
The problem is constructed so that the logarithmic form of the mass-action
equation \eqref{mak-alt} is an optimality condition, and hence any solution to
this optimization problem will also satisfy \eqref{mak-alt}.

Let $\mu \in \R^m$ be a vector parameter.  Observe that if the parametric
convex optimization problem
\begin{align}
   \underset{v\in\R^n}{\minimize} &\quad v^TD(\log(v)-\1)    \notag
\\                     \st &\quad YD v = \mu  &:\ y \label{convex-fix}
\\                         &\quad v \ge 0                     \notag
\end{align}
has a strictly positive solution $v^\star(\mu)$, then the optimality conditions
\begin{align}
   YDv^\star(\mu)      &= \mu                    \notag
\\ DY^T y^\star(\mu) &= D\log(v^\star)  \label{optcon}
\\                   v^\star(\mu) &\ge 0                    \notag
\end{align} 
are well defined. Since $D$ is nonsingular, the second optimality condition is
equivalent to \eqref{mak-alt}, and hence implies mass-action. Moreover, it may
be possible to find a value for the parameter $\mu$ for which \eqref{fb} also
holds at the minimizer of \eqref{convex-fix}.

Note that the nonlinear program \eqref{convex-fix} is strictly convex, so for any
feasible $\mu$ there is a unique minimizer. That is, the mapping $\mu
\mapsto v^\star(\mu)$ is well defined.  Define the function 
\begin{equation}  \label{mapping}
	 f(\mu) := YA^Tv^\star(\mu).  \end{equation}
If $\mu$ is a fixed point of $f: \R^m \rightarrow \R^m$, then the linear
equality constraint in \eqref{convex-fix} implies
\[
   YDv^\star(\mu) = \mu = YA^Tv^\star(\mu),
\]
or, equivalently,
\[
   Y(D-A^T)v^\star(\mu) = -Y A_k v^\star(\mu) = 0.
\]
Therefore, if such a fixed point exists, the solution $v^\star(\mu)$ at this
fixed point will satisfy \eqref{fb}.  For simplicity, we henceforth refer to
the optimal solution variables $(v^\star(\mu), y^\star(\mu))$ as $(v^\star,
y^\star)$, but acknowledge their dependence on $\mu$.
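As a concrete numerical sketch (not part of the formal development), the map
$\mu \mapsto v^\star(\mu)$ can be computed by exploiting the optimality
conditions: stationarity gives $v = e^{Y^Ty}$, which reduces
\eqref{convex-fix} to a smooth convex problem in the multiplier $y$. The
network below (the matrices `Y`, `A` and rates `k`) is a hypothetical
three-complex cycle chosen only for illustration.

```python
import numpy as np

# Hypothetical mass-conserving cycle 2X -> X+Y -> 2Y -> 2X (illustrative only).
Y = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 2.0]])    # species-complex matrix; each complex has mass 2
k = np.array([1.0, 2.0, 3.0])      # rate of the single reaction leaving each complex
D = np.diag(k)
A = np.zeros((3, 3))
for i in range(3):
    A[i, (i + 1) % 3] = k[i]       # complex i reacts to complex i+1

def v_star(mu, iters=60):
    """Solve (convex-fix) through its dual: stationarity gives v = exp(Y^T y),
    and y minimizes the smooth convex g(y) = k . exp(Y^T y) - mu . y."""
    y = np.zeros(Y.shape[0])
    g = lambda y: k @ np.exp(Y.T @ y) - mu @ y
    for _ in range(iters):
        v = np.exp(Y.T @ y)
        grad = Y @ (k * v) - mu            # gradient = Y D v - mu
        if np.linalg.norm(grad) < 1e-12:
            break
        step = np.linalg.solve(Y @ np.diag(k * v) @ Y.T, grad)  # Newton step
        t = 1.0
        while t > 1e-10 and g(y - t * step) > g(y):             # backtracking
            t *= 0.5
        y -= t * step
    return np.exp(Y.T @ y)

mu = np.array([3.0, 5.0])
v = v_star(mu)
print(np.allclose(Y @ D @ v, mu))  # feasibility: Y D v* = mu
print(np.all(v > 0))               # interior solution, so (mak-alt) holds at v*
f_mu = Y @ A.T @ v                 # the mapping f(mu) = Y A^T v*(mu)
```

Because the computed point has the form $v^\star = e^{Y^Ty^\star}$, the second
optimality condition in \eqref{optcon} holds at it by construction.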

\begin{thm} \label{fp-exist-map} For any mass conserving, mass-action chemical
  reaction network and any choice of rate constants $k>0$, there exist
  nontrivial fixed points for the mapping given by \eqref{mapping}.
\end{thm} 

\begin{proof} 
	Brouwer's fixed point theorem states that any continuous mapping from a
	compact, convex subset $\Omega$ of Euclidean space into itself has a fixed
	point.  That is, if $\mu \in \Omega$ implies $f(\mu)\in\Omega$, then there
	exists a fixed point $\hat{\mu}\in\Omega$ such that $f(\hat{\mu}) = \hat{\mu}$.
	
	Let $f(\mu)$ be defined as in \eqref{mapping} and define
  \[
     \Omega=\{\mu\in \R^m \; : \; e^T\mu = \gamma, 
     \quad \mu\ge 0, \quad \mu \in Y[\R^n_+] \},
  \]
	where $e$ is defined in \eqref{consis}, $\gamma$ is any positive constant,
	and $Y[\R^n_+]$ is the image of the positive orthant under matrix
	multiplication by $Y$. 
	
	The set $\Omega$ is convex, since it is formed by the intersection of convex
	sets, and it is compact: it is closed, and it is bounded because $\mu\ge 0$
	and $e^T\mu=\gamma$ with $e$ strictly positive.  Since the mapping
	$\mu\mapsto v^\star$ is continuous, $f(\mu)$ is a composition of continuous
	functions and is hence continuous. Also, problem \eqref{convex-fix} is
	feasible for any $\mu\in\Omega$, so the mapping is well defined over $\Omega$.

	To show that $f$ maps $\Omega$ into itself, first observe via
	\eqref{convex-fix} that since $v^\star \ge 0$ and $YA^T$ is a non-negative
	matrix, $f(\mu)$ is non-negative.  Moreover, $f(\mu) = Y(A^Tv^\star)$ with
	$A^Tv^\star\ge 0$, so $f(\mu)$ is contained in $Y[\R_+^n]$, and
	finally, since the network conserves mass we have
  \[
     e^Tf(\mu)=e^TYA^Tv^\star=e^TYDv^\star=e^T\mu = \gamma.
  \]
	Therefore, $f$ maps $\Omega$ into itself and must have a fixed point in
	$\Omega$.  Moreover, since $\Omega$ does not contain the zero vector, any
	fixed point is nontrivial. 
\end{proof}
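The invariance $e^Tf(\mu)=e^T\mu$ used in the proof is a purely linear
consequence of mass conservation, $e^TYA^T = e^TYD$, and is easy to check
numerically. A minimal sketch on a hypothetical mass-conserving cycle (the
matrices and rates are made up for illustration):

```python
import numpy as np

# Hypothetical mass-conserving cycle 2X -> X+Y -> 2Y -> 2X (illustrative only).
Y = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 2.0]])     # every complex has total mass 2
k = np.array([1.0, 2.0, 3.0])       # made-up rate constants
D = np.diag(k)
A = np.zeros((3, 3))
for i in range(3):
    A[i, (i + 1) % 3] = k[i]        # complex i reacts to complex i+1

e = np.array([1.0, 1.0])            # unit species masses
# Mass conservation gives e^T Y A^T = e^T Y D, hence for every mu,
# e^T f(mu) = e^T Y A^T v* = e^T Y D v* = e^T mu.
print(np.allclose(e @ Y @ A.T, e @ Y @ D))   # True
```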

We have established the existence of a nontrivial fixed point $\mu$ of the
mapping $f(\mu)$, implying that when the associated point $v^\star$ is strictly
positive, it is a solution to both \eqref{mak} and \eqref{fb}.  However, in the
case when some entries of $v^\star$ are zero, the objective function of
\eqref{convex-fix} is non-differentiable and we cannot use the optimality
conditions to show that \eqref{mak} holds. 

The parameter $\mu$ can be interpreted as the rate of consumption of each
chemical species, and $f(\mu)$ as the rate of production of each chemical
species; at a fixed point, $\mu = f(\mu)$ defines a steady state. Observe that
the definition of the set $\Omega$ involves the parameter $\gamma=e^T\mu$.
Since the vector $e$ can be interpreted as an assignment of relative mass to
the species, $\gamma$ can be interpreted as the total amount of mass that
reacts per unit time at the equilibrium. 
	

\subsection{Positive fixed points in single terminal-linkage networks}  \label{section::single-linkage}

We now consider the case when the network is formed by a
single terminal-linkage class and show that if $\hat \mu$ is a fixed point of
the mapping $f(\mu)$, the minimizer $v^\star(\hat \mu)$ is strictly positive.

Lemma \ref{maximum-support} shows that if problem \eqref{convex-fix} has a
feasible point with support $J$, then the support of the minimizer $v^\star$
contains $J$. Lemma \ref{positive-feasible} uses the single terminal-linkage
hypothesis to show that at a fixed point, there is a strictly positive feasible
point. Together, these two lemmas imply that at the fixed point $\hat{\mu}$,
the optimal solution is strictly positive, $v^\star(\hat\mu)>0$, and there is
therefore a nontrivial equilibrium for the network. 

To complete the argument we must establish Lemmas \ref{maximum-support} and
\ref{positive-feasible}, and the following result will be necessary.

\begin{lemma} \label{positive-flux}
	For a given $\mu$, if the parametric convex optimization problem
	\eqref{convex-fix} has a strictly positive feasible point $\hat{v}>0$, then
	the unique solution $v^\star$ will be strictly positive.
\end{lemma}

\begin{proof}
	Assume that the point $\hat{v}$ is feasible and strictly positive, and let
	$z$ be any nonzero feasible direction, i.e., $z$ is in the nullspace of
	$YD$, so that $\hat{v} + z$ satisfies the equality constraint.  Since
	$\hat{v}$ is strictly positive, there must exist an interval $[l,u]$ such
	that points of the form $\hat{v} + \alpha z$ are feasible for all $\alpha
	\in [l,u]$; in particular, $\hat{v} + \alpha z \ge 0$.  Since the feasible
	set is a bounded polytope, $l$ and $u$ are finite.

	The interval can be chosen so that when $\alpha=l$ or $\alpha=u$, at least
	one entry of $\hat{v} + \alpha z$ is zero, while for all $\alpha$ in the
	interior of the interval, $\hat{v} + \alpha z$ is strictly positive. Observe
	that in this case, $l<0$ and $u>0$.

  Define the univariate function
  \begin{equation}   \label{univariate}
     g(\alpha) := \phi(\hat{v} + \alpha z),
  \end{equation}
	where $\phi$ is the objective function of \eqref{convex-fix}.  We will
	establish that as $\alpha\rightarrow l$ the derivative $g'(\alpha)\rightarrow
	-\infty$, and as $\alpha\rightarrow u$ the derivative $g'(\alpha)\rightarrow
	\infty$.  Since $g'$ is continuous on the interior of the interval $[l,u]$,
	the intermediate value theorem then guarantees a zero in the interior.
	Because $g$ is strictly convex, the function value at such an interior
	stationary point must be smaller than at the endpoints of the interval.

  Observe that
  \begin{align*}
     g'(\alpha) &= z^T\grad \phi(\hat{v} + \alpha z)
  \\            &=\sum_i z_i d_i(\log(\hat{v}_i + \alpha z_i))
  \\            &= \sum_{i\notin L}  z_id_i(\log{(\hat{v_i}+\alpha z_i)})
                    + \sum_{i\in L} z_id_i(\log{(\hat{v_i}+\alpha z_i)}), 
  \end{align*} 
	where $L$ is the set of entries that tend to zero as $\alpha \rightarrow l$,
	and the $d_i$ are the diagonal entries of $D$. Since $\hat{v}_i>0$ and
	$l<0$, we have $z_i>0$ for all $i\in L$.  As $\alpha \rightarrow l$, the
	first summation approaches a finite value, while in the second summation
	each logarithm diverges to $-\infty$ against a positive coefficient
	$z_id_i$, so the second summation diverges to $-\infty$.

	Similarly, let $U$ be the set of entries for which $\hat{v}_i + \alpha z_i$
	tends to zero as $\alpha\rightarrow u$; now $z_i<0$ for all $i\in U$.  In
	this case, we can write
  \[
    g'(\alpha) = \sum_{i\notin U} z_i d_i(\log{(\hat{v_i}+\alpha z_i)})
               + \sum_{i\in U}   z_i d_i(\log{(\hat{v_i}+\alpha z_i)}),
  \]
	in which the first sum tends to a finite value and the second diverges to
	$+\infty$, since each diverging logarithm is multiplied by a negative
	coefficient $z_id_i$. Therefore, there must exist a stationary point
	strictly in the interior of the interval $[l,u]$.

	Now, assume that for some $\mu$ there is a strictly positive feasible point
	$\hat{v}$ and that the solution $v^\star$ of \eqref{convex-fix} has zero
	entries.  By construction, the direction $z = v^\star-\hat{v}$ is feasible,
	and by the previous argument there is a point on the line through $\hat{v}$
	in the direction $z$ that has a lower function value than $v^\star$,
	contradicting the minimality of $v^\star$.
\end{proof}
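The divergence of $g'$ at the endpoints can be observed numerically. The
sketch below evaluates $g'(\alpha)=\sum_i z_id_i\log(\hat v_i+\alpha z_i)$ for
made-up data $d$, $\hat v$, and $z$ (the equality constraint is ignored here,
since it only determines which directions $z$ are available):

```python
import numpy as np

# Hypothetical data: d, vhat, z are illustrative, with l = -0.5 and u = 0.5.
d = np.array([1.0, 2.0])
vhat = np.array([0.5, 0.5])            # strictly positive point
z = np.array([1.0, -1.0])              # entry 2 hits zero at alpha = u = 0.5
gprime = lambda a: z @ (d * np.log(vhat + a * z))   # g'(alpha)

print(gprime(0.4999) > 5)     # near u: g' diverges to +infinity (z_i < 0 there)
print(gprime(-0.4999) < -5)   # near l: g' diverges to -infinity (z_i > 0 there)
```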

We now establish some notation. Let $I$ be a subset of $\{1,\dots,n\}$, and take
$x\in\R^n$. The notation $x_{_I}$ refers to the vector $x$ restricted to the
indices in $I$. Conversely, if a vector in $\R^{|I|}$ is denoted by $y_{_I}$,
then $y\in\R^n$ denotes its extension to $\R^n$, defined by
\[y_i=\begin{cases}(y_{_I})_j & \text{if $i$ is the $j$th index in } I,\\ 0 & \text{if } i\notin I.\end{cases}\] 
 
Let $K=\supp (v^\star)$, where $v^\star$ is the minimizer of \eqref{convex-fix}.

  \begin{lemma}
    \label{maximum-support}
		Let $\hat{v}$ be a feasible point for problem \eqref{convex-fix}. Then
		$\supp(\hat v) \subseteq K$.
  \end{lemma} 
\begin{proof} 
  We prove this by contradiction. Let $\hat{v}$ be a feasible point with
  support $J$. By replacing $\hat{v}$ with the feasible point
  $\tfrac{1}{2}(\hat{v}+v^\star)$, whose support is $J\cup K$, we may assume
  $K\subseteq J$; suppose, for a contradiction, that the inclusion is strict,
  $K\subset J$. Let $v_{_J}$ be the restriction of the vector $v$ to the
  indices of $J$, let $D_{_J}$ be the restriction of the matrix $D$ to the
  rows and columns corresponding to the indices in $J$, and let $Y_{_J}$ be
  the restriction of $Y$ to the columns in $J$.

	Define the new function $\tilde{\phi}(v_{_J}) =  v_{_J}^TD_{_J}(\log(v_{_J})-\1)$, and
	the reduced optimization problem 
	\begin{align}
    \underset{v_{_J}}{\minimize} &\quad \tilde{\phi}(v_{_J})    \notag \\
    \st &\quad Y_{_J}D_{_J} v_{_J} = \mu  &:\ y \label{convex-red} \\
    &\quad v_{_J} \ge 0.                     \notag 
		\end{align}
		Observe that if $\hat{v}$ is feasible for \eqref{convex-fix}, then
		$\hat{v}_{_J}$ is positive and feasible for \eqref{convex-red}. Thus by
		Lemma \ref{positive-flux}, the minimizer of \eqref{convex-red}, say
		$w^\star_{_J}$, is strictly positive. 

	Since $Y_{_J}D_{_J} w^\star_{_J} = \mu$ implies $YD w^\star = \mu$, the
	extension $w^\star$ is feasible for \eqref{convex-fix} and has support
	exactly $J$.  Finally, observe that 
	\[\phi(w^\star) = \tilde{\phi}(w^\star_{_J}) \leq \tilde{\phi}(v^\star_{_J}) = \phi(v^\star).\]
	Since the minimizer of \eqref{convex-fix} is unique, $w^\star=v^\star$; but
	$w^\star$ has support $J$ while $v^\star$ has support $K\subset J$, a
	contradiction. Therefore, $J \subseteq K$.
\end{proof}
\begin{lemma}\label{positive-feasible}
  Let $\hat \mu$ be a nontrivial fixed point of the mapping $f$ in
	\eqref{mapping}. Then there exists a strictly positive feasible point
	$\hat v$ for problem \eqref{convex-fix} with $\mu = \hat\mu$.
\end{lemma}  
\begin{proof}
Let $\hat\mu$ be a fixed point, so that (by definition) $YDv^\star =
YA^Tv^\star = \hat\mu$. Since $YDD^{-1}A^Tv^\star = YA^Tv^\star = \hat \mu$,
and both $A$ and $D$ are nonnegative matrices, we know that $\tilde v =
D^{-1}A^Tv^\star$ is feasible for \eqref{convex-fix}. Moreover, its support
will be the same as the support of $A^Tv^\star$. 

The convex combination $\hat v = \frac{1}{2}v^\star + \frac{1}{2}\tilde v$ is
also feasible for \eqref{convex-fix}, and since the support of $\tilde v$ is
the support of $A^Tv^\star$, the support of $\hat{v}$ is the union of the
supports of $v^\star$ and $A^Tv^\star$.  By Lemma \ref{maximum-support}, the
support of the minimizer $v^\star$ contains the support of $\hat{v}$, and in
particular contains the support of $A^Tv^\star$. Repeating this argument
iteratively shows that the support of $v^\star$ contains the union of the
supports of $(A^T)^pv^\star$ over all positive powers $p$.

The single terminal-linkage class hypothesis implies that $A^T$ is
irreducible: for any pair $(i,j)\in \{1,\dots,n\}\times \{1,\dots,n\}$, there
exists a power $p$ such that $(A^T)^p_{ij}>0$.  Thus, if $j$ is in the support
of $v^\star$, then $i$ is in the support of $(A^T)^pv^\star$ and hence $i$ is
in the support of $v^\star$.  Therefore, if $v^\star\neq 0$, then $v^\star>0$.
\end{proof}
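The combinatorial fact used in the last step, that every pair $(i,j)$ is
connected through some power of $A^T$, is ordinary matrix irreducibility,
which is equivalent to $(I+A^T)^{n-1}$ being entrywise positive. A quick check
on a hypothetical three-complex cycle:

```python
import numpy as np

# Hypothetical single terminal-linkage cycle among n = 3 complexes.
n = 3
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = 1.0          # complex i -> complex i+1

# A^T is irreducible iff (I + A^T)^(n-1) is entrywise positive, i.e.,
# every pair (i, j) is connected by some power (A^T)^p with p <= n - 1.
M = np.linalg.matrix_power(np.eye(n) + A.T, n - 1)
print(np.all(M > 0))                 # True
```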
