diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdnbb" "b/data_all_eng_slimpj/shuffled/split2/finalzzdnbb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdnbb" @@ -0,0 +1,5 @@ +{"text":"\\section{INTRODUCTION}\n\nMany complex systems might be represented as network structures, for example, human interactions or mobile phone telecommunications, food webs or gene interactions. In recent works, a lot of attention has been focused on the extraction of meaningful clusters to characterize networks at different levels \\cite{Arenas2008}. This clustering is essential to comprehend large networks and extract relevant statistical properties. Many researchers have proposed appropriate measures and algorithms to unfold community structures, i.e. groups of densely connected nodes \\cite{Porter2009, Fortunato2010}. However, this structural distribution of nodes in networks is not always representative and lack generalization in practical contexts. For instance, bipartite networks or cycle graphs do not contain communities although they may be heavily structured. Less attention has been paid to uncover more general structures which is known as roles extraction or block modeling \\cite{Wasserman1994,Cason2012}. In previous work \\cite{reichardt2007}, Reichardt \\& White had applied a similar approach than community detection in the framework of \\cite{reichardt2006} to extract roles in networks. In this paper, we assume that the different roles in a network should represent groups of nodes sharing the same behavior within the graph or, in other words, having similar flow patterns. This generalized the notion of communities which can also be described as roles where each node in a role mainly interacts with other nodes in the same role. But many other role interactions may be defined like, for example, a leader-follower model on social network interactions or a block cycle model for food webs. In this paper, we present a pairwise node similarity measure designed to derive such role models. This similarity measure compares the neighborhood patterns of every node and is expected to be high for any pair of nodes sharing analogous flow properties. Since computing the exact pairwise similarity is computationally expensive, we propose a low rank iterative scheme that approximates the similarity score and allows to analyze large networks. We will first present the similarity measure defined as the fixed point solution of a converging sequence. We will then introduce our low rank approximation and briefly demonstrate its convergence. Finally, we will apply the similarity measure and our low rank approximation to random graphs containing a structural block distribution of nodes, and show that they successfully extract the different roles within this kind of graph. We will also exhibit some evidences that analyzing the evolution of the low rank similarity measure can reveal the number of roles in the network. Lastly, we will show that the performances of both measures are quantitatively equivalent hence justifying the application of our low rank iterative scheme in practical contexts.\n\n\\section{Node-to-Node similarity}\nWe consider a weighted and directed graph $G_A(V,E)$, with $V$ the set of vertices and $E$ the set of edges, associated to its adjacency matrix $A\\in\\mathbb{R}^{n\\times n}$ where $A_{i,j}\\neq0$ if $(i,j)\\in E$ for $i,j \\in V$. 
Our similarity measure should reveal nodes having similar behaviors in the network, which we will identify by the neighborhood patterns of each node. We define a neighborhood pattern of length $\\ell$ for a node as a sequence of length $\\ell$ of incoming (I) and outgoing (O) edges starting from the node, which we will call the \\emph{source} node. For example, the neighborhood patterns of length $1$ consist of exactly one edge and end up either in a parent (I) or in a child (O) of the source. If we consider neighborhood patterns of length $2$, then $4$ different types of nodes can be reached: the parent of a parent (I-I), the child of a parent (I-O), the parent of a child (O-I) or the child of a child (O-O). One can easily see that when the length of the neighborhood patterns is increased by $1$, the number of reachable nodes, which we will call the \\emph{target} nodes, is doubled.\n\nOur similarity measure reflects that a pair of nodes is highly similar if they have many neighborhood patterns in common, or in other words, if they can reach many targets with neighborhood patterns of the same kind and length. For example, using the patterns of length $1$, two source nodes will be more similar if they have many common parents (I) or many common children (O). \\fref{fig:pattern} shows all the possible common neighborhood patterns, up to length $3$, where the source nodes are represented as dark circles and each target node as a light gray square. One can compute the number of common target nodes for every pair of source nodes using neighborhood patterns of length $1$ as $$N_1 = AA^T + A^TA,$$ where the first term gives the number of common children (O) and the second term gives the number of common parents (I). Similarly, the number of common target nodes for neighborhood patterns of length $2$ is given by $$N_2 = AAA^TA^T + AA^TAA^T + A^TAA^TA + A^TA^TAA,$$\nwhere the different terms correspond to the neighborhood patterns (O-O), (O-I), (I-O) and (I-I), respectively.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\framebox{\\parbox{0.85\\columnwidth}{\n\t\t\\includegraphics[width=0.85\\columnwidth]{pattern_fig.pdf} \n\t}}\n \\caption{All the different neighborhood patterns, up to length $3$, captured by the similarity measure $S_{i,j}$ \\eref{eq:S} with the source nodes $i$ and $j$ represented as dark circles and the target node represented as a light gray square.}\n\t\\label{fig:pattern}\n\\end{figure}\n\nOur pairwise node similarity measure $S\\in\\mathbb{R}^{n\\times n}$, previously introduced in \\cite{Denayer2012}, is then defined as\n\\begin{equation}\n\\label{eq:S_np}\nS = \\sum_{\\ell=1}^{\\infty} \\beta^{2(\\ell-1)}N_\\ell,\n\\end{equation}\nwhere $\\beta\\in\\mathbb{R}$ is a scaling parameter. Hence our similarity measure computes the weighted sum of the number of common target nodes using neighborhood patterns of any length, and the contribution of the number of common targets using neighborhood patterns of length $\\ell+1$ is weighted by $\\beta^{2\\ell}$. 
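\n\nAs a concrete illustration, the first two terms of \\eref{eq:S_np} can be computed directly from the adjacency matrix. The following NumPy sketch is purely illustrative: the toy graph and the value of $\\beta$ are arbitrary choices and are not taken from the experiments reported below.\n\\begin{verbatim}\nimport numpy as np\n\n# Toy directed graph on 4 nodes (arbitrary example).\nA = np.array([[0, 1, 0, 0],\n              [0, 0, 1, 0],\n              [0, 0, 0, 1],\n              [1, 0, 0, 0]], dtype=float)\n\n# Common children (O) and common parents (I).\nN1 = A @ A.T + A.T @ A\n\n# Length-2 patterns (O-O), (O-I), (I-O), (I-I).\nN2 = (A @ A @ A.T @ A.T + A @ A.T @ A @ A.T\n      + A.T @ A @ A.T @ A + A.T @ A.T @ A @ A)\n\nbeta = 0.3                    # illustrative scaling parameter\nS_trunc = N1 + beta**2 * N2   # first two terms of the weighted sum\n\\end{verbatim}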
\n\nOne can define an iterative sequence \n\\begin{equation}\n\\label{eq:S}\nS_{k+1} = \\Gamma_A\\left[I + \\beta^2S_k\\right],\n\\end{equation}\nwhere $\\Gamma$ is a linear operator,\n$$\\Gamma_A:\\mathbb{R}^{n\\times n}\\rightarrow\\mathbb{R}^{n\\times n}:\\Gamma_A[X] = AXA^T + A^TXA,$$\nsuch that\n$$S_{k+1} = \\Gamma_A\\left[I\\right] + \\dots + \\left(\\beta^{2}\\right)^{k}\\Gamma_A^{k+1}\\left[I\\right]+\\left(\\beta^2\\right)^{k+1}\\Gamma_A^{k+1}\\left[S_0\\right]$$\nwhere $\\Gamma_A^k[.]$ corresponds to applying $k$ times the operator $\\Gamma_A$. Hence, our similarity measure $S$ can be computed as the fixed point solution of the iterative sequence \\eref{eq:S}.\n\nOur similarity measure $S$ \\eref{eq:S_np} can be seen as a generalization of the measure proposed by {Cooper and Barahona} \\cite{Cooper2011,Diaz2013} for which the pairwise similarity $S^{CB}$ only compares the total number of paths originating or leading to a node, without comparing the targets or the sources of those paths. Furthermore, this similarity score $S^{CB}$ does not consider all types of neighborhood patterns, as represented in \\fref{fig:pattern}, but only restricts the measure to direct paths (represented in the first row of the panels $\\ell=2$ and $\\ell=3$ in the figure). While being easily computed, this makes the measure unable to extract a good pairwise similarity score for some particular graphs. For example, if one considers a regular block cycle graph, as represented in \\fref{fig:cycle}, where each role contains the same number of nodes and each node is connected to all the nodes in the following role in the cycle, the pairwise similarity measure $S^{CB}$ is of rank $1$ because all the nodes have a constant number of in\/out neighbors at all distances. This makes the extraction of roles in this network impossible using $S^{CB}$. On the contrary, our similarity measure \\eref{eq:S} produces a fixed point solution $S^*$ of rank equal to the number of roles in the network, with an obvious clustering that reveals the different roles. One can see that any $2$ nodes of the same role in the input graph are isomorphic, while any $2$ nodes of different roles are not. This is accurately represented by our measure $S^*$ but not by $S^{CB}$.\n\nThe similarity measure we propose in this paper might also be compared to the self similarity score introduced by {Blondel} et al.\\ \\cite{Blondel2004}. However, this measure has some drawbacks that are avoided using our iterative scheme \\eref{eq:S}, i.e.\\ the sequence $S_k$ converges for any initial matrix $S_0$ and the fixed point solution is unique. Moreover, it is known that the similarity score of {Blondel} et al.\\ $S^B$ is of rank $1$ when the adjacency matrix $A$ is normal. 
After scaling, $S^{B}$ is therefore the matrix of all ones as $S^{CB}$, which makes the analysis of the block cycle graph again impossible using this similarity measure.\n\n\\def 0.25\\columnwidth {0.25\\columnwidth}\n\\def \\myframebox [#1] {{#1}}\n\\begin{figure}[t]\n\t\\centering\n\t\\framebox{\\parbox[t][][c]{0.95\\columnwidth}{\n\t\t\\myframebox[\\parbox[t][0.25\\columnwidth][c]{0.2\\columnwidth}\n\t\t\t{\\includegraphics[width=0.2\\columnwidth]{cycle_graph.pdf}}]\n\t\t\\hfill\n\t\t\\myframebox[\\parbox[t][0.25\\columnwidth][c]{0.2\\columnwidth}{\\centering$A$\\\\[2pt]\n\t\t\t\\includegraphics[width=0.2\\columnwidth]{cycle_adjacency.pdf}\n\t\t\t}]\n\t\t\\hfill\n\t\t\\myframebox[\\parbox[t][0.25\\columnwidth][c]{0.2\\columnwidth}{\\centering$S^*$\\\\[2pt]\n\t\t\t\\includegraphics[width=0.2\\columnwidth]{cycle_S.pdf}\n\t\t\t}]\n\t\t\\hfill\n\t\t\\myframebox[\\parbox[t][0.25\\columnwidth][c]{0.2\\columnwidth}{\\centering$S^{CB}=S^{B}$\\\\[2pt]\n\t\t\t\\includegraphics[width=0.2\\columnwidth]{cycle_Sb.pdf}\n\t\t\t}]\n\t}}\n \\caption{From left to right: Block cycle role graph where each block has the same number of nodes and each node is connected to all the nodes in the following block. The large gray filled circles represent the roles and the small white circles represent the nodes of the graph; The adjacency matrix of the block cycle graph; The fixed point pairwise similarity score $S^*$, computed using \\eref{eq:S}, reveals all the different blocks; The pairwise similarity score of Cooper and Barahona $S^{CB}$ and Blondel et al.\\ $S^B$ are rank $1$ and do not exhibit the block structure.}\n\t\\label{fig:cycle}\n\\end{figure}\n\nThe parameter $\\beta$ in \\eref{eq:S} can be tuned to vary the weight of long neighborhood patterns but must be chosen wisely to ensure the convergence of the sequence $S_k$. If we initialize $S_0=0$, the iteration \\eref{eq:S} can be written for $k\\geq 1$ as \n\\begin{equation}\nS_{k+1} = S_1+ \\beta^2\\Gamma_A\\left[S_k\\right],\n\\label{eq:Ss}\n\\end{equation}\nwhere \n\\begin{equation}\nS_1 = AA^T + A^TA,\n\\label{eq:S1}\n\\end{equation}\nand the fixed point solution of \\eref{eq:S} is then given by \n\\begin{equation*}\nS^* = S_1 + \\beta^2\\left(AS^*A^T + A^TS^*A\\right),\n\\end{equation*}\nif the sequence converges. Using a classical property of the Kronecker product, this can be written as \n\\begin{equation*}\nvec(S^*) = \\left[I-\\beta^2\\left(A\\otimes A + \\left(A\\otimes A\\right)^T \\right) \\right]^{-1}vec\\left(S_1\\right)\n\\end{equation*}\nwhere $vec(S)$ denotes the vectorization of the matrix $S$, formed by stacking the columns of $S$ into one single column vector. It follows that, to ensure convergence, one can choose $\\beta$ such that\n\\begin{equation}\n\\beta^2\\leq\\frac{1}{\\rho\\left(A\\otimes A + \\left(A\\otimes A\\right)^T \\right)}\n\\label{eq:beta}\n\\end{equation}\nwhere $\\rho(.)$ denotes the spectral radius. Computing the exact upper bound for the parameter $\\beta$ to ensure convergence might be computationally expensive due to the Kronecker products $A\\otimes A \\in\\mathbb{R}^{n^2\\times n^2}$ if $A$ is non-symmetric. However, one can use an easily computed bound\n\\begin{equation}\n\\beta^2 \\leq \\frac{1}{\\rho\\left((A+A^T)\\otimes (A+A^T)\\right)} = \\frac{1}{\\rho\\left((A+A^T)\\right)^2}\n\\label{eq:beta_bound}\n\\end{equation}\nwhich ensures that the constraint \\eref{eq:beta} is satisfied. 
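\n\nFor concreteness, the bound \\eref{eq:beta_bound} and the fixed point of \\eref{eq:Ss} can be approximated numerically as in the following sketch. This is a minimal dense-matrix illustration, not the implementation used for the experiments; the safety factor $0.9$, the tolerance and the iteration limit are arbitrary choices.\n\\begin{verbatim}\nimport numpy as np\n\ndef similarity(A, beta=None, tol=1e-9, max_iter=1000):\n    # Fixed point of S_{k+1} = S_1 + beta^2 (A S_k A^T + A^T S_k A).\n    n = A.shape[0]\n    if beta is None:\n        # Easily computed bound: beta < 1 / rho(A + A^T).\n        rho = np.max(np.abs(np.linalg.eigvalsh(A + A.T)))\n        beta = 0.9 / rho\n    S1 = A @ A.T + A.T @ A\n    S_next = S = np.zeros((n, n))\n    for _ in range(max_iter):\n        S_next = S1 + beta**2 * (A @ S @ A.T + A.T @ S @ A)\n        if np.linalg.norm(S_next - S) < tol:\n            break\n        S = S_next\n    return S_next\n\\end{verbatim}\n\n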
However, even if $\\beta$ is small enough to guarantee the convergence of the sequence \\eref{eq:Ss}, it might be impossible to compute the fixed point solution up to a small tolerance because of the increasing computational cost and memory requirement. Indeed, even if $A$ is sparse, the matrix $S_k$ tends to fill in as $k$ increases and each single iteration of \\eref{eq:Ss} is $O(n^3)$. This leads us to define a low-rank projected iteration to approximate the solution of \\eref{eq:Ss}. In the next section, we will introduce the low-rank iteration and briefly demonstrate its convergence.\n\n\\section{Low-rank similarity approximation}\nBecause the full rank fixed point solution of \\eref{eq:S} is often computationally too expensive to extract, we introduce a low-rank approximation of rank at most $r$ of $S^*$. Inspired by the formulation \\eref{eq:Ss}, we define the low rank iterative scheme as\n\\begin{equation}\nS^{(r)}_{k+1} = \\Pi^{(r)}\\left[ S^{(r)}_1+ \\beta^2\\Gamma_A\\left[S^{(r)}_k\\right]\\right] = X_{k+1}\\;X_{k+1}^{T}\n\\label{eq:LR}\n\\end{equation}\nwhere $X_k \\in \\mathbb{R}^{n\\times r}$ and $\\Pi^{(r)}\\left[.\\right]$ is the best low-rank projection on the dominant subspace, which can be computed using a truncated singular value decomposition (\\textit{SVD}) of rank at most $r$. $S_1^{(r)}$ is the best low-rank approximation of $S_1$, which can be written as\n$$S_1 = \\left[A\\;\\left|\\right. A^T\\right]\\left[A\\;\\left|\\right. A^T\\right]^T,$$\nwhere $\\left[A\\;\\left|\\right. A^T\\right]$ is the horizontal concatenation of $A$ and $A^T$. This allows us to efficiently compute $S_1^{(r)}$ as\n\\begin{align*}\nS_1^{(r)} &= \\Pi^{(r)}\\left[\\left[A\\;\\left|\\right. A^T\\right]\\left[A\\;\\left|\\right. A^T\\right]^T\\right]\\\\\n\t\t&= U_1\\Sigma_1^2U_1^T = X_1X_1^T\n\\end{align*}\nwhere the columns of the unitary matrix $U_1\\in\\mathbb{R}^{n\\times r}$ span the dominant subspace of dimension at most $r$ of $\\left[A\\;\\left|\\right. A^T\\right]$ and $\\Sigma_1\\in\\mathbb{R}^{r\\times r}$ is a diagonal matrix of the dominant singular values, i.e.\\ $\\left[A\\;\\left|\\right. A^T\\right] \\approx U_1\\Sigma_1V_1^T$. To compute each iterative solution of \\eref{eq:LR}, one can see that\n\\begin{align*}\nS^{(r)}_1+ \\beta^2\\Gamma_A\\left[S^{(r)}_k\\right] &= X_1X_1^T + \\beta^2AX_kX_k^TA^T\\\\\n&\\hspace{25pt}+\\beta^2A^TX_kX_k^TA\\\\\n&=Y_k\\;Y_k^T\n\\end{align*}\nwhere\n$$Y_k = \\left[X_1\\;\\left|\\right.\\beta AX_k\\;\\left|\\right.\\beta A^TX_k\\right],$$\nwhich leads to\n$$X_{k+1}X_{k+1}^T = \\Pi^{(r)}\\left[Y_kY_k^T\\right].$$\nTo efficiently compute $X_{k+1}$, we first apply a \\textit{QR} factorization to $Y_k = Q_kR_k$, then compute a truncated \\textit{SVD} of rank at most $r$ of $R_k$ such that $R_k \\approx \\mathcal{U}_k{\\Omega}_k\\mathcal{V}_k^T$ and finally compute $$X_{k+1} = Q_k\\mathcal{U}_k\\Omega_k.$$\nOne can prove, using perturbation theory \\cite{stewart1973}, that the iterative scheme \\eref{eq:LR} converges locally to a fixed point solution $S^{(r)}$ if the spectral gap at the $r^{th}$ singular value is sufficiently large. Without going into the details of the convergence proof, let us mention some interesting results that follow from it. 
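\n\nBefore turning to those results, one update of \\eref{eq:LR} can, for concreteness, be translated into the following NumPy sketch. It is only an illustration of the steps described above (reduced QR followed by a truncated SVD); it is not the authors' implementation, and it assumes $n \\geq 3r$ and ignores ranks smaller than $r$.\n\\begin{verbatim}\nimport numpy as np\n\ndef low_rank_step(X_k, X_1, A, beta, r):\n    # One update: X_{k+1} X_{k+1}^T = Pi_r[ Y_k Y_k^T ].\n    Y = np.hstack([X_1, beta * (A @ X_k), beta * (A.T @ X_k)])\n    Q, R = np.linalg.qr(Y)            # Y_k = Q_k R_k (reduced QR)\n    U, omega, _ = np.linalg.svd(R)    # singular values of R_k\n    U, omega = U[:, :r], omega[:r]    # keep at most r of them\n    return Q @ U * omega              # X_{k+1} = Q_k U_k Omega_k\n\\end{verbatim}\n\n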
First, we consider the function $$f(S) = S_1^{(r)} + \\beta^2\\Gamma_A\\left[S\\right].$$\nClearly, since $S^{(r)}$ is a fixed point solution of \\eref{eq:LR}, we know that there exist a unitary matrix $U\\in\\mathbb{R}^{n\\times r}$ and a diagonal matrix $\\Sigma\\in\\mathbb{R}^{r\\times r}$ such that $S^{(r)} = U\\Sigma^2U^T$ and\n$$[U\\;V]^T\\;f(S^{(r)})\\;[U\\;V] = \n\\left[\\begin{array}{cc}\n\\Sigma^2 &\\\\\n& \\sigma^2\n\\end{array}\\right]\n$$\nwhere $\\Sigma_{i,i}>\\sigma_{j,j}$ $\\forall i,j$ because we assumed that the fixed point solution has a positive spectral gap at the $r^{th}$ singular value.\n\n\\noindent Then, we consider a small symmetric perturbation $\\Delta$ and, using the linearity of the operator $\\Gamma_A[.]$, one can write that $$f(S^{(r)}+\\Delta) = f(S^{(r)}) + \\beta^2\\Gamma_A\\left[\\Delta\\right]$$ and\n$$\n[U\\;V]^T\\;\\left(f(S^{(r)}) + \\beta^2\\Gamma[\\Delta]\\right)\\;[U\\;V] = \n\\left[\\begin{array}{cc}\nE_{11}&E_{21}^T\\\\\nE_{21}&E_{22}\n\\end{array}\\right].\n$$\nSince $U$ is in general not an invariant subspace of $f(S^{(r)}+\\Delta)$, $E_{21}$ will be non-zero.\n\\noindent However, we know from \\cite{stewart1973} that there exists a unitary matrix $Q$ such that $UQ$ is an invariant subspace of $f(S^{(r)}+\\Delta)$ if $$0\\leq 4\\beta^2\\left\\|\\Gamma\\left[\\Delta\\right]\\right\\|_F \\leq \\Sigma_{k,k}^2-\\sigma_{1,1}^2.$$ \nIf $\\left\\|\\Delta\\right\\|_F$ is sufficiently small, the rotation matrix $Q$ will not perturb too much the singular values of $f(S^{(r)})$, so $UQ$ will not only be an invariant but also the dominant subspace of $f(S^{(r)}+\\Delta)$, hence the local convergence of the low-rank iterative scheme is guaranteed for sufficiently small $\\beta$. This leads to the following bound for the distance between $S^{(r)}$ and the projection of $f(S^{(r)}+\\Delta)$\n\\begin{equation*}\n\\norm{S^{(r)}-\\Pi^{(r)}\\left[f(S^{(r)}+\\Delta)\\right]}_F \\leq\\gamma\\norm{\\Delta}_F\n\\end{equation*}\nwhere $\\gamma < 1$ if $$\\beta^2 < \\frac{1}{\\norm{A\\otimes A + A^T\\otimes A^T}_2\\left(\\frac{4\\norm{\\Sigma^2}}{\\Sigma_{k,k}^2-\\sigma_{1,1}^2}+1\\right)}$$\nwhich shows the existence of $\\beta$ such that the iteration \\eref{eq:LR} converges.\n\n\\noindent In the next section, we will apply our low-rank iterative scheme to {Erd\\H{o}s-R\\'{e}nyi} random graphs and demonstrate that it allows to successfully extract roles in those networks.\n\n\\section{Numerical experiments}\n\nWe applied our similarity measure to extract roles in {Erd\\H{o}s-R\\'{e}nyi} random graphs containing a block structure. To build such graphs, we first choose a directed role graph $G_B(V_B,E_B)$, i.e.\\ each node in $G_B$ defines a role that we would like to identify. Some of the role graphs that we considered are represented in the first column of each panel of \\fref{fig:rank}. As previously, in the role graphs, the large gray filled circles represent the different roles and the small white circles represent the nodes of the graph. The role graph in the first panel corresponds to a community structure where nodes in a role interact mainly with other nodes in the same role. This kind of role graph often occurs when considering human interactions in online social networks for example \\cite{Kumpula2009} but has been observed in many other networks \\cite{Fortunato2010}. The second panel represents a block cycle role graph, already presented in \\fref{fig:cycle}, where each node interacts mainly with nodes in the following role in the cycle. 
This role graph might represent the behavior of animals in some particular food webs. In the third and fourth panels, the role graphs were simply chosen as representative examples for more complex role interactions without precise real life example in mind.\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\parbox{0.98\\textwidth}{\n\t\\centering\n\\includegraphics[height=0.85\\textheight]{rank_analysis.pdf} \n\t}\n \\caption{Evolution of the adjacency matrix, the extracted roles and the low rank similarity measure $S^{(r)}$ for different random graphs. Each panel corresponds to a chosen role graph represented in the first column. Each panel is divided in $4$ sections corresponding to different values of $p_{in}$ and $p_{out}$. In each section, we present one realization of the adjacency matrix, then the extracted role assignment for each node and finally the evolution $\\norm{S^*-S^{(r)}}_F$ and $\\norm{S^{(r)} - S^{(r+1)}}_F$ for increasing values of $r$.\n}\n\t\\label{fig:rank}\n\\end{figure*}\n\nOnce the role graph $G_B$ has been chosen, we build a random graph $G_A(V_A,E_A)$ where each node in $G_A$ has a corresponding role in $G_B$. That is, for each node $i\\in V_A$, we select a role $R(i)\\in V_B$. Then, we add the edges in $E_A$ using $2$ probability parameters. For every pair of nodes $i,j\\in V_A$, we add the edge $(i,j)\\in E_A$ with probability $p_{in}$ if there is an edge between the corresponding roles in $G_B$, i.e.\\ $\\left(R(i),R(j)\\right)\\in E_B$. If there is no edge between the corresponding roles in $G_B$, the edge is still added with a probability $p_{out}$.\nIf $p_{in}$ is much larger than $p_{out}$, then the role graph $G_B$ is accurately representing the different roles in the graph $G_A$ and it is expected that the pairwise similarity $S^*$ between the vertices $V_A$ should allow the extraction of those roles. On the other hand, if $p_{out}$ is much larger than $p_{in}$, then the different roles in $G_A$ are more closely represented by the complement graph of $G_B$ represented by the adjacency matrix $\\mathbf{1}\\mathbf{1}^T-B$. However, the role structure is still strongly existing in this complement graph and it is expected that the similarity measure $S^*$ should still be able to differentiate them. It is when the $2$ probabilities $p_{in}$ and $p_{out}$ are close to each other that extracting the different roles becomes challenging but, at the same time, the graph becomes closer to a uniform {Erd\\H{o}s-R\\'{e}nyi} graph which is known to be free of any structure.\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\parbox{1.95\\columnwidth}{\n\t\\centering\n\\includegraphics[width=1.9\\columnwidth]{qualityAnalysis.pdf} \n\t}\n \\caption{Average normalized mutual information between the exact role structure and the extracted role structure using the full rank and the low rank similarity measure on different role graphs.\n}\n\t\\label{fig:quality}\n\\end{figure*}\n\nEach of the panels of \\fref{fig:rank} is divided into $4$ sections corresponding to different values of $p_{in}$ and $p_{out}$ for a single role graph, as follows\n\\def1.5{1.5}\n$$\n\\begin{array}{|c|c|c|}\n\\hline\n\\multirow{2}{*}{\\quad} & p_{in} = 0.9 \/ p_{out} = 0.1 & p_{in} = 0.8 \/ p_{out} = 0.2 \\\\ \\cline{2-3}\n\t\t\t\t& p_{in} = 0.7 \/ p_{out} = 0.3 & p_{in} = 0.6 \/ p_{out} = 0.4 \\\\\n\t\t\t\t\\hline\n\\end{array}\n$$\n\n\n\n\\noindent In each section of a panel of \\fref{fig:rank}, we first present the adjacency matrix of one realization of the random graphs $G_A$ generated. 
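\n\nFor reference, the generation procedure described above can be sketched as follows. This is a hypothetical helper (uniform role sizes, self-loops removed, fixed seed) rather than the exact code used for the experiments.\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_block_graph(B, nodes_per_role, p_in, p_out, seed=0):\n    # B is the (binary) adjacency matrix of the role graph G_B.\n    rng = np.random.default_rng(seed)\n    k = B.shape[0]\n    roles = np.repeat(np.arange(k), nodes_per_role)  # R(i) for every node\n    n = roles.size\n    # Edge (i,j) appears with prob. p_in if (R(i),R(j)) is in E_B, else p_out.\n    probs = np.where(B[roles[:, None], roles[None, :]] > 0, p_in, p_out)\n    A = (rng.random((n, n)) < probs).astype(float)\n    np.fill_diagonal(A, 0)\n    return A, roles\n\\end{verbatim}\n\n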
For visual clarity, the matrices have been permuted such that nodes in the same role are next to each other. Then, we represent the role assignment of each node extracted using our low rank similarity matrix $S^{(r)}$ for $r=10$. Based on the similarity measure $S^{(r)}$, we extract the role assignment of each node using the community detection algorithm presented in \\cite{Browet2013}. Indeed, within each role, we expect nodes to be highly similar in their neighborhood patterns. This should lead to a similarity graph, whose weighted adjacency matrix is the similarity matrix $S^{(r)}$, with groups of highly connected nodes in each role, hence containing a community structure. Since the algorithm produces hierarchical communities, we present each level of roles when different levels of clustering were extracted from the similarity graph.\n\n\\noindent The last plot in each section of a panel in \\fref{fig:rank} represents the evolution of $S^{(r)}$ for increasing values of $r$. That is, we compute the norm of the difference between the full rank and each low rank solutions, $\\norm{S^*-S^{(r)}}_F$, and between consecutive low rank solutions, $\\norm{S^{(r)} - S^{(r+1)}}_F$, for increasing values of $r$. This should reveal the minimal rank required for $S^{(r)}$ to be a qualitatively good approximation of $S^*$.\n\nThe results of \\fref{fig:rank} clearly show that the different roles within each network can be well extracted using the low rank similarity graph up to some high level of noise. In the first role network, each community is correctly extracted until $p_{in} = 0.6$ and $p_{out} = 0.4$. However, even if the network is really noisy for those parameters, as represented by the adjacency matrix, the first and the third communities are pretty well clustered and the second community is mainly split in $2$. The same observation might be done for the block cycle role graph for which all the roles are perfectly extracted for the first three probability parameters. Again, in the last section, the second and third roles are essentially split in $2$ different clusters but there is only a few nodes with inappropriate role assignments in the final level of clustering. Those first $2$ role graphs have a strong role structure and adding any edge in the role graph would not alter it, i.e.\\ this will not create isomorphic roles. This explains why the role structures are correctly extracted even for high level of noise.\n\nThe third role graph is less strongly defined because if one edge is added from the second block to the first block, the second and the third roles would become isomorphic. Indeed, we observe that some nodes are incorrectly clustered from the second role to the third role for $p_{in}=0.7$ and $p_{out}=0.3$. This might also explain why the second and third roles are grouped together in the final level of clustering for the first two probability parameters but this might also be due to some resolution limit of the community detection algorithm \\cite{Schaub2012,Fortunato2007}. For high level of noise, clustering the pairwise similarity matrix does not provide an accurate result, however, the adjacency matrix clearly indicates that the role structure is very weak.\n\nIn the last role graph composed of $4$ distinct blocks, the results are again reasonably good. 
Except for an additional merge in the last level of clustering for $p_{in}=0.9$ and $p_{out}=0.1$, all the roles are correctly extracted, and even for the last pair of probability parameters, which leads to a high level of noise, each role tends to be correctly extracted. Only a small number of nodes are incorrectly classified for the first and last blocks, and the second and third blocks are only bisected, as previously observed.\n\nWe also observe that the evolution of the low rank similarity matrix $S^{(r)}$ might be used to reveal the number of roles in the networks. Indeed, when the different roles in the networks are strongly defined, we observe an abrupt variation in the decay of the norm of the differences $\\norm{S^*-S^{(r)}}_F$ and $\\norm{S^{(r)}-S^{(r+1)}}_F$. This abrupt variation indicates that we do not need to consider larger values of the rank to extract qualitatively good roles in the networks, since the gain in precision of the similarity measure becomes very small afterwards. What is also interesting is that this abrupt variation always occurs when the rank hits the exact number of roles in the networks. When the networks are highly noisy, we do not observe such an abrupt variation, which could indicate that the clustering of the nodes according to the similarity matrix will not produce relevant results. Observing the evolution of the low rank similarity matrix could therefore become a strong indicator of the quality of the extracted roles for real networks when the exact block structure is not known.\n\nFinally, we quantitatively compare the clusters extracted using the full rank similarity $S^*$ and the low rank similarity measure $S^{(r)}$. For each of the different role graphs previously introduced, we compute the normalized mutual information (\\textit{NMI}) \\cite{Danon2005} between the exact role structure and the extracted role assignments using $S^*$ or $S^{(r)}$ and the community detection algorithm. The NMI ranges in $[0,1]$ and is large if the two distributions are similar. More precisely, for each role graph, we generate $20$ random graphs for each pair of probability parameters $p_{in}$ and $p_{out}$ in $[0,1]$ with a discretization step size of $0.05$, and we compute the average NMI over those $20$ realizations of the {Erd\\H{o}s-R\\'{e}nyi} random graphs. The results are presented in \\fref{fig:quality}. As expected, we observe that the extracted roles are accurate when either $p_{in} \\gg p_{out}$ or the opposite. As we mentioned previously, the third role graph seems harder to recover, due either to a resolution limit phenomenon or to the almost isomorphic behavior of two of the role nodes. Nevertheless, we observe that the low rank similarity matrix $S^{(r)}$ produces results that are almost identical to those of the full rank similarity $S^*$. This leads us to conclude that, if the rank is sufficiently large, one can always use our low rank pairwise similarity measure to extract role structures in networks. The low rank similarity matrix will always be easier to compute and will produce highly similar results.\n\n\\section{Conclusion}\n\nIn this paper, we present a pairwise similarity measure between the nodes of a graph that allows the extraction of roles or block structures within the graph. These roles generalize the concept of communities often studied in the literature. We then present a low rank iterative scheme to approximate the pairwise similarity measure and prove its convergence when the parameter $\\beta$ is sufficiently small. 
We applied the similarity measure and its low rank approximation to {Erd\\H{o}s-R\\'{e}nyi} random graphs containing a block structure and showed that, if the noise level is not too large and the block structure correctly represents the different roles of the nodes in the network, our similarity measure and its low rank approximation accurately extract these blocks. We also showed that analyzing the evolution of the low rank similarity measure might reveal the number of roles in the networks and might also indicate whether the extracted clusters are relevant. Finally, we demonstrated that the pairwise similarity measure and the low rank approximation produce very similar results, hence justifying the use of the low rank approximation in practical examples when computing the full rank measure is computationally too expensive. In future work, we plan to apply our low rank similarity measure to other kinds of random graphs, e.g.\\ scale-free networks. We will also apply our measure to real networks like food webs, international exchange networks or word graphs, in order to automatically uncover similar types of words in the construction of sentences, a task known as ``tagging'' in natural language processing. We will also analyze the behavior of our similarity measure on weighted networks.\n\n\\addtolength{\\textheight}{-12cm}\n\n\\section*{ACKNOWLEDGMENT}\n\nWe would like to thank N. Boumal, R. Jungers and J. Hendrickx for their useful comments regarding the subject.\\\\\n\n\\noindent This paper presents research results of the Belgian Network DYSCO (Dynamical Systems, Control, and Optimization), funded by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office. The scientific responsibility rests with its author(s).\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nArtificial General Intelligence (AGI) requires an intelligent agent to solve a wide variety of real-world tasks.\nLearning to solve these tasks efficiently involves sharing knowledge between tasks, and systematic generalization from relatively few samples. \nIn contrast, agents trained with Reinforcement Learning (RL) frequently fall short in this regard: they rely on excessive amounts of data~\\cite{francois-lavet2018introduction} and are unable to generalize beyond their initial training regime~\\cite{leike2017ai}. %\n\nModel-based RL promises to alleviate this problem by using a general (non task-specific) world model that captures the latent structure of the environment.\nThis more abstract knowledge about the world is expected to be useful for many (even novel) tasks and facilitate simulation and planning.\nIt is unclear what constitutes a good model, and frequently models are either engineered~\\cite{deavilabelbute-peres2018endtoend} or obtained by training a deep (recurrent) neural network to predict future states of the world~\\cite{schmidhuber1990making,ha2018recurrent}. 
\nIn the latter case, an underlying assumption is that the learned representations of such a network present suitable abstractions for transfer and planning, analogous to the versatility of features learned by a deep convolutional image classifier.\nHowever, the limited success of learned models for model-based RL in these domains raises doubts about the validity of this assumption.\n\nIn this work, we argue that, rather than learning a single monolithic model that handles all situations in all environments, what is needed is a flexible system which dynamically infers a suitable model on the fly. %\nA human playing the game of \\emph{Space Invaders} uses a mental model that revolves around space ships and aliens, without simultaneously also considering all other aspects of the real world that are relevant for other tasks.\nHumans are also quick to adapt their model to new information by adding or removing additional assumptions.\nFor example, reading the manual of a game before playing greatly increases first-episode performance~\\cite{tsividis2017human}. %\n\nInitially, it may appear that in arguing for a dynamic model we have mostly made the task of model-learning harder: we now require learning many different models that fit specific situations. \nWhy then would we expect such a model to perform any better or even work at all?\n\n\\paragraph{Objects}\nObjects are the key piece to this puzzle, in that they facilitate the modular reuse of prior knowledge and the combinatorial construction of novel models.\nIt is well-established that objects play a central role in human cognition, both for internal reasoning and as the basis for communicating about the world.\nIndeed, objects are widely considered to be core knowledge~\\cite{spelke2007core}, and infants learn about objects already within their first year of life~\\cite{munakata1997rethinking}.\n\nRL methods that leverage the combinatorics of objects and relations have shown similar benefits in terms of systematic generalization~\\cite{zambaldi2019deep}, sample efficiency~\\cite{diuk2008objectoriented}, and transferring skills and knowledge across domains~\\cite{kansky2017schema}. %\nRecently, there appears to be an emerging consensus that objects are important for learning intelligent agents~\\cite{lake2017building}, but it remains unclear how to fully realize this potential.\nThe discrete and compositional nature of objects seems at odds with many of the core tenets of connectionism, and they are unlikely to emerge naturally in neural networks.\nReconciling the two is a difficult problem and requires careful thought to ensure a synergistic integration. %\n\n\\paragraph{The Binding Problem}\n\nHow then should we think about objects?\nWhy do they not simply emerge in neural networks, what is missing, and how can this be addressed?\nMany of these questions have been raised and debated in theoretical neuroscience and have become known as the binding problem: how does the brain bind features together into objects while keeping them separate from other objects?\nInspired by this literature (cf. \\citet{treisman1999solutions, vondermalsburg1995binding}), we will focus on three main challenges in incorporating objects in connectionist models of the world: segregation, representation, and composition, which we discuss in the next sections.\n\nSegregation is about object discovery, i.e.
given a set of observations, what are good candidates for representational objects and how can they be extracted.\nRepresentation is about storing this representational content in neural networks, and as we will find, plain fully-connected feedforward networks are ill-equipped to solve this task.\nFinally, composition is about using representational objects efficiently in a way that ensures combinatorial generalization (\\emph{systematicity}; \\citet{niklasson1994being}).\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Representation}\n\\label{sec:representation}\n\n\n\n\nWhat are good object representations? \nIf objects are to serve as the primitives for compositional reasoning, it is important that their representations support that end.\nHere we argue for three main requirements:\\footnote{Keeping space limitations in mind we will not attempt at exhaustively listing all requirements. Instead we focus on those that we believe to be most important.}\n\n\\begin{description}\n \\item[Universal] Each object representation should be able to represent any object regardless of position, class or other properties. \n It should facilitate generalization, even to unseen objects (zero-shot generalization), which in practice means that its representation should be distributed and disentangled.\n \\item[Multi-Object] It should be possible to represent multiple objects simultaneously, such that they can be related and composed but also transformed individually. \n This only needs to cover a small number of objects at the same time (e.g. $7\\pm2$; \\citealt{miller1956magical}), since there is an intractable number of possible objects. \n Instead, objects should be swapped in and out of this working memory on demand.\n \\item[Common Format] All objects should be represented in the same format, i.e. in terms of the same features. \n This makes representations comparable, provides a unified interface for compositional reasoning and allows the transfer of knowledge between objects. %\n\\end{description}\n\n\nIt is easy to see how regular representations of fully-connected neural networks fall short in this regard:\nWhen representing multiple objects, they can either reuse the same features for all objects simultaneously, thus superimposing representations which leads to ambiguities (\\{red, square\\} + \\{blue, triangle\\} = \\{red, blue, square, triangle\\}).\nAlternatively, they can allocate a different set of features per object which violates common format.\nWithout any architectural bias in the form of weight sharing, useful multi-object representations are thus unlikely to emerge naturally in a neural network.\nIn what way can this problem be addressed?\n\nWeight sharing, as it is used, for example, in ConvNets and RNNs, is a step in the right direction.\nWe call these approaches ``slot-based'' because they provide several slots that all share weights and can thus be used to represent objects in a common format.\nIn the case of RNNs there is one slot per time-step~\\cite{eslami2016attend}, while in ConvNets there is one slot per spatial location in the image~\\cite{santoro2017simple}.\nNote that both are in slight violation of universality because they tie a slot to a specific time step or location, while RNNs additionally do not \\emph{simultaneously} represent multiple objects. 
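\n\nThe superposition ambiguity mentioned above is easy to verify numerically. The sketch below uses hypothetical one-hot attribute features purely for illustration:\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import permutations\n\n# Hypothetical one-hot features: red, blue, square, triangle.\nred, blue, square, triangle = np.eye(4)\n\n# Superimposed (slot-free) encodings of two different scenes.\nscene_a = (red + square) + (blue + triangle)\nscene_b = (red + triangle) + (blue + square)\nprint(np.allclose(scene_a, scene_b))   # True: the binding is lost\n\n# Instance slots (one row per object, common format) keep the pairing.\nslots_a = np.stack([red + square, blue + triangle])\nslots_b = np.stack([red + triangle, blue + square])\nmatch = any(np.allclose(slots_a[list(p)], slots_b)\n            for p in permutations(range(2)))\nprint(match)                            # False: the scenes stay distinct\n\\end{verbatim}\n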
\nWe can extend the idea of representational slots and consider a setting in which each object has its own universal slot and all slots share a common format (instance slots, c.f.~\\cref{fig:slots}).\nWhile this constitutes a good object representation, it raises another problem: if all slots are identical and share weights, then how do they not end up all representing the same object?\nSolving this conundrum requires a dynamic information routing process that goes beyond simple feed-forward processing (see~\\cref{sec:segregation}). \n\nThere are two, less developed, alternatives to slot-based approaches that have the potential to meet our requirements: \n\\emph{Augmentation} approaches keep a single set of features but augment each feature to include some extra grouping information. \nExamples include complex-valued activations (e.g. \\citet{reichert2013neuronal}) or spiking networks that encode grouping via synchronization (e.g. \\citet{lane1998simple}).\n\\emph{Embedding} approaches carefully embed multi-object representations in a higher-dimensional space (e.g. \\emph{Tensor Product Representations};~\\citealt{smolensky1990tensor}).\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{approach_replicate.pdf}\n\\caption{Different types of \\emph{slot-based} representation strategies.}\n\\label{fig:slots}\n\\vspace{-10pt}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Segregation}\n\\label{sec:segregation}\n\nIt can be difficult to provide precise boundaries or definitions even for concrete objects like a tree, a mountain, and a river.\nMatters become even worse with slightly more abstract objects like a hole, a shadow, or a street corner.\nClearly, sensory information does not come pre-structured into objects, yet we so effortlessly and consistently perceive them. %\nHow can we aid our agents in developing an equally general understanding of objects?\nWe address this by focusing on the role of objects as computational primitives in a compositional reasoning system, namely as abstract patterns of the input data that are modular, dynamic, and consistent:\n\n\n\n\n\\begin{description}\n \\item[Modular] Objects subdivide the input into parts with strong internal coherence while being mostly independent of each other given some task under consideration.\n This division can be thought of as a form of clustering by mutual predictability and helps minimize the error that results from treating them as independent entities. \n \n \n \\item[Dynamic] Objects are task-dependent, i.e. there is no one fixed definition of objects that applies to all tasks.\n For example, objects can be part-whole hierarchies whose parts are objects themselves: a stack of chairs can be viewed as a single object (the stack) or as multiple objects (the individual chairs). \n It necessitates top-down feedback: interaction between the up-stream problem solving and down-stream segregation to obtain a dynamic definition of objects.\n \\item[Consistent] \n Representational objects often ``refer to'' physical objects in the real world (although this does not need to be the case), and their usefulness depends on the reliability of that link. \n The output of the segregation process must thus be stable and consistent to ensure that the results from internal reasoning can be mapped back onto the environment.\n Consistency is also important in communication (different agents should agree on objects), and in the absence of information, e.g. 
as a result of occlusion.\n\\end{description}\n\nModularity rules out standard convolutional neural networks as a means to learn object representations given by the representational content at each spatial slot.\nEach convolutional layer with a kernel size exceeding $1 \\times 1$ creates dependencies between local spatial neighborhoods. \nThrough depth, the representational content of the upper layers encodes information from all spatial positions and is no longer modular: a change affecting a single object in the input image affects the representations at \\emph{all} spatial locations in the upper layers. \n\nDynamicity implies that we cannot treat segregation as a pre-processing step that extracts objects from input data.\nThis rules out the use of large quantities of labeled data to pre-train an image segmenter, or the use of domain-specific engineering as is commonly found in generative models that essentially encode a fixed definition of object.\nMoreover, human labor is an expensive resource that we cannot spend exhaustively to cover all possible situations.\n\nWe conclude that, to a large extent, object learning must be \\emph{unsupervised} through a specialized mechanism that allows for the possibility of incorporating top-down feedback.\nTwo promising approaches from the literature are \\emph{attention} and \\emph{differentiable clustering}.\n\nAttention mechanisms are used to selectively attend to a subset of the image, i.e. parts that correspond to a single object~\\cite{schmidhuber1991learning, eslami2016attend}.\nIn this way, attention restricts the information intake and ensures that the resulting representations are modular. \nTop-down feedback can be incorporated by granting control of the attention window to the agent that learns to solve some task~\\cite{mnih2014recurrent}.\nA downside is that objects are processed in an iterative fashion, which may make it more challenging to reason about multiple objects simultaneously~\\cite{kosiorek2018sequential}.\n\nAn alternative mechanism is differentiable clustering, which seeks to partition the input into a number of segments while learning the similarity function.\nIndividual segments are disjoint and result in modularity, while the iterative nature of these clustering procedures allows top-down feedback to be incorporated~\\cite{greff2017neural, greff2019multiobject}.\n\n\\section{Composition}\n\\label{sec:composition}\n\nLet us now assume that representation and segregation have been addressed, and we have available a set of relevant independent objects represented in a common format.\nNote that when used correctly, these object representations can already make tasks like performing basic feature comparisons very easy.\nFor example, a function that receives a pair of objects as input and compares their size-related features could easily be learned, and would almost automatically generalize to arbitrary pairs of objects.%\n\nIn contrast, combinatorial generalization is not a given for more complex relational reasoning. 
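\n\nAs an aside, the easy case above can be made concrete: a single shared comparison function applies to any pair of slots because all slots use the same feature format. The sketch below is purely illustrative; the feature dimension, the random parameters and the function itself are hypothetical stand-ins for a learned comparator.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nslots = rng.normal(size=(5, 8))   # 5 objects, 8 shared features each\nw = rng.normal(size=16)           # one parameter vector reused for all pairs\n\ndef compare(slot_i, slot_j, w):\n    # Shared pairwise function: applies unchanged to any (even unseen) pair.\n    return np.tanh(w @ np.concatenate([slot_i, slot_j]))\n\nscores = np.array([[compare(a, b, w) for b in slots] for a in slots])\n\\end{verbatim}\n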
\nWhile it also involves learning general functions that accept objects as their arguments, one has to take extra care in being able to flexibly assign the right objects to their corresponding function arguments, as well as in learning about different structural forms that imply different ways of generalizing~\\cite{kemp2008discovery}.\nThese then imply the following requirements:\n\\begin{description}\n\\item[General Relations]\nRelations differ both in their meaning and in the patterns of generalization that they imply. \nA general reasoning system, therefore, has to be able to instantiate many different types of relations, which necessitates a general representational form.\n\n\\item[Dynamic Binding] \nIn order to construct a model for a specific situation, the system needs the flexibility to freely combine objects and relations into an arbitrary structure.\nBoth the structure of relations and the associated objects (\\emph{variable binding}; \\citealt{browne2000connectionist}) have to be inferred dynamically during run-time.\n\n\\item[Role-filler Independence] The content of objects should be independent of their structural roles~\\cite{hummel2003symbolicconnectionist}.\nThat is, any object can take part in any relation, and the interpretation of the whole is determined by both the parts and the structure.\nThis is related to \\emph{common format} and is the key to compositionality that enables the powerful systematic generalization that is characteristic of many symbolic systems.\n\n\\end{description}\n\nOne approach is to implement complex relational reasoning in a sequential fashion. \nAt each step, an object associated with a particular role is processed and the resulting intermediate computation is stored, to be combined in the next step.\nWhile it is clear that a plain RNN can perform this type of computation, the dual role of intermediate representations in representing objects and intermediate computation suggests a very specific function that may be hard to learn~\\cite{graves2014neural}.\nAlternatively, by combining an RNN with a suitable memory mechanism (e.g.~\\citet{Das:92,mozer1993connectionist,reed2015neural,graves2016hybrid}) or fast weights (e.g.~\\citet{Schmidhuber:92ncfastweights,Schmidhuber:93ratioicann,schlag2018learning}) it may be easier to learn general functions of this kind.\n\nAn alternative approach is to embed objects and intermediate representations as nodes in a (directed) graph and let computation take place along its edges.\nThese computation graphs can implement arbitrary relationships, including recursive computation by re-applying the same function successively.\nGraph Networks~\\cite{battaglia2018relational} structure neural network computations according to this underlying graph and perform relational reasoning through repeated message-passing between the nodes in the graph. %\nCompositionality is achieved through weight-sharing, i.e. by learning a general function that operates on (pairs of) nodes following their topological relationship.\nHowever, while graph networks have been successfully applied in the domain of physical reasoning (e.g. \\citet{battaglia2016interaction, vansteenkiste2018relational}), a remaining challenge is in dynamically inferring the right structure (i.e.
dynamic binding).\n\nWhile graph networks appear most promising in addressing the challenges of composition, one other approach deserves a mention.\n\\emph{Embedding approaches}, such as Poincar\\'{e} embeddings~\\cite{nickel2017poincare}, generalize Euclidean representations to other spaces that are better suited to modeling certain types of relations, in this case: hierarchical relationships.\nHowever, the feature representations are essentially adapted to reflect the underlying relation during training, which implies fixed roles and binding during inference.\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nWe have argued that feature representations alone are inadequate abstractions for planning, reasoning, and for systematically transferring knowledge to novel situations.\nTo meet the diverse challenges on our quest towards AGI, an agent needs to be able to dynamically construct new models about its environment on the fly while reusing as much prior knowledge as possible. \nWe have argued that objects (dynamically bound features) are adequate building blocks to quickly and flexibly compose such task-specific models.\nAlthough our examples were centered around vision, we believe that the notion of objects applies equally to other domains like audio, touch, and even abstract thought.\nBy focusing on their role as compositional primitives, we have identified some inductive biases that we believe are necessary for objects to arise within a connectionist system.\nThey can be categorized into three areas: representation, segregation, and composition of objects.\n\nAmong these three, we find that segregation is most frequently neglected and deserves more attention.\nCommon approaches rely on some combination of pre-processing pipelines, supervision, or highly engineered generative models of objects.\nMeanwhile, the few approaches that tackle this challenge in a holistic and unsupervised way are brittle and have not yet been scaled to real-world data.\nDeveloping better methods for tackling the segregation problem within the framework of connectionism is going to be a central challenge on the way towards AGI.\nSimilarly, we would like to stress the importance of integrating solutions to all three aspects into a single system.\nThe potential of objects as modular building blocks can only be realized in full if they are informed both by learned representations and by feedback from the composite model. \n\nAnother important direction is the integration of objects with other critical cognitive mechanisms such as attention and memory.\nBecause objects are optimized to be modular, they naturally aggregate features that need to be processed together, but which can be separated from other information.\nThis makes them ideal primitives for attention and for storage and retrieval from long-term memory.\nAttention, in turn, can simplify a task by filtering out irrelevant information and can guide the processing required for more complex reasoning chains.\nSuch a reasoning process could then also query objects from memory on demand, to be compared to or integrated with the current model. \n\nWith this short essay, we hope to draw attention to the intricacies of objects and inspire others to think critically about their integration in connectionist models. 
\n\n\\section*{Acknowledgements}\n\nThis research was funded by SNF grant 200021\\_165675\/1.\n
%\nRecently, there appears to be an emerging consensus that objects are important in learning intelligent agents~\\cite{lake2017building}, while it remains unclear in how to fully realize this potential.\nThe discrete and compositional nature of objects seems at odds with many of the core tenets of connectionism, and they are unlikely to emerge naturally in neural networks.\nReconciling the two is a difficult problem and requires careful thought to ensure a synergistic integration. %\n\n\n\n\n\n\n\n\\paragraph{The Binding Problem}\n\nHow then should we think about objects?\nWhy do they not simply emerge in neural networks, what is missing, and how can this be addressed?\nMany of these questions have been raised and debated in theoretical neuroscience and have become known as the binding problem: How does the brain bind features together into objects while keeping them separate from other objects.\nInspired by this literature (cf. \\citet{treisman1999solutions, vondermalsburg1995binding}) we will focus on three main challenges in incorporating objects in connectionist models of the world: segregation, representation, and composing, which we discuss in the next sections.\n\nSegregation is about object discovery, i.e. given a set of observations, what are good candidates for representational objects and how can they be extracted.\nRepresentation is about storing this representational content in neural networks, and as we will find, plain fully-connected feedforward networks are ill-equipped to solve this task.\nFinally, composition is about using representational objects efficiently in a way that ensures combinatorial generalization (\\emph{systematicity}; \\citet{niklasson1994being}).\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Representation}\n\\label{sec:representation}\n\n\n\n\nWhat are good object representations? \nIf objects are to serve as the primitives for compositional reasoning, it is important that their representations support that end.\nHere we argue for three main requirements:\\footnote{Keeping space limitations in mind we will not attempt at exhaustively listing all requirements. Instead we focus on those that we believe to be most important.}\n\n\\begin{description}\n \\item[Universal] Each object representation should be able to represent any object regardless of position, class or other properties. \n It should facilitate generalization, even to unseen objects (zero-shot generalization), which in practice means that its representation should be distributed and disentangled.\n \\item[Multi-Object] It should be possible to represent multiple objects simultaneously, such that they can be related and composed but also transformed individually. \n This only needs to cover a small number of objects at the same time (e.g. $7\\pm2$; \\citealt{miller1956magical}), since there is an intractable number of possible objects. \n Instead, objects should be swapped in and out of this working memory on demand.\n \\item[Common Format] All objects should be represented in the same format, i.e. in terms of the same features. \n This makes representations comparable, provides a unified interface for compositional reasoning and allows the transfer of knowledge between objects. 
%\n\\end{description}\n\n\nIt is easy to see how regular representations of fully-connected neural networks fall short in this regard:\nWhen representing multiple objects, they can either reuse the same features for all objects simultaneously, thus superimposing representations which leads to ambiguities (\\{red, square\\} + \\{blue, triangle\\} = \\{red, blue, square, triangle\\}).\nAlternatively, they can allocate a different set of features per object which violates common format.\nWithout any architectural bias in the form of weight sharing, useful multi-object representations are thus unlikely to emerge naturally in a neural network.\nIn what way can this problem be addressed?\n\nWeight sharing, as it is used, for example, in ConvNets and RNNs, is a step in the right direction.\nWe call these approaches ``slot-based'' because they provide several slots that all share weights and can thus be used to represent objects in a common format.\nIn the case of RNNs there is one slot per time-step~\\cite{eslami2016attend}, while in ConvNets there is one slot per spatial location in the image~\\cite{santoro2017simple}.\nNote that both are in slight violation of universality because they tie a slot to a specific time step or location, while RNNs additionally do not \\emph{simultaneously} represent multiple objects. \nWe can extend the idea of representational slots and consider a setting in which each object has its own universal slot and all slots share a common format (instance slots, c.f.~\\cref{fig:slots}).\nWhile this constitutes a good object representation, it raises another problem: if all slots are identical and share weights, then how do they not end up all representing the same object?\nSolving this conundrum requires a dynamic information routing process that goes beyond simple feed-forward processing (see~\\cref{sec:segregation}). \n\nThere are two, less developed, alternatives to slot-based approaches that have the potential to meet our requirements: \n\\emph{Augmentation} approaches keep a single set of features but augment each feature to include some extra grouping information. \nExamples include complex-valued activations (e.g. \\citet{reichert2013neuronal}) or spiking networks that encode grouping via synchronization (e.g. \\citet{lane1998simple}).\n\\emph{Embedding} approaches carefully embed multi-object representations in a higher-dimensional space (e.g. \\emph{Tensor Product Representations};~\\citealt{smolensky1990tensor}).\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{approach_replicate.pdf}\n\\caption{Different types of \\emph{slot-based} representation strategies.}\n\\label{fig:slots}\n\\vspace{-10pt}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Segregation}\n\\label{sec:segregation}\n\nIt can be difficult to provide precise boundaries or definitions even for concrete objects like a tree, a mountain, and a river.\nMatters become even worse with slightly more abstract objects like a hole, a shadow, or a street corner.\nClearly, sensory information does not come pre-structured into objects, yet we so effortlessly and consistently perceive them. 
%\nHow can we aid our agents in developing an equally general understanding of objects?\nWe address this by focusing on the role of objects as computational primitives in a compositional reasoning system, namely as abstract patterns of the input data that are modular, dynamic, and consistent:\n\n\n\n\n\\begin{description}\n \\item[Modular] Objects subdivide the input into parts with strong internal coherence while being mostly independent of each other given some task under consideration.\n This division can be thought of as a form of clustering by mutual predictability and helps minimize the error that results from treating them as independent entities. \n \n \n \\item[Dynamic] Objects are task-dependent, i.e. there is no one fixed definition of objects that applies to all tasks.\n For example, objects can be part-whole hierarchies whose parts are objects themselves: a stack of chairs can be viewed as a single object (the stack) or as multiple objects (the individual chairs). \n This necessitates top-down feedback: interaction between the upstream problem solving and the downstream segregation to obtain a dynamic definition of objects.\n \\item[Consistent] \n Representational objects often ``refer to'' physical objects in the real world (although this does not need to be the case), and their usefulness depends on the reliability of that link. \n The output of the segregation process must thus be stable and consistent to ensure that the results from internal reasoning can be mapped back onto the environment.\n Consistency is also important in communication (different agents should agree on objects), and in the absence of information, e.g. as a result of occlusion.\n\\end{description}\n\n\n\n\nModularity rules out standard convolutional neural networks as a means to learn object representations given by the representational content at each spatial slot.\nEach convolutional layer with a kernel size exceeding $1 \\times 1$ creates dependencies between local spatial neighborhoods. \nThrough depth, the representational content of the upper layers encodes information from all spatial positions and is no longer modular: a change affecting a single object in the input image affects the representations at \\emph{all} spatial locations in the upper layers. \n\nDynamicity implies that we cannot treat segregation as a pre-processing step that extracts objects from input data.\nThis rules out the use of large quantities of labeled data to pre-train an image segmenter, or the use of domain-specific engineering as is commonly found in generative models that essentially encode a fixed definition of objects.\nMoreover, human labor is an expensive resource that we cannot spend exhaustively to cover all possible situations.\n\n\nWe conclude that to a large extent object learning must be \\emph{unsupervised}, through a specialized mechanism that allows for the possibility of incorporating top-down feedback.\nTwo promising approaches from the literature are \\emph{attention} and \\emph{differentiable clustering}.\n\nAttention mechanisms are used to selectively attend to a subset of the image, i.e. parts that correspond to a single object~\\cite{schmidhuber1991learning, eslami2016attend}.\nIn this way, attention restricts the information intake and ensures that the resulting representations are modular. 
\nTop-down feedback can be incorporated by granting control of the attention window to the agent that learns to solve some task~\\cite{mnih2014recurrent}.\nA downside is that objects are processed in an iterative fashion, which may make it more challenging to reason about multiple objects simultaneously~\\cite{kosiorek2018sequential}.\n\nAn alternative mechanism is differentiable clustering, which seeks to partition the input in a number of segments while learning the similarity function.\nIndividual segments are disjoint and result in modularity, while the iterative nature of these clustering procedures allow top-down feedback to be incorporated~\\cite{greff2017neural, greff2019multiobject}.\n\n\n\n\n\n\n\n\n\n\n\n\n\t\n\n\\section{Composition}\n\\label{sec:composition}\n\n\n\n\n\n\nLet us now assume that representation and segregation have been addressed, and we have available a set of relevant independent objects represented in a common format.\nNote that when used correctly, these object representations can already make tasks like performing basic feature comparisons very easy.\nFor example, a function that receives a pair of objects as input and compares their size-related features could easily be learned, and would almost automatically generalize to arbitrary pairs of objects.%\n\nIn contrast, combinatorial generalization is not a given for more complex relational reasoning. \nWhile it also involves learning general functions that accept objects as their arguments, one has to take extra care in being able to flexibly assign the right objects to their corresponding function arguments, as well as in learning about different structural forms that imply different ways of generalizing~\\cite{kemp2008discovery}.\nThese then imply the following requirements:\n\\begin{description}\n\\item[General Relations]\nRelations differ both in their meaning and in the patterns of generalization that they imply. \nA general reasoning system, therefore, has to be able to instantiate many different types of relations, which necessitates a general representational form.\n\n\\item[Dynamic Binding] \nIn order to construct a model for a specific situation, the system needs the flexibility to freely combine objects and relations into an arbitrary structure.\nBoth the structure of relations and the associated objects (\\emph{variable binding}; \\citealt{browne2000connectionist}) have to be inferred dynamically during run-time.\n\n\\item[Role-filler Independence] The content of objects should be independent of their structural roles~\\cite{hummel2003symbolicconnectionist}.\nThat is, any object can take part in any relation, and the interpretation of the whole is determined by both the parts and the structure.\nThis is related to \\emph{common format} and is the key to compositionality that enables the powerful systematic generalization that is characteristic of many symbolic systems.\n\n\\end{description}\n\nOne approach is to implement complex relational reasoning in a sequential fashion. 
\nAt each step, an object associated with a particular role is processed and the resulting intermediate computation is stored, to be combined in the next step.\nWhile it is clear that a plain RNN can perform this type of computation, the dual role of intermediate representations in representing objects and intermediate computation suggests a very specific function that may be hard to learn~\\cite{graves2014neural}.\nAlternatively, by combining an RNN with a suitable memory mechanism (e.g.~\\citet{Das:92,mozer1993connectionist,reed2015neural,graves2016hybrid}) or fast\nweights (e.g.~\\citet{Schmidhuber:92ncfastweights,Schmidhuber:93ratioicann,schlag2018learning}) it may be easier to learn general functions of this kind.\n\n\nAn alternative approach is to embed objects and intermediate representations as nodes in a (directed) graph and let computation take place along its edges.\nThese computation graphs can implement arbitrary relationships, including recursive computation by re-applying the same function successively.\nGraph Networks~\\cite{battaglia2018relational} structure neural network computations according to this underlying graph and perform relational reasoning through repeated message-passing between the nodes in the graph. %\nCompositionality is achieved through weight-sharing, i.e. by learning a general function that operates on (pairs of) nodes following their topological relationship.\nHowever, while graph networks have been successfully applied in the domain of physical reasoning (e.g. \\citet{battaglia2016interaction, vansteenkiste2018relational}), a remaining challenge is dynamically inferring the right structure (i.e. dynamic binding).\n\nWhile graph networks appear most promising in addressing the challenges of composition, one other approach deserves mention.\n\\emph{Embedding approaches}, such as Poincar\\'{e} embeddings~\\cite{nickel2017poincare}, generalize Euclidean representations to other spaces that are better suited to modeling certain types of relations, in this case hierarchical relationships.\nHowever, the feature representations are essentially adapted to reflect the underlying relation during training, which implies fixed roles and binding during inference.\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nWe have argued that feature representations alone are inadequate abstractions for planning, reasoning, and for systematically transferring knowledge to novel situations.\nTo meet the diverse challenges on our quest towards AGI, an agent needs to be able to dynamically construct new models of its environment on the fly while reusing as much prior knowledge as possible. 
\nWe have argued that objects (dynamically bound features) are adequate building blocks to quickly and flexibly compose such task-specific models.\nAlthough our examples were centered around vision, we believe that the notion of objects applies equally to other domains like audio, tactile and even abstract thought.\nBy focusing on their role as compositional primitives, we have identified some inductive biases that we believe are necessary for objects to arise within a connectionist system.\nThey can be categorized into three areas: representation, segregation, and composition of objects.\n\nAmong these three we find that segregation is most frequently neglected and deserves more attention.\nCommon approaches rely either on some combination of pre-processing pipelines, supervision, or highly engineered generative models of objects.\nMeanwhile, the few approaches that tackle this challenge in a holistic and unsupervised way are brittle and have not yet been scaled to real-world data.\nDeveloping better methods for tackling the segregation problem within the framework of connectionism is going to be a central challenge on the way towards AGI.\nSimilarly, we would like to stress the importance of integrating solutions to all three aspects into a single system.\nThe potential of objects as modular building blocks can only be realized in full if they are both informed by learned representations, and by feedback from the composite model. \n\nAnother important direction is the integration of objects with other critical cognitive mechanisms such as attention and memory.\nBecause objects are optimized to be modular, they naturally aggregate features that need to be processed together, but which can be separated from other information.\nThis makes them ideal primitives for attention and for storage and retrieval from long-term memory.\nAttention, in turn, can simplify a task by filtering out irrelevant information and can guide the processing required for more complex reasoning chains.\nSuch a reasoning process could then also query objects from memory on demand to be compared to or integrated with the current model. \n\nWith this short essay, we hope to draw attention to the intricacies of objects and inspire others to think critically about their integration in connectionist models. \n\n \n\n\n\n\n\n\n\n\\section*{Acknowledgements}\n\nThis research was funded by SNF grant 200021\\_165675\/1.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:introduction}Introduction}\n\n\nIn eddy viscosity modelling, the specific relationship between the Reynolds stress tensor and the mean velocity gradient tensor is the primary subject of interest. Many relationships have been proposed, from simple mixing length models to the sophisticated elliptic blending $k$-$\\varepsilon$ model. Despite the well-known limitations of the eddy viscosity models for turbulent flows, most industrial simulations still rely on these types of turbulence closures within the Reynolds-averaged Navier-Stokes (RANS) framework because of their computational simplicity and efficiency \\cite{Witherden2017}. While the inherent assumption of a local dependence in eddy viscosity modelling is nearly impossible to avoid, improving the functional relationship between the Reynolds stress tensor and the mean velocity gradient tensor remains critically important for accurate industrial simulations \\cite{CFD2030}. 
\n\n\nTraditionally, turbulence closure models are formulated using heuristic and physics-based arguments. Common examples include the analogy drawn between turbulent stresses and viscous stresses, or the model transport equations for turbulent kinetic energy (TKE) $k$ and a length-scale determining variable such as $\\varepsilon$ (TKE dissipation rate). However, a new approach in turbulence modelling is the physics-\\textit{informed} data-driven approach \\cite{Duraisamy2020}, which leverages the capabilities of machine learning. Rather than simplifying the underlying physics to close the RANS equations, the functional relationship between the mean flow gradients and Reynolds stresses is determined from high-fidelity datasets. Machine learning is well-suited to ``discover'' this (complex) functional relationship using a data-driven approach.\n\nWithin the framework of the application of machine learning for data-driven closure modelling, a wide range of methods have been presented and nearly all of them have demonstrated promising results for simple flows~\\cite{Duraisamy2019a,Brunton2020,Kutz2017,Wang2017}. Several machine learning and injection frameworks have been proposed~\\cite{Duraisamy2021,Zhu2020,Chang2018,Jiang2021}, including open-loop~\\cite{Bhushan2021} and iterative frameworks~\\cite{Liu2021}. These methods have been demonstrated to improve RANS results in canonical flows~\\cite{Srinivasan2019,Yin2020,Fang2020,Zhang2019b}, flow over airfoils~\\cite{Zhu2019,Matai2019}, combustion~\\cite{Nikolaou2020}, and transonic flows~\\cite{Tan2021}. Several investigations~\\cite{Kaandorp2018,Kaandorp2020,Song2019,Zhang2019a} have used machine learning to determine the coefficients of a tensor basis expansion for the anisotropy tensor, based on the seminal work of Ling {\\em et al.}~\\cite{Ling2016}. Other frameworks include the optimal eddy viscosity approach proposed by Wu {\\em et al.}~\\cite{Wu2018}, or the Reynolds force vector approach advocated by Cruz {\\em et al.}~\\cite{Cruz2019}. However, the issue of ill-conditioning was recently shown by Brener {\\em et al.}~\\cite{Brener2021} to affect a number of these modelling frameworks. When the RANS equations are ill-conditioned, small errors in the Reynolds stress tensor can be amplified, resulting in large errors in the prediction of the mean velocity field~\\cite{Wu2019}. This error amplification is a major issue for physics-informed data-driven closures, as some amount of model error in predicting the Reynolds stress tensor is expected. Brener {\\em et al.}~\\cite{Brener2021} showed that the optimal eddy viscosity approach proposed by Wu {\\em et al.}~\\cite{Wu2018} is an effective strategy to address the ill-conditioning problem.\n\n\nWhile previous investigations have presented optimal eddy viscosity-based frameworks that improve the conditioning of the RANS equations, these formulations provide no guarantee of a non-negative eddy viscosity. For practical purposes, the eddy viscosity should always be non-negative---a negative eddy viscosity prediction has the potential to destabilize convergence in an iterative RANS solver. A new formulation of the optimal eddy viscosity is proposed in the present work, which guarantees a non-negative optimal eddy viscosity. The conditioning of the new formulation is analyzed, and the suitability for use in a physics-informed data-driven turbulence closure model is assessed by using a neural network to predict this optimal eddy viscosity. 
An additional neural network is used to predict the remaining (non-linear) portion of the Reynolds stress tensor. The prediction of the velocity field is greatly improved after injecting the neural network-predicted Reynolds stress tensor into the mean momentum equations. \n\n\nThe present work is the first to propose a stability constraint on the optimal eddy viscosity and to implement this constraint into a neural network. Along with proposing a neural network architecture for predicting the optimal eddy viscosity, the present work also aims to demonstrate the ability of deep learning augmentation to improve the results of a sophisticated physics-based turbulence closure model. While previous studies have improved results from the simpler $k$-$\\varepsilon$ \\cite{Ling2016} and $k$-$\\omega$ models \\cite{Kaandorp2020}, the model augmented in the present work is the $k$-$\\varepsilon$-$\\phi_t$-$f$ model \\cite{Hanjalic2004},\nwhich is an improved reformulation of the $v2$-$f$ model \\cite{Durbin1991}. This model contains three turbulent transport equations for $k$, $\\varepsilon$, and $\\phi_t$, along with an elliptic equation for the damping variable $f$.\n\n\nThe present work is structured as follows: in Section~\\ref{sec:methodology}, we motivate and discuss the proposed optimal eddy viscosity formulation, and injection process. The choice of the network topology and the associated hyperparameter tuning for the customization of deep neural networks for the task of turbulence closure modelling is then described in Section~\\ref{sec:deeplearning}. Section~\\ref{sec:results} presents the model predictions for an unseen test case. An \\textit{a priori} analysis of the model predictions is given in Section~\\ref{sec:apriori}. After injecting the model predictions into a RANS solver, the converged mean fields are analyzed in an \\textit{a posteriori} sense in Section~\\ref{sec:aposteriori}. Having demonstrated the accuracy improvements gained through machine learning augmentation, Section~\\ref{sec:interpret} presents an interpretation and discussion of the data-driven closure. Conclusions and recommendations for further study are given in Section~\\ref{sec:conclusion}.\n\n\\section{Methodology}\\label{sec:methodology}\n\nThe Reynolds-averaged continuity and momentum equations for an incompressible, isothermal, steady-state mean velocity field $\\vec U = (U,V,W)$ and pressure field $p$ is given by\n\\begin{eqnarray}\n\\nabla \\cdot \\vec{U} = 0\\ ,\\\\\n\\nabla\\cdot (\\vec{U}\\vec{U}) = - \\nabla p + \\nu \\nabla^2 \\vec{U} - \\nabla \\cdot \\tau\\ ,\\label{eq:momentum}\n\\end{eqnarray}\nwhere $\\nu$ is the molecular kinematic viscosity of the fluid and $\\tau \\equiv \\overline{\\vec{u}'\\vec{u}'}$ is the Reynolds stress tensor. Here, mean values are denoted using an overbar and $\\vec{u}' \\equiv (\\vec{u} - \\vec{U})$ is the fluctuating velocity ($\\vec{u}$ is the total instantaneous turbulent velocity).\n\nThese equations are hereinafter referred to as the RANS equations. The unclosed term (Reynolds stress tensor), $\\tau$, is the subject of turbulence closure modelling. The common modelling approach is to approximate the anisotropic part of $\\tau$ analogously to the viscous stress term, so\n\\begin{equation}\na \\equiv \\tau - \\frac{1}{3}\\text{tr}(\\tau)I \\approx -2 \\nu_t S \\ , \\label{eq:anisotropy}\n\\end{equation}\nwhere $a$ is the anisotropy tensor and $I$ is the identity tensor. 
Equation~(\\ref{eq:anisotropy}) is known as the eddy viscosity approximation, where $\\nu_t$ is the eddy viscosity, and $S$ is the mean strain-rate tensor (symmetric part of the mean velocity gradient tensor) defined by\n$$S \\equiv \\tfrac{1}{2}\\left(\\nabla \\vec{U} + \\nabla \\vec{U}^\\text{T}\\right)\\ ,$$\nwhere the superscript $T$ denotes matrix transposition.\n\n\nThis approximation for $\\tau$ permits the eddy viscosity to be treated implicitly, and added to the molecular viscosity $\\nu$ in Eq.~(\\ref{eq:momentum}). Numerically, this has a stabilizing effect and therefore this approach is widely used in RANS modelling. Additionally, the term $\\nabla \\cdot \\tfrac{1}{3}\\text{tr}(\\tau)I$ (which is equal to $\\nabla \\cdot \\tfrac{2}{3}kI$) is typically absorbed into the isotropic term $\\nabla p$. The resulting isotropic term becomes the gradient of the ``modified pressure'' given by $p' = p + \\tfrac{2}{3}k$. \n\nWhile this approximation for $\\tau$ is common in traditional turbulence modelling, the approach in data-driven turbulence modelling has not been uniform. Liu {\\em et~al.}~\\cite{Liu2021} and Brener {\\em et al.}~\\cite{Brener2021} used the eddy viscosity approximation as follows:\n\\begin{eqnarray}\n\\tau \\approx -2 \\nu_t S \\label{eq:eddyviscosity_tau}. \n\\end{eqnarray}\nHowever, a deficiency with the approximation given by Eq.~(\\ref{eq:eddyviscosity_tau}) is that $\\tau$ (whose non-zero trace is $2k$) can never be completely aligned with $S$ (whose trace is necessarily zero owing to the incompressibility of the flow). Therefore, when formulating an ``optimal eddy viscosity approach'' (Section \\ref{sec:optimal}), it is more appropriate to use the more conventional approximation given in Eq.~(\\ref{eq:anisotropy}).\n\n\\subsection{Optimal eddy viscosity}\\label{sec:optimal}\nWithin the commonly used eddy viscosity approach, closing the RANS equations is achieved using a single constitutive coefficient $\\nu_t$. However, calculating $\\nu_t$ in a way that minimizes the error in the mean flow field is a complex task, and has led to the development of hundreds of RANS closure models. In the present work, we propose to use a neural network to directly estimate $\\nu_t$, effectively eliminating the need for any additional turbulent scalar transport equations. This approach was also applied by Wu {\\em et al.}~\\cite{Wu2018} and Liu {\\em et al.}~\\cite{Liu2021}.\n\nThe optimal eddy viscosity $\\nu^*_t$ is obtained by minimizing the error in the approximation of the anisotropy tensor as being directly proportional to the mean strain-rate tensor. More specifically, the optimal eddy viscosity is obtained as the solution of the following least-squares approximation problem:\n\\begin{eqnarray}\n\\nu^*_t = \\text{arg min}_{\\nu_t} \\| a - (-2\\nu_t S) \\|\\ ,\\label{eq:leastsquares}\n\\end{eqnarray}\nwhere $\\nu^*_t$ is the optimal eddy viscosity in a least-squares sense ($\\|\\ \\cdot\\ \\|$ denotes the Euclidean norm). Equation~(\\ref{eq:leastsquares}) has the following closed-form analytical solution: \n\\begin{eqnarray}\n\\nu^*_t= -\\dfrac{1}{2}\\dfrac{a:S}{S:S}\\label{eq:analyticalnutopt}\\ ,\n\\end{eqnarray}\nwhere the colon operator denotes double contraction. More specifically, for two Cartesian tensors $A$ and $B$ of rank two, their double contraction is given by $A:B\\equiv A_{ij}B_{ij} = \\text{tr}(AB^T) = {\\rm tr}(BA^T)$ (Einstein summation convention implied on repeated indices), where $AB$ denotes the matrix product of $A$ and $B$. 
For the {\\em special} case where both $A$ and $B$ are symmetric tensors (as in the current case), $A:B = {\\rm tr}({AB}) = {\\rm tr}({BA})$.\n\nWhile optimal, the formulation in Eq.~(\\ref{eq:analyticalnutopt}) has the disadvantage that $\\nu^*_t$ can become negative (especially when the eigenframes of $a$ and $S$ are poorly aligned). When treated implicitly in Eq.~(\\ref{eq:momentum}), a negative eddy viscosity can result in a combined negative effective viscosity, greatly destabilizing the numerical solution. Therefore, we propose the following formulation for the estimation of the optimal eddy viscosity:\n\\begin{eqnarray}\n\\nu^\\dagger_t = \\text{arg min}_{\\nu_t\\geq 0} \\|a - (-2 \\nu_t S)\\|\\ ,\\label{eq:nnlsnut}\n\\end{eqnarray}\nwhere $\\nu^\\dagger_t$ is the optimal eddy viscosity obtained using a non-negative least-squares approximation. The formulation in Eq.~(\\ref{eq:nnlsnut}) guarantees that the injected eddy viscosity will not destabilize the iterative solution, at least in terms of reducing the diagonal dominance of the (assembled) coefficient matrix resulting from the discretization of the mean momentum transport equation. This formulation also allows an non-negativity constraint to be enforced on the output of a predictive model, further guaranteeing that any erroneous predictions at testing time do not destabilize the iterative solution.\n\nIn summary, the present optimal eddy viscosity formulation ($\\nu^\\dagger_t$) is the non-negative least-squares fit for $a\\approx -2 \\nu_t S$. This formulation harmonizes the conventional eddy viscosity approximation used in turbulence modelling, which is in terms of the anisotropy tensor, and promotes iterative stability by guaranteeing a non-negative effective viscosity.\n\n\n\\subsection{Non-linear part of the Reynolds stress tensor}\\label{sec:aperp}\nThe formulation in Eq.~(\\ref{eq:nnlsnut}) optimizes the \\textit{linear} eddy viscosity approximation for $a$. An important question that arises is: how accurate is this linear approximation itself?\n\nTo investigate this question, we consider a case of turbulent flow over periodic hills, simulated using both RANS~\\cite{McConkey2021b} and direct numerical simulation (DNS)~\\cite{Xiao2020}. Specifically, the $\\alpha=1.2$ case is considered, where $\\alpha$ is the slope steepness factor. Figure~\\ref{fig:Eddy_viscosity_error_a_theta_histograms} shows the distributions of the error in $a$ after applying a linear eddy viscosity approximation for this canonical flow. Since this flow is two-dimensional, there are only three independent components of $a$: namely, $a_{xx}$, $a_{xy}$, and $a_{yy}$ (owing to the fact that $a$ is traceless). In the present work, the subscript $\\theta$ is used for quantities obtained from a high-resolution DNS simulation, and the subscript $\\psi$ indicates a quantity obtained from a RANS closure model. Even though the linear eddy viscosity approximation has been optimized on high-fidelity data for $a=-2\\nu^*_{t\\theta}S_\\theta$ and $a=-2\\nu^\\dagger_{t\\theta}S_\\theta$, errors on the order of $100\\%$ are present in all components of $a$. In particular, the principal components of $a$ are severely under predicted, even with an optimized eddy viscosity. 
Compared to the base RANS simulation, an optimal eddy viscosity approach applied to high-fidelity DNS data does not result in substantial accuracy gain.\n\\begin{figure}\n\\includegraphics[]{Eddy_viscosity_error_a_theta_histograms.pdf}\n\\caption{\\label{fig:Eddy_viscosity_error_a_theta_histograms} Error distribution in each anisotropy component after invoking the linear eddy viscosity approximation $a=-2\\nu_t S$. Relative error is calculated by $\\Delta a_{ij} = (a_{ij}-a_{ij\\theta})\/a_{ij\\theta}$. $\\nu^*_{t\\theta}$ is the optimal eddy viscosity calculated using Eq.~(\\ref{eq:analyticalnutopt}), and $\\nu^\\dagger_{t\\theta}$ is the optimal eddy viscosity calculated using Eq.~(\\ref{eq:nnlsnut}). The subscript $\\theta$ indicates a quantity from DNS, and the subscript $\\psi$ indicates a quantity from a RANS simulation using the $\\phi$-$f$ model (Section~\\ref{sec:phif}). The RMSE for each component of $a$ is shown in each plot.}\n\\end{figure}\n\nFigure~\\ref{fig:R2_R_contours} shows two fit quality metrics for the approximation $a\\approx -2 \\nu^\\dagger_t S$. In Fig.~\\ref{fig:R2_R_contours}(a), the local $R^2$ value for the non-negative least squares fit $a_{ij\\theta} \\approx -2 \\nu^\\dagger_{t\\theta} S_{ij\\theta}$ is shown. While the $R^2$ values are higher near the bulk flow above the separated region, the $R^2$ values are generally poor near the walls, in the separated region, and during reattachment. Figure~\\ref{fig:R2_R_contours}(b) shows the eddy viscosity fit quality metric proposed by Thompson {\\em et al.}~\\cite{Thompson2010} which is expressed as\n\\begin{eqnarray}\\label{eq:thompson}\nR_i = 1 - \\frac{2}{\\pi}\\cos^{-1}\\left( \\sqrt{\\frac{{\\rm tr}(4\\nu_{t\\theta}^{\\dagger 2} S^2_\\theta)}{{\\rm tr}(a^2_\\theta)}}\\right)\\ .\n\\end{eqnarray}\nThis second fit quality metric generally agrees with the $R^2$ value in the bulk flow, in supporting a reasonable quality of the linear eddy viscosity approximation. However, for the separation over the left hill, and the acceleration over the right hill, higher values of the $R_i$ metric suggest a better quality in the optimal linear eddy viscosity fit. Nevertheless, visualizing both of these quality metrics demonstrates that there is substantial room for improvement in terms of representing $a$ using only a linear eddy viscosity approximation for this separated flow.\n\n\n\\begin{figure}\n\\includegraphics[]{R2_R_contours.pdf}\n\\caption{\\label{fig:R2_R_contours} Fit quality metrics after invoking the linear eddy viscosity approximation $a = -2 \\nu^\\dagger_{t\\theta} S_\\theta$: (a) $R^2$ value and (b) eddy viscosity model fit factor $R_i$ [Eq.~(\\ref{eq:thompson})].}\n\\end{figure}\n\n\nThe errors in each component of $a$ arising from the linear eddy viscosity approximation are shown in Fig.~\\ref{fig:a_error_contours}. The linear eddy viscosity approximation fails significantly for all components of $a$ during separation of the flow, a well-known deficiency~\\cite{Pope2000}. Noticeable errors also exists in $a_{yy}$ during reattachment of the flow along the bottom wall. 
Though Thompson {\\em et al.}'s $R_i$ value suggests a good approximation for $a$ as the flow accelerates over the right hill, the largest errors in the shear component $a_{xy}$ occur here.\n\n\\begin{figure*}\n\\includegraphics{a_error_contours.pdf}\n\\caption{\\label{fig:a_error_contours} \\textit{A priori} contours of error in each component of $a$ after invoking the linear eddy viscosity approximation $a=-2\\nu^\\dagger_{t\\theta}S_\\theta$.}\n\\end{figure*}\n\nWhile the linear eddy viscosity approximation, even after optimization, was found to be deficient in approximating $a$, it was of interest to see whether these errors would affect the accuracy of the mean velocity field. Figure~\\ref{fig:Conditioning_error_linear} shows the errors in the mean field components after injecting the optimal eddy viscosity implicitly into the RANS equations and allowing the solution to converge around the fixed $\\nu^\\dagger_{t\\theta}$. Figure~\\ref{fig:Conditioning_error_linear}(a) shows that significant errors occur during separation and reattachment along the bottom wall in the streamwise velocity component $U$. \n\n\\begin{figure}\n\\includegraphics[]{Conditioning_error_linear.pdf}\n\\caption{\\label{fig:Conditioning_error_linear} \\textit{A posteriori} contours of error in the velocity vector components after injecting the optimal linear eddy viscosity into the RANS equations, without any non-linear terms: (a) error in the $x$-component and (b) error in the $y$-component.}\n\\end{figure}\n\nEven though the solution in Fig.~\\ref{fig:Conditioning_error_linear} was produced by injecting a high-fidelity optimal eddy viscosity into the RANS equations, there are significant errors in the mean flow field. Furthermore, the analysis in Section \\ref{sec:conditioning} shows that conditioning errors with this optimal eddy viscosity formulation are minimal. If the target of the machine learning model is only the optimal eddy viscosity, mean field accuracy improvements are not guaranteed. In fact, the error fields shown in Fig.~\\ref{fig:Conditioning_error_linear} represent the ``upper limit'' that could be achieved using a linear-component-only data-driven closure based on the optimal eddy viscosity as formulated in the present work.\n\nGiven the deficiencies of the linear eddy viscosity approximation, we propose an additional non-linear term in the anisotropy representation; namely, $a^\\perp$, which is obtained from the following proportional\/orthogonal tensor decomposition of $a$:\n\\begin{eqnarray}\na = -2 \\nu^\\dagger_t S + a^\\perp.\\label{eq:a_decomp}\n\\end{eqnarray}\nIn Eq.~(\\ref{eq:a_decomp}), $a$ has been decomposed into a linear component ($-2 \\nu^\\dagger_t S$) and a non-linear component ($a^\\perp$). After calculating the optimal eddy viscosity using Eq.~(\\ref{eq:nnlsnut}), the non-linear portion of the anisotropy tensor can be calculated as $a^\\perp = a - (-2\\nu^\\dagger_t S)$.\n\n\n\n\n\n\\subsection{Conditioning}\\label{sec:conditioning}\n\nA subject that is of immense importance for data-driven turbulence closure modelling is the issue of conditioning of the RANS equations. When the RANS equations are ill-conditioned, small errors in the closure term can be amplified, resulting in large errors in the mean flow field. This issue affects data-driven models because an error in the model prediction of the closure term is almost always expected. Even with an error-free closure term, an accurate converged mean field cannot be guaranteed~\\cite{Brener2021}. 
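\n\nThe quantities injected in the experiments that follow, $\\nu^\\dagger_{t\\theta}$ and $a^\\perp_\\theta$, are evaluated cell by cell from the DNS fields according to Eqs.~(\\ref{eq:nnlsnut}) and (\\ref{eq:a_decomp}). The following NumPy sketch is purely illustrative (the array names and the cell-wise $3\\times3$ storage layout are assumptions, not the implementation used here); it exploits the fact that, for a single coefficient, the non-negative least-squares solution of Eq.~(\\ref{eq:nnlsnut}) reduces to clipping the unconstrained optimum of Eq.~(\\ref{eq:analyticalnutopt}) at zero:\n\\begin{verbatim}\nimport numpy as np\n\ndef decompose_anisotropy(a, S):\n    # a, S: (n_cells, 3, 3) arrays holding the DNS anisotropy and\n    # mean strain-rate tensors at every cell.\n    aS = np.einsum('nij,nij->n', a, S)      # double contraction a:S\n    SS = np.einsum('nij,nij->n', S, S)      # double contraction S:S\n    nut_star = -0.5 * aS / np.maximum(SS, 1e-30)\n    nut_dag = np.maximum(nut_star, 0.0)     # non-negative optimal eddy viscosity\n    a_perp = a + 2.0 * nut_dag[:, None, None] * S   # non-linear remainder\n    return nut_dag, a_perp\n\\end{verbatim}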
\n\nTo demonstrate the importance of the conditioning problem, a conditioning analysis similar to that performed by Brener {\\em et al.}~\\cite{Brener2021} was conducted. Figures~\\ref{fig:conditioningU} and \\ref{fig:conditioningV} show the errors in the mean velocity field components for two injection experiments. The flow considered here is a turbulent flow at $Re_H=5600$ over periodic hills\\cite{Xiao2020}. In both Figs~\\ref{fig:conditioningU}(a) and (b), the RANS equations have been closed using a highly accurate $a$ from a DNS simulation undertaken by Xiao {\\em et al.}~\\cite{Xiao2020}. In Figures~\\ref{fig:conditioningU}(a) and \\ref{fig:conditioningV}(a), $a_\\theta$ has been injected as an explicit source term into the RANS equations. In Figures~\\ref{fig:conditioningU}(b) and \\ref{fig:conditioningV}(b), the decomposition proposed in the present work has been injected into the RANS equations in accordance to the following scheme: $\\nu^\\dagger_{t\\theta}$ is treated implicitly and $a^\\perp_\\theta$ is treated as an explicit source term in the discretized mean momentum equations.\n\nDespite both systems being closed by the same quantity arising from a high-fidelity simulation, the solution in Figs~\\ref{fig:conditioningU}(a) and \\ref{fig:conditioningV}(a) has a root mean square error (RMSE) that is an order of magnitude larger than the solution in Figs~\\ref{fig:conditioningU}(b) and \\ref{fig:conditioningV}(b). It should be noted that additional differences between the two solutions may arise from the need to use first-order schemes to stabilize the solution for the fully explicit propagation.\n\nThe low error observed after injecting the full anisotropy representation [Eq.~(\\ref{eq:a_decomp})] from a DNS simulation into the RANS equation highlights the merits of the present approach. As discussed in Section~\\ref{sec:deeplearning}, since the quantities $\\nu^\\dagger_{t\\theta}$ and $a^\\perp_\\theta$ are the training labels, injecting these quantities into the RANS equations provides the ``upper limit'' that the data-driven model could achieve. For this reason, we recommend that data-driven closure frameworks be evaluated in terms of the conditioning errors that arise even after a perfect model prediction ($\\tilde a=a_\\theta$).\n\n\n\n\n\\begin{figure}\n\\includegraphics[]{Conditioning_error_U.pdf}\n\\caption{\\label{fig:conditioningU} \\textit{A posteriori} contours of error in the $x$-velocity component after injecting the Reynolds stress anisotropy tensor from DNS into the RANS equations in two different ways: (a) fully explicit injection of $a_\\theta$ and (b) the injection framework used in the present work.}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[]{Conditioning_error_V.pdf}\n\\caption{\\label{fig:conditioningV} \\textit{A posteriori} contours of error in the $y$-velocity component after injecting the Reynolds stress anisotropy tensor from DNS into the RANS equations in two different ways: (a) fully explicit injection of $a_\\theta$ and (b) the injection framework used in the present work.}\n\\end{figure}\n\n\\subsection{Injection procedure}\\label{sec:injection}\n\nWe use an open-loop, data-driven framework referred to by Ho and West\\cite{Ho2021} as a \"one-time correction\" model. To this purpose, our framework involves making a fixed correction to the closure term, and then allowing the mean field to converge around this fixed correction. 
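In terms of the mean momentum equation, this fixed correction amounts to iterating\n\\[\n\\nabla\\cdot (\\vec{U}\\vec{U}) = - \\nabla p' + \\nabla \\cdot \\left[ \\left(\\nu + \\nu^\\dagger_t\\right)\\left(\\nabla \\vec{U} + \\nabla \\vec{U}^\\text{T}\\right)\\right] - \\nabla \\cdot a^\\perp\\ ,\n\\]\nwith $\\nu^\\dagger_t$ held fixed inside the effective viscosity and $\\nabla \\cdot a^\\perp$ held fixed as an explicit source term; this form follows from Eqs.~(\\ref{eq:momentum}) and (\\ref{eq:a_decomp}) for a constant $\\nu$ and an incompressible mean field, with the isotropic part of $\\tau$ absorbed into the modified pressure $p'$. 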
A qualitative description of the injection process in terms of the commonly used residual plot is shown in Figure~\\ref{fig:injection}. The correction is estimated using a converged RANS simulation obtained from a base turbulence model as an input. This framework is in contrast to an iterative framework, where the closure term correction is updated repeatedly as the velocity field evolves. For example, the iterative framework used by Liu {\\em et al.}~\\cite{Liu2021} calls the machine learning model during each iteration. While open-loop frameworks are limited to steady flows, the design and use of iterative correction frameworks remains an open question. For example, the converged solution from an iterative framework can depend on the initial flow field~\\cite{Ho2021}.\n\n\\begin{figure}\n \\centering\n \\includegraphics{qualitative_plot.pdf}\n \\caption{\"Qualitative\" residual plot for the machine learning corrective framework used in the present study. After injection, no further updates to the turbulence equations (e.g., the $k$ and $\\varepsilon$ transport equations) are required.}\n \\label{fig:injection}\n\\end{figure}\n\nMotivated by the conditioning analysis of Brener {\\em et al.}~\\cite{Brener2021}, and our own analysis shown in Figs~\\ref{fig:conditioningU} and \\ref{fig:conditioningV}, we make two closure corrections in the present work: namely, one using $\\nu^\\dagger_t$ and another using $a^\\perp$. We correct the eddy viscosity to be the optimal eddy viscosity, formulated in Eq.~(\\ref{eq:nnlsnut}), and treat it implicitly in the momentum equation through an effective viscosity. Having established the importance of including the non-linear part of the anisotropy tensor (Section~\\ref{sec:aperp}), we also include the divergence of $a^\\perp$ as a source term in the momentum equation. \n\nAfter the correction, the RANS equations are solved iteratively until the mean fields converge around the fixed correction terms. The goal is that this converged solution will be more accurate than the solution produced by the base turbulence model. No update is required for the turbulent transport equations, as the eddy viscosity is directly corrected in our framework.\n\n\n\\subsection{The $\\phi$-$f$ model}\\label{sec:phif}\nIn the present work, we use the $\\phi$-$f$ model implemented in OpenFOAM v2006 as the base RANS model~\\cite{Laurence2005,Openfoam}. The $\\phi$-$f$ model is a reformulated version of the $v^2$-$f$ model, where $\\phi$ is the ratio $v^2\/k$, and $f$ is a damping scalar. By adding a carefully constructed transport equation for the streamline-normal Reynolds stress component ($v^2$), this model accounts for the wall-blocking effects on the Reynolds stresses. The construction of the $v^2$ equation includes the effects of pressure strain on the streamline-normal Reynolds stress, a key contributor to wall-blocking effects. These wall-blocking effects are observed as unequal scaling (anisotropy) of the Reynolds stress components, which are not captured in a two-equation model. Often, the use of wall damping functions is required in two-equation models to correctly capture near-wall scaling. Laurence {\\em et al.}'s~\\cite{Laurence2005} $\\phi$-$f$ model is a reformulated version of a $v^2$-$f$ model variant due to Lien and Kalitzin~\\cite{Lien2001}, which is, in turn, an improved version of Durbin's~\\cite{Durbin1991} original $v^2$-$f$ model. Reformulating the $v^2$ and $f$ equation in terms of $\\phi$ results in a more robust model. 
For example, this model has reduced stiffness in terms of the near-wall damping singularity \\cite{Laurence2005}. The $\\phi$-$f$ model estimates the eddy viscosity as\n\\begin{equation}\n \\nu_t = C_\\mu \\phi k T \\ ,\n\\end{equation}\nwhere $C_\\mu$ is a model coefficient, and $T$ is the turbulent time scale.\n\nDetails on the OpenFOAM implementation of this model are given in the OpenFOAM documentation \\cite{Openfoam}. The original $\\phi$-$f$ model equations can be summarized as follows:\n\\begin{equation}\n \\frac{\\partial k}{\\partial t} + U_i \\frac{\\partial k}{\\partial x_i} = P - \\varepsilon + \\frac{\\partial}{\\partial x_j}\\left[ \\left(\\nu + \\frac{\\nu_t}{\\sigma_k}\\right) \\frac{\\partial k}{\\partial x_j}\\right]\\ ;\\label{eq:k}\n\\end{equation}\n\\begin{equation}\n \\frac{\\partial \\varepsilon}{\\partial t} + U_i \\frac{\\partial \\varepsilon}{\\partial x_i} = \\frac{C_{\\varepsilon_1} P}{T} - \\frac{C_{\\varepsilon_2}\\varepsilon}{T} + \\frac{\\partial}{\\partial x_j}\\left[ \\left(\\nu + \\frac{\\nu_t}{\\sigma_\\varepsilon}\\right) \\frac{\\partial \\varepsilon}{\\partial x_j}\\right]\\ ;\\label{eq:epsilon}\n\\end{equation}\n\\begin{eqnarray}\n \\frac{\\partial \\phi}{\\partial t} + U_i \\frac{\\partial \\phi}{\\partial x_i} = f - P \\frac{\\phi}{k}+\\frac{2\\nu_t}{k\\sigma_k}&&\\frac{\\partial \\phi}{\\partial x_j}\\frac{\\partial k}{\\partial x_j} \\nonumber\\\\ && + \\frac{\\partial}{\\partial x_j}\\left[ \\left(\\frac{\\nu_t}{\\sigma_k}\\right) \\frac{\\partial \\phi}{\\partial x_j}\\right]\\ ;\\label{eq:phi}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\nL^2 \\frac{\\partial^2 f}{\\partial x_j \\partial x_j} - f = \\dfrac{1}{T}(C_{f_1} &&-1) \\left[\\phi - \\dfrac{2}{3}\\right] \\nonumber\\\\ &&- C_{f_2} \\dfrac{P}{k} - 2 \\dfrac{\\nu}{k}\\dfrac{\\partial \\phi}{\\partial x_j}\\dfrac{\\partial k}{\\partial x_j} - \\nu \\dfrac{\\partial^2 \\phi}{\\partial x_j \\partial x_j}\\ .\\label{eq:f}\n\\end{eqnarray}\nEquations~(\\ref{eq:k}), (\\ref{eq:epsilon}), and (\\ref{eq:phi}) are model transport equations for $k$, $\\varepsilon$, and $\\phi$, respectively, where $P$ denotes the production rate of TKE. Here, $U_i$ denotes the components of $\\vec{U}$ written in index notation. Equation~(\\ref{eq:f}) is an elliptic relaxation equation for $f$, which is a scalar predicting near-wall damping effects.\n\nThe turbulent time scale $T$ and length scale $L$ are given, respectively, by\n\\begin{eqnarray}\n T = \\text{max}\\left(\\frac{k}{\\varepsilon},C_T\\sqrt{\\frac{\\nu}{\\varepsilon}}\\right)\\label{eq:T} \\ ,\\\\\n L = C_L\\text{max}\\left(\\frac{k^{3\/2}}{\\varepsilon}, C_\\eta \\left(\\frac{\\nu^3}{\\varepsilon}\\right)^{1\/4}\\right) \\ .\\label{eq:L}\n\\end{eqnarray}\nFinally, the model constants assume the following values: $C_\\mu = 0.22$; $C_{\\varepsilon_1}=1.4(1.0+0.05 \\sqrt{1.0\/\\phi})$; $ C_{\\varepsilon_2}=1.9$; $C_T = 6.0$; $C_L = 0.25$; $C_{f_1} = 1.4$; $ C_{f_2} = 0.3$; $C_\\eta = 110.0$; $\\sigma_k = 1.0$; and, $\\sigma_\\varepsilon = 1.3$.\n\n\\subsection{Numerical methods}\nAll simulations in the present work use OpenFOAM v2006. For the base RANS simulation, the PIMPLE algorithm was used to achieve a converged solution. Then, a modified PIMPLE solver, which accepts the corrected $\\nu^\\dagger_t$ and $a^\\perp$, was used to inject these corrections into the RANS equations and iteratively solve for the mean fields. While all flows in the present work reach a steady-state solution, the unsteady algorithm was used to stabilize the iterative solution. 
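\n\nAlthough the corrected simulations do not update the turbulence transport equations, the algebraic part of the baseline closure is still needed to generate the base RANS fields. A minimal Python sketch of the eddy viscosity and the limited scales of Eqs.~(\\ref{eq:T}) and (\\ref{eq:L}) is given below; it is illustrative only, with the variable names and array-valued inputs assumed rather than taken from the OpenFOAM implementation:\n\\begin{verbatim}\nimport numpy as np\n\n# phi-f model constants as listed above.\nC_MU, C_T, C_L, C_ETA = 0.22, 6.0, 0.25, 110.0\n\ndef phi_f_eddy_viscosity(k, eps, phi, nu):\n    # Turbulent time scale with its Kolmogorov lower bound, Eq. (T).\n    T = np.maximum(k / eps, C_T * np.sqrt(nu / eps))\n    # Turbulent length scale with its Kolmogorov lower bound, Eq. (L).\n    L = C_L * np.maximum(k**1.5 / eps, C_ETA * (nu**3 / eps)**0.25)\n    # Eddy viscosity of the phi-f model.\n    nut = C_MU * phi * k * T\n    return nut, T, L\n\\end{verbatim}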
\n\nFor discretizing the RANS equations, a second-order upwind scheme was used for the convective terms, and a second-order central difference scheme was used for the diffusion terms. For the convective terms in the base turbulence model transport equations, a first-order scheme was used. \n\nA wall-resolved mesh ($y^+ \\leq 1$) was used for all simulations. Further details on the mesh, domain, and boundary conditions are provided in McConkey {\\em et al.}'s~\\cite{McConkey2021b} description of the machine learning dataset. Cyclic boundary conditions were used for all flow variables at the inlet and outlet. At the top and bottom walls, the boundary conditions were fixed-zero for velocity, and zero-gradient for pressure.\n\n\n\\section{Deep learning procedure for predicting the Reynolds stress tensor}\\label{sec:deeplearning}\n\n\\subsection{Dataset}\nTo test the proposed decomposition and injection framework, a series of flow over periodic hills was chosen. This flow features separation over the left hill, reattachment along the bottom wall, and acceleration over the right hill. The dataset presented for data-driven turbulence modelling by McConkey {\\em et al.}~\\cite{McConkey2021b}, includes these cases, based on the DNS simulations of Xiao {\\em et al.}~\\cite{Xiao2020}. The periodic hills portion of the dataset consists of five cases, each with 14,751 data points, for a total of 73,755 data points. At each data point, the complete set of RANS input features and DNS fields are provided, so that the model can learn a mapping from the RANS features to the DNS labels. The RANS fields come from a converged solution using the $k$-$\\varepsilon$-$\\phi_t$-$f$ model \\cite{Laurence2005} in OpenFOAM v2006. The five cases are generated by parametrically varying the hill steepness $\\alpha$ and the overall length of the geometry. The five cases correspond to $\\alpha = 0.5$, 0.8, 1.0, 1.2, and $1.5$, as shown in Fig.~\\ref{fig:geometry_phll}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.5\\textwidth]{PHLL_geom_annotated.eps}\n\\caption{\\label{fig:geometry_phll} The geometry and flow variables used for the five periodic hills cases in the present work (Reproduced from McConkey {\\em et al}.~\\cite{McConkey2021b}).}\n\\end{figure}\n\n\n\n\\subsection{Model architecture}\n\n\nThe model consists of two independent neural networks: namely, EVNN, and apNN. The objective of EVNN is to predict the optimal eddy viscosity $\\nu^\\dagger_t$, whereas that of apNN is to predict the non-linear part of the anisotropy tensor, $a^\\perp$. Both models are feed-forward, fully connected neural networks. The EVNN has one output neuron, and the apNN has three output neurons (one for each independent component of $a^\\perp$).\n\nBecause $\\nu^\\dagger_t$ is non-negative, the stability of the EVNN predictions during training and injection can benefit by enforcing a non-negative prediction. For the EVNN, the activation function used for the output neuron was a simple exponential function $f(x) = e^x$, where $x$ is the input value to the layer. This activation function guarantees a non-negative eddy viscosity prediction by the EVNN, regardless of the input activation layer (viz., the choice of the exponential function as the activation function induces an inductive bias of the network output). This stability constraint is possible due to the formulation of $\\nu^\\dagger_t$, which guarantees a non-negative training label. For the apNN, the three output neurons utilize a linear activation function. 
For the hidden layers, the scaled exponential linear unit (SELU) activation function was used. Combined with the LeCun initialization of the network weights, a SELU-activated, fully connected, feed-forward neural network is self-normalizing~\\cite{Klambauer2017}, eliminating the need for batch normalization layers~\\cite{Geron}.\n\nThe neural networks were implemented using the high-level Keras application programming interface (API) in TensorFlow \\cite{keras}.\nThe hyperparameters used for both networks were identical: namely, both networks have $L=14$ hidden layers, with $n=30$ neurons in each layer, resulting in a network architecture with an depth-to-width aspect ratio of $L\/n \\approx 0.467$. The EVNN has 13,141 trainable parameters (weights and biases) with a single output, and the apNN has \\text{13,203} trainable parameters with three outputs. For traditional turbulence modelling, tuning this number of parameters is a nearly impossible task. However, for deep learning, tuning these parameters is relatively ``straightforward''---the number of learnable parameters is often orders of magnitude greater than those used in the EVNN and the apNN. Nevertheless, despite the large number of model parameters used in these types of overparameterized neural networks (where the number of model parameters far exceed the number of training data), there is mounting empirical (indeed, practical) evidence that these networks can be successfully trained to provide excellent predictions---perhaps supporting the notion that it is not the number of model parameters, but the representation learning provided by the subtle correlations in the intralayer and interlayer interactions of the neural units in the network that ultimately determines the predictive skill and generalizability of the model in deep learning.\n\n\\subsection{Input feature selection}\nA wide range of input features have been used in previous data-driven closure frameworks. The input features for a data-driven closure are generally required to possess the same invariance properties as the underlying Navier-Stokes equations: reflection, rotation, and Galilean transformation invariance. Methods used to generate the input feature set vary from a purely heuristic selection, to a systematic generation of an integrity basis~\\cite{Wu2018}. In the present work, we use a combination of these two methods. Three heuristic scalars are selected as input features given by\n\\begin{eqnarray}\nq_1 &&\\equiv \\text{min}(2.0,\\sqrt{k} y_\\perp\/(50\\nu))\\ , \\nonumber\\\\\nq_2 &&\\equiv k\/\\varepsilon \\|S\\|\\ , \\nonumber\\\\\nq_3 &&\\equiv \\|\\tau\\|\/k\\ .\n\\end{eqnarray}\nHere, $y_\\perp$ is the wall-normal distance. $q_1$ is the wall distance based Reynolds number, $q_2$ is the ratio of the mean strain timescale to a turbulent time scale, and $q_3$ is the ratio of the total Reynolds stress magnitude to the diagonal Reynolds stresses. These Galilean invariant scalars were used previously by Kaandorp and Dwight~\\cite{Kaandorp2020}.\n\nIn addition to these scalars, we modify the procedure presented by Wu {\\em et al.}~\\cite{Wu2018} to significantly augment the input feature set. In Wu {\\em et al.}'s procedure, four gradient tensors were selected: $S$, $R$,$\\nabla k$, and $\\nabla p$. The TKE and pressure gradient vectors were converted to second-order tensors, by casting their components into an anti-symmetric matrix. 
Then, the method outlined by Spencer and Rivlin~\\cite{Spencer1962} was used to generate an integrity basis for the space spanned by these four second-order tensors. This integrity basis consists of 47 tensors. Finally, a scalar input feature was extracted from each tensor by taking the first invariant (viz., the trace) of each integrity basis tensor.\n\nWhile this procedure extracts a large number of input features from the flow, a number of issues arise that have received limited attention in the literature. Firstly, the terms related to $\\nabla p$ in the integrity basis tend to be numerically unstable in our experience, possibly due to the normalization used by Wu {\\em et al.}~\\cite{Wu2018}. In their frameworks, Wang {\\em et al.}~\\cite{Wang2017} and Kaandorp and Dwight~\\cite{Kaandorp2020} omitted these terms, reducing the number of invariants down to 16. Secondly, many of these invariants are zero for two-dimensional flows. These zero invariants arise either as a direct result of the $z$-direction gradients being zero, or from the incompressibility of the flow. Lastly, this procedure only uses the first invariant $I_1$ (i.e., the trace) of each integrity basis tensor---more information can be extracted from this basis by including further invariants in the input feature set. We have presented an analysis of Wu {\\em et al.}'s~\\cite{Wu2018} integrity basis in Appendix \\ref{ap:invariants}, showing expressions for each of the basis tensor invariants for two-dimensional flows.\n\nTo circumvent the issue of unstable terms related to $\\nabla p$ being unusable as input features, we replace $\\nabla p$ in Wu {\\em et al.}'s basis with $\\nabla v^2$. This replacement is possible since our base turbulence model is the $\\phi$-$f$ model (Section~\\ref{sec:phif}). We found that terms including $\\nabla v^2$ were much more stable than the terms including $\\nabla p$, and therefore more features could be included. It is noted that $\\nabla v^2$ provides additional information compared to $\\nabla k$, because $v^2$ measures the degree of wall-blocking effects and anisotropy in the flow. Additionally, we augment the 47 original first invariants by also taking the second invariant $I_2$ of a tensor $A$, given by $I_2 = \\tfrac{1}{2}[(\\text{tr}(A))^2 - \\text{tr}(A^2)]$. For a three-dimensional flow, this effectively doubles the number of basis tensor invariants that can be used as input features. However, as shown in Appendix~\\ref{ap:invariants}, many of these invariants are zero for two-dimensional flows. Nevertheless, after eliminating the zero-valued invariants, a set of 29 invariants remained as suitable input features. 
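Both invariants reduce to traces of matrix products, so assembling the feature set is mechanical once the basis tensors are formed. As a brief illustration (assuming the tensors $S$, $R$, $A_{v2}$, and $A_k$ are available as $3\\times3$ NumPy arrays at a given cell; this is not the pipeline used to build the dataset):\n\\begin{verbatim}\nimport numpy as np\n\ndef I1(A):\n    # First invariant: the trace.\n    return np.trace(A)\n\ndef I2(A):\n    # Second invariant: 0.5*[tr(A)^2 - tr(A^2)].\n    return 0.5 * (np.trace(A)**2 - np.trace(A @ A))\n\ndef feature_I1_B7(S, R):\n    # Example: the first invariant of the basis tensor B_7 = R^2 S^2.\n    return I1(R @ R @ S @ S)\n\ndef feature_I2_B16(A_v2, A_k):\n    # Example: the second invariant of the basis tensor B_16 = A_v2 A_k.\n    return I2(A_v2 @ A_k)\n\\end{verbatim}\n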
Table~\\ref{tbl:features} summarizes the input features used.\n\n\\begin{table}[]\n\\caption{Input features for EVNN and apNN.}\\label{tbl:features}\n\\begin{tabular}{cccc}\n\\hline\nNumber & Input feature & Expression & Transformation \\\\ \\hline\n1 & $I_1(B_1)$ & $I_1(S^2)$ & $\\text{log}(|x|+1)$ \\\\\n2 & $I_1(B_3)$ & $I_1(R^2)$ & $\\text{log}(|x|+1)$ \\\\\n3 & $I_1(B_4)$ & $I_1(A_{v2}^2)$ & $\\sqrt[3]{x}$ \\\\\n4 & $I_1(B_5)$ & $I_1(A_k^2)$ & $\\text{log}(|x|+1)$ \\\\\n5 & $I_1(B_7)$ & $I_1(R^2S^2)$ & $\\text{log}(|x|+1)$ \\\\\n6 & $I_1(B_9)$ & $I_1(A_{v2}^2 S)$ & $\\sqrt[3]{x}$ \\\\\n7 & $I_1(B_{10})$ & $I_1(A_{v2}^2 S^2)$ & $\\text{log}(|x|+1)$ \\\\\n8 & $I_1(B_{12})$ & $I_1(A_{k}^2 S)$ & $\\sqrt[3]{x}$ \\\\\n9 & $I_1(B_{13})$ & $I_1(A_{k}^2 S^2)$ & $\\text{log}(|x|+1)$ \\\\\n10 & $I_1(B_{16})$ & $I_1(A_{v2}A_k)$ & $\\text{log}(|x|+1)$ \\\\\n11 & $I_1(B_{21})$ & $I_1(A_{v2}RS)$ & $\\text{log}(|x|+1)$ \\\\\n12 & $I_1(B_{25})$ & $I_1(A_{v2}^2SRS^2)$ & $\\text{log}(|x|+1)$ \\\\\n13 & $I_1(B_{29})$ & $I_1(A_{k}^2RS)$ & $\\text{log}(|x|+1)$ \\\\\n14 & $I_1(B_{33})$ & $I_1(A_{k}SRS^2)$ & $\\text{log}(|x|+1)$ \\\\\n15 & $I_1(B_{34})$ & $I_1(A_{v2}A_kS)$ & $\\sqrt[3]{x}$ \\\\\n16 & $I_1(B_{35})$ & $I_1(A_{v2}A_k S^2)$ & $\\text{log}(|x|+1)$ \\\\\n17 & $I_1(B_{42})$ & $I_1(RA_{v2}A_k)$ & $\\sqrt[3]{x}$ \\\\\n18 & $I_1(B_{43})$ & $I_1(RA_{v2}A_kS)$ & $\\sqrt[3]{x}$ \\\\\n19 & $I_1(B_{45})$ & $I_1(RA_{v2}A_kS^2)$ & $\\sqrt[3]{x}$ \\\\\n20 & $I_1(B_{46})$ & $I_1(RA_kA_{v2}S^2)$ & $\\sqrt[3]{x}$ \\\\\n21 & $I_2(B_1)$ & $I_2(S^2)$ & $\\text{log}(|x|+1)$ \\\\\n22 & $I_2(B_2)$ & $I_2(S^3)$ & $\\text{log}(|x|+1)$ \\\\\n23 & $I_2(B_3)$ & $I_2(R^2)$ & $\\text{log}(|x|+1)$ \\\\\n24 & $I_2(B_4)$ & $I_2(A_{v2}^2)$ & $\\text{log}(|x|+1)$ \\\\\n25 & $I_2(B_5)$ & $I_2(A_{k}^2)$ & $\\text{log}(|x|+1)$ \\\\\n26 & $I_2(B_6)$ & $I_2(R^2S)$ & $\\text{log}(|x|+1)$ \\\\\n27 & $I_2(B_7)$ & $I_2(R^2S^2)$ & $\\text{log}(|x|+1)$ \\\\\n28 & $I_2(B_8)$ & $I_2(R^2SRS^2)$ & $\\text{log}(|x|+1)$ \\\\\n29 & $I_2(B_{16})$ & $I_2(A_{v2}A_k)$ & $\\text{log}(|x|+1)$ \\\\\n30 & $q_1$ & $\\text{min}(2.0,\\sqrt{k} y_\\perp\/(50\\nu))$ & --- \\\\\n31 & $q_2$ & $k\/\\varepsilon \\|S\\|$ & --- \\\\\n32 & $q_3$ & $\\|\\tau\\|\/k$ & --- \\\\\n33 & $\\phi$ & $\\phi$ & --- \\\\ \\hline\n\\end{tabular}\n\\end{table}\nThis input feature set represents one of the richest feature sets used in a data-driven turbulence closure model. While many studies relax the condition of Galilean invariance (most commonly, by including the turbulence intensity), all of our input features are Galilean invariant. Section~\\ref{sec:interpret} analyzes the relative importance of each of these features to the model predictions. \n\n\n\n\\subsection{Label calculation}\nEach neural network has a different set of training labels, based on the desired output. For the EVNN, the goal is to predict the optimal eddy viscosity $\\nu^\\dagger_t$. As shown in Section~\\ref{sec:conditioning}, injecting the optimal eddy viscosity from DNS produced a well-conditioned closure. Therefore, the label field for the EVNN consists of the $\\nu^\\dagger_{t\\theta}$ field. To calculate this field, scikit-learn \\cite{scikit-learn} was used to perform the non-negative least-squares (NNLS) regression fit for $a_\\theta = -2\\nu_tS_\\theta$ at each cell in the dataset.\n\nThe labels for the apNN consist of the three independent components of the non-linear part of the Reynolds stress tensor. 
Therefore, the three component labels were extracted from the tensor $a^\\perp_\\theta = a_\\theta + 2 \\nu^\\dagger_{t\\theta}S_\\theta$, where $\\nu^\\dagger_{t\\theta}$ are the labels for the EVNN. At injection time, $a^\\perp_{zz}$ can be calculated as $a^\\perp_{zz}=-a^\\perp_{xx} - a^\\perp_{yy}$, because $a^\\perp$ is a traceless tensor.\n\n\\subsection{Pre-processing}\n\\label{sec:preprocessing}\n\nAfter calculating the input feature and label sets, the five periodic hills cases were divided into a training set, a validation set, and a testing set. The training process consists of updating the model weights and biases using an optimization algorithm. The model makes predictions using the training set features and evaluates the predictions using the training set labels. The validation and testing sets are used similarly, but at different times in the process. The validation set is used to evaluate the predictive performance of the neural network throughout the training process in order to assess when the model has converged. The validation set is not used to update the weights and biases of the network. Instead, it helps identify overfitting during the training process. If the training process continues for too long, the model will begin to memorize noise and other undesired details in the training set. Repeated evaluation of the neural network's predictive performance using the validation set allows one to assess the generalization performance of the model during training. The testing set is used to evaluate the final, trained model on an entirely new set of data---it can be used to determine the model (or generalization) error and to evaluate the generalization capability of the fully-trained neural network model. More specifically, the generalization error is perhaps the primary quantitative measure of the success of the fully-trained neural network model, as it provides the standard for how well the network approximates the underlying function for the Reynolds stresses.\n\nThe $\\alpha=1.2$ case was selected as the test set. In terms of the parameter $\\alpha$, this represents an interpolation test case, since the training and validation sets contain $\\alpha$ values both above and below 1.2. The training\/validation set consists of 59,004 data points from the remaining four cases. At this stage, the typical procedure is to shuffle and split all of the remaining data into a training and validation set. However, for training a data-driven turbulence closure neural network model, we propose a different method to better estimate the generalization performance during training. Our recommendation is that the validation set should consist of cases distinct from those in the training set. For example, the validation set could be all data points for the $\\alpha=1.0$ case, and the training data would then consist of the $\\alpha = 0.5$, 0.8, and 1.5 cases.\n\nIt is important to stress that the choice of validation set greatly influences the training process. This issue is normally addressed by using $k$-fold cross-validation, where the training data is split into $k$ folds. Each of the folds is then used as a validation set, while the model is trained on the remaining folds. We apply 4-fold cross-validation, where the model is trained four times, each time using one of the remaining cases ($\\alpha = 0.5$, 0.8, 1.0, 1.5) as the validation set. 
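A minimal sketch of this case-wise splitting is shown below; the per-case arrays are random placeholders standing in for the actual feature and label data, and the dictionary layout is an assumption about how the cases might be stored.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n# Placeholder per-case data: 33 input features (as in the feature table)\n# and one label per point; in practice these arrays would be loaded from\n# the periodic hills dataset.\ncases = {a: (rng.random((1000, 33)), rng.random(1000))\n         for a in (0.5, 0.8, 1.0, 1.5)}\n\nfolds = []\nfor alpha_val in cases:\n    X_val, y_val = cases[alpha_val]   # validation set: one full case\n    X_tr = np.concatenate([cases[a][0] for a in cases if a != alpha_val])\n    y_tr = np.concatenate([cases[a][1] for a in cases if a != alpha_val])\n    folds.append((alpha_val, X_tr, y_tr, X_val, y_val))\n\\end{verbatim}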
The model which exhibits the best generalization performance (defined as the difference between the training and validation error) is then selected for testing.\n\nThe input features and labels are often scaled and transformed in deep learning in order to eliminate scale differences between input features and to ensure well behaved weights and biases during training~\\cite{Geron}. To this purpose, a transformation was selected for each of the input features and labels to ensure a more uniform distribution (histogram) of their values before they are used in training. These transformations are given in Table~\\ref{tbl:features}. The selection of input feature transformations is highly heuristic, and depends on the data used for each study. However, we have generally applied a logarithm transform ($\\hat{x} = \\log (|x|+1)$) to data which is entirely positively- or negatively-valued, and a cube-root transform ($\\hat{x} = x^{1\/3}$) to data which contains both positive and negative values. For the majority of data containing RANS mean fields, the histograms tend to be highly skewed due to domain-specific factors. For example, the behavior of almost all fields in the near-wall region is completely different than the bulk flow region. It was found that transforming the components of $a^\\perp$ was not necessary for sufficient model performance. After transforming the features (Table~\\ref{tbl:features}), the values were scaled to lie in the unit interval [0,1] using scikit-learn's MinMaxScaler~\\cite{scikit-learn}. The $\\nu^\\dagger_{t\\theta}$ labels were transformed using $\\hat{ x} = \\text{log}(|x| +1)$, then scaled using the same method as the features. No transformation was necessary for the $a^\\perp_{\\theta}$ labels. Only scaling was applied to the $a^\\perp_{\\theta}$ label set.\n\n\\subsection{Training}\nFor the training process, the Nadam optimizer in Keras was selected~\\cite{keras}. This optimizer combines the adaptive moment estimation (Adam) optimizer with Nesterov momentum to accelerate training. The learning rate used was $1\\times 10^{-4}$. For the EVNN, the loss function was the mean-squared error, with L2 regularization used. The L2 regularization weight $\\lambda$ was $1\\times 10^{-5}$. L2 regularization adds an additional penalty term to the loss function to constrain the norm squared of the optimized model parameters (or, weights)---the neural network seeks the best compromise between minimizing the errors in the prediction of the output and constraining the magnitude of the neural network weights to be as small as possible (viz., shrinking the weights towards zero). In practice, this regularization is required for some deep learning scenarios. We did not apply any L2 regularization to the apNN loss function, as the generalization performance was found to be sufficient without any additional regularization term to constrain the weights.\n\nThe training process was completed for the four cross-validation folds, as described in Section~\\ref{sec:preprocessing}. After completing the training procedure for each of these folds, the model which provided the best generalization performance was selected. The training loss curves for the selected EVNN and apNN are shown in Figs~\\ref{fig:loss_EVNN} and \\ref{fig:loss_apNN}, respectively. While the apNN loss curve remains relatively stable, the loss curve for the EVNN contained instabilities throughout the training process. 
Typically, these instabilities are a symptom of the learning rate being too large, but the learning rate was not found to be responsible for the occurrence of these jumps. This indicates that there may be significant discontinuities in the total loss surface consisting of the mean-squared error for prediction of $\\nu_t^\\dagger$ and the L2 regularization (penalty) term involving the weights. After approximately 1000 epochs, the mean-square error (MSE) part of the EVNN loss function remains relatively stable, indicating that no further accuracy is being gained in predicting $\\nu^\\dagger_t$. The remaining drop in the overall loss function comes from the training procedure optimizing the weights in the EVNN. It was found that the optimal number of epochs at which to stop the EVNN training was 2000, which allows an additional 1000 epochs for the optimizer to tune the network weights.\n\nThe loss curve for the apNN (cf.~Fig.~\\ref{fig:loss_apNN}) is simply the MSE of the prediction, because no regularization term for the weights was included in the training process for this neural network. After about 500 epochs, the training loss is smaller than the validation loss, and the training loss continues to decrease monotonically after this point. By assessing the {\\em a priori} and {\\em a posteriori} performance for various selections of the number of epochs to use as the training endpoint (early stopping), it was found that a training endpoint of 1500 epochs yielded the best results for the apNN. \n\n\\begin{figure}\n\\includegraphics[]{training_curve_EVNN.pdf}\n\\caption{\\label{fig:loss_EVNN} Loss curves during training for the EVNN: (a) the total loss function consisting of the mean squared error (MSE) and the L2 regularization (penalty term) of the weights and (b) the MSE portion of the total loss function.}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[]{training_curve_apNN.pdf}\n\\caption{\\label{fig:loss_apNN} Loss curve during training for the apNN. The loss function for the apNN consists of the mean squared error (MSE) only (viz., no regularization term was added to the loss function).}\n\\end{figure}\n\n\n\\section{Results}\\label{sec:results}\nThe final step in the deep learning procedure includes a test on a hold-out case---known also as the test case that is set aside and used to evaluate the neural network model only after training is complete. For the periodic hills training set, the $\\alpha=1.2$ case was held out of the training. Testing the network on a hold-out case provides an assessment of the network's ability to generalize by predicting a previously unseen case. The results for the hold-out test case are discussed in the following sections. A tilde is used to denote a quantity predicted by the neural networks (e.g., $\\tilde \\nu^\\dagger_t$ is a prediction of the optimal eddy viscosity provided by the neural network).\n\nFor the open-loop framework used here, there are two stages of evaluation: namely, {\\em a priori} and {\\em a posteriori}. In Section~\\ref{sec:apriori}, the model predictions are evaluated before injecting into the modified PIMPLE solver ({\\em a priori} assessment). In Section~\\ref{sec:aposteriori}, the resulting mean flow fields and combined anisotropy tensor ($a$) are evaluated after the iterative solution has converged around the fixed predictions ({\\em a posteriori} assessment). 
The framework used here treats the linear component of $a$ implicitly, so the combined prediction $\\tilde a$ is not available in an {\\em a priori} sense---it must be evaluated {\\em a posteriori}.\n\n\\subsection{{\\em A priori} assessment}\\label{sec:apriori}\nPrior to injecting the model predictions into the RANS equations, the predictions on the $\\alpha=1.2$ test case were analyzed for accuracy. The objective of the EVNN is to accurately predict $\\nu^\\dagger_t$. The mean-squared error is calculated by (for $N$ data points)\n\\begin{equation}\n{\\rm MSE} = \\frac{1}{N}{\\sum_{i=1}^N (\\tilde \\nu^\\dagger_{ti} - \\nu^\\dagger_{t\\theta i})^2}\\ .\n\\end{equation}\nFor the $\\alpha=1.2$ test case, the MSE in the prediction of $\\nu^\\dagger_t$ is $1.89\\times 10^{-7}$ m$^4$~s$^{-2}$. Figure \\ref{fig:nut_pred_hist} compares $\\tilde \\nu^\\dagger_t$ to $\\nu^\\dagger_{t\\theta}$. Figure~\\ref{fig:nut_pred_hist}(a) shows that while the predictions generally fall along the (ideal) line $\\tilde \\nu^\\dagger_t = \\nu^\\dagger_{t\\theta}$ at smaller values of $\\nu^\\dagger_t$, some scatter in the predictions is observed at larger values. The error distribution for the prediction of $\\nu^\\dagger_t$ [Fig.~\\ref{fig:nut_pred_hist}(b)] is roughly symmetric, indicating that there is not a significant over or under prediction tendency for the EVNN. The large prediction errors in Fig.~\\ref{fig:nut_pred_hist} demonstrate the importance of model conditioning---these errors will always be present at model testing time.\n\nFigure~\\ref{fig:nut_pred_contours} shows the spatial contours of $\\tilde \\nu^\\dagger_t$ and $\\nu^\\dagger_{t\\theta}$.\nThis figure shows that the EVNN is able to capture trends in the spatial variation of $\\nu^\\dagger_t$ well. The large values for $\\nu^\\dagger_t$ in the center of the domain are predicted well by the EVNN. In general, the neural network predictions match the ground truth (DNS data), with some minor discrepancies in the high $\\nu^\\dagger_t$ region above the first crest. While Fig.~\\ref{fig:nut_pred_hist} presents a pessimistic view of the ability of the EVNN to accurately predict $\\nu^\\dagger_t$, the contours in Fig.~\\ref{fig:nut_pred_contours} provide a more optimistic view.\n\n\n\\begin{figure}\n\\includegraphics[]{nut_pred_hist.pdf}\n\\caption{\\label{fig:nut_pred_hist} \\textit{A priori} prediction accuracy for the EVNN on the $\\alpha=1.2$ test case: (a) plot of the predicted value (ordinate) vs the ground-truth value obtained from DNS (abscissa) and (b) distribution of the $\\nu^\\dagger_t$ prediction errors. }\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[]{nut_pred_contours.pdf}\n\\caption{\\label{fig:nut_pred_contours} \\textit{A priori} contours of $\\nu^\\dagger_t$ for the test case: (a) ground-truth value from DNS and (b) value predicted by the EVNN.}\n\\end{figure}\n\n\nFigures~\\ref{fig:ap_pred_hist} and \\ref{fig:ap_pred_contour} show that the apNN predicts the non-linear part of the anisotropy tensor well. The MSEs associated with the prediction of each component of $a^\\perp$ are ${\\rm MSE}_{a^\\perp_{xx}}=2.31\\times 10^{-12}$, MSE$_{a^\\perp_{xy}}=9.40\\times 10^{-13}$, and MSE$_{a^\\perp_{yy}}=9.56\\times 10^{-13}$ (in units of m$^4$~s$^{-4}$). Spatial trends in the magnitude of $a^\\perp$ are accurately predicted by the apNN (cf.~Fig.~\\ref{fig:ap_pred_contour}). 
The individual points for the various components generally lie all along the ideal line $\\tilde{a}^\\perp=a^\\perp_\\theta$ (shown as the diagonal dashed lines in Fig.~\\ref{fig:ap_pred_hist}). A perusal of Fig.~\\ref{fig:ap_pred_hist} shows that the diagonal (normal stress) components of $a^\\perp$ are generally better predicted using apNN than the off-diagonal (shear stress) components. Indeed, the error distributions for the prediction of the normal stress components are more markedly peaked around zero (error) than that for the shear stress component [cf.~Figs.~\\ref{fig:ap_pred_hist}(a) and (f) with Fig.~\\ref{fig:ap_pred_hist}(d)].\n\n\n\\begin{figure}\n\\includegraphics[]{ap_pred_hist.pdf}\n\\caption{\\label{fig:ap_pred_hist} \\textit{A priori} prediction accuracy for the apNN on the $\\alpha=1.2$ test case. The prediction accuracy for each component of $a^\\perp$ is shown: (a), (c), (e) plots of the predicted value (ordinate) vs the ground-truth value obtained from DNS (abscissa) and (b), (d), (f) distribution of the $a^\\perp$ prediction errors. }\n\\end{figure}\n\\begin{figure*}\n\\includegraphics[]{ap_pred_contours.pdf}\n\\caption{\\label{fig:ap_pred_contour} \\textit{A priori} contours of each component of $a^\\perp$ for the test case: (a), (b), (c) ground-truth value obtained from DNS and (d), (e), (f) value predicted by the apNN.}\n\\end{figure*}\n\nComparing the predictions by the EVNN (Figs~\\ref{fig:nut_pred_hist} and \\ref{fig:nut_pred_contours}) to the predictions by the apNN (Figs~\\ref{fig:ap_pred_hist} and \\ref{fig:ap_pred_contour}), we see that the apNN generally out-performs the EVNN in terms of predicting their respective targets. This outcome agrees with our experience training and testing several times with these models---$\\nu^\\dagger_t$ is harder to predict than \n$a^\\perp$. This discrepancy indicates that the information content embodied in the RANS fields for the prediction of $a^\\perp$ is larger than that for the prediction of $\\nu^\\dagger_{t\\theta}$.\n\n\n\\subsection{{\\em A posteriori} assessment}\\label{sec:aposteriori}\nThe fields predicted by the EVNN and apNN were injected into the RANS equations. A similar procedure as Section \\ref{sec:conditioning} was utilized, whereby a modified PIMPLE solver in OpenFOAM v2006 was employed to inject the predicted fields from the EVNN ($\\tilde \\nu_t^\\dagger$) and apNN ($a^\\perp$) into the RANS equations, and iteratively reach a converged solution for the mean velocity field. \n\nFigure~\\ref{fig:U_mag_contours} shows the contours of the velocity magnitude for the converged solution. The converged results agree well with the flow fields coming from the highly resolved DNS simulation. In particular, the size of the separated region is well predicted, as well as several other important features of the flow: namely, boundary layers along the top and bottom wall, reattachment of the flow, and acceleration over the rightward hill. \n\nExamining several slices of the velocity field in Figs~\\ref{fig:U_samples} and \\ref{fig:V_samples}, we see that both the $U$ and $V$ velocity components are captured well compared to the DNS fields. While the base RANS model (the green line in Figs~\\ref{fig:U_samples} and \\ref{fig:V_samples}) fails to predict the sharp transition in the $U$ profiles above the separated region, the ML augmented solution predicts these transitions accurately. Furthermore, the $U$ profiles after reattachment show much better agreement with the DNS solution than the base RANS model. 
Along the top wall, the ML augmented solution slightly underpredicts the $U$ profile, while the base RANS model overpredicts the $U$ profile.\n\nWhile the test flow here is dominated by the streamwise flow, the $V$ profiles in Fig.~\\ref{fig:V_samples} shed light on key areas in which the ML augmentation improves the solution. The base RANS model under-predicts the bulge in $V$ within the separated region, thereby indicating an over-prediction of separation. The tendency of commonly used RANS models to over-predict separation is widely known. Another key area of improvement in the solution concerns the acceleration of the flow over the right hill---the RANS model greatly under-predicts the $V$ component along the right hill, while the corrected solution exhibits a much more accurate estimate compared to DNS. \n\n\n\n\\begin{figure}\n\\includegraphics[]{U_mag_contours.pdf}\n\\caption{\\label{fig:U_mag_contours} \\textit{A posteriori} contours of velocity magnitude after injecting the model predictions into the RANS equations: (a) DNS flow field and (b) result predicted by the present study. Here, $\\| \\tilde U \\|$ is the magnitude of the velocity vector arising from the final converged solution, after injecting the model predictions for $\\nu^\\dagger_t$ and $a^\\perp$.}\n\\end{figure}\n\n\n\n\\begin{figure*}\n\\includegraphics[]{U_samples.pdf}\n\\caption{\\label{fig:U_samples} \\textit{A posteriori} samples of the $U$ velocity component along several lines throughout the flow field.}\n\\end{figure*}\n\\begin{figure*}\n\\includegraphics[]{V_samples.pdf}\n\\caption{\\label{fig:V_samples} \\textit{A posteriori} samples of the $V$ velocity component along several lines throughout the flow field.}\n\\end{figure*}\n\n\nFigure \\ref{fig:U_V_error_histograms} compares the error distributions in $U$ and $V$ from the base RANS model to the ML augmented solution. The base RANS model displays a tendency to over-predict the $U$ component, demonstrated by a left-skewed $U$ error histogram. While the augmented solution contains smaller error magnitudes, it retains this same skewed tendency. The $V$ histograms are both roughly symmetric, with the augmented solution being slightly more skewed than the base RANS model.\n\n\\begin{figure}\n\\includegraphics[]{U_V_error_histograms.pdf}\n\\caption{\\label{fig:U_V_error_histograms} Error distribution for the $U$ and $V$ velocity components after injecting the neural network model predictions for $\\nu_t^\\dagger$ and $a^\\perp$ into the RANS equations: (a) error in the $U$ velocity component and (b) error in the $V$ velocity component.}\n\\end{figure}\n\nSince the linear part of $a$ continues to evolve as the solution converges, the full anisotropy tensor is an {\\em a posteriori} quantity. Figure~\\ref{fig:a_mag_contours} compares the predicted anisotropy tensor magnitude to the DNS value. The prediction for $a$ generally agrees well with $a_\\theta$. A slight increase in the magnitude of $a$ as the flow accelerates over the right hill is captured, as well as the large increase in $a$ as the flow separates over the left hill. This accurate prediction of $a$ confirms the assumption in the present open loop framework that if $a^\\perp$ and $\\nu^\\dagger_t$ are accurately predicted, $\\vec{U}$ can evolve such that $\\vec{U}\\approx\\vec{U}_\\theta$, and therefore $S\\approx S_\\theta$. 
Finally, given $a^\\perp \\approx a^\\perp_\\theta$, $\\nu^\\dagger_t \\approx \\nu^\\dagger_{t\\theta}$, and $S\\approx S_\\theta$, we have $a \\approx a_\\theta$, which is the result shown in Fig.~\\ref{fig:a_mag_contours}.\n\n\\begin{figure}\n\\includegraphics[]{a_mag_contours.pdf}\n\\caption{\\label{fig:a_mag_contours} \\textit{A posteriori} contours of the anisotropy tensor magnitude: (a) ground-truth value obtained from DNS and (b) value predicted by the present study.}\n\\end{figure}\n\n\nThe conditioning problem for data-driven closures is often described as an amplification of errors in the closure term. It was of interest to determine whether this amplification was present in the solution after correcting $\\nu^\\dagger_t$ and $a^\\perp$. Figure~\\ref{fig:Rel_error_comparison_hist} compares the relative errors in the converged velocity field to the relative errors in the closure term, $a$. The errors in $U$ are \\textit{reduced} compared to the errors in $a$. This reduction indicates that a well-conditioned closure formulation can also benefit the solution by suppressing errors in the closure term, leaving more room for errors in the machine learning model predictions. Clearly, the issue of conditioning deserves more attention in data-driven turbulence modelling. We recommend that a similar conditioning analysis be completed each time a new decomposition of the closure term is presented, or each time a new injection framework is proposed. \n\n\\begin{figure}\n\\includegraphics[]{Rel_error_comparison_hist.pdf}\n\\caption{\\label{fig:Rel_error_comparison_hist} \\textit{A posteriori} comparison of conditioning error in $U$ to conditioning errors in $a$. Relative error is calculated in an identical manner to Fig.~\\ref{fig:Eddy_viscosity_error_a_theta_histograms}.}\n\\end{figure}\n\n\nAn important flow phenomenon in the periodic hills test case is separation. For the $\\alpha=1.2$ test case, the flow completely separates over the left hill, and reattaches along the bottom wall. The reattachment point is identified by the change in sign of the wall shear stress $\\nu \\frac{\\partial U_n}{\\partial n}$, where $n$ is the wall-normal direction, and $U_n$ is the velocity parallel to the wall. Figure~\\ref{fig:WSS} shows the predicted wall shear stress along the reattaching region of the bottom wall.\n\nThe reattachment point $x_r$ in Fig.~\\ref{fig:WSS} is the location where the wall shear stress changes signs. The base RANS model predicts a delayed reattachment point, a result of over-predicting the separation. The ML corrected velocity field reattaches slightly earlier than the DNS field. Nevertheless, the predicted reattachment point is much closer after applying the ML augmentation. The relative error in $x_r$ for the base RANS model is 20\\%, while the error in the ML corrected solution is significantly reduced to 8.9\\%.\n\n\n\\begin{figure}\n\\includegraphics[]{WSS.pdf}\n\\caption{\\label{fig:WSS} Wall shear stress for a portion of the bottom wall. The area considered is shown in the top right corner of the figure. The reattachment position (i.e., the location at which the wall shear stress changes sign) is summarized in the bottom right corner.}\n\\end{figure}\n\n\\subsection{Interpreting the data-driven corrective closure model}\\label{sec:interpret}\nGiven the rich input feature set used by the two neural networks in the present work, it was of interest to determine which features were found to be important by the model. 
In deep learning, this analysis is referred to as \"interpretability\"---taking a look inside the neural network black box. For this purpose, Shapley additive explanation (SHAP) values are commonly used to interpret machine learning models. The SHAP package by Lundberg and Lee~\\cite{Lundberg2017} was used to collect the SHAP values for a range of $\\nu^\\dagger_t$ predictions by the EVNN. The EVNN was selected for an interpretability analysis, because it is the principal neural network in the present model. The eddy viscosity prediction is a critical injected component. SHAP values are used to determine the relative incremental contribution of each input feature to the prediction, while accounting for non-linear interactions between features. \n\n\\begin{figure}\n\\includegraphics[]{shap_hist.pdf}\n\\caption{\\label{fig:shap_hist} Mean absolute SHAP values for each feature over the entire test case. This value is used to measure the importance of each feature. The values are sorted from highest relevance ($\\phi$) to lowest relevance ($I_1(B_{21})$). The SHAP values in this plot were calculated using the EVNN.}\n\\end{figure}\n\nSummation of the SHAP values for all predictions can be used to determine the global relative importance of each input feature. Figure~\\ref{fig:shap_hist} shows the relative importance of each input feature used in the EVNN. On a global basis, the use of $\\phi$ as an input feature (made possible by the use of the $\\phi$-$f$ model) was highly valuable, as the EVNN ranked it as the most important feature. Furthermore, the heuristic scalars ($q_1$, $q_2$, $q_3$) also ranked relatively high in terms of importance. The rest of the input features consist of various combinations of the strain rate, rotation rate, $k$ gradient, and $v^2$ gradient. The second invariant $I_2$, used in the present work to extract additional features, also added useful input features. Interestingly, there is no general correlation between the usefulness of $I_1$ and $I_2$ for a given basis tensor---the feature $I_1(B_7)$ ranked third, but $I_2(B_7)$ ranked twenty-fifth. The opposite is true for $B_5$---the second invariant ranked much higher than the first invariant.\n\nFigure~\\ref{fig:local_shap} shows the relative local SHAP values for the EVNN at several important points in the flow. The SHAP values for the eight most important features from Fig.~\\ref{fig:shap_hist}, normalized by the maximum value at each location, are exhibited in this figure. The locations are annotated in Fig.~\\ref{fig:local_shap}(a), and are summarized as follows: (b) the bulk flow region above the separated region; (c) the upper wall above the reattached and developing flow; (d) the accelerating flow above the right hill; (e) the lower wall just after reattachment; and (f) the separated region. A blue colour indicates a positive influence on the $\\nu^\\dagger_t$ prediction, while a red colour indicates a negative influence. The sign indicates the direction each input feature \"nudges\" the output prediction, while the magnitude indicates the amount of influence.\n\nFigure~\\ref{fig:local_shap} shows that while some regions of the flow may feature similar phenomena (e.g., (b) and (g)), the relative importance of each feature is distinct. Though locations (c), (d), and (e) are all near the boundary layer, the magnitude of the $q_1$ importance is relatively low. Though $q_1$ is an input feature designed to detect near-wall regions, $q_1$ does not strongly influence $\\tilde \\nu^\\dagger_t$ near the wall. 
At the top wall (c), the anisotropy measure $\\phi$ is the most important feature. On the contrary, as the flow develops along the bottom wall at (e), $\\phi$ is less important than some higher-order gradients of the mean field. In the separated region (f), the ratio of total to normal Reynolds stress ($q_3$) is significantly more important than the other input features. This observation justifies the heuristic basis for including $q_3$. As the flow recirculates along the bottom wall, it is possible that the EVNN has learned to associate a particular change in $q_3$ with separation. The anisotropy measure $\\phi$ was not found to be as important in the separated region, but was important for the near-wall points (c) and (e).\n\n\n\n\n\n\n\\begin{figure}[!h]\n\\includegraphics[]{local_shap.pdf}\n\\caption{\\label{fig:local_shap} Relative feature importance at six locations in the flow: (a) locations in the flow domain and (b)--(g) bar plots showing the relative feature importance for the eight most important features in Fig.~\\ref{fig:shap_hist}. The relative influence on the prediction is calculated by normalizing the SHAP values by the highest SHAP value within the top eight features at the given location.}\n\\end{figure}\n\n\n\\section{Conclusion}\\label{sec:conclusion}\nApplying machine learning to develop data-driven turbulence closures is a promising method to improve the accuracy of RANS simulations. However, a major issue for such models is the conditioning of the RANS equations after injecting a prediction for the closure term. If the framework is ill-conditioned, any errors made in the closure term could be amplified, and result in large errors in the converged mean field.\n\nThe decomposition of $\\tau$ (and, in turn, $a$) proposed in this work was shown to have good conditioning, through an analysis similar to Brener {\\em et al.}'s~\\cite{Brener2021}. By injecting the training labels (calculated from DNS), rather than the model predictions, the resulting mean field provides the \"upper limit\" achievable at testing time. We demonstrated that decomposing $a$ into an optimal eddy viscosity ($\\nu^\\dagger_t$) and a remaining non-linear part ($a^\\perp$) produced a well-conditioned closure. We also motivated the inclusion of the non-linear part of $a$ through evaluating the best prediction by a purely linear model.\n\nWe impose the often relaxed requirement of Galilean invariance on all input features. The input feature set was augmented by using an additional invariant to the one previously used in the literature. The minimal integrity basis for the strain rate, rotation rate, $k$ gradient, and $p$ gradient consists of 47 tensors, and the first invariant yields a scalar for each of these basis tensors. For a complex three-dimensional flow, using the second tensor invariant can effectively double the number of input features arising from this tensor basis to 94. For the two-dimensional flow considered here, we showed that many of these basis tensors are zero (Appendix~\\ref{ap:invariants}), an outcome not previously discussed in detail. However, the input feature set used in the present work is still one of the richest feature sets applied for data-driven turbulence modelling to date. To determine which input features were found to be important, an interpretability framework (SHAP) was also applied to explain the predictions for the optimal eddy viscosity. 
Using this framework, it was found that the three heuristic scalars $q_1$, $q_2$ and $q_3$ ranked high along with $\\phi$ (normalized wall-normal Reynolds stress) for the prediction of the optimal eddy viscosity.\n\n\nUsing one neural network to predict $\\nu^\\dagger_t$ and another to predict $a^\\perp$, we demonstrated that these quantities are predictable with reasonable accuracy for a periodic hills dataset. Even after introducing model error into the predictions, the resulting mean fields agree well with the DNS results. {\\em A posteriori} predictions of both the velocity field and the anisotropy tensor were demonstrated to agree well with the DNS results. Furthermore, the framework proposed here yields a \\textit{smaller} error in the velocity field than in the closure prediction. This result is significant because it shows that under certain conditions, the RANS equations can suppress rather than amplify errors in the closure prediction.\n\nIn the present work, we also demonstrated that machine learning augmentation can be used to improve even a sophisticated RANS model. The base turbulence model used was the $\\phi$-$f$ model, a type of $v^2$-$f$ model with three transport equations and one elliptic relaxation equation. The majority of previous studies have focused on simpler turbulence closure models, such as the standard $k$-$\\varepsilon$ model or the $k$-$\\omega$ model. This result indicates that machine learning augmentation has the potential to improve the results from even more sophisticated and recent RANS models, such as the elliptic blending (EB) $k$-$\\varepsilon$-lag model~\\cite{Lardeau2016}. Since the field $\\phi$ was found to be a highly useful feature, we conclude that more sophisticated models may benefit even more from machine learning augmentation than the simpler RANS models. The wider set of mean fields for forming a complex closure relationship may provide a richer description of the flow, yielding more information for a machine learning model to use in predicting quantities of interest. This idea could be extended to develop an augmented Reynolds stress transport model, where an even broader set of mean fields is available.\n\nFuture work includes further optimizing the non-linear part of the anisotropy tensor $a^\\perp$. While the present work imposes strict invariance properties on the input features, the direct prediction of the components of $a^\\perp$ means that the outputs do not possess the same invariance properties. Possible options for remedying this discrepancy include using a tensor basis neural network (TBNN) to construct a prediction of $a^\\perp$ that is invariant with respect to coordinate transformations (e.g., rotations, reflections). Further work also includes applying this framework to a broader set of flows. While the periodic hills dataset used here sufficiently demonstrated the merits of this framework, using a wider range of flows could provide additional useful information, especially when interpreting the closure model. For example, an analysis of the importance of various features for distinct flows could guide the input feature selection for a sophisticated three-dimensional machine learning closure model.\n\n\n\\begin{acknowledgments}\nR.M. is supported by the Ontario Graduate Scholarship (OGS) program, and the Natural Sciences and Engineering Research Council of Canada (NSERC). 
The computational resources for this work were supported by the Tyler Lewis Clean Energy Research Foundation (TLCERF) and the Shared Hierarchical Academic Research Computing Network (SHARCNET).\n\\end{acknowledgments}\n\n\\section*{Data Availability Statement}\n\n\n\n\nThe data that support the findings of this study are openly available in \"A curated dataset for data-driven turbulence modelling\" at doi.org\/10.34740\/kaggle\/dsv\/2637500.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDynamical localization of Floquet states in time-dependent and chaotic Hamiltonian systems\nis a phase coherent effect arising from quantum interferences. Quantum kicked rotor \nis a paradigmatic model for quantized chaotic systems which displays localization effects.\nQuantum localization in kicked rotor (KR) continues to attract attention\nin a variety of contexts ranging from metal-insulator transitions \\cite{chabe, garcia}, coherent control \\cite{gong},\nentanglement measures \\cite{santh}, quantum resonances \\cite{fish1, max, beh, dana1}, quantum ratchets \\cite{schanz, dana} to quantum transport \\cite{jones}\nand decoherence effects \\cite{sarkar, lutz}. Most of such studies have focussed on KR as a model for time-dependent potential exhibiting classically chaotic\ndynamics and quantum mechanical localization. For sufficiently strong non-linearity,\nKR displays chaotic classical dynamics and it is associated with diffusive growth of mean energy\nwith time. In the corresponding quantum regime, this unbounded energy growth\nis strongly suppressed by localization arising from destructive quantum interferences \\cite{kr}.\nThis effect in KR has been shown to be analogous to Anderson localization for electronic\ntransport in crystalline solids \\cite{fishman, stockmann, reichl}.\n\nOne significant property shared by the quantum KR and Anderson model is\nthe exponential decay of their eigenstates. In the one dimensional Anderson model all\nthe eigenstates are exponentially localized in position representation \\cite{anderson1, anderson2}, \ni.e, $\\psi(x) \\sim e^{-x\/x_l}$ where $x_l$ is the localization length. In the KR\nsystem, eigenstates are exponentially localized in the momentum representation \\cite{dima}.\nThe latter has been experimentally realized in microwave ionization of hydrogen atoms and \nin cold atomic cloud in optical lattices \\cite{moore, delande, fromhold}.\n\nKicked rotor can thus be regarded as a representative dynamical system from two distinct points of view.\nFirstly, in the classical sense, it belongs to a class of chaotic systems that obeys \nKolmogorov-Arnold-Moser (KAM) theorem \\cite{jurgen}. This effectively implies that, upon\nvariation of a chaos parameter, the system makes a smooth transition from regular to\npredominantly chaotic dynamics. Secondly, in the quantum mechanical regime,\nKR is a paradigmatic example of dynamical localization and the associated exponential\nprofile of its Floquet states. In the last one decade, many other facets of chaos and localization\nin variants of KR have been studied that have provided results different\nfrom this standard scenario \\cite{bala, italo, pragya, klappauf, garcia}.\n\nOne class of important variant is to place the KR in a singular\npotential. 
Presence of singularity in the potential violates one of the conditions\nfor the applicability of KAM theorem and leads to a scenario in which abrupt,\nrather than smooth, transition\nfrom integrability to chaotic dynamics becomes possible. Such abrupt transition to chaos\nis a feature of non-KAM systems and is seen, for instance, in the\nkicked particle in an infinite potential well \\cite{sankar, hu}. The quantum eigenstates of this\nsystem had been reported to display localization and its profile is {\\sl not}\nexponential but was claimed to have power-law type decay in the unperturbed basis.\nA more systematic study in Ref. \\cite{garcia} incorporated singularity in the KR\nthrough a tunable potential term $V(q;\\alpha)$ such that it becomes singular at some special\nvalue of tunable parameter $\\alpha= \\alpha_s$. It was shown, through numerical simulations,\nthat if $\\alpha=\\alpha_s$ in the potential, then all the eigenstates of the system are\npower-law localized. Indeed, it was even suggested that KR when acted upon by\na singular potential would display eigenstate localization with power-law profile\nin contrast to the exponential profile obtained in the context of standard KR \\cite{hu, liu}.\nThis suggestion has not yet been numerically tested in a variety of chaotic\nHamiltonian systems and general analytical results in support for this claim remains\nan open question.\n\nIn this paper, we examine the question whether the presence of non-analytic potential\nin a kicked rotor would {\\it generically} imply power-law profile for its eigenstates\nin the quantum regime. To address this question, we consider the dynamics of a periodically kicked particle\nplaced in a stationary finite potential well of height $V_0$. This is primarily a non-KAM system\nand its unusual classical and quantum transport properties, reflective of its\nnon-KAM nature, were recently reported in Ref. \\cite{paul}. This system subsumes two limiting cases; it\nis the standard KR (a KAM system) in the absence of finite well potential, i.e, $V_0 = 0$ and\nif $V_0 \\to \\infty$, then it becomes a kicked rotor system placed in an infinite well\n(a non-KAM system) and has been studied in Refs. \\cite{sankar, hu}.\nHence, this is a suitable test bed to understand the transition in the nature of\nFloquet states as $V_0$ is varied from the\nlimit of a KR system (analytic potential) to that of a system with \nsingular potential. Further, this can lead to a better understanding of the quantum manifestations of\nclassical chaos non-KAM systems. \n\n\n\\begin{figure}\n\\includegraphics*[width=3.0in]{fig1.eps}\n\\caption{(Color online) Schematic of the stationary\npotential, $V_{sq}(\\theta)$ with $V_0$ as the potential height, $b$ and $w$ as barrier and well width. $A$ and $B$ represents the positions at which periodic boundary conditions are applied. I and II denotes the regions below and above $V_0$.}\n\\label{fig1}\n\\end{figure}\n\n\n\n\nUsing the context of this system based on KR, we show in this paper that the\npresence of singularity in the potential does not always guarantee\npower-law localization of Floquet states.\nSingular potentials are associated with power-law localized Floquet states provided\nthat the Floquet states span an energy band in which the singularity is effectively\nfelt by the particle. Further, it is demonstrated that the spectral\nfluctuations properties such as the level spacing distributions for this\nsystem depends on the energy range being considered. 
Hence, spacing distributions\ndo not characterize the system at all energy scales.\n\nIn section 2, we introduce the model of a kicked particle in a finite barrier and in\nsection 3 we report results on the decay profile of the Floquet states and relate\nit to the decay of the Floquet matrix and to the effective singularity ``felt'' by the\nkicked particle at various energy scales. In section 4, we obtain a tight binding\nform for our system to deduce the non-exponential nature of Floquet state decays.\nFinally, in section 5, we discuss the manifestation of potential singularity in \nthe averaged quantities derived from Floquet states.\n\n\n\n\n\n\n\\section{Kicked particle in finite barrier}\n\n\n\nThe dimensionless Hamiltonian of a periodically kicked particle in a finite well\npotential \\cite{paul} is\n\\begin{equation}\n\\begin{aligned}\nH &= \\frac{p^2}{2}+V_{sq}(\\theta)+k~\\cos(\\theta) \\sum_{n=-\\infty}^{\\infty} \\delta(t-n)\\\\\n &= H_0 + V(\\theta) \\sum_{n=-\\infty}^{\\infty} \\delta(t-n).\n\\end{aligned}\n\\label{ham1}\n\\end{equation}\nIn this, $V(\\theta)=k~\\cos(\\theta)$ and $V_{sq}(\\theta)$ is the square well potential shown in Fig. \\ref{fig1} and can\nbe represented as\n\\begin{equation*}\nV_{sq}(\\theta)=V_0[\\Theta(\\theta-R\\pi)-\\Theta(\\theta-R\\pi-b)],\n\\end{equation*}\nwhere $V_0$ is the potential height and $b$ is the barrier width,\n$R=w\/\\lambda$ is the ratio of well width to the wavelength of the kicking field and $k$ is the kick \nstrength. Throughout this work, we have set $\\lambda=2\\pi$, and $b$ and $w$ are constrained by\n$b+w=2\\pi$. Periodic boundary conditions are applied at positions A and B shown in Fig. \\ref{fig1}.\n\n\n\n\n\t\t\t \nLet $E_n$ and $|\\psi_n\\rangle$ represent the energy and the eigenstate of\nthe unperturbed system such that $H_0|\\psi_n \\rangle = E_n |\\psi_n \\rangle$.\nFurther, $|\\psi_n\\rangle$ can be written as a superposition of all momentum\nstates $|l\\rangle$, i.e. $|\\psi_n \\rangle=\\sum_l a_{nl}|l\\rangle$, where $a_{nl}$ represents the \nexpansion coefficient. Then any general initial state can be expressed\nin the energy basis state representation as $|\\Psi\\rangle=\\sum_n b_n|\\psi_n\\rangle$.\nThe mean energy in the state $|\\Psi(t)\\rangle$ can be obtained as\n\n \\begin{equation}\n \\begin{aligned}\n E(t) &= \\langle\\Psi(t)|\\hat{H_0}|\\Psi(t)\\rangle \\\\\n &= \\sum_m E_m |b_m(t)|^2.\n \\end{aligned}\n \\end{equation}\nThe quantum map that connects the state $|\\Psi(N+1)\\rangle$ at time $N+1$ with the \nstate $|\\Psi(N)\\rangle$ can be obtained by evolving the Schroedinger equation\n$|\\Psi(N+1)\\rangle=\\widehat{U}|\\Psi(N)\\rangle$,\nwhere $\\widehat{U}$ is the Floquet operator, \n\\begin{equation}\n\\widehat{U}=e^{-iH_0\/\\hbar_s}e^{-iV(\\theta)\/\\hbar_s},\n\\end{equation}\nand $\\hbar_s$ is the scaled Planck's constant. In the energy representation,\nthe elements of the Floquet operator are given by\n\\begin{equation}\nU_{nm}=\\sum_{p,p'} a_{np}^{*} a_{mp'} i^{|p-p'|}J_{|p-p'|}\\left(\\frac{k}{\\hbar_s}\\right),\n\\label{umat}\n\\end{equation}\nwhere $J_{|p-p'|}(\\cdot)$ is the Bessel function of order $|p-p'|$.\n\nThe eigenvalue equation governing the Floquet operator is $\\widehat{U} |\\phi\\rangle = e^{i \\omega} |\\phi\\rangle$\nin which $|\\phi\\rangle$ represents a Floquet state and $\\omega$ is its quasi-energy. The \nFloquet operator is a unitary operator and hence the eigenvalues lie\non a unit circle. 
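For orientation, a minimal numerical sketch of constructing and diagonalizing such a Floquet matrix is given below for the $V_0=0$ (standard KR) limit, in which the unperturbed eigenstates reduce to momentum states so that $a_{np}=\\delta_{np}$; the basis size and parameter values are illustrative assumptions, and the finite-well case additionally requires the numerically computed expansion coefficients $a_{np}$.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import jv        # Bessel function of the first kind\n\nhbar_s, k = 1.0, 4.25               # scaled Planck constant and kick strength\nN = 512                             # basis truncation (illustrative)\nn = np.arange(-N // 2, N // 2)\n\n# Free evolution between kicks: exp(-i E_n / hbar_s) with E_n = hbar_s * n**2 / 2\nfree_phase = np.exp(-1j * 0.5 * n ** 2)\n\n# Kick factor, following the Bessel-function form of the matrix elements above\nd = np.abs(n[:, None] - n[None, :])\nkick = (1j) ** d * jv(d, k / hbar_s)\n\nU = free_phase[:, None] * kick      # Floquet matrix U_{nm}\nevals, phi = np.linalg.eig(U)       # eigenphases exp(i omega) and Floquet states\nomega = np.angle(evals)\n# |phi[:, j]|**2 gives the profile of the j-th Floquet state over the basis\n\\end{verbatim}\nIn the finite-well case, the corresponding matrix is built in the energy basis of $H_0$ using the expansion coefficients $a_{np}$ before being diagonalized in the same way.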
Further, the quasi-energy state $|\\phi\\rangle$ can be decomposed as a \nsuperposition of all energy states $|\\psi_n\\rangle$, i.e., \n$|\\phi\\rangle = \\sum_n c_n |\\psi_n\\rangle$, where $|c_n|^2$ is the\nprobability of finding the particle in the state $|\\psi_n\\rangle$.\n\n\n\n\nIn order to analyse the localization properties of the\nFloquet states, the Floquet matrix of order $N$ is numerically diagonalized to determine the\nquasi-energies and the Floquet vectors. In this work, $N=10035$ and we have ensured \nthat for the choice of parameters used in this paper, the system is classically chaotic \\cite{supplement}.\nFloquet states of the standard KR are generally known to be exponentially localized in\nmomentum space, $|\\psi(p)|^2 \\sim \\exp(-p\/\\xi)$, characterized by a localization length $\\xi$. In\ncontrast, the Floquet states of a kicked particle in a periodic potential well are localized over the\nenergy basis states $|\\psi_n\\rangle, n=1,2,\\dots$. In the subsequent sections,\nit is shown that the system in Eq. \\ref{ham1} exhibits a transition from exponential\nto power-law localization as the parameters $V_0$ and $k$ are varied.\n\t\t\t\n\n \n\\section{Floquet states}\t\t\n\n\n\\begin{figure}\n\\centering\n\\includegraphics*[width=3.3in]{fig2.eps}\n\\caption{(Color online) Decay of Floquet states over the unperturbed basis states, averaged\nover all the Floquet states. The parameters are $b=1.4\\pi, \\hbar_s = 1.0$, (a) (KRIW limit) \n$V_0=5000.0, k=0.25$, and (b) (KR limit) $V_0=0.5, k=4.25$. In (a) the black line is a linear fit and in (b) the black curve corresponds to the KR.}\n\\label{avgdFloquet states}\n\\end{figure}\n\n\t \nIn this section, we will mainly focus on the average spectral properties of the\nFloquet states, which govern the dynamics in the quantum regime.\nFor the finite well represented by the Hamiltonian in Eq. \\ref{ham1}, the nature of\nthe Floquet state decay profile will, in general, depend on the choice of parameters, namely, the\nkick strength $k$ and the potential height $V_0$.\nFig. \\ref{avgdFloquet states} has been obtained by averaging over $10035$\nFloquet states $|\\phi\\rangle(=\\sum_n c_n |\\psi_n\\rangle)$ for each set of\nparameters. Prior to averaging, each Floquet state was shifted by $n_{max}$, i.e. $v = n-n_{max}$, where $n_{max}$\ncorresponds to $n$ for which $|c_n|^2$ is maximum.\n\nIf $V_0 > 0$, the Hamiltonian in Eq. \\ref{ham1} is a non-KAM system due to the presence of\nsingularities in $V_{sq}(\\theta)$. Based on numerical simulations of kicked systems with singular\npotentials, it was argued that their Floquet states display power-law decay over the\nunperturbed basis \\cite{sankar, hu, liu, garcia, jose}. Further, a new universality class has been proposed in Ref. \\cite{garcia} based on the\npresence of classical singularity and power-law localization.\nTo discuss the results in the light of this proposal, two limiting cases can be identified:\n(i) $0< V_0 < 1$ (KR limit) and\n(ii) $V_0 \\gg 1$ (KR in infinite well (KRIW) limit).\nIn the KR limit, notwithstanding the singularity in the potential, the Floquet states can be \nexpected to be qualitatively closer to those of the KR.\nIn particular, if the kick strength $k \\gg 1$, all the Floquet states display an exponential decay profile.\nOn the other hand, in the KRIW limit, the potential height is large ($V_0 \\gg 1$) and the system is \nqualitatively closer to the kicked infinite well system \\cite{sankar, hu}. 
In this limit, even for small kick strengths \n$k<1$, it is known that all the Floquet states show power-law decay over the \nunperturbed basis \\cite{hu, liu}. Both these limits are illustrated in Fig. \\ref{avgdFloquet states}.\n\nIn Fig. \\ref{avgdFloquet states}(a), the decay of the averaged Floquet state in the KRIW limit is\nshown for $V_0=5000.0$ and $k=0.25$. It is consistent with a power-law form $P(v) \\sim v^{-\\gamma}$,\nwhere $v > 0$ and $\\gamma \\approx 2.5$, in agreement with the value reported in Ref. \\cite{hu}; the observed deviation can be attributed to the finite height of the well.\nOn the other hand, the averaged Floquet state in the KR limit for $V_0 = 0.5$\nand $k = 4.25$ shown in Fig. \\ref{avgdFloquet states}(b) displays exponential decay, $P(v) \\sim \\exp(-v\/l)$, where $l$ is the\nlocalization length. This is the standard dynamical localization scenario but is generally\nnot associated with non-KAM systems.\nBoth these decay profiles in Fig. \\ref{avgdFloquet states} can be understood if the relation\nbetween singular potential and power-law localization is restated in the following\nmanner. For this purpose, let $\\epsilon_{if}= \\{E_i, E_{i+1}, E_{i+2}, \\dots E_f\\} $ \ncollectively represent the energies of a set of states of $H_0$ lying in the energy band $(E_f-E_i)$\nbetween two states with quantum numbers $f$ and $i$\nin the unperturbed system. The classical singularities are associated with quantum\npower-law localization of a set of Floquet states mostly lying in the energy range $\\epsilon_{if}$,\nprovided $\\epsilon_{if} < V_0$. Thus, Floquet states will display power-law localization\nonly if they effectively ``feel'' the non-smooth potential. This requires that the\nenergy scale $\\epsilon_{if}$ be less than that representing $V_0$.\n\nIt must be emphasised that the Hamiltonian in Eq. \\ref{ham1}\nis classically a non-KAM system if $V_0 > 0$, for all values of $k>0$.\nHence, with $V_0=5000.0$ and for a kick strength as small\nas $k=0.25$, the system is classically chaotic \\cite{supplement} and the corresponding quantized\nsystem displays a power-law localized profile of the Floquet states (Fig. \\ref{avgdFloquet states}(a)).\nIn this case, most of the $10035$ Floquet states used for averaging are such that $\\epsilon_{if} < V_0$.\nOn the other hand, in the\ncase of Fig. \\ref{avgdFloquet states}(b), even though it is still a non-KAM system with\na singular potential, the Floquet states mostly straddle energy scales $\\epsilon_{if}$ larger than\n$V_0$ and are not affected by the shallow singular potential; hence,\nexponentially localized Floquet states are obtained.\n\n\\begin{figure}\n\\centering\n\\includegraphics*[width=3.3in]{fig3.eps}\n\\caption{(Color online) Averaged decay of the matrix elements of the Floquet operator $\\widehat{U}$\nas a function of $m$. The parameters are $b=1.4\\pi, \\hbar_s = 1.0$, (a) (KRIW limit) \n$V_0=5000.0, k=0.25$, and (b) (KR limit) $V_0=0.5, k=4.25$, (c) $V_0=100.0, k=4.25$,\n(d) $V_0=5000.0, k=4.25$. $n_c$ represents the crossover point from exponential to power-law profile. All black curves correspond to the KR.}\n\\label{mele}\n\\end{figure}\n\n\\subsection{Matrix element decay}\nIt is known that exponential localization in the KR is associated with\nthe exponential decay of the matrix elements of the corresponding Floquet operator.\nFurther, from Ref. 
\\cite{sankar, hu}, it is also known that in the case of KR in the\ninfinite well, a non-KAM system, the matrix elements of $\\widehat{U}$ display a \npower-law decay after a bandwidth $\\eta\\propto k$.\nHence, it is natural to enquire how the decay of matrix elements changes\nits character as $V_0>>1$ approaches the limit $V_0 \\to 0$. In the unperturbed basis, the\nmatrix elements are $U_{nm}$ as given by Eq. \\ref{umat}.\nThis is illustrated in Fig. \\ref{mele} which shows $M_m = \\langle |U_{nm}| \\rangle_n$\nas a function of $m$, with $m>n$, in log-log plot.\n\nFigure \\ref{mele}(a,b) shows $M_n$ as log-log plot for the same choice of parameters as in \nFig. \\ref{avgdFloquet states}(a,b). Figure \\ref{mele}(a) corresponds to KRIW limit and\nshows a short regime of exponential decay followed by an asymptotic power-law decay.\nIn Fig. \\ref{mele}(b), $V_0=0.5$ corresponding to the KR limit and the decay of $M_n$\nlargely follows that of KR except for $n>>1$ where it decays as a power-law.\nIn general, the following features are observed. In the limit as $V_0 \\to \\infty$,\nthe decay is of power-law form. In the opposite limit of $V_0 \\to 0$, the decay\nis exponential in nature. In general, for any intermediate $V_0$, i.e., $0 < V_0 < \\infty$, \nan initial exponential decay is followed by an asymptotic power-law decay whose\nslope is approximately 2.7.\nIf $V_0 < \\infty$, the initial exponential decay is always present. \nThe exponential decay sharply changes over\nto a power-law decay at $n=n_c$ as shown by dotted vertical lines in Fig. \\ref{mele}.\nFor any fixed value of kick strength $k$, as $V_0$ varies from 0 $\\to \\infty$, then $n_c$\nchanges from $\\infty \\to 0$. It is also to be noted that for fixed $V_0$, as $k$\nincreases, $n_c$ also increases.\n\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics*[width=6.2 in]{fig4.eps}\n\\caption{(Color online) Floquet states of Hamiltonian in Eq. \\ref{ham1} with parameters $b=1.4\\pi$, $\\hbar_s=1.0$, \n$V_0=5000.0$, $k=4.25$. The three figures differ in how Floquet states were averaged over;\n(a) averaged over all the computed states, (b) averaged over states with $\\mu< 1$, \n(c) averaged over states with $\\mu \\gg 1$. In (a,b) black curve represent best fit line and in (c) black curve corresponds to KR.}\n\\label{avgdrm}\n\\end{figure*}\n\n\n\\subsection{Energy scales}\n\nIn this section, we show how singularity of the potential and energy scales associated\nwith the Floquet states determine the localization structure of these states.\nAs discussed in Sec. 2, $c_{rj} = \\langle \\phi_r | \\psi_j \\rangle$,\nwhere $|\\psi_j\\rangle$ is an eigenstate of $H_0$ with\nassociated energy $E_j$. Let $c_{max} = \\mbox{max}(|c_{r,1}|^2, |c_{r,2}|^2, \\dots |c_{r,N}|^2)$\nrepresent the largest overlap of $r$-th Floquet state with $|\\psi_j\\rangle$. The energy\nassociated with $|\\psi_j\\rangle$, and hence with $c_{max}$, is denoted by $E_{max}$.\nThen, an effective parameter $\\mu=E_{max}\/V_0$ can be identified to distinguish two regimes,\nnamely, (i) $\\mu \\le 1$ and (ii) $\\mu >> 1$. Physically, $\\mu \\le 1$ corresponds to\nFloquet states (in energy basis) mostly confined to the potential height $V_0$ and $\\mu >>1$\ncorresponds to those Floquet states that have significant overlap with states lying\nin the energy scales far greater than $V_0$.\n\nIn Fig. \\ref{avgdrm}(a), $\\ln \\langle c_{v} \\rangle$ is shown by averaging\nover all the computed Floquet states for $V_0=5000.0$ and $k=4.25$. 
As discussed\nearlier, the Floquet state profile is a combination of an initial exponential decay\nfollowed by a power-law decay. However, if the average is taken only over those states\nthat satisfy the condition $\\mu \\le 1$, then the resulting profile is shown\nin Fig. \\ref{avgdrm}(b).\nIn this case $k=4.25$ and all the Floquet states are confined\nto an energy scale well below $V_0$. Hence, this set of Floquet states can be\nexpected to ``feel'' the presence of the singularity in the potential. In this\nregime, we observe a power-law profile for the averaged Floquet states as displayed in Fig. \\ref{avgdrm}(b).\n \nHowever, if the states are averaged subject to the condition that $\\mu \\gg 1$, then\nthe singularity is not strongly felt by the Floquet states since the bandwidth of their energy\ndistribution is far greater than $V_0$. Effectively, at this\nenergy scale, the singularity becomes insignificant and hence we can expect the system to \nlie in the KR limit. Indeed, as seen in Fig. \\ref{avgdrm}(c), \n$\\langle c_{v} \\rangle$ is nearly identical to that of the KR (shown as the black curve in Fig. \\ref{avgdrm}(c)) at $k=4.25$.\n\nIn general, for the Hamiltonian in Eq. \\ref{ham1}, the localization property of a subset \nof Floquet states in an energy band $\\epsilon_{if}$ depends on the effectiveness of \nthe singularity for the spectral range $\\epsilon_{if}$ under consideration.\nIn a given energy band $\\epsilon_{if}$, if the singularity is effective, then \npower-law localization is obtained, and if the singularity is weak or absent, then \nexponential localization results for the states in $\\epsilon_{if}$. \nAs far as the localization of eigenstates of chaotic systems is concerned, it is known \nthat either all the states are exponentially localized (as in the KR) or power-law \nlocalized (as in systems with singular potentials) but, to the best of our\nknowledge, combinations of these localization profiles have not been reported before.\nIn the next section, we transform the Hamiltonian in Eq. \\ref{ham1} to that of a\ntight binding model and show that $V_0\/E$ controls the localization property of the eigenvectors.\n\n\n\\section{Tight binding model}\t\t\t\nThe dynamical localization in the quantum KR system was mapped to the \nAnderson model for electron transport in a one-dimensional crystalline lattice \\cite{fishman}.\nBy implication, the exponential decay profile of eigenstates in the Anderson model \ntranslates to an exponential profile (in the momentum representation) for the \nFloquet states of the quantum KR.\nFollowing this mapping technique, in this section, we map the Hamiltonian in Eq. \\ref{ham1}\nto a tight binding Hamiltonian.\nSince Eq. \\ref{ham1} represents a time periodic system, using the Floquet-Bloch \ntheorem, we can write the quasi-energy state as\n\\begin{equation}\n\\phi(\\theta,t)=e^{-i\\omega t} u(\\theta,t),\n\\label{eq1}\n\\end{equation}\nwhere $u(\\theta, t) = u(\\theta, t+1)$. In between two consecutive\nkicks, the Hamiltonian $H_0$ governs the evolution of the particle, which is given by\n\\begin{equation}\n\\phi_n^{-}(t+1)=e^{-iE_n} \\phi_n^{+}(t).\n\\label{eq2}\n\\end{equation}\nIn this, $\\phi_n^{-}(t+1)$ and $\\phi_n^{+}(t)$ are the quasi-energy states just\nbefore the $(t+1)$-th kick and just after the $t$-th kick, and $E_n$ is the $n$-th energy\nlevel of $H_0$. During this evolution, the state acquires an extra phase $e^{-iE_n}$.\nBy substituting Eq. \\ref{eq1} in Eq. 
\\ref{eq2} and using the periodicity of $u(\\theta, t)$, \nwe obtain\n\\begin{equation}\n\\begin{aligned}\nu_n^{-}(\\theta, t+1)=e^{i \\omega} e^{-i E_n} u_n^{+}(\\theta, t).\n\\end{aligned}\n\\label{eq3}\n\\end{equation} \n\t\t\t\nNow the quasi-energy state just after a $t$-th kick can be obtained using a map\n$\\phi^{+}(\\theta, t) = e^{-iV(\\theta)} \\phi^{-}(\\theta, t)$.\nBy using Eq. \\ref{eq1}, this can be written in-terms of $u(\\theta, t)$ as \n\\begin{equation}\nu^{+}(\\theta, t)=e^{-iV(\\theta)} u^{-}(\\theta, t).\n\\label{eq4}\n\\end{equation}\n\nNow, $e^{-iV(\\theta)}$ is expressed in terms of trigonometric function\n$W(\\theta)=-\\tan\\left(\\frac{V(\\theta)}{2} \\right)$ as\n\\begin{equation}\ne^{-iV(\\theta)}=\\frac{1+iW(\\theta)}{1-iW(\\theta)}.\n\\label{eq5}\n\\end{equation}\nThis is used in Eq. \\ref{eq4} to obtain\n\\begin{equation}\n\\frac{u^{+}(\\theta)}{1+i W(\\theta)}=\\bar{u}=\\frac{u^{-}(\\theta)}{1-i W(\\theta)},\n\\end{equation}\nwhere $\\bar{u}$ is defined as $\\bar{u}=[u^{+}(\\theta)+u^{-}(\\theta)]\/2$.\nUsing Eq. \\ref{eq4} and Eq. \\ref{eq5}, the evolution of the quasi-energy state after \none period is\n\\begin{equation}\nu^{+}(\\theta)=e^{-iV(\\theta)} e^{i(\\omega -E_n)} u^{+}(\\theta).\n\\end{equation}\nThis can be written as,\n\\begin{equation}\n(1-i W(\\theta)) \\bar{u}=e^{i(\\omega -H_0)} \\bar{u} (1+i W(\\theta)),\n\\label{and1}\n\\end{equation}\nwhere $\\bar{u}=\\frac{u^{+}}{1+i W(\\theta)}$. \nNow rearrangement of terms leads to\n\\begin{equation}\n\\tan\\left(\\frac{\\omega - H_0}{2} \\right) \\bar{u} + W(\\theta) \\bar{u} = 0.\n\\label{and2}\n\\end{equation}\n\nThe quasi-energy state can be expanded in the unperturbed basis as $|\\bar{u}\\rangle=\\sum_m u_m |\\psi_m\\rangle$\nwhere, $|\\psi_m\\rangle$ are the eigenstates of $H_0$ and $u_m$ is given by,\n\\begin{equation}\nu_m = \\int \\bar{u} \\psi_m(\\theta) d\\theta = \\int \\frac{1}{2} [u^{+}(\\theta) + u^{-}(\\theta)] \\psi_m(\\theta)~d\\theta.\n\\end{equation}\nTaking the inner product of Eq. \\ref{and2} with $|\\psi_m\\rangle$, we will formally obtain\n\\begin{equation}\nT_m u_m + \\sum_l W_{ml} u_l = 0.\n\\label{tight1}\n\\end{equation}\nIn this, $T_m=\\tan(\\frac{\\omega - E_m}{2})$ represents the on-site energy and $W_{ml}$ is the\nhopping strength for a particle to hop from $m$th site to $l$th site and can be written in the\nenergy basis as\n\\begin{equation}\n\\begin{aligned}\nW_{ml} &= \\langle \\psi_m| W(\\theta) | \\psi_l \\rangle \\\\\n&= \\int \\sum_{p,q} a^{*}_{mp} e^{-ip\\theta} W(\\theta) a_{lq} e^{i q \\theta} d\\theta \\\\\n&= \\sum_{p,q} a_{mp}^{*} a_{lq} \\int W(\\theta) e^{-i(p-q) \\theta} d\\theta \\\\\n&= \\sum_{p,q} a_{mp}^{*} a_{lq} W_{p-q},\n\\end{aligned}\n\\end{equation}\nwhere $W_n=\\frac{1}{2 \\pi}\\int_0^{2 \\pi} W(\\theta) e^{-i n \\theta} d\\theta$ is the\nFourier transform of $W(\\theta)$.\nThus, in energy basis, after simple manipulation, Eq. \\ref{tight1} takes the form\n\\begin{equation}\n\\left(T_m + \\sum_{p,q} a_{mp}^{*} a_{mq} W_{p-q} \\right) u_m + \\sum_{p,q,l\\neq m} a_{mp}^{*} a_{lq} W_{p-q} u_l = 0.\n\\label{tight2}\n\\end{equation}\nThis is the tight binding model version of the Hamiltonian in Eq. \\ref{ham1}. In this, \n$(T_m + \\sum_{p,q} a_{mp}^{*} a_{mq} W_{p-q})$ represents the diagonal term and \n$a_{mp}^{*} a_{lq} W_{p-q}$ is the off-diagonal term of the transfer matrix. \nIt does not appear straightforward to analytically prove power-law profile of\nFloquet states\nstarting from Eq. 
\\ref{tight2}, though it appears fair to expect that\nin this case the decay of Floquet state profile will be different from exponential form. As numerical results\nshow, we obtain power-law localization. Similar results has also been reported in \\cite{cohen}.\n\nHowever, using Eq. \\ref{tight2}, it is possible to make an inference about Floquet state profile\nin the limit $\\mu >> 1$.\nIn this case $E_n \\gg V_0$ and the singularity in the potential becomes\ninsignificant. Effectively, the system behaves as a free particle with energy\n$E_n=\\frac{\\hbar_s n^2}{2}$ and the wave-function of $H_0$ is just the momentum eigenstate,\n$|\\psi_n\\rangle = a_{nn} e^{i n \\theta}$, with $a_{mn}=\\delta_{mn}$.\nThis set of conditions, if applied to Eq. \\ref{tight2}, lead to \n\\begin{equation}\n(T_m + W_0 ) u_m + \\sum_l W_{m-l} u_l = 0.\n\\end{equation}\nThis is just standard KR Hamiltonian transformed to the 1D Anderson model \\cite{fishman}, for which\nall the eigenstates are known to display exponential profile. Hence,\nas seen in Fig. \\ref{avgdrm}(c) for $\\mu \\gg 1$, the observed localization is exponential in nature.\nThus, even in the presence of singular potentials, eigenstate localization is not\ngenerically of power-law form. We reiterate the main result of the paper that the association\nbetween power-law profile of eigenstates and singular potentials needs to take into account\nthe effectiveness of singularity in a given energy band.\n\t\t\t\n\\section{Spectral signatures}\n\n Based on the results presented in Fig. \\ref{avgdrm}, a novel scenario for the\nspectral signatures can be expected. As the regimes $\\mu < 1$ and $\\mu \\gg 1$ are\ntraversed, by considering Floquet states in a suitable energy band $\\epsilon_{if}$, the decay profile \nof Floquet state changes from power-law to exponential form. This would also imply that \na unique spectral signature for the nearest neighbour spacing distribution $P(s)$, such \nas either the Poisson or Wigner distributions, may not exist for the system as a whole. Quite\nunusually, $P(s)$ would depend on the energy band $\\epsilon_{if}$ being considered.\nThus, in the same system for a given choice of parameters, in the limit $\\mu \\gg 1$ (KR limit)\nwe expect Poisson distribution and in the limit $\\mu < 1$ (KRIW limit) we expect P(s) to be \ncloser to Wigner distribution or possibly, a Brody distribution \\cite{brody}. \n\nThe Floquet operator $\\widehat{U}$ being a unitary operator, all the\neigenvalues lie on a unit circle, $\\omega_i\\in[0,2\\pi)$. In this case, level\ndensity is constant $\\left(\\frac{N}{2\\pi} \\right)$ and hence the unfolding of Floquet\nlevels is not necessary. To compute the spacing distribution, we have treated the \neigenvalues of even and odd parity states separately. The nearest neighbour spacing \ndistribution reveals two\ndifferent forms; for $\\mu < 1$ level repulsion is observed in the form of Brody distribution\nand for $\\mu \\gg 1$ level clustering is seen in the form of Poisson distribution \\cite{supplement}.\nThe regime of $\\mu < 1$ corresponds to KRIW limit and power-law decay of Floquet\nstates (see Fig. 
\\ref{avgdrm}(b)) and is associated with level correlations that\nare intermediate between no correlation and random matrix type level repulsion.\nOn the other hand, the limit of $\\mu \\gg 1$ is KR limit and levels remain uncorrelated\ndue to occurrence of dynamical localization.\nIt must be emphasised that two different level spacing distributions and level correlations \nfor the same system with identical parameters is a novel feature not usually\nencountered in the context of chaotic quantum systems. This unusual spacing distribution\nreinforces the central result of this paper that the relation between potential singularity \nand eigenvector profile is conditioned by energy regime being considered.\n\n\\begin{figure}\n\\centering\n\\includegraphics*[width=3.0in]{fig5.eps}\n\\caption{(Color online) Participation ratio of the Floquet states as a function of $E_{max}$ \nfor $b=1.4\\pi$, $\\hbar_s = 1.0$, $V_0=5000.0$, $k=4.25$ (same set of parameters\nas in Fig. \\ref{avgdrm}). The vertical line is placed at $E_{max}=V_0$ ($\\mu=1$), the height \nof potential barriers. Horizontal black dotted line represents the mean participation ratio for $\\mu>1$.}\n\\label{PR}\n\\end{figure}\n\nThis dichotomy is reflected in the eigenvector statistics as well.\nThis is easily observed by studying the participation ratio (PR) of the Floquet states\nthat provides information about their localization properties. For an eigenstate that resides\nin the infinite dimensional Hilbert space, participation ratio is defined as\n\\begin{equation}\nP=\\sum_{i=1}^{\\infty} |\\psi_i|^4\n\\end{equation}\nwith the condition that $\\sum_i |\\psi_i|^2 = 1.0$, where $\\psi_i$ are components \nof a Floquet state. It is a measure of how many basis states\neffectively participate in making up the eigenstate. If $P \\approx 1$, then\nthe state is strongly localized and implies that one basis state contributes significantly \nto the Floquet state\nwhile the contribution from the rest of the basis are almost negligible. However, if \n$P \\sim \\frac{1}{N}$, then the Floquet state is of extended nature and all the \nbasis states make equal contribution on an average.\nFigure \\ref{PR} displays $P$ for all the $10035$ converged Floquet states as a function\nof energy $E_{max}$ for the identical choice of parameters as in Fig. \\ref{avgdrm}.\nQuite surprisingly, $P$ distinguishes the two regimes, $\\mu < 1$ and $\\mu \\gg 1$. The\nboundary between the two regimes is at $E_{max}=V_0$, the height of potential barriers.\nFor $\\mu \\gg 1$, exponential localization of Floquet states implies that\n$|\\phi\\rangle \\sim e^{-n\/l}$, where $l$ is the localization length. A remarkable result due\nto Izrailev \\cite{kr, chirikov} provides the relation, $l \\approx \\frac{k^2}{2 \\hbar_s^2}$. For our case, this estimate\ngives $l \\approx \\frac{k^2}{2 \\hbar_s^2} = 9.03$\nand this represents the effective number of basis states that goes in constructing\nthe Floquet states. As participation ratio is the inverse of the effective number\nof basis states, it is estimated to be $P \\approx \\frac{2 \\hbar_s^2}{k^2} = 0.11$. As seen in Fig. \\ref{PR},\nthis value closely matches the computed PR in the regime $\\mu \\gg 1$.\n\nFor $\\mu<1$, the mean $P$ is larger compared to that for $\\mu>1$ as shown in Fig. 
\\ref{PR}.\nThe reason can be traced back to the fact that in the case of infinite well $E_n \\sim n^2$\nand hence levels are spaced far apart.\nThis implies that the Floquet states for $\\mu<1$ has overlap only with a few unperturbed\nbasis states and this effectively increases the value of participation ratio for $\\mu<1$. \nUltimately, this results in a more compact localization.\n\nFinally, all the results discussed in this paper can be summarised in the form of a \n`phase diagram' displayed in Fig. \\ref{sumfig}. For $\\mu <1$, singularity in the\npotential is effective and hence power law profile of the Floquet states is obtained.\nThis regime is indicated by red color in Fig. \\ref{sumfig}.\nHowever, if $\\mu > 1$, singularity is not effectively `felt' by the particle and\nhence exponential profile is obtained. This regime is indicated by black color \nin the figure. Depending on the choice of parameters, regimes\nin which transition occurs between these two Floquet state profiles are also\nobserved. In Fig. \\ref{sumfig}, this is indicated by white color.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics*[width=3.0in]{fig6.eps}\n\\caption{(Color online) Summary of the results presented in this work. In all the cases,\nthe potential $V_{sq}$ is singular. Power-law decay profile\nof the Floquet is obtained whenever $\\mu < 1$ (shown as deep red). Stronger red color\nrepresents power-law profile over larger energy scales. For $\\mu > 1$, exponential localization\nis obtained. Darker grey represent dominant exponential profile over longer energy scales. \nWhite color represents regimes of transition between these profiles. They cannot be classified\nas power-law or exponential with definiteness. Black broken horizontal line corresponds to $V_0=1$} \n\\label{sumfig}\n\\end{figure}\n\n\n\\section{Conclusion}\nIn summary, we have studied a non-KAM system represented by the Hamiltonian in Eq. \\ref{ham1}, \nnamely a periodically kicked particle in a finite potential well of height $V_0$, to primarily understand \nthe nature of its Floquet states. This Hamiltonian can be thought of as representing two limiting\ncases, (i) the standard KR for $V_0=0$ and (ii) KR in infinite potential\nwell for $V_0 \\to \\infty$. It is well known that, for sufficiently large kick strengths, all the \nFloquet states of the KR are localized with an exponential profile \\cite{kr}. Further, it has been\nsuggested that for kicked systems with singularity in their potential, the Floquet states display\npower-law profile \\cite{liu}. We examine the Floquet states of the Hamiltonian in Eq. \\ref{ham1} in the light\nof these results. To understand its Floquet states, we map this problem to that of a tight binding\nmodel. \n\nThe results presented in this work show that the decay profile of the Floquet states\nis not determined by the potential singularity alone, but by the representative energy band \n$\\epsilon_{if}$ of a set of Floquet states relative to the potential height $V_0$. \nThus, we show that if $\\epsilon_{if} > V_0$ then the effect of singularity is weak for those\nset of Floquet states and they display exponential profile. This represents the KR\nlimit of the problem. On the other hand, the condition $V_0 > \\epsilon_{if}$ represents\nFloquet states strongly affected by the singular potential. In this case, we have shown that \nFloquet states have predominantly power-law profile. 
In the region intermediate between these\ntwo extremes, the Floquet states typically display an initial exponential decay followed by\nan asymptotic power-law decay. The presence of these two contrasting Floquet state profiles\nin the Hamiltonian in Eq. \\ref{ham1} leaves its signature in the spectral correlations as well.\nFor an identical set of parameters, depending on the reference energy scale $\\epsilon_{if}$,\nthe spacing distribution turns out to be a Poisson distribution ($\\epsilon_{if} > V_0$) or a \ngeneral Brody distribution ($V_0 > \\epsilon_{if}$). Typically, the spacing distribution \nis taken to characterize quantum chaos in a system and it is generally independent of the energy\nband being considered, provided it is in the semi-classical limit.\nQuite surprisingly, the semi-classical limit of the system in Eq. \\ref{ham1} lacks a unique \nspacing distribution, as it depends on the energy band $\\epsilon_{if}$ being considered.\nKR was experimentally realized in a test bed of cold atomic clouds in flashing \noptical lattices. Using more than one optical lattice, KR confined to a `potential well'\nhas also been realized. We believe that the results in this work are amenable to experiments\nin a suitable atom-optics set-up.\n\n\n\\section{Acknowledgement}\nS. P. would like to acknowledge the University Grants Commission for a research fellowship.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nConsider an affine algebraic curve \n$$\nC: F(x,y)=\\sum_{i+j \\leq d} a_{i,j} x^i y^j =0, a_{i,j} \\in {\\bf k},\n$$\ndefined over a field ${\\bf k},$ ${\\rm char}\\, {\\bf k} >0.$\nLet ${\\bf k}[C]$ and ${\\bf k}(C)$ be the algebras of polynomial and rational functions of the coefficients of the curve $C$.\nThose affine transformations of the plane which preserve the algebraic form of the equation $F(x,y)$ generate a group $G$ which is a subgroup of the group of plane affine transformations $A(2).$ A function $\\phi(a_{0,0},a_{1,0},\\ldots,a_{d,0}) \\in {\\bf k}(C) $ is called $G$-invariant if \n$\\phi(\\tilde{a}_{0,0},\\tilde{a}_{1,0},\\ldots,\\tilde{a}_{d,0})=\\phi(a_{0,0},a_{1,0},\\ldots,a_{d,0}),$ where $\\tilde{a}_{0,0},$ $\\tilde{a}_{1,0},\\ldots,$ $\\tilde{a}_{d,0}$ are defined by the condition \n$$F(gx,gy)=\\sum_{i+j \\leq d} a_{i,j} (gx)^i (gy)^j=\\sum_{i+j \\leq d} \\tilde{a}_{i,j} x^i y^j,$$ for all $g \\in G.$\nCurves $C$ and $C'$ are said to be $G$-isomorphic if they lie on the same $G$-orbit.\n\nThe algebras of all $G$-invariant polynomial and rational functions are denoted by ${\\bf k}[C]^G$ and by ${\\bf k}(C)^G,$ respectively. One way to find elements of the algebra ${\\bf k}[C]^G$ is a specification of invariants of the associated ternary form of degree $d.$ In fact, consider the vector space $T_d$ generated by the ternary forms $\\sum\\limits_{i+j\\leq d}b_{i,j}x^{d-(i+j)} y^i z^{j},$ $b_{i,j} \\in {\\bf k},$ endowed with the natural action of the group $GL_3:=GL_3({\\bf k}).$ Given a $GL_3$-invariant function $f \\in {\\bf k}(T_d)^{GL_3},$ the specification of $f$ given by $b_{i,j} \\mapsto a_{i,j},$ or $b_{i,j} \\mapsto 0$ in the case when $a_{i,j} \\notin {\\bf k}(C),$ gives us an element of ${\\bf k}(C)^G.$\n\nBut $SL_3$-invariants (and thus $GL_3$-invariants) of ternary forms are known only for the cases $d \\leq 4,$ see \\cite{Bro}. 
Furthermore, analyzing the Poincare series of the algebras of invariants of ternary forms, \\cite{B_X}, we see that these algebras are very complicated and that there is little hope of finding their minimal generating sets.\n\nSince ${\\bf k}(T_d)^{GL_3}$ coincides with ${\\bf k}(T_d)^{\\mathfrak{gl}_3},$ the algebra of invariants is an intersection of kernels of certain derivations of the algebra ${\\bf k}(T_d).$ Thus, in place of the specification of the coefficients of the form, we may use a \"specification\" of those derivations. \n\nFirst, consider a motivating example.\nLet \n$$\nC_3: y^2+a_0x^3+3a_1x^2+3a_2x+a_3=0, \n$$\nand let $G_0$ be the group generated by the transformations $x \\mapsto \\alpha \\tilde{x}+b.$\nIt is easy to show that the $j$-invariant of the curve $C_3$ equals (\\cite{Sylv86}, p. 46): \n$$\nj(C_3)=6912\\,{\\frac { \\left( a_{{0}}a_{{2}}-{a_{{1}}}^{2} \\right) ^{3}}{{a_{{0}}}^{2} \\left( 4\\,{a_{{1}}}^{3}a_{{3}}-6\\,a_{{3}}a_{{0}}a_{{1}}a_{{2}}-3\\,{a_{{1}}}^{2}{a_{{2}}}^{2}+{a_{{3}}}^{2}{a_{{0}}}^{2}+4\\,a_{{0}}{a_{{2}}}^{3} \\right) }}.\n$$\nUp to a constant factor, $j(C_3)$ equals $\\dfrac{S^3}{T},$ where $S$ and $T$ are the specifications of two $SL_3$-invariants of the ternary cubic, see \\cite{Stur}, p. 173.\n\nOn the other hand, a direct calculation shows that $\\mathcal{D} \\left(j(C_3)\\right)=0$ and ${\\mathcal{H}} \\left(j(C_3)\\right)=0,$ where $\\mathcal{D},$ ${\\mathcal{H}}$ denote the derivations of the algebra of rational functions ${\\bf k}(C_3)={\\bf k}(a_0,a_1,a_2,a_3)$ defined by\n$$\\mathcal{D}(a_i)=i a_{i-1}, \\quad \\mathcal{H}(a_i)=(3-i)a_{i}, \\quad i=0,1,2,3.$$\n\nFrom a computational point of view, the calculation of $\\ker \\mathcal{D} \\cap \\ker \\mathcal{H}$ is more efficient than the calculation of the algebra of invariants of the \nternary cubic. We will show below that \n$$ \n\\ker \\mathcal{D}_3 \\cap \\ker \\mathcal{H}_3={\\bf k}\\left( \\frac{\\left( a_{{0}}a_{{2}}-{a_{{1}}}^{2} \\right) ^{3}}{a_0^{4}}, \\frac{{a_{3}}\\,a_0^{2} + 2\\,{a_{{1}}}^{3} - 3\\,{a_{1}}\\,{a_{2}}\\,a_0}{a_0^2} \\right).\n$$\n
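The two identities $\\mathcal{D}(j(C_3))=0$ and $\\mathcal{H}(j(C_3))=0$ are easy to confirm with a computer algebra system; the following is a minimal sketch (assuming SymPy, with variable and function names of our own choosing):\n\\begin{verbatim}\nfrom sympy import symbols, diff, simplify\n\na0, a1, a2, a3 = symbols('a0 a1 a2 a3')\na = [a0, a1, a2, a3]\n\nj = 6912*(a0*a2 - a1**2)**3 \/ (a0**2*(4*a1**3*a3 - 6*a3*a0*a1*a2\n        - 3*a1**2*a2**2 + a3**2*a0**2 + 4*a0*a2**3))\n\n# extend D(a_i) = i*a_{i-1} and H(a_i) = (3-i)*a_i to rational functions\nD = lambda f: sum(i*a[i - 1]*diff(f, a[i]) for i in range(1, 4))\nH = lambda f: sum((3 - i)*a[i]*diff(f, a[i]) for i in range(4))\n\nprint(simplify(D(j)), simplify(H(j)))   # prints: 0 0\n\\end{verbatim}\n\n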
In this note we prove that for an arbitrary algebraic curve $C$ and the group $G$ preserving its form there exist derivations $D_i,$ $i\\leq 6,$ of ${\\bf k}(C)$ such that ${\\bf k} (C)^G=\\bigcap\\limits_i \\ker D_i$ (Theorem 3.2).\n\nIn Section 1 we give a full description of the algebras of polynomial and rational invariants for the curve $y^2=f(x)$. In Section 2 we give an explicit form for the derivations arising from the action of the Lie algebra $\\mathfrak{gl}_3$ on ${\\bf k} (C)$ and give a specification of this action for a curve of the form $y^2+g(x)y=f(x).$ \n\n\n\\section{Invariants of $y^2=f(x).$}\n\nIn some simple cases we may obtain the defining derivations directly.\n\n\nConsider the curve\n$$\nC_d: y^2=a_0x^d+da_1 x^{d-1}+\\cdots +a_d =\\sum_{i=0}^d a_i {d \\choose i} x^{d-i},\n$$\nand let $G$ be the group generated by the following transformations \n$$\nx=\\alpha \\tilde{x}+b, y=\\tilde{y}.\n$$\nThe algebra ${\\bf k}(C_d)^G$ consists of functions $\\phi(a_0,a_1,\\ldots,a_d)$ that have the invariance property\n$$\n\\phi(\\tilde{a}_0,\\tilde{a}_1,\\ldots,\\tilde{a}_d)=\\phi(a_0,a_1,\\ldots,a_d).\n$$\nHere $\\tilde{a}_i$ denote the coefficients of the curve $\\tilde{C}_d:$\n$$\n\\tilde{C}_d:\\sum_{i=0}^d a_i {d \\choose i} (\\alpha \\tilde{x}+b)^{d-i}=\\sum_{i=0}^d \\tilde{a}_i {d \\choose i} \\tilde{x}^{d-i}.\n$$\nThe coefficients $\\tilde{a}_i$ are given by the formulas\n\\begin{gather}\\label{1}\n\\tilde{a}_i=\\alpha^{d-i}\\sum_{k=0}^{i} {i \\choose k} b^k a_{i{-}k}.\n\\end{gather}\nThe following statement holds.\n\\begin{te} We have \n$$\n{\\bf k}(C_d)^G=\\ker \\mathcal{D}_d \\cap \\ker \\mathcal{E}_d,\n$$\nwhere $\\mathcal{D}_d,$ $\\mathcal{E}_d$ denote the following derivations of the algebra ${\\bf k}(C_d):$\n$$\n\\mathcal{D}_d(a_i)=i a_{i-1}, \\quad \\mathcal{E}_d(a_i)=(d-i) a_i.\n$$\n\\end{te}\n\nRecall that a linear map $D:{\\bf k}(C) \\to {\\bf k}(C)$ is called a derivation of the algebra ${\\bf k}(C)$ if $D(f g)=D(f)g+f D(g),$ for all $f,g \\in {\\bf k}(C).$ The subalgebra $\\ker D := \\{ f \\in {\\bf k}(C) \\mid D(f)=0 \\}$ is called the kernel of the derivation $D.$ The above derivation $\\mathcal{D}_d$ is called the basic Weitzenb\\\"ock derivation.\n\\begin{proof}\nActing in the classical manner, we differentiate with respect to $b$ both sides of the equality\n$$\\phi(\\tilde{a}_0,\\tilde{a}_1,\\ldots,\\tilde{a}_d)= \\phi(a_0,a_1,\\ldots,a_d),$$\nand obtain in this way \n$$\n\\frac{\\partial \\phi(\\tilde{a}_0,\\tilde{a}_1,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_0} \\dfrac{\\partial \\tilde{a}_0}{\\partial b}+\\frac{\\partial \\phi(\\tilde{a}_0,\\tilde{a}_1,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_1}\\dfrac{\\partial \\tilde{a}_1}{\\partial b}+\\cdots+\\frac{\\partial \\phi(\\tilde{a}_0,\\tilde{a}_1,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_d}\\dfrac{\\partial \\tilde{a}_d}{\\partial b}=0.\n$$\nSubstituting $\\alpha=1,$ $b=0$ and taking into account that $\\dfrac{\\partial \\tilde{a}_i}{\\partial b}\\Bigl |_{b=0}=i a_{i-1},$ we get\n$$\n\\tilde{a}_0 \\frac{\\partial \\phi(\\tilde{a}_0,\\tilde{a}_1,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_1} +2\\tilde{a}_1 \\frac{\\partial \\phi(\\tilde{a}_0,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_2}+\\cdots+ d \\tilde{a}_{d-1} \\frac{\\partial \\phi(\\tilde{a}_0,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_d} =0.\n$$\nSince the function $\\phi(\\tilde{a}_0,\\ldots,\\tilde{a}_d)$ depends on the variables $\\tilde{a}_i$ in exactly the same way as the function $\\phi(a_0,a_1,\\ldots,a_d)$ depends on the $a_i$, it follows that $\\phi(a_0,a_1,\\ldots,a_d)$ satisfies the differential equation\n\\begin{gather*}\\label{D1}\na_0 \\frac{\\partial \\phi({a}_0,a_1,\\ldots,{a}_d)}{\\partial {a}_1} +2a_1 \\frac{\\partial \\phi({a}_0,a_1,\\ldots,{a}_d)}{\\partial {a}_2}+\\cdots+d a_{d-1} \\frac{\\partial \\phi({a}_0,a_1,\\ldots,{a}_d)}{\\partial {a}_d} =0.\n\\end{gather*}\nThus, $\\mathcal{D}_d(\\phi)=0.$\nNow we differentiate with respect to $\\alpha$ both sides of the same equality $$\\phi(\\tilde{a}_0,\\tilde{a}_1,\\ldots,\\tilde{a}_d)= \\phi(a_0,a_1,\\ldots,a_d),$$\nwhich gives\n$$\n\\frac{\\partial \\phi(\\tilde{a}_0,\\tilde{a}_1,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_0} \\dfrac{\\partial \\tilde{a}_0}{\\partial \\alpha}+\\frac{\\partial \\phi(\\tilde{a}_0,\\tilde{a}_1,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_1}\\dfrac{\\partial \\tilde{a}_1}{\\partial \\alpha}+\\cdots+\\frac{\\partial \\phi(\\tilde{a}_0,\\tilde{a}_1,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_d}\\dfrac{\\partial \\tilde{a}_d}{\\partial \\alpha}=0.\n$$\nSubstituting $\\alpha=1,$ $b=0$ and taking into account that $\\dfrac{\\partial \\tilde{a}_i}{\\partial \\alpha}\\Bigl |_{ \\alpha=1,\\, b=0}=(d-i) a_{i},$ we get\n$$\nd\\,\\tilde{a}_0 \\frac{\\partial \\phi(\\tilde{a}_0,\\tilde{a}_1,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_0} +(d-1)\\tilde{a}_1 \\frac{\\partial \\phi(\\tilde{a}_0,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_1}+\\cdots + \\tilde{a}_{d-1} \\frac{\\partial \\phi(\\tilde{a}_0,\\ldots,\\tilde{a}_d)}{\\partial \\tilde{a}_{d-1}} =0.\n$$\nThis implies that \n$\\mathcal{E}_d(\\phi({a}_0,a_1,\\ldots,{a}_d))=0.$\n\\end{proof}\n\nThe derivation $\\mathcal{E}_d$ sends the monomial $a_0^{m_0} a_1^{m_1}\\cdots a_d^{m_d}$ to the term $$(m_0 d+m_1 (d-1)+\\cdots+ m_{d-1}) a_0^{m_0} a_1^{m_1}\\cdots a_d^{m_d}.$$ Let the number $\\omega \\left( a_0^{m_0} a_1^{m_1}\\cdots a_d^{m_d} \\right):= m_0 d+m_1 (d-1)+\\cdots+ m_{d-1}$ be called the weight of the monomial $a_0^{m_0} a_1^{m_1}\\cdots a_d^{m_d}.$ In particular $\\omega(a_i)=d-i.$ \n\nA homogeneous polynomial $f \\in {\\bf k}[C]$ is called isobaric if all its monomials have equal weights. The weight $\\omega(f)$ of an isobaric polynomial $f$ is defined as the common weight of its monomials. 
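\n\nBoth derivations are easy to implement in a computer algebra system, so candidate invariants can be tested directly. The following minimal sketch (assuming SymPy; the function names are ours) checks, for $d=5$, the weight-zero ratio $\\frac{\\left(a_0a_2-a_1^2\\right)^d}{a_0^{2(d-1)}}$, which will reappear among the generators listed below:\n\\begin{verbatim}\nfrom sympy import symbols, diff, simplify\n\ndef derivations(d):\n    a = symbols('a0:%d' % (d + 1))       # the coefficients a_0, ..., a_d\n    D = lambda f: sum(i*a[i - 1]*diff(f, a[i]) for i in range(1, d + 1))\n    E = lambda f: sum((d - i)*a[i]*diff(f, a[i]) for i in range(d + 1))\n    return a, D, E\n\nd = 5\na, D, E = derivations(d)\ng = (a[0]*a[2] - a[1]**2)**d \/ a[0]**(2*(d - 1))   # a ratio of weight zero\nprint(simplify(D(g)), simplify(E(g)))              # prints: 0 0\n\\end{verbatim}\n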
Since $\\omega(f)>0,$ then ${\\bf k}[C]^{\\mathcal{E}_d}=0.$ It implies that ${\\bf k}[C]^{G}=0.$ \n \n If $f,g$ are two isobaric polynomials then \n$$\n\\mathcal{E}_d\\left(\\frac{f}{g}\\right)=(\\omega(f)-\\omega(g))\\frac{f}{g}.\n$$\nTherefore the algebra $k(C)^{\\mathcal{E}_d}$ is generated by rational fractions which both denominator and numerator has equal weight.\n\n\nThe kernel of the derivation $\\mathcal{D}_d$ also is well-known, see \\cite{Now}, \\cite{Aut}, and \n$$\n\\ker \\mathcal{D}_d={\\bf k}(a_0,z_2,\\ldots,z_d),\n$$\nwhere \n$$\nz_i:= \\sum_{k=0}^{i-2} (-1)^k {i \\choose k} a_{i-k} a_1^k a_0^{i-k-1} +(i-1)(-1)^{i+1} a_1^i, i=2,\\ldots,d.\n$$\nIn particular, for $d=5$, we get\n$$\n\\begin{array}{l}\nz_2={a_{2}}\\,a_0 - {a_{1}}^{2}\n\\\\\nz_3={a_{3}}\\,a_0^{2} + 2\\,{a_{1}}^{3} - 3\\,{a_{1}}\\,{a_{2}}\\,a_0\n\\\\\nz_4={a_{4}}\\,a_0^{3} - 3\\,{a_{1}}^{4} + 6\\,{a_{1}}^{2}\\,{a_{2}}\\,a_0 - 4\n\\,{a_{1}}\\,{a_{3}}\\,a_0^{2}\n\\\\\nz_5={a_{5}}\\,a_0^{4} + 4\\,{a_{1}}^{5} - 10\\,{a_{1}}^{3}\\,{a_{2}}\\,a_0 + \n10\\,{a_{1}}^{2}\\,{a_{3}}\\,a_0^{2} - 5\\,{a_{1}}\\,{a_{4}}\\,a_0^{3}\n.\n\\end{array}\n$$\nIt is easy to see that $\\omega(z_i)=i (n-1).$ The following element $\\dfrac{z_i^d}{a_0^{i(d-1)}}$ has the zero weight for any $i.$ Therefore the statement holds:\n\\begin{te}\n$$\n{\\bf k}(C_d)^G={\\bf k}\\left( \\frac{z_2^d}{a_0^{2(d-1)}},\\frac{z_3^d}{a_0^{3(d-1)}}, \\cdots, \\frac{z_d^d}{a_0^{d(d-1)}}\\right).\n$$\n\\end{te}\nFor the curve \n$$\nC_d^0: y^2=x^d+da_1 x^{d-1}+\\cdots +a_d =x^d+\\sum_{i=1}^d a_d {d \\choose i} x^{d-i}.\n$$\nand for the group $G_0$ generated by translations $x=\\tilde{x}+b,$ the algebra of invariants becomes simpler:\n$${\\bf k}\\left({C^0}_d\\right)^{G_0}={\\bf k}(z_2,z_3,\\ldots,z_d).$$\n \n\\begin{te}\n$(i)$ For arbitrary set of $d-1$ numbers $j_2,$$j_3,\\ldots,j_d$ there exists a curve $C$ such that \n$z_i(C)=j_i.$ \n\n$(ii)$ For two curves $C$ and $C'$ the equalities $z_i(C)=z_i(C')$ hold for $ 2 \\leq i \\leq d,$ if and only if these curves are $G_0$-isomorphic.\n\\end{te}\n\\begin{proof}\n$(i).$ Consider the system of equations \n$$\n\\left\\{\n\\begin{array}{l}\n{a_{2}} - {a_{1}}^{2}=j_2\n\\\\\n{a_{3}} + 2\\,{a_{1}}^{3} - 3\\,{a_{1}}\\,{a_{2}}=j_3\n\\\\\n{a_{4}} - 3\\,{a_{1}}^{4} + 6\\,{a_{1}}^{2}\\,{a_{2}} - 4\n\\,{a_{1}}\\,{a_{3}}=j_4\n\\\\\n\\ldots \\\\\n\\displaystyle a_d+\\sum_{k=1}^{d-2} (-1)^k {d \\choose k} a_{d-k} a_1^k +(d-1)(-1)^{d+1} a_1^d=j_d\n\\end{array}\n\\right.\n$$\nBy solving it we obtain \n\\begin{gather}\\label{2}\na_n=j_n+\\sum_{i=2}^n {n \\choose i} a_1^k j_{n-k}\n\\end{gather}\nPut $a_1=0$ we get $a_n=j_n$,\\, i.e. the curve \n$$\nC: y^2=x^d+{d \\choose 2} j_2 x^{d-2}+\\cdots +j_d,\n$$\nis desired one.\n\n$(ii).$ We may assume, without loss of generality, that the curve $C$ has the form\n$$\nC: y^2=x^d+{d \\choose 2} j_2 x^{d-2}+\\cdots +j_d.\n$$\nSuppose that for a curve \n$$\nC': y^2=x^d+da_1 x^{d-1}+\\cdots +a_d =x^d+\\sum_{i=1}^d a_d {d \\choose i} x^{d-i}.\n$$\nholds $z_i(C')=z_i(C)=j_i.$\nComparing (\\ref{2}) with (\\ref{1}) we deduce that the curve $C'$ is obtained from the curve $C$ by the translation $x+a_1.$ \n\\end{proof}\n\n\\section{General case and invariants of $y^2+g(x)y=f(x)$}\n\nConsider the vector ${\\bf k}$-space $T_d$ of ternary form of degree $d:$\n$$ \nu(x,y,z)=\\sum_{i+j\\leq d} \\, \\frac{d!}{i! j! 
(d{-}(i+j))!}a_{i, j}\\, x^{d-(i+j)} y^i z^j,\n$$\nwhere $a_{i, j} \\in {\\bf k}.$\nLet us identify in the natural way the algebra of rational function ${\\bf k}(T_d)$ on the vector space $T_d$ with the algebra of polynomials of the $\\displaystyle \\frac{1}{2} (d+1)(d+2)$ variables. The natural action of the group $GL_3$ on $T_d$ induced the action of $GL_3$ (and the Lie algebra $\\mathfrak{gl_{3}}$) on ${\\bf k}[T_d]$. \nThe corresponding algebra of invariants ${\\bf k}(T_d)^{GL_3}={\\bf k}(T_d)^{\\mathfrak{gl_{3}}}$ is called the algebra of $GL_3$-invariants (or absolute invariants) of ternary form of degree $d.$ The following statement holds:\n\\begin{te}\n$$ {\\bf k}(T_d)^{GL_3}=\\ker D_1 \\cap \\ker D_2 \\cap \\ker \\hat{D}_1 \\cap \\ker \\hat{D}_2 \\cap \\ker E_1 \\cap \\ker E_2 \\cap \\ker E_3$$\nwhere\n$$\n\\begin{array}{ll}\n\\displaystyle \\! \\! D_1(a_{i,j})=i\\,a_{i{-}1,j}, & \\! \\! D_2(a_{i,j})=j\\,a_{i{+}1,j{-}1},\\\\\n\\displaystyle \\! \\! \\hat D_1(a_{i,j})=(n-(i+j))\\,a_{i{+}1,j}, & \\! \\! \\hat D_2(a_{i,j})=i\\,a_{i{-}1,j{+}1},\\\\\n \\hat{D}_3(a_{i,j})=(n-(i+j)) a_{i,j+1}, & \\! \\! D_3(a_{i,j})=j a_{i,j-1}, \\\\\n\\! \\! E_1(a_{i,j})=(n-(2i+j)) a_{i,j}, & \\! \\! E_2(a_{i,j})=i a_{i,j},\n\\end{array}\n$$\n$$E_3(a_{i,j})=j a_{i,j}. $$\n\\end{te}\n\\begin{proof}\nThe Lie algebra $\\mathfrak{gl}_3$ acts on the vector space of ternary form $T_d$ by derivations,namely \n$$\n\\begin{array}{ll}\n\\displaystyle D_1=-y \\frac{\\partial}{\\partial x}, & \\displaystyle D_2=-z \\frac{\\partial}{\\partial y}, \\\\\nE_1=-x\\frac{\\partial}{\\partial x}, &E_2= -y\\frac{\\partial}{\\partial y}, \\\\\n\\displaystyle \\hat D_1=-x \\frac{\\partial}{\\partial y}, & \\displaystyle \\hat D_2=-y \\frac{\\partial}{\\partial z}, \\\\\n \\displaystyle D_3=-z \\frac{\\partial}{\\partial x}, & E_3= -\\displaystyle z \\frac{\\partial}{\\partial z},\n\\end{array}\n$$\n$$\n\\begin{array}{lll}\n \\displaystyle & \\displaystyle \\hat D_3=-x \\frac{\\partial}{\\partial z}. &\n\\end{array}\n$$\nTo extend the actions $\\mathfrak{gl}_3$ to the algebra ${\\bf k}(T_d)$ we use the well-known fact of classical invariant theory that the generic form $u(x,y,z)$ is a covariant. It means that any of above derivation (considered as derivation of ${\\bf k}[T_d,x,y,z]$) must kill the form. \nIn particular, for the derivation $D_1$ we have\n\\begin{gather*}\nD_1(u(x,y,z))=\n\\sum_{i+j\\leq n} \\, \\frac{n!}{i! j! (n{-}(i+j))!}(D_1(a_{i,j}) x^{n-(i+j)} y^i z^j+\n+a_{i,j} D_1(x^{n-(i+j)} y^i z^j))=\\\\\n=D_1(a_{0,1}) x^{n-1} x_3+\\cdots+D_1(a_{0,n}) \\frac{1}{n!} z^n+ \n+\\sum_{\\begin{array}{c} \\mbox{\\small \\it i+j}\\leq n \\\\ i>0 \\end{array} } \\Bigl( D_1(a_{i,j})-i\\,a_{i{-}1,j}\\Bigr) x^{d-(i+j)} y^i z^j.\n\\end{gather*}\nIt following that the equality $D_1(u(x,y,z))=0$ is possible only if all coefficients are equal to zero. Therefore we get $D_1(a_{0,j})=0$ for all $0\\leq j \\leq n,$ and $D_1(a_{i,j})=i\\,a_{i{-}1,j} $ as required. In an exactly similar way we will obtain actions on ${\\bf k}(T_d)$ for the rest derivations. 
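\nFor small $d$ this action is easy to check symbolically. The following minimal sketch (assuming SymPy; the helper names are ours, and it is only an illustration, not part of the argument) verifies for $d=2$ that the coefficient action $D_1(a_{i,j})=i\\,a_{i-1,j},$ combined with the vector field $-y \\frac{\\partial}{\\partial x},$ indeed kills the generic form $u(x,y,z)$:\n\\begin{verbatim}\nfrom sympy import symbols, diff, factorial, expand\n\nx, y, z = symbols('x y z')\nd = 2\na = {(i, j): symbols('a_%d_%d' % (i, j))\n     for i in range(d + 1) for j in range(d + 1) if i + j <= d}\ncoef = lambda i, j: factorial(d)\/(factorial(i)*factorial(j)*factorial(d - i - j))\n\nu = sum(coef(i, j)*a[i, j]*x**(d - i - j)*y**i*z**j for (i, j) in a)\n\n# the action on the coefficients, D_1(a_{i,j}) = i*a_{i-1,j} ...\ncoeff_part = sum(coef(i, j)*(i*a.get((i - 1, j), 0))*x**(d - i - j)*y**i*z**j\n                 for (i, j) in a)\n# ... plus the vector field -y d\/dx acting on the variables\nprint(expand(coeff_part + (-y)*diff(u, x)))   # prints 0\n\\end{verbatim}\n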
\n\\end{proof}\n\n\\noindent\n{\\bf Corollary.} ${\\rm tr \\,deg}_k\\,\\,{\\bf k}(T_{d})^{GL_3} \\leq \\displaystyle \\frac{1}{2} (d+1)(d+2)-7.$\n\\begin{proof}\nIt follows from the fact that for a nonzero derivation $D$ of polynomial ring $R$ of $n$ variables ${\\rm tr \\,deg} \\ker D \\leq n-1,$ see \\cite{Now}, Proposition 7.1.1.\n\\end{proof}\n\nAn obvious consequence of the theorem is the following: \n\n\\begin{te} For affine algebraic curves $d$\n\n$$\nC: \\sum_{i+j=d} \\frac{d!}{i! j! (d-(i+j))!} a_{i,j} x^{d-(i+j)}y^i=0, a_{d,0}=0, a_{i,j} \\in {\\bf k}, {\\rm char} {\\bf k} >0,\n$$\nthe following holds:\n\n(i) if $a_{d,0} \\neq 0$ and $a_{0,0} \\neq 0$ then\n$$\n{\\bf k}(C)^G=\\ker D_1 \\cap \\ker D_2 \\cap \\ker \\hat{D}_1 \\cap \\ker E_1 \\cap \\ker E_2 , \n$$\n\n(ii) if $a_{d,0} =0 0$ $a_{0,0} \\neq 0$ then\n\n$$\n{\\bf k}(C)^G=\\ker D_2 \\cap \\ker D_3 \\cap \\ker \\hat{D}_1 \\cap \\ker E_1 \\cap \\ker E_2 , \n$$\nwhere\n\\begin{align*}\nD_1(a_{i,j})&=i a_{i-1,j},D_2(a_{i,j})=j a_{i+1,j-1}, \\hat{D}_1(a_{i,j})=(d-(i+j))a_{i+1,j},\\\\\n&\\\\\nE_1(a_{i,j})&=(d-(i+j)) a_{i,j}, E_2(a_{i,j})=i a_{i,j},D_3(a_{i,j})=j a_{i,j-1}.\n\\end{align*}\n\\end{te}\n\\begin{proof}\n$(i)$ Consider the associate projective plane curve in $\\mathbb{P}^2:$\n$$\n\\sum_{i+j=d} \\frac{d!}{i! j! (d-(i+j))!} a_{i,j} X^{d-(i+j)}Y^iZ^j=0, a_{d,0} \\neq 0, a_{0,0} \\neq 0. \n$$\nThe transformations $X \\mapsto \\alpha X+ \\beta Y+ b Z,$ $Y \\mapsto \\gamma X+ \\delta Y+a Z,$ $Z \\mapsto Z$ generate of a subgroup of $GL_3$ which preserve the algebraic form of the equation of the curve. Therefore the algebra of invariants of the curve (and corresponding affine curve) coincides with the intersection of the kernels of the five derivations $ D_1, D_2, \\hat{D}_1, E_1, E_2,$ ($[D_1,D_2]=D_3$).\n\n$(ii).$ For this case the transformations are as follows: $X \\mapsto \\alpha X+ b Z,$ $Y \\mapsto \\gamma X+ \\delta Y+a Z,$ $Z \\mapsto Z $ and we have to exclude the derivation $D_1.$\n\\end{proof}\n\nFor the curve \n\\begin{gather*}\n\\mathcal{C'}_{d}:\\frac{d(d-1)}{2} a_{2,d-2} y^2+\\sum_{i=0}^{d-1}\\frac{ d!}{i!(d-(1+i))!}a_{1,i}x^{d-(i+1)}y+\\sum _{i=0}^{d}{\\frac {d!\\,}{i!\\, \\left( d-i\n \\right) !}a_{{0,i}}{x}^{d-i}}=0,\n\\end{gather*}\nand for the group $G$ generated by $x \\mapsto \\alpha x+a, y \\mapsto \\beta y+b$ we have \n$$\n{\\bf k}(\\mathcal{C'}_{d})^G=\\ker D_2 \\cap \\ker D_3 \\cap \\ker \\hat{D}_1 \\cap \\ker E_1 \\cap \\ker E_2 , \n$$\nand ${\\rm tr \\,deg}_k\\,\\,{\\bf k}(\\mathcal{C'}_{d})^G \\leq 2d-3.$\n\n{\\bf Example.} Let us calculate the invariants of curve $C'_5$\n\\begin{gather*}\n10\\,{y}^{2}+(5\\,a_{{1,0}}{x}^{4}+20\\,a_{{1,1}}{x}^{3}+30\\,a_{{1,2}}{x\n}^{2}+20\\,a_{{1,3}}x+5\\,a_{{1,4}})y=\\\\\n={x}^{5}+5\\,a_{{0,1}}{x}^{4}+10\\,a_{{0,2}}{x}^{3}+10\\,a_{{0,3}}{x}^{2}+5\\,a_{{0,4}}x+a_{{0,5}},\n\\end{gather*}\nwith respect to the group $G_0$ generated by the translations $x=\\tilde{x}+a, y=\\tilde{y}+b.$ Theorem 3.2 implies that $C_5^G=\\ker D_3 \\cap \\ker D_2,$ where the derivations $D_2, D_3$ act by\n\\begin{align*}\n &D_{{2}} \\left( a_{{1,1}} \\right) =0,D_{{2}} \\left( a_{{1,0}} \\right) =0,D_{{2}} \\left( a_{{0,1}} \\right) =-a_{{1,0}},D_{{2}}\n \\left( a_{{1,2}} \\right) =0,D_{{2}} \\left( a_{{0,2}} \\right) =-2\\,a_{\n{1,1}},\\\\&D_{{2}} \\left( a_{{0,3}} \\right) =-3\\,a_{{1,2}},D_{{2}} \\left( \na_{{0,5}} \\right) =-5\\,a_{{1,4}},D_{{1}} \\left( a_{{0,4}} \\right) =-4\n\\,a_{{1,3}},D_{{2}} \\left( a_{{1,3}} \\right) =0,D_{{2}} \\left( a_{{1,4\n}} \\right) 
=4.\n\\end{align*}\nand \n\\begin{align*}\n&D_{{3}} \\left( a_{{1,2}} \\right) =2\\,a_{{1,1}},D_{{3}} \\left( a_{{1,1}} \\right) =a_{{1,0}},D_{{2}} \\left( a_{{1,0}} \\right) \n=0,D_{{2}} \\left( a_{{0,2}} \\right) =2\\,a_{{0,1}},D_{{3}} \\left( a_{{0\n,1}} \\right) =1,\\\\\n&D_{{2}} \\left( a_{{0,3}} \\right) =3\\,a_{{0,2}},D_{{3}}\n \\left( a_{{0,5}} \\right) =5\\,a_{{0,4}},D_{{2}} \\left( a_{{0,4}}\n \\right) =4\\,a_{{0,3}},D_{{2}} \\left( a_{{1,3}} \\right) =3\\,a_{{1,2}},\nD_{{3}} \\left( a_{{1,4}} \\right) =4\\,a_{{1,3}}.\n\\end{align*}\nBy using the Maple command {\\tt pdsolve()} we obtain that \n$$\n{\\bf k}(C'_5)^{G_0}={\\bf k}(g_1,g_2,g_3,g_4,g_5,g_6,g_7),{\\bf k}[C'_5]^{G_0}={\\bf k}[g_1,g_2,g_3,g_4,g_5,g_6,g_7],\n$$\nwhere\n\\begin{gather*}\ng_1=a_{1,0},\\\\\ng_2={a_{{1,0}}}^{2}a_{{0,2}}+{a_{{1,1}}}^{2}-2\\,a_{{1,1}}a_{{1,0}}a_{{0,1}},\\\\\ng_3=a_{{1,2}}-2\\,a_{{1,1}}a_{{0,1}}+a_{{1,0}}a_{{0,2}},\\\\\ng_4=6\\,{a_{{1,1}}}^{2}a_{{0,1}}a_{{1,0}}-4\\,{a_{{1,1}}}^{3}-3\\,{a_{{1,0}}}\n^{2}a_{{0,2}}a_{{1,1}}-3\\,a_{{1,2}}{a_{{1,0}}}^{2}a_{{0,1}}+3\\,a_{{1,0\n}}a_{{1,1}}a_{{1,2}}+a_{{0,3}}{a_{{1,0}}}^{3},\\\\\ng_5=2\\,{a_{{1,1}}}^{3}-3\\,a_{{1,0}}a_{{1,1}}a_{{1,2}}+a_{{1,3}}{a_{{1,0}}}\n^{2},\\\\\ng_6=3\\,{a_{{1,0}}}^{4}{a_{{0,2}}}^{2}+{a_{{1,0}}}^{4}a_{{0,4}}-12\\,{a_{{1,0\n}}}^{3}a_{{1,1}}a_{{0,1}}a_{{0,2}}-4\\,{a_{{1,0}}}^{3}a_{{1,1}}a_{{0,3}\n}-4\\,{a_{{1,0}}}^{3}a_{{1,3}}a_{{0,1}}-\\\\-12\\,{a_{{1,0}}}^{3}a_{{0,2}}a_{\n{1,2}}+12\\,{a_{{1,1}}}^{2}{a_{{1,0}}}^{2}{a_{{0,1}}}^{2}+24\\,{a_{{1,1}\n}}^{2}{a_{{1,0}}}^{2}a_{{0,2}}+4\\,a_{{1,1}}{a_{{1,0}}}^{2}a_{{1,3}}+36\n\\,a_{{1,1}}{a_{{1,0}}}^{2}a_{{0,1}}a_{{1,2}}-\\\\-24\\,a_{{1,0}}a_{{1,2}}{a_\n{{1,1}}}^{2}-48\\,a_{{1,0}}{a_{{1,1}}}^{3}a_{{0,1}}+24\\,{a_{{1,1}}}^{4},\\\\\ng_7={a_{{1,0}}}^{4}{a_{{0,2}}}^{2}+{a_{{1,0}}}^{3}a_{{1,4}}-4\\,{a_{{1,0}}}\n^{3}a_{{1,1}}a_{{0,1}}a_{{0,2}}+6\\,{a_{{1,0}}}^{3}a_{{0,2}}a_{{1,2}}+4\n\\,{a_{{1,1}}}^{2}{a_{{1,0}}}^{2}{a_{{0,1}}}^{2}-\\\\-4\\,{a_{{1,1}}}^{2}{a_{\n{1,0}}}^{2}a_{{0,2}}-12\\,a_{{1,1}}{a_{{1,0}}}^{2}a_{{0,1}}a_{{1,2}}-4\n\\,a_{{1,1}}{a_{{1,0}}}^{2}a_{{1,3}}+4\\,{a_{{1,0}}}^{2}a_{{0,1}}-4\\,a_{\n{1,1}}a_{{1,0}}+\\\\+12\\,a_{{1,0}}a_{{1,2}}{a_{{1,1}}}^{2}+8\\,a_{{1,0}}{a_{\n{1,1}}}^{3}a_{{0,1}}-8\\,{a_{{1,1}}}^{4}\n\\end{gather*}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}