\section{Experimental Evaluation}
\label{sec:experiments}

The reproduction focused on the Helmholtz benchmarks reported in the original paper. All runs used TensorFlow~2.20 on an Intel Core i5 CPU with an NVIDIA RTX~4070~SUPER GPU and 24~GB of system memory.

\subsection{One-Dimensional Helmholtz Equation}

The governing equation is $u'' - 10u = f(x)$ on $[0, 8]$ with Dirichlet boundary conditions and exact solution $u(x) = \sin(3\pi x + 3\pi/20)\cos(2\pi x + \pi/10) + 2$. The solver uses domain decomposition with $C^1$ continuity enforced across subdomain interfaces and uniformly spaced collocation points within each subdomain.
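For reference, the forcing term $f(x)$ implied by this manufactured solution can be generated symbolically rather than transcribed by hand. The sketch below is illustrative and not taken from the reproduction code; it derives $f = u'' - 10u$ from the exact solution with SymPy:

```python
import sympy as sp

# Manufactured forcing for u'' - 10u = f on [0, 8]: differentiate the
# known exact solution symbolically and substitute into the operator.
x = sp.symbols('x')
u_exact = sp.sin(3*sp.pi*x + sp.Rational(3, 20)*sp.pi) \
          * sp.cos(2*sp.pi*x + sp.pi/10) + 2
f_expr = sp.diff(u_exact, x, 2) - 10*u_exact

# Vectorised callables for collocation points and error evaluation.
u_fn = sp.lambdify(x, u_exact, 'numpy')
f_fn = sp.lambdify(x, f_expr, 'numpy')
```

Any trial solution can then be checked against \texttt{f\_fn} at the collocation points, which removes one common source of transcription error in manufactured-solution benchmarks.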

\begin{table}[h]
  \centering
  \caption{Effect of subdomain count ($Q=50$, $M=50$, $R_m=3.0$).}
  \label{tab:subdomains}
  \begin{tabular}{@{}lccc@{}}
    \toprule
    Subdomains ($N_e$) & Max Error & RMS Error & Time (s) \\
    \midrule
    1 & $1.47\times 10^{2}$ & $4.22\times 10^{1}$ & 0.25 \\
    2 & $9.50\times 10^{1}$ & $2.52\times 10^{1}$ & 0.46 \\
    4 & $2.03$ & $7.06\times 10^{-1}$ & 1.09 \\
    8 & $2.00$ & $7.07\times 10^{-1}$ & 2.35 \\
    \bottomrule
  \end{tabular}
\end{table}

\begin{table}[h]
  \centering
  \caption{Effect of training parameters ($N_e=2$, $Q=100$, $R_m=3.0$).}
  \label{tab:training}
  \begin{tabular}{@{}lccc@{}}
    \toprule
    Parameters ($M$) & Max Error & RMS Error & Time (s) \\
    \midrule
    50  & $4.30\times 10^{1}$ & $7.23$ & 0.46 \\
    100 & $3.10\times 10^{1}$ & $1.04\times 10^{1}$ & 0.93 \\
    200 & $7.00$ & $1.70$ & 1.84 \\
    300 & $3.00$ & $7.83\times 10^{-1}$ & 2.75 \\
    \bottomrule
  \end{tabular}
\end{table}

The solver exhibits the expected qualitative trends: the maximum error drops by roughly two orders of magnitude when moving from one to four subdomains (Table~\ref{tab:subdomains}) and by more than an order of magnitude as $M$ grows from 50 to 300 (Table~\ref{tab:training}), while runtime scales approximately linearly with the number of subdomains and trainable parameters. Nevertheless, the absolute error remains stubbornly high: the best observed configuration ($N_e=4$, $Q=100$, $M=100$) yielded a maximum error of $2.0$, roughly nine orders of magnitude worse than the $10^{-9}$ reported in the paper, while taking 2.18~s versus the reference 1.1~s.
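The error columns in both tables are the standard pointwise norms over a dense evaluation grid; a minimal sketch of the metric computation (the function name is ours, not the reproduction code's):

```python
import numpy as np

def error_metrics(u_pred, u_exact):
    """Maximum and root-mean-square pointwise error over the grid."""
    err = np.abs(np.asarray(u_pred) - np.asarray(u_exact))
    return err.max(), np.sqrt(np.mean(err**2))
```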

\begin{figure}[h]
  \centering
  \includegraphics[width=0.8\textwidth]{../helmholtz_1d_solution.png}
  \caption{Computed versus exact 1D Helmholtz solution with absolute error overlay for the $N_e=4$, $Q=100$, $M=100$ configuration.}
  \label{fig:helmholtz1d}
\end{figure}

\subsection{Two-Dimensional Helmholtz Equation}

For the 2D benchmark $\nabla^2 u - 10u = f(x,y)$ on $[0,3.6]^2$ with Dirichlet boundary conditions, the implementation currently fails to complete due to automatic differentiation overhead. The initial test case (single subdomain, $25\times 25$ collocation grid, $M=400$) timed out after five minutes without producing a solution.

\begin{table}[h]
  \centering
  \caption{Summary of two-dimensional Helmholtz experiment.}
  \label{tab:helmholtz2d}
  \begin{tabular}{@{}lcc@{}}
    \toprule
    Configuration & Status & Observed Time \\
    \midrule
    $1\times 1$ subdomain, $25\times 25$ collocation, $M=400$ & Timeout & $>300$~s \\
    \bottomrule
  \end{tabular}
\end{table}

Profiling traced the bottleneck to the nested derivative loops inside \texttt{helmholtz\_operator\_2d}: for each of the $M=400$ hidden nodes, the code replays the network forward pass and invokes TensorFlow's \texttt{GradientTape} twice to accumulate the second derivatives with respect to each spatial dimension. This serial scheme performs roughly $2M$ forward evaluations per operator call, leading to prohibitive runtimes compared with the paper's approximately 33~s for a $2\times 2$ decomposition.
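A batched formulation would remove the per-node loop entirely: one pair of nested tapes with \texttt{batch\_jacobian} yields the second derivatives of all $M$ hidden units at once. The following is a hypothetical replacement sketch, assuming a single $\tanh$ hidden layer with weights \texttt{W} of shape $(2, M)$ and bias \texttt{b}, rather than the reproduction code's actual interface:

```python
import tensorflow as tf

def hidden_laplacian(xy, W, b):
    """Laplacian of every tanh hidden unit at every collocation point,
    computed in one batched pass instead of an M-fold Python loop.
    xy: (N, 2) collocation points; W: (2, M) weights; b: (M,) biases.
    Returns (N, M): h_xx + h_yy for each point/unit pair."""
    with tf.GradientTape() as outer:
        outer.watch(xy)
        with tf.GradientTape() as inner:
            inner.watch(xy)
            h = tf.tanh(xy @ W + b)            # (N, M) activations
        grad = inner.batch_jacobian(h, xy)     # (N, M, 2) first derivatives
    hess = outer.batch_jacobian(grad, xy)      # (N, M, 2, 2) Hessians
    return hess[..., 0, 0] + hess[..., 1, 1]   # trace = Laplacian
```

The cost of this formulation is a constant number of taped passes, independent of $M$, in place of the roughly $2M$ serial replays described above.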

\begin{figure}[h]
  \centering
  \includegraphics[width=0.8\textwidth]{../helmholtz_2d_solution.png}
  \caption{Illustrative 2D Helmholtz visualisation from earlier runs: surface plot of the predicted solution and absolute error heatmap. Current vectorised implementation work aims to regenerate this figure without timeouts.}
  \label{fig:helmholtz2d}
\end{figure}

\subsection{Advection Equation Benchmarks}

Time-dependent advection experiments, documented in Section~3.2 of the paper and the accompanying advection report\footnote{experiments/section1\_report.md}, employ block time marching with periodic spatial boundaries. Although the published parameter sweeps were matched, every tested configuration plateaus at a maximum error of approximately $2.0$ with an RMS error near $7\times 10^{-1}$, which points to a systemic correctness issue rather than a lack of model capacity. Baseline timings range from 2~s for coarse meshes to roughly 18~s for the $N_e=8$, $Q=20\times 20$, $M=300$ setup.
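The block-marching driver itself is simple; the essential invariant is that each block's final state seeds the next block's initial condition, which is also where a transfer bug would compound over time. A minimal sketch, where \texttt{solve\_block} is a stand-in for the per-block training routine rather than the reproduction code's actual interface:

```python
def march_blocks(u_init, t_final, n_blocks, solve_block):
    """Advance the solution through n_blocks uniform time blocks.
    solve_block(u0, t0, t1) solves one space-time block with initial
    state u0 on [t0, t1] and returns the state at t1."""
    dt = t_final / n_blocks
    u, states = u_init, []
    for k in range(n_blocks):
        # The previous block's final state is this block's initial condition.
        u = solve_block(u, k * dt, (k + 1) * dt)
        states.append(u)
    return states
```

With an exact single-block integrator for $du/dt = -u$, the marched result reproduces the closed-form solution regardless of the block count, which makes this driver easy to unit-test in isolation from the PDE solver.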

\begin{figure}[h]
  \centering
  \includegraphics[width=0.8\textwidth]{../experiments/advection_baseline.png}
  \caption{Baseline advection simulation ($t_f=10$, $N_e=8$, $Q=20\times 20$, $M=300$). The numerical wave propagates but exhibits order-one absolute error throughout the domain.}
  \label{fig:advection-baseline}
\end{figure}

Refinement studies confirm the absence of convergence: increasing the number of subdomains, collocation points, or hidden-layer width grows the runtime as expected but leaves the error essentially unchanged.

\begin{figure}[h]
  \centering
  \includegraphics[width=0.8\textwidth]{../experiments/advection_subdomain_conv.png}
  \caption{Effect of increasing spatial-temporal subdomains on advection accuracy (Section~\ref{sec:review}). Error curves remain flat at $\mathcal{O}(1)$ across all tested decompositions.}
  \label{fig:advection-subdomains}
\end{figure}

\begin{figure}[h]
  \centering
  \includegraphics[width=0.8\textwidth]{../experiments/advection_colloc_conv.png}
  \caption{Collocation sweep demonstrating that denser sampling does not mitigate the large residual bias in the advection solver.}
  \label{fig:advection-colloc}
\end{figure}

\begin{figure}[h]
  \centering
  \includegraphics[width=0.8\textwidth]{../experiments/advection_trapar_conv.png}
  \caption{Training-parameter study for the advection equation. Even with $M=300$ hidden units per subdomain, the maximum error stalls near $2.0$.}
  \label{fig:advection-M}
\end{figure}

Finally, reduced configurations (Figure~\ref{fig:advection-basic}) corroborate that the discrepancy persists at smaller time horizons and coarser meshes, supporting the hypothesis of an operator assembly or boundary treatment issue.

\begin{figure}[h]
  \centering
  \includegraphics[width=0.8\textwidth]{../experiments/advection_basic.png}
  \caption{Simplified advection run highlighting the persistent bias at lower resolution.}
  \label{fig:advection-basic}
\end{figure}

\subsection{Diffusion and Nonlinear Helmholtz Studies}

Section~\ref{sec:review} synthesises the diffusion and nonlinear Helmholtz investigations detailed in the diffusion/nonlinear report\footnote{experiments/section2\_report.md}. The diffusion solver mirrors the advection behaviour with order-one errors despite extensive refinement, pointing toward shortcomings in block-to-block continuity or initial-condition transfer. The nonlinear Helmholtz pipeline is structurally complete but requires residual validation before large-scale sweeps can begin.

\subsection{Burgers Equation Experiments}

The Burgers equation tests, summarised in the Burgers report\footnote{experiments/section3\_report.md}, extend the time-dependent framework to nonlinear dynamics. While the infrastructure---including block marching, nonlinear solvers, and diagnostic tooling---is in place, the combination of finite-difference derivative evaluations and expensive perturbation solves renders the current implementation too slow for the full parameter grid. The error also remains many orders of magnitude above the $10^{-8}$ target for the canonical configuration ($t_f=10$, $N_b=40$, $N_e=5$, $Q=20\times 20$, $M=200$, $R_m=0.75$), reinforcing the need for the improvements outlined in Section~\ref{sec:recommendations}.
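The cost profile of the perturbation solves is easy to see in isolation: a forward-difference Jacobian of the nonlinear residual needs one extra residual evaluation per trainable parameter, i.e. $M+1$ evaluations per linearisation. An illustrative sketch (not the reproduction code):

```python
import numpy as np

def fd_jacobian(residual, theta, eps=1e-6):
    """Forward-difference Jacobian of residual(theta).
    Each column costs one full residual evaluation, so the total cost
    is (len(theta) + 1) residual calls per linearisation."""
    theta = np.asarray(theta, dtype=float)
    r0 = np.asarray(residual(theta))
    J = np.empty((r0.size, theta.size))
    for j in range(theta.size):
        perturbed = theta.copy()
        perturbed[j] += eps          # perturb one parameter at a time
        J[:, j] = (np.asarray(residual(perturbed)) - r0) / eps
    return J
```

With $M=200$ parameters per subdomain, each nonlinear iteration therefore replays the residual about 201 times, consistent with the observed slowdown; an analytic or tape-based Jacobian would reduce this to a constant number of passes per iteration.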
