\chapter{Simulation Framework Design}
\label{chap:simulation}

\section{Platform Selection and Roles}

We adopt a tiered tooling stack:
\begin{itemize}
    \item \textbf{NetLogo} for rapid prototyping and visualization of small-scale models.
    \item \textbf{Repast Simphony} or \textbf{Mesa} for large-scale agent-based experiments.
    \item \textbf{Python/Julia} pipelines for batch orchestration, data processing, and metric computation.
\end{itemize}

Each platform is assigned roles per \cref{tab:platform_roles}.

\begin{table}[H]
    \centering
    \caption{Simulation platform roles}
    \label{tab:platform_roles}
    \begin{tabular}{p{3cm}p{4cm}p{5cm}}
        \toprule
        Platform & Primary usage & Integration notes \\
        \midrule
        NetLogo & Exploratory modelling, concept demos & Hooks export runs to CSV/JSON; uses BehaviorSpace for sweeps \\
        Repast & High-fidelity, parallel simulations & Java-based; integrate with Python analytics via HDF5 dumps \\
        Mesa & Python-native ABM for experimentation & Direct link to metric computation; container-friendly \\
        \bottomrule
    \end{tabular}
\end{table}

\section{Model Configuration Schema}

Configuration files use YAML or TOML; field names mirror the theoretical symbols introduced in earlier chapters so that configurations remain traceable to the model definitions.

\begin{longtable}{p{3cm}p{4cm}p{5cm}}
    \caption{Core configuration fields}\label{tab:config_schema}\\
    \toprule
    Field & Description & Example \\
    \midrule
    \endfirsthead
    \toprule
    Field & Description & Example \\
    \midrule
    \endhead
    \midrule
    \multicolumn{3}{r}{\textit{Continued on next page}}\\
    \midrule
    \endfoot
    \bottomrule
    \endlastfoot
    lattice\_size & Base lattice dimension & 64 \\
    levels & Hierarchy description & $[\text{micro}, \text{meso}, \text{macro}]$ \\
    intra\_coupling & $J^{(\ell)}$ tensors & $\{\text{micro}: 1.5, \text{meso}: 0.8\}$ \\
    inter\_coupling & $K^{(\ell,\ell+1)}$ & $\{(\text{micro},\text{meso}): 0.4\}$ \\
    noise\_schedule & Randomness parameters & $\sigma_0 = 0.2$, decay $= 0.01$ \\
    governance & Safety thresholds & queue\_cap $= 30$ \\
    instrumentation & Metrics toggles & entropy=true, mutual\_info=true \\
    seeds & Random seeds per run & $[42, 43, 44]$ \\
    storage & Output locations & data/raw/run\textunderscore id \\
\end{longtable}

Schema definitions are versioned in \texttt{data/schema/} alongside JSON Schema validation scripts.
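The actual validation scripts use JSON Schema; as a stand-in, the following stdlib-only sketch illustrates the shape of the check they perform (required fields and basic types only; \texttt{REQUIRED\_FIELDS} is a hypothetical subset of the schema, not the full definition):

```python
# Minimal stand-in for the JSON Schema validation scripts:
# checks required fields and basic types only. The real scripts
# validate against the versioned schema in data/schema/.

REQUIRED_FIELDS = {
    "lattice_size": int,
    "levels": list,
    "seeds": list,
}

def validate_config(config: dict) -> list[str]:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in config:
            errors.append(f"missing required field: {field}")
        elif not isinstance(config[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

print(validate_config({"lattice_size": 64, "levels": ["micro"], "seeds": [42]}))  # []
print(validate_config({"lattice_size": "64"}))  # type and missing-field errors
```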

\section{Instrumentation and Logging}

We instrument simulations to capture both raw trajectories and derived metrics.

\begin{itemize}
    \item \textbf{State snapshots}: Periodic dumps of $(x_i(t))$, aggregated $\Phi_\ell(t)$, queue lengths, and control signals.
    \item \textbf{Metric pipeline}: Online computation of entropy, transfer entropy, effective information (via plug-in estimators), and queue statistics.
    \item \textbf{Provenance metadata}: Git commit hashes, parameter IDs, environment fingerprints stored alongside outputs.
\end{itemize}
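For the metric pipeline, the plug-in estimator for entropy simply substitutes empirical frequencies for the true distribution. A minimal sketch over discretized state samples:

```python
import math
from collections import Counter

def plugin_entropy(samples, base=2.0):
    """Plug-in (maximum-likelihood) Shannon entropy estimate.

    Empirical frequencies stand in for the true distribution, so the
    estimate is biased low for small sample sizes.
    """
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log(c / n, base) for c in counts.values())

print(plugin_entropy([0, 1, 0, 1]))  # 1.0 bit for a balanced binary state
```

The same frequency-counting pattern extends to joint distributions, which is what the transfer-entropy and effective-information estimators build on.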

Logging schema follows the metadata template in Appendix~\ref{app:implementation}. Instrumentation code stubs reside in Appendix~\ref{app:algorithms}.
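As an illustration of the provenance bullet above, a capture routine might look like the following sketch (the function name and field set are illustrative, not the appendix stubs; the git call assumes execution inside a repository checkout):

```python
import json
import platform
import subprocess
import sys

def capture_provenance(param_id: str) -> dict:
    """Collect provenance metadata to store alongside simulation outputs.

    `param_id` is a caller-supplied experiment identifier; the git hash
    falls back to "unknown" outside a repository checkout.
    """
    try:
        git_hash = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        git_hash = "unknown"
    return {
        "param_id": param_id,
        "git_commit": git_hash,
        "python_version": sys.version.split()[0],
        "platform": platform.platform(),
    }

print(json.dumps(capture_provenance("exp-001"), indent=2))
```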

\section{Automation and Reproducibility}

Continuous integration (CI) workflows orchestrate batch simulations:
\begin{enumerate}
    \item Containerized environments ensure consistent dependencies.
    \item CI runs smoke tests on each commit; nightly builds execute full parameter sweeps.
    \item Results are uploaded to \texttt{artifacts/\textless chapter\textgreater/} with checksums and manifest files.
\end{enumerate}
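The checksum-and-manifest step can be sketched with the standard library alone (directory layout and file names here are hypothetical, not the repository's actual artifact structure):

```python
import hashlib
import json
import tempfile
from pathlib import Path

def build_manifest(artifact_dir: Path) -> dict:
    """Map each file under artifact_dir to its SHA-256 checksum."""
    manifest = {}
    for path in sorted(artifact_dir.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(artifact_dir))
            manifest[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

# Usage sketch on a throwaway directory with one synthetic output file.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "run_001.csv").write_text("t,phi\n0,0.5\n")
    manifest = build_manifest(root)
    (root / "manifest.json").write_text(json.dumps(manifest, indent=2))
    print(manifest)
```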

\begin{mdframed}[style=theoremstyle]
\textbf{Reproducibility Checklist}
\begin{itemize}
    \item Deterministic random seed assignment (per-run seeds + master seed).
    \item Environment capture via Conda/Poetry lockfiles and container definition.
    \item Logging of simulator version, git hash, hardware summary.
    \item Automated validation that configuration diffs correspond to tracked experiment IDs.
\end{itemize}
\end{mdframed}
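The first checklist item (per-run seeds derived from a master seed) can be sketched as follows; the function name is illustrative:

```python
import random

def derive_run_seeds(master_seed: int, n_runs: int) -> list[int]:
    """Derive reproducible per-run seeds from a single master seed.

    The same master seed always yields the same per-run seeds, so any
    individual run can be re-executed in isolation.
    """
    rng = random.Random(master_seed)
    return [rng.randrange(2**32) for _ in range(n_runs)]

seeds = derive_run_seeds(master_seed=42, n_runs=3)
print(seeds)
```

Deriving seeds this way, rather than hand-listing them, keeps the \texttt{seeds} configuration field small while still allowing selective re-runs.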

\section{Validation Readiness}

Before executing hypothesis tests (Chapter~\ref{chap:experiments}), simulations must satisfy:
\begin{itemize}
    \item \textbf{Unit tests} for rule implementations and instrumentation functions.
    \item \textbf{Smoke runs} verifying stable execution for representative configurations.
    \item \textbf{Baseline comparisons} with analytic benchmarks (e.g., mean-field predictions) to sanity-check outputs.
    \item \textbf{Data quality guards} ensuring no missing fields, consistent sampling cadence, and bounded values.
\end{itemize}
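The data-quality guards in the last item can be sketched as a single check over a sampled time series (function name and tolerance are illustrative assumptions):

```python
def check_series(times, values, lower, upper, dt_tol=1e-9):
    """Basic data-quality guards: no missing values, uniform sampling
    cadence, and all values bounded within [lower, upper]."""
    errors = []
    if any(v is None for v in values):
        errors.append("missing values present")
    dts = [t1 - t0 for t0, t1 in zip(times, times[1:])]
    if dts and max(dts) - min(dts) > dt_tol:
        errors.append("non-uniform sampling cadence")
    if any(v is not None and not (lower <= v <= upper) for v in values):
        errors.append("value out of bounds")
    return errors

print(check_series([0.0, 0.1, 0.2], [0.5, 0.6, 0.7], 0.0, 1.0))  # []
```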

