% This is samplepaper.tex, a sample chapter demonstrating the
% LLNCS macro package for Springer Computer Science proceedings;
% Version 2.21 of 2022/01/12
%
\documentclass[runningheads]{llncs}
%
\usepackage[T1]{fontenc}
% T1 fonts will be used to generate the final print and online PDFs,
% so please use T1 fonts in your manuscript whenever possible.
% Other font encodings may result in incorrect characters.
%
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{array}    % ensure enhanced table support
\usepackage{listings}
\usepackage{afterpage}
\usepackage{fancyhdr}
\usepackage{tabularx}
\usepackage{multirow}
\usepackage{longtable}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{booktabs}
\usepackage{microtype}
\usepackage[hyphens]{url}
\usepackage{amssymb}
\usepackage{pifont}
% \usepackage[hidelinks]{hyperref}
% \usepackage{etoolbox}
% \usepackage{hyperref}
% \usepackage[english]{babel}
% \usepackage{hyphenat}
% \usepackage{geometry}

% Used for displaying a sample figure. If possible, figure files should
% be included in EPS format.
%
% If you use the hyperref package, please uncomment the following two lines
% to display URLs in blue roman font according to Springer's eBook style:
%\usepackage{color}
%\renewcommand\UrlFont{\color{blue}\rmfamily}
%\urlstyle{rm}
%
\begin{document}
%
\title{Auto-CLOUDSC: An Auto-generation Framework for Vectorization and Optimization of Cloud Microphysics Parameterization on ARM CPUs}

% AutoFFT: a template-based FFT codes auto-generation framework for ARM and X86 CPUs

%
%\titlerunning{Abbreviated paper title}
% If the paper title is too long for the running head, you can set
% an abbreviated paper title here
%
\author{
Tun Chen\inst{}\orcidID{0000-0003-3459-7960} \and
Jianping Wu\inst{}\orcidID{0000-0001-6365-777X}\thanks{Corresponding author.} \and
Yuntian Zheng\inst{}\orcidID{0009-0001-9227-4919}\and
Yingjie Wang\inst{}\orcidID{0000-0002-4350-1871}\and
Fukang Yin\inst{}\orcidID{0000-0003-3459-7960}\and
Jinhui Yang\inst{}\orcidID{0000-0002-7437-0245}\and
Juan Zhao\inst{}\orcidID{0000-0002-8412-2472}\and
Xiaoli Ren\inst{}\orcidID{0000-0002-8665-5571}
}
%
\authorrunning{T. Chen et al.}
% First names are abbreviated in the running head.
% If there are more than two authors, 'et al.' is used.
%
\institute{College of Meteorology and Oceanography, National University of Defense Technology, Changsha 410073, China
\email{\{chentun,wjp,zhengyuntian18,wangyingjie20,yinfukang,yangjinhui,zhaojuan,\\renxiaoli18\}@nudt.edu.cn}}
%
\maketitle              % typeset the header of the contribution
%
\begin{abstract}

With the rise of scalable vector extension instruction sets in ARM processor architectures, large Fortran-based scientific codes face challenges in performance portability. This paper focuses on CLOUDSC, a computationally intensive cloud microphysics parameterization scheme with complex data access patterns from the Integrated Forecasting System of ECMWF. We propose Auto-CLOUDSC, an auto-generation framework that optimizes CLOUDSC through three methods: (1) an auto-generator consisting of three modules, namely a function interface generator, a code structure analyzer, and an expression parser, that converts Fortran into vectorization intrinsic code; (2) a physics-combine algorithm that applies loop fusion to reduce redundant memory accesses; and (3) a cache-aware algorithm that uses cache tiling and data layout optimization to improve data reuse. Experiments demonstrate that the auto-generated code achieves a speedup of 1.3 to 2.1 times over the original Fortran baseline on the Phytium FT2000+ ARMv8 processor.

\keywords{Cloud microphysics parameterization  \and Cache tiling \and Loop transformations \and Auto-generation.}

\end{abstract}
%
%
%
\section{Introduction}


Numerical weather prediction (NWP) is a key scientific application in high-performance computing (HPC). It has traditionally relied heavily on the Fortran programming language \cite{mendez2014climate}. With the end of Moore's Law and increasing demands for efficiency, HPC architectures have diversified \cite{muller2019escape}, encompassing multi-core CPUs and wide vector units such as the ARM Scalable Vector Extension (SVE) \cite{noauthor_sve_nodate}. This shift has highlighted the challenge of achieving performance portability for legacy Fortran code \cite{schulthess2018reflecting} and underscores the urgency of adapting algorithms and software \cite{bauer_digital_2021}. Traditional Fortran-based scientific computing codebases contain thousands of source files and millions of lines of code accumulated over decades of scientific discovery and software development \cite{schar2020kilometer,lapillonne2017using}. A major challenge now is how to exploit high-level abstraction and automated programming models to ensure efficient, portable, and maintainable operation of weather and climate models on new hardware architectures.


CLOUDSC is a sophisticated multi-physics cloud microphysics parameterization scheme developed by the European Centre for Medium-Range Weather Forecasts (ECMWF) for its Integrated Forecasting System (IFS) \cite{ecmwf_ifs_2023}. The Fortran implementation of CLOUDSC for CPU platforms is freely available on GitHub \cite{noauthor_ecmwf-ifsdwarf-p-CLOUDSC_2025}. The computations in CLOUDSC are organized as two-dimensional data structures along the horizontal (column) and vertical (level) directions. Computations are independent across horizontal columns, while data dependencies exist between vertical levels within each column. Data for all columns at the same vertical level is stored contiguously. This layout is especially suitable for vectorization, as data from multiple columns at the same vertical level can be processed in parallel \cite{allen1987automatic,hampton2008compiling,kennedy2001optimizing}. The independence between columns also allows them to be blocked in cache, so each processing unit can focus on a subset of columns and thereby improve data locality \cite{song1999new,coleman1995tile}. Furthermore, CLOUDSC typically employs multiple nested loop structures, which are well suited to targeted loop transformations that reduce loop control overhead and expose more opportunities for register reuse and instruction-level parallelism \cite{wolf1991loop,mckinley1996improving}. As a highly coupled, advanced cloud parameterization routine operating on nearly fifty input arrays, CLOUDSC is also one of the most computationally intensive components of the IFS, accounting for approximately 10\% of the total model runtime \cite{ubbiali2024exploring,hague_ifs_2014}. The primary challenges faced by cloud microphysics parameterization schemes can be summarized as follows:


(1) Due to the substantial codebase and intricate structure of CLOUDSC, which features numerous nested loops and branches, complex expressions, and statement functions, manually implementing explicit vectorization and parallelization for ARM SVE imposes an overwhelming workload. This complexity also complicates code maintenance and hinders porting efforts.

(2) The structure of the original CLOUDSC code precludes effective reuse of grid-point data in cache across successive physical-process computations. This deficiency causes frequent cache misses and exacerbates the ``memory wall'' bottleneck.

(3) The native data access pattern of CLOUDSC induces large memory strides when accessing vertical-level data, which hinders the exploitation of spatial locality and results in suboptimal cache-line utilization.

To systematically address these challenges, this paper proposes and implements an auto-generation and optimization framework called Auto-CLOUDSC. It adopts a holistic, hierarchical strategy to transform legacy Fortran code into high-performance, modernized code suitable for ARM SVE processors. It optimizes the computations and memory access patterns of cloud microphysics parameterization through two core components: auto-vectorization-based code generation and loop transformation. The primary contributions of this work are summarized as follows:

\begin{itemize}
\item \textbf{An Auto-CLOUDSC framework:} We have developed a comprehensive source-to-source framework to parse complex Fortran expressions. It automatically identifies interface parameters, recognizes code structures, and parses expressions to generate high-performance ARM SVE intrinsic codes, fully harnessing the computational capabilities of advanced vector hardware.
\item \textbf{A physics-combine algorithm:} We propose a loop fusion algorithm that consolidates multiple independent kernels in CLOUDSC into a unified master loop. This algorithm significantly enhances temporal locality and reduces computational redundancy.
\item \textbf{A cache-aware algorithm:} We design a cache tiling algorithm meticulously tailored for column-major data structures. By optimizing spatial locality, this approach ensures that the working dataset required for vertical column physical computations can be accommodated and retained within the L2 cache, thereby minimizing costly main memory accesses.
\end{itemize}

We conducted a comprehensive performance evaluation on an ARMv8-based server (Phytium FT2000+)\cite{fang2021performance,gao2023wrbench}. The results demonstrate that the optimized code achieves a performance improvement of 1.3 to 2.1 times, thereby substantiating the efficacy of the proposed approaches.

The remainder of the paper is organized as follows. Section \ref{sec:Auto-CLOUDSC} presents Auto-CLOUDSC for the auto-generation of vectorization code. Section \ref{sec:loop_opt} elaborates on optimization of loop transformations for cloud microphysics parameterizations. Section \ref{sec:Perf-Evaluation} details performance evaluation and analysis. Section \ref{sec:Related-Work} reviews related work. Finally, Section \ref{sec:Conclusion} concludes the paper.



% \section{Background}


% \begin{figure}[t!]
%   \centering
%   \includegraphics[width=.6\textwidth]{cimg/arm_cache.pdf}
%   \caption{The cache hierarchy of the Phytium FT2000+ CPU.}
%   \label{arm_cache}
% \end{figure}

% \subsection{CPU Architecture}

% The Phytium FT2000+ is a high-performance processor based on the ARMv8 architecture and designed for scientific computing applications. It also supports the SVE instruction set. It features a hierarchical cache structure and sharing scheme as illustrated in Figure~\ref{arm_cache}. It features a multi-core architecture with eight cores, organized into four core groups, each containing two cores. The cache hierarchy of the Phytium FT2000+ includes a dedicated L1 cache of 64 KB per core, a shared L2 cache of 2 MB per core group, and a unified L3 cache of 64 MB accessible by all core groups. This hierarchical cache structure is designed to optimize data access speed and parallel processing efficiency, which is crucial for applications that are intensive in array memory access and have large memory strides in high-dimensional data, such as cloud microphysics parameterization.

% \subsection{The Challenges of CLOUDSC}

% \subsubsection{Optimization of Temporal Data Locality}

% The original CLOUDSC code structure follows a procedural design. Each physical-process (such as condenzation, ice growth, and precipitation formation) is encapsulated in its kernel subroutine. The model iterates over all grid blocks using an outer loop. Inside each block, it performs a complete set of nested loops for every physical-process, traversing all vertical levels and horizontal grid points.

% This structure induces poor temporal data locality. After one physical module finishes its computation, a large volume of data (like temperature, humidity, and cloud water arrays) is already loaded into the cache. However, the cache is quickly overwritten by new data required by the next independent physical module. Thus, for the same grid point, its state variables cannot be efficiently reused in cache during different physical-process computations. The variables must be repeatedly fetched from the much slower main memory.

% This persistent cache eviction and data reloading significantly waste memory bandwidth. It forces the CPU to stall frequently, waiting for data. As a result, the system hits the so-called "memory wall" bottleneck.

% \subsubsection{Optimization of Spatial Data Locality}

% Fortran adopts column-major order to store multidimensional arrays. This means the first index is stored contiguously in memory. In CLOUDSC, this dimension represents the horizontal grid points. This design inherently favors access along the horizontal direction. However, cloud microphysics calculations exhibit data dependencies in the vertical direction. Computing data for one vertical level requires values from adjacent layers.

% The original code processes horizontal grid points sequentially. It then completes all vertical level calculations within each point. When accessing different vertical levels at a single point, the memory access stride equals the number of horizontal grid points. This stride is giant. Modern CPUs fetch a fixed-size data block, called a cache line, from main memory at each access. The large stride renders only a small fraction of each cache line relevant for the current computation, while the majority of the data is wasted.

% Such inefficient cache line usage undermines spatial data locality. This causes the actual number of memory accesses to exceed the theoretical minimum far. As a result, memory bandwidth pressure intensifies further.

% \subsubsection{Data Vectorization and Parallel Optimization}

% The core advantage of the ARM SVE architecture lies in its powerful vector processing units. These units can execute a single instruction to process multiple data elements in parallel, thus substantially increasing computational throughput. However, CLOUDSC is legacy Fortran code and lacks explicit vectorization directives for ARM SVE. Although modern compilers, such as the Arm Fortran Compiler, integrate specific automatic vectorization capabilities, their effectiveness often remains limited for highly complex codes like CLOUDSC.

% When compilers confront frequent conditional branches, intricate loop dependencies, and irregular memory access patterns, their ability to analyze and generate efficient vectorized code becomes severely restricted. At the same time, due to the massive codebase and complex structure of CLOUDSC, manually refactoring the code for efficient ARM SVE vectorized parallelism incurs enormous development effort and poses significant challenges for ongoing maintenance and optimization.

% Given the scale of migration and optimization required, it is critical to introduce tools or frameworks that can automatically analyze the code and generate efficient parallel vectorized instructions. Such solutions can markedly reduce development workload, enhance maintainability, and fully exploit the computational potential of the ARM SVE architecture.

\section{Auto-CLOUDSC: Auto-generation Architecture}
\label{sec:Auto-CLOUDSC}

\begin{figure}[t!]
\centering
\includegraphics[width=.72\textwidth]{cimg/auto-sve-arch.pdf}
\caption{Auto-Generator Workflow}
\label{fig:auto-sve-arch}
\end{figure}

The Auto-CLOUDSC architecture is built around an auto-generator that transforms Fortran source code into ARM SVE intrinsic code. The workflow comprises three modules: a function interface generator, a code structure analyzer, and an expression parser. The primary objective of the framework is to automatically generate efficient ARM SVE intrinsic code, thereby fully exploiting the parallel computing capabilities of vectorization on ARM architectures. Fig.~\ref{fig:auto-sve-arch} presents the invocation relationships between the modules and the process of auto-vectorization.

\subsection{Function Interface Generator}

\begin{algorithm}[t!]
\caption{Vars\_in\_out\_tmp for Parameter Type Identification}
\label{alg:vars_in_out_tmp}
\begin{algorithmic}[1]
\Require $index$, $left\_vars$, $right\_vars$, $all\_left\_vars$, $all\_right\_vars$, $input\_vars$, $output\_vars$, $temp\_vars$
\If{$index = 0$}
\State $output\_vars \gets left\_vars$
\State $input\_vars \gets right\_vars$
\State $all\_left\_vars \gets left\_vars$
\State $all\_right\_vars \gets right\_vars$
\Else
\ForAll{$r \in right\_vars$}
\If{$r \in all\_left\_vars$}
\State $all\_left\_vars \gets all\_left\_vars \setminus \{r\}$
\State $output\_vars \gets output\_vars \setminus \{r\}$
\State $all\_right\_vars \gets all\_right\_vars \cup \{r\}$
\State $temp\_vars \gets temp\_vars \cup \{r\}$
\State \textbf{continue}
\ElsIf{$r \notin all\_right\_vars$}
\State $all\_right\_vars \gets all\_right\_vars \cup \{r\}$
\State $input\_vars \gets input\_vars \cup \{r\}$
\EndIf
\EndFor
\ForAll{$l \in left\_vars$}
\State $all\_left\_vars \gets all\_left\_vars \cup \{l\}$
\State $output\_vars \gets output\_vars \cup \{l\}$
\EndFor
\If{$left\_vars \subseteq all\_right\_vars$}
\State $all\_right\_vars \gets all\_right\_vars \setminus left\_vars$
\EndIf
\If{$left\_vars \subseteq temp\_vars$}
\State $temp\_vars \gets temp\_vars \setminus left\_vars$
\EndIf
\EndIf
\end{algorithmic}
\end{algorithm}

The function interface generator comprehensively analyzes and categorizes all variables encountered in the Fortran source code, laying the foundation for automated interface generation and subsequent optimizations. The module first scans the source code line by line. By matching the left-hand side (LHS) and right-hand side (RHS) of assignment statements, it extracts variable names and classifies them as input, output, or temporary variables using the predefined function vars\_in\_out\_tmp, as shown in Algorithm~\ref{alg:vars_in_out_tmp}. The function employs an incremental analysis method: it accepts the contextual information and variable sets of the current code line as input and, by cross-referencing the variables on both sides of the current and previous assignments, dynamically maintains the sets of input, output, and temporary variables, continuously updating variable roles during traversal.
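The incremental classification of Algorithm~\ref{alg:vars_in_out_tmp} can be sketched in Python as follows. This is an illustrative re-implementation rather than the actual Auto-CLOUDSC source; the set names mirror the algorithm.

```python
def vars_in_out_tmp(index, left_vars, right_vars, state):
    """Incrementally classify variables as input, output, or temporary.

    `state` holds the running sets of Algorithm 1: all_left_vars,
    all_right_vars, input_vars, output_vars, temp_vars.
    """
    if index == 0:
        # First assignment: LHS variables are outputs, RHS variables inputs.
        state["output_vars"] = set(left_vars)
        state["input_vars"] = set(right_vars)
        state["all_left_vars"] = set(left_vars)
        state["all_right_vars"] = set(right_vars)
        return state

    for r in right_vars:
        if r in state["all_left_vars"]:
            # A value written earlier and now read is a temporary.
            state["all_left_vars"].discard(r)
            state["output_vars"].discard(r)
            state["all_right_vars"].add(r)
            state["temp_vars"].add(r)
        elif r not in state["all_right_vars"]:
            # First time this variable is read: it is an input.
            state["all_right_vars"].add(r)
            state["input_vars"].add(r)

    for l in left_vars:
        state["all_left_vars"].add(l)
        state["output_vars"].add(l)

    # A variable that is re-assigned stops being a pure input/temporary.
    if set(left_vars) <= state["all_right_vars"]:
        state["all_right_vars"] -= set(left_vars)
    if set(left_vars) <= state["temp_vars"]:
        state["temp_vars"] -= set(left_vars)
    return state
```

For the two-line sequence \texttt{T = A + B} followed by \texttt{C = T * A}, the sketch classifies $A$ and $B$ as inputs, $T$ as a temporary, and $C$ as an output.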

After concluding variable dependency analysis, the system infers variable type attributes (such as array or scalar) and data types in the Fortran subroutine. It then automatically synthesizes the function interfaces for the target language (C/ARM SVE). The generated interface accurately specifies all required input and output parameters. It also includes essential array sizes and index symbols, ultimately forming a complete parameter list. Through this mechanism, Auto-CLOUDSC reliably discerns variable dependencies in Fortran code and automatically generates function declarations compliant with the constraints of the target architecture.

The function interface generator also generates parts of the initialization code automatically. First, all Fortran array and scalar parameters are mapped to pointer types at the C level. It then dereferences the pointer-type scalar parameters (such as $KFDIA$ and $KIDIA$) received from Fortran into local integer variables. Finally, the module initializes Fortran constants as vector register values (such as SVE vector constants created by svdup\_n\_f64), thus improving the efficiency of subsequent vectorized computations. It ensures that the following loop and calculation codes can directly utilize local variables, which avoids unnecessary address operations.
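As a sketch, the prologue emission described above might look like the following Python helper. It is illustrative only: \texttt{RTT} is a hypothetical constant name, and the real generator additionally handles data types and array parameters.

```python
def emit_prologue(scalar_params, fortran_constants):
    """Emit C prologue lines: dereference Fortran scalar pointers into
    local variables and broadcast constants into SVE vector registers."""
    lines = []
    for name in scalar_params:
        # Fortran passes arguments by reference, so scalars arrive as pointers.
        lines.append(f"const int {name.lower()} = *{name};")
    for name in fortran_constants:
        # Broadcast each double-precision constant into a vector register.
        lines.append(f"const svfloat64_t v_{name.lower()} = svdup_n_f64({name});")
    return lines
```

Calling \texttt{emit\_prologue(["KIDIA", "KFDIA"], ["RTT"])} yields the scalar dereferences followed by one \texttt{svdup\_n\_f64} broadcast per constant.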

This module provides users with a convenient and highly automated interface generation capability. Users can utilize it to generate interface declarations and assist with parameter analysis. For multiple code segments, the module supports automatic code refactoring and optimization by analyzing data dependencies between variables, thereby maximizing the reuse of computed results and minimizing redundancy. In conclusion, the module precisely determines all input and output variables required in the code transformation and optimization pipeline, thereby establishing a parameter relation for subsequent structure recognition and code generation modules.


\subsection{Code Structure Analyzer}

The code structure analyzer automatically processes loop and branch structures. Upon encountering a loop structure, the loop transformation stage rewrites the original loop into a form better suited to data locality. The module then determines whether the loop variable traverses a vectorizable dimension and generates either vectorized or scalar loops accordingly. If the loop traverses the $KLON$ dimension (contiguous in memory), the module generates a loop whose step size equals the vector length and initializes the corresponding predicate, thereby enabling batch iteration. For non-parallel dimensions, the module generates regular scalar loops. When the module detects a branch structure, it extracts the condition expression, parses its internal logical structure, and decomposes complex logical statements into a sequence of Python logic operations. The module incrementally computes the resulting predicate mask, which is then used to govern the vectorized computation.


\subsubsection{Loop Transformation}

Loop Transformation plays a vital role in optimizing the Auto-CLOUDSC. This module analyzes and modifies the original Fortran code structure. It detects loop constructs in the source code and implements transformations such as loop fusion and loop tiling. These techniques maximize data locality and boost cache efficiency. The details of loop transformation are described in Section \ref{sec:loop_opt}.

\subsubsection{Parallelization Level Determination}


Auto-CLOUDSC differentiates between parallel and non-parallel loops, as illustrated in Algorithm~\ref{alg:vectorized_scalar_loops}. For parallel loops, it generates vectorized loop code and dynamically computes the number of data elements processed per iteration using \texttt{svcntd()}. For non-parallel loops, such as vertical level loops, the module emits regular scalar code. The \texttt{svcntd()} function denotes the number of double-precision elements that a vector register can accommodate, as shown in Equation~(\ref{equ:V}). For instance, on a 256-bit architecture, each iteration can process four double-precision elements.

\begin{equation}
    \operatorname{svcntd}() = \frac{\text{vector register width (bits)}}{\text{data type width (bits)}}
    \label{equ:V}
\end{equation}

\begin{algorithm}[t!]
    \caption{Vectorized and Scalar Loops}
    \label{alg:vectorized_scalar_loops}
    \begin{algorithmic}[1]
        \Statex {\bf // Vectorized parallel loop}
        \For{ $JL \gets KIDIA$ \textbf{to} $KFDIA$ \textbf{step} $\operatorname{svcntd()}$ }
        \State $pg \gets \operatorname{svwhilelt\_b64}(JL-1, KFDIA)$
        \State \ldots
        \EndFor
        \Statex
        \Statex {\bf // Scalar loop}
        \For{ $JK \gets 1$ \textbf{to} $KLEV$ }
        \State \ldots
        \EndFor
    \end{algorithmic}
\end{algorithm}
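The loop-kind decision of Algorithm~\ref{alg:vectorized_scalar_loops} can be modeled by a small code-emission helper, sketched below under the assumption that only the horizontal \texttt{JL} loop variable traverses the contiguous dimension; the emitted strings follow Algorithm~\ref{alg:vectorized_scalar_loops}, and the names are illustrative.

```python
# Loop variables over the contiguous horizontal dimension (assumption).
VECTOR_VARS = {"JL"}

def emit_loop(var, lower, upper):
    """Return a C loop header: a predicated SVE loop for the
    vectorizable dimension, a plain scalar loop otherwise."""
    if var in VECTOR_VARS:
        return (f"for (int {var} = {lower}; {var} <= {upper}; {var} += svcntd()) {{\n"
                f"    svbool_t pg = svwhilelt_b64({var} - 1, {upper});")
    return f"for (int {var} = {lower}; {var} <= {upper}; {var}++) {{"
```

For the vertical \texttt{JK} loop over \texttt{KLEV} levels, the helper falls through to the scalar form with unit stride.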

\subsubsection{Generation of Predicate}

\paragraph{Predicate for Loop Structure}The predicate \texttt{svbool\_t} mechanism precisely controls the range of vectorized data elements computed in each code block. By allowing each vector instruction to operate only on specific elements, the predicate mechanism guarantees data safety and enhances the efficiency of conditional computations. In parallel loops, the module invokes \texttt{svwhilelt\_b64()} to handle boundary cases. This function returns a predicate that leverages a masking mechanism to ensure data safety during the final iteration. The predicate variable is fundamentally a Boolean vector that marks the active elements within the current loop range. Elements satisfying Equation~(\ref{equ:svwhilelt}) are tagged as true by predicate $PG$, while others are marked false, thereby preventing out-of-bounds access.

\begin{equation}
    PG[i] =
    \begin{cases}
        1, & \text{if } (JL - 1) + i < KFDIA \\
        0, & \text{otherwise}
    \end{cases}
    \quad \forall~ i = 0, 1, \dots, \operatorname{svcntd}() - 1
    \label{equ:svwhilelt}
\end{equation}
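Equations~(\ref{equ:V}) and~(\ref{equ:svwhilelt}) can be modeled in a few lines of Python. This is a behavioral sketch of the intrinsics, assuming a 256-bit vector register by default.

```python
def svcntd(vector_bits=256):
    """Lanes per vector register for 64-bit doubles (Equation 1)."""
    return vector_bits // 64

def svwhilelt_b64(jl, kfdia, vector_bits=256):
    """Model of the loop predicate in Equation (2): lane i is active
    while (JL - 1) + i < KFDIA, masking the final partial iteration."""
    return [(jl - 1) + i < kfdia for i in range(svcntd(vector_bits))]
```

With $KFDIA = 10$ and a 256-bit register (four lanes), the final iteration starting at $JL = 9$ activates only the first two lanes, which is exactly how out-of-bounds access is prevented.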


\paragraph{Predicate for Branch Structure}When the module identifies a line of code beginning with an \texttt{IF} branch, it automatically initiates condition processing. In the branch construct $\mathrm{IF}(cond)$, $cond$ is expressed as a predicate vector. Only vector elements whose corresponding predicate values are true participate in the computation.

The module first extracts and decomposes the conditional expression into basic comparison and logical operations, then translates these operations into SVE vector instructions. The logical outcomes are leveraged to generate the corresponding predicate register variable, and the \texttt{svptest\_any()} function tests if any vector elements satisfy the condition.

The module abstracts the predicate-based branch control as \texttt{svptest\_any(PG, cond)}, determining if any true elements exist under the $PG$ mask. If the result is true, the branch code executes; otherwise, it is skipped. The SVE instruction set employs \texttt{svptest\_any()} to implement conditional branching efficiently. In summary, through predicate computation, Auto-CLOUDSC efficiently transforms the original Fortran logic into vectorized mask operations, enabling element-wise evaluation.
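The mask-based branch lowering can be illustrated with a small Python model in which lane masks are Boolean lists; the array names in the usage below are hypothetical.

```python
def svptest_any(pg, cond):
    """True if any lane is active under both the loop predicate and
    the branch predicate (model of the svptest_any intrinsic)."""
    return any(p and c for p, c in zip(pg, cond))

def masked_assign(pg, cond, dst, src):
    """Element-wise model of a vectorized IF body: lanes where both
    predicates hold take the new value, the rest keep the old one."""
    return [s if (p and c) else d for p, c, d, s in zip(pg, cond, dst, src)]
```

For a Fortran branch such as \texttt{IF (ZT(JL) > RTT)}, $cond$ is the lane-wise comparison result; when \texttt{svptest\_any} returns false, the entire branch body is skipped for the current vector of columns.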


\subsection{Expression Parser}

\begin{algorithm}[t!]
\caption{Expression Analysis: \\
Example of $E = \mathrm{MAX}(A+B, \mathrm{MIN}(C, \mathrm{EXP}(D^2)))$}
\label{alg:expression_analysis}
\begin{algorithmic}[1]
\State $E_1 \gets A + B$
\State $E_2 \gets \mathrm{MIN}(C, \mathrm{EXP}(D^2))$
    \State \quad $E_{2,1} \gets C$
    \State \quad $E_{2,2} \gets \mathrm{EXP}(D^2)$
        \State \qquad $E_{2,2,1} \gets D^2$
        \State \qquad $T_1 \gets D \times D$
        \State \qquad $T_2 \gets \mathrm{EXP}(T_1)$
    \State \quad $T_3 \gets \mathrm{MIN}(C, T_2)$
\State $T_4 \gets A + B$
\State $T_5 \gets \mathrm{MAX}(T_4, T_3)$
\State $\mathrm{Extract}(E) = \mathrm{Extract}(E_1) \cup \mathrm{Extract}(E_2) \cup \{\mathrm{MAX}(E_1, E_2)\}$
\end{algorithmic}
\end{algorithm}

A challenge in automatic vectorization lies in correctly handling operator precedence within complex expressions. The expression parser forms the computational nucleus of the Auto-CLOUDSC framework. This module handles the most fundamental computations in the code, which are arithmetic and logical expressions. It takes Fortran expressions originating from assignment statements or conditional branches and, through a sophisticated parsing and mapping mechanism, decomposes them into Python tokens that can ultimately be mapped to a sequence of efficient ARM SVE instructions.

The overarching strategy employed by the expression parser is an iterative, precedence-driven approach, which decomposes an input expression into a sequence of atomic operations. The parser adopts iterative regular-expression-based matching and replacement. It defines a set of pattern-matching steps ordered by operator precedence. At each step, it identifies a specific operation within the expression, generates a temporary variable to hold the result, and replaces the corresponding sub-expression with this variable name. This process strictly follows the operator precedence rules of Fortran, starting from the highest precedence constructs such as parentheses and functions, and proceeding incrementally to lower precedence logical operations. The flow is formalized in Equation~(\ref{equ:Extract}):


\begin{equation}
\mathrm{Extract}(E) = 
\begin{cases}
  E, & \text{if } \mathrm{IsAtomic}(E) \\[2ex]
  \displaystyle\bigcup_{i=1}^k \mathrm{Extract}(E_{i}) \cup \mathcal{O}(E), & \text{if } E \rightarrow E_1, \ldots, E_k
\end{cases}
\label{equ:Extract}
\end{equation}

Here,

\begin{itemize}
    \item The input is an expression $E$,
    \item $\mathcal{O}(E)$ denotes the set of operation steps produced by decomposing $E$ at the current level,
    \item $\mathrm{IsAtomic}(E)$ returns true if $E$ is atomic (i.e., a variable or constant).
\end{itemize}

The term $\bigcup_{i=1}^k \mathrm{Extract}(E_i)$ captures the recursive application of the algorithm to each sub-expression $E_i$, and the union accumulates both the recursively derived operations and the local operation set $\mathcal{O}(E)$ (e.g., a \texttt{MAX} step). This formalism precisely describes the divide-and-conquer decomposition and the final aggregation of computational steps.

Essentially, this module implements an operator-precedence parser via recursive descent. It leverages the operator precedence hierarchy to determine the correct evaluation order for any given expression. When the expression is atomic (a variable or a constant), it is returned directly; otherwise, the parser locates the decomposition points, splits $E$ into $E_1, \ldots, E_k$, recursively invokes the algorithm on each $E_i$, and includes the local operation set $\mathcal{O}(E)$ in the result. This mechanism converts complex expressions into elementary computational steps that map straightforwardly onto ARM SVE instructions.

Algorithm~\ref{alg:expression_analysis} demonstrates this recursive decomposition strategy using the example $E = \mathrm{MAX}(A+B, \mathrm{MIN}(C, \mathrm{EXP}(D^2)))$. The recursive process constructs a parse tree for the complex expression, ultimately breaking it down into atomic computation units consistent with the original Fortran semantics.
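As an illustrative sketch (not the framework's actual regex-based implementation), the same post-order decomposition can be reproduced with Python's \texttt{ast} module, emitting one temporary per atomic operation:

```python
import ast

def extract(expr: str):
    """Decompose an expression into atomic operations (three-address code),
    mirroring Extract(E): atoms are returned directly, compound nodes are
    decomposed recursively and assigned to temporaries T1, T2, ..."""
    steps, counter = [], {"n": 0}

    def new_temp():
        counter["n"] += 1
        return f"T{counter['n']}"

    def walk(node):
        if isinstance(node, ast.Name):        # IsAtomic: variable
            return node.id
        if isinstance(node, ast.Constant):    # IsAtomic: constant
            return str(node.value)
        if isinstance(node, ast.BinOp):       # arithmetic operator
            ops = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Pow: "**"}
            lhs, rhs = walk(node.left), walk(node.right)
            t = new_temp()
            steps.append((t, f"{lhs} {ops[type(node.op)]} {rhs}"))
            return t
        if isinstance(node, ast.Call):        # intrinsic such as MAX/MIN/EXP
            args = ", ".join(walk(a) for a in node.args)
            t = new_temp()
            steps.append((t, f"{node.func.id}({args})"))
            return t
        raise ValueError(f"unsupported construct: {ast.dump(node)}")

    result = walk(ast.parse(expr, mode="eval").body)
    return result, steps

res, steps = extract("MAX(A + B, MIN(C, EXP(D**2)))")
for t, op in steps:
    print(f"{t} <- {op}")
```

Each emitted step is a candidate for one SVE instruction; because temporaries here are produced in argument order, the exact numbering may differ from Algorithm~\ref{alg:expression_analysis}.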


\section{Optimization of Loop Transformations}
\label{sec:loop_opt}

\begin{algorithm}[t!]
\caption{Comparison of Parallel Grid Computing Structures}
\label{alg:grid_computing_comparison}
\begin{algorithmic}[1]
\Require
  \Statex $NGPBLKS$: number of processor grid blocks
  \Statex $KLEV$: number of vertical levels
  \Statex $KLON$: horizontal grid size of processor grid blocks
  \Statex $BL$: number of horizontal grid blocks
  \Statex $NB$: data size of horizontal grid block 
\Ensure $kernel$
\Statex \Comment{Structure 1: Original CLOUDSC code structure}
\For{$JB = 1, NGPBLKS$}
    \For{$JK = 1, KLEV$}
        \For{$JL = 1, KLON$}
            \State $\text{CLOUD\_PHYSICS\_COMPUTE\_1}$
        \EndFor
    \EndFor
    \State \textbf{...}
    \For{$JK = 1, KLEV$}
        \For{$JL = 1, KLON$}
            \State $\text{CLOUD\_PHYSICS\_COMPUTE\_N}$
        \EndFor
    \EndFor
\EndFor

\Statex \Comment{Structure 2: loop optimization with SIMD parallelism}
\For{$JB = 1, NGPBLKS$}
    \For{$JM = 1, BL$}
        \For{$JK = 1, KLEV$}
            \State \textbf{\#SIMD vectors parallel}
            \For{$JL = 1, NB$}
                \State $\text{CLOUD\_PHYSICS\_COMPUTE\_1}$
                \State \textbf{...}
                \State $\text{CLOUD\_PHYSICS\_COMPUTE\_N}$
            \EndFor
        \EndFor
    \EndFor
\EndFor

\end{algorithmic}
\end{algorithm}

CLOUDSC contains many kernel subroutines, each encapsulated within separate DO loops. Each subroutine consists of three-dimensional loops with different structures, and each innermost loop iterates over a set of columnar data. Most kernels operate across two dimensions: the column and the level. Some kernels or arrays include an additional dimension, NCLV, which is fixed at 5 and equals the number of microphysical variables: cloud cover, cloud liquid water, cloud ice, rain, and snow.

Cloud microphysics parameterization exhibits distinct optimization dimensions. As shown in Algorithm \ref{alg:grid_computing_comparison}, from innermost to outermost, these are the dimensions of $Physics$-kernel computations ($CLOUD\_PHYSICS\_COMPUTE$), horizontal grid points ($KLON$), and vertical levels ($KLEV$). Structure 1 illustrates the original code organization employed in the CLOUDSC microphysics parameterization. A salient feature is that each $CLOUD\_PHYSICS\_COMPUTE$ independently traverses all vertical levels and horizontal grid points.

Structure 2 optimizes the baseline using loop transformation techniques. First, it applies a physics-combined loop fusion algorithm, which merges independently executed physical-process computations into a single main loop, improving temporal locality and mitigating redundant memory accesses. Second, it integrates a cache-aware data tiling algorithm that partitions the horizontal data into $BL$ blocks of size $NB$. This design ensures that each data block efficiently resides in the processor's L2 cache, thereby enhancing spatial locality. Finally, this structure adopts single instruction multiple data (SIMD) vector parallelization, leveraging the processor's SIMD vector units to process multiple data elements per instruction and substantially improving computational throughput. Compared to the original code structure, the loop-transformed and optimized structure offers advantages in cache efficiency, memory access patterns, and computational parallelism.
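The two structures can be contrasted with a minimal Python sketch; the sizes and the two stand-in kernels below are illustrative assumptions, not the actual CLOUDSC physics:

```python
import numpy as np

KLEV, KLON, NB = 4, 16, 8                 # illustrative sizes (assumptions)
BL = KLON // NB                           # number of horizontal blocks
state = np.random.default_rng(0).random((KLEV, KLON))

def compute_1(x):                         # stand-in for CLOUD_PHYSICS_COMPUTE_1
    return x * 1.5

def compute_n(x):                         # stand-in for CLOUD_PHYSICS_COMPUTE_N
    return x + 0.25

# Structure 1: each kernel separately sweeps all levels and grid points
a = state.copy()
for jk in range(KLEV):
    a[jk, :] = compute_1(a[jk, :])
for jk in range(KLEV):
    a[jk, :] = compute_n(a[jk, :])

# Structure 2: tiled over KLON, kernels fused into one sweep; each NB-wide
# slice is a contiguous chunk that maps onto the SIMD lanes
b = state.copy()
for jm in range(BL):
    lo, hi = jm * NB, (jm + 1) * NB
    for jk in range(KLEV):
        b[jk, lo:hi] = compute_n(compute_1(b[jk, lo:hi]))

assert np.allclose(a, b)                  # same results, better locality
```

Because the per-element physics is independent along $KLON$, the fused and tiled sweep produces identical results while touching each block only once.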


\subsection{Physics-combine Algorithm in the Temporal Dimension}

\begin{figure}[t!]
    \centering
    \includegraphics[width=.6\textwidth]{cimg/vector.pdf}
    \caption{Vectorized parallel computation for physical-process parameterization}
    \label{fig:vector}
\end{figure}


The \textbf{Physics dimension} belongs to the temporal dimension. The computation of physical processes requires executing multiple specific cloud microphysics parameterization subroutines over different periods, such as condensation, ice crystal growth, and precipitation generation. The computations at different $Physics$ stages exhibit temporal dependencies: the input values of a subsequent $Physics$ process depend on the outputs computed by the previous process.

In computations along the $Physics$ dimension, the code typically contains nested loops over the horizontal direction (dimension $KLON$) and the vertical direction (dimension $KLEV$), as illustrated in Fig.~\ref{fig:vector}. In the original code, each $Physics$ routine individually traverses all horizontal grid points and vertical levels to perform its computations, which triggers repeated memory accesses and computational redundancy.

This study introduces a physics-combine algorithm in the temporal dimension. It merges the multiple loop structures of $physics$ subroutines into a single main loop, completing all relevant computations within a single loop body. This approach avoids multi-pass memory access and reduces loop overhead, while enhancing temporal locality.

\subsection{Cache-Aware Algorithm in the Spatial Dimension}

The idea of the cache-aware algorithm in the spatial dimension is to decompose data along the $KLON$ dimension. It combines the partitioned sub-data with high-dimensional data tiling for computation. It ensures that the data can reside in the cache during the calculation cycle of the physical-process module, thus augmenting data reuse.

In the physical-process routine, the $KLON$ dimension represents the number of horizontal grid points. Calculations along the KLON dimension are independent of each other. In contrast, data dependencies along the $KLEV$ and $Physics$ dimensions imply that input for a physical-process routine depends on the output from previous routines. Therefore, it is essential to utilize data tiling to enhance the efficiency of data access in cloud microphysics parameterization.

The cache-aware algorithm combines data tiling of the $KLON$ dimension with blocks in the $KLEV$ and $Physics$ dimensions. The $KLON$ data is divided into $BL = KLON / NB$ blocks, with the block size set to $NB$. For each block of a physical-process, the data block $D_{block}$ resides within the L2 cache, satisfying:

\begin{equation}
  \label{formu:cache_aware}
  D_{block} = KLEV \times NB \times N_{var} \times S_{var} \leq C_{L2},
\end{equation}

where:


\begin{itemize}
    \item $KLEV$: the number of vertical levels,
    \item $NB$: the block size in the horizontal direction of the grid node,
    \item $N_{var}$: the total number of physical variables per grid point (e.g., temperature, humidity, cloud water content, particle content),
    \item $S_{var}$: the size per variable in bytes,
    \item $C_{L2}$: the size of L2 cache in bytes.
\end{itemize}


Since cloud microphysics parameterization involves intensive array computations in both the $KLEV$ and $Physics$ dimensions, applying the cache-aware algorithm in the spatial dimension increases cache reuse of dependent data across these two dimensions. It improves L2 cache utilization, thereby optimizing the efficiency of data access.
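As a sketch, the residency condition in Equation~(\ref{formu:cache_aware}) can be checked numerically; the level count, variable count, and element size below are illustrative assumptions, with a 2\,MB L2 taken as an example:

```python
# Residency check for the cache-aware condition: does one horizontal
# block fit in L2? All parameter values are illustrative assumptions.
KLEV  = 137           # vertical levels
N_VAR = 40            # physical variables per grid point
S_VAR = 8             # bytes per variable (double precision)
C_L2  = 2 * 1024**2   # 2 MB L2 cache

def fits_in_l2(nb):
    d_block = KLEV * nb * N_VAR * S_VAR
    return d_block <= C_L2

# largest power-of-two block size satisfying the condition
nb = max(n for n in (1, 2, 4, 8, 16, 32, 64) if fits_in_l2(n))
print(nb)  # -> 32 under these assumptions
```

This kind of check lets the framework select the largest block size that still keeps the working set cache-resident.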

In the cache-aware algorithm, the block size $NB$ of the $KLON$ dimension must be tuned according to the cache-line and vector-register sizes of the processor, where the cache-line size is $C\_Line$ bytes and the vector-register length is $R\_size$ bytes. This guarantees that each computation fully leverages the bandwidth of cache lines and the width of vector operations. Therefore, the selection of $NB$ must satisfy the following condition:

\begin{equation}
  \label{formu:nb_condition}
  NB = k \cdot \frac{\mathrm{lcm}(C\_Line,~ R\_size)}{E\_size}, 
\end{equation}

where $\mathrm{lcm}$ denotes the least common multiple, $k$ is a positive integer, and $E\_size$ represents the size of each data element in bytes.
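As a sketch, Equation~(\ref{formu:nb_condition}) can be evaluated directly with Python's \texttt{math.lcm}; the byte sizes below are assumptions for a 256-bit SVE core with 64-byte cache lines:

```python
import math

# Admissible NB values are multiples of lcm(C_Line, R_size) / E_size.
# The byte sizes are assumptions, not measured hardware parameters.
C_LINE = 64   # cache-line size in bytes
R_SIZE = 32   # vector-register length in bytes (256-bit SVE)
E_SIZE = 8    # double-precision element size in bytes

base = math.lcm(C_LINE, R_SIZE) // E_SIZE   # smallest admissible NB (k = 1)
candidates = [k * base for k in range(1, 5)]
print(base, candidates)  # -> 8 [8, 16, 24, 32]
```

Candidate values of $NB$ can then be filtered against the L2 residency condition of Equation~(\ref{formu:cache_aware}).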

In addition, the data layout along the $KLON$ dimension should account for cache-line alignment. The processor loads $C\_Line$ bytes from main memory at a time, so padding is added to each vertical level to align it with the cache-line size $C\_Line$. This strategy ensures that each memory access exploits full cache lines, concentrates accesses on contiguous cache lines, reduces unaligned or cross-cache-line accesses, and boosts memory access efficiency.
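A minimal sketch of this padding rule, assuming 64-byte cache lines and 8-byte double-precision elements:

```python
# Pad each vertical level so the next level starts on a cache-line boundary.
# C_LINE and E_SIZE are assumed values (64-byte lines, 8-byte doubles).
C_LINE, E_SIZE = 64, 8
PER_LINE = C_LINE // E_SIZE                 # elements per cache line

def padded_klon(klon):
    """Round KLON up to the next multiple of a full cache line."""
    return ((klon + PER_LINE - 1) // PER_LINE) * PER_LINE

print(padded_klon(100))  # -> 104
```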



\section{Performance Evaluation}
\label{sec:Perf-Evaluation}


% \subsection{CPU Architecture}

% \begin{figure}[t!]
%   \centering
%   \includegraphics[width=.6\textwidth]{cimg/arm_cache.pdf}
%   \caption{The cache hierarchy of the Phytium FT2000+ CPU.}
%   \label{arm_cache}
% \end{figure}

% The Phytium FT2000+ is a high-performance processor based on the ARMv8 architecture and designed for scientific computing applications. It also supports the SVE instruction set. It features a hierarchical cache structure and sharing scheme as illustrated in Fig.\ref{arm_cache}. It features a multi-core architecture with eight cores, organized into four core groups, each containing two cores. The cache hierarchy of the Phytium FT2000+ includes a dedicated L1 cache of 64 KB per core, a shared L2 cache of 2 MB per core group, and a unified L3 cache of 64 MB accessible by all core groups. This hierarchical cache structure is designed to optimize data access speed and parallel efficiency, which is crucial for applications that are intensive in array memory access and have large memory strides, such as cloud microphysics parameterization.


\subsection{Setup}

\begin{table}[t!]
\centering
\caption{Experimental Environment}
\label{tab:Environment_FT2000}
\begin{tabular}{cc}
\hline
CPU & Phytium FT2000+ \\
\hline
Arch. & AArch64 \\
Frequency & 2.2\,GHz \\
L1 cache & 32\,KB \\
L2 cache & 2\,MB \\
CLOUDSC version & CLOUDSC-1.5.3 \\
\hline
\end{tabular}
\end{table}

In this section, we conduct a series of performance benchmarks to validate the effectiveness of the Auto-CLOUDSC framework and its optimization strategies. We provide a detailed description of the experimental setup, an analysis of execution time, and specific results regarding performance improvements. All experiments are conducted on the Phytium FT2000+ 256-bit ARM processor. Table~\ref{tab:Environment_FT2000} enumerates the detailed specifications of the experimental environment. We employ the open-source physical parameterization library CLOUDSC-1.5.3 \cite{noauthor_ecmwf-ifsdwarf-p-CLOUDSC_2025} as the baseline and perform the performance evaluation against this version. We aim to assess the performance gains of the Auto-CLOUDSC framework at various optimization levels and compare the results with the original Fortran code. Before conducting the performance tests, we rigorously examine the correctness of the generated code by performing element-wise comparisons between the SVE version and the original version, ensuring that all optimizations preserve the accuracy of the final computational results, with a relative accuracy on the order of $10^{-13}$ to $10^{-14}$.
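The element-wise verification described above can be sketched as follows; the arrays and the acceptance threshold are illustrative (the reported relative accuracy is on the order of $10^{-13}$ to $10^{-14}$):

```python
import numpy as np

def max_relative_error(ref, opt, eps=1e-300):
    """Largest element-wise relative error between two model outputs."""
    ref, opt = np.asarray(ref, float), np.asarray(opt, float)
    return np.max(np.abs(ref - opt) / np.maximum(np.abs(ref), eps))

# Illustrative data: a perturbation at the 1e-14 level passes the
# (assumed) 1e-13 acceptance threshold, a 1e-6 deviation does not.
ref = np.linspace(1.0, 2.0, 5)
ok  = ref * (1 + 1e-14)
bad = ref * (1 + 1e-6)
assert max_relative_error(ref, ok)  < 1e-13
assert max_relative_error(ref, bad) > 1e-13
```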




\subsection{Performance Analysis}

\begin{table}[t!]
\centering
\caption{Speedup of Vector, Loop and Loop\_Vector methods over Baseline for different input sizes (ngptot) and block sizes (npromas).}
\label{tab:speedup_FT2000}
\renewcommand\arraystretch{1.15}
\begin{tabular}{cccccccc}
\toprule
\multirow{2}{*}{\textbf{ngptot}} & \multicolumn{7}{c}{\textbf{npromas}} \\ 
\cmidrule{2-8}
 & \textbf{1} & \textbf{2} & \textbf{4} & \textbf{8} & \textbf{16} & \textbf{32} & \textbf{64} \\
\midrule

\multicolumn{8}{c}{\textbf{Vector Speedup}} \\
\midrule
2048   & 0.888 & 1.104 & 1.429 & 1.585 & 1.575 & 1.558 & 1.542 \\
4096   & 0.884 & 1.097 & 1.412 & 1.530 & 1.568 & 1.535 & 1.440 \\
8192   & 0.881 & 1.116 & 1.452 & 1.584 & 1.573 & 1.511 & 1.513 \\
16384  & 0.891 & 1.113 & 1.445 & 1.517 & 1.588 & 1.505 & 1.538 \\
32768  & 0.895 & 1.108 & 1.456 & 1.569 & 1.541 & 1.502 & 1.503 \\
65536  & 0.903 & 1.104 & 1.427 & 1.521 & 1.606 & 1.539 & 1.519 \\
\midrule

\multicolumn{8}{c}{\textbf{Loop Speedup}} \\
\midrule
2048   & 1.306 & 1.214 & 1.186 & 1.140 & 1.312 & 1.340 & 1.451 \\
4096   & 1.294 & 1.207 & 1.161 & 1.114 & 1.176 & 1.360 & 1.440 \\
8192   & 1.314 & 1.212 & 1.208 & 1.209 & 1.323 & 1.272 & 1.395 \\
16384  & 1.316 & 1.213 & 1.244 & 1.278 & 1.233 & 1.354 & 1.324 \\
32768  & 1.315 & 1.216 & 1.365 & 1.407 & 1.481 & 1.503 & 1.586 \\
65536  & 1.322 & 1.224 & 1.376 & 1.437 & 1.492 & 1.501 & 1.591 \\
\midrule


\multicolumn{8}{c}{\textbf{Loop\_Vector Speedup}} \\
\midrule
2048   & 1.099 & 1.328 & 1.842 & 1.857 & 2.032 & 2.030 & 2.114 \\
4096   & 1.094 & 1.320 & 1.803 & 1.954 & 2.082 & 2.000 & 2.057 \\
8192   & 1.134 & 1.325 & 1.785 & 1.875 & 1.984 & 1.934 & 2.058 \\
16384  & 1.136 & 1.333 & 1.813 & 1.930 & 1.939 & 1.949 & 2.031 \\
32768  & 1.147 & 1.330 & 1.794 & 1.886 & 2.012 & 2.014 & 2.131 \\
65536  & 1.151 & 1.337 & 1.805 & 1.901 & 2.058 & 2.048 & 2.131 \\
\bottomrule

\end{tabular}

\end{table}


\begin{figure}[!t]
    \centering
    \includegraphics[width=\textwidth]{cimg/Time_vs_npromas_Size_multi_bigfont_FT.png}
    \caption{Execution time of the baseline, the Vector, the Loop and the Loop\_Vector version}
    \label{fig:Time_vs_npromas_Size_multi_bigfont}
  \end{figure}


Table~\ref{tab:speedup_FT2000} presents the speedup achieved by the Vector, Loop, and Loop\_Vector optimization methods over the Baseline for different input sizes ($ngptot$) and block sizes ($npromas$). All three strategies deliver clear performance improvements (speedup $>$ 1 in most cases), with the best results achieved as the block size increases. 
At $npromas = 1$, full vectorization (Vector or Loop\_Vector) may slightly underperform Loop and Baseline, likely due to vector overheads on scalar workloads. However, as $npromas$ increases, all optimized versions exhibit significant drops in runtime, with the greatest benefit observed in Loop\_Vector.

All optimizations are applied and evaluated for block sizes $npromas > 1$. The Vector method, which enables vectorization without loop transformations, reaches speedups of approximately 1.1 to 1.6 times over the Baseline and often exceeds Loop at moderate block sizes. The Loop method provides consistent acceleration of approximately 1.2 to 1.6 times by improving loop structure and data locality. The best results are achieved with the Loop\_Vector method, which combines loop optimizations with SVE-based vectorization to reach approximately 1.3 to 2.1 times speedup.

The data show a trend: increasing $npromas$ (block size) leads to greater speedup for all methods, most markedly for Vector and Loop\_Vector. Beyond $npromas$ = 32, the gains saturate, likely due to hardware constraints such as memory bandwidth and the width of vector registers. Additionally, increasing the input size ($ngptot$) amplifies the benefits of all optimization strategies, as larger input sizes allow the computational and memory overheads to be better amortized. 

Fig.~\ref{fig:Time_vs_npromas_Size_multi_bigfont} shows the execution times of the Baseline, Vector, Loop, and Loop\_Vector versions across all input and block sizes. For configurations with $npromas > 1$, Baseline is consistently the slowest, and both Loop and Vector substantially reduce runtime relative to it. The Loop method reliably shortens runtime by improving cache utilization and basic loop structure. The Vector method is more sensitive to $npromas$, as it directly leverages hardware vector units, and its advantage over Loop grows as $npromas$ increases. Loop\_Vector achieves the lowest execution time in most cases, especially when $npromas \geq 8$, indicating that combining loop-level transformation and vectorization best exploits the available hardware.

% It demonstrates that both larger data blocks and larger problem sizes are essential to unlocking the full potential of cache and vector optimizations.

Overall, these results highlight the importance and mutual reinforcement of multi-level optimizations. Combining loop fusion, data tiling, and vectorization yields scalable performance gains on ARM SVE hardware. Furthermore, the effectiveness of the Vector method in isolation demonstrates that, for suitable block sizes, benefits can be realized through code generation that exploits vector hardware even before complete loop transformations are applied.

In summary, Auto-CLOUDSC delivers speedups through its layered optimizations. The observed trends in Table~\ref{tab:speedup_FT2000} and Fig.~\ref{fig:Time_vs_npromas_Size_multi_bigfont} confirm that both block size and total problem size are critical to maximizing performance, and that vectorization, especially when combined with loop optimizations, provides substantial improvements for scientific workloads on ARM hardware.


  


\section{Related Work}
\label{sec:Related-Work}

To enhance the adaptability of large-scale Fortran applications, such as numerical weather prediction models, on various computing platforms, source-to-source tools like ECMWF’s Loki \cite{noauthor_ecmwf-ifsloki_2025} and the CLAW compiler \cite{clement2019automatic} have been widely adopted for the automatic insertion of parallelization directives. These tools analyze Fortran code structures and automatically annotate parallel pragmas such as OpenACC \cite{1055553175812} or OpenMP \cite{dagum_openmp_1998}. They reduce complexity for developers by eliminating the need for manual code refactoring and direct manipulation of low-level instructions, which helps preserve the overall readability and consistency of the codebase. However, deep rewriting for specific hardware optimizations and complex loop structures remains unavoidable in practice, and often introduces code redundancy and increases maintenance difficulty \cite{dahm2023pace}. Additionally, limited support and optimization for parallel directives across different hardware and compilation environments still constitute a significant bottleneck for performance portability.

In the area of automatic code generation for high-performance scientific computing, frameworks such as NumPy, GridTools, and DaCe represent critical technical approaches \cite{harris2020array,afanasyev2021gridtools,ben2019stateful}. NumPy \cite{harris2020array}, as the de facto standard for multidimensional array computation in Python, facilitates rapid prototyping and debugging with its rich interfaces and interactive environment, establishing itself as a cornerstone in scientific computing. Nevertheless, its pure Python backend cannot fully exploit the parallelism of modern high-performance devices, making it suitable only for lightweight scientific experiments on CPUs. GridTools \cite{afanasyev2021gridtools} provides a comprehensive suite of C++ templates and tools, delivering portable, high-performance stencil computations for atmospheric and climate models. It automates optimization and parallel scheduling but involves a steep learning curve for domain scientists due to its complex template system. DaCe \cite{ben2019stateful} introduces a data-centric parallel programming paradigm based on Stateful Data Flow Graphs (SDFG). This approach decouples core computational workflows from hardware implementations, enabling automated analysis, transformation, and optimization of data access patterns to achieve cross-platform parallel efficiency. Martin proposed an end-to-end DaCe-based scheme to automatically migrate Fortran CLOUDSC routines to CUDA \cite{martin2023dace}. This approach significantly improved model efficiency through automated memory layout and data flow optimizations, providing a viable pathway for next-generation high-performance scientific applications.

The GT4Py framework advances automation from high-level scientific descriptions to efficient hardware implementations. GT4Py employs a domain-specific language (DSL) embedded in Python to express multidimensional numerical operators and integrates multiple backends, including NumPy, GridTools, and DaCe \cite{ubbiali2024exploring}. It enables automatic generation and compilation of high-performance code tailored to different platforms. As a result, scientists can focus on physical modelling and leverage scientific expressiveness, without concern for low-level details. It dramatically accelerates development and enhances maintainability, while the generated code achieves near-hand-tuned performance on NVIDIA and AMD GPUs, as well as leading CPU platforms.

However, GT4Py and its dependent frameworks—NumPy, GridTools, and DaCe—exhibit several shared limitations. NumPy’s backend, despite its utility for prototyping, is constrained in large-scale, high-performance production scenarios by Python’s Global Interpreter Lock (GIL) and limited CPU parallelism\cite{beazley_david_nodate}. GridTools, built upon C++ metaprogramming, offers high customizability but poses significant barriers to ease of use and quick adoption. Although DaCe automates workflow optimization, it still requires expert intervention to adapt abstractions to diverse physical applications, and lacks robust support for compatibility and visual analysis tools. GT4Py itself, while offering a backend-agnostic DSL, still faces challenges in adapting backends to new hardware and exploiting architecture-specific optimizations, particularly on emerging ARM platforms such as Phytium, Kunpeng and Fugaku\cite{fang2021performance,xia2021kunpeng,sato2020co}. On the one hand, the ARM software ecosystem, including compilers and libraries, is less mature compared to x86 and NVIDIA CUDA, which limits performance improvements from code auto-generation. On the other hand, core backends such as GridTools lack complete support for ARM-specific features, including storage and SIMD vectorization, which impairs operator efficiency and portability for frameworks like GT4Py on ARM systems.

In summary, although automatic code generation and domain-specific languages have significantly streamlined scientific model development and cross-platform portability, current toolchains still exhibit pronounced shortcomings in fine-grained optimization and hardware adaptation for heterogeneous platforms such as ARM. Future work should focus on compiler optimization, hardware abstraction, and ecosystem compatibility to close these gaps and enable further innovation.

\section{Conclusion}
\label{sec:Conclusion}

Auto-CLOUDSC efficiently translates high-level Fortran scientific code into optimized ARM SVE code through a modular workflow. By merging physical-process loops and applying advanced loop and cache optimizations, it significantly enhances data locality and enables effective SVE vectorization. The framework's function interface generator, code structure analyzer, and expression parser modules provide modularity, extensibility, and the potential for multi-language and multi-architecture support. Tests show that Auto-CLOUDSC's automatically vectorized code consistently outperforms the traditional Fortran implementation. Ultimately, Auto-CLOUDSC serves as a robust and portable solution for HPC performance, simplifying optimization for scientists and supporting efficient execution on modern hardware.


\section*{Acknowledgments}

The authors would like to thank all the reviewers for their valuable comments. This work is supported by the Postdoctoral Fellowship Program of CPSF under Grant No. GZC20242265, the Hunan Provincial Natural Science Foundation General Program under Grant No. 2023JJ30630, and the National Natural Science Foundation of China under Grant No. 42305170.


% ---- Bibliography ----
%
% BibTeX users should specify bibliography style 'splncs04'.
% References will then be sorted and formatted in the correct style.
%
\bibliographystyle{splncs04}
\bibliography{Cmm}
\end{document}


