% ---------------------------------------------------------------------------
% Author guideline and sample document for EG publication using LaTeX2e input
% D.Fellner, v1.21, Jan 08, 2024

\documentclass{egpubl}
\usepackage{pg2025}

 
% --- for  Annual CONFERENCE
% \ConferenceSubmission   % uncomment for Conference submission
% \ConferencePaper        % uncomment for (final) Conference Paper
% \STAR                   % uncomment for STAR contribution
% \Tutorial               % uncomment for Tutorial contribution
% \ShortPresentation      % uncomment for (final) Short Conference Presentation
% \Areas                  % uncomment for Areas contribution
% \Education              % uncomment for Education contribution
% \Poster                 % uncomment for Poster contribution
% \DC                     % uncomment for Doctoral Consortium
%
% --- for  CGF Journal
% \JournalSubmission    % uncomment for submission to Computer Graphics Forum
% \JournalPaper         % uncomment for final version of Journal Paper
%
% --- for  CGF Journal: special issue
% \SpecialIssueSubmission    % uncomment for submission to , special issue
\SpecialIssuePaper         % uncomment for final version of Computer Graphics Forum, special issue
%                          % EuroVis, SGP, Rendering, PG
% --- for  EG Workshop Proceedings
% \WsSubmission      % uncomment for submission to EG Workshop
% \WsPaper           % uncomment for final version of EG Workshop contribution
% \WsSubmissionJoint % for joint events, for example ICAT-EGVE
% \WsPaperJoint      % for joint events, for example ICAT-EGVE
% \Expressive        % for SBIM, CAe, NPAR
% \DigitalHeritagePaper
% \PaperL2P          % for events EG only asks for License to Publish

% --- for EuroVis 
% for full papers use \SpecialIssuePaper
% \STAREurovis   % for EuroVis additional material 
% \EuroVisPoster % for EuroVis additional material 
% \EuroVisShort  % for EuroVis additional material
% \MedicalPrize  % uncomment for Medical Prize (Dirk Bartz) contribution, since 2021 part of EuroVis
% \EuroVisEducation              % uncomment for Education contribution

% Licences: for CGF Journal (EG conf. full papers and STARs, EuroVis conf. full papers and STARs, SR, SGP, PG)
% please choose the correct license
\CGFStandardLicense
%\CGFccby
%\CGFccbync
%\CGFccbyncnd

% !! *please* don't change anything above
% !! unless you REALLY know what you are doing
% ------------------------------------------------------------------------
\usepackage[T1]{fontenc}
\usepackage{dfadobe}  

\usepackage{cite}  % comment out for biblatex with backend=biber
% ---------------------------
%\biberVersion
\BibtexOrBiblatex
%\usepackage[backend=biber,bibstyle=EG,citestyle=alphabetic,backref=true]{biblatex} 
%\addbibresource{egbibsample.bib}
% ---------------------------  
\electronicVersion
\PrintedOrElectronic
% for including postscript figures
% mind: package option 'draft' will replace PS figure by a filename within a frame
\ifpdf \usepackage[pdftex]{graphicx} \pdfcompresslevel=9
\else \usepackage[dvips]{graphicx} \fi

\usepackage{egweblnk} 
% end of prologue

\usepackage{amsmath,amssymb}  % math symbol support
\usepackage{bm}
\usepackage{caption}         % improved caption formatting
\usepackage{subcaption}      % subfigure layout
\usepackage{booktabs}        % table rules
\usepackage{adjustbox}
\usepackage{ReviewCommand}
\usepackage{diagbox}   % diagonal table cells
\usepackage{array}     % improved table formatting

\newcommand{\methodname}[1]{\textbf{\textsf{#1}}} % method-name style
\newcommand{\failurealert}[1]{\textcolor{red}{#1}} % failure-alert style
\newcommand{\subblock}{$\mathcal{G}_{block}~$}
\newcommand{\ssubblock}{$\mathcal{G}^i_{block}~$}
\newcommand{\subimage}{$\mathbf{I}_{sub}~$}
\newcommand{\ssubimage}{$\mathcal{I}^i_{sub}~$}
\newcommand{\subimageset}{$\{\mathcal{I}^i_{sub}\}~$}
\newcommand{\mask}{$\mathcal{M}^i(\mathcal{G}_{block})~$}
\newcommand{\coarsemask}{$\hat{\mathcal{M}}^i(\mathcal{G}_{block})~$}
\newcommand{\subtask}{$(\mathcal{G}_{block}, \mathbf{I}_{sub})~$}
\newcommand{\lowGS}{$\mathcal{G}^{coarse}~$}
\newcommand{\image}{$\mathcal{I}^i~$}
% end of prologue

% ---------------------------------------------------------------------
% EG author guidelines plus sample file for EG publication using LaTeX2e input
% D.Fellner, v2.04, Dec 14, 2023


\title[EG \LaTeX\ Author Guidelines]%
      {Gaussian Splatting for Large-Scale Aerial Scene Reconstruction From Ultra-High-Resolution Images}

% for anonymous conference submission please enter your SUBMISSION ID
% instead of the author's name (and leave the affiliation blank) !!
% for final version: please provide your *own* ORCID in the brackets following \orcid; see https://orcid.org/ for more details.
\author[Sun Qiulin \& Lai Wei \& Li Yixian \& Zhang Yanci]
{\parbox{\textwidth}{\centering Sun, Qiulin$^{1}$\orcid{0009-0000-5617-1667}, Lai Wei$^{1}$\orcid{0000-0001-7756-0901}, Li Yixian$^{1}$\orcid{0000-0001-7756-0901}
        and Zhang Yanci$^{1}$\orcid{0000-0001-5923-423X} 
%        S. Spencer$^2$\thanks{Chairman Siggraph Publications Board}
        }
        \\
% For Computer Graphics Forum: Please use the abbreviation of your first name.
{\parbox{\textwidth}{\centering $^1$Sichuan University\\
%        $^2$ Another Department to illustrate the use in papers from authors
%             with different affiliations
       }
}
}
% ------------------------------------------------------------------------

% if the Editors-in-Chief have given you the data, you may uncomment
% the following five lines and insert it here
%
% \volume{36}   % the volume in which the issue will be published;
% \issue{1}     % the issue number of the publication
% \pStartPage{1}      % set starting page


%-------------------------------------------------------------------------
\begin{document}

% uncomment for using teaser
% \teaser{
%  \includegraphics[width=0.9\linewidth]{eg_new}
%  \centering
%   \caption{New EG Logo}
% \label{fig:teaser}
%}

\maketitle
%-------------------------------------------------------------------------
\begin{abstract}
  Using 3D Gaussian splatting to reconstruct large-scale aerial scenes from ultra-high-resolution images remains a challenging problem because of two memory bottlenecks: the excessive number of Gaussian primitives and the size of the tensors required for ultra-high-resolution images.
  In this paper, we propose a task partitioning algorithm that operates in both object and image space to generate a set of small-scale subtasks. Each subtask's memory footprint is strictly limited, enabling training on a single high-end consumer-grade GPU. 
  More specifically, Gaussian primitives are clustered into blocks in object space, and the input images are partitioned into sub-images according to the projected footprints of these blocks. This dual-space partitioning significantly reduces training memory requirements.
  During subtask training, we propose a depth comparison method to generate a mask map for each sub-image. This mask map isolates pixels primarily contributed by the Gaussian primitives of the current subtask, excluding all other pixels from training.
  Experimental results demonstrate that our method successfully achieves large-scale aerial scene reconstruction using 9K resolution images on a single RTX 4090 GPU. The novel views synthesized by our method retain significantly more details than those from current state-of-the-art methods.
%-------------------------------------------------------------------------
%  ACM CCS 1998
%  (see https://www.acm.org/publications/computing-classification-system/1998)
% \begin{classification} % according to https://www.acm.org/publications/computing-classification-system/1998
% \CCScat{Computer Graphics}{I.3.3}{Picture/Image Generation}{Line and curve generation}
% \end{classification}
%-------------------------------------------------------------------------
%  ACM CCS 2012
%The tool at \url{http://dl.acm.org/ccs.cfm} can be used to generate
% CCS codes.
%Example:
\begin{CCSXML}
<ccs2012>
   <concept>
       <concept_id>10010147.10010371</concept_id>
       <concept_desc>Computing methodologies~Computer graphics</concept_desc>
       <concept_significance>500</concept_significance>
       </concept>
   <concept>
       <concept_id>10010147.10010257</concept_id>
       <concept_desc>Computing methodologies~Machine learning</concept_desc>
       <concept_significance>500</concept_significance>
       </concept>
 </ccs2012>
\end{CCSXML}

\ccsdesc[500]{Computing methodologies~Computer graphics}
\ccsdesc[500]{Computing methodologies~Machine learning}


\printccsdesc   
\end{abstract}  
%-------------------------------------------------------------------------
\section{Introduction}

3D Gaussian splatting (3DGS)\cite{kerbl20233d} has demonstrated remarkable performance in novel view synthesis. Recent methods like VastGS\cite{lin2024vastgaussian} and CityGS\cite{liu2024citygaussianv2} successfully extend 3DGS to large-scale scene reconstruction via scene partitioning strategies.

However, aerial drone-captured datasets pose new challenges for large-scale 3DGS. While drone imagery provides ultra-high-resolution (8K+) photos with wide field-of-view coverage, training on such ultra-high-resolution data introduces two GPU memory bottlenecks that current methods struggle to resolve.

The primary issue stems from the substantial growth of Gaussian primitives required for high-fidelity reconstruction. Ultra-high-resolution images capture intricate scene details, necessitating denser and smaller Gaussians to represent fine structures, thereby increasing GPU memory consumption.
VastGS\cite{lin2024vastgaussian} and CityGS-v2\cite{liu2024citygaussianv2} reduce the memory footprint during training through spatial block partitioning in object space. However, each block still introduces information from outside its boundary during training, so the achieved memory reduction is suboptimal. 
Grendel-GS\cite{zhao2024scaling} addresses this challenge by terminating the densification process early when approaching GPU memory capacity limits, at the cost of compromised reconstruction quality.

A secondary challenge arises from the GPU memory overhead of large-scale computational tensors during training. For 8K+ resolution inputs, intermediate tensors (e.g., per-pixel gradients, visibility buffers) scale quadratically with image resolution, rapidly exhausting available GPU memory. 
Grendel-GS \cite{zhao2024scaling} mitigates this issue through distributed tensor storage across multiple GPUs; however, this approach imposes specific requirements on the GPU configuration.
Methods like CityGS-v2 \cite{liu2024citygaussianv2} and VastGS \cite{lin2024vastgaussian} simply employ full-resolution tensors for training, resulting in extremely high GPU memory usage when processing ultra-high-resolution images.

In order to build 3DGS representation for a large-scale scene captured by a set of ultra-high-resolution images, we require a method capable of partitioning scenes in object space to reduce memory consumption from excessive Gaussian primitives, while concurrently partitioning tensors in image space to alleviate memory overhead.

In this paper, we propose a method for decomposing the complex task of reconstructing large-scale scenes from ultra-high-resolution images using 3D Gaussian splatting into a set of manageable subtasks. The key feature of our method is that our task partition is conducted in both object and image space, producing a set of dual-space pairs. Each pair \subtask represents an independent training task. Specifically:

\begin{itemize}
  \item \subblock denotes a set of Gaussian primitives contained within a specific 3D cell $C$ in object space;
  \item \subimage represents a set of sub-images \subimageset, where each \ssubimage is the pixel region covered by the projection of \subblock onto the input image \image.  
\end{itemize}
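To make the dual-space pair concrete, the following minimal sketch models its data layout. The class and field names (\texttt{Subtask}, \texttt{SubImage}, \texttt{bbox}) are our illustrative assumptions, not the authors' implementation.

```python
# Illustrative data layout for a dual-space subtask pair (G_block, I_sub).
# All names are hypothetical; the paper defines only the mathematical objects.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class SubImage:
    image_id: int        # index i of the source image I^i
    bbox: tuple          # AABB (x0, y0, x1, y1) of the mask M^i in I^i
    pixels: np.ndarray   # cropped pixel data of shape (H, W, 3)

@dataclass
class Subtask:
    block_gaussians: np.ndarray                      # (N, 3) Gaussian centers in G_block
    sub_images: List[SubImage] = field(default_factory=list)
```

Each subtask then carries exactly the Gaussians of one block plus the cropped image regions those Gaussians project to, and nothing else.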

Under this mechanism, each subtask processes only the Gaussian primitives within a \subblock while allocating tensors for its corresponding \subimageset. This strict confinement of memory allocation restricts each subtask's overhead to the capacity of a single RTX 4090 GPU.

Constructing an effective mask map \mask for each image \image is critical during task partitioning and subtask training. In this paper, we propose a depth comparison-based method to generate \mask that simultaneously achieves compact \ssubimage and precise identification of pixels predominantly influenced by Gaussian primitives within \subblock. During subtask training, \mask actively defines the loss function to exclude unrelated pixels from subtask training.

The remainder of this paper is organized as follows: 
Section \ref{sec:related works} reviews related 3DGS work. Section \ref{sec: Overview} analyzes the necessity of partitioning in both object and image space and outlines our framework. In Section \ref{sec: task_partition}, we detail the subtask construction, with particular focus on the generation of \mask. Section \ref{sec: training process} specifies the subtask training procedure, including the loss function formulation and the management of block boundaries. Finally, Section \ref{subsec: experiment} provides comprehensive comparisons against baselines in memory efficiency and reconstruction quality.


\section{Related works}
\label{sec:related works}

\subsection{Novel View Synthesis}

Novel view synthesis generates unseen scene perspectives by inferring 3D geometry and appearance from sparse images. 
Traditional approaches employed Structure from Motion (SfM) to estimate camera poses and sparse 3D points, followed by Multi-View Stereo (MVS) for dense geometry reconstruction. 
Neural Radiance Fields (NeRF)\cite{mildenhall2021nerf} revolutionized this field by mapping spatial locations directly to color and density via Multi-layer Perceptrons (MLPs). Subsequently, methods represented by MipNeRF and InstantNGP respectively advanced NeRF in terms of rendering quality and training efficiency. MipNeRF\cite{barron2021mip} addressed aliasing by modeling pixel footprints as conical frustums, while InstantNGP\cite{muller2022instant} accelerated training through explicit hash grid encoding and a shallow MLP decoder.

Complementing these advances, 3D Gaussian Splatting (3DGS)\cite{kerbl20233d} introduced a real-time rendering paradigm using anisotropic Gaussian primitives, achieving unprecedented quality-speed tradeoffs. Numerous variants enhanced sparse-view robustness, rendering fidelity, storage efficiency, and dynamic scene modeling. 
For sparse input scenarios, FSGS\cite{zhu2024fsgs} mitigated geometric degradation through Gaussian unpooling initialization, while MVSplat\cite{chen2024mvsplat} leveraged cost volumes to inject multi-view geometric priors. Concurrently, DNGaussian\cite{li2024dngaussian} enhanced depth consistency through regularization strategies. 
A pivotal advancement emerged through 2D Gaussian Splatting (2DGS)\cite{huang20242d}, which parameterized Gaussian primitives as perspective-exact 2D planar disks in local tangent spaces, resolving projection ambiguities via ray-splat intersection calculations. Rendering quality improvements were simultaneously achieved by Analytical-Splatting\cite{zhang2025Analyticalglossy}, preserving high-frequency details through pixel-area integration, and GaussianShader\cite{jiang2024gaussianshader}, integrating shading functions for specular reflection modeling. 
Storage optimizations included CompGS\cite{liu2024compgs} with vector quantization, LightGaussian\cite{fan2024lightgaussian} pruning redundant Gaussians via contribution analysis, and Mini-Splatting\cite{fang2024mini} reorganizing spatial distributions through densification-simplification cycles. 
Furthermore, to extend 3DGS to dynamic scenes, Luiten et al.\cite{luiten2024dynamic} adopted dynamic 3D Gaussians incorporating local rigidity constraints, achieving physically plausible dynamic reconstruction and self-supervised dense tracking. 4D Gaussian Splatting (4D-GS)\cite{wu20244d} integrated an explicit representation combining 3D Gaussians with 4D neural voxels for dynamic scenes, employing a decomposed neural voxel encoding scheme and a lightweight MLP for deformation prediction. DrivingGaussian\cite{zhou2024drivinggaussian} modeled the scene background via incremental static 3D Gaussians, utilized a composite dynamic Gaussian graph to independently reconstruct moving objects and restore their occlusion relationships.

\subsection{Large Scale Scene Reconstruction}

Large-scale scene reconstruction has evolved through decades of methodological advancements. Some early studies were based on SfM and MVS.
Zhu et al.\cite{zhu2018large} converted multi-view stereo meshes into multi-LOD simplified models with semantics using semantic segmentation and structured modeling. 
Liu et al.\cite{liu2023efficient} proposed a robust hybrid SfM approach featuring multifactor scene partitioning and pre-assigned balanced subcluster expansion to enhance intra-cluster compactness and inter-cluster connectivity.
Subsequently, NeRF emerged as a paradigm for large-scale scene reconstruction. 
NeRF++\cite{zhang2020nerf++} introduced an inverted sphere parametrization to extend NeRF to unbounded 360° scenes. 
Block-NeRF\cite{tancik2022block} and Mega-NeRF\cite{turki2022mega} adopted a divide-and-conquer strategy, decomposing large-scale scenes into parallel-trained blocks to decouple rendering time from scene scale. 
Building on this foundation, Drone-NeRF\cite{jia2024drone} achieved efficient neural rendering for drone oblique photography in expansive scenes through pose-sampling co-optimization and a hash-fused MLP. 
To enhance rendering quality, NeRFusion\cite{zhang2022nerfusion} combined NeRF with TSDF fusion, predicting per-frame local radiance fields via direct network inference, while Urban Radiance Fields\cite{rematas2022urban} compensated for viewpoint sparsity in complex large-scale scenes by fusing RGB signals with LiDAR data. 
For multi-scale rendering in large-scale scenes, BungeeNeRF\cite{xiangli2022bungeenerf} employed a progressive strategy by incrementally appending neural modules and activating high-frequency encoding channels. City-on-Web\cite{song2024city} proposed a tile-based multi-shader volumetric rendering approach, implementing LOD techniques and dynamic loading/unloading strategies for adaptive resource management.

\begin{figure*}[htp]
\centering
\includegraphics[width=\textwidth]{Images/flowchart.png}
\caption{The pipeline of our dual-space task partition. }
\label{fig:flowchart}
\end{figure*}

Since the introduction of 3DGS, substantial research efforts have focused on enhancing its capacity for representing large-scale scenes. VastGaussian\cite{lin2024vastgaussian} pioneered progressive data partitioning with visibility-based camera selection, calculating projection areas of cell bounding boxes to ensure full spatial supervision through camera contribution metrics. Grendel-GS\cite{zhao2024scaling} independently developed distributed 3DGS training via sparse all-to-all communication and dynamic load balancing, enabling selective transmission of Gaussian primitives to pixel partitions based on localized rendering influence.

Partitioning strategies further diversified with CityGS\cite{liu2024citygaussian}, which leveraged structural similarity index (SSIM) loss for camera data allocation after establishing coarse global Gaussian priors. Addressing visual quality challenges in textureless regions, CityGaussianV2\cite{liu2024citygaussianv2} employed 2D Gaussian primitives with decomposed gradient-based density-depth regression to eliminate blur artifacts and mitigate Gaussian count explosions, followed by PG-SAG\cite{wang2025pg} integrating semantic cues for fine-grained urban reconstruction without resolution compromise.

Rendering innovations emerged through hierarchical management: OctreeGS\cite{ren2024octree} implemented octree-structured anchors for LoD control using viewpoint-to-anchor distance metrics, while hierarchical 3DGS nodes\cite{kerbl2024hierarchical} extended this approach through projected size thresholds. Concurrently, multi-modal enhancements flourished: Hgs-mapping\cite{wu2024hgs} fused LiDAR data via hybrid Gaussian initialization for depth-constrained reconstructions, Mm-gaussian\cite{wu2024mm} resolved unbounded scene inaccuracies through LiDAR-guided geometry, and LetsGo\cite{cui2024letsgo} specialized in garage environments using proprietary LiDAR-augmented 3DGS pipelines.

 %-------------------------------------------------------------------------
\section{Algorithm Overview}
\label{sec: Overview}

The core objective of our algorithm is to build a 3DGS representation of a large-scale aerial scene from ultra-high-resolution images. The primary challenge stems from the enormous memory overhead associated with storing both the vast number of Gaussian primitives and the tensors needed for ultra-high-resolution images.

In this paper, we propose an algorithm to partition the original training task across both object and image space, generating a set of smaller subtasks. Each subtask is represented as a \subtask pair, specifying the task partition in object space (\subblock) and image space (\subimage). Note that $\mathbf{I}_{sub}=\{\mathcal{I}^i_{sub}\}$, where each \ssubimage is a sub-image extracted from the input ultra-high-resolution image \image.

To help illustrate the concept of subtask pair \subtask, we present the following two analogies:

\begin{itemize}
  \item For 3DGS algorithms designed for small-scale scenes, task partition is redundant, as GPU memory is sufficiently large to accommodate all Gaussian primitives as well as the tensors for one full-resolution input image.  In this case, the training process comprises a single task, represented as $(\mathcal{G}_{sfm}, \mathbf{I})$, where $\mathcal{G}_{sfm}$ denotes a set of initial Gaussian primitives generated via SfM, and $\mathbf{I}$ represents the set of input images.
  \item For 3DGS algorithms designed for large-scale scenes, task partitioning is usually performed exclusively in object space. For example, CityGS-v2\cite{liu2024citygaussianv2} divides the scene into spatially distinct blocks using equidistant intervals in object space and employs full-resolution images during training. In this scenario, the original full-scale task is decomposed into a set of subtasks $\{(\mathcal{G}^i_{block}, \mathbf{I})\}$. 
\end{itemize}

The large-scale scene and ultra-high-resolution input images impose significant memory constraints on our method. In order to limit the memory requirement, every subtask pair \subtask must satisfy the following inequality:

\begin{equation}
  \mathbb{C}(\mathcal{G}_{block}) + \max_{\mathcal{I}_{sub}^i \in \mathbf{I}_{sub}}\mathbb{C}(\mathcal{I}^i_{sub}) \leq \mathbb{G}
\end{equation}
where $\mathbb{C}(\mathcal{G}_{block})$ and $\mathbb{C}(\mathcal{I}^i_{sub})$ are the memory consumptions for storing the Gaussian primitives in \subblock and the tensors for \ssubimage, respectively, and $\mathbb{G}$ is the GPU memory size.
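This constraint can be checked programmatically before launching a subtask. The sketch below uses hypothetical per-Gaussian and per-pixel byte costs (the constants are illustrative assumptions, not measurements from the paper):

```python
import numpy as np

# Hypothetical cost constants (assumptions for illustration only):
# a Gaussian stores position, scale, rotation, opacity and SH coefficients;
# per-pixel training tensors hold colors, gradients and auxiliary buffers.
BYTES_PER_GAUSSIAN = 59 * 4   # assumed ~59 float32 parameters per primitive
BYTES_PER_PIXEL = 48          # assumed intermediate-tensor bytes per pixel

def subtask_fits(num_gaussians, sub_image_shapes, gpu_bytes):
    """Check the paper's constraint C(G_block) + max_i C(I^i_sub) <= G.

    sub_image_shapes: list of (height, width) for every sub-image in I_sub.
    """
    block_cost = num_gaussians * BYTES_PER_GAUSSIAN
    # Only the largest sub-image's tensors are resident at once.
    tensor_cost = max(h * w * BYTES_PER_PIXEL for h, w in sub_image_shapes)
    return block_cost + tensor_cost <= gpu_bytes
```

With such a predicate, a partitioner could keep subdividing blocks until every \subtask pair satisfies the inequality for the target GPU.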

To achieve this objective, the basic ideas for constructing \subtask are as follows (illustrated in Fig.\ref{fig:flowchart}): 

\begin{itemize}
  \item  Construction of \subblock: Each \subblock is obtained by uniformly partitioning the 3D space occupied by \lowGS, a coarse 3DGS representation reconstructed from low-resolution images (\textcircled{1} in Fig.\ref{fig:flowchart}). All Gaussian primitives in \lowGS are then distributed to their corresponding \subblock partitions according to their 3D positions (\textcircled{2} in Fig.\ref{fig:flowchart}).
  \item  Construction of \mask: \mask is the key to build \ssubimage from its corresponding \subblock.  It indicates the approximate coverage of the \subblock in image space. To construct \mask, we first generate a coarse mask \coarsemask by projecting Gaussian primitives in \subblock onto the input image \image (\textcircled{3} in Fig.\ref{fig:flowchart}). Subsequently, a depth comparison-based algorithm is employed to remove erroneous pixels in \coarsemask, yielding \mask(\textcircled{4} in Fig.\ref{fig:flowchart}).
  \item  Construction of \subimage: Based on \mask, \ssubimage can be easily constructed by computing the axis-aligned bounding box (AABB) of \mask(\textcircled{5} in Fig.\ref{fig:flowchart}).
\end{itemize}

Beyond task partition, effective subtask training requires careful mitigation of reconstruction artifacts. During the training for a specified subtask \subtask, pixels in \ssubimage not predominantly influenced by \subblock's Gaussian primitives must be excluded. We achieve this exclusion by reformulating the loss function using mask \mask.

\section{Dual-Space Task Partition}
\label{sec: task_partition}
This section describes how the original large training task is decomposed into smaller subtasks. While our method decomposes the training task in both object and image space, the decomposition process does not occur simultaneously in both spaces. Specifically, the task is first decomposed in object space, yielding a set of \subblock. Subsequently, \subimage in image space is constructed from its corresponding \subblock.

\subsection{Construction of \subblock}

In order to create \subblock, our method first conducts coarse-grained training on low-resolution images. This process constructs a coarse model \lowGS that captures the fundamental structure of the scene, serving as the basis for block partitioning. 

Following this coarse training, we align the scene with a Manhattan coordinate system according to the VastGS\cite{lin2024vastgaussian} methodology, then perform equidistant grid partitioning on the base plane, resulting in a set of uniform 3D cells $\{C_i\}$. The Gaussian primitives in \lowGS are then distributed to $\{C_i\}$ based on their 3D positions, where the Gaussians within each cell $C_i$ form a \ssubblock.
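The distribution step above amounts to binning Gaussian centers into grid cells on the aligned base plane. A minimal sketch (assuming positions are already expressed in the Manhattan-aligned frame; the function name is ours):

```python
import numpy as np

def partition_gaussians(positions, cell_size):
    """Assign each Gaussian, by its 3D center, to a uniform grid cell on the
    Manhattan-aligned base (x, y) plane.

    positions: (N, 3) array of Gaussian centers in the aligned frame.
    Returns a dict mapping (ix, iy) cell indices to arrays of Gaussian indices,
    i.e. the membership of each G^i_block.
    """
    mins = positions[:, :2].min(axis=0)
    cells = np.floor((positions[:, :2] - mins) / cell_size).astype(int)
    blocks = {}
    for idx, key in enumerate(map(tuple, cells)):
        blocks.setdefault(key, []).append(idx)
    return {k: np.asarray(v) for k, v in blocks.items()}
```

Because the assignment uses only the primitive centers, a Gaussian near a cell border may still extend into neighboring cells, which is precisely why the mask refinement of Sec.\ref{subsec: construct mask} is needed.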


\subsection{Construction of \mask}
\label{subsec: construct mask}
\mask plays the following two important roles in our algorithm:

\begin{itemize}
  \item As mentioned above, \mask is used to create \subimage;
  \item More importantly, \mask plays a crucial role in defining the loss function during training to prevent the generation of Gaussian primitives outside the 3D cell domain $C$ of \subblock (see Sec.\ref{sec:loss_function} for more details).
\end{itemize}

A straightforward way to generate \ssubimage from \subblock is to project the Gaussian primitives in \subblock onto image \image, yielding a boolean mask \coarsemask that marks the pixels covered by the resulting projection. Subsequently, the AABB encompassing the true-valued region in \image can be taken as \ssubimage. 

Unfortunately, this strategy is problematic because pixels in \coarsemask may correspond to Gaussian primitives not belonging to \subblock. This problem arises from the following two primary sources (as illustrated in Fig.\ref{fig:Multi-Block culling}):

\begin{itemize}
  \item \textit{Oversized Gaussians:} Note that \subblock is built from \lowGS, which is reconstructed from low-resolution imagery. It therefore contains numerous large Gaussian primitives whose projections may cover many pixels beyond \subblock's boundary in an ultra-high-resolution image. This problem is worsened by 3DGS's intrinsic floater artifacts. As shown in Fig.\ref{fig:Multi-Block culling}, the oversized splat's projection covers pixel $p_C$, but it is clear that $p_C$ should not be included in the training of \subblock.
  \item \textit{Inter-block occlusion:} The generation of \coarsemask disregards occlusion relationships between blocks. Although adjacent blocks $\mathcal{G}^i$ and $\mathcal{G}^j$ are separated in object space, their 2D projections onto an input image can overlap. This is illustrated by the 2D case in Fig.\ref{fig:Multi-Block culling}: ignoring \subblock's neighboring blocks, pixel $p_A$ would be covered by Gaussian primitives in \subblock. However, the color of $p_A$ is clearly determined by another block, so $p_A$ should be excluded from the training of \subblock.
\end{itemize}

In this paper, we propose an optimization technique to refine \coarsemask, yielding a more accurate mask \mask. To achieve this, we choose 2D Gaussians \cite{huang20242d} as the base primitive because 2DGS provides a more accurate depth buffer than 3DGS. Given a reasonably accurate depth buffer, we can use depth information to identify pixels covered by Gaussian primitives located outside \subblock.

We employ the median depth strategy from 2DGS \cite{huang20242d} to define the per-pixel depth $d_p$, as given in Eq.\ref{eq:depth}:

\begin{equation}
  \label{eq:depth}
  d_p=\max\{z_i|T_i>0.5\}
\end{equation}
where $z_i$ is the depth of the $i$-th ray-splat intersection point, and $T_i$ represents the accumulated transmittance before the $i$-th intersection.
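For one pixel, this median depth can be computed directly from the front-to-back sorted intersections. A sketch (assuming per-intersection opacities $\alpha_i$ from which $T_i=\prod_{j<i}(1-\alpha_j)$ is accumulated; the function name is ours):

```python
import numpy as np

def median_depth(z, alpha):
    """Median depth per Eq. (2): the largest intersection depth z_i whose
    accumulated transmittance T_i still exceeds 0.5.

    z:     (K,) intersection depths, sorted front to back.
    alpha: (K,) opacities of the intersections.
    """
    # T_i = prod_{j<i} (1 - alpha_j); T_0 = 1, so at least one entry qualifies.
    T = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    return z[T > 0.5].max()
```

Because transmittance decreases monotonically along the ray, the selected depth is simply the last intersection the ray reaches while more than half of its energy remains.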
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{Images/multi-tile culling.png}
\caption{Mask refinement algorithm via depth comparison.}
\label{fig:Multi-Block culling}
\end{figure}

The basic idea for obtaining \mask is to determine, via depth comparison, whether pixel $p$'s color is primarily contributed by Gaussians in \subblock. Two depth values for pixel $p$ are involved in this comparison: 1) the depth value $d_p$ rendered from \lowGS; 2) the depth value $d_p^{block}$ obtained by rendering \subblock alone.

As illustrated in Fig.\ref{fig:Multi-Block culling}, given these two depth values, we can filter pixels in \coarsemask that should not participate in later subtask training.

\begin{itemize}
  \item $d_p^{block} = d_p$: In this case, \subblock and \lowGS yield identical depth values, indicating that pixel $p$ should be included in the training for subtask \subtask (pixel $p_B$ in Fig.\ref{fig:Multi-Block culling} demonstrates this situation);
  \item  $d_p^{block} < d_p$: This case indicates that \subblock's Gaussians are occluded at $p$ by Gaussians from other blocks. Therefore, $p$ should be excluded from \subtask training (pixel $p_A$ in Fig.\ref{fig:Multi-Block culling} demonstrates this situation);  
  \item $d_p^{block} > d_p$: This occurs due to oversized Gaussians in \subblock. It indicates that Gaussians from other blocks contribute primarily to $p$'s color. Consequently, $p$ should be excluded from \subtask training (pixel $p_C$ in Fig.\ref{fig:Multi-Block culling} demonstrates this situation).
\end{itemize}
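The three cases above reduce to a per-pixel equality test between the two depth maps. A minimal sketch (the relative tolerance is our assumption to absorb numerical noise in rendered depths; the paper's cases use exact comparison):

```python
import numpy as np

def refine_mask(coarse_mask, d_block, d_full, tol=1e-2):
    """Refine the coarse mask by depth comparison: keep a pixel only when the
    depth rendered from G_block alone (d_block) matches the depth rendered
    from the full coarse model G^coarse (d_full).

    Pixels with d_block < d_full (occluded by other blocks, case p_A) and
    d_block > d_full (oversized Gaussians, case p_C) are both removed.
    """
    agree = np.abs(d_block - d_full) <= tol * np.maximum(d_full, 1e-8)
    return coarse_mask & agree
```

Only pixels where both renders agree (case $p_B$) survive into \mask and hence into the subtask's loss.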

Applying the refinement algorithm to \coarsemask yields a more accurate \mask.  We then compute the axis-aligned bounding box (AABB) of this \mask in image space to produce \subimage.
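The final cropping step is a standard bounding-box extraction over the boolean mask. A sketch (function names are ours; boxes are inclusive pixel coordinates):

```python
import numpy as np

def mask_aabb(mask):
    """Axis-aligned bounding box (x0, y0, x1, y1), inclusive, of the
    True-valued region of a boolean mask; None when the mask is empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def crop_sub_image(image, mask):
    """Extract the sub-image I^i_sub (and its mask crop) as the AABB crop."""
    box = mask_aabb(mask)
    if box is None:
        return None, None
    x0, y0, x1, y1 = box
    return image[y0:y1 + 1, x0:x1 + 1], mask[y0:y1 + 1, x0:x1 + 1]
```

An empty mask simply means the block is invisible in that image, so no sub-image is generated for the pair.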

\section{Training Process}
\label{sec: training process}

\subsection{Loss Function}
\label{sec:loss_function}

One key distinction between our method and existing large-scale 3DGS methods lies in our image-space task partitioning. This means that during the training of a subtask, only a subset \subimage of the input images is required. However, training directly on \subimage is problematic because not all pixels within \subimage are influenced primarily by the Gaussian primitives in \subblock.

To address this issue, we constrain the loss computation using the \mask obtained in Sec.\ref{subsec: construct mask}. The training loss function of a subtask is defined as:

\begin{equation}
  \mathcal{L} = \mathcal{L}_{1} + \mathcal{L}_{SSIM} + \mathcal{L}_{n}
\end{equation}
where $\mathcal{L}_{1}$ denotes the image quality loss, $\mathcal{L}_{SSIM}$ represents the structural similarity loss, and $\mathcal{L}_{n}$ is the normal loss.

For $\mathcal{L}_1$, the standard computation directly measures the absolute difference between \image and the rendered result $\hat{\mathcal{I}}^i$, which would allow invalid pixels outside the \subimage region to contribute to the loss. We therefore modify it using \mask so that only valid pixels contribute to the loss calculation:

%To enable seamless block merging, the primary challenge lies in eliminating the numerous black Gaussian primitives generated outside block boundaries when using masked images for training. This phenomenon occurs because the processed input images contain large black invalid regions. As Gaussian optimization calculates various loss terms based on the entire image, the training process tends to generate black Gaussian primitives in these invalid areas to minimize the difference between rendered results and original images.
%Our method addresses this through mask-weighted invalid pruning, eliminating the influence of masked invalid regions on loss calculation. For the $L1$ loss term, 

\begin{equation}\label{eq: l1 loss}
  \mathcal{L}_{1} = \frac{1}{\mathcal{C}} \sum_{j=1}^{N} m_j \cdot \left\| \mathcal{I}^i_j - \hat{\mathcal{I}}^i_j \right\|
\end{equation}
where $\mathcal{C}$ represents the number of valid pixels in the mask, $N$ is the total number of pixels in the mask, $m_j$ denotes the value of the $j$-th pixel in the \mask, and $\mathcal{I}^i_j$ and $\hat{\mathcal{I}}^i_j$ represent the color values of $\mathcal{I}^i$ and $\hat{\mathcal{I}}^i$ at pixel $j$, respectively.
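A minimal NumPy sketch of Eq.~\ref{eq: l1 loss} for a single-channel image (variable names are illustrative, not from our implementation):

```python
import numpy as np

def masked_l1(gt, pred, mask):
    """Masked L1: sum of per-pixel absolute differences inside the
    mask, normalised by the count of valid pixels C."""
    m = mask.astype(gt.dtype)
    C = m.sum()
    return (m * np.abs(gt - pred)).sum() / C

gt = np.array([[1.0, 0.5], [0.0, 0.0]])
pred = np.array([[0.5, 0.5], [9.0, 9.0]])   # garbage outside the mask
mask = np.array([[1, 1], [0, 0]])
loss = masked_l1(gt, pred, mask)            # invalid pixels contribute nothing
```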

For $\mathcal{L}_{SSIM}$, the window-based computation does not inherently ignore out-of-\subimage regions, even though both $\mathcal{I}_j$ and $\hat{\mathcal{I}}_j$ are zero-valued there. We therefore directly zero out pixels outside the \mask in the SSIM map $\mathcal{S}(\mathcal{I}_j, \hat{\mathcal{I}}_j)$:

\begin{equation}\label{eq: ssim loss}
  \mathcal{L}_{SSIM} = \frac{1}{N} \sum_{j=1}^{N} m_j \cdot \mathcal{S}(\mathcal{I}_j, \hat{\mathcal{I}}_j)
\end{equation}
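Assuming a per-pixel SSIM map $\mathcal{S}$ has already been computed (e.g., by a standard SSIM implementation), the masking of Eq.~\ref{eq: ssim loss} reduces to an elementwise product and a mean over all $N$ pixels; a sketch with illustrative values:

```python
import numpy as np

def masked_ssim_loss(ssim_map, mask):
    """Zero out the SSIM map outside the mask, then average over all
    N pixels, following the paper's formulation."""
    m = mask.astype(ssim_map.dtype)
    return (m * ssim_map).sum() / ssim_map.size

ssim_map = np.full((4, 4), 0.8)            # assumed precomputed SSIM map
mask = np.zeros((4, 4))
mask[:2, :] = 1                            # top half of the image is valid
loss = masked_ssim_loss(ssim_map, mask)    # 0.8 * 8 / 16
```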

For the normal loss $\mathcal{L}_n$ of 2D Gaussian Splatting~\cite{huang20242d}, directly applying the \mask to the loss computation as in Eqs.~\ref{eq: l1 loss} and \ref{eq: ssim loss} is infeasible.
The value of $\mathcal{L}_n$ at the $j$-th pixel is determined as follows:

\begin{equation}\mathcal{L}_{n,j}=\sum_i\omega_i(1-\mathbf{n}_i^\mathrm{T}\mathbf{N})\end{equation}
\begin{equation}
  \mathbf{N}(x,y)=\frac{\nabla_x\mathbf{p}_s\times\nabla_y\mathbf{p}_s}{|\nabla_x\mathbf{p}_s\times\nabla_y\mathbf{p}_s|}
\end{equation}
where $i$ indexes the intersected splats along the ray, $\omega_i$ denotes the blending weight of the intersection point, $\mathbf{n}_i$ represents the normal of the splat oriented towards the camera, and $\mathbf{N}$ is the normal estimated from the gradient of the depth map.
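The normal map $\mathbf{N}$ can be sketched with NumPy by taking finite-difference gradients of the back-projected point map $\mathbf{p}_s$ and normalizing their cross product (illustrative only; \texttt{np.gradient} stands in for the finite-difference operator):

```python
import numpy as np

def depth_normals(points):
    """Estimate per-pixel normals N(x, y) from a back-projected point
    map `points` (H, W, 3) via the cross product of its gradients."""
    dx = np.gradient(points, axis=1)   # finite-difference grad_x p_s
    dy = np.gradient(points, axis=0)   # finite-difference grad_y p_s
    n = np.cross(dx, dy)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# A flat plane z = 1 yields a constant normal (0, 0, 1) everywhere.
h, w = 4, 4
ys, xs = np.mgrid[0:h, 0:w]
pts = np.stack([xs, ys, np.ones((h, w))], axis=-1).astype(float)
n = depth_normals(pts)
```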
%Another merging challenge arises from abnormal Gaussian primitives appearing at block edges, typically elongated and severely inclined relative to scene surfaces. 
%This stems from miscalculations in the normal loss term $L_N$ ,which measures the discrepancy between estimated surface normals and Gaussian primitive normals. The surface normal estimation formula:

When $(x, y)$ lies at the \mask boundary, the depth values of adjacent pixels outside the valid region default to zero, producing abnormal gradients $\nabla_x\mathbf{p}_s$ and $\nabla_y\mathbf{p}_s$ that severely corrupt the estimate $\mathbf{N}(x, y)$ and, in turn, the computation of $\mathcal{L}_n$.

To resolve this, we erode the \mask globally to exclude boundary pixels from the normal computation, ensuring that pixels at the \mask boundary do not contribute to normal estimation. We convolve the \mask (operator $\circledast$) with a kernel $K$, retaining a pixel only when all of its surrounding pixels are valid:

\begin{equation}
  \mathcal{E}(m_j) = \mathbb{I}\left[ (m_j \circledast K) = \|K\|_1 \right], \quad 
  K = \mathbf{1}_{3\times3},
\end{equation}
where $\mathbb{I}[\cdot]$ is the Iverson bracket evaluating to 1 when the condition is true and 0 otherwise.
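The erosion $\mathcal{E}$ amounts to a $3\times3$ box correlation followed by a comparison with $\|K\|_1 = 9$; a dependency-free NumPy sketch (names illustrative):

```python
import numpy as np

def erode(mask):
    """E(m): keep a pixel only if its full 3x3 neighbourhood is valid,
    i.e. the correlation with K = ones(3, 3) equals ||K||_1 = 9."""
    m = np.pad(mask.astype(int), 1)       # zero-pad so image borders erode
    conv = sum(m[i:i + mask.shape[0], j:j + mask.shape[1]]
               for i in range(3) for j in range(3))
    return (conv == 9).astype(mask.dtype)

mask = np.ones((4, 5), dtype=int)
inner = erode(mask)   # only the interior pixels survive
```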
%causing their reconstructed world coordinates to coincide with camera positions. This results in normal directions perpendicular to the camera-pixel line, prompting the generation of excessively tilted Gaussian primitives to minimize normal discrepancy.


%To address this, we implement edge normal correction. When estimating surface normals, we deliberately exclude edge pixels from normal calculation. 
Since 2D Gaussian primitives typically span multiple pixels in the training views, each primitive's normal orientation remains constrained by multiple pixel-level normal loss terms.
The erosion therefore merely shifts the weighting of the normal loss towards non-edge pixels while keeping boundary-region Gaussians stably constrained, without inducing random normal distributions at block boundaries.
%Building on the masks obtained in Section\ref{subsec: precise mask generation}, we apply mask erosion to 

Finally, $\mathcal{L}_n$ can be modified as:

\begin{equation}
  \mathcal{L}_{n} = \frac{1}{N} \sum_{j=1}^{N} \mathcal{E}(m_j) \cdot \mathcal{L}_{n,j}
\end{equation}

\subsection{Handling Block Boundaries}
\label{subsec:seam process}
%Traditional block-based training methods typically directly discard Gaussian primitives in non-corresponding regions during block merging. Since our method does not incorporate information from other blocks when training individual blocks, the lack of geometric constraints from adjacent regions at block edge leads to noticeable seams when using simple merging strategies.
Since our algorithm trains each \ssubblock independently, the portions of Gaussian primitives that extend beyond \ssubblock boundaries remain unconstrained. During block merging, these unconstrained portions manifest as visible artifacts along the seams.

To address this, we introduce an additional splitting mechanism targeting boundary Gaussians. Since these Gaussians are typically elongated, with major axes nearly perpendicular to the \subblock boundary, we bisect them into intra-block and extra-block segments, discarding the external portion while preserving the integrity of the internal one.
A Gaussian primitive is split into two new primitives when it satisfies the condition in Eq.~\ref{eq:split_condition}:

\begin{equation}\label{eq:split_condition}
\sigma_{\mathrm{max}} > \kappa \cdot \sigma_{\mathrm{min}} \quad \text{and} \quad \sigma_{\mathrm{max}} > \gamma
\end{equation}
where $\kappa$ is set to 5 and the splitting threshold $\gamma$ is set to 0.05.
The $\sigma_{\mathrm{max}}$ of each new primitive is halved, and their positions are shifted along the positive and negative directions of the axis corresponding to the original $\sigma_{\mathrm{max}}$; this bisects elongated Gaussians into distinct intra-block and extra-block segments.

Finally, another pruning pass is performed to remove any new primitives whose centers lie outside the \subblock, preventing them from further influencing training. Importantly, although the newly split primitives may still exhibit minor overlaps with neighboring blocks, these residual overlaps have minimal impact on the merged reconstruction results.
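The splitting rule of Eq.~\ref{eq:split_condition} and the subsequent bisection can be sketched as follows; the shift magnitude along the major axis (the halved $\sigma_{\mathrm{max}}$) is our illustrative choice, not a value specified above:

```python
import numpy as np

KAPPA, GAMMA = 5.0, 0.05   # anisotropy ratio and splitting threshold

def split_boundary_gaussian(mu, sigma, axes):
    """Split a Gaussian when sigma_max > KAPPA * sigma_min and
    sigma_max > GAMMA: halve sigma_max and shift the two children
    along +/- the corresponding principal axis."""
    i_max, i_min = np.argmax(sigma), np.argmin(sigma)
    if not (sigma[i_max] > KAPPA * sigma[i_min] and sigma[i_max] > GAMMA):
        return [(mu, sigma)]                 # condition not met: unchanged
    s = sigma.copy()
    s[i_max] /= 2.0
    shift = s[i_max] * axes[i_max]           # offset along the major axis
    return [(mu + shift, s), (mu - shift, s.copy())]

mu = np.zeros(3)
sigma = np.array([0.4, 0.04, 0.04])          # elongated: 0.4 > 5 * 0.04
axes = np.eye(3)                             # principal axes as rows
children = split_boundary_gaussian(mu, sigma, axes)
```

Centers of the resulting children that fall outside the \subblock would then be removed by the pruning pass described above.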

\begin{table*}[]
      \centering
      \normalsize
\begin{tabular}{l|cccccccc}
                                                 & \multicolumn{8}{c}{\textbf{D}\textsubscript{1}}                                                                                                                                                                              \\
                                             & \multicolumn{2}{c|}{1k}                               & \multicolumn{2}{c|}{2k}                               & \multicolumn{2}{c|}{4k}                                 & \multicolumn{2}{c}{9k}            \\ 
                                                  & VRAM           & \multicolumn{1}{c|}{Avg Gaussian \#}   & VRAM           & \multicolumn{1}{c|}{Avg Gaussian \#}   & VRAM            & \multicolumn{1}{c|}{Avg Gaussian \#}    & VRAM            & Avg Gaussian \#   \\ \hline
VastGS \cite{lin2024vastgaussian}      & 16.25G         & \multicolumn{1}{c|}{3652037}         & \multicolumn{2}{c|}{Fail}                             & \multicolumn{2}{c|}{Fail}                               & \multicolumn{2}{c}{Fail}          \\
Grendel-GS \cite{zhao2024scaling}      & 18.60G         & \multicolumn{1}{c|}{8041526}         & 20.43G         & \multicolumn{1}{c|}{5331380}         & 23.37G          & \multicolumn{1}{c|}{4790187}          & 23.11G          & \textbf{444394} \\
CityGS-v2 \cite{liu2024citygaussianv2} & 7.88G          & \multicolumn{1}{c|}{680970}          & 16.71G         & \multicolumn{1}{c|}{1360491}         & \multicolumn{2}{c|}{Fail}                               & \multicolumn{2}{c}{Fail}          \\
Ours                                                    & \textbf{3.82G} & \multicolumn{1}{c|}{\textbf{465699}} & \textbf{6.68G} & \multicolumn{1}{c|}{\textbf{786740}} & \textbf{10.61G} & \multicolumn{1}{c|}{\textbf{1119789}} & \textbf{19.09G} & 1487919        
\end{tabular}
\begin{tabular}{l|cccccccc}
                                                 & \multicolumn{8}{c}{\textbf{D}\textsubscript{2}}                                                                                                                                                                                \\
                                              & \multicolumn{2}{c|}{1k}                               & \multicolumn{2}{c|}{2k}                               & \multicolumn{2}{c|}{4k}                                 & \multicolumn{2}{c}{9k}            \\ 
                                                  & VRAM           & \multicolumn{1}{c|}{Avg Gaussian \#}   & VRAM           & \multicolumn{1}{c|}{Avg Gaussian \#}   & VRAM            & \multicolumn{1}{c|}{Avg Gaussian \#}    & VRAM            & Avg Gaussian \#   \\ \hline
VastGS \cite{lin2024vastgaussian}      & 18.96G         & \multicolumn{1}{c|}{3996946}         & \multicolumn{2}{c|}{Fail}                             & \multicolumn{2}{c|}{Fail}                               & \multicolumn{2}{c}{Fail}          \\
Grendel-GS \cite{zhao2024scaling}      & 20.48G         & \multicolumn{1}{c|}{9173437}         & 23.60G         & \multicolumn{1}{c|}{7406372}         & 23.42G          & \multicolumn{1}{c|}{3950981}          & 23.55G          & \textbf{509271} \\
CityGS-v2 \cite{liu2024citygaussianv2} & 8.79G          & \multicolumn{1}{c|}{831994}          & 19.02G         & \multicolumn{1}{c|}{1660343}         & \multicolumn{2}{c|}{Fail}                               & \multicolumn{2}{c}{Fail}          \\
Ours                                                    & \textbf{3.20G} & \multicolumn{1}{c|}{\textbf{482728}} & \textbf{6.75G} & \multicolumn{1}{c|}{\textbf{910205}} & \textbf{11.37G} & \multicolumn{1}{c|}{\textbf{1019955}} & \textbf{20.16G} & 1119760        
\end{tabular}
\caption{Memory comparison: VRAM usage and the average number of Gaussian primitives per task for each algorithm during training at the specified resolutions. 1k, 2k, 4k, and 9k correspond to 8×, 4×, 2×, and 1× downsampling of the original input images, i.e., resolutions of 1188×792, 2376×1584, 4752×3168, and 9504×6336, respectively.
       'Fail' indicates that the method ran out of VRAM during training at that resolution, causing training to fail.}
    \label{tab:Memory}
    \end{table*}


    \begin{table*}[]
      \centering
      \small
\begin{tabular}{l|cccccccccccc}
                                                & \multicolumn{12}{c}{\textbf{D}\textsubscript{1}}                                                                                                                                                                                                                                                          \\
                                             & \multicolumn{3}{c|}{1k}                                                   & \multicolumn{3}{c|}{2k}                                                   & \multicolumn{3}{c|}{4k}                                                   & \multicolumn{3}{c}{9k}                               \\
                                                  & PSNR             & LPIPS           & \multicolumn{1}{c|}{SSIM}            & PSNR             & LPIPS           & \multicolumn{1}{c|}{SSIM}            & PSNR             & LPIPS           & \multicolumn{1}{c|}{SSIM}            & PSNR             & LPIPS           & SSIM            \\ \hline
VastGS \cite{lin2024vastgaussian}      & 17.3989          & 0.0880          & \multicolumn{1}{c|}{0.3876}          & \multicolumn{3}{c|}{Fail}                                                 & \multicolumn{3}{c|}{Fail}                                                 & \multicolumn{3}{c}{Fail}                             \\
Grendel-GS \cite{zhao2024scaling}      & 19.3544          & 0.0709          & \multicolumn{1}{c|}{\textbf{0.5040}} & 21.5363          & 0.0563          & \multicolumn{1}{c|}{\textbf{0.5754}} & 22.4181          & 0.0517          & \multicolumn{1}{c|}{0.6164}          & 20.9593          & 0.0599          & 0.5390          \\
CityGS-v2 \cite{liu2024citygaussianv2} & 16.8141          & 0.0958          & \multicolumn{1}{c|}{0.3624}          & 18.4856          & 0.0776          & \multicolumn{1}{c|}{0.4493}          & \multicolumn{3}{c|}{Fail}                                                 & \multicolumn{3}{c}{Fail}                             \\
\textbf{Ours}                                           & \textbf{20.5496} & \textbf{0.0611} & \multicolumn{1}{c|}{0.4640}          & \textbf{22.4003} & \textbf{0.0498} & \multicolumn{1}{c|}{0.5695}          & \textbf{24.2048} & \textbf{0.0411} & \multicolumn{1}{c|}{\textbf{0.6978}} & \textbf{24.6594} & \textbf{0.0382} & \textbf{0.7365}
\end{tabular}
      \centering
      \small
\begin{tabular}{l|cccccccccccc}
                                               & \multicolumn{12}{c}{\textbf{D}\textsubscript{2}}                                                                                                                                                                                                                                                            \\
                                            & \multicolumn{3}{c|}{1k}                                                   & \multicolumn{3}{c|}{2k}                                                   & \multicolumn{3}{c|}{4k}                                                   & \multicolumn{3}{c}{9k}                               \\
                                                  & PSNR             & LPIPS           & \multicolumn{1}{c|}{SSIM}            & PSNR             & LPIPS           & \multicolumn{1}{c|}{SSIM}            & PSNR             & LPIPS           & \multicolumn{1}{c|}{SSIM}            & PSNR             & LPIPS           & SSIM            \\ \hline
VastGS \cite{lin2024vastgaussian}      & 17.8821          & 0.0827          & \multicolumn{1}{c|}{0.5245}          & \multicolumn{3}{c|}{Fail}                                                 & \multicolumn{3}{c|}{Fail}                                                 & \multicolumn{3}{c}{Fail}                             \\
Grendel-GS \cite{zhao2024scaling}      & 18.0620          & 0.0742          & \multicolumn{1}{c|}{\textbf{0.5406}} & 21.1762          & 0.0489          & \multicolumn{1}{c|}{\textbf{0.6413}} & 22.9214          & 0.0405          & \multicolumn{1}{c|}{0.6808}          & 21.4207          & 0.0493          & 0.6200          \\
CityGS-v2 \cite{liu2024citygaussianv2} & 17.4734          & 0.0789          & 0.5174                               & 19.3586          & 0.0609          & \multicolumn{1}{c|}{0.5950}          & \multicolumn{3}{c|}{Fail}                                                 & \multicolumn{3}{c}{Fail}                             \\
\textbf{Ours}                                           & \textbf{21.3243} & \textbf{0.0474} & \multicolumn{1}{c|}{0.5260}          & \textbf{22.0562} & \textbf{0.0382} & \multicolumn{1}{c|}{0.6113}          & \textbf{23.7316} & \textbf{0.0353} & \multicolumn{1}{c|}{\textbf{0.7154}} & \textbf{24.7057} & \textbf{0.0342} & \textbf{0.7651}
\end{tabular}
      \caption{Quantitative evaluation: reconstruction quality of each algorithm under different datasets and input resolutions. The 1k, 2k, 4k, and 9k resolutions and the 'Fail' notation are defined as in Table \ref{tab:Memory}.
      For evaluation, regardless of the downsampling used during training, all final renderings are produced at the native resolution (9504×6336) and assessed against the original-resolution input images to ensure an objective measurement of reconstruction quality.}
    \label{tab:Quality}
    \end{table*}

    \begin{table*}[]
      \centering
      \normalsize
\begin{tabular}{l|cccc|cccc}
                                                        & \multicolumn{4}{c|}{\textbf{D}\textsubscript{1}}                                                                                                      & \multicolumn{4}{c}{\textbf{D}\textsubscript{2}}                                                                                                      \\
                                                        & \multicolumn{1}{c|}{1k}             & \multicolumn{1}{c|}{2k}              & \multicolumn{1}{c|}{4k}              & 9k               & \multicolumn{1}{c|}{1k}             & \multicolumn{1}{c|}{2k}              & \multicolumn{1}{c|}{4k}              & 9k              \\
                                                        & \multicolumn{1}{c|}{Training}       & \multicolumn{1}{c|}{Training}        & \multicolumn{1}{c|}{Training}        & Training         & \multicolumn{1}{c|}{Training}       & \multicolumn{1}{c|}{Training}        & \multicolumn{1}{c|}{Training}        & Training        \\ \hline
VastGS \cite{lin2024vastgaussian}       & \multicolumn{1}{c|}{18h36m}         & \multicolumn{1}{c|}{Fail}            & \multicolumn{1}{c|}{Fail}            & Fail             & \multicolumn{1}{c|}{20h24m}         & \multicolumn{1}{c|}{Fail}            & \multicolumn{1}{c|}{Fail}            & Fail            \\
Grendel-GS \cite{zhao2024scaling}       & \multicolumn{1}{c|}{40m}         & \multicolumn{1}{c|}{48m}          & \multicolumn{1}{c|}{2h16m}          & 7h50m           & \multicolumn{1}{c|}{43m}         & \multicolumn{1}{c|}{53m}          & \multicolumn{1}{c|}{2h}          & 6h29m          \\
CityGS-v2 \cite{liu2024citygaussianv2} & \multicolumn{1}{c|}{21h18m}          & \multicolumn{1}{c|}{50h31m}          & \multicolumn{1}{c|}{Fail}            & Fail             & \multicolumn{1}{c|}{26h9m}          & \multicolumn{1}{c|}{60h26m}          & \multicolumn{1}{c|}{Fail}            & Fail            \\
Ours                                                    & \multicolumn{1}{c|}{\textbf{26h40m}} & \multicolumn{1}{c|}{\textbf{33h15m}} & \multicolumn{1}{c|}{\textbf{130h40m}} & \textbf{367h44m} & \multicolumn{1}{c|}{\textbf{28h12m}} & \multicolumn{1}{c|}{\textbf{28h55m}} & \multicolumn{1}{c|}{\textbf{126h25m}} & \textbf{398h14m}
\end{tabular}
      \caption{Training time comparison: total time required to complete all training tasks for each algorithm under different datasets and input resolutions. The 1k, 2k, 4k, and 9k resolutions and the 'Fail' notation are defined as in Table \ref{tab:Memory}.}
    \label{tab:Time}
    \end{table*}

\section{Experiment}
\label{subsec: experiment}
\subsection{Set Up}

\textbf{Datasets}: We use two high-altitude drone aerial photography datasets for experimental validation.
The first, \textbf{D}\textsubscript{1}, contains 700 images at a resolution of 9504×6336 pixels, covering a scene area of 2.0 $\mathrm{km}^2$.
The second, \textbf{D}\textsubscript{2}, consists of 480 images at the same resolution, covering a scene area of 1.5 $\mathrm{km}^2$.

\textbf{Implementation}: All experiments were conducted on a single NVIDIA GeForce RTX 4090 GPU (24GB of VRAM).
During the coarse-grained training phase, input images are downsampled by a factor of 4, with the densification frequency set to 500.
When partitioning the coarsely trained model into blocks, we configure 16 blocks along both the x-axis and the z-axis, \textit{i.e.}, 256 subtasks in total.
When optimizing each subtask, no downsampling is applied to the input images.
The densification frequency is adjusted to 200 during this stage; all other hyperparameters remain consistent with 2D Gaussian Splatting.

\textbf{Compared methods}: We compare against three mainstream large-scale scene reconstruction methods: Grendel-GS~\cite{zhao2024scaling}, VastGS~\cite{lin2024vastgaussian}, and CityGS-v2~\cite{liu2024citygaussianv2}.
VastGS first partitions cameras uniformly, defines block boundaries from the spatial distribution of the camera groups, and then allocates cameras to blocks based on visibility.
Conversely, CityGS-v2 first partitions the scene spatially, assigns the cameras located within each block's boundaries, and then selects additional cameras based on the structural-similarity impact of the block content on the images.
When blocks are small, VastGS suffers from insufficient camera allocation due to inadequate projection coverage, while CityGS-v2 fails to acquire enough cameras because the block content has minimal SSIM influence; both effects lead to marked degradation in reconstruction quality.
Therefore, in our comparative experiments, VastGS adopts a 6×6 block partition while CityGS-v2 employs a 4×4 partition.
Apart from these necessary adjustments, all compared methods keep their default hyperparameter configurations.

\subsection{Results and Evaluation}
\textbf{Memory}: Table \ref{tab:Memory} presents the VRAM consumption and the average number of Gaussian primitives used per task by each algorithm under varying input resolutions. At 1k and 2k resolutions, our method maintains both lower VRAM consumption and fewer Gaussian primitives than the other approaches. While VastGS reduces VRAM demand through block partitioning, its training process introduces substantial irrelevant content during individual block optimization, limiting the reduction.
VastGS's partitioning strategy allocates images to blocks even when they contain no projected scene content, forcing an unnecessary expansion of the reconstruction scope. The resulting sharp increase in required Gaussian primitives drives GPU memory consumption so high that training fails even at 2k resolution.
CityGS-v2 reduces the number of Gaussian primitives used during training by pruning, every 200 iterations, the 10\% of primitives that contribute least to the scene. It also compresses the spherical harmonics (SH) coefficients of each primitive to two dimensions, decreasing the VRAM footprint per primitive.
These mechanisms enable CityGS-v2 to complete training at 2k resolution despite incorporating additional data for block processing.

At 4k and 9k resolutions, VastGS and CityGS-v2 exceed VRAM capacity as Gaussian primitive counts and computational tensor sizes grow rapidly.
Although Grendel-GS manages to complete training at these high resolutions, it employs a mechanism that halts Gaussian primitive densification once VRAM approaches its capacity limit. This strategy, combined with the increased tensor memory demands at higher resolutions, causes its Gaussian primitive counts to drop sharply compared to lower-resolution runs.
At 9k resolution in particular, the memory footprint of the computational tensors approaches the VRAM limit during the initial training phase, forcing Grendel-GS to terminate densification prematurely and yielding extremely sparse primitive distributions in the reconstructed scenes.
In contrast, our method completes 4k and 9k training through strict per-iteration control of Gaussian primitive quantities and tensor sizes. By enforcing strict spatial constraints, we maintain stable primitive densities while preventing VRAM overflow, achieving scalable ultra-high-resolution optimization without compromising reconstruction integrity.

\textbf{Reconstruction quality}: Table \ref{tab:Quality} compares SSIM, PSNR, and LPIPS metrics across algorithms under different resolutions. 
The results demonstrate our method's superior performance in both cross-resolution comparisons and maximum-supported-resolution evaluations. 
At 1k and 2k resolutions, our method outperforms VastGS, CityGS-v2 and Grendel-GS in reconstruction quality. 
This advantage stems from our method's ability to partition scene reconstruction task into smaller subtasks compared to other algorithms. When each subtask receives equivalent optimization iterations, this approach enables more comprehensive optimization of the scene.
At 4k and 9k resolutions, Grendel-GS performs suboptimally due to the limitations of its densification strategy: it produces significantly fewer Gaussian primitives in ultra-high-resolution reconstructions, with primitive counts dropping sharply at 9k compared to lower resolutions. This severe degradation directly correlates with its worst reconstruction quality occurring at 9k resolution.
Figs. \ref{fig:Qualitative Result On EntireCity} and \ref{fig:Qualitative Result On Downtown} visually confirm that our method's comprehensive scene-partition training yields higher detail fidelity and improved reconstruction quality through enhanced texture clarity.

\textbf{Training efficiency}: Table \ref{tab:Time} compares total training time across algorithms under different resolutions. 
Grendel-GS requires less training time because it performs a single training pass over the entire scene.
In contrast, VastGS, CityGS-v2, and our method all divide the scene into blocks and thus require multiple training sessions, resulting in longer total training durations.

\subsection{Component Analysis} 

% 表1：显存与高斯数消融实验
\begin{table}[]
  \begin{tabular}{l|cc}
    & VRAM & Avg Gaussian \# \\ \hline
    w/o mask refinement & 23.37G & 1,832,819 \\
    Full model & \textbf{19.09G} & \textbf{1,487,919}  
  \end{tabular}
  \caption{Impact of mask refinement on VRAM consumption and Gaussian primitive count.}
  \label{tab: ablation vram}
\end{table}

% 表2：质量指标消融实验
\begin{table}[]
  \begin{tabular}{l|ccl}
    & PSNR & LPIPS & SSIM \\ \hline
    w/o boundaries handling & 21.9702 & 0.0498 & 0.6622 \\
    Full model & \textbf{24.6594} & \textbf{0.0382} & \textbf{0.7365}       
  \end{tabular}
  \caption{Impact of boundary handling on reconstruction quality metrics.}
  \label{tab: ablation quality}
\end{table}

\begin{figure}[ht!]
  \centering
  \begin{subfigure}[t]{0.48\columnwidth}
    \includegraphics[width=\textwidth, height=3.8cm, trim=0 15 0 0, clip]{Images/precise_render_result.png}
    \caption{Training with \mask}
    \label{fig:eg-aware}
  \end{subfigure}
  \hfill
  \begin{subfigure}[t]{0.48\columnwidth}
    \includegraphics[width=\textwidth, height=3.8cm, trim=0 15 0 0, clip]{Images/no_cull_render_result.png}
    \caption{Training with \coarsemask}
    \label{fig:eg-agnostic}
  \end{subfigure}
  \vspace{-4pt}
  \caption{Excessive redundant Gaussians are generated without \mask construction.}
  \label{fig:no_cull_render_result}
\end{figure}

% 图4：法线校正对比
\begin{figure}[ht!]
  \centering
  \begin{subfigure}[t]{0.48\columnwidth}
    \includegraphics[width=\textwidth, height=3.8cm, trim=0 15 0 0, clip]{Images/with_normal_adjustment.png}
    %\caption{With boundaries handling}
    \label{fig:normal-with}
  \end{subfigure}
  \hfill
  \begin{subfigure}[t]{0.48\columnwidth}
    \includegraphics[width=\textwidth, height=3.8cm, trim=0 15 0 0, clip]{Images/no_normal_adjustment(1).png}
    %\caption{Without boundaries handling}
    \label{fig:normal-without}
  \end{subfigure}
  \vspace{-4pt}
  %\caption{Anisotropic Gaussians at block boundaries without edge normal correction}
  \label{fig:normal_adjustment}

  \centering
  \begin{subfigure}[t]{0.48\columnwidth}
    \includegraphics[width=\textwidth, height=3.8cm, trim=0 15 0 0, clip]{Images/with_split.png}
    \caption{With boundary handling}
    \label{fig:split-with}
  \end{subfigure}
  \hfill
  \begin{subfigure}[t]{0.48\columnwidth}
    \includegraphics[width=\textwidth, height=3.8cm, trim=0 15 0 0, clip]{Images/no_split(1).png}
    \caption{Without boundary handling}
    \label{fig:split-without}
  \end{subfigure}
  \vspace{-4pt}
  \caption{Artifacts appear when boundary handling is omitted.}%Based on the revised results in Section 5.
  \label{fig:split}
\end{figure}

    \begin{figure*}[ht]
      \centering
      \includegraphics[scale=0.4]{Images/EntireCity/Result.png}
      \caption{ Qualitative comparison of rendering quality on \textbf{D}\textsubscript{1}.}
      \label{fig:Qualitative Result On EntireCity}
        %%\label{fig:compare}
    \end{figure*}

        \begin{figure*}[ht]
      \centering
      \includegraphics[scale=0.4]{Images/Downtown/Result.png}
      \caption{ Qualitative comparison of rendering quality on \textbf{D}\textsubscript{2}.}
      \label{fig:Qualitative Result On Downtown}
        %%\label{fig:compare}
    \end{figure*}


In this section, we evaluate each component of our algorithm independently to validate its effectiveness and necessity. All experiments use 9k-resolution imagery from the \textbf{D}\textsubscript{1} dataset. Table \ref{tab: ablation vram} quantifies the impact of depth-occlusion-based refinement on memory consumption, while Table \ref{tab: ablation quality} evaluates the influence of boundary handling on reconstruction quality.

\textbf{Depth Occlusion Based Refinement}: We verify the necessity of our depth-occlusion-based refinement for \mask acquisition by demonstrating the drawbacks of directly using \coarsemask instead. When this refinement is bypassed and \coarsemask is used directly for \subtask training, substantial non-\subblock content is introduced into \subimage. This significantly expands the reconstruction scope, which both induces the generation of excessive additional Gaussian primitives (Fig. \ref{fig:no_cull_render_result}) and increases the computational tensor dimensions, thereby elevating VRAM consumption.

\textbf{Boundaries Handling}: We analyze the overall impact of the boundary handling mechanism on reconstruction quality. As illustrated in Fig. \ref{fig:split}, when no constraints are applied at block boundaries, Gaussian primitives whose coverage extends beyond the block significantly compromise the merging results, degrading scene reconstruction quality.

\section{Conclusion}

In this paper, we significantly reduce training memory consumption by partitioning both object space and image space to construct multiple task groups, enabling large-scale aerial scene reconstruction with 3DGS from ultra-high-resolution images.
Nevertheless, our approach still leaves room for optimization. Training becomes progressively slower as the number of blocks and the input resolution grow; a viable remedy is to implement our algorithm in a multi-GPU environment. Developing more sophisticated block-boundary handling techniques is another promising direction for reducing boundary artifacts such as floaters and oversized Gaussian primitives.

\bibliographystyle{eg-alpha-doi} 
\bibliography{egbibsample}       

\end{document}



