diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzigyu" "b/data_all_eng_slimpj/shuffled/split2/finalzzigyu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzigyu" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{Introduction}\n\nLiDAR point clouds, compared against other sensors such as camera and radar in the autonomous driving perception, have advantages of both accurate distance measurements and fine semantic descriptions. Studies on point clouds have gained increasing popularity in the computer vision area. Typical research topics include 3D shape recognition, part segmentation, indoor scenario parsing, and outdoor large-scale scene understanding. Several benchmark datasets such as ModelNet40 \\cite{wu20153d}, ShapeNet \\cite{chang2015shapenet}, S3DIS \\cite{armeni20163d}, and Semantic3D \\cite{hackel2017semantic3d} have been established for these topics. However, there exists spatial property disparity between these datasets and outdoor 360$^\\circ$ sweep scans, which are typically produced by the on-board vehicle LiDAR. Per swept LiDAR point clouds are much sparser, and their sparsity is generally increased with the reflection distance. Examples in Figure \\ref{fig2} demonstrate the difference between dense and sparse point clouds in outdoor areas.\n\nPoint clouds processing approaches could be generally classified into three categories. First, projection into 2D representations \\cite{lawin2017deep, wu2018squeezeseg, caltagirone2017fast, meyer2019lasernet}. The basic idea of this approach is to transform unstructured point clouds into 2D images, which could be directly plugged into image-based Convolutional Neural Networks (CNN) \\cite{krizhevsky2012imagenet}. Consequently, the 2D projection benefits to an easier fusion strategy with the image \\cite{chen2017multi, meyer2019sensor} or multi-views \\cite{su2015multi, qi2016volumetric}. However, the drawback is inevitably occluding a large number of points from the projection perspective, which suffers massive information loss. Second, voxelization into 3D volumetric grid cells \\cite{maturana2015voxnet, huang2016point, li20173d, tchapmi2017segcloud, jiang2018pointsift} or their structural variations (e.g., Octree \\cite{riegler2017octnet} or Spherical \\cite{rao2019spherical}). Even though this approach is more effective to maintain points in the 3D space, the data loss problem is not eliminated due to the grid quantization. 3D convolutions are also computationally expensive. Third, directly process raw point clouds. A fundamental model PointNet \\cite{qi2017pointnet} has been specifically designed to process raw unstructured point clouds, which inspires many other studies \\cite{qi2017pointnet++, wang2018dynamic, wu2019pointconv, wang2019graph, wang2019associatively} to follow this idea. Many existing methods have reported their performance on several dense data classification and segmentation benchmarks, however, their effectiveness for sparse data is unknown.\n\nTransfer learning \\cite{pan2009survey} and domain adaptation \\cite{tzeng2017adversarial} techniques have been recently discussed to bridge the gap between the source data and target data. They either extend the training procedure with new labeled data \\cite{rist2019cross}, or re-design the network architecture by adopting an adversarial generator-discriminator \\cite{lai2010object, wu2019squeezesegv2} for semi-supervised or unsupervised learning. 
In this study, however, we focus on the specific problem of semantic segmentation for sparse point clouds; cross-domain adaptation is left to future work. \n\nTo the best of our knowledge, at the time of this work VirtualKITTI \\cite{3dsemseg_ICCVW17}, SemanticKITTI \\cite{behley2019dataset}, and DeepenaiKITTI\\footnote{https:\/\/www.deepen.ai\/kitti-labels-download\/} are the publicly available datasets that provide semantic labels for sparse point clouds. We conduct a thorough comparison of more than 10 state-of-the-art methods that directly process raw point clouds. By evaluating their effectiveness on sparse data, we find that network architecture, neighborhood selection, and local sparsity are essential factors that affect segmentation performance. This investigation reveals the advantages and shortcomings of previous methods and motivates our proposed method, Multi-domain Neighborhood Embedding and Weighting (MNEW). The key idea is illustrated in Figure \\ref{fig1}. In MNEW, we collect multi-scale neighborhood points in both the static geometry domain and the dynamic feature domain. Given a query point, its geometry neighbors are selected by Euclidean distance, and its feature neighbors are selected by a similarity that varies dynamically across network layers. For each neighborhood point, we first assign attention according to its location distance and feature similarity. We also compute the geometry and feature sparsity at each neighbor point, which are transformed into adaptive weighting factors. The embedded neighborhood feature is a combination of the weighted convolution outputs in the two domains. The overall network structure inherits from PointNet, which captures both pointwise details and global semantics, and MNEW extends it to embody local contextual information. Experiments in Section \\ref{Experiments} demonstrate the effectiveness of our method on sparse data.\n\nThe major contributions of this paper are summarized as follows:\n\\begin{itemize}\n\t\\itemsep 0em\n\t\\item We introduce MNEW, a novel semantic segmentation model that is effective for per-sweep LiDAR data, which is crucial for autonomous driving perception.\n\t\\item We design a neighborhood embedding method in both the static geometry domain and the dynamic feature space, which embodies an attention mechanism based on location distance and feature similarity, as well as a sparsity-adapted weighting mechanism.\n\t\\item We conduct a thorough comparison of a number of recent methods, evaluating their effectiveness on sparse point clouds. 
We claim that network architecture, neighborhood selection, and weighting mechanism are essential factors.\n\t\\item We achieve state-of-the-art performance on sparse point clouds, and we observe that performance is varied by distance and local sparsity.\n\\end{itemize}\n\n\n\n\n\n\t\n\n\n\n\\begin{table*}[!htb]\n\t\\begin{center}\n\t\t\\footnotesize\n\t\n\t\t\\begin{tabular}{c c c *3c c c}\n\t\t\t\\toprule\n\t\t\t\\multirow{2}{*}{Method}\t\t\t\t& \\multirow{2}{*}{Architecture} \t& \\multirow{2}{*}{\\makecell{Feature \\\\Extractor}}\t& \\multicolumn{3}{c}{Neighborhood}\t\t\t\t\t& \\multirow{2}{*}{Weighting}\t& \\multirow{2}{*}{Loss}\t\t\t\t\t\t\t\\\\ \\cline{4-6}\n\t\t\t&\t\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\t& Domain\t\t\t& Selection\t\t& Embedding \t&\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\\\\ \n\t\t\t\\midrule\n\t\t\tPointNet \\cite{qi2017pointnet} \t\t& Dilated \t\t\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& -\t\t\t\t\t& - \t\t\t& Points\t\t& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$\t\t\t\t\t\t\t\\\\\n\t\t\tPointNet++ \\cite{qi2017pointnet++}\t& Encoder-Decoder \t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Multi-radius\t& Points \t\t& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$\t\t\t\t\t\t\t\\\\\n\t\t\tA-CNN \\cite{komarichev2019cnn}\t\t& Encoder-Decoder \t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Ring-shaped \t& Points\t\t& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$\t\t\t\t\t\t\t\\\\ \n\t\t\tKP-FCNN \\cite{thomas2019kpconv}\t\t& Encoder-Decoder\t\t\t\t\t& KP-Conv\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& kNN in Radius\t& Points\t\t& Geometry Distance\t\t\t\t& $\\mathcal{L}_{CE} + \\mathcal{L}_{Reg}$ \t\t\\\\\n\t\t\tDGCNN \\cite{wang2018dynamic}\t\t& Dilated\t\t\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Feature\t\t\t& kNN\t\t\t& Query-Edges\t& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$ \t\t\t\t\t\t\t\\\\\n\t\t\tRS-CNN \\cite{liu2019relation}\t\t& Encoder-Decoder\t\t\t\t\t& RS-Conv\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Random-pick\t& Query-Edges\t& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$ \t\t\t\t\t\t\t\\\\\n\t\t\tPointWeb \\cite{zhao2019pointweb}\t& Encoder-Decoder\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& kNN\t\t\t& Pairwise-Edges& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$ \t\t\t\t\t\t\t\\\\\n\t\t\tGACNet \\cite{wang2019graph}\t\t\t& Encoder-Decoder\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Radius\t\t& Points\t\t& Feature Similarity\t\t\t& $\\mathcal{L}_{CE}$ \t\t\t\t\t\t\t\\\\\n\t\t\tPointConv \\cite{wu2019pointconv}\t& Encoder-Decoder\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Radius\t\t& Points\t\t& Local Density\t\t\t\t\t& $\\mathcal{L}_{CE}$ \t\t\t\t\t\t\t\\\\\n\t\t\tASIS \\cite{wang2019associatively}\t& Encoder-Decoder\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Radius\t\t& Points\t\t& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE} + \\mathcal{L}_{Disc}$\t\t\\\\ \n\t\t\t\\hline\n\t\t\t\\multirow{3}{*}{Ours (MNEW)}\t& \\multirow{3}{*}{\\makecell{Dilated \\\\ (improved)}}\t& \\multirow{3}{*}{MLP}\t& \\multirow{3}{*}{\\makecell{Geometry \\\\+ Feature}}\t& \\multirow{3}{*}{\\makecell{Multi-radius \\\\+ Multi-kNN}}\t& \\multirow{3}{*}{Query-Edges}\t& \\multirow{3}{*}{\\makecell{Geometry Distance \\\\+ Feature Similarity \\\\+ Neighbor Sparsity}}\t& \\multirow{3}{*}{$\\mathcal{L}_{CE} + \\mathcal{L}_{Reg}$} \\\\ 
\n\t\t\n\t\t\t&\t\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\t&\t\t\t\t\t&\t\t\t\t&\t\t\t\t&\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\\\\\n\t\t\t&\t\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\t&\t\t\t\t\t&\t\t\t\t&\t\t\t\t&\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\\\\ \n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-0.5cm}\n\t\\caption{Comparison of methods that directly process raw point clouds.}\n\n\t\\label{tab1}\n\\end{table*}\n\n\n\t\n\\section{Related Work} \\label{Related Work}\n\n\\subsection{Methods} \\label{sec2.1}\nSince conversion-based approaches like 2D projection or 3D voxelization inevitably suffer the problem of losing points, in this section we focus on the related semantic segmentation methods that directly process raw point clouds, which are well-suited to explore the 3D data capability and close to our work.\n\nPointNet \\cite{qi2017pointnet} is considered a milestone method that inputs raw point clouds without any format transformation. This method adopts shared Multi-Layer Perceptrons (MLP) \\cite{haykin1994neural} as the key component to learn pointwise features, and a pooling operation is followed to obtain global features representing all-points maximal response. The limit of PointNet is that it does not consider the local spatial relationship with neighborhood points. To address this issue, PointNet++ \\cite{qi2017pointnet++} is proposed with a hierarchical encoder-decoder structure. Analogous to the Fully Convolutionally Networks (FCN) \\cite{long2015fully} used in image segmentation, PointNet++ extracts local features by grouping and subsampling points in increasing contextual scales, and propagates subsampled features to their original points by interpolation.\n\nSeveral subsequent studies improve PointNet and PointNet++ by their upgraded network designs. A-CNN \\cite{komarichev2019cnn} introduces annular convolution with ring-shape neighborhoods to reduce the duplicated computation that exists in PointNet++ multi-scale grouping. Inspired from kernel pixels in the image-based convolution, KP-FCNN \\cite{thomas2019kpconv} creates local 3D spatial filters using a set of kernel points. A function KP-Conv between kernel points and input points is defined, which is used to replace the MLP operation in PointNet\/PointNet++. Alternative to the process of independent points, DGCNN \\cite{wang2018dynamic} and RS-CNN \\cite{liu2019relation} employ the idea of graph which embeds edges between a query point and its neighborhood points. The difference is that DGCNN follows the PointNet pipeline whereas RS-CNN follows the encoder-decoder structure of PointNet++. PointWeb \\cite{zhao2019pointweb} extends this idea and proposes a pairwise edge embedding between every two points within the selected local region. Instead of the typical approach which collects neighborhood points based on their location distance, DGCNN collects neighbors in the dynamic feature space based on their similarity. Likewise, GACNet \\cite{wang2019graph} assigns proper attention weights to different geometry neighbor points according to their feature attributes, which is to focus on the most relevant part of the neighbors. Weighting mechanism is also utilized in PointConv \\cite{wu2019pointconv}, which estimates kernel density to re-weight the continuous function learned by MLP. 
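To make this neighborhood construction concrete, the following minimal PyTorch-style sketch selects $k$ nearest neighbors in the dynamic feature space and forms DGCNN-style edge features $(x_{n_j}, x_{n_j}-x_i)$; the tensor layout, the function name, and the value of $k$ are illustrative assumptions rather than settings taken from the cited implementations.
\\begin{verbatim}
import torch

# Illustrative sketch (not from the cited implementations) of dynamic
# feature-space k-NN selection with edge features [x_j, x_j - x_i].
def feature_knn_edges(x, k=20):
    # x: (N, C) per-point features of one sample
    dist = torch.cdist(x, x)                                  # (N, N) pairwise feature distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]      # (N, k), drop the point itself
    neighbors = x[idx]                                        # (N, k, C)
    center = x.unsqueeze(1).expand(-1, k, -1)                 # (N, k, C)
    return torch.cat([neighbors, neighbors - center], dim=-1) # (N, k, 2C) edge features
\\end{verbatim}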
SPG \\cite{landrieu2018large} and its continued work \\cite{landrieu2019point} partition point clouds into superpoint graphs and perform Edge-Conditioned Convolution (ECC) \\cite{simonovsky2017dynamic} to assign a label on each superpoint. The difference lies in the graph construction approach, which is solved as an unsupervised minimal geometric partition problem in \\cite{landrieu2018large} and a leaning-based method that minimizes the graph contrastive loss in \\cite{landrieu2019point}. However, neither of these two methods is end-to-end. The purpose of graph contrastive loss is to detect the borders between adjacent objects. It pulls points belonging to the same object towards their centroid, while repelling those belonging to different objects. This idea is intuitively derived as the discriminative loss ($\\mathcal{L}_{Disc}$) \\cite{de2017semantic} in ASIS \\cite{wang2019associatively}, which is added with the cross-entropy loss ($\\mathcal{L}_{CE}$) and regularization loss ($\\mathcal{L}_{Reg}$) for a joint semantic and instance segmentation.\n\nTable \\ref{tab1} summarizes an extensive comparison of selected methods, which are varied by the network architecture, feature extractor, neighborhood selection\/embedding, weighting mechanism, and loss function. Our proposed method is also listed to overview the relation and distinction. More details are discussed in Section \\ref{Method}.\n\n\n\n\n\\begin{table*}[!htp]\n\t\\begin{center}\n\t\t\\footnotesize\n\t\n\t\t\\begin{tabular}{c c c c c c c c}\n\t\t\t\\toprule\n\t\t\tDataset\t\t\t\t\t\t\t\t\t& Type \t\t\t\t& Attributes\t& Size (Train + Test)\t\t\t\t\t& Classes\t\t\t\t& Instance\t& Sequential\t& Train\/Valid \t\t\t\\\\\n\t\t\t\\midrule\n\t\t\tS3DIS \\cite{armeni20163d}\t\t\t\t& indoor dense\t\t& XYZ + RGB\t\t& 6 indoor area, 273M points\t\t\t& 13\t\t\t\t\t& Yes\t\t& No\t\t\t& -\t\t\t\t\t\t\\\\\n\t\t\tScanNet \\cite{dai2017scannet}\t\t\t& indoor dense\t\t& XYZ + RGB\t\t& 1.5K scans, 2.5M frames\t\t\t\t& 21\t\t\t\t\t& Yes\t\t& Yes\t\t\t& -\t\t\t\t\t\t\\\\\n\t\t\tSemantic3D \\cite{hackel2017semantic3d}\t& outdoor dense\t\t& XYZ + RGB\t\t& 30 scenarios, 4009M points\t\t\t& 9\t\t\t\t\t\t& No\t\t& No\t\t\t& -\t\t\t\t\t\t\\\\\n\t\t\tNPM3D \\cite{roynard2018paris}\t\t\t& outdoor dense\t\t& XYZ\t\t\t& 6 scenarios, 143M points\t\t\t\t& 50 (10)\t\t\t\t& Yes\t\t& No\t\t\t& -\t\t\t\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\tVirtualKITTI \\cite{3dsemseg_ICCVW17}\t& outdoor sparse\t& XYZ + RGB\t\t& 4 simulated scenes, 90 frames\t\t\t& 14\t\t\t\t\t& No\t\t& No\t\t\t& 80\\%\/20\\%\trandom\t\t\\\\\n\t\t\tDeepenaiKITTI\t\t\t\t\t\t\t& outdoor sparse\t& XYZ\t\t\t& 1 sequence, 100 frames\t\t\t\t& 17\t\t\t\t\t& No\t\t& Yes\t\t\t& 80\\%\/20\\% random\t\t\\\\\n\t\t\tSemanticKITTI \\cite{behley2019dataset}\t& outdoor sparse\t& XYZ\t\t\t& 22 sequences, 43.5K frames\t\t\t& 28 (20)\t\t\t\t& Yes\t\t& Yes\t\t \t& 10\/1 sequence\t\t\t\\\\\n\t\t\n\t\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-0.5cm}\n\t\\caption[Caption for LOF]{Comparison of selected datasets with dense and sparse point clouds.}\n\n\t\\label{tab2}\n\\end{table*}\n\t\n\t\n\\begin{table*}[!htb]\n\t\\begin{center}\n\t\t\\footnotesize\n\t\n\t\t\\begin{tabular}{c| *2c *2c *2c| *2c *2c *2c}\n\t\t\t\\toprule\n\t\t\t\\multirow{2}{*}{Method}\t\t\t\t& \\multicolumn{2}{c}{S3DIS} \t& \\multicolumn{2}{c}{ScanNet}\t& \\multicolumn{2}{c|}{Semantic3D}\t\t& \\multicolumn{2}{c}{VirtualKITTI}\t& \\multicolumn{2}{c}{DeepenaiKITTI}\t& \\multicolumn{2}{c}{SemanticKITTI}\t\t\\\\ 
\\cline{2-13}\n\t\t\t\t\t\t\t\t\t\t\t\t& OA\t\t\t& mIoU\t\t\t& OA\t\t\t& mIoU\t\t\t& OA\t\t\t& mIoU\t\t\t\t\t& OA\t\t\t& mIoU\t\t\t\t& OA\t\t\t& mIoU\t\t\t\t& OA\t\t\t& mIoU\t\t\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\tPointNet \\cite{qi2017pointnet}\t\t& 78.62\t\t\t& 47.71\t\t\t& 73.9\t\t\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t\t\t& 88.07\t\t\t& 50.36\t\t\t\t& 98.41\t\t\t& 64.85\t\t\t\t& 66.12\t\t\t& 19.74\t\t\t\t\t\\\\\n\t\t\tPointNet++ \\cite{qi2017pointnet++}\t& -\t\t\t\t& -\t\t\t\t& 84.5\t\t\t& 33.9\t\t\t& 82.5\t\t\t& 52.1\t\t\t\t\t& 81.99\t\t\t& 44.74\t\t\t\t& 96.66\t\t\t& 54.40\t\t\t\t& 72.35\t\t\t& 22.90\t\t\t\t\t\\\\\n\t\t\tA-CNN \\cite{komarichev2019cnn}\t\t& 87.3\t\t\t& 62.9\t\t\t& \\textbf{85.4}\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t\t\t& 42.80\t\t\t& 18.75\t\t\t\t& 43.15\t\t\t& 7.31\t\t\t\t& 33.35\t\t\t& 7.85\t\t\t\t\t\\\\\n\t\t\tKP-FCNN \\cite{thomas2019kpconv}\t\t& -\t\t\t\t&\\textbf{65.4}\t& -\t\t\t\t& \\textbf{68.6}\t& \\textbf{92.9}\t& \\textbf{74.6}\t\t\t& 75.02\t\t\t& 30.49\t\t\t\t& 36.75\t\t\t& 4.54\t\t\t\t& 78.05\t\t\t& 26.71\t\t\t\t\t\\\\\n\t\t\tDGCNN \\cite{wang2018dynamic}\t\t& 84.1\t\t\t& 56.1\t\t\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t\t\t& 92.04\t\t\t& 60.19\t\t\t\t& 98.28\t\t\t& 64.54\t\t\t\t&\\textbf{80.64}\t&\\textbf{30.51}\t\t\t\\\\\n\t\t\tPointWeb \\cite{zhao2019pointweb}\t& 86.97\t\t\t& 60.28\t\t\t& 85.9\t\t\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t\t\t& 57.06\t\t\t& 18.94\t\t\t\t& 67.98\t\t\t& 16.67\t\t\t\t& 32.17\t\t\t& 6.84\t\t\t\t\t\\\\\n\t\t\tGACNet \\cite{wang2019graph}\t\t\t&\\textbf{87.79}\t& 62.85\t\t\t& -\t\t\t\t& -\t\t\t\t& 91.9\t\t\t& 70.8\t\t\t\t\t&\\textbf{92.57}\t&\\textbf{60.58}\t\t& 95.56\t\t\t& 51.38\t\t\t\t& 76.51\t\t\t& 26.06\t\t\t\t\t\\\\\n\t\t\tPointConv \\cite{wu2019pointconv}\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t& 55.6\t\t\t& -\t\t\t\t& -\t\t\t\t\t\t& 85.26\t\t\t& 47.60\t\t\t\t&\\textbf{98.50}\t&\\textbf{65.74}\t\t& 72.51\t\t\t& 23.24\t\t\t\t\t\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-0.5cm}\n\t\\caption{Comparison of existing methods on dense and sparse point clouds.}\n\n\t\\label{tab3}\n\\end{table*}\n\t\n\t\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\begin{center}\n\t\t\\includegraphics[width=0.9\\linewidth]{img\/Fig3_MNEWdesign.png}\n\t\\end{center}\n\t\\caption{Design of our proposed network. The key module MNEW is zoomed in for a detailed illustration. In MNEW, the upper branch embeds static geometry neighbors based on their location distance (by multi-radius), and the lower branch embeds dynamic feature neighbors based on their similarity (by multi-kNN). Local sparsity is computed in both geometry and feature domain, and transformed to weight the convolution output. After concatenation, a pooling operation aggregates neighborhood features for each query point.}\n\t\\label{fig3}\n\\end{figure*}\n\n\n\n\\subsection{Datasets} \\label{sec2.2}\n\nFor the task of 3D scene semantic segmentation, publicly available datasets such as S3DIS \\cite{armeni20163d} and ScanNet \\cite{dai2017scannet} are indoor dense data, whereas Semantic3D \\cite{hackel2017semantic3d} and NPM3D \\cite{roynard2018paris} are outdoor. For all of the four benchmark datasets, point clouds are collected by accumulating multiple scans in stationary environments to obtain fine detailed measurements. However, sparse point clouds for the application of autonomous driving perception, like the example shown in Figure \\ref{fig2b}, are much different. 
A sweeping LiDAR sensor is mounted on a moving vehicle, correspondingly the scanning environment is also changing. In a single frame, point clouds are generally denser in close areas, and much sparser far away from the sensor. This is determined by hardware characteristics of the rotation LiDAR sensor. To the time of our work, we found three public datasets with sparse point clouds. VirtualKITTI \\cite{3dsemseg_ICCVW17} is simulated virtual data, while DeepenaiKITTI and SemanticKITTI \\cite{behley2019dataset} provide small\/large-sized semantic labels on the real world KITTI \\cite{geiger2012we} sequential data.\n\nTable \\ref{tab2} summarizes the selected datasets. Other popular benchmarks such as ModelNet40 \\cite{wu20153d} or ShapeNet \\cite{chang2015shapenet} are focused on small 3D CAD objects, which are beyond our interest scope. Note for the number of classes, NPM3D has 50-class fine annotations and 10-class coarse annotations; SemanticKITTI has 28-class labels to separate moving\/stationary objects, and mapped into 20-class for closest equivalent. Since labels for their test subsets are invisible before submission, we split the annotated training data into training\/validation in our experiments for verification.\n\n\n\n\\subsection{Analysis} \\label{sec2.3}\n\nTable \\ref{tab3} compares the performance of selected methods on selected datasets. For dense point clouds, results are borrowed from their original reports. For sparse data, we use the train\/valid splits in Table \\ref{tab2}, and re-produce experiments with their official implementations. Evaluation metrics are the overall accuracy (OA) for every point and the mean intersection-over-union (mIoU) for averaged IoU across all classes. From Table \\ref{tab3}, we summarize our findings as follows.\n\n{\\bf Architecture.} \nPointNet and DGCNN are the networks retaining the number of points unchanged, while other methods are hierarchical encoder-decoders where points are down-sampled and then up-sampled. Despite the fact that encoder-decoder networks performs better in dense data benchmarks, their effectiveness is depreciated for sparse point clouds. One possible explanation is that, 3D interpolations for up-sampling might be suitable for the near-uniformly distributed dense point clouds, but not for the irregular sparse data. Since no down-sample\/up-sampling exists in PointNet and DGCNN, they are similar to the dilated convolution \\cite{yu2015multi} in the image segmentation, which maintains resolution unchanged end-to-end for element-wise feature learning. \n\n{\\bf Neighborhood.} \nDGCNN selects neighboring points in the dynamic feature domain, which differs from other methods whose neighbors are selected by the static geometry distance. GACNet collects geometry neighbors but assigns attention weights based on their feature similarity. Since the performances of DGCNN and GACNet seem promising for SemanticKITTI and VirtualKITTI, we infer that dynamic feature-based neighborhoods are critical. This is interpretable, for example, sparse points on the road in far distances are isolated in geometry location, but they are similar in their feature representations. In contrast, traffic signs hidden in trees are close to leaves, but they should be distinguished from the surrounding vegetation.\n\n{\\bf Weighting.} \nSimilar to the attention schema in GACNet, PointConv compromises the density estimation as a weighting function, which is learned to adapt with different local densities. 
This method directly considers the data density and therefore obtaining encouraging performance on DeepenaiKITTI. We infer that weighting mechanisms in GACNet and PointConv could compensate for the effectiveness depreciation of their encoder-decoder architecture. \n\n{\\bf Dataset Discrepancy.} \nAccording to the experimental results in Table \\ref{tab3}, DGCNN, GACNet and PointConv are the preferred methods on sparse point clouds. However, the performance is inconsistent across the three selected sparse datasets. The major reason is essentially the data intrinsic discrepancy. VirtualKITTI is generated by the simulator, and it is the only sparse data with RGB colors. SemanticKITTI is a large-scale sequential dataset, which suggests 10 sequences for training and 1 other sequence for validation. DeepenaiKITTI is small-sized probe data, and all its currently available frames are extracted from the same sequence. Due to the small size and high correlation of DeepenaiKITTI, we neglect it in Section \\ref{Experiments} and evaluate our method on VirtualKITTI and SemanticKITTI.\n\t\n\t\n\n\n\n\\begin{table*}[!htb]\n\n\t\\scriptsize\n\t\\centering\n\t\\newcolumntype{M}{ >{\\arraybackslash} p{0.27cm} }\n\n\t\\begin{subtable}[!htb]{\\textwidth}\n\t\t\\centering\n\t\t\\begin{tabular}{c| M M M M M M M M M M M M M M| M M}\n\t\t\t\\toprule\n\t\t\tMethod\t&\\rotatebox[origin=c]{90}{Terrian}\t&\\rotatebox[origin=c]{90}{Tree}\t&\\rotatebox[origin=c]{90}{Vegetation}\t&\\rotatebox[origin=c]{90}{Building}\t&\\rotatebox[origin=c]{90}{Road}\t&\\rotatebox[origin=c]{90}{Guardrail}\t&\\rotatebox[origin=c]{90}{Traffic Sign}\t&\\rotatebox[origin=c]{90}{Traffic Light}\t&\\rotatebox[origin=c]{90}{Pole}\t&\\rotatebox[origin=c]{90}{Misc}\t&\\rotatebox[origin=c]{90}{Truck}\t&\\rotatebox[origin=c]{90}{Car}\t&\\rotatebox[origin=c]{90}{Van}\t&\\rotatebox[origin=c]{90}{Unlabeled} \t& OA\t& mIoU\t\\\\\n\t\t\n\t\t\t\\midrule\n\t\t\n\t\t\n\t\t\tDGCNN \\cite{wang2018dynamic}\t\t& 86.7\t\t\t& 92.5\t\t\t& 70.8\t\t\t& 81.2\t\t\t& 94.8\t\t\t& 93.9\t\t\t& 38.0\t\t\t& 78.0\t\t\t& 65.5\t\t\t& 27.3\t\t\t& 29.2\t\t\t& 76.3\t\t\t& 8.6\t\t\t& 0.0\t& 92.0\t\t\t& 60.2\t\\\\\n\t\t\tGACNet \\cite{wang2019graph}\t\t\t& 82.0\t\t\t& 95.2\t\t\t& 72.5\t\t\t& 86.6\t\t\t& 92.1\t\t\t& 90.6\t\t\t& 51.4\t\t\t& 48.1\t\t\t& 42.6\t\t\t& 31.1\t\t\t& 46.6\t\t\t& 81.6\t\t\t& \\textbf{27.8}\t& 0.0\t& 92.6\t\t\t& 60.6\t\\\\\n\t\t\tPointConv \\cite{wu2019pointconv}\t& 58.1\t\t\t& 89.4\t\t\t& 57.0\t\t\t& 76.0\t\t\t& 80.6\t\t\t& 66.9\t\t\t& 25.2\t\t\t& 59.3\t\t\t& 25.1\t\t\t& 35.6\t\t\t& 9.14\t\t\t& 72.4\t\t\t& 12.0\t\t\t& 0.0\t& 85.3\t\t\t& 47.6\t\\\\\n\t\t\t\\midrule\n\t\t\tMNEW-4096\t\t\t\t\t\t\t& 92.6\t\t\t& \\textbf{97.7}\t& 84.5\t\t\t& 90.7\t\t\t& 97.6\t\t\t& 97.3 \t\t\t& 68.8\t\t\t& 71.9\t\t\t& 62.6\t\t\t& 52.9\t\t\t& 11.0\t\t\t& 85.9\t\t\t& 23.5\t\t\t& 0.0\t& 95.9\t\t\t& 67.0\t\\\\\n\t\t\tMNEW-2048\t\t\t\t\t\t\t& \\textbf{97.0}\t& \\textbf{97.7} & \\textbf{91.2} & \\textbf{92.4} & \\textbf{98.8} & \\textbf{98.2} & \\textbf{70.6} & \\textbf{83.8} & \\textbf{72.8} & \\textbf{64.9} & \\textbf{58.4} & \\textbf{88.3} & 12.7\t\t\t& 0.0\t& \\textbf{97.1} & \\textbf{73.3}\t\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\vspace{-0.1cm}\n\t\t\\caption{Validation results on VirtualKITTI dataset}\n\t\t\\label{tab4a}\n\t\t\\vspace{0.1cm}\n\t\\end{subtable}\n\t\t\n\t\t\n\t\\begin{subtable}[!htb]{\\textwidth}\n\t\t\\centering\n\t\t\\begin{tabular}{c| M M M M M M M M M M M M M M M M M M M M| M 
M}\n\t\t\t\\toprule\n\t\t\tMethod\t&\\rotatebox[origin=c]{90}{Car}\t&\\rotatebox[origin=c]{90}{Bicycle}\t&\\rotatebox[origin=c]{90}{Motorcyclist}\t&\\rotatebox[origin=c]{90}{Truck}\t&\\rotatebox[origin=c]{90}{Other Vehicle}\t&\\rotatebox[origin=c]{90}{Person}\t&\\rotatebox[origin=c]{90}{Bicyclist}\t&\\rotatebox[origin=c]{90}{Motorcyclist}\t&\\rotatebox[origin=c]{90}{Road}\t&\\rotatebox[origin=c]{90}{Parking}\t&\\rotatebox[origin=c]{90}{Sidewalk}\t&\\rotatebox[origin=c]{90}{Other Ground}\t&\\rotatebox[origin=c]{90}{Building}\t&\\rotatebox[origin=c]{90}{Fence}\t&\\rotatebox[origin=c]{90}{Vegetation}\t&\\rotatebox[origin=c]{90}{Trunk}\t&\\rotatebox[origin=c]{90}{Terrain}\t&\\rotatebox[origin=c]{90}{Pole}\t&\\rotatebox[origin=c]{90}{Traffic Sign}\t&\\rotatebox[origin=c]{90}{Unlabeled}\t& OA\t& mIoU \\\\\n\t\t\n\t\t\t\\midrule\n\t\t\n\t\t\n\t\t\tDGCNN \\cite{wang2018dynamic}\t\t& 78.3\t\t\t& 0.0\t\t\t& 1.1\t\t\t& \\textbf{17.3}\t& 1.4\t\t\t& 1.9\t\t\t& 3.9\t\t\t& 0.0\t& 88.7\t\t\t& 10.2\t\t\t& 65.1\t\t\t& 0.1\t\t\t& 74.1\t\t\t& 18.8\t\t\t& 71.6\t\t\t& 25.0\t\t\t& 62.1\t\t\t& 28.5\t\t\t& 8.8\t\t\t& 47.8\t\t\t& 80.8\t\t\t& 30.2\t\t\t\\\\\n\t\t\tGACNet \\cite{wang2019graph}\t\t\t& 71.5\t\t\t& 0.0\t\t\t& 0.0 \t\t\t& 12.2\t\t\t& 1.4\t\t\t& 0.0\t\t\t& 0.0\t\t\t& 0.0\t& 80.3\t\t\t& 13.4\t\t\t& 55.3\t\t\t& 0.2\t\t\t& 63.1\t\t\t& 16.7\t\t\t& 67.8\t\t\t& 15.7\t\t\t& 56.4\t\t\t& 12.1\t\t\t& \\textbf{22.9}\t& 38.8\t\t\t& 76.0 \t\t\t& 26.4 \t\t\t\\\\\n\t\t\tPointConv \\cite{wu2019pointconv}\t& 60.5\t\t\t& 0.1\t\t\t& 0.2\t\t\t& 0.6\t\t\t& 3.3\t\t\t& 1.0\t\t\t& 0.9\t\t\t& 0.0\t& 82.1\t\t\t& 3.8\t\t\t& 55.4\t\t\t& \\textbf{0.4}\t& 63.6\t\t\t& 10.8\t\t\t& 59.6\t\t\t& 14.2 \t\t\t& 52.1\t\t\t& 14.1\t\t\t& 8.0\t\t\t& 34.4\t\t\t& 72.5\t\t\t& 23.2\t\t\t\\\\\n\t\t\tRangeNet \\cite{milioto2019rangenet++}\t& 74.1\t\t& \\textbf{14.3}\t& 2.6\t\t\t& 9.4\t\t\t& 10.5\t\t\t& \\textbf{7.2}\t& 21.9\t\t\t& 0.0\t& \\textbf{90.7}\t& \\textbf{36.2}\t& \\textbf{74.2}\t& 0.2\t\t\t& 67.8\t\t\t& \\textbf{33.4}\t& 71.9\t\t\t& 30.7\t\t\t& \\textbf{68.5}\t& 23.0\t\t\t& 22.2\t\t\t& 34.1\t\t\t& 81.4\t\t\t& 34.6 \t\t\t\\\\\n\t\t\t\\midrule\n\t\t\tMNEW-4096\t\t\t\t\t\t\t& \\textbf{81.3}\t& 0.0\t\t\t& \\textbf{13.3}\t& 8.8\t\t\t& \\textbf{12.9}\t& 6.2\t\t\t& \\textbf{31.7}\t& 0.0\t& 88.7\t\t\t& 22.3\t\t\t& 70.4\t\t\t& 0.1\t\t\t& \\textbf{79.3}\t& 30.0\t\t\t& \\textbf{76.9}\t& \\textbf{34.2}\t& 66.4\t\t\t& \\textbf{33.1}\t& 1.1\t\t\t& \\textbf{49.3}\t& \\textbf{84.1}\t& \\textbf{35.3} \\\\\n\t\t\tMNEW-2048\t\t\t\t\t\t\t& 79.8\t\t\t& 0.0\t\t\t& 10.5\t\t\t& 6.5\t\t\t& 7.8\t\t\t& 5.5\t\t\t& 25.5\t\t\t& 0.0\t& 88.8\t\t\t& 22.7\t\t\t& 67.4\t\t\t& 0.0\t\t\t& 77.2\t\t\t& 29.1\t\t\t& 75.0\t\t\t& 29.6\t\t\t& 61.9\t\t\t& 27.3\t\t\t& 1.4 \t\t\t& 47.5\t\t\t& 82.5\t\t\t& 33.2 \t\t\t\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\vspace{-0.1cm}\n\t\t\\caption{Validation results on SemanticKITTI dataset}\n\t\t\\label{tab4c}\n\t\t\\vspace{0.1cm}\n\t\\end{subtable}\n\t\n\t\\caption{Semantic segmentation results on sparse point clouds. Metrics are OA(\\%), mIoU(\\%), and per class IoU(\\%).}\n\t\\label{tab4}\n\t\\end{table*}\n\t\n\t\n\t\n\n\\section{Methodology} \\label{Method}\n\n\\subsection{Network Architecture} \\label{sec3.1}\n\nInspired from the findings in Section \\ref{sec2.3}, we propose our network design illustrated in Figure \\ref{fig3}. The overall architecture inherits a dilated structure like PointNet \\cite{qi2017pointnet} or DGCNN \\cite{wang2018dynamic}, which eliminates re-sampling operations end-to-end. 
The model takes batches of $N_p$ input points and passes them through a sequence of three MNEW modules to extract pointwise features $L_1$, $L_2$, and $L_3$ in a hierarchical order. Note that local neighborhood information is carried inside MNEW, which is the upgrade over PointNet. We increase the number of neighbors in $L_1$, $L_2$, and $L_3$, which corresponds to hierarchical-scale feature encoders, while keeping the number of query points fixed. A global 2D convolution, max pooling, and two fully convolutional layers follow to aggregate the global feature $G_3$. The hierarchical pointwise features and the tiled global feature are concatenated into a descriptor for each point, and passed through three regular 1D convolutions to obtain the segmentation score, i.e., the category-level probability. \n\n\\subsection{Multi-domain Neighborhood Embedding and Weighting} \\label{sec3.2}\n\nThe key component in our network design is the multi-domain neighborhood embedding and weighting (MNEW) module. As shown in Figure \\ref{fig3}, the input of MNEW is a batch of points with their xyz coordinates and original features, shaped as $[B, N_p, D_{xyz+fea}]$. We compute pairwise distances in both the geometry and feature domain, resulting in two matrices with shape $[B, N_p, N_p]$ representing the geometry distance and feature similarity between every two points. \n\nAs discussed in KP-Conv \\cite{thomas2019kpconv} and RS-Conv \\cite{liu2019relation}, radius-based geometry neighborhood selection is more robust than k-NN \\cite{weinberger2006distance} in non-uniform sampling settings. However, feature neighborhoods shift dynamically and are hard to encircle with a fixed radius. Therefore, we use multiple radii in the geometry domain and multiple k-NN in the feature domain to collect multi-scale neighbor indices. We gather their original features to compose two initial neighborhood embedding matrices $\\mathbf{X}_g^0$ and $\\mathbf{X}_f^0$ with shape $[B, N_p, N_{ng}, D_{embed}]$ and $[B, N_p, N_{nf}, D_{embed}]$ respectively, where $ng=\\sum r_i$ represents the number of accumulated neighbors in the multi-radius geometry space, and $nf=\\sum k_i$ represents the number of accumulated neighbors in the multi-kNN feature space. Similar to the graph embedding $G(V,E)$ in DGCNN and RS-Conv, which includes vertices and edges, we embed each element in $\\mathbf{X}_g^0$ and $\\mathbf{X}_f^0$ as,\n\\begin{equation} \\label{eq1}\n\\mathbf{f}(x_{i,n_j}) = \\mathbf{f}(x_{n_j}, x_{n_j}-x_i), \\quad x_{i,n_j} \\in (\\mathbf{X}_g^0 \\cup \\mathbf{X}_f^0)\n\\end{equation}\nwhere $x_i$ denotes the $i$-th query point, and $x_{n_j}$ denotes the $j$-th neighbor point. Given the indices of the selected neighbors, we also gather their geometry distance matrix $\\mathbf{D}_g$ (shape $[B, N_p, N_{ng}, 1]$) and feature similarity matrix $\\mathbf{D}_f$ (shape $[B, N_p, N_{nf}, 1]$). Next, a transformation function $\\mathbf{T}(d_{i,n_j})=\\mathbf{w}_{i,n_j}\\cdot\\mathbf{f}(d_{i,n_j})$ is utilized to obtain adaptive attention weights. 
The attended embedding $\\mathbf{X}_g^a$ and $\\mathbf{X}_f^a$ are computed as, \n\\begin{equation} \\label{eq2}\n\\begin{split}\n\\mathbf{X}_g^a = \\mathbf{T}(\\mathbf{D}_g) \\cdot \\mathbf{X}_g^0 \\\\\n\\mathbf{X}_f^a = \\mathbf{T}(\\mathbf{D}_f) \\cdot \\mathbf{X}_f^0\n\\end{split}\n\\end{equation}\n\nMotivated by the density estimation in PointConv \\cite{wu2019pointconv}, we calculate the neighborhood sparsity using,\n\\begin{equation} \\label{eq3}\n\\mathbb{P}(x_{i,n_j}|\\mu, \\sigma^2) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp{[-\\frac{(x_{n_j}-x_i)^2}{2\\sigma^2}]}\n\\end{equation}\n\\begin{equation} \\label{eq4}\n\\mathbb{S}(x_i|\\mu, \\sigma^2) = (\\frac{1}{N_n}\\log[\\sum_{n_j \\in N_n} \\mathbb{P}(x_{i,n_j}|\\mu, \\sigma^2)])^{-1}\n\\end{equation}\n$\\mathbb{P}(x_{i,n_j}|\\mu, \\sigma^2)$ in Equation (\\ref{eq3}) is equivalent to the Gaussian probability density function computed for every neighbor $x_{n_j}$ with respect to the query point $x_{i}$. $\\mathbb{S}(x_{i,n_j})$ in Equation (\\ref{eq4}) is the estimated sparsity which inverses the averaged density. We also take the log-scale value to obtain a better sparsity distribution (see Figure \\ref{fig4b}). Geometry sparsity $\\mathbb{S}_g$ and feature sparsity $\\mathbb{S}_f$ are computed individually, which are transformed as the weighting factor for the 2D convolution activation $\\mathbf{h}(x)$. The weighted outputs $\\mathbf{X}_g^w$ (shape $[B, N_p, N_{ng}, D_{conv}]$) and $\\mathbf{X}_f^w$ (shape $[B, N_p, N_{nf}, D_{conv}]$) are computed as,\n\\begin{equation} \\label{eq5}\n\\begin{split}\n\\mathbf{X}_g^w = \\mathbf{T}(\\mathbb{S}_g) \\cdot \\mathbf{h}(\\mathbf{X}_g^a) \\\\\n\\mathbf{X}_f^w = \\mathbf{T}(\\mathbb{S}_f) \\cdot \\mathbf{h}(\\mathbf{X}_f^a)\n\\end{split}\n\\end{equation}\nAfter concatenating the neighborhood information from geometry and feature domain, an average pooling operation is followed to aggregate a feature vector for each query point, yielding the output of MNEW module $\\mathbf{X}_{mnew}^{out}$ with shape $[B, N_p, D_{conv}]$.\n\\begin{equation} \\label{eq6}\n\\mathbf{X}_{mnew}^{out} = \\frac{1}{N_p} \\sum_{i \\in N_p} (\\mathbf{X}_g^w \\oplus \\mathbf{X}_f^w)\n\\end{equation}\n\n\n\\subsection{Loss Function} \\label{sec3.3}\n\nThe loss function is a combination of softmax cross-entropy loss $\\mathcal{L}_{CE}$ and regularization loss $\\mathcal{L}_{Reg}$ adjusted by $\\lambda$. Since the task is semantic segmentation only (i.e., no instance-level labels), the discriminitive loss suggested by ASIS \\cite{wang2019associatively} is not applicable.\n\\begin{equation} \\label{eq7}\n\\mathcal{L}_{Total} = \\mathcal{L}_{CE} + \\lambda \\mathcal{L}_{Reg}\n\\end{equation}\n\n\n\n\n\\subsection{Comparison to Existing Methods} \\label{sec3.4}\n\nReferring to Table \\ref{tab1}, we compare existing methods and summarize the essential distinctions of MNEW as follows.\n\nThe dilated network architecture in our work excludes downsample grouping and upsample interpolation, which differs from all recent works that are based on hierarchical encoder-decoder structures. Compared with PointNet \\cite{qi2017pointnet} whose feature contains pointwise and global information, we also include local neighborhood features. Compared with DGCNN \\cite{wang2018dynamic} which collects neighbors in the feature space only, we embed neighbor points in both geometry and feature domain.\n\nIn terms of the neighborhood embedding, our method adopts multi-scaling in multi-domain. 
This differs from all existing methods where neighbors are collected in only one single domain (i.e., either geometry or feature). Hierarchical PointNet++ \\cite{qi2017pointnet++} and A-CNN \\cite{komarichev2019cnn} use multi-radius ball-shaped or ring-shaped scales in the geometry domain, while DGCNN using single-scale kNN in the feature domain. In our method, there may exist overlapping points selected in geometry and feature neighborhoods. However, since we compute adaptive attention \\& weighting factors in each domain separately, their impact are learned individually.\n\nFor the attention\/weighting mechanism, KP-FCNN \\cite{thomas2019kpconv} and GACNet \\cite{wang2019graph} compute the geometry distance or feature similarity as fixed weighting factors, while PointConv \\cite{wu2019pointconv} transforms the local sparsity as a learning-based flexible variable. In our method, all these factors are trainable.\n\n\n\n\n\n\n\\section{Experiments} \\label{Experiments}\n\n\\subsection{Sparse Point Cloud Segmentation} \\label{sec4.1}\n\nExperimental results on VirtualKITTI and SemanticKITTI are shown in Table \\ref{tab4}, using the train\/valid splits from Table \\ref{tab2}. Evaluation metrics include OA, mIoU, and per class IoU. We select DGCNN, GACNet, and PointConv as baselines since their accuracies are higher than other related methods (see Table \\ref{tab3}). Our proposed method, MNEW, achieves outstanding performances on both VirtualKITTI and SemanticKITTI. We experimentally set the number of points (i.e., $N_p$) 4096 or 2048, since it affects the global feature extraction and neighborhood selection. Table \\ref{tab4} indicates that MNEW-2048 performs better on VirtualKITTI, while MNEW-4096 is superior on SemanticKITTI. Comparing against the top-performed baseline method, MNEW facilitates 4.5\\% OA and 12.7\\% mIoU increments on VirtualKITTI, a greater margin than those on SemanticKITTI (3.3\\% OA and 5.1\\% mIoU higher than DGCNN). This is because VirtualKITTI provides RGB information as its original feature, which is independent from the XYZ geometry coordinates. For SemanticKITTI, we use intensity, and compute 2D\\_range and 3D\\_distance to fill the input feature slots. Therefore, the multi-domain neighborhood combination is more effective for VirtualKITTI. \n\nSemanticKITTI contains more categorical labels, but the percentage of unlabeled points (class-0) is also higher than VirtualKITTI (4.49\\% vs. $<$0.01\\%). For SemanticKITTI, our experimental results slightly differ from those reported in \\cite{behley2019dataset}. One of the critical reason is that unlabeled points, despite occupying considerable large percentage, are ignored in \\cite{behley2019dataset}. With the class-0 included, we reproduced the RangeNet \\cite{milioto2019rangenet++}, an upgraded DarkNet in \\cite{behley2019dataset}. We provide a consistent comparison as listed in Table \\ref{tab4c}. In addition to DarkNet\/RangeNet, \\cite{behley2019dataset} and \\cite{milioto2019rangenet++} also claim that projection-based methods such as SqueezeSeg \\cite{wu2018squeezeseg} or TangentConv \\cite{tatarchenko2018tangent} are more effective than those directly processing raw data. Since per sweep point clouds are collected by a rotational LiDAR, a cylinder-view projection exactly looks at those points from the perspective of LiDAR sensor. This could ensure its effectiveness for small objects, such as bicycles and traffic-signs which are better viewed from the sensor's perspective. 
Although reasonable, MNEW still obtains higher mIoU (35.3\\% vs. 34.6\\%) and OA (84.1\\% vs. 81.4\\%) than RangeNet.\n\n\n\n\n\n\n\\begin{table}[t]\n\t\\begin{center}\n\t\n\t\t\\scriptsize\n\t\t\\begin{tabular}{c c c c| c c}\n\t\t\t\\toprule\n\t\t\tGeometry\t\t& Feature\t\t\t& Distance\/Similarity\t& Sparsity\t\t& OA\t\t& mIoU\t\t\\\\\n\t\t\t\\midrule\n\t\t\t\\checkmark\t\t&\t\t\t\t\t&\t\t\t\t\t\t&\t\t\t\t& 92.1\t\t& 57.0\t\t\\\\\n\t\t\t\t\t\t\t& \\checkmark\t\t&\t\t\t \t\t\t&\t\t\t\t& 95.8\t\t& 65.9\t\t\\\\\n\t\t\t\\checkmark\t\t& \\checkmark\t\t&\t\t\t \t\t\t&\t\t\t\t& 95.9\t\t& 69.7\t\t\\\\\n\t\t\t\\midrule\n\t\t\t\\checkmark\t\t& \\checkmark\t\t& \\checkmark \t\t\t&\t\t\t\t& 96.3\t\t& 72.6\t\t\\\\\n\t\t\t\\checkmark\t\t& \\checkmark\t\t& \t\t\t\t\t\t& \\checkmark\t& 96.9\t\t& 70.6\t\t\\\\\n\t\t\t\\checkmark\t\t& \\checkmark\t\t& \\checkmark \t\t\t& \\checkmark\t& 97.1\t\t& 73.3\t\t\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace*{-1mm}\n\t\\caption{Effect of neighborhood selection, distance\/similarity attention, and sparsity weighting. Experimented MNEW-2048 on VirtualKITTI}\n\t\\label{tab5}\n\\end{table}\n\n\n\\begin{figure*}[t!]\n\t\\centering\n\t\\begin{subfigure}[t]{\\textwidth}\n\t\t\\centering\n\t\t\\begin{subfigure}[t]{0.4\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth, height=3.8cm]{img\/Fig4-1_DistanceDistribution.png}\n\t\t\\end{subfigure}\n\t\n\t\n\t\n\t\t\\begin{subfigure}[t]{0.5\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth, height=3.8cm]{img\/Fig4-2_DistanceEffects.png}\n\t\t\\end{subfigure}\n\t\t\\vspace*{-1mm}\n\t\t\\caption{Distance (m) Distribution and OA (\\%) by Distance Effects}\n\t\t\\label{fig4a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{\\textwidth}\n\t\t\\centering\n\t\t\\begin{subfigure}[t]{0.4\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth, height=3.8cm]{img\/Fig4-3_SparsityDistribution.png}\n\t\t\\end{subfigure}\n\t\n\t\n\t\n\t\t\\begin{subfigure}[t]{0.5\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth, height=3.8cm]{img\/Fig4-4_SparsityEffects.png}\n\t\t\\end{subfigure}\n\t\t\\vspace*{-1mm}\n\t\t\\caption{Sparsity (normalized 0-1) Distribution and OA (\\%) by Sparsity Effects}\n\t\t\\label{fig4b}\n\t\\end{subfigure}\n\t\\caption{Points distribution and performance variation by distance and sparsity. Experimented MNEW-4096 on SemanticKITTI.}\n\t\\label{fig4}\n\\end{figure*}\n\n\n\n\n\\subsection{Ablation Studies} \\label{sec4.2}\n\n{\\bf Neighborhood Embedding and Weighting.}\nIn Table \\ref{tab5}, we compare several variations of the model design. For the neighborhood selection, we compare the neighbors collected by geometry location, feature similarity, and both. The experiment reveals that the accuracy significantly increased with the supplement of feature domain neighborhood. Next, we optionally enable the neighbor attention of geometry distance and feature similarity ($\\mathbf{T}(\\mathbf{D}_g)$+$\\mathbf{T}(\\mathbf{D}_f)$), as well as the weighting of local sparsity ($\\mathbf{T}(\\mathbb{S}_g)$+$\\mathbf{T}(\\mathbb{S}_f)$). Both attention\/weighting settings contribute to the performance improvement, which validate the effectiveness of our MNEW design.\n\n\n{\\bf Performance varied by Distance and Sparsity.}\nSince we target sparse point clouds segmentation towards the application of autonomous driving perception, it is interesting to know the performance for points with different distances (with respect to the LiDAR sensor) or sparsity (with respect to nearby points). 
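As a concrete reference for this analysis, a minimal PyTorch-style sketch of the per-point sparsity measure of Equations (\\ref{eq3}) and (\\ref{eq4}) is given below; the neighbor tensor layout and the value of $\\sigma$ are illustrative assumptions, and the squared term in Equation (\\ref{eq3}) is taken as the squared Euclidean distance between a neighbor and its query point.
\\begin{verbatim}
import math
import torch

# Illustrative sketch (not the released MNEW code) of Eqs. (3)-(4):
# Gaussian density per neighbor, averaged log-density, then inverted.
def neighborhood_sparsity(query, neighbors, sigma=1.0):
    # query: (N, D) points; neighbors: (N, K, D) their K neighbors in one domain
    sq_dist = ((neighbors - query.unsqueeze(1)) ** 2).sum(dim=-1)          # (N, K)
    p = torch.exp(-sq_dist / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)
    avg_log_density = torch.log(p.sum(dim=-1)) / neighbors.shape[1]        # (1/N_n) log(sum_j P)
    return 1.0 / avg_log_density                                           # Eq. (4)
\\end{verbatim}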
Euclidean distances are computed in the geometry space, and sparsity is calculated using the normalized value of Equation (\\ref{eq4}). Using SemanticKITTI, we show the distribution histograms of distance and sparsity, as well as the corresponding performance variations, in Figure \\ref{fig4}. In Figure \\ref{fig4a}, we observe an abrupt rise in performance at a distance of $\\approx$50 meters. We attribute this to the fact that points in far-away areas (e.g., $>$50m) are limited in quantity (see the distance distribution), but mostly belong to well-learned categories (e.g., road, building, terrain) that contain a relatively large percentage of samples. In Figure \\ref{fig4b}, as the sparsity increases, the performance first decreases until 0.7$\\sim$0.8 and then increases. This corresponds with the sparsity distribution, implying that the number of samples affects the performance. Since the sparsity distributions of dense datasets are relatively uniform, we infer this to be an essential reason for the effectiveness disparity of existing methods. Figure \\ref{fig4} also shows that MNEW outperforms the other methods across the distance and sparsity distributions.\n\n\n\n\\section{Conclusion} \\label{Conclusion}\n\nIn this work, we propose MNEW for sparse point cloud segmentation. MNEW inherits a dilated architecture to capture pointwise and global features, and involves multi-scale local semantics adopted from the hierarchical encoder-decoder structure. Neighborhood information is embedded in both the static geometry domain and the dynamic feature domain. The geometry distance, feature similarity, and local sparsity are computed and transformed into adaptive weighting factors. We obtain outstanding performance on both sparse point cloud datasets. We believe this study will contribute to the application of LiDAR-based autonomous driving perception. \n\nIn future work, one direction is to extend the model to joint semantic and instance segmentation, i.e., panoptic segmentation \\cite{kirillov2019panoptic} for point clouds. A second ongoing direction is domain adaptation to resolve the issue of cross-sensor disparity. The model could also be made light-weight and exported as a LiDAR feature extractor, which is useful for fusion with other sensors such as radar and camera \\cite{zheng2019gfd}.\n\n\n\n\n{\\small\n\t\\bibliographystyle{ieee_fullname}\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:Introduction}\nIn the field of illumination optics, optical engineers design optical elements to transport the light from a source, which can be an LED, laser, or incandescent lamp, to obtain a desired irradiance (spatial density of the luminous flux) or intensity (angular density of the luminous flux) \\citep{Grant2011}.\nTo transport the light from the source to the target, the optical engineer can construct a system consisting of various optical elements such as lenses, mirrors, diffusers, and light guides \\citep{John2013}.\nOne particular type of optic used in automotive and road lighting applications is the freeform lens, a lens without any form of symmetry \\citep{Falaggis2022, Mohedano2016}.\nThe design of these lenses is a complex problem. 
It is currently solved by numerically solving system-specific differential equations or through optimization, with every step validated using a (non-differentiable) non-sequential ray tracer \\citep{Wu2018}.\nGreat effort is involved in generalizing these methods to account for varying amounts of optical surfaces \\citep{Anthonissen2021}, their optical surface and volume properties \\citep{Kronberg2022, lippman_prescribed_2020}, or the source model~\\citep{Muschaweck2022, tukker_efficient_2007, sorgato_design_2019}. \n\nThe performance of an optical system is evaluated using ray tracing, which is the process of calculating the path of a ray originating from a source through the optical system. Sequential ray tracers such as Zemax~\\citep{zemax} and Code V~\\citep{codev}, primarily used in the design of imaging optics, trace a small number of rays to determine the quality of the image. \nNon-sequential ray tracers such as LightTools~\\citep{LightTools} and Photopia~\\citep{photopia} use many rays to simulate the optical flux through the system and share similarities with the rendering procedures in computer graphics, with the main difference being that the rays are traced from source to camera.\n\nAlgorithmically differentiable ray tracing, a generalization of differential ray tracing \\citep{feder_differentiation_1968, stone_differential_1997, oertmann_differential_1989, chen_second-order_2012}, is a tool that is being developed for both sequential~\\citep{Sun2021, Volatier2017} and non-sequential \\citep{Mitsuba2} ray tracing.\n\\emph{Differential ray tracing} obtains system parameter gradients using numerical or algebraic differentiation. The gradient can be calculated numerically using numerical differentiation or the adjoint method \\citep{givoli_tutorial_2021}, requiring the system to be ray traced twice, once for its current state and once with perturbed system parameters. Analytic expressions for the gradient can be obtained by tracing the rays analytically through the system, calculating where the ray intersects the system's surfaces and how the ray's trajectory is altered. However, these expressions can become long and complicated depending on the system. In addition, the method is limited to optics described by conics as finding analytic ray surface intersection with surfaces of higher degrees becomes complicated or even impossible. Algorithmic differentiable ray tracing can handle these issues by obtaining the gradients with one single forward simulation for an almost arbitrary system. In addition, it can be seamlessly integrated into gradient-descent-based optimization pipelines. A modern framework for this is \\emph{Physics Informed Machine Learning} \\citep{Karniadakis2021}, where a neural network is trained to approximate the solution to a physics problem formulated using data, a set of differential equations, or an implemented physics simulation (or a combination of these). \n\nWe investigate the reliability of designing freeform lenses with B-spline surfaces \\citep{Piegl1995} using algorithmically differentiable non-sequential ray tracing and gradient-based optimization to redirect the light of a light source into a prescribed irradiance distribution. The source models will be the collimated light source, point source, and finally, sources with a finite extent. The results are validated using the commercial ray trace program LightTools \\citep{LightTools}. 
In addition, we investigate the effectiveness of optimizing a network to determine the optimal B-spline control points as proposed in \\citep{Moller2021PIML} and \\citep{GASICK2023115839}, and compare it to optimizing the control points directly, assessing the possible speed-up.\n\n\\section{Gradient-based freeform design}\nThe overall structure of our pipeline is depicted in Fig.~\\ref{fig:optimizationloop}. \nA freeform surface is defined by the parameters $P \\in \\mathscr{P}$, where $\\mathscr{P}$ is the set of permissible parameter values. This surface is combined with a flat surface to create a lens, and an irradiance distribution $\\mathcal{I}$ is produced by tracing rays through the lens onto a screen. The irradiance distribution is compared to a target $\\mathcal{I}_\\text{ref}$ yielding a loss $\\mathscr{L}(\\mathbf{P};\\mathcal{I}_\\text{ref})$. The optimization problem we are trying to solve can then be formulated as\n\\begin{equation}\n \\min_\\mathbf{P \\in \\mathscr{P}} \\; \\mathscr{L}(\\mathbf{P};\\mathcal{I}_\\text{ref}),\n\\end{equation}\nwhich we solve by using gradient descent.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = \\textwidth]{figures\/Caustic_design_optimization.png}\n \\caption{Overview of our learning-based freeform design pipeline.}\n \\label{fig:optimizationloop}\n\\end{figure}\n\nThe freeform surface of the lens is defined in terms of a B-spline surface. From a manufacturing standpoint, this is convenient since B-spline surfaces can be chosen to be $C^1$ smooth (in fact, B-spline surfaces can be $C^n$ smooth for arbitrarily large $n$). From an optimization perspective, B-spline surfaces have the property that the control points that govern the shape of the surface and which will be optimized have a local influence on the surface geometry, which in turn has a local influence on the resulting irradiance distribution.\n\n\\subsection{The lens model using a B-spline surface}\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.2\\textwidth]{figures\/lens_schematic.png}\n \\caption{The used lens type: a volume enclosed between a flat surface and a freeform surface with a uniform refractive index.}\n \\label{fig:lenschematic}\n\\end{figure}\n\nWe define a lens as in Fig.~\\ref{fig:lenschematic} as the volume between a flat surface and a B-spline surface, with a uniform refractive index.\n\nA B-spline surface $\\mathbf{S}$ in $\\mathbb{R}^3$ is a parametric surface, see Fig.~\\ref{fig:Bsplinesurface}. It has rectangular support $[a,b]\\times[c,d]$ where $a<b$ and $c<d$; in this work the parametric domain is $[0,1]^2$. The surface is given by $\\mathbf{S}(u,v) = \\left(X(u), Y(v), Z(u,v)\\right)^\\top$, where $X$ and $Y$ are linear maps from $[0,1]$ onto the lens extent $[-r_x,r_x]$ and $[-r_y,r_y]$, respectively, and the height function $Z$ is a B-spline function of degree $p$ in $u$ and degree $q$ in $v$,\n\\begin{equation}\n Z(u,v) = \\sum_{i}\\sum_{j} N_{i,p}(u)N_{j,q}(v)P^z_{i,j},\n\\end{equation}\nwith $N_{i,p}$ and $N_{j,q}$ the B-spline basis functions defined on the knot vectors $\\{u_i\\}$ and $\\{v_j\\}$, and $P^z_{i,j}$ the $z$-coordinates of the control points $\\mathbf{P}_{i,j}$ \\citep{Piegl1995}. The control point heights $P^z_{i,j}$ are the parameters that are optimized. Since the flat entrance surface of the lens lies in the plane $z = z_\\text{in}$, the freeform surface must not intersect it. By the convex hull property of B-splines, $Z(u,v)$ is bounded by the minimum and maximum of the $P^z_{i,j}$, so it suffices to require\n\\begin{equation}\n P^z_{i,j} > z_\\text{in} \\quad \\forall (i,j). \\label{eq:nosurfintersect}\n\\end{equation}\nManufacturing can require that the lens has some minimal thickness $\\delta$, so that the constraint is stronger:\n\\begin{equation}\n P^z_{i,j} \\ge \\delta + z_\\text{in} \\quad \\forall (i,j).\n\\end{equation}\n\n\n\\subsection{Differentiable ray tracer} \\label{subsec:raytracer}\nOur implementation traces rays from a source through the flat lens surface and the freeform lens surface to the detector screen as depicted in Figs.~\\ref{fig:planetraceschematic} and \\ref{fig:pointtraceschematic}. Other ray paths, e.g., total internal reflection at lens surfaces, are not considered since it is assumed that the contribution of these to the resulting irradiance distribution is negligible.\n\n\\subsubsection{Sources and ray-sampling}\nNon-sequential ray tracing is a Monte-Carlo approximation method of the solution to the continuous integration formulation of light transport through an optical system. 
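As a simple illustration of this Monte-Carlo estimate, the power carried by the traced rays can be binned into detector pixels to approximate the irradiance; the NumPy sketch below is illustrative only (the pixel counts, a uniform power per ray, and the function name are assumptions), and a gradient-based pipeline would replace the hard binning by a smooth, differentiable alternative.
\\begin{verbatim}
import numpy as np

# Illustrative sketch: approximate the irradiance on the detector by binning
# the ray-detector intersection points into a pixel grid.
def irradiance_histogram(hits_xy, power_per_ray, extent, n_pix=(64, 64)):
    # hits_xy: (N, 2) intersection points with the detector plane
    (x_min, x_max), (y_min, y_max) = extent
    hist, _, _ = np.histogram2d(hits_xy[:, 0], hits_xy[:, 1], bins=n_pix,
                                range=[[x_min, x_max], [y_min, y_max]],
                                weights=np.full(len(hits_xy), power_per_ray))
    pixel_area = (x_max - x_min) * (y_max - y_min) / (n_pix[0] * n_pix[1])
    return hist / pixel_area   # flux per unit area in each pixel
\\end{verbatim}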
For a detailed discussion of this topic, see \\cite[ch. 14]{pharr2016physically}. Thus to perform ray tracing, the light emitted by a source must be discretized into a finite set of rays\n\\begin{equation}\n l: t \\rightarrow \\mathbf{o} + \\hat{\\mathbf{d}}t,\n\\end{equation}\nwhere $\\mathbf{o}$ is the origin of the ray and $\\hat{\\mathbf{d}}$ its normalized direction vector. Both collimated ray bundle and point sources will be considered, see Figs.~\\ref{fig:planetraceschematic} and \\ref{fig:pointtraceschematic}, respectively. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.6\\textwidth]{figures\/plane_trace_schematic.png}\n \\caption{Schematic of the ray tracing with a collimated ray bundle source.}\n \\label{fig:planetraceschematic}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.6\\textwidth]{figures\/point_trace_schematic.png}\n \\caption{Schematic of the ray tracing with a point source.}\n \\label{fig:pointtraceschematic}\n\\end{figure}\n\nTracing rays from a collimated ray bundle can be understood from Fig.~\\ref{fig:planetraceschematic}. The path of all rays from the source plane to the B-spline surface is a line segment parallel to the $z$-axis. Therefore, we can sample the incoming rays directly on the B-spline surface, with $\\hat{\\mathbf{d}} = (0,0,1)^\\top$. By the linearity of $X$ and $Y$ sampling on the B-spline domain $[0,1]^2$ is analogous to sampling on the lens extent $[-r_x,r_x]\\times [-r_y,r_y]$ in terms of distribution. Rays are sampled in a (deterministic) square grid on $[0,1]^2$. \n\nFor a point source, each ray starts at the location of the source, and the direction vector $\\hat{\\mathbf{d}}$ is sampled over the unit sphere $\\mathbb{S}^2$. More precisely, $\\hat{\\mathbf{d}}$ is given by\n\\begin{equation}\n \\hat{\\mathbf{d}} = \\left(\\cos\\theta\\sin\\phi,\\sin\\theta\\sin\\phi,\\cos\\phi\\right)^\\top,\n\\end{equation}\nwith $\\theta \\in [0,2\\pi)$ and $\\phi \\in [0,\\phi_\\text{max}]$ for some $0\\le \\phi_\\text{max} < \\frac{\\pi}{2}$, see Fig.~\\ref{fig:pointtraceschematic}. $\\phi_\\text{max}$ is chosen as small enough to minimize the number of rays that miss the lens entrance surface but large enough such that the whole surface is illuminated. For instance, if the source is on the $z$-axis, then $\\phi_\\text{max} = \\arctan\\left(\\frac{\\sqrt{r_x^2 + r_y^2}}{z_\\text{in}-z_s}\\right)$ where $z_\\text{in}$ is the $z$-coordinate location of the entrance surface and $z_s$ the $z$-coordinate of the source. To uniformly sample points on this sphere segment, $\\theta$ is sampled (non-deterministically) uniformly in $[0,2\\pi)$ and $\\phi$ is given by\n\\begin{equation}\n \\phi = \\arccos\\left(1-(1-\\cos\\phi_\\text{max})a\\right)\n\\end{equation}\nwhere $a$ is sampled (non-deterministically) uniformly in $[0,1]$. This sampling is used to produce the results in Section~\\ref{sec:results}.\n\nFor the point source, the calculation of the intersection of a ray with the B-spline surface is non-trivial. This calculation comes down to finding the smallest positive root of the $p+q$ degree piece-wise polynomial function\n\\begin{equation}\n f(t) = \n Z\\left(\\begin{pmatrix}o_u \\\\ o_v\\end{pmatrix} + \n \\begin{pmatrix}d_u \\\\ d_v\\end{pmatrix}t\n \\right)\n - d_zt - o_z, \\label{eq:surfaceintersect}\n\\end{equation}\nif such a root exists and yields a point in the domain of $Z$. 
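Before describing that algorithm, the following NumPy sketch illustrates the root-finding problem itself: it samples $f$ on a coarse grid in $t$ and refines the first sign change by bisection. This is not the intersection method used in this work (the triangle-mesh algorithm of the next section avoids evaluating $Z$ at many trial points per ray), and the sample counts as well as the omitted check that the intersection lies in the domain of $Z$ are simplifications.
\\begin{verbatim}
import numpy as np

# Naive illustration: smallest positive root of a continuous function f on (0, t_max].
def first_positive_root(f, t_max, n_samples=256, n_bisect=40):
    t = np.linspace(0.0, t_max, n_samples)
    ft = np.array([f(ti) for ti in t])
    idx = np.nonzero(ft[:-1] * ft[1:] <= 0.0)[0]      # intervals with a sign change
    if len(idx) == 0:
        return None                                   # no intersection found
    lo, hi = t[idx[0]], t[idx[0] + 1]
    for _ in range(n_bisect):                         # bisection refinement
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0.0 else (mid, hi)
    return 0.5 * (lo + hi)
\\end{verbatim}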
For the point source, the calculation of the intersection of a ray with the B-spline surface is non-trivial. This calculation comes down to finding the smallest positive root of the piecewise polynomial function of degree $p+q$
\begin{equation}
    f(t) = 
    Z\left(\begin{pmatrix}o_u \\ o_v\end{pmatrix} + 
    \begin{pmatrix}d_u \\ d_v\end{pmatrix}t
    \right)
    - d_zt - o_z, \label{eq:surfaceintersect}
\end{equation}
if such a root exists and yields a point in the domain of $Z$. Here the subscripts $u$ and $v$ denote that the ray is considered in $(u,v,z)$ space instead of $(x,y,z)$ space, so for instance
\begin{equation}
    o_u = X^{-1}(o_x) = \frac{1}{2}\left(\frac{o_x}{r_x}+1\right), \quad d_v = \frac{d_y}{2r_y}.
\end{equation}
The roots of Eq.~\ref{eq:surfaceintersect} cannot generally be found analytically for $p+q>4$, and thus an intersection algorithm is implemented, which is explained in the next section.

\subsubsection{B-spline surface intersection algorithm}
The intersection algorithm is based on constructing a triangle mesh approximation of the B-spline surface and computing intersections with that mesh.

\paragraph{Triangle mesh intersection phase 1: bounding boxes}
\begin{figure}
    \centering
    \includegraphics[width=0.6\textwidth]{figures/bb_per_knotspanproduct.png}
    \caption{Triangles and corresponding bounding box for a few knot span products of a spherical surface.}
    \label{fig:bb_per_knotspanproduct}
\end{figure}
\begin{figure}
    \centering
    \includegraphics[width=0.6\textwidth]{figures/uv_triangle_check.png}
    \caption{Example of which triangles are candidates for a ray-surface intersection with the ray plotted in red, based on their $u,v$-domain.}
    \label{fig:uvtriangleint}
\end{figure}
Checking every ray against every triangle for intersection is computationally expensive, so it is helpful to have bounding box tests that provide rough information about whether the ray is even near some section of the B-spline surface. B-spline theory provides a tool for this: the strong convex hull property, which yields the bounding box
\begin{equation}
    B_{i_0,j_0} = \left[u_{i_0},u_{i_0 + 1}\right) \times \left[v_{j_0},v_{j_0 + 1}\right) \times \left[z^{\min}_{i_0,j_0}, z^{\max}_{i_0,j_0}\right],
\end{equation}
where $z^{\min}_{i_0,j_0}$ and $z^{\max}_{i_0,j_0}$ are the minimum and maximum $z$-values of the control points that affect the B-spline surface on the knot span product $\left[u_{i_0},u_{i_0 + 1}\right) \times \left[v_{j_0},v_{j_0 + 1}\right)$, hence those with indices $i_0-p\le i\le i_0$, $j_0-q \le j\le j_0$. Formulated in terms of $Z(u,v)$ this yields
\begin{equation}
    z^{\min}_{i_0,j_0} \le Z(u,v) \le z^{\max}_{i_0,j_0}, \quad
    (u,v) \in \left[u_{i_0},u_{i_0 + 1}\right) \times \left[v_{j_0},v_{j_0 + 1}\right).
\end{equation}
Examples of such bounding boxes are shown in Fig.~\ref{fig:bb_per_knotspanproduct}.

There are two steps in applying the bounding boxes in the intersection algorithm. First, a test for the entire surface (in $(u,v,z)$-space):
\begin{equation}
    [0,1]^2 \times \left[\min_{i,j}P^z_{i,j},\max_{i,j}P^z_{i,j}\right].
\end{equation}
Second, a recursive method where, starting with all knot span products, each rectangle of knot span products is divided into at most 4 sub-rectangles for a new bounding box test until individual knot span products are reached.
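To make the use of the convex hull property concrete, the sketch below computes the $z$-bounds of one knot span product directly from the control net and performs a standard slab test of a ray against the resulting box in $(u,v,z)$-space. It is a minimal illustration with assumed names such as \verb|knot_span_z_bounds|; the recursive subdivision over rectangles of knot span products is omitted.
\begin{verbatim}
import numpy as np

def knot_span_z_bounds(Pz, p, q, i0, j0):
    """z-range of the control points that influence the knot span product
    [u_{i0}, u_{i0+1}) x [v_{j0}, v_{j0+1}): indices i0-p..i0 and j0-q..j0.
    Pz is the (n1+1) x (n2+1) array of control point z-coordinates."""
    window = Pz[i0 - p:i0 + 1, j0 - q:j0 + 1]
    return window.min(), window.max()

def ray_hits_box(o, d, lo, hi, eps=1e-12):
    """Slab test: does the ray o + t*d (t >= 0) hit the axis-aligned box
    [lo, hi] in (u, v, z)-space?"""
    t0, t1 = 0.0, np.inf
    for k in range(3):
        if abs(d[k]) < eps:                    # ray parallel to this slab
            if o[k] < lo[k] or o[k] > hi[k]:
                return False
        else:
            ta, tb = (lo[k] - o[k]) / d[k], (hi[k] - o[k]) / d[k]
            t0, t1 = max(t0, min(ta, tb)), min(t1, max(ta, tb))
    return t0 <= t1
\end{verbatim}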
\paragraph{Triangle mesh intersection phase 2: $(u,v)$-space triangle intersection}
Each non-trivial knot span product $[u_{i_0},u_{i_0+1}) \times [v_{j_0},v_{j_0+1})$ is divided into a grid of $n_u$ by $n_v$ rectangles. Thus we can define the boundary points
\begin{subequations}
    \begin{align}
        u_{i_0,k} =& u_{i_0} + k\Delta u_{i_0},
        \quad \Delta u_{i_0} = \frac{u_{i_0+1}-u_{i_0}}{n_u}, 
        \quad k = 0, \ldots, n_u, \\
        v_{j_0,\ell} =& v_{j_0}+ \ell \Delta v_{j_0}, \quad \Delta v_{j_0} = \frac{v_{j_0+1}-v_{j_0}}{n_v},
        \quad \ell = 0,\ldots, n_v.
    \end{align}
\end{subequations}

Each rectangle is divided into a lower left and an upper right triangle, as demonstrated in Fig.~\ref{fig:uvtriangleint}. This figure shows, for a ray projected onto the $(u,v)$-plane in some knot span, which triangles are candidates for an intersection in $(u,v,z)$-space. This is determined by the following rules:
\begin{itemize}
    \item A lower left triangle is intersected in the $(u,v)$-plane if either its left or lower boundary is intersected by the ray;
    \item an upper right triangle is intersected in the $(u,v)$-plane if either its right or upper boundary is intersected by the ray.
\end{itemize}

The intersection of these boundaries can be determined by finding the indices of the horizontal lines at which the vertical lines are intersected:
\begin{equation}
    \ell_k = \left\lfloor\frac{ o_v+(u_{i_0,k}-o_u)\frac{d_v}{d_u} - v_{j_0}}{\Delta v_{j_0}}\right\rfloor,
\end{equation}
and analogously $k_\ell$.

\paragraph{Triangle mesh intersection phase 3: $(u,v,z)$-space triangle intersection}
A lower left triangle can be expressed by a plane
\begin{equation}
    T(u,v) = Au + Bv + C,
\end{equation}
defined by the following linear system:
\begin{equation}
    \begin{pmatrix}
        u_{i_0,k} & v_{j_0,\ell} & 1 \\
        u_{i_0,k+1} & v_{j_0,\ell} & 1 \\
        u_{i_0,k} & v_{j_0,\ell+1} & 1
    \end{pmatrix}
    \begin{pmatrix}
        A \\ B \\ C
    \end{pmatrix}
    =
    \begin{pmatrix}[1.75]
        z_{i_0,k}^{j_0,\ell} \\ z_{i_0,k+1}^{j_0,\ell} \\ z_{i_0,k}^{j_0,\ell+1}
    \end{pmatrix}.
\end{equation}
Here we use the following definition:
\begin{equation}
    z_{i_0,k}^{j_0,\ell} = Z(u_{i_0,k},v_{j_0,\ell}).
\end{equation}
This yields the plane
\begin{align}
    T(u,v) ={}& z_{i_0,k}^{j_0,\ell} + n_u\left(z_{i_0,k+1}^{j_0,\ell}-z_{i_0,k}^{j_0,\ell}\right)\frac{u-u_{i_0,k}}{u_{i_0+1}-u_{i_0}} \nonumber\\ 
    &+ n_v\left(z_{i_0,k}^{j_0,\ell+1}-z_{i_0,k}^{j_0,\ell}\right)\frac{v-v_{j_0,\ell}}{v_{j_0+1}-v_{j_0}}. \label{eq:trianglefuncdetermined}
\end{align}
Note that to define this triangle, the B-spline basis functions are evaluated at fixed points in $[0,1]^2$ independent of the rays or the $P^z_{i,j}$. This means that for a lens that will be optimized these basis function values can be evaluated and stored only once rather than in every iteration, for computational efficiency.

Computing the intersection with the ray $\tilde{\mathbf{r}}(t) = \tilde{\mathbf{o}} + \tilde{\hat{\mathbf{d}}}t$ is now straightforward, and yields
\begin{equation}
    t_\text{int} = - \frac{C+\langle\tilde{\mathbf{o}}, \mathbf{n}\rangle}{\langle\tilde{\hat{\mathbf{d}}},\mathbf{n}\rangle}, \quad \mathbf{n} = 
    \begin{pmatrix} 0 \\ 1 \\ \partial_v T\end{pmatrix} \times
    \begin{pmatrix} 1 \\ 0 \\ \partial_u T\end{pmatrix} = 
    \begin{pmatrix}A \\ B \\ -1\end{pmatrix},
\end{equation}
where $\mathbf{n}$ is a normal vector to the triangle, computed using the cross product.
This also explains why $\\langle\\tilde{\\hat{\\mathbf{d}}},\\mathbf{n}\\rangle=0$ does not yield a well-defined result: in this situation the ray is parallel to the triangle.\n\nThe last thing to check is whether $\\tilde{l}(t_\\text{int})$ lies in the $(u,v)$-domain of the triangle, which can be checked by three inequalities for the three boundaries of the triangle:\n\n\\begin{subequations}\n \\begin{align}\n o_u + d_u t_\\text{int} \\ge u_{i_0,k} \\\\\n 0 \\leq o_v + d_v t_\\text{int} - v_{j_0,\\ell}< \\frac{n_u}{n_v}\\frac{v_{j_0+1}-v_{j_0}}{u_{i_0+1}-u_{i_0}}(u_{i_0,k+1}-(o_u + d_u t_\\text{int})).\n \\end{align}\n\\end{subequations}\n\nThe computation for an upper right triangle is completely analogous. The upper triangle has a closed boundary, whereas the lower triangle has an open one and vice versa, which means that the $(u,v)$ domains of the triangles form an exact partition of $[0,1]^2$. Thus the triangle mesh is `water-tight', meaning that no ray intersection should be lost by rays passing in between triangles.\n\n\\subsection{Image reconstruction}\nThe ray tracing produces an irradiance distribution in the form of an image matrix $\\mathcal{I} \\in \\mathbb{R}^{n_x \\times n_y}_{\\ge 0}$, where the elements correspond to a grid of rectangles called pixels that partition the detector screen positioned at $z=z_\\text{screen} > \\max_{i,j} P_{i,j}^z$. The screen resolution $(n_x,n_y)$ and the screen radii $(R_x,R_y)$ together yield the pixel size\n\\begin{equation}\n (w_x,w_y) = \\left(\\frac{2R_x}{n_x},\\frac{2R_y}{n_y}\\right).\n\\end{equation}\nFor reasons explained later in this section, sometimes a few `ghost pixels' are added, so the effective screen radii are\n\\begin{equation}\n R_x^* := R_x + \\frac{\\nu_x - 1}{2}w_x, \\quad\n R_y^* := R_y + \\frac{\\nu_y - 1}{2}w_y,\n\\end{equation}\nand the effective screen resolution is $(n_x + \\nu_x -1, n_y + \\nu_y - 1)$ where $\\nu_x$ and $\\nu_y$ are odd positive integers whose meaning will become clear later in this section.\n\n\nProducing the irradiance distribution from the rays that intersect the detector screen is called image reconstruction \\cite[sec. 7.8]{pharr2016physically}. The way that a ray contributes to a pixel with indices $i,j$ is governed by a reconstruction filter\n\\begin{equation}\n F_{i,j} : [-R_x,R_x] \\times [-R_y,R_y] \\rightarrow \\mathbb{R}_{\\ge 0},\n\\end{equation}\nyielding for the irradiance distribution\n\\begin{equation}\n \\mathcal{I}_{i,j} = \\sum_{k=1}^N \\omega_k F_{i,j}(\\mathbf{x}_k),\n\\end{equation}\nfor a set of ray intersections $\\{\\mathbf{x}_k\\}_{k=1}^N$ with corresponding final ray weights $\\{\\omega_k\\}_{k=1}^N$. The ray weights are initialized at the sampling of the ray at the source. They are slightly modified by the lens boundary interactions as a small portion of the light is reflected rather than refracted. The amount by which the ray weights are modified is governed by the Fresnel equations \\cite[sec. 2.7.1]{Fowles1975}. In our implementation, the Fresnel equations are approximated by Schlick's approximation \\cite[eq. 24]{Schlick1994}. In the current implementation, all ray weights are initialized equally. The precise value does not matter since the relationship between the initial and final weights is linear. 
The loss function (Section~\ref{lossfunc}) compares scaled versions of the produced and target irradiance distribution.

In the simplest reconstruction case, the value of a pixel is given by the sum of the weights of the rays that intersect the detector screen at that pixel (called box reconstruction in \cite[sec. 7.8.1]{pharr2016physically}). In this case the reconstruction filter of pixel $i,j$ is simply the indicator function of the pixel $\left[(i-1)w_x,iw_x\right) \times \left[(j-1)w_y,jw_y\right)$.

To obtain a ray tracing implementation where the irradiance $\mathcal{I}$ is differentiable with respect to geometry parameters of the lens, say, the parameter $\theta$, the irradiance distribution must vary smoothly with this parameter. The dependency on this parameter is carried from the lens to the screen by the rays through the screen intersections $\mathbf{x}_k = \mathbf{x}_k(\theta)$. Thus to obtain a useful gradient $\frac{\partial \mathcal{I}}{\partial \theta}$, the filter function $F_{i,j}$ should be at least $C^1$, see Fig.~\ref{fig:reconstructiondiffb}. This is achieved by introducing a filter function that spreads out the contribution of a ray over a kernel of pixels of size $(\nu_x,\nu_y)$ centered at the intersection location. For the conservation of light, we require that $\sum_{i,j}F_{i,j}(\mathbf{x}) \equiv 1$.
\begin{figure}
    \centering
    \includegraphics[width = 0.7\textwidth]{figures/reconstruction_diffb.png}
    \caption{$\mathbf{x}(\theta)$ in the left plot shows the intersection location of a ray with the screen, dependent on a lens geometry parameter $\theta$. The right plot then shows the reconstruction filter value for the green pixel in the left plot dependent on $\theta$. In order to obtain a useful gradient of the pixel value with respect to $\theta$, a smooth reconstruction filter is needed.}
    \label{fig:reconstructiondiffb}
\end{figure}

Therefore, the Gaussian reconstruction function is introduced, based on the identically named one described in \cite[sec. 7.8.1]{pharr2016physically}. This filter function is based on the product 
\begin{equation}
    \tilde{F}_{i,j}(x,y;\alpha,\nu_x,\nu_y) := f_{i}^x(x;\alpha,\nu_x)f_{j}^y(y;\alpha,\nu_y),
\end{equation}
where
\begin{equation}
    f_{i_0}^x(x;\alpha,\nu_x) = 
    \begin{cases}
        e^{-\alpha\left(x-c^x_{i_0}\right)^2} - e^{-\alpha\left(\frac{\nu_x w_x}{2}\right)^2} &\text{ if } \lvert x-c^x_{i_0}\rvert < \frac{\nu_x w_x}{2},\\
        0 & \text{otherwise.}
    \end{cases} \label{eq:filter1dim}
\end{equation}
The centers of the pixels are given by
\begin{equation}
    (c_i^x,c_j^y) := \left(\left(i + \textstyle\frac{1}{2}\right)w_x - R_x, \left(j + \textstyle\frac{1}{2}\right)w_y - R_y\right).
\end{equation}
Note that the support of $\tilde{F}_{i,j}$ is of size $\nu_xw_x$ by $\nu_yw_y$, the size of the kernel on the detector screen. The normalized reconstruction filter is then given by
\begin{equation}
    F_{i,j}(x,y;\alpha,\nu_x,\nu_y) = \frac{\tilde{F}_{i,j}(x,y;\alpha,\nu_x,\nu_y)}{\sum_{i',j'}\tilde{F}_{i',j'}(x,y;\alpha,\nu_x,\nu_y)}.
\end{equation}
The function $F_{i,j}$ is plotted in Fig.~\ref{fig:recfilter3d}. Note that the function is not differentiable at the boundary of its support, but this yields no problems in the optimization.
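The following PyTorch sketch illustrates the filter and the accumulation of a single ray into the image matrix. It is a minimal illustration under simplifying assumptions, not our actual implementation: the filter is evaluated on all pixels instead of only the $\nu_x\times\nu_y$ kernel, the ghost pixels are ignored, and the intersection $(x,y)$ (which may carry gradient information from the lens parameters) is assumed to lie on the screen.
\begin{verbatim}
import math
import torch

def f_1d(x, centers, alpha, nu, w):
    # One-dimensional factor of Eq. (filter1dim): a truncated Gaussian around
    # each pixel centre, identically zero outside a window of nu pixel widths.
    half = 0.5 * nu * w
    dist = x - centers
    val = torch.exp(-alpha * dist ** 2) - math.exp(-alpha * half ** 2)
    return torch.where(dist.abs() < half, val, torch.zeros_like(val))

def splat_ray(image, x, y, weight, R_x, R_y, alpha=1.0, nu=3):
    # Spread one ray with screen intersection (x, y) and weight over the image
    # using the normalized Gaussian reconstruction filter.
    n_x, n_y = image.shape
    w_x, w_y = 2.0 * R_x / n_x, 2.0 * R_y / n_y
    cx = (torch.arange(n_x, dtype=image.dtype) + 0.5) * w_x - R_x  # pixel centres
    cy = (torch.arange(n_y, dtype=image.dtype) + 0.5) * w_y - R_y
    F = f_1d(x, cx, alpha, nu, w_x)[:, None] * f_1d(y, cy, alpha, nu, w_y)[None, :]
    return image + weight * F / F.sum()   # normalization conserves the ray weight
\end{verbatim}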
\begin{figure}
    \centering
    \includegraphics{figures/Gaussianfilter.png}
    \caption{Gaussian reconstruction filter $F_{i_0,j_0}$ for $\alpha = 1$ and $(\nu_x,\nu_y)=(3,3)$.}
    \label{fig:recfilter3d}
\end{figure}

\begin{figure}
    \centering
    \includegraphics[width = \textwidth]{figures/image_reconstr_examples.png}
    \caption{Image reconstruction based on a small set of ray-screen intersections, for bincount (box) reconstruction and various reconstruction filter sizes, with $\alpha = 1$.}
    \label{fig:imagereconstr}
\end{figure}

Gaussian image reconstruction is shown in Fig.~\ref{fig:imagereconstr} for various values of $\nu_x = \nu_y$. There is a trade-off here: the larger $\nu_x$ and $\nu_y$ are, the blurrier the resulting image is and the larger the computational graph becomes, but also the larger the section of the image that is aware of a particular ray, which yields more informative gradients.

Up to this point, this section has discussed the ray tracing part of the pipeline; the next subsections will discuss the role of the neural network and the optimization.

\subsection{Multi-layer perceptron as optimization accelerator} \label{subsec:nnarchitectures}
\begin{figure}
    \centering
    \includegraphics[width = 0.6\textwidth]{figures/nn_architecture_dense.png}
    \caption{The dense multi-layer perceptron architecture based on the size of the control net $(n_1+1)\times(n_2+1)$.}
    \label{fig:densenn}
\end{figure}

Several neural network architectures are considered, all with a trivial input of 1, meaning that the neural networks will not, strictly speaking, be used to approximate a function since the considered domain is trivial. Non-trivial network inputs of system parameters like the source location will probably be part of follow-up research.

In this configuration, the neural network can be considered a transformation of the space over which we optimize: from the space of trainable neural network parameters to the space of control point $z$-coordinate values.
The goal in choosing the network architecture is that optimizing its trainable parameters yields better training behavior than optimizing the control point $z$-coordinate values directly. The used networks are multi-layer perceptrons (MLPs), feed-forward networks consisting of several layers of neurons, as depicted in Fig.~\ref{fig:densenn}. The considered architectures are:
\begin{enumerate}
    \item No network at all. \label{item:nonetcase}
    \item A sparse MLP where the sparsity structure is informed by the overlap of the B-spline basis function supports on the knot spans. In other words: this architecture aims to let precisely those control points `communicate' within the network that share influence on some knot span product on the B-spline surface, yielding a layer with the same connectivity as a convolutional layer with kernel size $(2p+1,2q+1)$. However, each connection has its own weight and each kernel its own bias, instead of only having a weight per element of the convolution kernel and one single bias for all kernels.
    \item Larger fully connected architectures are also considered, with 3 layers of control net size (a minimal sketch of this variant is given after this list). Note that two consecutive such layers yield many weight parameters: $n^4$ for a square control net with `side length' $n$.
\end{enumerate}
The activation function used for all neurons is the hyperbolic tangent, which is motivated below.
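A minimal PyTorch sketch of the fully connected variant could look as follows; the class name and the exact layer sizes are illustrative assumptions, not a description of our implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class DenseControlPointNet(nn.Module):
    """Constant scalar input mapped through tanh layers of control-net size;
    the output Y has one entry per control point and lies in (-1, 1)."""
    def __init__(self, n1, n2, hidden_layers=3):
        super().__init__()
        m = (n1 + 1) * (n2 + 1)
        sizes = [1] + [m] * hidden_layers + [m]
        layers = []
        for fan_in, fan_out in zip(sizes[:-1], sizes[1:]):
            layers += [nn.Linear(fan_in, fan_out), nn.Tanh()]
        self.net = nn.Sequential(*layers)
        self.n1, self.n2 = n1, n2

    def forward(self):
        y = self.net(torch.ones(1))          # trivial input of 1
        return y.view(self.n1 + 1, self.n2 + 1)
\end{verbatim}
The final \verb|Tanh| keeps the output in $(-1,1)$, matching the output correction discussed in the next subsection.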
\subsubsection{Control point freedom} \label{subsec:controlpointfreedom}
Control over the range of values that can be assumed by the control point $z$-coordinates is essential to make sure that the system stays physical (as mentioned in Section~\ref{subsubsec:lensconstraints}), but also to be able to take into account restrictions imposed on the lens as part of mechanical construction in a real-world application. Note that the restriction $P_{i,j}^z > z_\text{in}$ for the control points being above the lens entrance surface is not critical for a collimated ray bundle simulation, since the entrance surface can be moved arbitrarily far in the $-z$ direction without affecting the ray tracing.

Since the final activation function $\tanh$ has finite range $(-1,1)$, this can easily be mapped to a desired interval $(z_{\min},z_{\max})$:
\begin{equation}
    y_{i,j} \mapsto z_{\min} + \textstyle\frac{1}{2} (y_{i,j} + 1)(z_{\max} - z_{\min}), \label{eq:outputcorrection}
\end{equation}
which can even vary per control point if desired. Here $y_{i,j}$ denotes an element of the total output $Y$ of the network.
The above can also be used as an offset from certain fixed values:
\begin{equation}
    y_{i,j} \mapsto f\left(P^x_{i,j},P^y_{i,j}\right) + z_{\min} + \textstyle\frac{1}{2} (y_{i,j} + 1)(z_{\max} - z_{\min}). \label{eq:outputcorrectionwfunc}
\end{equation}
If $Y \approx 0$, the resulting B-spline surface approximates the surface given by $f(x,y) + \textstyle\frac{1}{2}(z_{\max}+z_{\min})$, which can be used to optimize a lens that is globally at least approximately convex or concave. The choice of the hyperbolic tangent activation function accommodates this: since this activation function is smooth around its fixed point $0$, initializing the weights and biases of the network close to $0$ produces no cumulative value-increasing effect in a forward pass through the network, so that indeed $Y\approx 0$ in this case.

For comparability, in the case without a network, the optimization is not performed directly on the control point $z$-coordinates. Instead, for each control point, a new variable for optimization is created, which is passed through the activation function and the correction as in Eq.~\ref{eq:outputcorrection} or \ref{eq:outputcorrectionwfunc} before being assigned to the control point.
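In code, this output correction is a one-line mapping; the following sketch (with an assumed function name) covers both Eq.~\ref{eq:outputcorrection} and Eq.~\ref{eq:outputcorrectionwfunc}.
\begin{verbatim}
def correct_output(Y, z_min, z_max, offset=None):
    """Map a network output Y in (-1, 1) to control point z-coordinates in
    (z_min, z_max); 'offset' optionally adds a fixed surface f(P^x, P^y)."""
    Pz = z_min + 0.5 * (Y + 1.0) * (z_max - z_min)
    return Pz if offset is None else offset + Pz
\end{verbatim}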
\subsection{The optimization}
\label{lossfunc}
The lens is optimized such that the irradiance distribution $\mathcal{I}$ projected by the lens approximates a reference image $\mathcal{I}_\text{ref}$, where $\mathcal{I},\mathcal{I}_\text{ref}\in \mathbb{R}^{n_x\times n_y}_{\ge 0}$. The loss function that quantifies the difference between the two uses the normalized matrices:
\begin{equation}
    \widehat{\mathcal{I}} = \frac{\mathcal{I}}{\sum_{i,j}^{n_x,n_y} \mathcal{I}_{i,j}} 
    \quad
    \mathrm{and}
    \quad
    \widehat{\mathcal{I}}_\text{ref} = \frac{\mathcal{I}_\text{ref}}{\sum_{i,j}^{n_x,n_y} \mathcal{I}_{\mathrm{ref},i,j}}.
\end{equation}

\noindent
The loss function is given by
\begin{equation}
    \mathcal{L}(\mathcal{I};\mathcal{I}_\text{ref}) = \frac{1}{\sqrt{n_x n_y}}
    \left\| \widehat{\mathcal{I}}-\widehat{\mathcal{I}}_\text{ref} \right\|_F 
    \label{eq:pipelineLoss},
\end{equation}
where $\| \cdot \|_F$ is the Frobenius or matrix norm, which is calculated as follows:
\begin{equation}
    \| \mathcal{A} \|_F = \sqrt{\sum_{i}^{n_x}\sum_{j}^{n_y} \lvert a_{i,j} \rvert ^2}.
\end{equation}
Fig.~\ref{fig:optimizationloop} shows the conventional stopping criterion of the loss value being smaller than some $\varepsilon > 0$, but in our experiments, we use a fixed number of iterations.

The neural network parameters (weights and biases) are updated using the Adam optimizer \citep{Kingma2014} by back-propagation of the loss to these parameters.
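Put together, one iteration of the pipeline takes roughly the following form. This is a schematic sketch under assumed names: \verb|trace_rays| stands in for the differentiable ray tracer of Section~\ref{subsec:raytracer}, \verb|net| for one of the architectures of Section~\ref{subsec:nnarchitectures}, and the optimizer would be constructed as, e.g., \verb|torch.optim.Adam(net.parameters())|.
\begin{verbatim}
import math
import torch

def loss_fn(I, I_ref):
    """Normalized irradiance comparison of Eq. (pipelineLoss)."""
    n_x, n_y = I.shape
    I_hat = I / I.sum()
    I_ref_hat = I_ref / I_ref.sum()
    return torch.linalg.norm(I_hat - I_ref_hat) / math.sqrt(n_x * n_y)

def optimization_step(net, optimizer, I_ref, z_min, z_max, trace_rays):
    optimizer.zero_grad()
    Y = net()                                       # network output in (-1, 1)
    Pz = z_min + 0.5 * (Y + 1.0) * (z_max - z_min)  # output correction
    I = trace_rays(Pz)                              # differentiable irradiance
    loss = loss_fn(I, I_ref)
    loss.backward()                                 # back-propagate to net parameters
    optimizer.step()                                # Adam update
    return loss.item()
\end{verbatim}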
\section{Results} \label{sec:results}

Several results produced with the optimization pipeline discussed in the previous sections are displayed and discussed in this section. The implementation mainly uses PyTorch, a Python wrapper of Torch \citep{Collobert2002Torch}.

None of the optimizations performed for this section took more than a few hours to complete, on a \verb|HP ZBook Power G7 Mobile Workstation| with a \verb|NVIDIA Quadro T1000 with Max-Q Design| GPU.

Most of the results have been validated with \emph{LightTools} \citep{LightTools}, an established ray tracing software package in the optics community. Lens designs were imported to LightTools as a point cloud, then interpolated to obtain a continuous surface, and all simulations were conducted using $10^6$ rays.

Units of length are mostly unspecified since the obtained irradiance distributions are invariant under uniform scaling of the optical system. This invariance to scaling is reasonable as long as the lens details are orders of magnitude larger than the wavelength of the incident light such that diffraction effects do not play a role. Furthermore, the irradiance distributions are directly proportional to the scaling of all ray weights and thus the source power, so the source and screen power also need no unit specification. Note that relative changes have a non-trivial effect, like changes to the power proportion between sources or the distance proportions of the optical system.

\newpage
\subsection{Irradiance derivatives with respect to a control point}
\begin{figure}
    \centering
    \includegraphics[width=0.75\textwidth]{figures/results/controlpoint_derivatives.png}
    \caption{Gradients of an irradiance distribution of a collimated ray bundle through a flat lens (parallel sides), with respect to the $z$-coordinate of one control point. The zeros are masked with white to show the extent of the influence of the control point. These irradiance distributions differ by: (a): degrees $(3,3)$, reconstruction filter size $(3,3)$, (b): degrees $(3,3)$, reconstruction filter size $(11,11)$, (c): degrees $(5,3)$, reconstruction filter size $(3,3)$.}
    \label{fig:controlpointderivs}
\end{figure}

\begin{figure}
    \centering
    \includegraphics[width=0.9\textwidth]{figures/results/renderderivsexplanation.png}
    \caption{Demonstration of how one control point influences the irradiance distribution in the case of a flat lens with B-spline degrees $(3,3)$ and a collimated ray bundle source.}
    \label{fig:renderderivsexplanation}
\end{figure}
This section gives a simple first look at the capabilities of the implemented differentiable ray tracer: computing the derivative of an irradiance distribution with respect to a single control point. Obtaining this data is inefficient in the current PyTorch implementation as a forward mode automatic differentiation pass is required, which is not currently (entirely) supported by PyTorch. Therefore these derivatives are computed with pixel-wise back-propagation.

Fig.~\ref{fig:controlpointderivs} shows the derivative of an irradiance distribution produced by a collimated ray bundle through a flat lens for various B-spline degrees and reconstruction filter sizes, and Fig.~\ref{fig:renderderivsexplanation} shows what one of these systems looks like. The overall `mountain with a surrounding valley' structure can be understood as follows: as one of the control points rises, it creates a local convexity in the otherwise flat surface. This convexity has a focusing effect, redirecting light from the negative valley region toward the positive mountain region.

Noteworthy of these irradiance derivatives is also their total sum: (a) $\SI{-1.8161e-08}{}$, (b) $\SI{3.4459e-08}{}$, (c) $\SI{9.7095e-05}{}$. These numbers are small with respect to the total irradiance of about $93$ and therefore indicate conservation of light; as the control point moves out of the flat configuration, at first, the total amount of power received by the screen will not change much. This is expected from cases (a) and (b), where the control point does not affect rays that reach the screen on the boundary pixels. However, in all cases, all rays intersect the lens at right angles. Around $\theta = 0$, the slope of Schlick's approximation is very shallow, indicating a small decrease in refraction in favor of reflection.

\subsection{Sensitivity of the optimization to initial state and neural network architecture} \label{subsec:results_nnsensitivity}
As with almost any iterative optimization procedure, choosing a reasonable initial guess of the solution is crucial for reaching a good local/global minimum. For training neural networks, this comes down to how the network weights and biases are initialized. In this section, we look at three target illuminations: the circular top hat distribution (Fig.~\ref{fig:circtophat}), the TU Delft logo (Fig.~\ref{fig:TUDflameinv}), and an image of a faceted ball (Fig.~\ref{fig:facball}). For some experiments, black padding or Gaussian blurring is applied to these images.
We design lenses to produce these distributions from a collimated ray bundle, given various neural network architectures (introduced in Section~\ref{subsec:nnarchitectures}) and parameter initializations.
\begin{figure}
    \centering
    \includegraphics[width=0.4\textwidth]{figures/results/circular_tophat.png}
    \caption{The circular tophat target illumination.}
    \label{fig:circtophat}
\end{figure}

\begin{figure}
    \centering
    \includegraphics[width=0.4\textwidth]{figures/TUD_400_inverse.png}
    \caption{The TU Delft flame target illumination.}
    \label{fig:TUDflameinv}
\end{figure}

\begin{figure}
    \centering
    \includegraphics[width = 0.4\textwidth]{figures/results/shaded_faceted_ball.png}
    \caption{The faceted ball target illumination.}
    \label{fig:facball}
\end{figure}

\paragraph{Circular top hat distribution from collimated ray bundle} \label{subsec:circtophat}
Fig.~\ref{fig:tophatloss} shows the progress of the loss over 1000 iterations, with each iteration taking $2.5$ seconds, for various neural network architectures and parameter initialization combinations. For the other parameters in these simulations, see the supplementary information. The resulting freeform surfaces and irradiance distributions at a few points during the training are shown in Figs.~\ref{fig:lensshapes}, \ref{fig:tophatrandomsparse}, \ref{fig:tophatunifsparse}, \ref{fig:tophatunifnonet} and \ref{fig:tophatunifdense}. Uniform here means that the initial trainable parameter values are sampled from a small interval: $U\left(\left[-10^{-4},10^{-4}\right]\right)$, except for the no-network case; this is initialized with all zeros.

The first notable difference is between the random and uniformly initialized sparse neural networks. The uniformly initialized network performs much better, as does the no-network case. This is probably because the uniformly initialized cases converge to a better (local) minimum than the randomly initialized case. Of course, it could happen that the random initialization lands in a very favorable spot in the design landscape, but intuitively this seems very unlikely.

Another property of the uniformly initialized cases is their preservation of symmetry in these setups. As Fig.~\ref{fig:lensshapes} shows, this leads to much simpler lenses, which are probably much less sensitive to manufacturing errors due to their relative lack of small detail. What is interesting to note here is that if the sparse network is initialized with all parameters set to $0$, then its optimization is identical to the no-network case, as only the biases in the last layer achieve non-zero gradients.

No rigorous investigation has been conducted into whether this behavior of increased convergence speed carries over to other target distributions and system configurations, and into what the optimal hyper-parameters are. A thorough investigation of the hyper-parameter space that defines a family of network architectures could reveal at what architecture complexity diminishing returns for optimizing these lenses set in.
However, based on these initial findings, the fully connected network is used for all the following optimizations in the results.

\begin{figure}[h]
    \centering
    \includegraphics[width=0.7\textwidth]{figures/results/tophat_loss_progress.png}
    \caption{Loss progress over the iterations for various pipeline-setups for forming a tophat distribution from a collimated ray bundle.}
    \label{fig:tophatloss}
\end{figure}

\begin{figure}
    \centering
    \includegraphics[width=0.9\textwidth]{figures/results/tophat_lens.png}
    \caption{The lens height field after initialization ($n=0$), and $n=50,100$ and $1000$ iterations respectively, for different network architectures (Section~\ref{subsec:nnarchitectures}) and network parameter initializations (Section~\ref{subsec:circtophat}).}
    \label{fig:lensshapes}
\end{figure}

\begin{figure}[p]
    \centering
    \includegraphics[width=0.9\textwidth]{figures/results/tophat_randomsparse.png}
    \caption{Irradiance distributions and pixel-wise errors in the optimization progress of a random lens with a sparse network towards a circular tophat illumination.}
    \label{fig:tophatrandomsparse}
\end{figure}

\begin{figure}[p]
    \centering
    \includegraphics[width=0.9\textwidth]{figures/results/tophat_unifsparse.png}
    \caption{Irradiance distributions and pixel-wise errors in the optimization progress of a flat lens with a sparse network towards a circular tophat illumination.}
    \label{fig:tophatunifsparse}
\end{figure}

\begin{figure}[p]
    \centering
    \includegraphics[width=0.9\textwidth]{figures/results/tophat_unifnonet.png}
    \caption{Irradiance distributions and pixel-wise errors in the optimization progress of a flat lens without a network towards a circular tophat illumination.}
    \label{fig:tophatunifnonet}
\end{figure}

\begin{figure}[p]
    \centering
    \includegraphics[width=0.9\textwidth]{figures/results/tophat_unifdense.png}
    \caption{Irradiance distributions and pixel-wise errors in the optimization progress of a flat lens with a dense network towards a circular tophat illumination.}
    \label{fig:tophatunifdense}
\end{figure}

\clearpage
\paragraph{TU flame and faceted ball from collimated ray bundle}
In what follows, we consider more complex target distributions: the TU Delft flame (for a complex shape) and a faceted ball (for a target with various brightness levels). Here we still use the collimated ray bundle illumination, but lenses are now optimized for various magnifications; see Table~\ref{tab:magndata}. These magnifications are defined as the scaling of the screen size with respect to the smallest screen size $(0.64,0.64)$. The other parameters of these optimizations are shown in the supplementary information. Each of these iterations took about $4$ seconds.

The final irradiance distributions and corresponding LightTools results are shown in Figs.~\ref{fig:flamerenders} and \ref{fig:facetedballrenders}, respectively. These figures show that the optimization pipeline can handle these more complex target illuminations well. The LightTools results predict some artifacts within the irradiance distribution, which the implemented ray tracer does not, especially in the TU flame magnification 1 case.
By visual inspection, based on the LightTools results, one would probably rate these results in the exact opposite order to that indicated by the losses shown in Fig.~\ref{fig:ManufacturingLoss}.

A potential explanation of the increase in loss with the magnification factor in Fig.~\ref{fig:ManufacturingLoss} is that the bigger the screen is, the higher the angles the rays require to reach its edges, as is apparent for magnifications 3 and 5 in Fig.~\ref{fig:ManufacturingRays}. This results in a larger sensitivity of the irradiance to the angle with which a ray leaves the lens. This in turn gives larger gradients of the irradiance with respect to the control points. Therefore the optimization takes larger steps in the neural network parameter space, possibly overshooting points that result in a lower loss.

For magnifications $3$ and $5$, the irradiance distributions from LightTools show artifacts at the screen boundaries. A possible explanation for this is that the way the B-spline surfaces are transferred to LightTools is inaccurate at the surface boundaries.\footnote{Assuming only rays from the B-spline surface boundaries reach the screen boundary area.} This is because LightTools infers surface normals from fewer points on the B-spline surface at the boundary than in the middle of the surface.

Furthermore, a significant number of rays are lost during optimization because the target illuminations are black at the borders, so rays near the screen boundary will be forced off the screen by the optimization. Once a ray misses the screen, it no longer contributes to the loss function, and the patch on the B-spline surface such rays originate from does not influence the irradiance and, thus, the loss function.
However, this does not mean that this patch is idle for the rest of the optimization, as this patch can be in the support of a basis function that corresponds to a control point that still affects rays that hit the screen. Therefore, the probability of getting idle lens patches with this setup decreases with the B-spline degrees, since these determine the size of the support of the B-spline basis functions. This might, in some cases, lead to oscillatory behavior, with rays alternating between hitting and missing the screen.

Fig.~\ref{fig:ManufacturingSurfaces} shows the optimized B-spline lens surface height field.
A densely varying color map is chosen since the deviations from a flat or smooth concave shape are quite subtle. This is due to the large sensitivity of the ray-screen intersections to the lens exit angle, since the ratio of lens size to screen size is large with respect to the ratio of lens size to screen distance.

\begin{table}[h]
    \centering
    {
    \renewcommand{\arraystretch}{1.5}
    \begin{tabular}{c|c|c|c}
        \textbf{Magnification} & \textbf{Screen size} & $f(x,y)$ & \textbf{Starting shape type}\\
        \hline
        $1$ & $(0.64,0.64)$ & $\textstyle\frac{1}{2}$ & flat\\
        $3$ & $(1.92,1.92)$ & $\textstyle\frac{1}{2} + 8 - \sqrt{8^2-x^2-y^2}$ & concave\\
        $5$ & $(3.20,3.20)$ & $\textstyle\frac{1}{2} + 4 - \sqrt{4^2 - x^2 - y^2}$ & concave
    \end{tabular}
    }
    \caption{The screen size and control point offset function $f$ used per magnification in the TU flame and faceted ball optimizations (distances in centimeters).}
    \label{tab:magndata}
\end{table}


\begin{figure}[h]
    \centering
    \includegraphics[width=0.7\textwidth]{figures/results/Manufacturing_Loss.png}
    \caption{Loss progress for the various magnifications and target distributions.}
    \label{fig:ManufacturingLoss}
\end{figure}

\clearpage

\begin{figure}[p]
    \centering
    \includegraphics[width=\textwidth]{figures/results/Manufacturing_flames.png}
    \caption{Implementation and LightTools irradiance distributions of the TU flame target from the final lens design of the optimization.}
    \label{fig:flamerenders}
\end{figure}

\begin{figure}[p]
    \centering
    \includegraphics[width=\textwidth]{figures/results/Manufacturing_balls.png}
    \caption{Implementation and LightTools irradiance distributions of the faceted ball target from the final lens design of the optimization.}
    \label{fig:facetedballrenders}
\end{figure}

\begin{figure}[p]
    \centering
    \includegraphics[width=\textwidth]{figures/results/Manufacturing_surfaces.png}
    \caption{The lens designs for the different magnifications and two target distributions.}
    \label{fig:ManufacturingSurfaces}
\end{figure}

\begin{figure}[p]
    \centering
    \includegraphics[width=\textwidth]{figures/results/Manufacturing_rays.png}
    \caption{$25\times25$ traced rays through the final lens designs for the different magnifications and two target distributions.}
    \label{fig:ManufacturingRays}
\end{figure}




\clearpage
\subsection{Optimization with a point source and a grid of point sources}
We now consider an optimization that uses the B-spline intersection algorithm. First, we design a lens with one point source at $(0,0,-5)$ with $5\times 10^5$ rays to again form the TU flame. Then after $\sim 200$ iterations, we change the source to an equispaced grid of $25\times 25$ point sources with $10^3$ rays each on $[-1,1]\times[-1,1]\times\{-5\}$, approximating a source of non-negligible size. The other (hyper)parameters of this optimization are shown in the supplementary information. Due to the additional B-spline intersection procedures, each iteration takes approximately $50$ seconds.
The resulting final irradiance distribution and LightTools verifications can be seen in Fig.~\ref{fig:point_source_renders}. The final irradiance distribution is similar to that obtained by LightTools, indicating that ray tracing with the implemented B-spline intersection algorithm works correctly.
The irradiance distributions are blurred due to the reconstruction filter.
The single point source optimization performs well, although the illumination is less uniform than in the collimated ray bundle case (Figs.~\ref{fig:flamerenders} and \ref{fig:facetedballrenders}).
The non-uniformity can be attributed to the Gaussian reconstruction filter used during optimization, as it smoothes out the small non-uniformities.


As seen in Fig.~\ref{fig:point_source_renders}, the irradiance distribution obtained with a grid of point sources approximates the extended source illumination distribution quite well for the unoptimized case.
Finding a lens design that redirects light from a source of non-negligible size into a desired irradiance distribution is a complex problem for which it is hard to indicate how good the optimal irradiance distribution can become.
The progress of the loss, as seen in Fig.~\ref{fig:point_source_loss}, shows that the optimization can still improve the loss, even after the transition to the grid of point sources. Interestingly, looking at Fig.~\ref{fig:point_source_renders} again, the optimization seems to adopt the coarse strategy of filling up the target distribution with images of the source square, as shown in Fig.~\ref{fig:sourceimages}. This strategy does hinder the possible quality of the final irradiance distribution, as the image of the source on the target is larger than the fine details in the desired irradiance. Optimizing both the front and back surfaces of the freeform could resolve this issue, as this will cause the image of the source to change shape depending on where it ends up on the target screen.

\begin{figure}
    \centering
    \includegraphics{figures/results/source_images_indication.png}
    \caption{Indication of images of the source square in the irradiance distribution obtained by LightTools using the point source grid.}
    \label{fig:sourceimages}
\end{figure}

\begin{figure}[p]
    \centering
    \includegraphics[width=\textwidth]{figures/results/point_source_renders.png}
    \caption{The final irradiance distribution of the lens optimizations with point sources and the corresponding LightTools verifications. The extended source is not implemented in our ray tracer, but is approximated by the point source grid.}
    \label{fig:point_source_renders}
\end{figure}

\begin{figure}[p]
    \centering
    \includegraphics[width=0.75\textwidth]{figures/results/point_source_surfaces.png}
    \caption{Height fields of the lenses optimized for the TU flame with point sources.}
    \label{fig:point_source_surfaces}
\end{figure}

\begin{figure}[p]
    \centering
    \includegraphics[width= 0.6\textwidth]{figures/results/point_source_loss.png}
    \caption{Loss over the iterations optimizing for the TU flame. The system is initiated with a point source, and after $\sim 200$ iterations the point source is replaced by an equispaced grid of $25\times 25$ point sources.}
    \label{fig:point_source_loss}
\end{figure}

\section{Conclusion} \label{sec:conclusion}

We demonstrated that non-sequential differentiable ray tracing is a viable tool for designing freeform lenses for collimated ray bundles, point sources, and extended sources.
Using a B-spline allows for the design of a continuous surface, which is desirable for manufacturing, and its control points allow for locally altering the irradiance distribution.
For both the collimated ray bundle and the point source case, lens designs were found that could accurately project the desired irradiance distribution in both the differentiable ray tracer and the commercial software LightTools. Some artifacts still exist, and resolving this issue will be part of further research.

For the source with a finite extent, the optimizer improved upon the design obtained for a point source. However, the final irradiance distribution was made up of images of the source, which hinders the minimum that can be obtained, as the source image is larger than the details in the desired irradiance distribution. This issue can be resolved by optimizing multiple surfaces simultaneously, as the image of the source on the target plane can then be optimized to vary with location.

Using a neural network to remap the optimization space provides an interesting way to increase the convergence speed of the optimization. However, further investigation is required to see whether this holds in general and what the effect of other network architectures is.

The developed ray tracing implementation is currently a proof of concept and needs to be optimized for speed. The B-spline intersection algorithm, in particular, adds roughly a factor of $10$ to the computation time. A significant speedup can be achieved here by leveraging efficient lower-level GPU programming languages, such as CUDA.


\section{Acknowledgements}
We acknowledge support by the NWO-TTW Perspectief program (P15-36) ``Free-form scattering optics''.
\nNon-sequential ray tracers such as LightTools~\\citep{LightTools} and Photopia~\\citep{photopia} use many rays to simulate the optical flux through the system and share similarities with the rendering procedures in computer graphics, with the main difference being that the rays are traced from source to camera.\n\nAlgorithmically differentiable ray tracing, a generalization of differential ray tracing \\citep{feder_differentiation_1968, stone_differential_1997, oertmann_differential_1989, chen_second-order_2012}, is a tool that is being developed for both sequential~\\citep{Sun2021, Volatier2017} and non-sequential \\citep{Mitsuba2} ray tracing.\n\\emph{Differential ray tracing} obtains system parameter gradients using numerical or algebraic differentiation. The gradient can be calculated numerically using numerical differentiation or the adjoint method \\citep{givoli_tutorial_2021}, requiring the system to be ray traced twice, once for its current state and once with perturbed system parameters. Analytic expressions for the gradient can be obtained by tracing the rays analytically through the system, calculating where the ray intersects the system's surfaces and how the ray's trajectory is altered. However, these expressions can become long and complicated depending on the system. In addition, the method is limited to optics described by conics as finding analytic ray surface intersection with surfaces of higher degrees becomes complicated or even impossible. Algorithmic differentiable ray tracing can handle these issues by obtaining the gradients with one single forward simulation for an almost arbitrary system. In addition, it can be seamlessly integrated into gradient-descent-based optimization pipelines. A modern framework for this is \\emph{Physics Informed Machine Learning} \\citep{Karniadakis2021}, where a neural network is trained to approximate the solution to a physics problem formulated using data, a set of differential equations, or an implemented physics simulation (or a combination of these). \n\nWe investigate the reliability of designing freeform lenses with B-spline surfaces \\citep{Piegl1995} using algorithmically differentiable non-sequential ray tracing and gradient-based optimization to redirect the light of a light source into a prescribed irradiance distribution. The source models will be the collimated light source, point source, and finally, sources with a finite extent. The results are validated using the commercial ray trace program LightTools \\citep{LightTools}. In addition, we investigate the effectiveness of optimizing a network to determine the optimal B-spline control points as proposed in \\citep{Moller2021PIML} and \\citep{GASICK2023115839}, and compare it to optimizing the control points directly and seeing the possible speed-up.\n\n\\section{Gradient-based freeform design}\nThe overall structure of our pipeline is depicted in Fig.~\\ref{fig:optimizationloop}. \nA freeform surface is defined by the parameters $P \\in \\mathscr{P}$, where $\\mathscr{P}$ is the set of permissible parameter values. This surface is combined with a flat surface to create a lens, and an irradiance distribution $\\mathcal{I}$ is produced by tracing rays through the lens onto a screen. The irradiance distribution is compared to a target $\\mathcal{I}_\\text{ref}$ yielding a loss $\\mathscr{L}(\\mathbf{P};\\mathcal{I}_\\text{ref})$. 
The optimization problem we are trying to solve can then be formulated as\n\\begin{equation}\n \\min_\\mathbf{P \\in \\mathscr{P}} \\; \\mathscr{L}(\\mathbf{P};\\mathcal{I}_\\text{ref}),\n\\end{equation}\nwhich we solve by using gradient descent.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = \\textwidth]{figures\/Caustic_design_optimization.png}\n \\caption{Overview of our learning-based freeform design pipeline.}\n \\label{fig:optimizationloop}\n\\end{figure}\n\nThe freeform surface of the lens is defined in terms of a B-spline surface. From a manufacturing standpoint, this is convenient since B-spline surfaces can be chosen to be $C^1$ smooth (in fact, B-spline surfaces can be $C^n$ smooth for arbitrarily large $n$). From an optimization perspective, B-spline surfaces have the property that the control points that govern the shape of the surface and which will be optimized have a local influence on the surface geometry, which in turn has a local influence on the resulting irradiance distribution.\n\n\\subsection{The lens model using a B-spline surface}\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.2\\textwidth]{figures\/lens_schematic.png}\n \\caption{The used lens type: a volume enclosed between a flat surface and a freeform surface with a uniform refractive index.}\n \\label{fig:lenschematic}\n\\end{figure}\n\nWe define a lens as in Fig.~\\ref{fig:lenschematic} as the volume between a flat surface and a B-spline surface, with a uniform refractive index.\n\nA B-spline surface $\\mathbf{S}$ in $\\mathbb{R}^3$ is a parametric surface, see Fig.~\\ref{fig:Bsplinesurface}. It has rectangular support $[a,b]\\times[c,d]$ where $a z_\\text{in} \\quad \\forall (i,j). \\label{eq:nosurfintersect}\n\\end{equation}\nManufacturing can require that the lens has some minimal thickness $\\delta$, so that the constraint is stronger:\n\\begin{equation}\n P^z_{i,j} \\ge \\delta + z_\\text{in} \\quad \\forall (i,j).\n\\end{equation}\n\n\n\\subsection{Differentiable ray tracer} \\label{subsec:raytracer}\nOur implementation traces rays from a source through the flat lens surface and the freeform lens surface to the detector screen as depicted in Figs.~\\ref{fig:planetraceschematic} and \\ref{fig:pointtraceschematic}. Other ray paths, e.g., total internal reflection at lens surfaces, are not considered since it is assumed that the contribution of these to the resulting irradiance distribution is negligible.\n\n\\subsubsection{Sources and ray-sampling}\nNon-sequential ray tracing is a Monte-Carlo approximation method of the solution to the continuous integration formulation of light transport through an optical system. For a detailed discussion of this topic, see \\cite[ch. 14]{pharr2016physically}. Thus to perform ray tracing, the light emitted by a source must be discretized into a finite set of rays\n\\begin{equation}\n l: t \\rightarrow \\mathbf{o} + \\hat{\\mathbf{d}}t,\n\\end{equation}\nwhere $\\mathbf{o}$ is the origin of the ray and $\\hat{\\mathbf{d}}$ its normalized direction vector. Both collimated ray bundle and point sources will be considered, see Figs.~\\ref{fig:planetraceschematic} and \\ref{fig:pointtraceschematic}, respectively. 
\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.6\\textwidth]{figures\/plane_trace_schematic.png}\n \\caption{Schematic of the ray tracing with a collimated ray bundle source.}\n \\label{fig:planetraceschematic}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.6\\textwidth]{figures\/point_trace_schematic.png}\n \\caption{Schematic of the ray tracing with a point source.}\n \\label{fig:pointtraceschematic}\n\\end{figure}\n\nTracing rays from a collimated ray bundle can be understood from Fig.~\\ref{fig:planetraceschematic}. The path of all rays from the source plane to the B-spline surface is a line segment parallel to the $z$-axis. Therefore, we can sample the incoming rays directly on the B-spline surface, with $\\hat{\\mathbf{d}} = (0,0,1)^\\top$. By the linearity of $X$ and $Y$ sampling on the B-spline domain $[0,1]^2$ is analogous to sampling on the lens extent $[-r_x,r_x]\\times [-r_y,r_y]$ in terms of distribution. Rays are sampled in a (deterministic) square grid on $[0,1]^2$. \n\nFor a point source, each ray starts at the location of the source, and the direction vector $\\hat{\\mathbf{d}}$ is sampled over the unit sphere $\\mathbb{S}^2$. More precisely, $\\hat{\\mathbf{d}}$ is given by\n\\begin{equation}\n \\hat{\\mathbf{d}} = \\left(\\cos\\theta\\sin\\phi,\\sin\\theta\\sin\\phi,\\cos\\phi\\right)^\\top,\n\\end{equation}\nwith $\\theta \\in [0,2\\pi)$ and $\\phi \\in [0,\\phi_\\text{max}]$ for some $0\\le \\phi_\\text{max} < \\frac{\\pi}{2}$, see Fig.~\\ref{fig:pointtraceschematic}. $\\phi_\\text{max}$ is chosen as small enough to minimize the number of rays that miss the lens entrance surface but large enough such that the whole surface is illuminated. For instance, if the source is on the $z$-axis, then $\\phi_\\text{max} = \\arctan\\left(\\frac{\\sqrt{r_x^2 + r_y^2}}{z_\\text{in}-z_s}\\right)$ where $z_\\text{in}$ is the $z$-coordinate location of the entrance surface and $z_s$ the $z$-coordinate of the source. To uniformly sample points on this sphere segment, $\\theta$ is sampled (non-deterministically) uniformly in $[0,2\\pi)$ and $\\phi$ is given by\n\\begin{equation}\n \\phi = \\arccos\\left(1-(1-\\cos\\phi_\\text{max})a\\right)\n\\end{equation}\nwhere $a$ is sampled (non-deterministically) uniformly in $[0,1]$. This sampling is used to produce the results in Section~\\ref{sec:results}.\n\nFor the point source, the calculation of the intersection of a ray with the B-spline surface is non-trivial. This calculation comes down to finding the smallest positive root of the $p+q$ degree piece-wise polynomial function\n\\begin{equation}\n f(t) = \n Z\\left(\\begin{pmatrix}o_u \\\\ o_v\\end{pmatrix} + \n \\begin{pmatrix}d_u \\\\ d_v\\end{pmatrix}t\n \\right)\n - d_zt - o_z, \\label{eq:surfaceintersect}\n\\end{equation}\nif such a root exists and yields a point in the domain of $Z$. Here the subscripts $u$ and $v$ denote that the ray is considered in $(u,v,z)$ space instead of $(x,y,z)$ space, so for instance\n\\begin{equation}\n o_u = X^{-1}(o_x) = \\frac{1}{2}\\left(\\frac{o_x}{r_y}+1\\right), \\quad d_v = \\frac{d_y}{2r_y}.\n\\end{equation}\nThe roots of eq. 
\\ref{eq:surfaceintersect} cannot generally be found analytically for $p+q>4$, and thus an intersection algorithm is implemented, which is explained in the next section.\n\n\\subsubsection{B-spline surface intersection algorithm}\nThe intersection algorithm is based on constructing a triangle mesh approximation of the B-spline surface and computing intersections with that mesh.\n\n\\paragraph{Triangle mesh intersection phase 1: bounding boxes}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{figures\/bb_per_knotspanproduct.png}\n \\caption{Triangles and corresponding bounding box for a few knot span products of a spherical surface.}\n \\label{fig:bb_per_knotspanproduct}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{figures\/uv_triangle_check.png}\n \\caption{Example of which triangles are candidates for a ray-surface intersection with the ray plotted in red, based on their $u,v$-domain.}\n \\label{fig:uvtriangleint}\n\\end{figure}\nChecking every ray against every triangle for intersection is computationally expensive, so it is helpful to have bounding box tests that provide rough information about whether the ray is even near some section of the B-spline surface. B-spline theory provides a tool for this: the strong convex hull property, which yields the bounding box \n\n\\begin{equation}\n B_{i_0,j_0} = \\left[u_{i_0},u_{i_0 + 1}\\right) \\times \\left[v_{j_0},v_{j_0 + 1}\\right) \\times \\left[z^{\\min}_{i_0,j_0}, z^{\\max}_{i_0,j_0}\\right]\n\\end{equation}\nwhere $z^{\\min}_{i,j}$ and $z^{\\max}_{i,j}$ are the minimum and maximum $z$-values of the control points that affect the B-spline surface on the knot span product $\\left[u_{i_0},u_{i_0 + 1}\\right) \\times \\left[v_{j_0},u_{j_0 + 1}\\right)$, hence those with indices $i_0-p\\le i\\le i_0, j_0-q \\le j\\le j_0$. Formulated in terms of $Z(u,v)$ this yields\n\\begin{equation}\n z^{\\min}_{i_0,j_0} \\le Z(u,v) \\le z^{\\max}_{i_0,j_0}, \\quad\n (u,v) \\in \\left[u_{i_0},u_{i_0 + 1}\\right) \\times \\left[v_{j_0},v_{j_0 + 1}\\right).\n\\end{equation}\nExamples of such bounding boxes are shown in Fig.~\\ref{fig:bb_per_knotspanproduct}.\n\nThere are two steps in applying the bounding boxes in the intersection algorithm. First, a test for the entire surface (in $(u,v,z)$-space):\n\\begin{equation}\n [0,1]^2 \\times \\left[\\min_{i,j}P^z_{i,j},\\max_{i,j}P^z_{i,j}\\right].\n\\end{equation}\nSecond, a recursive method where, starting with all knot span products, each rectangle of knot span products is divided into at most 4 sub-rectangles for a new bounding box test until individual knot span products are reached.\n\n\n\\paragraph{Triangle mesh intersection phase 2: $(u,v)$-space triangle intersection}\nEach non-trivial knot span product $[u_{i_0},u_{i_0+1}) \\times [v_{j_0},v_{j_0+1})$ is divided into a grid of $n_u$ by $n_v$ rectangles. Thus we can define the boundary points\n\\begin{subequations}\n \\begin{align}\n u_{i_0,k} =& u_{i_0} + k\\Delta u_{i_0},\n \\quad \\Delta u_{i_0} = \\frac{u_{i_0+1}-u_{i_0}}{n_u}, \n \\quad k = 0, \\ldots, n_u, \\\\\n v_{i_0,\\ell} =& v_{j_0}+ \\ell \\Delta v_{j_0}, \\quad \\Delta v_{j_0} = \\frac{v_{j_0+1}-v_{j_0}}{n_v},\n \\quad \\ell = 0,\\ldots, n_v.\n \\end{align}\n\\end{subequations}\n\nEach rectangle is divided into a lower left and an upper right triangle, as demonstrated in Fig.~\\ref{fig:uvtriangleint}. 
In this figure it is shown for a ray projected onto the $(u,v)$-plane in some knot span which triangles are candidates for an intersection in $(u,v,z)$-space. This is determined by the following rules:\n\\begin{itemize}\n \\item A lower left triangle is intersected in the $(u,v)$-plane if either its left or lower boundary is intersected by the ray;\n \\item an upper right triangle is intersected in the $(u,v)$-plane if either its right or upper boundary is intersected by the ray.\n\\end{itemize}\n\nThe intersection of these boundaries can be determined by finding the indices of the horizontal lines at which the vertical lines are intersected:\n\\begin{equation}\n \\ell_k = \\left\\lfloor\\frac{ o_v+(u_{i_0,k}-o_u)\\frac{d_v}{d_u} - v_{j_0}}{\\Delta v_{j_0}}\\right\\rfloor,\n\\end{equation}\nand analogously $k_\\ell$.\n\n\\paragraph{Triangle mesh intersection phase 3: $u,v,z$-space triangle intersection}\nA lower left triangle can be expressed by a plane\n\\begin{equation}\n T(u,v) = Au + Bv + C\n\\end{equation}\n defined by the following linear system:\n\\begin{equation}\n \\begin{pmatrix}\n u_{i_0,k} & v_{j_0,\\ell} & 1 \\\\\n u_{i_0,k+1} & v_{j_0,\\ell} & 1 \\\\\n u_{i_0,k} & v_{j_0,\\ell+1} & 1\n \\end{pmatrix}\n \\begin{pmatrix}\n A \\\\ B \\\\ C\n \\end{pmatrix}\n =\n \\begin{pmatrix}[1.75]\n z_{i_0,k}^{j_0,\\ell} \\\\ z_{i_0,k+1}^{j_0,\\ell} \\\\ z_{i_0,k}^{j_0,\\ell+1}\n \\end{pmatrix}.\n\\end{equation}\nHere we use the following definition:\n\\begin{equation}\n z_{i_0,k}^{j_0,\\ell} = Z(u_{i_0,k},v_{j_0,\\ell}).\n\\end{equation}\nThis yields the plane\n\\begin{align}\n T(u,v) =&& z_{i_0,k}^{j_0,\\ell} + n_u\\left(z_{i_0,k+1}^{j_0,\\ell}-z_{i_0,k}^{j_0,\\ell}\\right)\\frac{u-u_{i_0,k}}{u_{i_0+1}-u_{i_0}} \\\\ \n && +n_v\\left(z_{i_0,k}^{j_0,\\ell+1}-z_{i_0,k}^{j_0,\\ell}\\right)\\frac{v-v_{j_0,\\ell}}{v_{j_0+1}-v_{j_0}}. \\label{eq:trianglefuncdetermined}\n\\end{align}\nNote that to define this triangle, the B-spline basis functions are evaluated at fixed points in $[0,1]^2$ independent of the rays or the $P^z_{i,j}$. This means that for a lens that will be optimized these basis function values can be evaluated and stored only once rather than in every iteration, for computational efficiency.\n\nComputing the intersection with the ray $\\tilde{\\mathbf{r}}(t) = \\tilde{\\mathbf{o}} + \\tilde{\\hat{\\mathbf{d}}}t$ is now straight-forward, and yields\n\\begin{equation}\n t_\\text{int} = - \\frac{C+\\langle\\tilde{\\mathbf{o}}, \\mathbf{n}\\rangle}{\\langle\\tilde{\\hat{\\mathbf{d}}},\\mathbf{n}\\rangle}, \\quad \\mathbf{n} = \n \\begin{pmatrix} 0 \\\\ 1 \\\\ \\partial_u T\\end{pmatrix} \\times\n \\begin{pmatrix} 1 \\\\ 0 \\\\ \\partial_v T\\end{pmatrix} = \n \\begin{pmatrix}A \\\\ B \\\\ -1\\end{pmatrix},\n\\end{equation}\nwhere $\\mathbf{n}$ is a normal vector to the triangle, computed using the cross product. 
This also explains why $\\langle\\tilde{\\hat{\\mathbf{d}}},\\mathbf{n}\\rangle=0$ does not yield a well-defined result: in this situation the ray is parallel to the triangle.\n\nThe last thing to check is whether $\\tilde{l}(t_\\text{int})$ lies in the $(u,v)$-domain of the triangle, which can be checked by three inequalities for the three boundaries of the triangle:\n\n\\begin{subequations}\n \\begin{align}\n o_u + d_u t_\\text{int} \\ge u_{i_0,k} \\\\\n 0 \\leq o_v + d_v t_\\text{int} - v_{j_0,\\ell}< \\frac{n_u}{n_v}\\frac{v_{j_0+1}-v_{j_0}}{u_{i_0+1}-u_{i_0}}(u_{i_0,k+1}-(o_u + d_u t_\\text{int})).\n \\end{align}\n\\end{subequations}\n\nThe computation for an upper right triangle is completely analogous. The upper triangle has a closed boundary, whereas the lower triangle has an open one and vice versa, which means that the $(u,v)$ domains of the triangles form an exact partition of $[0,1]^2$. Thus the triangle mesh is `water-tight', meaning that no ray intersection should be lost by rays passing in between triangles.\n\n\\subsection{Image reconstruction}\nThe ray tracing produces an irradiance distribution in the form of an image matrix $\\mathcal{I} \\in \\mathbb{R}^{n_x \\times n_y}_{\\ge 0}$, where the elements correspond to a grid of rectangles called pixels that partition the detector screen positioned at $z=z_\\text{screen} > \\max_{i,j} P_{i,j}^z$. The screen resolution $(n_x,n_y)$ and the screen radii $(R_x,R_y)$ together yield the pixel size\n\\begin{equation}\n (w_x,w_y) = \\left(\\frac{2R_x}{n_x},\\frac{2R_y}{n_y}\\right).\n\\end{equation}\nFor reasons explained later in this section, sometimes a few `ghost pixels' are added, so the effective screen radii are\n\\begin{equation}\n R_x^* := R_x + \\frac{\\nu_x - 1}{2}w_x, \\quad\n R_y^* := R_y + \\frac{\\nu_y - 1}{2}w_y,\n\\end{equation}\nand the effective screen resolution is $(n_x + \\nu_x -1, n_y + \\nu_y - 1)$ where $\\nu_x$ and $\\nu_y$ are odd positive integers whose meaning will become clear later in this section.\n\n\nProducing the irradiance distribution from the rays that intersect the detector screen is called image reconstruction \\cite[sec. 7.8]{pharr2016physically}. The way that a ray contributes to a pixel with indices $i,j$ is governed by a reconstruction filter\n\\begin{equation}\n F_{i,j} : [-R_x,R_x] \\times [-R_y,R_y] \\rightarrow \\mathbb{R}_{\\ge 0},\n\\end{equation}\nyielding for the irradiance distribution\n\\begin{equation}\n \\mathcal{I}_{i,j} = \\sum_{k=1}^N \\omega_k F_{i,j}(\\mathbf{x}_k),\n\\end{equation}\nfor a set of ray intersections $\\{\\mathbf{x}_k\\}_{k=1}^N$ with corresponding final ray weights $\\{\\omega_k\\}_{k=1}^N$. The ray weights are initialized at the sampling of the ray at the source. They are slightly modified by the lens boundary interactions as a small portion of the light is reflected rather than refracted. The amount by which the ray weights are modified is governed by the Fresnel equations \\cite[sec. 2.7.1]{Fowles1975}. In our implementation, the Fresnel equations are approximated by Schlick's approximation \\cite[eq. 24]{Schlick1994}. In the current implementation, all ray weights are initialized equally. The precise value does not matter since the relationship between the initial and final weights is linear. 
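To make the role of the weight update concrete, a minimal sketch of how Schlick's approximation could be applied at a single refraction event is given below; the function names, tensor shapes and the refractive indices are our own illustrative choices and do not necessarily reflect the actual implementation.

\\begin{verbatim}
import torch

def schlick_reflectance(cos_theta_i, n1=1.0, n2=1.5):
    # Schlick's approximation of the Fresnel reflectance for unpolarized light:
    # R(theta) = R0 + (1 - R0) * (1 - cos(theta))^5,  R0 = ((n1 - n2)/(n1 + n2))^2
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta_i) ** 5

def update_ray_weights(weights, d_hat, n_hat, n1=1.0, n2=1.5):
    # cosine of the incidence angle from the unit ray direction and surface normal
    cos_theta_i = torch.abs((d_hat * n_hat).sum(dim=-1))
    # only the transmitted fraction of the light continues along the refracted ray
    return weights * (1.0 - schlick_reflectance(cos_theta_i, n1, n2))
\\end{verbatim}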
The loss function (section \\ref{lossfunc}) compares scaled versions of the produced and target irradiance distribution.\n\nIn the simplest reconstruction case, the value of a pixel is given by the sum of the weights of the rays that intersect the detector screen at that pixel (called box reconstruction in \\cite[sec. 7.8.1]{pharr2016physically}). In this case the reconstruction filter of pixel $i,j$ is simply the indicator function of the pixel $\\left[(i-1)w_x,iw_x\\right) \\times \\left[(j-1)w_y,jw_y\\right)$.\n\nTo obtain a ray tracing implementation where the irradiance $\\mathcal{I}$ is differentiable with respect to geometry parameters of the lens, say, the parameter $\\theta$, the irradiance distribution must vary smoothly with this parameter. The dependency on this parameter is carried from the lens to the screen by the rays through the screen intersections $\\mathbf{x}_k = \\mathbf{x}_k(\\theta)$. Thus to obtain a useful gradient $\\frac{\\partial \\mathcal{I}}{\\partial \\theta}$ the filter function $F_{i,j}$ should be at least $C^1$, see Fig.~\\ref{fig:reconstructiondiffb} which is achieved by introducing a filter function that spreads out the contribution of a ray over a kernel of pixels of size $(\\nu_x,\\nu_y)$ centered at the intersection location. For the conservation of light, we require that $\\sum_{i,j}F_{i,j}(\\mathbf{x}) \\equiv 1$.\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.7\\textwidth]{figures\/reconstruction_diffb.png}\n \\caption{$\\mathbf{x}(\\theta)$ in the left plot shows the intersection location of a ray with the screen, dependent on a lens geometry parameter $\\theta$. The right plot then shows the reconstruction filter value for the green pixel in the left plot dependent on $\\theta$. In order to obtain a useful gradient of the pixel value with respect to $\\theta$, a smooth reconstruction filter is needed.}\n \\label{fig:reconstructiondiffb}\n\\end{figure}\n\nTherefore, the Gaussian reconstruction function is introduced, based on the identically named one described in \\cite[sec. 7.8.1]{pharr2016physically}. This filter function is based on the product \n\\begin{equation}\n \\tilde{F}_{i,j}(x,y;\\alpha,\\nu_x,\\nu_y) := f_{i}^x(x;\\alpha,\\nu_x)f_{j}^y(y;\\alpha,\\nu_y),\n\\end{equation}\nwhere\n\\begin{equation}\n f_{i_0}^x(x;\\alpha,\\nu_x) = \n \\begin{cases}\n e^{-\\alpha\\left(x-c^x_{i_0}\\right)^2} - e^{-\\alpha\\left(\\frac{\\nu_x w_x}{2}\\right)^2} &\\text{ if } \\lvert x-c^x_i\\rvert < \\frac{\\nu_x w_x}{2},\\\\\n 0 & \\text{otherwise.}\n \\end{cases} \\label{eq:filter1dim}\n\\end{equation}\nThe centers of the pixels are given by\n\\begin{equation}\n (c_i^x,c_j^y) := \\left(\\left(i + \\textstyle\\frac{1}{2}\\right)w_x - R_x, \\left(j + \\textstyle\\frac{1}{2}\\right)w_y - R_y\\right).\n\\end{equation}\nNote that the support of $\\tilde{F}_{i,j}$ is of size $\\nu_xw_x$ by $\\nu_yw_y$, the size of the kernel on the detector screen. The normalized reconstruction filter is then given by\n\\begin{equation}\n F_{i,j}(x,y;\\alpha,\\nu_x,\\nu_y) = \\frac{\\tilde{F}_{i,j}(x,y;\\alpha,\\nu_x,\\nu_y)}{\\sum_{i',j'}\\tilde{F}_{i',j'}(x,y;\\alpha,\\nu_x,\\nu_y)}.\n\\end{equation}\nThe function $F_{i,j}$ is plotted in Fig.~\\ref{fig:recfilter3d}. 
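In code, this reconstruction amounts to splatting every ray--screen intersection onto a small kernel of pixels with the filter above. The following PyTorch sketch illustrates the idea for a square kernel; the function name, the ghost-pixel bookkeeping and the assumption that off-screen rays have already been filtered out are our own choices, not necessarily those of the actual implementation.

\\begin{verbatim}
import math
import torch

def reconstruct_image(points, weights, n_x, n_y, R_x, R_y, alpha=1.0, nu=3):
    # points: (N, 2) screen intersections, differentiable w.r.t. the lens geometry
    # weights: (N,) final ray weights; returns an (n_x, n_y) irradiance image
    w_x, w_y = 2 * R_x / n_x, 2 * R_y / n_y
    half = (nu - 1) // 2
    ix = torch.floor((points[:, 0] + R_x) / w_x).long()   # pixel containing the ray
    iy = torch.floor((points[:, 1] + R_y) / w_y).long()
    offs = torch.arange(-half, half + 1)
    cx = (ix[:, None] + offs + 0.5) * w_x - R_x            # candidate pixel centres
    cy = (iy[:, None] + offs + 0.5) * w_y - R_y
    # truncated 1-D Gaussians, clamped to zero outside their support
    fx = (torch.exp(-alpha * (points[:, :1] - cx) ** 2)
          - math.exp(-alpha * (nu * w_x / 2) ** 2)).clamp(min=0)
    fy = (torch.exp(-alpha * (points[:, 1:] - cy) ** 2)
          - math.exp(-alpha * (nu * w_y / 2) ** 2)).clamp(min=0)
    F = fx[:, :, None] * fy[:, None, :]                    # (N, nu, nu) kernel per ray
    F = F / F.sum(dim=(1, 2), keepdim=True)                # normalisation: sum F = 1
    # scatter the weighted kernels into an image padded with ghost pixels
    gx = ix[:, None, None] + offs[None, :, None] + half
    gy = iy[:, None, None] + offs[None, None, :] + half
    idx = (gx * (n_y + nu - 1) + gy).reshape(-1)
    img = points.new_zeros((n_x + nu - 1) * (n_y + nu - 1))
    img = img.index_add(0, idx, (weights[:, None, None] * F).reshape(-1))
    img = img.reshape(n_x + nu - 1, n_y + nu - 1)
    return img[half:half + n_x, half:half + n_y]           # drop the ghost margin
\\end{verbatim}

Because every operation above is differentiable in \\verb|points| and \\verb|weights|, gradients of the pixel values with respect to the lens geometry are obtained by ordinary back-propagation.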
Note that the function is not differentiable at the boundary of its support, but this yields no problems in the optimization.\n\\begin{figure}\n \\centering\n \\includegraphics{figures\/Gaussianfilter.png}\n \\caption{Gaussian reconstruction filter $F_{i_0,j_0}$ for $\\alpha = 1$ and $(\\nu_x,\\nu_y)=(3,3)$.}\n \\label{fig:recfilter3d}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = \\textwidth]{figures\/image_reconstr_examples.png}\n \\caption{Image reconstruction based on a small set of ray-screen intersections, for bincount and various reconstruction filter sizes and $\\alpha = 1$.}\n \\label{fig:imagereconstr}\n\\end{figure}\n\nGaussian image reconstruction is shown in Fig.~\\ref{fig:imagereconstr} for various values of $\\nu_x = \\nu_y$. There is a trade-off here since the larger $\\nu_x$, and $\\nu_y$ are the blurrier the resulting image is, and the larger the computational graph becomes, but also the larger the section of the image is that is aware of a particular ray which yields more informative gradients. \n\nUp to this point, this section has discussed the ray tracing part of the pipeline, the next subsections will discuss the role of the neural network and the optimization.\n\n\\subsection{Multi-layer perceptron as optimization accelerator} \\label{subsec:nnarchitectures}\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.6\\textwidth]{figures\/nn_architecture_dense.png}\n \\caption{The dense multi-layer perceptron architecture based on the size of the control net $(n_1+1)\\times(n_2+1)$.}\n \\label{fig:densenn}\n\\end{figure}\n\nSeveral neural network architectures are considered, all with a trivial input of 1, meaning that the neural networks will not, strictly speaking, be used to approximate a function since the considered domain is trivial. Non-trivial network inputs of system parameters like the source location will probably be part of follow-up research.\n\nIn this configuration, the neural network can be considered a transformation of the space over which is optimized: from the space of trainable neural network parameters to the space of control point $z$-coordinate values.\nThe goal of choosing the network architecture is that optimizing the trainable neural network parameters of this architecture yields better training behavior than optimizing the control point z-coordinate values directly. The used networks are multi-layer perceptions (MLPs), feed-forward networks consisting of several layers of neurons, as depicted in Fig.~\\ref{fig:densenn}. The considered architectures are:\n\\begin{enumerate}\n \\item No network at all. \\label{item:nonetcase}\n \\item A sparse MLP where the sparsity structure is informed by the overlap of the B-spline basis function supports on the knot spans. In other words: this architecture aims to precisely let those control points 'communicate' within the network that share influence on some knot span product on the B-spline surface, yielding a layer with the same connectivity as a convolutional layer with kernel size $(2p+1,2q+1)$. However, each connection has its own weight and each kernel its own bias, instead of only having a weight per element of the convolution kernel and one single bias for all kernels.\n \\item Larger fully connected architectures are also considered, with 3 layers of control net size. 
Note that two consecutive such layers yield many weight parameters: $n^4$ for a square control net with `side length' $n$.\n\\end{enumerate}\nThe activation function used for all neurons is the hyperbolic tangent, which is motivated below.\n\n\\subsubsection{Control point freedom} \\label{subsec:controlpointfreedom}\nControl over the range of values that can be assumed by the control point $z$-coordinates is essential to make sure that the systems stays physical (as mentioned in Section~\\ref{subsubsec:lensconstraints}), but also to be able to take into account restrictions imposed on the lens as part of mechanical construction in a real-world application. Note that the restriction $P_{i,j}^z > z_\\text{in}$ for the control points being above the lens entrance surface is not critical for a collimated ray bundle simulation since, the entrance surface can be moved arbitrarily to the $-z$ direction without affecting the ray tracing. \n\nSince the final activation function $\\tanh$ has finite range $(-1,1)$, this can easily be mapped to a desired interval $(z_{\\min},z_{\\max})$:\n\\begin{equation}\n y_{i,j} \\mapsto z_{\\min} + \\textstyle\\frac{1}{2} (y_{i,j} + 1)(z_{\\max} - z_{\\min}), \\label{eq:outputcorrection}\n\\end{equation}\nwhich can even vary per control point if desired. Here $y_{i,j}$ denotes an element of the total output $Y$ of the network.\nThe above can also be used as an offset from certain fixed values:\n\\begin{equation}\n y_{i,j} \\mapsto f\\left(P^x_{i,j},P^y_{i,j}\\right) + z_{\\min} + \\textstyle\\frac{1}{2} (y_{i,j} + 1)(z_{\\max} - z_{\\min}). \\label{eq:outputcorrectionwfunc}\n\\end{equation}\nThe resulting B-spline surface approximates the surface given by $f(x,y) + \\textstyle\\frac{1}{2}(z_{\\max}+z_{\\min})$ if $Y \\approx 0$ can be used to optimize a lens that is globally at least approximately convex\/concave. The choice of the hyperbolic tangent activation function accommodates this: since this activation function is smooth around its fixed point $0$ when initializing the weights and biases of the network close to $0$, there is no cumulative value-increasing effect in a forward pass through the network so that indeed $Y\\approx 0$ in this case.\n\nFor comparability, in the case without a network, the optimization is not performed directly on the control point $z$-coordinates. Instead, for each control point, a new variable for optimization is created, which is passed through the activation function and the correction as in Eq.~\\ref{eq:outputcorrection} or \\ref{eq:outputcorrectionwfunc} before being assigned to the control point.\n\n\\subsection{The optimization}\n\n\\label{lossfunc}\nThe lens is optimized such that the irradiance distribution $\\mathcal{I}$ projected by the lens approximates a reference image $\\mathcal{I}_\\text{ref}$, where $\\mathcal{I} ,\\mathcal{I}_\\text{ref}\\in \\mathbb{R}^{n_x\\times n_y}_{\\ge 0 }$. 
The loss function used to calculate the difference between the two uses the normalized matrices: \n\\begin{equation}\n \\widehat{\\mathcal{I}} = \\frac{\\mathcal{I}}{\\sum_{i,j}^{n_x,n_y} \\mathcal{I}_{i,j}} \n \\quad\n \\mathrm{and}\n \\quad\n \\widehat{\\mathcal{I}}_\\text{ref} = \\frac{\\mathcal{I}_\\text{ref}}{\\sum_{i,j}^{n_x,n_y} \\mathcal{I}_{\\mathrm{ref},i,j}}.\n\\end{equation}\n\n\\noindent\nThe loss function is given by\n\\begin{equation}\n \\mathcal{L}(\\mathcal{I};\\mathcal{I}_\\text{ref}) = \\frac{1}{\\sqrt{n_x n_y}}\n \\left\\| \\widehat{\\mathcal{I}}-\\widehat{\\mathcal{I}}_\\text{ref} \\right\\|_F \n \\label{eq:pipelineLoss},\n\\end{equation}\nwhere $\\| \\cdot \\|_F$ is the Frobenius or matrix norm, which is calculated as follows:\n\\begin{equation}\n \\| \\mathcal{A} \\|_F = \\sqrt{\\sum_{i}^{n_x}\\sum_{j}^{n_y} \\lvert a_{i,j} \\rvert ^2}.\n\\end{equation}\nFig.~\\ref{fig:optimizationloop} shows the conventional stopping criterion of the loss value being smaller than some $\\varepsilon > 0$, but in our experiments, we use a fixed number of iterations.\n\nThe neural network parameters (weights and biases) are updated using the Adam optimizer \\citep{Kingma2014} by back-propagation of the loss to these parameters.\n\n\\section{Results} \\label{sec:results}\n\nSeveral results produced with the optimization pipeline discussed in the previous sections are displayed and discussed in this section. The implementation mainly uses PyTorch, a Python wrapper of Torch \\citep{Collobert2002Torch}.\n\nNone of the optimizations performed for this section took more than a few hours to complete, on a \\verb|HP ZBook Power G7 Mobile Workstation| with a \\verb|NVIDIA Quadro T1000 with Max-Q Design| GPU.\n\nMost of the results have been validated with \\emph{LightTools} \\citep{LightTools}, an established ray tracing software package in the optics community. Lens designs were imported to LightTools as a point cloud, then interpolated to obtain a continuous surface, and all simulations were conducted using $10^6$ rays.\n\nUnits of length are mostly unspecified since the obtained irradiance distributions are invariant under uniform scaling of the optical system. This invariance to scaling is reasonable as long as the lens details are orders of magnitude larger than the wavelength of the incident light such that diffraction effects do not play a role. Furthermore, the irradiance distributions are directly proportional to the scaling of all ray weights and thus the source power, so the source and screen power also need no unit specification. Note that relative changes have a non-trivial effect, like changes to the power proportion between sources or the distance proportions of the optical system.\n\n\\newpage\n\\subsection{Irradiance derivatives with respect to a control point}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{figures\/results\/controlpoint_derivatives.png}\n \\caption{Gradients of an irradiance distribution of a collimated ray bundle through a flat lens (parallel sides), with respect to the $z$-coordinate of one control point. The zeros are masked with white to show the extend of the influence of the control point. 
The three panels differ in: (a) degrees $(3,3)$, reconstruction filter size $(3,3)$; (b) degrees $(3,3)$, reconstruction filter size $(11,11)$; (c) degrees $(5,3)$, reconstruction filter size $(3,3)$.}
 \\label{fig:controlpointderivs}
\\end{figure}

\\begin{figure}
 \\centering
 \\includegraphics[width=0.9\\textwidth]{figures\/results\/renderderivsexplanation.png}
 \\caption{Demonstration of how one control point influences the irradiance distribution in the case of a flat lens with B-spline degrees $(3,3)$ and a collimated ray bundle source.}
 \\label{fig:renderderivsexplanation}
\\end{figure}
This section gives a simple first look at the capabilities of the implemented differentiable ray tracer: computing the derivative of an irradiance distribution with respect to a single control point. Obtaining this data is inefficient in the current PyTorch implementation, as a forward-mode automatic differentiation pass is required, which is not currently (entirely) supported by PyTorch. Therefore these derivatives are computed with pixel-wise back-propagation.

Fig.~\\ref{fig:controlpointderivs} shows the derivative of an irradiance distribution produced by a collimated ray bundle through a flat lens for various B-spline degrees and reconstruction filter sizes, and Fig.~\\ref{fig:renderderivsexplanation} shows what one of these systems looks like. The overall `mountain with a surrounding valley' structure can be understood as follows: as one of the control points rises, it creates a local convexity in the otherwise flat surface. This convexity has a focusing effect, redirecting light from the negative valley region toward the positive mountain region.

Also noteworthy is the total sum of these irradiance derivatives: (a) $\\SI{-1.8161e-08}{}$, (b) $\\SI{3.4459e-08}{}$, (c) $\\SI{9.7095e-05}{}$. These numbers are small compared to the total irradiance of about $93$ and therefore indicate conservation of light; as the control point moves out of the flat configuration, the total amount of power received by the screen at first changes very little. This is expected for cases (a) and (b), where the control point does not affect rays that reach the screen on the boundary pixels. Moreover, in all cases the rays intersect the flat lens at right angles, and around $\\theta = 0$ the slope of Schlick's approximation is very shallow, indicating only a small decrease in refraction in favor of reflection.

\\subsection{Sensitivity of the optimization to initial state and neural network architecture} \\label{subsec:results_nnsensitivity}
As with almost any iterative optimization procedure, choosing a reasonable initial guess of the solution is crucial for reaching a good local\/global minimum. For training neural networks, this comes down to how the network weights and biases are initialized. In this section, we look at three target illuminations: the circular top hat distribution (Fig.~\\ref{fig:circtophat}), the TU Delft logo (Fig.~\\ref{fig:TUDflameinv}), and an image of a faceted ball (Fig.~\\ref{fig:facball}). For some experiments, black padding or Gaussian blurring is applied to these images.
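For reference, the preparation of such a target image can be summarized in a few lines; this is only a sketch (function and parameter names are ours), with the final normalization matching the one used in the loss function (section \\ref{lossfunc}).

\\begin{verbatim}
import torch
import torch.nn.functional as F

def prepare_target(image, pad=0, blur_sigma=None):
    # image: 2-D grayscale tensor of the target illumination, arbitrary scale
    img = image.float()
    if pad > 0:                                   # black padding around the target
        img = F.pad(img, (pad, pad, pad, pad), value=0.0)
    if blur_sigma is not None:                    # separable Gaussian blur
        r = max(int(3 * blur_sigma), 1)
        x = torch.arange(-r, r + 1, dtype=torch.float32)
        g = torch.exp(-x ** 2 / (2 * blur_sigma ** 2))
        g = g / g.sum()
        kernel = (g[:, None] * g[None, :])[None, None]
        img = F.conv2d(img[None, None], kernel, padding=r)[0, 0]
    return img / img.sum()                        # unit total power, as in the loss
\\end{verbatim}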
We design lenses to produce these distributions from a collimated ray bundle, given various neural network architectures (introduced in section \\ref{subsec:nnarchitectures}) and parameter initializations.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/results\/circular_tophat.png}\n \\caption{The circular tophat target illumination.}\n \\label{fig:circtophat}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/TUD_400_inverse.png}\n \\caption{The TU Delft flame target illumination.}\n \\label{fig:TUDflameinv}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.4\\textwidth]{figures\/results\/shaded_faceted_ball.png}\n \\caption{The faceted ball target illumination.}\n \\label{fig:facball}\n\\end{figure}\n\n\\paragraph{Circular top hat distribution from collimated ray bundle} \\label{subsec:circtophat}\nFig.~\\ref{fig:tophatloss} shows the progress of the loss over 1000 iterations, with each iteration taking $2.5$ seconds, for various neural network architectures and parameter initialization combinations. For the other parameters in these simulations, see the supplementary information. For a few moments during the training, the resulting freeform surfaces and irradiance distributions are shown in Figs.~\\ref{fig:lensshapes}, \\ref{fig:tophatrandomsparse}, \\ref{fig:tophatunifsparse}, \\ref{fig:tophatunifnonet} and \\ref{fig:tophatunifdense}. Uniform here means that the initial trainable parameter values are sampled from a small interval: $U\\left(\\left[-10^{-4},10^{-4}\\right]\\right)$, except for the no-network case; this is initialized with all zeros.\n\nThe first notable difference is between the random and uniformly initialized sparse neural networks. The uniformly initialized neural network performs much better, and no network performs better. This is probably because the uniformly initialized cases converge to a better (local) minimum than the randomly initialized case. Of course, it could happen that the random initialization lands in a very favorable spot in the design landscape, but intuitively this seems very unlikely.\n\nAnother property of the uniformly initialized cases is their preservation of symmetry in these setups. As Fig.~\\ref{fig:lensshapes} shows, this leads to much simpler lenses, which are probably much less sensitive to manufacturing errors due to their relative lack of small detail. What is interesting to note here is that if the sparse network is initialized with all parameters set to $0$, then its optimization is identical to the no-network case, as only the biases in the last layer achieve non-zero gradients.\n\nNo rigorous investigation has been conducted to the extent that this behavior of increased convergence speed carries over to other target distributions and system configurations and what the optimal hyper-parameters are. A thorough investigation of the hyper-parameter space that defines a family of network architectures could reveal where in the increase of the architecture complexity, diminishing returns for optimizing these lenses arises. 
However, based on these initial findings the fully connected network is used for all the following optimizations in the results.\n\n\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{figures\/results\/tophat_loss_progress.png}\n \\caption{Loss progress over the iterations for various pipeline-setups for forming a tophat distribution from a collimated ray bundle.}\n \\label{fig:tophatloss}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/results\/tophat_lens.png}\n \\caption{The lens height field after initialization ($n=0$), and $n=50,100$ and $1000$ iterations respectively, for different network architectures (Section~\\ref{subsec:nnarchitectures}) and network parameter initializations (Section~\\ref{subsec:circtophat}).}\n \\label{fig:lensshapes}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/results\/tophat_randomsparse.png}\n \\caption{Irradiance distributions and pixel-wise errors in the optimization progress of a random lens with a sparse network towards a circular tophat illumination.}\n \\label{fig:tophatrandomsparse}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/results\/tophat_unifsparse.png}\n \\caption{Irradiance distributions and pixel-wise errors in the optimization progress of a flat lens with a sparse network towards a circular tophat illumination.}\n \\label{fig:tophatunifsparse}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/results\/tophat_unifnonet.png}\n \\caption{Irradiance distributions and pixel-wise errors in the optimization progress of a flat lens without a network towards a circular tophat illumination.}\n \\label{fig:tophatunifnonet}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/results\/tophat_unifdense.png}\n \\caption{Irradiance distributions and pixel-wise errors in the optimization progress of a flat lens with a dense network towards a circular tophat illumination.}\n \\label{fig:tophatunifdense}\n\\end{figure}\n\n\\clearpage\n\\paragraph{TU flame and faceted ball from collimated ray bundle}\nIn what follows, we consider complex target distributions: the TU Delft flame (for a complex shape) and a faceted ball (for a target with various brightness levels). Here we still use the collimated ray bundle illumination, but lenses are now optimized for various magnifications; see Table.~\\ref{tab:magndata}. These magnifications are defined as the scaling of the screen size with respect to the smallest screen size $(0.64,0.64)$. The other parameters of these optimizations are shown in the supplementary information. All these iterations took about $4$ seconds each.\n\nThe final irradiance distributions and corresponding LightTools results are shown in Figs.~\\ref{fig:flamerenders} and \\ref{fig:facetedballrenders}, respectively. These figures show that the optimization pipeline can handle these more complex target illuminations well. The LightTools results predict some artifacts within the irradiance distribution, which the implemented ray tracer does not, especially in the TU flame magnification 1 case. 
By visual inspection, based on the LightTools results, one would probably rank these results in the exact opposite order of that indicated by the losses shown in Fig.~\\ref{fig:ManufacturingLoss}.

A potential explanation of the increase in loss with the magnification factor in Fig.~\\ref{fig:ManufacturingLoss} is that the larger the screen, the higher the angles the rays need to reach its edges, which is apparent for magnifications 3 and 5 in Fig.~\\ref{fig:ManufacturingRays}. This results in a larger sensitivity of the irradiance to the angle with which a ray leaves the lens. This in turn gives larger gradients of the irradiance with respect to the control points. Therefore the optimization takes larger steps in the neural network parameter space, possibly overshooting points that result in a lower loss.

For magnifications $3$ and $5$, the irradiance distributions from LightTools show artifacts at the screen boundaries. A possible explanation is that the way the B-spline surfaces are transferred to LightTools is inaccurate at the surface boundaries.\\footnote{Assuming only rays from the B-spline surface boundaries reach the screen boundary area.} This is because LightTools infers the surface normals from fewer points on the B-spline surface at the boundary than in the middle of the surface.

Furthermore, a significant number of rays are lost during optimization because the target illuminations are black at the borders, so rays near the screen boundary will be forced off the screen by the optimization. Once a ray misses the screen, it no longer contributes to the loss function, and the patch on the B-spline surface it originates from no longer influences the irradiance and, thus, the loss function.
However, this does not mean that this patch is idle for the rest of the optimization, as it can lie in the support of a basis function corresponding to a control point that still affects rays hitting the screen. Therefore, the probability of obtaining idle lens patches with this setup decreases with the B-spline degrees, since these determine the size of the support of the B-spline basis functions. This coupling might, however, in some cases lead to oscillatory behavior, with rays alternating between hitting and missing the screen.

Fig.~\\ref{fig:ManufacturingSurfaces} shows the optimized B-spline lens surface height field.
A densely varying color map is chosen since the deviations from a flat or smooth concave shape are quite subtle, which is due to the large lens exit angle sensitivity of the ray-screen intersections since the ratio lens size to screen size is large with respect to the ratio lens size to screen distance.\n\n\\begin{table}[h]\n \\centering\n {\n \\def1.5{1.5}\n \\begin{tabular}{c|c|c|c}\n \\textbf{Magnification} & \\textbf{screen size} & $f(x,y)$ & \\textbf{starting shape type}\\\\\n \\hline\n $1$ & $(0.64,0.64)$ & $\\textstyle\\frac{1}{2}$ & flat\\\\\n $3$ & $(1.92,1.92)$ & $\\textstyle\\frac{1}{2} + 8 - \\sqrt{8^2-x^2-y^2}$ & concave\\\\\n $5$ & $(3.20,3.20)$ & $\\textstyle\\frac{1}{2} + 4 - \\sqrt{4^2 - x^2 - y^2}$ & concave\n \\end{tabular}\n }\n \\caption{The screen size and control point offset function $f$ used per magnification in the TU flame and faceted ball optimizations (distances in centimeters).}\n \\label{tab:magndata}\n\\end{table}\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{figures\/results\/Manufacturing_Loss.png}\n \\caption{Loss progress for the various magnifications and target distributions.}\n \\label{fig:ManufacturingLoss}\n\\end{figure}\n\n\\clearpage\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/results\/Manufacturing_flames.png}\n \\caption{Implementation and LightTools irradiance distributions of the TU flame target from the final lens design of the optimization.}\n \\label{fig:flamerenders}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/results\/Manufacturing_balls.png}\n \\caption{Implementation and LightTools irradiance distributions of the faceted ball target from the final lens design of the optimization.}\n \\label{fig:facetedballrenders}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/results\/Manufacturing_surfaces.png}\n \\caption{The lens designs for the different magnifications and two target distributions.}\n \\label{fig:ManufacturingSurfaces}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/results\/Manufacturing_rays.png}\n \\caption{$25\\times25$ traced rays through the final lens designs for the different magnifications and two target distributions.}\n \\label{fig:ManufacturingRays}\n\\end{figure}\n\n\n\n\n\\clearpage\n\\subsection{Optimization with a point source and a grid of point sources}\nWe now consider an optimization that uses the B-spline intersection algorithm. First, we design a lens with one point source at $(0,0,-5)$ with $5\\times 10^5$ rays to again form the TU flame. Then after $\\sim 200$ iterations, we change the source to an equispaced grid of $25\\times 25$ point sources with $10^3$ rays each on $[-1,1]\\times[-1,1]\\times\\{-5\\}$, approximating a source of non-negligible size. The other (hyper-)~parameters of this optimization are shown in the supplementary information. Due to the additional B-spline intersection procedures, each iteration takes approximately $50$ seconds.\nThe resulting final irradiance distribution and LightTools verifications can be seen in Fig.~\\ref{fig:point_source_renders}. The final irradiance distribution similar to the that obtained by LightTools, indicating that ray tracing with the implemented B-spline intersection algorithm works correctly. 
The irradiance distributions are slightly blurred due to the reconstruction filter.
The single-source point optimization performs well, although the illumination is less uniform than in the collimated ray bundle case (Figs.~\\ref{fig:flamerenders} and \\ref{fig:facetedballrenders}).
The non-uniformity can be attributed to the Gaussian reconstruction filter used during optimization, as it smoothes out small non-uniformities, which are therefore never penalized by the loss.


As seen in Fig.~\\ref{fig:point_source_renders}, the irradiance distribution obtained with a grid of point sources approximates the extended-source illumination distribution quite well for the unoptimized case.
Finding a lens design that redirects light from a source of non-negligible size into a desired irradiance distribution is a complex problem, for which it is hard to indicate how good the optimal irradiance distribution can become.
The progress of the loss, as seen in Fig.~\\ref{fig:point_source_loss}, shows that the optimization can still improve the loss, even after the transition to the grid of point sources. Interestingly, looking at Fig.~\\ref{fig:point_source_renders} again, the optimization seems to adopt the coarse strategy of filling up the target distribution with images of the source square, as shown in Fig.~\\ref{fig:sourceimages}. This strategy does limit the attainable quality of the final irradiance distribution, as the image of the source on the target is larger than the fine details in the desired irradiance. Optimizing both the front and back surfaces of the freeform could resolve this issue, as this will cause the image of the source to change shape depending on where it ends up on the target screen.

\\begin{figure}
 \\centering
 \\includegraphics{figures\/results\/source_images_indication.png}
 \\caption{Indication of images of the source square in the irradiance distribution obtained by LightTools using the point source grid.}
 \\label{fig:sourceimages}
\\end{figure}

\\begin{figure}[p]
 \\centering
 \\includegraphics[width=\\textwidth]{figures\/results\/point_source_renders.png}
 \\caption{The final irradiance distribution of the lens optimizations with point sources and the corresponding LightTools verifications. The extended source is not implemented in our ray tracer, but is approximated by the point source grid.}
 \\label{fig:point_source_renders}
\\end{figure}

\\begin{figure}[p]
 \\centering
 \\includegraphics[width=0.75\\textwidth]{figures\/results\/point_source_surfaces.png}
 \\caption{Height fields of the lenses optimized for the TU flame with point sources.}
 \\label{fig:point_source_surfaces}
\\end{figure}

\\begin{figure}[p]
 \\centering
 \\includegraphics[width= 0.6\\textwidth]{figures\/results\/point_source_loss.png}
 \\caption{Loss over the iterations optimizing for the TU flame. The system is initiated with a point source, and after $\\sim 200$ iterations the point source is replaced by an equispaced grid of $25\\times 25$ point sources.}
 \\label{fig:point_source_loss}
\\end{figure}

\\section{Conclusion} \\label{sec:conclusion}

We demonstrated that non-sequential differentiable ray tracing is a viable tool for designing freeform lenses for collimated ray bundles, point sources, and extended sources.
Using a B-spline allows for the design of a continuous surface, which is desirable for manufacturing, and its control points allow for locally altering the irradiance distribution.
For both cases, collimated and point source lens designs were found that could accurately project the desired irradiance distribution in both the differentiable ray tracer and in commercial software LightTools. Some artifacts still exist and resolving this issue will be a part of further research.\n\nFor the source with a finite extent, the optimizer improved upon the design obtained for a point source. However, the final irradiance distribution was made up of images of the source, which hinders the minimum that can be obtained as the source image is larger than the details in the desired irradiance distribution. This issue can be resolved by optimizing multiple surfaces simultaneously, as the image of the source on the target plane can then be optimized to vary with location.\n\nUsing a neural network to remap the optimization space provides an interesting way to increase the convergence speed of the optimization. However, further investigation is required to see whether this generally holds and what the effect is on other network architectures.\n\nThe developed ray tracing implementation is currently a proof of concept and needs to be optimized for speed. The B-spline intersection algorithm, in particular, adds roughly a factor of $10$ to the computation time. A significant speedup can be achieved here by leveraging efficient lower-level GPU programming languages, such as CUDA. \n\n\n\\section{Acknowledgements}\nWe acknowledge support by NWO-TTW Perspectief program (P15-36) ``Free-form scattering optics\".\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nHigh-redshift star-forming galaxies are becoming an important probe of galaxy\nformation, reionization and cosmology (Robertson et al. 2010; Shapley 2011). A\npopular method for finding high redshift star forming galaxies is to target\ntheir often bright Ly$\\alpha$ emission (Partridge \\& Peebles 1967). This\nemission can be easily detected in narrow-band imaging surveys, and can be\nfurther confirmed by spectroscopic observations (Hu et al. 1998; Ouchi et al.\n2008; Yamada et al. 2012a,b). In addition to discovering numerous Ly$\\alpha$\nemitters (LAEs), a particular class of objects, also known as \"Ly$\\alpha$\nblobs\" (LABs), has been most commonly found in the dense environment of\nstar-forming galaxies at high redshift, and they are very extended (30 to 200\nkpc) and Ly$\\alpha$-luminous (10$^{43}$ to 10$^{44}$ erg~s$^{-1}$) (see,\ne.g., Francis et al. 1996; Steidel et al. 2000; Palunas et al. 2004; Matsuda et\nal. 2004, 2009, 2011; Dey et al. 2005; Saito et al. 2006; Yang et al. 2009,\n2010; Erb et al. 2011; Prescott et al. 2012a, 2013; Bridge et al. 2013). In\ncontrast to the large Ly$\\alpha$ nebulae surrounding some high-redshift radio\ngalaxies (e.g., Reuland et al. 2003; Venemans et al. 2007), these objects do\nnot always have obvious sources for energy responsible for their strong\nemission.\n\nWhile the LABs' preferential location in overdense environments indicates an\nassociation with massive galaxy formation, the origin of Ly$\\alpha$ emission in\nthe LABs is still unclear and under debate (Faucher-Giguere et al. 2010; Cen \\&\nZheng 2013; Yajima et al. 2013). Proposed sources have generally fallen into\ntwo categories: cooling radiation from cold streams of gas accreting onto\ngalaxies (e.g., Haiman et al. 2000; Dijkstra \\& Loeb 2009; Goerdt et al. 
2010)\nand photoionization\/recombination from starbursts or active\ngalactic nuclei (AGNs) (e.g., Taniguchi \\& Shioya 2000; Furlanetto et al. 2005;\nMori \\& Umemura 2006; Zheng et al. 2011). Supporting evidence for the cooling\nflow scenario comes from those LABs lacking any visible power source (e.g.,\nNilsson et al. 2006; Smith \\& Jarvis 2007). Ionizing photons from young stars\nin star-forming galaxies and\/or AGNs can ionize neutral hydrogen atoms and the\nsubsequent recombination gives off Ly$\\alpha$\\, emission. The resonant scattering of\nLy$\\alpha$\\, photons in the circumgalactic medium makes the emission extended (Geach\net al. 2005, 2009; Colbert et al. 2006, 2011; Beelen et al. 2008; Webb et al.\n2009; Zheng et al. 2011; Cen \\& Zheng 2013; Overzier et al. 2013). \n\nExcept for cooling flows and photoionization from star-forming galaxies and\/or\nAGNs, other possible mechanisms, such as galactic super-winds and obscured AGNs,\nare also proposed to explain the nature of the LABs (e.g., Ohyama et al. 2003;\nWilman et al. 2005; Colbert et al. 2006; Matsuda et al. 2007). All these\nsources of energy may be activated in an environment where violent interactions are\nfrequent between gas rich galaxies as expected in over-dense regions at high\nredshift (Matsuda et al. 2009, 2011; Prescott et al. 2012b; Kubo et al. 2013).\n\nThe 110 Mpc filament with 37 LAEs related to the protocluster J2143-4423 at\n$z$=2.38 (Francis et al. 1996, 2004; Palunas et al. 2004) is one of the largest\nknown structures at high redshift, and this field also includes four large\nextended LABs with extensions of $\\sim$ 50 kpc and above, named {B1}, {B5}, {B6}\\,\nand {B7}. In this paper, we present our deep radio observations and {\\it\nHerschel} released far-infrared (FIR) data in J2143-4423\\, to study the powering\nsource of these LABs. Throughout this paper, we use a $\\Lambda$ cosmology with\n$\\rm H_{\\rm0}$ = 67.3~$\\rm \\ifmmode{{\\rm \\ts km\\ts s}^{-1}}\\else{\\ts km\\ts s$^{-1}$}\\fi\\ Mpc^{-1}$, $\\rm \\Omega_\\Lambda$ = 0.685 and\n$\\rm \\Omega_{\\rm m}$ = 0.315 (Planck Collaboration XVI 2013), and 1$\\arcsec$\ncorresponds to 8.37~kpc at $z$=2.38.\n\n\n\\section{Observations}\n\\subsection{ATCA observations}\nWe observed J2143-4423\\, with the Australia Telescope Compact Array\n(ATCA)\\footnote{The Australia Telescope Compact Array is part of the Australia\nTelescope, which is funded by the Commonwealth of Australia for operation as a\nNational Facility managed by CSIRO.} in its extended configuration 6A. During\nthe observations from 2009 June 14 to 17, only five out of six antennas were\navailable. The observations were performed at a central frequency of 1.75 GHz.\nWe used the Compact Array Broadband Backend (Wilson et al. 2011) in a\nwide-band mode, with a total bandwidth of 2~GHz and a channel width of 1~MHz.\nThe nearby source PKS~2134-470 served as a gain calibrator. Absolute fluxes\nwere calibrated with the ATCA standard PKS~1934-638. The total observing time\nwas about 70 hours. \n\nThe data were reduced with the MIRIAD software package. Although the observations\nwere carried out with a total bandwidth of 2~GHz, the effective bandwidth was\nabout 489 MHz with a central frequency of 1.51~GHz. 
We carefully flagged the\nchannels affected by radio frequency interference (RFI) by checking the\nvisibility data sorted by time, channels and baselines.\nThe image was deconvolved with MIRIAD task MFCLEAN, and\ntask SELFCAL was used to reduce the noise from strong radio continuum sources.\nWe first created cleaned images in a normal procedure and made model images for\nthe strong sources. The models were used as inputs for task SELFCAL to perform\nself-calibration of visibility data. We ran this cycle for three times, and\nthen obtained the model images to create the visibility data with\nself-calibration, which were used to make the final images. The noise of the\nimages after applying self-calibration was about one order of magnitude lower\nthan that without self-calibration. The field of view was about 31 arcmins and\nthe synthesized beam size was 7.8$\\arcsec$$\\times$4.8$\\arcsec$. The noise was\nabout 15~$\\mu$Jy\/beam before applying primary beam correction.\n\n\\begin{figure*}[t]\n\\vspace{-0.0cm}\n\\centering\n\\includegraphics[angle=0,width=1.0\\textwidth]{fig1.pdf}\n\\vspace{-0.0cm}\n\\caption{ATCA 20~cm, {\\it Spitzer} MIPS 24$\\mu$m and {\\it Herschel}\\, PACS and SPIRE data for the four Ly$\\alpha$\\, blobs\n(LABs) in J2143-4423. {\\bf a) }Contours and gray scale maps of ATCA radio\nemission. The contours are -2, 2, 3, 4, 5 and 6~$\\times$ 15~$\\mu$Jy\n(1~$\\sigma$), with a synthesized beam of 7.8$\\arcsec$$\\times$4.8$\\arcsec$,\nwhich is shown in the lower left corner of each panel. {\\bf b) }Gray maps of\n{\\it Spitzer} MIPS 24~$\\mu$m emission (Colbert et al. 2006). {\\bf c-g)\n}Contours and gray scale maps of {\\it Herschel}\\, FIR emission. The contours are\n-2$\\sigma$, 2$\\sigma$, 3$\\sigma$, 4$\\sigma$, 5$\\sigma$ and 6$\\sigma$ (see\n$\\S$~\\ref{obsher} for the noise level of each band). A circle with a diameter\nof 40$\\arcsec$ is shown in each panel. The circles in\n{B7}\\, are on an off-center position (5$\\arcsec$, 0$\\arcsec$) to cover most FIR\nemission. All sources are centered on the positions of the four LABs (see\nColbert et al. 2006) as shown with plus signs in each panel. All offsets are\nrelative to the positions of the LABs.}\n\\label{map} \n\\end{figure*}\n\n\\subsection{Archival {\\it Herschel}\\, observations}\\label{obsher}\n{\\it Herschel}\\, observations towards J2143-4423\\, were carried out with PACS (Poglitsch et\nal. 2010) at 100 and 160~$\\mu$m and SPIRE (Griffin et al. 2010) at 250, 350 and\n500~$\\mu$m in 2010 to 2011. J2143-4423\\, was imaged in a field size of\n15$^\\prime$$\\times$15$^\\prime$ for each band, and the observing time was\n$\\sim$2.9 hours for PACS ({\\it Herschel}\\, OD: 686) and $\\sim$0.6 hours for SPIRE ({\\it Herschel}\\,\nOD: 558). The level 2.5 product for PACS and the level 2 product for SPIRE from\nthe pipeline procedures are used for our data analysis. Source photometry\nis carried out using DAOphot algorithm in the Herschel Interactive Processing\nEnvironment (HIPE). We apply beam correction, colour correction, aperture\ncorrection for a spectral index of $-$2 and adopt a flux calibration error of 5\\%\nat PACS bands and 7\\% at SPIRE bands as recommended in the PACS and SPIRE\nObserver's Manual. 
The full width at half power (FWHP) beam sizes are\n6.8$\\arcsec$ at 100~$\\mu$m, 11.4$\\arcsec$ at 160~$\\mu$m, 17.6$\\arcsec$ at\n250~$\\mu$m, 23.9$\\arcsec$ at 350~$\\mu$m and 35.2$\\arcsec$ at 500~$\\mu$m,\nrespectively.\n\n\n\\section{Results}\n\\subsection{Radio emission from ATCA observations}\nIn Fig.~\\ref{map}(a) we present the radio continuum emission images at 20~cm\nfrom the ATCA. Among the four LABs, {B6}\\, and {B7}\\, are detected with fluxes\nof 67\\ppm17~$\\mu$Jy and 77\\ppm16~$\\mu$Jy, respectively, and {B5}\\, is marginally\ndetected at 3~$\\sigma$ (51\\ppm16~$\\mu$Jy). For all detected sources, their\npositions are consistent with the central positions of the LABs. Only {B1}\\, is\nnot detected by the observations. \n\n\\subsection{FIR emission from {\\it Herschel}\\, observations}\nAll four LABs are observed with {\\it Herschel}\\, PACS at 100 and 160~$\\mu$m and SPIRE at\n250, 350 and 500~$\\mu$m, and the images are shown in Fig.~\\ref{map}(c-g). The\nobserved flux densities are calculated for the areas within the blue circles as\nshown in Fig.~\\ref{map} and are listed in Table~\\ref{tab1}. {B1}\\, is not\ndetected but contaminated by a nearby strong source about 20$\\arcsec$ in the\nnorth-west, which is the background QSO LBQS2138-4427 at $z$\\,=\\,3.2 (Francis \\&\nHewett 1993), and its emission features at different FIR bands appear to reach\nout to {B1}\\, from this location. There is no FIR counterpart for {B5}\\, in any\n{\\it Herschel}\\, band.\n\n\\begin{center}\n\\begin{table*}[t]\n\\centering\n\\caption{Observational and derived parameters towards the four LABs$^a$}\\label{tab1}\n\\begin{tabular}{ccccccccc}\n\\hline\nSource & 20~cm$^{b}$ & 100~$\\mu$m & 160~$\\mu$m & 250~$\\mu$m & 350~$\\mu$m & 500~$\\mu$m \n& \\ifmmode{L_{\\rm FIR}}\\else{$L_{\\rm FIR}$}\\fi$^c$ & M$\\rm_{dust}$\\\\ \n & [$\\mu$Jy] & [mJy] & [mJy] & [mJy] & [mJy] & [mJy] & [10$^{12}$\\ifmmode{L_{\\odot}}\\else{$L_{\\odot}$}\\fi] & [$10^8$\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi] \\\\\n\\hline\nB1 & $<$51 & $<$4.2 & $<$9.0 & $<$17.9 & $<$19.6 & $<$22.5 & $<$2.8 & \\\\\nB5 & 51\\ppm16 & $<$2.1 & $<$11.1 & $<$17.5 & $<$18.7 & $<$19.8 & $<$2.5 & \\\\\nB6 & 67\\ppm17 & 13.2\\ppm3.2 & 53.9\\ppm8.0 & 49.7\\ppm9.0 & 53.7\\ppm10.7 & 36.7\\ppm10.3 & 10.0\\ppm1.9 & 3.2\\ppm0.6 \\\\\nB7 & 77\\ppm16 & 12.9\\ppm4.0 & 33.5\\ppm10.0 & 41.6\\ppm7.8 & 48.0\\ppm10.6 & 39.2\\ppm8.6 & 8.6\\ppm2.3 & 5.0\\ppm1.0\\\\\n\\hline\n\\end{tabular}\n\\begin{list}{}{}\n\\item{$^{\\mathrm{a}}$ The wavelengths shown in this table are the redshifted values.}\n\\item{$^{\\mathrm{b}}$ Measured fluxes have been modified by a primary beam correction (less than 15\\%).}\n\\item{$^{\\mathrm{c}}$ The total luminosities are calculated between rest frame\nwavelengths of 40~$\\mu$m to 200~$\\mu$m from the dust models (see\n$\\S$~\\ref{dust} for details). The 3~$\\sigma$ upper limits are given for undetected sources.} \n\\end{list}\n\\end{table*}\n\\end{center}\n\n\\subsection{Redshifts of the FIR sources}\\label{red}\nTo estimate the redshift of the FIR sources associated with {B6}\\, and {B7}, we\ntry to fit the data with the SEDs of different templates (Polletta et al.\n2007) at different redshifts and find that the starburst templates can well\nreproduce the data. With the observational data and the SEDs of the templates,\nthe minimum reduced $\\chi^2$ value for each redshift can be calculated and the\ncorresponding probability can be estimated. 
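The fitting procedure can be sketched as follows; this is only an illustration with placeholder photometry and template arrays (the template normalization is treated as a free amplitude at every trial redshift), not the actual code used for the analysis.

\\begin{verbatim}
import numpy as np

def redshift_probability(nu_obs, flux, flux_err, tpl_nu_rest, tpl_lnu,
                         z_grid=np.linspace(0.5, 4.0, 351)):
    # nu_obs, flux, flux_err: observed photometry (placeholders)
    # tpl_nu_rest, tpl_lnu:   rest-frame template SED on an increasing frequency grid
    chi2 = np.empty_like(z_grid)
    for i, z in enumerate(z_grid):
        model = np.interp(nu_obs * (1 + z), tpl_nu_rest, tpl_lnu)  # template shape
        amp = np.sum(model * flux / flux_err**2) / np.sum(model**2 / flux_err**2)
        chi2[i] = np.sum(((flux - amp * model) / flux_err) ** 2)
    ndof = max(len(flux) - 2, 1)            # free parameters: amplitude and redshift
    prob = np.exp(-0.5 * (chi2 - chi2.min()))
    return z_grid, chi2 / ndof, prob / prob.sum()
\\end{verbatim}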
In this analysis, we include the five {\\it Herschel}\\, bands, APEX 870~$\\mu$m data (Beelen et al. 2008), and {\\it Spitzer} MIPS 24~$\\mu$m data (Colbert et al. 2006).

Among four typical templates, Arp~220, M~82, Mrk~231 and NGC~6240, we find that the spectral energy distributions of the starburst galaxies NGC 6240 and Arp 220 fit the data best, while Mrk 231 does not fit well because the warm IR emission from its AGN is inconsistent with the data.
Fig.~\\ref{redshift} shows the probability distribution against redshift for both LABs. The estimated redshifts are 2.20$^{+0.30}_{-0.35}$ for {B6}\\, and 2.20$^{+0.45}_{-0.30}$ for {B7}, respectively. Considering the uncertainty of this method in determining redshifts, both values are consistent with the Ly$\\alpha$\\, redshift of 2.38 of the LABs. Adopting the number count study of {\\it Herschel}\\, sources in Clements et al. (2010), the probability of finding a 350 $\\mu$m source with a flux greater than 40 mJy within 20 arcsec is 2\\%. Accounting for such a low number density of strong FIR sources and the positional coincidence of the LABs with strong FIR sources, the FIR sources are very likely associated with the LABs. Nevertheless, future spectroscopic observations of molecular lines at millimeter wavelengths or of forbidden lines in the near-infrared will be quite important to confirm this association. In the following sections, the Ly$\\alpha$\\, redshift of 2.38 will be adopted for the LABs.


\\begin{figure*}[t]
\\vspace{-0.0cm}
\\centering
\\includegraphics[angle=0,width=1.0\\textwidth]{fig2.pdf}
\\vspace{-0.0cm}
\\caption{Probability as a function of redshift for {B6}\\, and {B7}. NGC~6240 and Arp~220 are adopted as the most appropriate starburst templates for B6 and B7, respectively. A red vertical line denotes a redshift of 2.38 for Ly$\\alpha$\\, emission.}
\\label{redshift} 
\\end{figure*}

\\subsection{Dust properties}\\label{dust}
For {B6}\\, and {B7}, we have included the measurements from the five {\\it Herschel}\\, bands as well as the 870~$\\mu$m data taken from Beelen et al. (2008) in the dust continuum analysis, using a single-component dust model as described in Wei\\ss\\ et al. (2007). 
{\\it Spitzer} MIPS 24~$\\mu$m data (Colbert et al. 2006) are not used in the model fitting because they are strongly affected by PAH features, but are shown in Fig.~\\ref{sed} to allow for a better comparison with the overlaid templates.
We find a dust temperature, $T\\rm_{dust}$, of 70\\ppm5 K and a dust mass, M$\\rm_{dust}$, of (3.2\\ppm0.8)$\\times$10$^8$ M$\\rm _\\odot$ for {B6}, and $T\\rm_{dust}$\\,=\\,70\\ppm5 K and M$\\rm_{dust}$\\,=\\,(5.0\\ppm1.0)$\\times$10$^8$ M$\\rm _\\odot$ for {B7}, respectively. 
The implied FIR luminosities are \\ifmmode{L_{\\rm FIR}}\\else{$L_{\\rm FIR}$}\\fi\\, = (10.0\\ppm1.9)$\\times$10$^{12}$\\,\\ifmmode{L_{\\odot}}\\else{$L_{\\odot}$}\\fi\\, for {B6}, and \\ifmmode{L_{\\rm FIR}}\\else{$L_{\\rm FIR}$}\\fi\\, = (8.6\\ppm2.3)$\\times$10$^{12}$\\,\\ifmmode{L_{\\odot}}\\else{$L_{\\odot}$}\\fi\\, for {B7}, respectively, where \\ifmmode{L_{\\rm FIR}}\\else{$L_{\\rm FIR}$}\\fi\\, is integrated from 40~$\\mu$m to 200~$\\mu$m in the rest frame.
The upper \\ifmmode{L_{\\rm FIR}}\\else{$L_{\\rm FIR}$}\\fi\\,limits for both {B1}\\, and {B5}\\, are\n$\\sim$2.5$-$2.8$\\times$10$^{12}$\\,\\ifmmode{L_{\\odot}}\\else{$L_{\\odot}$}\\fi.\n\n\n\\begin{figure*}[t]\n\\vspace{-0.0cm}\n\\centering\n\\includegraphics[angle=0,width=1.0\\textwidth]{fig3.pdf}\n\\vspace{-0.0cm}\n\\caption{Single-component dust models for B6 and B7 (a redshift of 2.38 is\nadopted). The black solid lines show the thermal dust continuum emission\nof the 70~K dust components for both {B6} and {B7}. The open circles represent\nthe measurements at five {\\it Herschel}\\, bands in this paper and the filled circles\nindicate the flux densities at 24~$\\mu$m (Colbert et al. 2006). The filled\nsquare denotes the flux density (or its upper limit) at 870~$\\mu$m, taken from\nBeelen et al. (2008). The wavelengths at the rest frame are labelled on the\ntop. For the single-component dust models adopted in the figure (see\n$\\S~\\ref{dust}$ for details of the dust models.), the $\\chi^2$ values are 1.1\nfor {B6}\\, and 1.0 for {B7}, respectively. In $\\S~\\ref{red}$, four typical\nstarburst templates, NGC~6240, M~82, Mrk~231 and Arp~220 (Polletta et al.\n2007), are adopted to estimate the redshifts for {B6}\\, and {B7}, and their best\nfits are overlaid in colored lines.\n}\n\\label{sed} \n\\end{figure*}\n\n\\subsection{Star formation rates}\nHere we derive the star formation rates from the Ly$\\alpha$, far-infrared and\nradio luminosities. To estimate the star formation rate (SFR) from the Ly$\\alpha$\\,\nluminosity, we first assume that star formation (SF) powers the observed\nLy$\\alpha$\\, flux. We use an unreddened Ly$\\alpha$\/H$\\alpha$ ratio of 8:1 and the\nconversion factor between H$\\alpha$ luminosity and SFR (Kennicutt 1998),\nyielding SFR($\\rm{Ly\\alpha}$)\/(\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr)\\,=\\,$L_{\\rm Ly\\alpha}$\/(10$^{42}$ erg\ns$^{-1}$). This provides a lower limit because the extinction of Ly$\\alpha$\\, emission\ncaused by dust will largely reduce the observed Ly$\\alpha$\\, luminosity. With the FIR\nluminosity derived from {\\it Herschel}\\,data, we can estimate the SFR by using the\nrelation SFR($L_{\\rm FIR}$)\/(\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr)\\,=\\,$1.7\\times$$L_{\\rm FIR}$\/(10$^{10}$\n\\ifmmode{L_{\\odot}}\\else{$L_{\\odot}$}\\fi) (Kennicutt 1998). If the observed radio emission, with a rest wavelength\nof 6~cm, is dominated by free-free emission in H{\\small II} regions, one can\nalso relate the SFR by the relation SFR($L_{\\rm\n1.4~GHz}$)\/(\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr)\\,=\\,5.52$\\times$10$^{-22}$~$L_{\\rm 1.4~GHz}$\/(W Hz$^{-1}$)\n(Bell 2003). The radio luminosity at 1.4 GHz at the rest frame can be estimated\nfrom the observed flux at 1.51 GHz by assuming a relation\n$S \\propto \\nu^\\alpha$, where S is the flux density and the typical spectral\nindex $\\alpha$ of $-$0.8 is commonly adopted for the SMGs (e.g., Ivison et al.\n2010). 
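Collected in one place, and only as an illustrative sketch (astropy is used for the luminosity distance in the adopted cosmology; the example flux is the observed value for B7), these conversions read:

\\begin{verbatim}
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.3, Om0=0.315)      # cosmology adopted in this paper

def sfr_lya(L_lya):                            # L_Lya in erg/s
    return L_lya / 1e42                        # Kennicutt (1998) with Lya/Halpha = 8

def sfr_fir(L_fir):                            # L_FIR in solar luminosities
    return 1.7e-10 * L_fir                     # Kennicutt (1998)

def sfr_radio(S_obs_Jy, z, nu_obs_GHz=1.51, alpha=-0.8):
    D_L = cosmo.luminosity_distance(z).to(u.m).value
    S = S_obs_Jy * 1e-26                       # W m^-2 Hz^-1
    L_rest = 4 * np.pi * D_L**2 * S / (1 + z)  # at nu_rest = nu_obs * (1 + z)
    L_14 = L_rest * (1.4 / (nu_obs_GHz * (1 + z))) ** alpha  # rest-frame 1.4 GHz
    return 5.52e-22 * L_14                     # Bell (2003)

# e.g. the 77 uJy detection of B7 at z = 2.38 gives roughly 1.6e3 Msun/yr
print(sfr_radio(77e-6, 2.38))
\\end{verbatim}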
These values are listed in Table~\\ref{tab2}.\n\n\n\\begin{center}\n\\begin{table}[h]\n\\centering\n\\caption{Derived star formation rates towards the four LABs}\\label{tab2}\n\\begin{tabular}{ccccc}\n\\hline\nSource & SFR(${L_{\\rm FIR}}$) & SFR($L_{\\rm 1.4GHz}$) & log $L_{\\rm Ly\\alpha}$$^a$ & SFR($\\rm{Ly\\alpha}$) \\\\ \n & [\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr] & [\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr] & [ergs~s$^{-1}$] & [\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr] \\\\\n\\hline\nB1 & $<$480 & $<$1090 & 43.9 & 79 \\\\\nB5 & $<$430 & 1090\\ppm340 & 43.8 & 63 \\\\\nB6 & 1700\\ppm320 & 1430\\ppm360 & 43.8 & 63 \\\\\nB7 & 1460\\ppm390 & 1650\\ppm340 & 43.5 & 32 \\\\\n\\hline\n\\end{tabular}\n\\begin{list}{}{}\n\\item{$^{\\mathrm{a}}$ The Ly$\\alpha$\\,luminosities are adopted from Colbert et al. (2006).}\n\\end{list}\n\\end{table}\n\\end{center}\n\n\n\n\\section{Discussion and Conclusions}\n\nA high detection rate of radio emission (three out of four) around LABs\nsuggests that most LABs do not originate from cooling radiation. Instead,\nphotoionization from starbursts and\/or AGNs may power the LABs in most cases.\nThe high rate of FIR detections (two out of four) points to a star-formation\norigin of the LABs. \nThe SEDs of {B6}\\, and {B7}\\, can be well described by starburst dominated\ntemplates, as shown in Fig.~\\ref{sed}, further supporting Ly$\\alpha$\\, emission\nrelated to the SF in the LABs. In {B6}\\, and {B7}, the SFRs derived from Ly$\\alpha$\\,\nfluxes are far below those estimated from FIR luminosities (Table~\\ref{tab2}).\nThis suggests that the dust indeed greatly reduces the measured Ly$\\alpha$\\, flux.\nComparing the different SFRs, the dust absorption optical depth of the Ly$\\alpha$\\,\nemission becomes $\\sim$3.1$-$3.6. The SFRs estimated from the FIR and radio\nluminosities are comparable, indicating that the radio emission is dominated by\nSF, not by AGNs. The energetic starbursts can provide enough ionizing photons\nto ionize neutral hydrogen atoms in the interstellar medium (ISM), and each\nsubsequent recombination has a probability of $\\sim$ 2\/3 of ending up as a\nLy$\\alpha$ photon (Partridge \\& Peebles 1967). After escaping the galaxy's\nISM, these Ly$\\alpha$ photons can be resonantly scattered by neutral hydrogen\natoms in the intergalactic medium (IGM), which tends to make the Ly$\\alpha$\nemission extended (Zheng et al. 2011). \n\nCen \\& Zheng (2013) propose an SF-based model and predict that LABs at high\nredshift correspond to protoclusters containing the most massive galaxies and\ncluster halos in the early universe as well as ubiquitous strong infrared\nsources undergoing extreme starbursts. \nThis may be supported by the multiple Spitzer\/MIPS sources detected in both\nLABs (see Fig~\\ref{map}(b), Colbert et al. 2006, 2011). Indeed, Prescott et\nal. (2012b) suggest that LABs may be the seeds of galaxy clusters by resolving\nthe galaxies within a LAB at $z$\\,=\\,2.7. The strong FIR emission and the\ninferred high SFRs support the presence of a strong starburst in both {B6}\\, and\n{B7}. However, AGN-dominated templates like Mrk~231 can not well reproduce\nthe data (see $\\S$~\\ref{red}), suggesting that the SF instead of AGN may power\nthe Ly$\\alpha$\\, emission in both LABs. The model also predicts that the most\nluminous FIR source in each LAB is likely representing the gravitational center\nof the protocluster. 
Fig.~\\ref{map}(c-g) shows that the FIR emission indeed\npeaks in the centers of {B6}\\, and {B7}. The radio continuum emission is detected\nexclusively in the centers, which suggests that the source with the most luminous\nFIR emission (and therefore the highest SFR) is in the gravitational center of each\nLAB. Another very important prediction of this model is that the Ly$\\alpha$\\, emission\nfrom photons that escape the galaxy is expected to be significantly polarized,\nwhich was first confirmed towards LAB1 in the SSA22 field by\nHayes et al. (2011), supporting models with central power sources. Adopting a\ngas-to-dust mass ratio of 150 and the SFRs estimated above, the gas consumption timescales of\n{B6}\\, and {B7}\\, are relatively short ($\\sim$100 Myr), which is much shorter\nthan the galaxy building timescale. Note that this timescale is a lower limit\nbecause (1) the LABs may already have existed for some time, and (2) additional\ngas may be continuously accreted. In any case, the LABs are visible only during\na short interval in the lifetime of their parent clusters.\n\nNote that the so-called ``SF-based model'' proposed by Cen \\& Zheng (2013) also\nincludes AGN powering or any central powering. The morphologies of the\nLy$\\alpha$\\,emission of the four LABs are quite different (Palunas et al. 2004):\n{B1}\\,and {B5}\\, have core-like structures, while {B6}\\,and {B7}\\, are\ncharacterized by diffuse and extended emission with physical sizes of\n$\\sim$60-70~kpc. The latter may be driven by multiple sources as suggested\nby the MIPS data and are consistent with the SF-based model. There is no\nclear FIR emission detected around {B1}\\,and {B5}. Therefore, the Ly$\\alpha$\\, emission\nin both LABs is unlikely to be predominantly triggered by SF. Overzier et al. (2013)\nconclude that in {B1}\\, the photoionization from an AGN is the main driver of\nLy$\\alpha$\\, emission. However, Francis et al. (2013) show that the observed Ly$\\alpha$\\,\nemission in {B1}\\, is of complex origin: it is dominated by the sum of the emission\nfrom the sub-haloes where the cold gas is lit up, most likely by a\ncombination of tidally triggered star formation, bow shocks, resonant\nscattering of Ly$\\alpha$\\, from the filament collisions, and tidal stripping of the\ngas. In {B5}\\, radio emission is tentatively detected, and therefore an AGN may\nalso power the Ly$\\alpha$\\, emission. Among the four LABs in J2143-4423, two of them, {B6}\\,\nand {B7}, are mainly driven by SF. However, the other two LABs, {B1}\\, and {B5},\nwithout a clear FIR detection, are predominantly driven by AGNs or other\nsources of energy still to be specified, but not mainly by star formation. We\nthus conclude that LABs must be powered by quite diverse sources of energy.\n\nWith its high angular resolution and superb sensitivity, future observations\nwith the Atacama Large Millimeter\/submillimeter Array (ALMA) will reveal more details about\nthe nature of LABs, such as testing the predictions of models in which the\nionization is provided by intense star formation and confirming the\nsignificantly polarized dust emission at mm\/submm wavelengths.\n\n\\begin{acknowledgements}\nWe thank the anonymous referee for valuable comments that improved this manuscript. \nY.A. acknowledges partial support by NSFC grant 11373007 and Youth Innovation Promotion Association CAS.\nR.C. is supported in part by NASA grant NNX11AI23G.\nY.M. acknowledges support from JSPS KAKENHI Grant Number 20647268. 
\nZZ was partially supported by NSF grant AST-1208891 and NASA grant NNX14AC89G.\nThis research has made use of NASA's Astrophysical Data System (ADS).\n\nPACS has been developed by a consortium of institutes led by MPE (Germany) and\nincluding UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France);\nMPIA (Germany); INAF-IFSI\/OAA\/OAP\/OAT, LENS, SISSA (Italy); IAC (Spain). This\ndevelopment has been supported by the funding agencies BMVIT (Austria),\nESA-PRODEX (Belgium), CEA\/CNES (France), DLR (Germany), ASI\/INAF (Italy), and\nCICYT\/MCYT (Spain).\n\nSPIRE has been developed by a consortium of institutes led by Cardiff\nUniversity (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM\n(France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory\n(Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and\nCaltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported\nby national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS\n(France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA\n(USA).\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{PACS and its view}\nThe PACS spectrometer (\\citeauthor{pog2010}~\\citeyear{pog2010}) on-board the Herschel Space Telescope (\\citeauthor{pil2010}~\\citeyear{pil2010}), covers the wavelength range between $50\\ \\mu$m and $200\\ \\mu$m at a resolution of typically 1000. In comparison to the ISO LWS instrument, PACS offers a higher resolution and a better sensitivity. With PACS, we can observe a wide range of molecular emission lines in the innermost regions of circumstellar envelopes (CSE) of AGB stars, including CO and H$_2$O, which are major coolants and thus important for determining the thermodynamical structure of these environments.\n\n\\section{Methodology and preliminary results}\nKinematical, thermodynamical and chemical information about the circumstellar shell is provided by molecular emission lines and dust features. This information is derived through the use of two radiative transfer codes. The non-LTE line radiative transfer code, \\emph{GASTRoNOoM} (\\citeauthor{dec2006}~\\citeyear{dec2006}), calculates the velocity, temperature and density profiles of the gas envelope, the level populations of the molecules accounted for and the emergent line profiles for the different transitions of each molecule. The continuum radiative transfer code, \\emph{MCMax} (\\citeauthor{min2009a}~\\citeyear{min2009a}), calculates the temperature structure of the dust envelope and the final SED. In order to get a full understanding of the entire envelope around an AGB source, both modelling approaches need be used while maintaining consistency between dust and gas (see Lombaert et al., in prep).\n\nThe need for a consistent treatment of both the gas and dust components is important because of the high sensitivity of the water emission models to the dust-to-gas ratio. Modelling has shown that CO emission lines do not share this sensitivity. This behaviour is shown in Figure~1, which gives an excerpt of the PACS data of V669 Cas (full black), overlayed with two models differing only in the dust-to-gas ratio. This indicates that CO can be safely used to determine the gas temperature profile and the mass loss rate and that the water abundance profile can then be derived if the dust-to-gas ratio is constrained. 
Therefore, we suggest to determine the dust-to-gas ratio empirically from both gas emission (i.e. CO lines) and SED (i.e. dust continuum) modelling, with a consistent iterative treatment of both gas and dust. Consequently, one can improve the constraints on the water abundance profile.\n\n\\begin{figure}\\centering\n\\includegraphics[height=5.7cm]{lombaert_fig1.ps}\n\\caption{An excerpt of the PACS spectrum of V669 Cas (full black). Two models are overplotted, as well as the molecular transitions for CO, $^{13}$CO, o-H$_2$O, p-H$_2$O, o-H$_2^{18}$O, p-H$_2^{18}$O and SiO at their expected frequencies. Model 1 (full gray) has a dust-to-gas ratio $\\psi = 0.001$, whereas Model 2 (dashed) has $\\psi = 0.005$.}\n\\end{figure}\n\\acknowledgements\nRL acknowledges support from the KULeuven under grant number GOA-B6995, BdV and LD from the Fund for Scientific Research of Flanders (FWO), EDB from FWO under grant number G.0470.07 and JB and PR from the Belgian Federal Science Policy Office via the PRODEX Programme of ESA.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\vspace{-0.3em}\n\n\n Ultra-reliable and low-latency communications (URLLC) have attracted a lot of attention in 5G and the upcoming 6G for mission-critical services\\cite{3GPPRelease16,Mahyar2019ShortCode,popovski2019wireless}. URLLC is renowned by its requirements for significantly lower latency compared to 4G long term evolution (4G-LTE) and high transmission reliability requiring the block error rate (BLER) lower than $10^{-4}$. These stringent requirements necessitate the use of low-complexity decoders and short block-length codes ($\\leq 150$ bits) in coding scheme design \\cite{R1-1608770}. Additionally, URLLC requires bit-level granularity in the block lengths and coding rates, to accommodate scenarios with varying latency and bandwidth constraints \\cite{Mahyar2019ShortCode}, which complicates the coding system further.\n \n Main candidates of short block-length codes for URLLC have been thoroughly reviewed in \\cite{Mahyar2019ShortCode,liva2016codeSurvey}. It has been identified that short Bose-Chaudhuri-Hocquenghem (BCH) codes and cyclic-redundancy-check-aided Polar (CRC-Polar) codes have superior BLER performance that is close to the normal approximation (NA) bound\\cite{erseghe2016coding}. However, it is challenging for BCH and CRC-Polar to provide bit-level granularity with the optimal error-correction capability, as they are originally available at certain block-lengths and rates \\cite{lin2004ECC,Designofpolar}, while the best known linear codes of different lengths and rates have different structures \\cite{Grassl:codetables}. Recently, \\cite{papadopoulou2021short} demonstrated that when using universal decoders, high-performance rate-compatible short codes can also be conveniently constructed by selecting good random codes. Therefore, universal decoders are favored in URLLC, as they can decode any linear block codes, including BCH and CRC-Polar codes, as well as codes that do not adhere to specific code structures.\n \n Ordered-statistics decoding (OSD) \\cite{Fossorier1995OSD} is a near-maximum-likelihood (near-ML) universal decoder that rekindles interests recently. It can decode any linear block code with near-ML BLER performance. OSD has two main phases, namely, \\textit{preprocessing} and \\textit{reprocessing}. 
In preprocessing, it sorts the received codeword bits according to their reliabilities (so that the most reliable bits can be identified), and permutes the columns of the code generator matrix accordingly. The permuted matrix is then transformed into the systematic form by Gaussian elimination (GE), where the information set is associated with the most reliable bits. In reprocessing, a number of test error patterns (TEPs) are attempted to flip the most reliable bits. The remaining less reliable bits are recovered by \\textit{re-encoding} the flipped most reliable bits with the systematic permuted generator matrix.\n \n Let $C$ denote complexity. In general, the average complexity of OSD is roughly characterized as\n \\begin{equation} \\label{equ::CmpOSD}\n C_{\\mathrm{OSD}} = C_{\\mathrm{Preprocessing}} + N_a C_{\\mathrm{Re-encoding}},\n \\end{equation}\n where $N_a$ is the average number of TEPs attempted in a single decoding. Recent years have seen many works aiming to reduce $C_{\\mathrm{OSD}}$ by reducing the number $N_a$ \\cite{yue2021probability,Chentao2019SDD,NewOSD-5GNR,yue2021linear,Wu2007OSDMRB,FossorierBoxandMatch,improvedTEPwithMean,choi2019fast, jin2006probabilisticConditions,WJin2007MultipleBiases}. For example, \\cite{yue2021probability,Chentao2019SDD,Wu2007OSDMRB,choi2019fast} proposed techniques to identify unpromising TEPs that can be discarded without processing, and \\cite{yue2021probability,Wu2007OSDMRB, jin2006probabilisticConditions} designed approaches to terminate OSD early rather than processing all possible TEPs. These approaches can usually reduce $N_a$ to a very low level at high signal-to-noise ratios (SNRs). In this situation, $C_{\\mathrm{OSD}}$ will be dominated by preprocessing rather than reprocessing. Specifically, GE has a complexity as high as $O(nk^2)$ for a $k\\times n$ generator matrix \\cite{Fossorier1995OSD}, where $O(\\cdot)$ denotes the big-O notation. As a result, when $N_a$ is not large, GE introduces a \\textit{complexity floor} to OSD, that is, a complexity component that can hardly be reduced. This complexity floor hinders the application of OSD in URLLC, especially for scenarios operating at high SNRs. Recently, \\cite{choi2021fast} proposed using multiple offline-produced generator matrices to replace GE in OSD. Nevertheless, this approach could introduce extra overheads at low-to-moderate SNRs, as it performs several reprocessings over multiple generator matrices.\n \n In this paper, we design an OSD decoder with adaptive GE reduction. Specifically, the decoder will skip GE if doing so is still very likely to produce the correct decoding result. Two conditions are introduced in the proposed decoder. The first condition decides whether to skip GE, based on evaluating the BLER performance with and without GE. If GE is skipped, the re-encoding process will be performed with the original generator matrix of the code. The second condition determines whether the correct decoding result has been found in the decoding process without GE. If so, decoding is terminated early to reduce the complexity; if not, the standard OSD is performed as a supplement to prevent the BLER from degrading. Our verification shows that at high SNRs, the proposed decoder can avoid GE in almost all decoding attempts with nearly the same BLER performance as the standard OSD. 
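To make the complexity floor concrete, the schematic operation count below applies the model in (\\ref{equ::CmpOSD}) with illustrative unit costs (a cubic GE term and a quadratic re-encoding term); these are rough counts of binary operations, not measured complexities.
\\begin{verbatim}
# Schematic view of Eq. (1): C_OSD = C_Preprocessing + N_a * C_Re-encoding.
# Unit costs are illustrative operation counts, not measured figures.
def osd_cost(n, k, N_a):
    c_ge = n * k * k              # preprocessing dominated by GE, O(n k^2)
    c_reenc = k * (n - k)         # one re-encoding with the systematic matrix
    return c_ge + N_a * c_reenc

n, k = 128, 64
for N_a in (1, 10, 100, 1000):
    print(N_a, osd_cost(n, k, N_a))   # for small N_a the GE term dominates
\\end{verbatim}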
Owing to the effective GE reduction, the proposed decoder can achieve a significantly lowered complexity at high SNRs compared to the latest OSD approaches from the literature \\cite{choi2021fast,yue2021probability}.\n \n The rest of this paper is organized as follows. Section \\ref{sec::Preliminaries} presents the preliminaries. Section \\ref{sec::Algorithm} describes the proposed decoding algorithm. Section \\ref{sec::Performance and Complexity} discusses the decoding error performance and complexity. Section \\ref{Sec::Simulation} verifies the BLER and complexity of the proposed decoder via simulations. Finally, Section \\ref{sec::Conclusion} concludes the paper.\n \n \\emph{Notation}: We use $[a]_u^v = [a_u,\\ldots,a_v]$ to denote a row vector containing element $a_{\\ell}$ for $u\\le \\ell\\le v$. For simplicity, we do not distinguish random variables and their samples throughout the paper, while possible abuse of notations will be specified.\n\n\\vspace{-0.1em} \n\\section{Preliminaries} \\label{sec::Preliminaries}\n\\vspace{-0.1em} \nLet $\\mathcal{C}(n,k)$ denote a binary linear block code, where $n$ and $k$ are the lengths of the codeword and the information block, respectively. $\\mathcal{C}(n,k)$ is defined by its generator matrix $\\mathbf{G}$; that is, an information sequence $\\mathbf{b} = [b]_1^k$ is uniquely encoded to a codeword $\\mathbf{c} = [c]_1^n$ by $\\mathbf{c} = \\mathbf{b}\\mathbf{G}$. In this paper, we assume that $\\mathbf{G}$ is systematic, i.e., $\\mathbf{G} = [\\mathbf{I}_k \\ \\mathbf{P}]$, where $\\mathbf{I}_k$ is a $k\\times k$ identity matrix and $\\mathbf{P}$ is the parity sub-matrix.\n\nWe consider an additive white Gaussian Noise (AWGN) channel and binary phase shift keying (BPSK) modulation. Let $\\mathbf{s} = [s]_1^n$ denote the modulated signals, where $s_{i} = (-1)^{c_{i}}\\in \\{\\pm 1\\}$. At the channel output, the received signal is given by $\\mathbf{r} = \\mathbf{s} + \\mathbf{w}$, where $\\mathbf{w}$ is the AWGN vector with zero mean and variance $N_{0}\/2$, for $N_0$ being the single-band noise power spectrum density. SNR is accordingly defined as $\\mathrm{SNR} = 2\/N_0$. Without loss of generality, approaches provided in this paper can be extended to other modulation schemes.\n\nAt the receiver, the bitwise hard-decision estimate $\\mathbf{y}= [y]_{1}^n$ of codeword $\\mathbf{c}$ is obtained according to: $y_{i} = 1 $ for $r_{i}<0$ and $y_{i} = 0$ for $r_{i}\\geq 0$. If codewords in $\\mathcal{C}(n,k)$ have equal transmission probabilities, the log-likelihood ratio (LLR) of the $i$-th received symbol is defined as ${\\ell}_{i} \\triangleq \\ln \\frac{\\mathrm{Pr}(c_{i}=0|r_{i})}{\\mathrm{Pr}(c_{i}=1|r_{i})}$, which is further simplified to ${\\ell}_{i} = \\frac{4r_{i}}{N_{0}}$ if employing BPSK \\cite{lin2004ECC}. Thus, we define $\\alpha_{i} \\triangleq |r_{i}|$ (the scaled magnitude of LLR) as the reliability of $y_i$, where $|\\cdot|$ is the absolute operation.\n\nOSD preprocesses received signals according to reliabilities. First, a permutation $\\pi_{1}$ is performed to sort the reliabilities $\\bm{\\alpha} = [\\alpha]_1^n$ in descending order, and the ordered reliabilities are obtained as $\\pi_1(\\bm{\\alpha})$. Then, the generator matrix are accordingly permuted (in terms of columns) to $\\pi_1(\\mathbf{G})$. Next, OSD obtains the systematic form of $\\pi_1(\\mathbf{G})$ as $\\widetilde{\\mathbf{G}} = [\\mathbf{I}_k \\ \\widetilde{\\mathbf{P}}]$ by performing GE. 
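As an aside, a minimal sketch of this preprocessing step (reliability sorting, column permutation, and GF(2) Gaussian elimination with column pivoting, whose column swaps correspond to the additional permutation introduced next) is given below; it assumes a full-rank generator matrix and is not the implementation used in this paper.
\\begin{verbatim}
import numpy as np

def osd_preprocess(G, r):
    """Reliability sorting, column permutation and GF(2) Gaussian elimination.
    Assumes G (k x n, entries 0/1) has full row rank; sketch only."""
    perm1 = np.argsort(-np.abs(r))             # sort by descending reliability
    Gp = (G[:, perm1] % 2).astype(np.int64)
    k, n = Gp.shape
    perm2 = np.arange(n)                       # records extra column swaps
    for i in range(k):
        # pick a column (index >= i) with a nonzero entry in rows i..k-1
        piv_col = next(j for j in range(i, n) if Gp[i:, j].any())
        if piv_col != i:                       # column swap -> extra permutation
            Gp[:, [i, piv_col]] = Gp[:, [piv_col, i]]
            perm2[[i, piv_col]] = perm2[[piv_col, i]]
        piv_row = i + int(np.argmax(Gp[i:, i]))
        Gp[[i, piv_row]] = Gp[[piv_row, i]]    # bring a 1 onto the diagonal
        for r2 in range(k):                    # clear column i in all other rows
            if r2 != i and Gp[r2, i]:
                Gp[r2] ^= Gp[i]
    return Gp, perm1, perm2                    # Gp is systematic up to the swaps
\\end{verbatim}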
We represent the GE operation as $\\widetilde{\\mathbf{G}} = \\mathbf{E}(\\pi_2(\\pi_1(\\mathbf{G})))$, where $\\mathbf{E}$ (dimension $k\\times k$) represents row operations, and $\\pi_{2}$ represents an additional column permutation. $\\pi_{2}$ occurs to ensure that the first $k$ columns of $\\pi_1(\\mathbf{G})$ are linearly independent. Accordingly, $\\mathbf{r}$, $\\mathbf{y}$, and $\\bm{\\alpha}$, are permuted into $\\widetilde{\\mathbf{r}} = \\pi_2(\\pi_1(\\mathbf{r}))$, $\\widetilde{\\mathbf{y}} = \\pi_2(\\pi_1(\\mathbf{y}))$, $\\widetilde{\\bm{\\alpha}} = \\pi_2(\\pi_1(\\bm{\\alpha}))$, respectively.\n\nSince $\\pi_2$ only marginally disrupts the descending order of $\\widetilde{\\bm{\\alpha}}$ \\cite[Eq. (59)]{Fossorier1995OSD}, the first $k$ positions of $\\widetilde{\\mathbf{y}}$, denoted by $\\widetilde{\\mathbf{y}}_{\\mathrm{B}} =[\\widetilde{y}]_1^k$, are referred to as the most reliable basis (MRB) \\cite{Fossorier1995OSD}. To eliminate errors in MRB, a length-$k$ TEP $\\mathbf{e} = [e]_1^k$ is added to $\\widetilde{\\mathbf{y}}_{\\mathrm{B}}$ to obtain a codeword estimate by re-encoding according to $\\widetilde{\\mathbf{c}}_{\\mathbf{e}} = \\left(\\widetilde{\\mathbf{y}}_{\\mathrm{B}}\\oplus \\mathbf{e}\\right)\\widetilde{\\mathbf{G}}$, where $\\widetilde{\\mathbf{c}}_{\\mathbf{e}}$ is the ordered codeword estimate with respect to the TEP $\\mathbf{e}$. In reprocessing, a list of TEPs are re-encoded to generate multiple codeword candidates. The maximum Hamming weight of TEPs attempted is limited by a parameter $m$, namely, decoding order. Thus, the number of TEPs processed in an order-$m$ OSD can be up to $\\sum_{i=0}^{m}\\binom{k}{i}$. For a code with the minimum Hamming weight $d_{\\mathrm{H}}$, OSD with order $m = \\lceil d_{\\mathrm{H}}\/4-1\\rceil$ can achieve the ML decoding performance \\cite{Fossorier1995OSD}.\n\nWith BPSK, the best ordered codeword estimate $\\widetilde{\\mathbf{c}}_{\\mathrm{best}}$ is found by minimizing the weighted Hamming distance between codeword estimate $\\widetilde{\\mathbf{c}}_{\\mathbf{e}}$ and $\\widetilde{\\mathbf{y}}$, which is defined as \\cite{valembois2002comparison}\n \t\\begin{equation} \\small \\label{equ::Prelim::WHD_define}\n \t\t \\mathcal{D}(\\widetilde{\\mathbf{c}}_{\\mathbf{e}},\\widetilde{\\mathbf{y}}) \\triangleq \\sum_{1 \\leq i \\leq n } (\\widetilde{c}_{\\mathbf{e},i} \\oplus \\widetilde{y}_{i}) \\widetilde{\\alpha}_{i}.\n \t\\end{equation}\nFinally, the decoding result $\\hat{\\mathbf{c}}_{\\mathrm{best}}$ is output by performing inverse permutations over $\\widetilde{\\mathbf{c}}_{\\mathrm{best}}$, i.e., $\\hat{\\mathbf{c}}_{\\mathrm{best}} = \\pi_1^{-1}(\\pi_2^{-1}(\\widetilde{\\mathbf{c}}_{\\mathrm{best}}))$.\n\n\\vspace{-0.25em} \n\\section{OSD with Adaptive GE Reduction} \\label{sec::Algorithm}\n\\vspace{-0.25em} \n\\subsection{Overall Description}\n\\vspace{-0.25em} \n \\begin{figure} \n \\centering\n \\definecolor{mycolor1}{rgb}{0.00000,0.44706,0.74118}%\n \\definecolor{mycolor2}{rgb}{0.00000,0.44700,0.74100}%\n \\tikzstyle{terminator} = [rectangle, draw, text centered, rounded corners, minimum height=2em]\n \\tikzstyle{process} = [rectangle, draw, text centered, minimum height=2em]\n \\tikzstyle{decision} = [diamond, draw, text centered, minimum height=1em,aspect=2.5]\n \\tikzstyle{connector} = [draw, -latex']\n %\n \\begin{tikzpicture}[node distance=2cm]\n \\node at (-2,0) [terminator] (start) {\\footnotesize Start Decoding};\n \\node [decision] at (-2,-1.3) (con1) {\\footnotesize Condition 1};\n \\node 
[process] at (2,-1.3) (NonGE) {\\footnotesize Non-GE OSD (order $m\\!-\\!1$)};\n \\node [decision] at (2,-2.6) (con2) {\\footnotesize Condition 2};\n \\node [process] at (-2,-2.6) (GE) {\\footnotesize Standard OSD (order $m$)};\n \\node at (0,-3.9) [terminator] (end) {\\footnotesize Finish Decoding};\n \\path [connector] (start) -- (con1);\n \\path [connector] (con1) -- (NonGE);\n \\path [connector] (con1) -- (GE);\n \\path [connector] (NonGE) -- (con2);\n \\path [connector] (con2) -- (GE);\n \\path [connector] (con2) |- (end);\n \\path [connector] (GE) |- (end);\n \n \\node[draw=none] at (-1.6, -2.0) (No) {\\footnotesize No};\n \\node[draw=none] at (-0.4, -1.1) (Yes) {\\footnotesize Yes};\n \\node[draw=none] at (1.6, -3.3) (yes) {\\footnotesize Yes};\n \\node[draw=none] at (0.4, -2.4) (No) {\\footnotesize No};\n \n \\end{tikzpicture}\n \t\\vspace{-0em}\n \\caption{The structure of the proposed decoder.}\n \t\\vspace{-0em}\n \t\\label{Fig::structure}\n \n\t\\end{figure}\n\nWe propose an OSD algorithm that can adaptively skip its preprocessing stage (including GE). When preprocessing is skipped, the original hard-decision estimate $\\mathbf{y}$ and generator matrix $\\mathbf{G}$ will be used for re-encoding (instead of using $\\widetilde{\\mathbf{y}}$ and $\\widetilde{{\\mathbf{G}}}$). Specifically, let $\\mathbf{y}_{\\mathrm{B}}$ denote the first $k$ positions of $\\mathbf{y}$, i.e., $\\mathbf{y}_{\\mathrm{B}} = [y]_1^k$. Then, a codeword estimate is directly recovered by $\\mathbf{c}_{\\mathbf{e}} = \\left(\\mathbf{y}_{\\mathrm{B}}\\oplus \\mathbf{e}\\right)\\mathbf{G}$ with respect to a TEP $\\mathbf{e}$. Similar to the standard OSD, a list of TEPs will be used in re-encoding, and the maximum allowed Hamming weight of the TEPs is limited by $m'$. Finally, if $\\mathbf{c}_{\\mathbf{e}}$ is identified as the best codeword estimate, it will be directly output as the decoding result, with no inverse permutation required. We refer to such a decoding process without GE as the Non-GE OSD, and $m'$ is its decoding order. \n\nThe structure of the proposed decoder is illustrated in Fig. \\ref{Fig::structure}. At the start of decoding, the decoder decides whether to conduct the Non-GE OSD or the standard OSD according to ``Condition 1''. If the Non-GE OSD is performed, ``Condition 2'' will determine whether the Non-GE OSD has found the correct decoding result. If not, the standard OSD will be conducted following the Non-GE OSD to avoid degraded decoding performance. We set $m'=\\max(m-1,0)$ in the proposed decoder, i.e., the Non-GE OSD is one order lower than the standard OSD. The reasons are that 1) if $m'\\geq m$, the Non-GE OSD has a higher complexity than the standard OSD, which negates the need for GE reduction and worsens the worst-case decoding complexity, and 2) if $m'$ is too small, the Non-GE OSD may easily fail to find the correct result.\n\nAs seen, the design of ``Condition 1'' and ``Condition 2'' is important for the structure illustrated in Fig. \\ref{Fig::structure}. We note that the standard OSD in Fig. 
\\ref{Fig::structure} can be implemented by any improved variant of OSD; for example, the efficient probability-based OSD (PB-OSD) proposed recently \\cite{yue2021probability}.\n\n\\vspace{-0.25em} \n\\subsection{The First Condition}\n\\vspace{-0.25em} \nTo derive the first condition, let us first consider the BLER performance of OSD, which is represented as \\cite{Fossorier1995OSD}\n\\begin{equation}\n \\mathrm{P_e} \\leq (1 - \\mathrm{P_{list}}) + \\mathrm{P_{ML}},\n\\end{equation}\nwhere $\\mathrm{P_{ML}}$ is the ML BLER of $\\mathcal{C}(n,k)$. $\\mathrm{P_{ML}}$ is mainly characterized by the structure of $\\mathcal{C}(n,k)$ and, in particular, the minimum Hamming weight $d_{\\mathrm{H}}$. $\\mathrm{P_{list}}$ is the probability that some TEP can eliminate the errors over the MRB, whose value depends on the decoding order $m$ \\cite[Eq. (24)]{dhakal2016error}.\n\n\n\nLet $\\mathrm{P'_{list}}$ denote the probability that the Non-GE OSD can eliminate the errors over $\\mathbf{y}_{\\mathrm{B}}$ with some TEP. Thus, the BLER of the Non-GE OSD is upper bounded by $\\mathrm{P_e} \\leq (1 - \\mathrm{P'_{list}}) + \\mathrm{P_{ML}}$. Therefore, if $\\mathrm{P'_{list}} =\n \\mathrm{P_{list}}$, the Non-GE OSD will deliver the same BLER as OSD. In other words, the Non-GE OSD is sufficient to find the correct decoding result and thus GE is not necessary. $\\mathrm{P'_{list}} =\n \\mathrm{P_{list}}$ can be satisfied when the SNR is asymptotically large (i.e., $N_0\\to 0$). To see this, consider $\\mathrm{P_{list}}$ derived from \\cite[Lemma 1]{yue2021revisit}, i.e.,\n \\begin{equation} \\label{equ::Ana::Plist}\n \\mathrm{P_{list}} = \\sum_{i=0}^{m}\\int_{x = 0}^{\\infty} \\binom{k}{i} (p(x))^{i}(1-p(x))^{k-i} f_{\\widetilde{\\alpha}_{k+1}}(x) dx,\n \\end{equation}\n where $f_{\\widetilde{\\alpha}_{k+1}}(x)$ is the probability density function ($\\mathrm{pdf}$) of the $(k+1)$-th ordered reliability, $\\widetilde{\\alpha}_{k+1}$ (as a random variable). $p(x)$ is the average bitwise error probability of $\\widetilde{\\mathbf{y}}_{\\mathrm{B}}$ conditioning on $\\{\\widetilde{\\alpha}_{k+1} = x\\}$, which is given by \\cite[Eq. (13)]{yue2021revisit}\n \\begin{equation} \\label{equ::Ana::Pe::px}\n p(x) = \\frac{Q(\\frac{2x+2}{\\sqrt{2N_0}}) }{ Q(\\frac{2x+2}{\\sqrt{2N_0}}) + Q(\\frac{2x-2}{\\sqrt{2N_0}}) } .\n \\end{equation}\nThen, when $N_0\\to 0$, we have \\cite[Eq. (28)]{yue2021linear}\n \\begin{equation} \n \\lim_{N_0 \\to 0}p(x) = \\frac{1}{1 + \\lim\\limits_{N_0 \\to 0}\\exp\\left(\\frac{4x}{N_0}\\right)}.\n \\end{equation}\nWhen $N_0 \\to 0$, we also have $\\widetilde{\\alpha}_{k+1} \\to 1$ and $f_{\\widetilde{\\alpha}_{k+1}}(x) \\to \\delta(x-1)$, where $\\delta(x)$ is the Dirac delta function. We then obtain \n\\begin{equation} \\label{equ::Plist::N0to0}\n \\lim_{N_0 \\to 0}\\mathrm{P_{list}} = \\lim_{N_0 \\to 0}\\sum_{i=0}^{m} \\binom{k}{i} \\left(\\frac{1}{1 + e^{4\/N_0}}\\right)^{\\!i\\!}\\left(\\frac{e^{4\/N_0}}{1 + e^{4\/N_0}}\\right)^{\\!\\!k-i}\\!\\!\\!.\n\\end{equation}\n\nOn the other hand, we can derive $\\mathrm{P'_{list}}$ as\n \\begin{equation} \\label{equ::Ana::Plist'}\n \\mathrm{P'_{list}} = \\sum_{i=0}^{m'} \\binom{k}{i} (p')^{i}(1-p')^{k-i},\n \\end{equation}\nwhere $p'$ is the bitwise error probability of $\\mathbf{y}_{\\mathrm{B}}$. 
Under AWGN and BPSK, $p'$ is readily given by $p' = Q(\\sqrt{2\/N_0})$, which also has the following asymptotic property,\n\\begin{equation} \\label{equ::Ana::P'::app}\n \\lim_{N_0 \\to 0}p' = \\frac{1}{1 + \\lim\\limits_{N_0 \\to 0}\\exp\\left(\\frac{4}{N_0}\\right)}.\n\\end{equation}\nTherefore, substituting (\\ref{equ::Ana::P'::app}) into (\\ref{equ::Ana::Plist'}) and taking $m'=m-1$, we observe that \n\\begin{equation} \\label{equ::ana::same}\n\\begin{split}\n \\lim_{N_0 \\to 0}&\\mathrm{P_{list}}- \\mathrm{P'_{list}} \\\\\n &= \\lim_{N_0 \\to 0}\\binom{k}{m} \\left(\\frac{1}{1 + e^{4\/N_0}}\\right)^{\\!m\\!}\\left(\\frac{e^{4\/N_0}}{1 + e^{4\/N_0}}\\right)^{\\!\\!k-m} \\\\\n &\\to 0, \n\\end{split}\n\\end{equation}\nwhich indicates that when $N_0\\to 0$, the Non-GE OSD could completely replace the standard OSD. However, when $N_0$ is not negligible, $\\mathrm{P'_{list}} < \\mathrm{P_{list}}$ holds for $m' < m$.\n\n% Figure placeholder (plot content not recoverable from the source). Caption: ``The impacts of values of $\\lambda$ in decoding the $(64,36)$ eBCH code with order 3''; right panel: average number of TEPs, $N_a'$, including a PB-OSD reference curve (labels Fig::64-36-N' and Fig::64-36-PARA).\n\n\\vspace{-0.3em}\n\\section{Conclusion} \\label{sec::Conclusion}\n\\vspace{-0.3em}\n\nIn this paper, we designed an efficient ordered-statistics decoding (OSD) algorithm with adaptive GE reduction. The proposed decoder employs two conditions. The first condition decides whether to conduct the OSD decoding without Gaussian elimination (GE). If the OSD without GE is performed, the second condition identifies whether the correct decoding result has been found in the Non-GE decoding process. If so, the standard OSD with GE can be avoided. The proposed decoding algorithm is an effective solution to the ``complexity floor'' caused by the overhead of GE in OSD decoders. Simulation results indicated that, compared to the approaches from the literature, the proposed decoder can significantly reduce the decoding complexity at high SNRs while maintaining the error performance of the original OSD.\n\n\t\\vspace{-0.3em}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}