_id
stringlengths
36
36
text
stringlengths
5
665k
marker
stringlengths
3
6
marker_offsets
sequence
label
stringlengths
28
32
e929409a-f699-4b35-8b8e-d47fea3e997f
Regarding previous methods such as MoCo v3 [1]} and DINO [2]} adopt ViT/DeiT as their backbone architecture, we first report results of MoBY using DeiT-S [3]} for fair comparison with them. Under 300-epoch training, MoBY achieves 72.8% top-1 accuracy, which is slightly better than MoCo v3 and DINO (without the multi-crop trick), as shown in Table REF .
[2]
[ [ 57, 60 ] ]
https://openalex.org/W3159481202
8a698bd5-08df-49b8-9aa7-0a1a6c49e793
Regarding previous methods such as MoCo v3 [1]} and DINO [2]} adopt ViT/DeiT as their backbone architecture, we first report results of MoBY using DeiT-S [3]} for fair comparison with them. Under 300-epoch training, MoBY achieves 72.8% top-1 accuracy, which is slightly better than MoCo v3 and DINO (without the multi-crop trick), as shown in Table REF .
[3]
[ [ 154, 157 ] ]
https://openalex.org/W3116489684
000dcb59-d9f1-4db4-8014-827ad5823afa
Tricks in MoCo v3 [1]}. MoCo v3 adopts a fixed patch embedding, batch normalization layers to replace the layer normalization ones before the MLP blocks, and a 3-layer MLP head. It also uses large batch size (i.e. 4096) which is unaffordable for many research labs. Tricks in DINO [2]}. DINO adopts asymmetric temperatures between student and teacher, a linearly warmed-up teacher temperature, varying weight decay during pre-training, the last layer fixed at the first epoch, tuning whether to put weight normalization in the head, a concatenation of the last few blocks or CLS tokens as the input to the linear classifier, and etc.
[1]
[ [ 19, 22 ] ]
https://openalex.org/W3145450063
240afcde-48d5-46db-b7c9-55d344df87d4
Tricks in MoCo v3 [1]}. MoCo v3 adopts a fixed patch embedding, batch normalization layers to replace the layer normalization ones before the MLP blocks, and a 3-layer MLP head. It also uses large batch size (i.e. 4096) which is unaffordable for many research labs. Tricks in DINO [2]}. DINO adopts asymmetric temperatures between student and teacher, a linearly warmed-up teacher temperature, varying weight decay during pre-training, the last layer fixed at the first epoch, tuning whether to put weight normalization in the head, a concatenation of the last few blocks or CLS tokens as the input to the linear classifier, and etc.
[2]
[ [ 283, 286 ] ]
https://openalex.org/W3159481202
511dd0dd-e23a-45b9-9c1b-00920ba78ede
In contrast, we mainly adopt standard settings from MoCo v2 [1]} and BYOL [2]}, and use a small batch size of 512 such that the experimental settings will be affordable for most labs. We have also started to try applying some tricks of MoCo v3 [3]}/DINO [4]} to MoBY, though they are not included in the standard settings. Our initial exploration reveals that the fixed patch embedding has no use to MoBY, and replacing the layer normalization layers before the MLP blocks by batch normalization can bring +1.1% top-1 accuracy using 100-epoch training, as shown in Table REF . This indicates that some of these tricks may be useful for the MoBY approach, and the MoBY approach has potential to achieve much higher accuracy on ImageNet-1K linear evaluation. This will be left as our future study.
[1]
[ [ 60, 63 ] ]
https://openalex.org/W3009561768
723c6a62-f717-484a-b29d-3bcf161df4b7
In contrast, we mainly adopt standard settings from MoCo v2 [1]} and BYOL [2]}, and use a small batch size of 512 such that the experimental settings will be affordable for most labs. We have also started to try applying some tricks of MoCo v3 [3]}/DINO [4]} to MoBY, though they are not included in the standard settings. Our initial exploration reveals that the fixed patch embedding has no use to MoBY, and replacing the layer normalization layers before the MLP blocks by batch normalization can bring +1.1% top-1 accuracy using 100-epoch training, as shown in Table REF . This indicates that some of these tricks may be useful for the MoBY approach, and the MoBY approach has potential to achieve much higher accuracy on ImageNet-1K linear evaluation. This will be left as our future study.
[2]
[ [ 74, 77 ] ]
https://openalex.org/W3035060554
7ea72bdf-69d3-4494-947a-3e36008fc68c
In contrast, we mainly adopt standard settings from MoCo v2 [1]} and BYOL [2]}, and use a small batch size of 512 such that the experimental settings will be affordable for most labs. We have also started to try applying some tricks of MoCo v3 [3]}/DINO [4]} to MoBY, though they are not included in the standard settings. Our initial exploration reveals that the fixed patch embedding has no use to MoBY, and replacing the layer normalization layers before the MLP blocks by batch normalization can bring +1.1% top-1 accuracy using 100-epoch training, as shown in Table REF . This indicates that some of these tricks may be useful for the MoBY approach, and the MoBY approach has potential to achieve much higher accuracy on ImageNet-1K linear evaluation. This will be left as our future study.
[3]
[ [ 244, 247 ] ]
https://openalex.org/W3145450063
8ff2defe-c574-479b-8598-aff92f667e4d
In contrast, we mainly adopt standard settings from MoCo v2 [1]} and BYOL [2]}, and use a small batch size of 512 such that the experimental settings will be affordable for most labs. We have also started to try applying some tricks of MoCo v3 [3]}/DINO [4]} to MoBY, though they are not included in the standard settings. Our initial exploration reveals that the fixed patch embedding has no use to MoBY, and replacing the layer normalization layers before the MLP blocks by batch normalization can bring +1.1% top-1 accuracy using 100-epoch training, as shown in Table REF . This indicates that some of these tricks may be useful for the MoBY approach, and the MoBY approach has potential to achieve much higher accuracy on ImageNet-1K linear evaluation. This will be left as our future study.
[4]
[ [ 254, 257 ] ]
https://openalex.org/W3159481202
02b8c549-75e9-4b8e-961d-2fbcc2a0f99e
Two detectors are adopted in the evaluation: Mask R-CNN [1]} and Cascade Mask R-CNN [2]}, following the implementation of [3]}https://github.com/SwinTransformer/Swin-Transformer-Object-Detection. Table REF shows the comparison of the learnt representation by MoBY and the pretrained supervised method in [3]}, in both 1x and 3x settings. For each experiment, we follow all the settings used for supervised pre-trained models [3]}, except that we tune the drop path rate in \(\lbrace 0, 0.1, 0.2\rbrace \) and report the best results (for also supervised models).
[1]
[ [ 56, 59 ] ]
https://openalex.org/W2963150697
399b4039-9677-408d-919e-bbda8f74d88d
Two detectors are adopted in the evaluation: Mask R-CNN [1]} and Cascade Mask R-CNN [2]}, following the implementation of [3]}https://github.com/SwinTransformer/Swin-Transformer-Object-Detection. Table REF shows the comparison of the learnt representation by MoBY and the pretrained supervised method in [3]}, in both 1x and 3x settings. For each experiment, we follow all the settings used for supervised pre-trained models [3]}, except that we tune the drop path rate in \(\lbrace 0, 0.1, 0.2\rbrace \) and report the best results (for also supervised models).
[2]
[ [ 84, 87 ] ]
https://openalex.org/W2964241181
6c2e87ea-bdd0-462f-a9db-156fb2e3b372
Two detectors are adopted in the evaluation: Mask R-CNN [1]} and Cascade Mask R-CNN [2]}, following the implementation of [3]}https://github.com/SwinTransformer/Swin-Transformer-Object-Detection. Table REF shows the comparison of the learnt representation by MoBY and the pretrained supervised method in [3]}, in both 1x and 3x settings. For each experiment, we follow all the settings used for supervised pre-trained models [3]}, except that we tune the drop path rate in \(\lbrace 0, 0.1, 0.2\rbrace \) and report the best results (for also supervised models).
[3]
[ [ 122, 125 ], [ 305, 308 ], [ 426, 429 ] ]
https://openalex.org/W3138516171
fe233a72-4061-45c7-afd1-c939446fb74b
It can be seen that the representations learnt by the self-supervised method (MoBY) and the supervised method are similarly well on transferring performance. While we note that previous SSL works using ResNet as the backbone architecture usually report stronger performance over the supervised methods [1]}, [2]}, [3]}, no gains over supervised methods are observed using Transformer architectures. We hypothesis it is partly because the supervised pre-training on Transformers has involved strong data augmentations [4]}, [5]}, while supervised training of ResNet usually employs much weaker data augmentation. These results also imply space to improve for self-supervised learning using Transformer architectures. <TABLE>
[1]
[ [ 302, 305 ] ]
https://openalex.org/W3035524453
d843de74-3ad2-4f42-8b08-da6e3e8b2650
It can be seen that the representations learnt by the self-supervised method (MoBY) and the supervised method are similarly well on transferring performance. While we note that previous SSL works using ResNet as the backbone architecture usually report stronger performance over the supervised methods [1]}, [2]}, [3]}, no gains over supervised methods are observed using Transformer architectures. We hypothesis it is partly because the supervised pre-training on Transformers has involved strong data augmentations [4]}, [5]}, while supervised training of ResNet usually employs much weaker data augmentation. These results also imply space to improve for self-supervised learning using Transformer architectures. <TABLE>
[2]
[ [ 308, 311 ] ]
https://openalex.org/W3172615411
3156ad3a-b16c-4ad9-8e4e-f4d53a593da9
It can be seen that the representations learnt by the self-supervised method (MoBY) and the supervised method are similarly well on transferring performance. While we note that previous SSL works using ResNet as the backbone architecture usually report stronger performance over the supervised methods [1]}, [2]}, [3]}, no gains over supervised methods are observed using Transformer architectures. We hypothesis it is partly because the supervised pre-training on Transformers has involved strong data augmentations [4]}, [5]}, while supervised training of ResNet usually employs much weaker data augmentation. These results also imply space to improve for self-supervised learning using Transformer architectures. <TABLE>
[3]
[ [ 314, 317 ] ]
https://openalex.org/W3135958856
89413ad8-9dbe-45f7-a6de-018d88d8f517
It can be seen that the representations learnt by the self-supervised method (MoBY) and the supervised method are similarly well on transferring performance. While we note that previous SSL works using ResNet as the backbone architecture usually report stronger performance over the supervised methods [1]}, [2]}, [3]}, no gains over supervised methods are observed using Transformer architectures. We hypothesis it is partly because the supervised pre-training on Transformers has involved strong data augmentations [4]}, [5]}, while supervised training of ResNet usually employs much weaker data augmentation. These results also imply space to improve for self-supervised learning using Transformer architectures. <TABLE>
[4]
[ [ 517, 520 ] ]
https://openalex.org/W3116489684
2693958f-9803-404d-8e63-97a90db19f91
It can be seen that the representations learnt by the self-supervised method (MoBY) and the supervised method are similarly well on transferring performance. While we note that previous SSL works using ResNet as the backbone architecture usually report stronger performance over the supervised methods [1]}, [2]}, [3]}, no gains over supervised methods are observed using Transformer architectures. We hypothesis it is partly because the supervised pre-training on Transformers has involved strong data augmentations [4]}, [5]}, while supervised training of ResNet usually employs much weaker data augmentation. These results also imply space to improve for self-supervised learning using Transformer architectures. <TABLE>
[5]
[ [ 523, 526 ] ]
https://openalex.org/W3138516171
72d44858-7891-4c2e-8a19-56b5ecc3c3fa
The UPerNet approach [1]} and the ADE20K dataset are adopted in the evaluation, following [2]} https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation. The fine-tuning and testing settings also follow [2]} except that the learning rate of each experiment is tuned using \(\lbrace 3\times 10^{-5}, 6\times 10^{-5}, 1\times 10^{-4}\rbrace \) . Table REF shows the comparisons of supervised and self-supervised pre-trained models on this evaluation. It indicates that MoBY performs slightly worse than the supervised method, implying a space to improve for self-supervised learning using Transformer architectures. <TABLE>
[1]
[ [ 21, 24 ] ]
https://openalex.org/W2884822772
9eb5ed10-2e3a-4aa4-8bfe-8072bc5f4d85
The UPerNet approach [1]} and the ADE20K dataset are adopted in the evaluation, following [2]} https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation. The fine-tuning and testing settings also follow [2]} except that the learning rate of each experiment is tuned using \(\lbrace 3\times 10^{-5}, 6\times 10^{-5}, 1\times 10^{-4}\rbrace \) . Table REF shows the comparisons of supervised and self-supervised pre-trained models on this evaluation. It indicates that MoBY performs slightly worse than the supervised method, implying a space to improve for self-supervised learning using Transformer architectures. <TABLE>
[2]
[ [ 90, 93 ], [ 219, 222 ] ]
https://openalex.org/W3138516171
264c87e9-9228-4c32-a7a6-2ccef71258e2
Drop path has proved a useful regularization for supervised representation learning using the image classification task and Transformer architectures [1]}, [2]}. We also ablate the effect of this regularization in Table REF . Increasing the drop path regularization from 0.05 to 0.1 to the online encoder is beneficial for representation learning, especially in longer training, probably due to the relief of over-fitting. Additionally adding drop path regularization to the target encoder results in 1.9% top-1 accuracy drop (70.9% to 69.0%), indicating a harm. We thus adopt an asymmetric drop path rates in pre-training.
[1]
[ [ 150, 153 ] ]
https://openalex.org/W3116489684
7da7fc94-d223-44c6-a714-d98fdbdd8527
Drop path has proved a useful regularization for supervised representation learning using the image classification task and Transformer architectures [1]}, [2]}. We also ablate the effect of this regularization in Table REF . Increasing the drop path regularization from 0.05 to 0.1 to the online encoder is beneficial for representation learning, especially in longer training, probably due to the relief of over-fitting. Additionally adding drop path regularization to the target encoder results in 1.9% top-1 accuracy drop (70.9% to 69.0%), indicating a harm. We thus adopt an asymmetric drop path rates in pre-training.
[2]
[ [ 156, 159 ] ]
https://openalex.org/W3138516171
cea2ce3a-344e-459b-8d89-4de5daa6d826
Since the InceptionTime model did better overall among our base DL models, we decided to optimize it. Due to a large choice of architectural parameters in LSTM, we also decided to optimize LSTM, LSTM-FCN, and MLSTM-FCN. Of course, other models could be optimized but we leave this for future work. We used the optuna framework [1]} with 100 number of trials in our study. Across all of the optimization studies, our search space for the “learning rate" is set between \(0.00001\) and \(0.01\) with logarithmic increments and the “epoch number" is exhausted from 25 to 100 with increments of 25. In our study, search space for InceptionTime parameters are as follows: “nf" takes \(4,8,16,32,40,48,56,64,128\) ; “depth" takes values from 1 to 15 with increments of 1 and “fc_dropout" takes values from 0 to \(0.9\) with increments of \(0.1\) The search space for LSTM parameters are as follows: “n_layers" takes values from 1 to 5 with increments of 1; “rnn_dropout" and “fc_dropout" take values from 0 to \(0.9\) with increments of \(0.1\) ; “bidirectional" takes \(\mbox{True}\) or \(\mbox{False}\) . The search space for LSTM-FCN and MLSTM-FCN parameters are as follows: “rnn_layers" takes values from 1 to 5 with increments of 1; “rnn_dropout" and “fc_dropout" takes values from 0 to \(0.9\) with increments of \(0.1\) ; “bidirectional" takes \(\mbox{True}\) or \(\mbox{False}\) . We first optimized the model parameters independently on the L-WI, U-WI, and C-WI datasets. We used the same parameter sets when training our models on the respective WD datasets. Therefore, for a small number of cases, our models trained on WD underperform in comparison with the base DL models, where the most significant drop is observed for MLSTM-FCN trained on the U-WD dataset. One could ideally optimize the parameters independently on WD as well, but we do not pursue this path mainly because of the high cost of optimizing parameters and the minor performance gains due to parameter optimization over our base models (e.g. compare the accuracy of InceptionTime in Table REF for DL Baseline vs. DL Optimized). More details, including the optimized models and their parameters can be found in our repository [2]} and the accuracy of the optimized results are presented in Table REF . Our optimization efforts resulted in similar accuracy in comparison with our base DL models.
[1]
[ [ 327, 330 ] ]
https://openalex.org/W2962897394
3b31f74c-9eae-4e93-85d2-a595322144cd
When exploring WI datasets, we found increased accuracies when compared with non-ensemble based techiniques. Training models on WI datasets is performed once. Subsequent inferrence will be made using the model trained. WI trained models tend to generalize better to larger amounts of users when compared with WD models [1]}. Since WD models require all users to be part of the training process, with the addition of new users, new models should be trained. It is important to examine the accuracy vs. training-time trade-off [2]}, [3]} of using ensemble learning models.
[1]
[ [ 319, 322 ] ]
https://openalex.org/W2023302299
cd8c6f27-93fa-4e21-aab6-2ab8c55c59c4
From optical communication to digital image processing, frequency domain transformations are a prominant tool used in optical signal analysis [1]}[2]}. In recent years, the research community has taken a growing interest in 3-dimensional (3D) surface imaging for different applications including range sensing, object/facial recognition, dynamic projection, 3D map building, and localization for autonomous vehicles. To make these applications possible, the analysis of structured light (SL) and coded structured light (CSL) for surface profilometry and 3D surface imaging have been proposed. As demonstrated by Salvi et al.'s [3]} summary of works from 1982 to 2009 and Geng's [4]} review of recent advances in surface imaging technology, this can be accomplished through a multitude of different approaches with a wide range of operating characteristics. For the purpose of this work, methods are categorized as either discrete or continuous active-correspondence, with sub-categorization as spatial, temporal, or spatio-temporal multiplexing.
[1]
[ [ 142, 145 ] ]
https://openalex.org/W2320852351
22a1db94-2a56-4837-b127-0a78e5336ac0
From optical communication to digital image processing, frequency domain transformations are a prominant tool used in optical signal analysis [1]}[2]}. In recent years, the research community has taken a growing interest in 3-dimensional (3D) surface imaging for different applications including range sensing, object/facial recognition, dynamic projection, 3D map building, and localization for autonomous vehicles. To make these applications possible, the analysis of structured light (SL) and coded structured light (CSL) for surface profilometry and 3D surface imaging have been proposed. As demonstrated by Salvi et al.'s [3]} summary of works from 1982 to 2009 and Geng's [4]} review of recent advances in surface imaging technology, this can be accomplished through a multitude of different approaches with a wide range of operating characteristics. For the purpose of this work, methods are categorized as either discrete or continuous active-correspondence, with sub-categorization as spatial, temporal, or spatio-temporal multiplexing.
[4]
[ [ 678, 681 ] ]
https://openalex.org/W2171646521
e02d1fd1-d7d3-4d09-9d18-90aa18b2b2d8
Discrete spatially multiplexed methodologies are prominent in consumer products through active-correspondence products such as the Microsoft Kinect v1 [1]} and the Intel RealSense [2]}. These products use infrared (IR) laser sources projected through diffraction gratings to project spatial patterns onto a scene which are visually imperceptible to a human observer and therefore unimpeding of an observer's viewpoint. Higher precision discrete methodologies have been proposed through spatio-temporal multiplexing, including recent work by Cole et al. [3]} which proposed modulating imperceptible grey code patterns by controlling a DLP projector's digital mirroring device (DMD). This approach extends on previously reported spatio-temporal multiplexing of grey-codes by Cotting and Fuchs [4]}. Consumer examples of multi-pattern/multi-shot approaches are limited, likely indicative of the increased hardware costs associated with pattern modulation and/or the requirement for stationary subjects [5]}. As described by Cole et al., a 2D-search requires \(\log _2(M)+\log _2(N)\) grey-code patterns to produce a M\(\times \) N depth-map, thus requiring the projection of 22 unique patterns on a stationary subject for a 1920\(\times \) 1080 depth-map.
[1]
[ [ 151, 154 ] ]
https://openalex.org/W2056898157
e41a8353-a91b-427f-bd6e-6f230676163c
Continuous patterns offer a higher tolerance for distortion and defocused optics which can be advantageous [1]}, [2]}. The single phase shifting (SPS) methods by Srinivasana et al. [3]} and Guan et al. [4]} employ sinusoidal gratings or fringe patterns to sense depth through phase shifts. Multiple phase shifting (MPS) methods extend these approaches through works such as Gushov et al. [5]}, although these methods do not extend beyond the analysis of phase components for sinusoidal signals. The Fourier Transform becomes predominant with single coded frequency multiplexing (SCFM) approaches with the proposition of Fourier-Transform fringe-pattern analysis for topography and interferometry [6]}, and the definition of Fourier Transform profilometry (FTP) in 1983 by Takeda et al. [7]}. These early works were followed by Su et al. [8]} who proposed the usage of a Ronchi grating to create a quasi-sine optical field which could be analyzed in the Fourier domain without overlapping the zero component and other higher spectra. Later works include a single-shot 3D shape measurement methodology using Frequency-multiplex Fourier-transform profilometry by Takeda et al. [9]}.
[6]
[ [ 696, 699 ] ]
https://openalex.org/W2084349857
a2e75099-d7a8-49f4-b909-c6f0d09f867e
Limited derivations of the relationship between spatial transformations and the FT spectrum appear in the first (1977) and second edition (1987) of Digital Image Processing by Gonzalez and Wintz [1]} [2]} and (1996) Digital Image Processing by Castleman [3]}. The relationship between spatial affine transformations and the FT spectrum has been previously defined by Bracewell et al. in 1993 [4]}. Bracewell included derivations of the theorem for Euclidean, similarity, and affine transforms in Fourier Analysis and Imaging [5]}. Derivations for Euclidean and similarity transformations appear in the second edition of Digital Image Processing by Gonzalez and Woods [6]}, but are subsequently omitted from the third and fourth editions of the same text [7]}, [8]}. In this work, the derivation extends Bracewell's approach and combines it with the observations and notational convention set forth by Brigham [9]} [10]}.
[1]
[ [ 195, 198 ] ]
https://openalex.org/W4247537352
c77c06f0-6d48-4b34-a397-2c74d766b2b2
Limited derivations of the relationship between spatial transformations and the FT spectrum appear in the first (1977) and second edition (1987) of Digital Image Processing by Gonzalez and Wintz [1]} [2]} and (1996) Digital Image Processing by Castleman [3]}. The relationship between spatial affine transformations and the FT spectrum has been previously defined by Bracewell et al. in 1993 [4]}. Bracewell included derivations of the theorem for Euclidean, similarity, and affine transforms in Fourier Analysis and Imaging [5]}. Derivations for Euclidean and similarity transformations appear in the second edition of Digital Image Processing by Gonzalez and Woods [6]}, but are subsequently omitted from the third and fourth editions of the same text [7]}, [8]}. In this work, the derivation extends Bracewell's approach and combines it with the observations and notational convention set forth by Brigham [9]} [10]}.
[4]
[ [ 392, 395 ] ]
https://openalex.org/W1972541957
cba688bb-f4e0-4bf4-a0b1-4e5b99685bc8
Limited derivations of the relationship between spatial transformations and the FT spectrum appear in the first (1977) and second edition (1987) of Digital Image Processing by Gonzalez and Wintz [1]} [2]} and (1996) Digital Image Processing by Castleman [3]}. The relationship between spatial affine transformations and the FT spectrum has been previously defined by Bracewell et al. in 1993 [4]}. Bracewell included derivations of the theorem for Euclidean, similarity, and affine transforms in Fourier Analysis and Imaging [5]}. Derivations for Euclidean and similarity transformations appear in the second edition of Digital Image Processing by Gonzalez and Woods [6]}, but are subsequently omitted from the third and fourth editions of the same text [7]}, [8]}. In this work, the derivation extends Bracewell's approach and combines it with the observations and notational convention set forth by Brigham [9]} [10]}.
[9]
[ [ 909, 912 ] ]
https://openalex.org/W3022893215
afa326cd-8a8c-4c5a-b9ce-0dfced2be352
As described by Hartley and Zisserman [1]}, spatial transformations follow a well established hierarchy of classes, i.e. isometry, similarity, affinity, and perspectivity transformations. Each subsequent class in the hierarchy relaxes invariant properties, inheriting and expanding the degrees of freedom (DoFs) of previous classes. For example, affine transformations preserve parallel lines and represent six DoFs in 2D space, whereas perspective transformations preserves only collinearity and represents eight DoFs in 2D space [1]}. These DoFs increase with dimensionality increasing affine transformations to twelve DoF in 3D space and perspective transformations to fifteen DoFs [1]}. Equation REF defines the transformation of a 2D point \(P\) represented in homogeneous coordinates, using perspective transformation \(A\) to produce point \(P^{\prime }\) . The use of homogeneous coordinates adds the third dimension \(z\) , which is subsequently removed from \(P^{\prime }\) by perspective divide to produce \(P^{\prime \prime }\) as shown in Equation REF . \(\begin{bmatrix}x^{\prime } \\ y^{\prime } \\ z^{\prime }\end{bmatrix}=P^{\prime } = AP=\begin{bmatrix}{\chi }_x && {\psi }_{yx} && {\tau }_{x}\\{\psi }_{xy} && {\chi }_y && {\tau }_{y}\\{\psi }_{xz} && {\psi }_{yz} && {\chi }_z\\\end{bmatrix}\begin{bmatrix}x \\ y \\ z=1\end{bmatrix}\) \(P^{\prime \prime } = P^{\prime } / z^{\prime }\)
[1]
[ [ 38, 41 ], [ 531, 534 ], [ 685, 688 ] ]
https://openalex.org/W2033819227
307c4413-4bcd-4d62-ad8c-36138c259965
Following the precedent and convention set forth by Bracewell et al. [1]} in deriving the Affine Theorem, the Perspective Theorem for SFTP is derived here by recognizing that if there exists a mapping between spatial image \(f(x,y,z)\) and frequency domain representation \(F(u,v,w)\) , then given Equation REF for a perspective transformation and Equation REF for a DFT, a mapping from \(f(x^{\prime },y^{\prime },z^{\prime })\) to \(F(u^{\prime },v^{\prime },w^{\prime })\) must exist.
[1]
[ [ 69, 72 ] ]
https://openalex.org/W1972541957
6e7d8514-0a5a-490d-85ab-f935083dee44
Bracewell et al.'s Affine Transformation Theorem created the foundation for the extension to perspective transformation pairs. For affine transformations, the novel formulation presented here reduces to Bracewell's equation, though it should be noted that equations reported are for DFT and more closely match the observations of Fast Fourier Transform and Fast Fourier Transform And Its Applications by Brigham [1]}[2]}. While this work has described these equations to model 2D images in 3D space, there is no theoretical limitation to extending the dimensionality of these equations. The selected scope was chosen to facilitate simulation and model real-world applications, although the mathematics of SFTP has been solved for spatial perspective transformations of 3D models in 3D space.
[1]
[ [ 412, 415 ] ]
https://openalex.org/W3022893215
5dfc0367-a090-4fef-98a0-0a936dc3947e
To run optimization algorithms in combination with supervised learning models it is necessary to limit the region in which they operate to the region, which is covered by the training data. One way to achieve this is by training unsupervised learning methods on the input data, as it is for example done in [1]} using support vector machines (SVM). From a machine learning perspective such an approach can be seen as anomaly detection. Anomaly detection aims to separate data that is characteristically different from the known data of the sample data set, which has been used for training. An extensive overview of anomaly detection methods is given in [2]}. Moreover, [3]} gives an overview on recent deep learning-based approaches for anomaly detection, from which we want to point out neural network-based autoencoders [4]}, which fit especially well into multi-task learning (MTL) schemes other than SVMs.
[3]
[ [ 670, 673 ] ]
https://openalex.org/W2910068345
4c58c57a-1378-4247-89ad-be43ec63b58b
To run optimization algorithms in combination with supervised learning models it is necessary to limit the region in which they operate to the region, which is covered by the training data. One way to achieve this is by training unsupervised learning methods on the input data, as it is for example done in [1]} using support vector machines (SVM). From a machine learning perspective such an approach can be seen as anomaly detection. Anomaly detection aims to separate data that is characteristically different from the known data of the sample data set, which has been used for training. An extensive overview of anomaly detection methods is given in [2]}. Moreover, [3]} gives an overview on recent deep learning-based approaches for anomaly detection, from which we want to point out neural network-based autoencoders [4]}, which fit especially well into multi-task learning (MTL) schemes other than SVMs.
[4]
[ [ 823, 826 ] ]
https://openalex.org/W2100495367
fa326e71-6507-426f-beb5-fe4f4ac69532
Autoencoder approaches assume that features of a data set can be mapped into a lower dimensional latent feature space, in which the known data points differ substantially from unknown data points. By backmapping into the original space, anomalies can be identified by evaluating the reconstruction error, see for example [1]}. In [1]} it is also shown that autoencoder networks are able to detect subtle anomalies, which cannot be detected by linear methods like PCA. Furthermore, autoencoder networks require less complex computations compared to a nonlinear kernel-based PCA.
[1]
[ [ 321, 324 ], [ 330, 333 ] ]
https://openalex.org/W2127979711
5f895e73-e367-4f77-8417-b28c06803412
In the present paper, we introduce a generic MTL-based optimization approach to efficiently identify sets of microstructures, which are highly divers and producible by a process. The approach is based on an optimization algorithm interacting with a machine learning model that combines MTL [1]} with siamese neural networks [2]}. In contrast to [3]}, [4]} and also to [5]}, in our approach a surrogate model is set up in order to replace the numerical simulation, which maps microstructures to properties. The microstructure-properties mapping can be executed efficiently by means of the surrogate model within the optimization procedure.
[2]
[ [ 324, 327 ] ]
https://openalex.org/W2127589108
8334807e-e42b-411e-b575-2da89fef7bda
The two components m-p-m and v-p can be realized by training two separate machine learning models. However, when the training procedures are isolated from each other, the models are not able to mutually access information already learned by the other model. Therefore, we combine the two components as tasks into one MTL model [1]}. Both tasks have a common backbone (the feature extraction part of a network) and different heads (feature processing part of a network) operating on the backbone output. The backbone output vectors form the so-called latent feature space. The proposed MTL approach furthermore uses the backbone as an encoder network of an autoencoder, where the decoder is also attached to the latent feature space with the purpose to reconstruct the input pattern of the backbone. This is achieved by adding the reconstruction of the microstructures from the latent feature space as a third task. In the MTL approach, all three tasks are represented by a single neural network-based model. The weights of the model are trained simultaneously based on a combined loss function. After training the MTL model, the optimizer can operate very efficiently in the lower dimensional latent feature space. The remainder of this section presents the optimization approach and the MTL approach in detail, as well as an extension based on siamese neural networks [2]} to enforce the representation of microstructures in the latent feature space to preserve the microstructure distances in the original representation space.
[2]
[ [ 1369, 1372 ] ]
https://openalex.org/W2127589108
7c0fa773-92fb-418d-acfb-04fc04882a9a
in which the encoder network is parameterized by its weight values \(\theta _{\mathrm {enc}}\) . All three previously described tasks are attached to the encoder in the form of feedforward neural networks. Besides, the encoder can be easily adapted to higher dimensional microstructure representing data types like images (EBSD or micrograph images) or three dimensional microstructure data by using for example convolutional neural networks (see [1]}), which are used for example in [2]} in the materials sciences domain. <FIGURE>
[1]
[ [ 448, 451 ] ]
https://openalex.org/W2163605009
11a33f14-ce3d-4234-bf4f-453606300707
where \(R(\theta )\) is a regularization term that is used to prevent overfitting with the hyperparameter \(\lambda \) defining the strength of the regularization (also known as weight decay, see [1]} and [2]}). Each of the feedforward neural networks is parameterized by the respective weight values \(\theta _{\mathrm {enc}}\) , \(\theta _{\mathrm {regr}}\) , \(\theta _{\mathrm {recon}}\) and \(\theta _{\mathrm {valid}}\) , which are adjusted simultaneously during training and altogether form the weight vector \(\theta \) . In the following we will introduce the three individual loss terms.
[1]
[ [ 198, 201 ] ]
https://openalex.org/W2144513243
83478653-48d7-46ec-ba25-ca9195183797
The above described MTL approach is used in combination with an optimizer that searches for candidate microstructures with desired properties in the latent feature space. However, our approach aims to identify a diverse set of microstructures, which is why a distance measure in the latent feature space is needed. The MTL approach as defined above, is not able to preserve the distance information from the input vectors in the latent feature space. In order to construct a distance preserving latent feature space, the MTL approach is trained in a siamese neural network [1]}, [2]} fashion, which we will describe in the following.
[1]
[ [ 573, 576 ] ]
https://openalex.org/W2127589108
43f7ff62-9364-437f-9df7-ae11a2bcd37e
while \(\text{dist}(x_L, x_R)\) and \( \text{dist}(z_L, z_R)\) are not necessarily the same distance measures. Applying such loss terms leads to multi dimensional scaling, see [1]} and [2]}. Using the distance preservation loss \({L}_{\mathrm {pres}}\) , the MTL loss function, defined in Eq. REF , extends to \(\begin{aligned}{L}_{\mathrm {SMTL}} &= {W}_{\mathrm {regr}} {L}_{\mathrm {regr}} + {W}_{\mathrm {recon}} {L}_{\mathrm {recon}} \\&+ {W}_{\mathrm {valid}} {L}_{\mathrm {valid}} + {W}_{\mathrm {pres}} {L}_{\mathrm {pres}} \\&+ \lambda R(\theta ).\end{aligned}\) <FIGURE>
[1]
[ [ 178, 181 ] ]
https://openalex.org/W2152825437
88f6cc85-6a91-44d2-88eb-b4f90d84f256
where \(q_\mathrm {g}\) and \(q_\mathrm {o}\) are the quaternion representations of the orientations \(g\) and \(o\) [1]}.
[1]
[ [ 121, 124 ] ]
https://openalex.org/W2016959837
c7b8bafd-afff-4dc8-9d90-8c71b376ce94
The set of nearly uniform distributed orientations \(O\) , needed for the histogram-based texture descriptor, can be generated using the algorithm described in [1]}, which is implemented in the software neper [2]}. For the purpose of this study, we sample 512 nearly uniform distributed orientations over the cubic fundamental zone and chose a soft assignment of \(l=3\) .
[2]
[ [ 209, 212 ] ]
https://openalex.org/W2050236324
12e4dc69-ce1b-4261-880c-ce49b8d90fc3
and the flow rule [1]} \(L_\mathrm {p}=\sum _\eta \dot{\gamma }^{(\eta )} m^{(\eta )} \otimes n^{(\eta )},\)
[1]
[ [ 18, 21 ] ]
https://openalex.org/W2057022818
e3a11261-faa9-4fa4-8c9f-ded75c8a0c68
For training, 50000 sets of 2000 discrete orientations are sampled via Latin Hypercube Design [1]}, based on Eq. REF . In order to have an independent test set, further 10000 sets are generated randomly. The ranges inside which the parameters of the texture model vary are defined such that typical bcc rolling textures found in literature can be represented, cf. [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}. The parameter ranges are listed in Tab. REF . In addition, to evaluate the anomaly detection, a set of artificial textures is needed, which slightly differ from the generated rolling textures. For this purpose, 10000 anomalies are generated by shifting the \(\alpha \) -fiber (i.e. the ideal position of \(a_1\) , \(a_2\) , \(a_4\) and \(a_5\) ) about 20 degrees in \(\varphi _1\) -direction. <TABLE>
[1]
[ [ 94, 97 ] ]
https://openalex.org/W2038669746
50b4b58f-f921-4eb5-8b9b-f1d8fa0edb3d
In this study, the individual tasks of the SMTL model are realized via feedforward neural networks with tanh activation functions to obtain features between \(-1\) and \(+1\) in the latent feature space. The SMTL model is implemented based on the Python TensorFlow API [1]}. The base network of the siamese architecture is illustrated in Fig. REF . The Glorot Normal method [2]} is used for weight initialization. In order to adjust the hyperparameters, a random search method [3]} is applied using 5-fold cross-validation. The best model configuration that was found is shown in Tab. REF . As distance measure in the input space, we use the Chi-Squared distance introduced in Eq. REF . As distance measure in the latent feature space, the sum of squared errors (SSE) between two vectors \(z_{\mathrm {1}}\) and \(z_{\mathrm {2}}\) is used: \(\mathrm {SSE} (z_{\mathrm {1}}, z_{\mathrm {2}}) = \sum _{i=1}^M (z_{\mathrm {1},i} - z_{\mathrm {2},i})^2 .\)
[2]
[ [ 376, 379 ] ]
https://openalex.org/W1533861849
080e37d0-2e6b-474b-aee2-5bf4f0d0622c
In this study, the individual tasks of the SMTL model are realized via feedforward neural networks with tanh activation functions to obtain features between \(-1\) and \(+1\) in the latent feature space. The SMTL model is implemented based on the Python TensorFlow API [1]}. The base network of the siamese architecture is illustrated in Fig. REF . The Glorot Normal method [2]} is used for weight initialization. In order to adjust the hyperparameters, a random search method [3]} is applied using 5-fold cross-validation. The best model configuration that was found is shown in Tab. REF . As distance measure in the input space, we use the Chi-Squared distance introduced in Eq. REF . As distance measure in the latent feature space, the sum of squared errors (SSE) between two vectors \(z_{\mathrm {1}}\) and \(z_{\mathrm {2}}\) is used: \(\mathrm {SSE} (z_{\mathrm {1}}, z_{\mathrm {2}}) = \sum _{i=1}^M (z_{\mathrm {1},i} - z_{\mathrm {2},i})^2 .\)
[3]
[ [ 479, 482 ] ]
https://openalex.org/W2097998348
2f64d657-0fd5-4a5c-95a2-8f741dc9ee9c
The SMTL model is trained for 200 epochs, while the best intermediate result is kept, what can be seen as a form of early stopping [1]}. A priori to the model training, the loss terms are scaled to values between 0 and 1 in order to make them comparable. Based on that, we found the following weights for the loss terms appropriate: \({W}_\mathrm {regr} = 0.05\) , \({W}_\mathrm {recon} = 0.05\) , \({W}_\mathrm {valid} = 0.05\) and \({W}_\mathrm {pres} = 0.85\) . <TABLE><FIGURE>
[1]
[ [ 131, 134 ] ]
https://openalex.org/W1582774210
26f74aa3-9239-4bad-9c57-3b812a1ecf3f
To identify a diverse set of textures, we use the optimization algorithm JADE [1]}, which is an extension of the Differential Evolution algorithm [2]}. Before running JADE, an initial population has to be defined. Therefore, 100 textures are selected from the test set which are approximately uniformly distributed over the property space. For the objective function, defined in Eq. REF , we use the weights \({V}_\mathrm {prop}=0.90\) , \({V}_\mathrm {valid}=0.03\) and \({V}_\mathrm {divers}=0.07\) and scale \(C_\mathrm {props}\) and \(C_\mathrm {divers}\) to values between 0 and 1 based on the selected 100 initial a textures. The threshold \(\xi _\mathrm {valid}\) is set to \(0.01\) based on the maximum anomaly score in the data set, cf. Fig. REF . The optimization is performed for 300 iterations with a fixed population size of 100. During the optimization, all valid textures that fulfill the target properties are collected, according to the texture-property-mapping. Based on the results from the previous section, we use the trained SMTL-model with a 16-dimensional latent feature space. The resulting textures for each target region are presented in the following.
[1]
[ [ 78, 81 ] ]
https://openalex.org/W2155529731
9ea619b9-ba63-47a1-aaa0-a3d2077d80e6
Our focus in this paper is the class of well-dominated graphs that have a nontrivial factorization as a Cartesian, direct or strong product. These three graph products are referred to as the “fundamental products” in the book [1]} by Hammack, Imrich and Klavžar. Along with the lexicographic product they are the most studied graph products in the literature.
[1]
[ [ 226, 229 ] ]
https://openalex.org/W2490805901
a86aac0f-3710-4c14-9b49-bcb049e5c63b
All graphs in this paper are finite, undirected, simple and have order at least 2. For a positive integer \(n\) , we let \([n]=\lbrace 1,\ldots ,n\rbrace \) . This set will be the vertex set of the complete graph of order \(n\) . In general, we follow the terminology and notation of Hammack, Imrich, and Klavžar [1]}. The order of a graph \(G\) is the number of vertices in \(G\) and is denoted \(n(G)\) ; \(G\) is nontrivial if \(n(G) \ge 2\) . For a vertex \(v\) in a graph \(G\) , the open neighborhood \(N(v)\) and the closed neighborhood \(N[v]\) are defined by \(N(v)=\lbrace u \in V(G)\,:\, uv \in E(G)\rbrace \) and \(N[v]=N(v)\cup \lbrace v\rbrace \) . For \(A \subseteq V(G)\) we let \(N(A)=\cup _{v \in A}N(v)\) and \(N[A]=N(A) \cup A\) . Any vertex subset \(D\) such that \(N[D]=V(G)\) is a dominating set of \(G\) , and \(D\) is then a minimal dominating set if no proper subset of \(D\) is a dominating set. The domination number of \(G\) is denoted by \(\gamma (G)\) and is the minimum cardinality among the dominating sets of \(G\) . The upper domination number, denoted \(\Gamma (G)\) , is the largest cardinality of a minimal dominating set of \(G\) . A set \(M \subseteq V(G)\) is an independent set if its vertices are pairwise non-adjacent. An independent set is maximal if it is not a proper subset of an independent set. The cardinalities of a smallest and a largest maximal independent set in \(G\) are denoted by \(i(G)\) and \(\alpha (G)\) , respectively. Note that a maximal independent set is a dominating set, which gives \( \gamma (G) \le i(G) \le \alpha (G) \le \Gamma (G)\,.\)
[1]
[ [ 313, 316 ] ]
https://openalex.org/W2490805901
97750ddd-7b80-4a59-be67-c76350fdde1b
All three of these graph products are associative and commutative. A product graph is called nontrivial if both of its factors are nontrivial. See [1]} for specific information on these and other graph products. The corona of a graph \(G\) , denoted by \(G \odot K_1\) , is the graph of order \(2 n(G)\) obtained by adding, for each vertex \(u\) of \(G\) a new vertex \(u^{\prime }\) together with a new edge \(uu^{\prime }\) .
[1]
[ [ 147, 150 ] ]
https://openalex.org/W2490805901
8f00ba94-cb8d-40e2-a824-1589c80e5ef5
Nowakowski and Rall [1]} established the following relationships between ordinary domination invariants on strong products.
[1]
[ [ 20, 23 ] ]
https://openalex.org/W1969794347
a0201abd-1ea4-43ae-8409-bad1121a1f76
Proposition 6 [1]} If \(G\) and \(H\) are finite graphs, then \(\gamma (G \, \boxtimes \,H) \le \gamma (G)\gamma (H) \text{ and } \Gamma (G \, \boxtimes \,H) \ge \Gamma (G)\Gamma (H)\,.\)
[1]
[ [ 14, 17 ] ]
https://openalex.org/W1969794347
a4e1f4cb-7ea3-4237-ae55-b1382372572b
Proposition 14 [1]} If a graph \(G\) has no isolated vertices, then \(G\) has a minimum dominating set that is open irredundant.
[1]
[ [ 15, 18 ] ]
https://openalex.org/W1997787124
b2a5c637-f132-46e3-814c-98d090e79f8f
Our motivation for this work comes from our longtime association with Non-Small Cell Lung Cancer (NSCLC) research. Lung cancer is the leading cause of cancer mortality in the United States and NSCLC accounts for about \(85\%\) of the lung cancer cases. National Comprehensive Cancer Network and Medicare and Medicaid services are supporting widespread implementation of lung cancer screening programs for identification of early stage lung cancers. Unfortunately, approximately 1 in 5 patients with pathologic stage IA NSCLC die of disease recurrence within 5 years of tumor resection. A recent study focused on identifying serum biomarkers for predicting recurrence after lung resection in node-negative NSCLC patients with tumor stage T2a or less (tumors less than 4 cm). Preoperative serum specimens of the patients were evaluated in a blinded manner for biomarkers of angiogenesis, energy metabolism, apoptosis, and inflammation; biological processes known to be associated with metastatic progression. From a statistical perspective, the popular approach for assessing association with the binary outcome of recurrence within 5 years is a binary regression analysis within the parametric framework of a logistic regression model. However, none of the biomarkers are found to be marginally significantly associated with recurrence in logistic regression framework (see Table REF ). Penalized variable selection approaches such as Lasso [1]}, Elastic Net [2]} and Surely Independent Screening [3]} all yield the null model as the selected model.
[1]
[ [ 1441, 1444 ] ]
https://openalex.org/W2135046866
1dea6f22-9006-42ee-89e7-754dd618af91
Our motivation for this work comes from our longtime association with Non-Small Cell Lung Cancer (NSCLC) research. Lung cancer is the leading cause of cancer mortality in the United States and NSCLC accounts for about \(85\%\) of the lung cancer cases. National Comprehensive Cancer Network and Medicare and Medicaid services are supporting widespread implementation of lung cancer screening programs for identification of early stage lung cancers. Unfortunately, approximately 1 in 5 patients with pathologic stage IA NSCLC die of disease recurrence within 5 years of tumor resection. A recent study focused on identifying serum biomarkers for predicting recurrence after lung resection in node-negative NSCLC patients with tumor stage T2a or less (tumors less than 4 cm). Preoperative serum specimens of the patients were evaluated in a blinded manner for biomarkers of angiogenesis, energy metabolism, apoptosis, and inflammation; biological processes known to be associated with metastatic progression. From a statistical perspective, the popular approach for assessing association with the binary outcome of recurrence within 5 years is a binary regression analysis within the parametric framework of a logistic regression model. However, none of the biomarkers are found to be marginally significantly associated with recurrence in logistic regression framework (see Table REF ). Penalized variable selection approaches such as Lasso [1]}, Elastic Net [2]} and Surely Independent Screening [3]} all yield the null model as the selected model.
[2]
[ [ 1459, 1462 ] ]
https://openalex.org/W2122825543
8a9b5d8d-8f87-4436-9181-64a449333892
Our motivation for this work comes from our longtime association with Non-Small Cell Lung Cancer (NSCLC) research. Lung cancer is the leading cause of cancer mortality in the United States and NSCLC accounts for about \(85\%\) of the lung cancer cases. National Comprehensive Cancer Network and Medicare and Medicaid services are supporting widespread implementation of lung cancer screening programs for identification of early stage lung cancers. Unfortunately, approximately 1 in 5 patients with pathologic stage IA NSCLC die of disease recurrence within 5 years of tumor resection. A recent study focused on identifying serum biomarkers for predicting recurrence after lung resection in node-negative NSCLC patients with tumor stage T2a or less (tumors less than 4 cm). Preoperative serum specimens of the patients were evaluated in a blinded manner for biomarkers of angiogenesis, energy metabolism, apoptosis, and inflammation; biological processes known to be associated with metastatic progression. From a statistical perspective, the popular approach for assessing association with the binary outcome of recurrence within 5 years is a binary regression analysis within the parametric framework of a logistic regression model. However, none of the biomarkers are found to be marginally significantly associated with recurrence in logistic regression framework (see Table REF ). Penalized variable selection approaches such as Lasso [1]}, Elastic Net [2]} and Surely Independent Screening [3]} all yield the null model as the selected model.
[3]
[ [ 1497, 1500 ] ]
https://openalex.org/W2016119924
13005971-1d29-4072-9d14-03a35f068164
Our primary objective in this article is to develop a general framework for providing a decision about the association between outcome and feature without necessarily modeling their functional relation. Towards this goal, we propose an omnibus test for association that can be used as a general black box tool. This omnibus test is based on thresholding and is computationally efficient. Thresholding is a popular approach to provide flexibility from or within parametric models. Recursive partitioning models (Trees [1]}, Random Forests [2]}) extend thresholding to model-free approaches. Even within parametric models, thresholding provides robustness from outlying values and assumed functional forms.
[1]
[ [ 517, 520 ] ]
https://openalex.org/W3085162807
919fbf69-f593-4711-8906-8dc9a3e77e8a
The rest of the article is organized as follows. In section we develop the general framework for testing the association hypothesis. The maximal permutation test is proposed and described in section . Section develops and illustrates the utility of the test as a black box test for association in the setting of binary outcome and compares with standard and other existing methods. We illustrate the model-free performance of the test in section in a wide range of settings ranging from quantile regression to heavy-tailed and outlier-prone cases. We additionally illustrate performance of the proposed approach in feature screening and compare with screening based on distance correlation [1]}. In section we illustrate the performance of our proposed method in establishing association of NSCLC recurrence with preoperative serum biomarkers and conclude with a brief discussion in section .
[1]
[ [ 693, 696 ] ]
https://openalex.org/W2164092415
84381cd1-05fe-4556-a145-49ecb6e1ed1e
Permutation tests were introduced by [1]}. The theoretical properties of these tests were studied in [2]}, [3]}, [4]}, among others. Permutation tests are generally considered when the null hypothesis \(H_0\) under consideration is a subset of an exchangeable specification for the outcomes \(Y_1,\ldots , Y_n\) , \(H_0 \subseteq \lbrace Y_1,\ldots ,Y_n \hbox{ are i.i.d.}\rbrace \)
[1]
[ [ 37, 40 ] ]
https://openalex.org/W2050035105
1071d20b-78be-4160-94ad-bc4ed894a945
Permutation tests were introduced by [1]}. The theoretical properties of these tests were studied in [2]}, [3]}, [4]}, among others. Permutation tests are generally considered when the null hypothesis \(H_0\) under consideration is a subset of an exchangeable specification for the outcomes \(Y_1,\ldots , Y_n\) , \(H_0 \subseteq \lbrace Y_1,\ldots ,Y_n \hbox{ are i.i.d.}\rbrace \)
[2]
[ [ 101, 104 ] ]
https://openalex.org/W2802622649
ee5f2624-f8cf-4ecb-9714-99bd81698fc7
Permutation tests were introduced by [1]}. The theoretical properties of these tests were studied in [2]}, [3]}, [4]}, among others. Permutation tests are generally considered when the null hypothesis \(H_0\) under consideration is a subset of an exchangeable specification for the outcomes \(Y_1,\ldots , Y_n\) , \(H_0 \subseteq \lbrace Y_1,\ldots ,Y_n \hbox{ are i.i.d.}\rbrace \)
[4]
[ [ 113, 116 ] ]
https://openalex.org/W1981606737
4f66a1b2-fdf5-494e-a705-53e46fa3278e
Under the i.i.d. hypothesis, the permutation test is of exact level for each sample size \(n\) [1]}. The book by [2]} provides a practical review of permutation tests. There is an extensive literature on permutations and other resampling tests (see [1]}) and studentized permutation tests when \(H_0\) is strictly bigger than the i.i.d structure [4]}, [5]}. The i.i.d. structure may be violated when the null hypothesis implies equality of a functional (such as mean or quantile) of the distributions, but not the distributions themselves. This, for example, may arise in the presence of nuisance parameters, such as, with heteroscedastic variances [5]}, [4]}.
[4]
[ [ 348, 351 ], [ 657, 660 ] ]
https://openalex.org/W2059679882
e11b4b93-424d-41c8-a948-0c2e487a6115
The power of the permutation test has been extensively investigated. [1]} established general conditions under which permutation tests are asymptotically as powerful as corresponding standard parametric tests. See also [2]}, [3]} and Lehman (1986, p230).
[1]
[ [ 69, 72 ] ]
https://openalex.org/W1981606737
2d73c3eb-1c1c-4cc3-98db-7ed5035d7f35
We now discuss the choice of the two-group test statistic \(T_n^c(Y)\) for comparing \(\lbrace \mbox{$Y$}|X\le c\rbrace \) and \(\lbrace \mbox{$Y$}|X>c\rbrace \) . When \(Y\) is treated as measured in continuous scale, and the regression model between \(Y\) and \(X\) is modeled via \(E(Y|X=x)\) , we have \(H_{0c}: E[Y|X\le c]= E[Y|X>c]\) and one choice for \(T_n^c(Y)\) is the two-sample t-statistic based on \(\lbrace Y_i: x_i \le c\rbrace \) and \(\lbrace Y_i: x_i > c\rbrace \) . The choice of Welch type studentization may provide additional robustness to the permutation test procedure (see [1]}, [2]}) which is further discussed in section 6.1. Alternatively, and for testing \(H^{NP}_{0c}: F_{Y|X\le c} = F_{Y|X>c}\) , one can use the Mann-Whitney/Wilcoxon test statistic. For binary \(Y\) , \(H_{0c}: P(Y=1|X\le c) = P(Y=1|X>c)\) and \(T_n^c(Y)\) can be chosen to be the chi-square or Fisher's exact statistics. For time-to-event \(Y\) , \(T_n^c(Y)\) can be selected to be the log-rank or similar test statistics.
[1]
[ [ 606, 609 ] ]
https://openalex.org/W2059679882
cf0c0283-1106-4f79-9f80-01bcae7f1c15
There is a substantial literature on the maximal \(\chi ^2\) test statistic based on thresholding of the feature space. [1]} considered the maximal \(\chi ^2\) statistic over the central \(\left(1-2\varepsilon \right)\) proportion of \(\lbrace x_i\rbrace \) as \(\chi _{max}^{2}=\max \limits _{\left[\varepsilon n\right]+1\le c\le n-\left[\varepsilon n\right]-1} T_n^c(\mbox{$y$})\) where \(0\le \varepsilon <\frac{1}{2}\) and \(\left[K\right]\) denotes `the largest integer \(\le K\) '. This method of maximal test statistics has been referred to as the equivalent minimum \(p\) -value approach by [2]} to highlight the associated multiple testing [3]}. We have observed that when \(\alpha \) is controlled at \(0.05\) for testing each individual \(H_{0c}\) by the usual \(\chi ^2\) test, the family wise error rate (FWER, [4]}) for \(\bigcap H_{0c}\) can inflate to as high as \(0.30\) . There is an extensive literature and extensive list of general approaches for controlling the FWER (see, for example, [4]}, [6]}), however these general purpose methods may not incorporate the specific structure of repeated thesholding. [7]} considered a similar problem, however, in their framework, \(Y=0\) and \(Y=1\) groups are treated as fixed whereas \(X|Y=j \sim F_j, ~ j=0,1\) and the null hypothesis of interest is \(F_0=F_1\) . In particular, they addressed the question of two sample comparison by the maximally selected \(\chi ^2\) statistic rather than the question of association between the outcome \(Y\) and predictor \(X\) that we are interested in. A detailed review of these approaches and many other methods are discussed in [3]}.
[2]
[ [ 606, 609 ] ]
https://openalex.org/W2028896946
4c8b7502-c092-4be5-982e-be824cf91114
There is a substantial literature on the maximal \(\chi ^2\) test statistic based on thresholding of the feature space. [1]} considered the maximal \(\chi ^2\) statistic over the central \(\left(1-2\varepsilon \right)\) proportion of \(\lbrace x_i\rbrace \) as \(\chi _{max}^{2}=\max \limits _{\left[\varepsilon n\right]+1\le c\le n-\left[\varepsilon n\right]-1} T_n^c(\mbox{$y$})\) where \(0\le \varepsilon <\frac{1}{2}\) and \(\left[K\right]\) denotes `the largest integer \(\le K\) '. This method of maximal test statistics has been referred to as the equivalent minimum \(p\) -value approach by [2]} to highlight the associated multiple testing [3]}. We have observed that when \(\alpha \) is controlled at \(0.05\) for testing each individual \(H_{0c}\) by the usual \(\chi ^2\) test, the family wise error rate (FWER, [4]}) for \(\bigcap H_{0c}\) can inflate to as high as \(0.30\) . There is an extensive literature and extensive list of general approaches for controlling the FWER (see, for example, [4]}, [6]}), however these general purpose methods may not incorporate the specific structure of repeated thesholding. [7]} considered a similar problem, however, in their framework, \(Y=0\) and \(Y=1\) groups are treated as fixed whereas \(X|Y=j \sim F_j, ~ j=0,1\) and the null hypothesis of interest is \(F_0=F_1\) . In particular, they addressed the question of two sample comparison by the maximally selected \(\chi ^2\) statistic rather than the question of association between the outcome \(Y\) and predictor \(X\) that we are interested in. A detailed review of these approaches and many other methods are discussed in [3]}.
[4]
[ [ 835, 838 ], [ 1020, 1023 ] ]
https://openalex.org/W1997917263
7020d7f5-b8a8-4b48-8338-9f6646f76a3a
There is a substantial literature on the maximal \(\chi ^2\) test statistic based on thresholding of the feature space. [1]} considered the maximal \(\chi ^2\) statistic over the central \(\left(1-2\varepsilon \right)\) proportion of \(\lbrace x_i\rbrace \) as \(\chi _{max}^{2}=\max \limits _{\left[\varepsilon n\right]+1\le c\le n-\left[\varepsilon n\right]-1} T_n^c(\mbox{$y$})\) where \(0\le \varepsilon <\frac{1}{2}\) and \(\left[K\right]\) denotes `the largest integer \(\le K\) '. This method of maximal test statistics has been referred to as the equivalent minimum \(p\) -value approach by [2]} to highlight the associated multiple testing [3]}. We have observed that when \(\alpha \) is controlled at \(0.05\) for testing each individual \(H_{0c}\) by the usual \(\chi ^2\) test, the family wise error rate (FWER, [4]}) for \(\bigcap H_{0c}\) can inflate to as high as \(0.30\) . There is an extensive literature and extensive list of general approaches for controlling the FWER (see, for example, [4]}, [6]}), however these general purpose methods may not incorporate the specific structure of repeated thesholding. [7]} considered a similar problem, however, in their framework, \(Y=0\) and \(Y=1\) groups are treated as fixed whereas \(X|Y=j \sim F_j, ~ j=0,1\) and the null hypothesis of interest is \(F_0=F_1\) . In particular, they addressed the question of two sample comparison by the maximally selected \(\chi ^2\) statistic rather than the question of association between the outcome \(Y\) and predictor \(X\) that we are interested in. A detailed review of these approaches and many other methods are discussed in [3]}.
[7]
[ [ 1139, 1142 ] ]
https://openalex.org/W1978018057
d62c4050-b17d-4e03-9ca8-9fd1c7146773
We compare the performance of our maximal permutation test with [1]}, [2]} and modified Bonferroni [3]} in simulation studies. For the first simulation study, we consider a data generating model where the association between binary outcome \(Y\) and predictor \(x\) is described by logistic regression \(logit(P(Y_i=1))=\beta _{0}+\beta _{1}x_{i},\)
[1]
[ [ 64, 67 ] ]
https://openalex.org/W1978018057
f0e492e2-6035-4612-8aa8-b2c7b1881d25
We compare the performance of our maximal permutation test with [1]}, [2]} and modified Bonferroni [3]} in simulation studies. For the first simulation study, we consider a data generating model where the association between binary outcome \(Y\) and predictor \(x\) is described by logistic regression \(logit(P(Y_i=1))=\beta _{0}+\beta _{1}x_{i},\)
[2]
[ [ 70, 73 ] ]
https://openalex.org/W2028896946
92f98f1f-6163-4f2e-b6af-9f84d3f00163
For the analysis methods for evaluating the association between \(X\) and \(Y\) , we consider \((1)\) logistic regression, \((2)\) maximal \(\chi ^2\) statistic based on thresholding which is then compared with \(\chi ^2\) distribution without any adjustments (maximal), \((3)\) [1]}, \((4)\) [2]}, \((5)\) modified Bonferroni [3]} and \((6)\) the proposed maximal permutation approach. We examine the level (type 1 error) and power of these approaches by repeated data simulations from the data generating model in (REF ). <TABLE>
[1]
[ [ 285, 288 ] ]
https://openalex.org/W1978018057
10ae7b8d-9df1-41e8-914c-69d6889beba7
For the analysis methods for evaluating the association between \(X\) and \(Y\) , we consider \((1)\) logistic regression, \((2)\) maximal \(\chi ^2\) statistic based on thresholding which is then compared with \(\chi ^2\) distribution without any adjustments (maximal), \((3)\) [1]}, \((4)\) [2]}, \((5)\) modified Bonferroni [3]} and \((6)\) the proposed maximal permutation approach. We examine the level (type 1 error) and power of these approaches by repeated data simulations from the data generating model in (REF ). <TABLE>
[2]
[ [ 300, 303 ] ]
https://openalex.org/W2028896946
7351b5ce-e1bd-45d2-af47-359d0745e10a
Table REF shows that the maximal-unadjusted approach severely inflate the Type-1 error to 0.32 from the target 0.05 level. The previously proposed approaches of [1]}, [2]} and [3]}, on the other hand, are overly conservative in maintaining their levels. Figure REF plots the empirically estimated power curves of these six methods. The maximal-unadjusted approach depicts highest power but, as noted before, has substantially inflated type-I error. Among the remaining five methods which maintain their levels, the logistic regression analysis model utilizes the correct specified model here and shows highest power. The proposed maximal permutation approach displays the next best power curve. <FIGURE>
[1]
[ [ 162, 165 ] ]
https://openalex.org/W1978018057
ceb19b39-fe61-4970-af41-14ff65dc826e
Table REF shows that the maximal-unadjusted approach severely inflate the Type-1 error to 0.32 from the target 0.05 level. The previously proposed approaches of [1]}, [2]} and [3]}, on the other hand, are overly conservative in maintaining their levels. Figure REF plots the empirically estimated power curves of these six methods. The maximal-unadjusted approach depicts highest power but, as noted before, has substantially inflated type-I error. Among the remaining five methods which maintain their levels, the logistic regression analysis model utilizes the correct specified model here and shows highest power. The proposed maximal permutation approach displays the next best power curve. <FIGURE>
[2]
[ [ 168, 171 ] ]
https://openalex.org/W2028896946
ae174f29-7b42-4022-88ad-57bc0e444401
There has been extensive research on two-sample permutation tests when the null hypothesis specifies equality of two population quantities (such as means) but may not result in the two population distributions being the same. One prominent example is testing for equality of means under unequal variances. [1]} and [2]} established that even though the exchangeable assumption does not hold in this setting, permutation test using studentized statistics, especially the Welch's t-statistic, given by \(T_{W}^{c}\left(\pi \left(Y\right)\right)=\left\lbrace \bar{Y}_{\pi ,1}-\bar{Y}_{\pi ,2}\right\rbrace /\sqrt{n_{\pi ,1}^{-1}s_{\pi ,1}^{2}+n_{\pi ,2}^{-1}s_{\pi ,2}^{2}}\) asymptotically maintains the level of the test.
[2]
[ [ 315, 318 ] ]
https://openalex.org/W2059679882
c059e6b0-f90b-49de-b27c-1df2d3835a75
Heteroscedasticity: If the variances \(Var(Y_i|x_i)\) in the linear regression model (REF ) are not equal, then even under the null hypothesis of \(H_0:\beta _1=0\) , \(Y_1,\ldots ,Y_n\) are no longer exchangeable and the basic setting under which permutation tests function do not hold. As noted before, the properties of two-sample permutation tests in this setting has been studied in [1]} and [2]}. We investigate a specific setting where \(Var(Y|x)\) increases with increase in the value of the predictor \(x\) , in particular \(Var(Y|x)= \sigma ^2\,(1+x), x> 0\) . As we notice in Table REF, the permutation based tests, in fact, maintain their levels even under violation of the exchangeable assumption. The estimated power curves in Figure REF shows that the permutation tests also maintain good power.
[2]
[ [ 399, 402 ] ]
https://openalex.org/W2059679882
693f1c37-b2fd-4ede-a00e-b1f7454ecaf8
We also investigated the robustness of our proposed test methodology in exploring the association between the outcome and predictors in the framework of quantile regression [1]}. The \(p^{th}\) quantile \(\left(0<p<1\right)\) of \(Y\) conditional on \(X\) , denoted by \(q_{p}\left(Y|X\right)\) , is regressed on a set of predictors \(X\) \(q_{p}\left(Y|X\right)=X\beta _{p},\)
[1]
[ [ 173, 176 ] ]
https://openalex.org/W2084871407
c6a1d8de-c024-4e37-b4c9-0e0509920a27
where \(\rho _{p}\left(u\right)=u\left(p-I\left(u<p\right)\right)\) . The loss function in (REF ) is considered to be robust compared to the quadratic loss function in linear regression. The quantile regression problem can equivalently be formulated in terms of the asymmetric Laplace distribution [1]} and in our data generation model, we generate \(Y_i \sim \) Asymmetric Laplace \(\left(\mu _{i},\sigma \right)\) with \(g\left(\mu _{i}\right)=x_{i}^{\prime }\beta _{p}\) . We generate \(X\) from \(Uniform\left(0,4\right)\) , consider values of \(\beta \) in \(\left[0,2.5\right]\) and further considered \(p=0.25,0.5\mbox{ and }0.75\) quantiles. In Figure REF -REF , we observe that for \(p=0.25\) the permutation adjusted rank based Mann-Whitney test outperforms the rest of the methods giving maximum power while simultaneously maintaining the estimated type-I error at 0.05. However, the power of the permutation adjusted two sample T test and Welch test are observed to be less than that of the LM and Sandwich test. Similar results are obtained for \(p=0.75\) too. Also, the ordering of the performance of the methods remains unchanged for \(p=0.5\) ; permutation based Mann-Whitney still outperforms other methods. <FIGURE>
[1]
[ [ 298, 301 ] ]
https://openalex.org/W2084871407
99716b0c-0f5c-463e-8b96-cf2aacddb5b8
In recent years, there has been increased interest in feature screening or filtering while modeling association with a number of predictor variables. Screening is often used to reduce the dimensionality of the feature space so that it is amenable for the next step of the analysis. Screening approaches in the setting of linear models include the sure independent screening (SIS) method proposed in [1]} and the forward regression in [2]}. [3]} proposed screening in generalized linear models whereas [4]} considered feature screening in nonlinear additive models.
[1]
[ [ 399, 402 ] ]
https://openalex.org/W2154560360
91c6efde-c0e2-4aaf-a559-b95285ba7403
In recent years, there has been increased interest in feature screening or filtering while modeling association with a number of predictor variables. Screening is often used to reduce the dimensionality of the feature space so that it is amenable for the next step of the analysis. Screening approaches in the setting of linear models include the sure independent screening (SIS) method proposed in [1]} and the forward regression in [2]}. [3]} proposed screening in generalized linear models whereas [4]} considered feature screening in nonlinear additive models.
[2]
[ [ 434, 437 ] ]
https://openalex.org/W3122008423
c9bd5344-6009-4eba-9912-b997576e18c0
In recent years, there has been increased interest in feature screening or filtering while modeling association with a number of predictor variables. Screening is often used to reduce the dimensionality of the feature space so that it is amenable for the next step of the analysis. Screening approaches in the setting of linear models include the sure independent screening (SIS) method proposed in [1]} and the forward regression in [2]}. [3]} proposed screening in generalized linear models whereas [4]} considered feature screening in nonlinear additive models.
[3]
[ [ 440, 443 ] ]
https://openalex.org/W2118047185
39ebbff9-35de-427a-a689-4da2590de95f
In recent years, there has been increased interest in feature screening or filtering while modeling association with a number of predictor variables. Screening is often used to reduce the dimensionality of the feature space so that it is amenable for the next step of the analysis. Screening approaches in the setting of linear models include the sure independent screening (SIS) method proposed in [1]} and the forward regression in [2]}. [3]} proposed screening in generalized linear models whereas [4]} considered feature screening in nonlinear additive models.
[4]
[ [ 501, 504 ] ]
https://openalex.org/W2056938357
e15b91f4-0af0-4191-99f9-e933cc05c8f1
[1]} considered screening with \(T(Y,X_j)\) being the marginal correlation between outcome \(Y\) and feature \(X_j\) and established sure independent screening property in the setting of linear regression. In the setting of generalized linear models, [2]} proposed independence screening with maximum marginal likelihood estimators. [3]} considered the context of nonparametric additive model where screening is performed by fitting marginal non-parametric regression to each of the features and then thresholding the utility of the predictors.
[1]
[ [ 0, 3 ] ]
https://openalex.org/W2154560360
5e337a44-9e8d-42c6-84da-61d92fdf392e
[1]} considered screening with \(T(Y,X_j)\) being the marginal correlation between outcome \(Y\) and feature \(X_j\) and established sure independent screening property in the setting of linear regression. In the setting of generalized linear models, [2]} proposed independence screening with maximum marginal likelihood estimators. [3]} considered the context of nonparametric additive model where screening is performed by fitting marginal non-parametric regression to each of the features and then thresholding the utility of the predictors.
[2]
[ [ 254, 257 ] ]
https://openalex.org/W2118047185
6a9533c5-45d8-4a26-9210-c2be25ff7fcb
[1]} considered screening with \(T(Y,X_j)\) being the marginal correlation between outcome \(Y\) and feature \(X_j\) and established sure independent screening property in the setting of linear regression. In the setting of generalized linear models, [2]} proposed independence screening with maximum marginal likelihood estimators. [3]} considered the context of nonparametric additive model where screening is performed by fitting marginal non-parametric regression to each of the features and then thresholding the utility of the predictors.
[3]
[ [ 336, 339 ] ]
https://openalex.org/W2056938357
86798710-9856-49a4-aa64-edf0168d7209
We explore screening performance in an example considered in [1]} and [2]}. In this setting of a sparse additive model, out of features \(\lbrace 1,\ldots ,p\rbrace \) , the sparse data generating model only includes predictors \({\cal D} = \lbrace j_1, \ldots ,j_d\rbrace \) which are associated with outcome \(Y\) via an additive model \( Y_i = \sum _{j \in {\cal D}} g_{j}(x_{ij}) \; + \; \varepsilon _i,~ i=1,\ldots ,n\)
[1]
[ [ 61, 64 ] ]
https://openalex.org/W2093994886
58e30504-514b-4963-90cf-6a29ad569ffb
We explore screening performance in an example considered in [1]} and [2]}. In this setting of a sparse additive model, out of features \(\lbrace 1,\ldots ,p\rbrace \) , the sparse data generating model only includes predictors \({\cal D} = \lbrace j_1, \ldots ,j_d\rbrace \) which are associated with outcome \(Y\) via an additive model \( Y_i = \sum _{j \in {\cal D}} g_{j}(x_{ij}) \; + \; \varepsilon _i,~ i=1,\ldots ,n\)
[2]
[ [ 70, 73 ] ]
https://openalex.org/W2056938357
a5501fd7-4845-425f-a151-b66d986af724
where \(g_j(\cdot )\) are functions of the respective predictors. Following [1]} and [2]}, we take \(d=4\) , \({\cal D}= \lbrace 1,2,3,4\rbrace \) and \(g_1(x) = \beta _1\,x\) ,  \(g_2(x) = \beta _2\,(2x-1)^2\) ,  \(g_3(x) = \beta _3\, \frac{ \sin (2\pi x)}{(2-\sin (2\pi x))}\) and \(g_4(x) = \beta _4 \lbrace \,0.1\sin (2\pi x) \,+\, 0.2\cos (2\pi x) \,+\,0.3 \sin (2\pi x)^2 \,+\, 0.4 \cos (2\pi x)^3 \,+\,0.5 \sin (2\pi x)^3 \rbrace \) . We consider the case of \(p=100\) , where the remaining 96 \(X\) -variables do not contribute in the data generating model.
[1]
[ [ 77, 80 ] ]
https://openalex.org/W2093994886
cef123dc-a89a-4a02-85dc-10e0ff93658a
where \(g_j(\cdot )\) are functions of the respective predictors. Following [1]} and [2]}, we take \(d=4\) , \({\cal D}= \lbrace 1,2,3,4\rbrace \) and \(g_1(x) = \beta _1\,x\) ,  \(g_2(x) = \beta _2\,(2x-1)^2\) ,  \(g_3(x) = \beta _3\, \frac{ \sin (2\pi x)}{(2-\sin (2\pi x))}\) and \(g_4(x) = \beta _4 \lbrace \,0.1\sin (2\pi x) \,+\, 0.2\cos (2\pi x) \,+\,0.3 \sin (2\pi x)^2 \,+\, 0.4 \cos (2\pi x)^3 \,+\,0.5 \sin (2\pi x)^3 \rbrace \) . We consider the case of \(p=100\) , where the remaining 96 \(X\) -variables do not contribute in the data generating model.
[2]
[ [ 86, 89 ] ]
https://openalex.org/W2056938357
4b6ca2bf-3407-4bf3-bc69-5405ae7be32e
[1]} proposed use of the distance correlation for general feature screening that may not require a model specification. The distance covariance [2]}, [3]} between two random vectors is a weighted \(L^2\) -distance between the joint characteristic function and the product of the marginal characteristic functions. The distance correlation is the ratio of the distance covariance to the product of the distance standard deviations. It is a measure of dependence between the random vectors and equals zero if and only if the random vectors are independent [2]}. Following the distance correlation based screening in [1]} and other works, we consider \(T(Y,X_j)\) = distance correlation\((Y,X_j)\) as a comparator screening method.
[1]
[ [ 0, 3 ], [ 614, 617 ] ]
https://openalex.org/W2164092415
53c48ece-bf07-43bb-8647-e93d8afa244d
[1]} proposed use of the distance correlation for general feature screening that may not require a model specification. The distance covariance [2]}, [3]} between two random vectors is a weighted \(L^2\) -distance between the joint characteristic function and the product of the marginal characteristic functions. The distance correlation is the ratio of the distance covariance to the product of the distance standard deviations. It is a measure of dependence between the random vectors and equals zero if and only if the random vectors are independent [2]}. Following the distance correlation based screening in [1]} and other works, we consider \(T(Y,X_j)\) = distance correlation\((Y,X_j)\) as a comparator screening method.
[2]
[ [ 144, 147 ], [ 554, 557 ] ]
https://openalex.org/W3106063097
ac288bee-7c65-4f76-8961-84d15765cf1b
[1]} proposed use of the distance correlation for general feature screening that may not require a model specification. The distance covariance [2]}, [3]} between two random vectors is a weighted \(L^2\) -distance between the joint characteristic function and the product of the marginal characteristic functions. The distance correlation is the ratio of the distance covariance to the product of the distance standard deviations. It is a measure of dependence between the random vectors and equals zero if and only if the random vectors are independent [2]}. Following the distance correlation based screening in [1]} and other works, we consider \(T(Y,X_j)\) = distance correlation\((Y,X_j)\) as a comparator screening method.
[3]
[ [ 150, 153 ] ]
https://openalex.org/W2023205960
b57de770-9339-4dad-b592-b298a66754a5
We follow the simulation framework in [1]} and [2]} and consider \(n=400,~p=100\) . \(\varepsilon _i \stackrel{i.i.d.}{\sim } N(0\) , variance=1.74), and \(\mbox{$\beta $}=(\beta _1,\ldots ,\beta _4) = (5,3,4,6)\) . We additionally include \(\mbox{$\beta $}=(0,0,0,0),~ (1,1,1,1)\) and \((2.5,1.5,2,3)\) to consider cases of no and weak associations. We compute \(\lbrace T_l(Y,X_j),~ j=1,\ldots ,p,~ l=1,2\rbrace \) where \(T_1(\cdot )\) and \(T_2(\cdot )\) are respectively the maximal permutation test statistic and the distance correlation. For each \(T_l\) , we perform marginal screening by keeping only those \(X_j\) having the top \(k\) \(T_l(Y,X_j)\) values. We consider three choices, \(k=4, 10\) and 20. The results in Table REF are based on 100 replications.
[1]
[ [ 38, 41 ] ]
https://openalex.org/W2093994886
42cb7266-b44e-46d0-a6ba-39e372e20ed0
We follow the simulation framework in [1]} and [2]} and consider \(n=400,~p=100\) . \(\varepsilon _i \stackrel{i.i.d.}{\sim } N(0\) , variance=1.74), and \(\mbox{$\beta $}=(\beta _1,\ldots ,\beta _4) = (5,3,4,6)\) . We additionally include \(\mbox{$\beta $}=(0,0,0,0),~ (1,1,1,1)\) and \((2.5,1.5,2,3)\) to consider cases of no and weak associations. We compute \(\lbrace T_l(Y,X_j),~ j=1,\ldots ,p,~ l=1,2\rbrace \) where \(T_1(\cdot )\) and \(T_2(\cdot )\) are respectively the maximal permutation test statistic and the distance correlation. For each \(T_l\) , we perform marginal screening by keeping only those \(X_j\) having the top \(k\) \(T_l(Y,X_j)\) values. We consider three choices, \(k=4, 10\) and 20. The results in Table REF are based on 100 replications.
[2]
[ [ 47, 50 ] ]
https://openalex.org/W2056938357
ddb5ebb1-5bac-4a01-b29a-f14e9085479d
One of the biomarkers measured in this study is human epididymis secretory protein 4 (HE4), which is a secretory protein known to be a prognostic factor for NSCLC patients [1]}, [2]}, [3]}. Figure REF shows a scatterplot of the measured HE4 levels and the outcome variable of recurrence status for the 123 patients. A logistic regression based estimated probability of recurrence curve is overlaid on the scatterplot, This estimated curve does not display a significant slope and both logistic and asymmetric complementary log-log link models for association of HE4 levels with NSCLC recurrence yield \(p\) -values higher than \(0.4\) (Table REF ). Another common prognostic marker for NSCLC is Carcino Embryonic Antigen (CEA) [4]}, [5]}. [6]} reported that Stage 1 NSCLC patients with high level of CEA have a higher risk of regional or systemic relapse. In Table REF however, both logistic and complementary log-log link models for association of CEA levels with NSCLC recurrence yield \(p\) -values higher than \(0.6\) . The \(p\) -values for marginal association from logistic regression model for some of the other important serum biomarkers are listed in Table REF . In fact, none of the markers meets the ubiquitous \(p < 0.05\) threshold. Exploration of joint association via forward and backward selection by AIC [7]} both returned the null model. Penalized variable selection via Elastic Net [8]} and Lasso [9]}, utilizing different recommended choices of the penalty parameters, also yield the null as the selected model. We further explore Surely Independent Screening [10]}, which iterates between screening based on marginal association and selection by joint association using the `SCAD' penalty. After many iterations, this approach also returns the null model as the selected model. <TABLE>
[8]
[ [ 1406, 1409 ] ]
https://openalex.org/W2122825543
310b1bc9-1be1-4f18-8f8f-95999d3a7125
One of the biomarkers measured in this study is human epididymis secretory protein 4 (HE4), which is a secretory protein known to be a prognostic factor for NSCLC patients [1]}, [2]}, [3]}. Figure REF shows a scatterplot of the measured HE4 levels and the outcome variable of recurrence status for the 123 patients. A logistic regression based estimated probability of recurrence curve is overlaid on the scatterplot, This estimated curve does not display a significant slope and both logistic and asymmetric complementary log-log link models for association of HE4 levels with NSCLC recurrence yield \(p\) -values higher than \(0.4\) (Table REF ). Another common prognostic marker for NSCLC is Carcino Embryonic Antigen (CEA) [4]}, [5]}. [6]} reported that Stage 1 NSCLC patients with high level of CEA have a higher risk of regional or systemic relapse. In Table REF however, both logistic and complementary log-log link models for association of CEA levels with NSCLC recurrence yield \(p\) -values higher than \(0.6\) . The \(p\) -values for marginal association from logistic regression model for some of the other important serum biomarkers are listed in Table REF . In fact, none of the markers meets the ubiquitous \(p < 0.05\) threshold. Exploration of joint association via forward and backward selection by AIC [7]} both returned the null model. Penalized variable selection via Elastic Net [8]} and Lasso [9]}, utilizing different recommended choices of the penalty parameters, also yield the null as the selected model. We further explore Surely Independent Screening [10]}, which iterates between screening based on marginal association and selection by joint association using the `SCAD' penalty. After many iterations, this approach also returns the null model as the selected model. <TABLE>
[9]
[ [ 1421, 1424 ] ]
https://openalex.org/W2135046866
bfdabe59-97d2-4004-bc9c-38ad9d7a432d
One of the biomarkers measured in this study is human epididymis secretory protein 4 (HE4), which is a secretory protein known to be a prognostic factor for NSCLC patients [1]}, [2]}, [3]}. Figure REF shows a scatterplot of the measured HE4 levels and the outcome variable of recurrence status for the 123 patients. A logistic regression based estimated probability of recurrence curve is overlaid on the scatterplot, This estimated curve does not display a significant slope and both logistic and asymmetric complementary log-log link models for association of HE4 levels with NSCLC recurrence yield \(p\) -values higher than \(0.4\) (Table REF ). Another common prognostic marker for NSCLC is Carcino Embryonic Antigen (CEA) [4]}, [5]}. [6]} reported that Stage 1 NSCLC patients with high level of CEA have a higher risk of regional or systemic relapse. In Table REF however, both logistic and complementary log-log link models for association of CEA levels with NSCLC recurrence yield \(p\) -values higher than \(0.6\) . The \(p\) -values for marginal association from logistic regression model for some of the other important serum biomarkers are listed in Table REF . In fact, none of the markers meets the ubiquitous \(p < 0.05\) threshold. Exploration of joint association via forward and backward selection by AIC [7]} both returned the null model. Penalized variable selection via Elastic Net [8]} and Lasso [9]}, utilizing different recommended choices of the penalty parameters, also yield the null as the selected model. We further explore Surely Independent Screening [10]}, which iterates between screening based on marginal association and selection by joint association using the `SCAD' penalty. After many iterations, this approach also returns the null model as the selected model. <TABLE>
[10]
[ [ 1585, 1589 ] ]
https://openalex.org/W2016119924
50534f2f-a17a-4fd6-a256-fb196850ac35
These results are clearly negative to the objectives and hypotheses of the study. The Hosmer-Lemeshow test [1]} for goodness-of-fit of the marginal logistic model with HE4 however yields a rather small \(p-\) value of 0.005 suggesting issues with the logistics regression model. The scatter plot in Figure REF highlights a few large outlying HE4 values that may be influencing the regression model fit. A scatter plot with CEA (not shown) also displays a few outlying values, however, they represent a different set of patients than the outlying values for HE4. A panel of ROC curves for four biomarkers are shown in Figure REF -REF . In contrast to the weak \(p\) -values from the logistic regression analyses, these ROC curves show strong to moderate sensitivity and specificity for NSCLC recurrence. <FIGURE>
[1]
[ [ 107, 110 ] ]
https://openalex.org/W2031687681
c3b18a5d-e615-46f3-8f9e-8141259b6248
Statistical analysis with many features is an increasingly common practice. It is tedious to perform model diagnostics when association with a large number of features are being explored and for this reason, model diagnostics is often overlooked. As we have illustrated in sections and , the proposed maximal permutation test can be robust to outliers and offers a general blackbox method for making a decision about association without necessarily performing such diagnostics. We employed the maximal permutation test here using chi-square test as the underlying test at each cutpoint. For association of NSCLC recurrence with preoperative levels of the HE4 marker, Figure REF shows a plot of \(p-\) values obtained at different cut points for the original sequence of the data. This process is repeated for the permuted sequences to obtain the permutation distribution of the test statistic. For comparison, we also report \(p\) -values based on [1]}, [2]} and modified Bonferroni [3]} approaches. Note that the [2]} adjustment is known to be similar to the [1]} adjustment for larger \(p\) -values. As we noted in our simulation studies, these adjustments are often overly conservative and have less power. For association with NSCLC recurrence, the proposed maximal permutation test reports a \(p\) -value of \(0.008\) for human epididymis secretory protein 4 (HE4) and \(p\) -value of \(0.05\) for Carcino Embryonic Antigen (CEA). respectively. After adjusting for multiplicity by the [6]} False Discovery Rate (FDR) approach, these \(p\) -values are respectively \(0.08\) and \(0.16\) but the multiplicity adjustment maintains the ordering of the \(p\) -values, and biomarkers HE4 and CEA still remain at the top among 10 markers ranked by adjusted \(p-\) value.
[1]
[ [ 950, 953 ], [ 1062, 1065 ] ]
https://openalex.org/W1978018057