id: string (length 14–17)
paper_id: string (length 33–34)
text: string (length 1–500)
metadata: dict
2312.15971v1_32
http://arxiv.org/abs/2312.15971v1
(i.e., estimate the essential matrix Ê). Finally, we leverage Ê combined with Q to carry out the full-size verification, which can retrieve inliers falsely removed during the sequential pruning process. In short, the model estimation and the full-size verification can be expressed as: Ê = H(Q_2, w), ED = V(Ê, Q), where H(·,·) denotes the weighted eight-point algorithm <cit.> and V(·,·) represents the full-size verification operation measuring the epipolar distance set of all correspondences (i.e.,
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_33
http://arxiv.org/abs/2312.15971v1
distance set of all correspondences (i.e., ED = [ed_1, ed_2, …, ed_N] ∈ R^{N×1}). Each correspondence q_i corresponds to an epipolar distance ed_i, and we classify q_i as an inlier when ed_i is less than a manually set threshold. §.§ Graph Context Enhance Transformer Collecting abundant local context is highly beneficial for accurate correspondence pruning. The graph network plays a significant role in establishing and exploring relationships among neighbors. <cit.> and <cit.> leverage the nature of
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_34
http://arxiv.org/abs/2312.15971v1
<cit.> and <cit.> leverage the nature of the graph network to generate graph contexts with respective advantages. However, the converted graph contexts are not thoroughly explored and refined, resulting in a lost opportunity to substantially improve the effectiveness of subsequent tasks. As shown in Fig. <ref>, GCET first transforms the feature map of correspondences F = {f_1, ⋯, f_N} into the graph network 𝒢_i = (𝒱_i, ℰ_i), where 𝒢_i denotes the i-th correspondence, 𝒱_i = (v_i^1, ⋯, v_i^k) contains k neighbors
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_35
http://arxiv.org/abs/2312.15971v1
𝒱_i = (v_i^1, ⋯, v_i^k) contains k neighbors and ℰ_i = (e_i^1, ⋯, e_i^k) indicates the relationships between 𝒢_i and its neighbors. Here, we describe e_i^j as [f_i ‖ f_i − f_i^j], where [·‖·] denotes the concatenation operation. Then, the graph network is converted into two different types of graph context, credible graph context (CGC) and structure graph context (SGC), obtained by max-pooling with MLPs and by convolution with p-neighborhood segmentation <cit.>, respectively, to gather diverse context information. This process can be
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_36
http://arxiv.org/abs/2312.15971v1
diverse context information. This process can be expressed as follows: CGC = MLPs(Maxpooling(MLPs(ℰ_i))), SGC = Conv_2(Conv_1(ℰ_i)), where Conv_1 and Conv_2 are convolutional operations with kernel sizes of 1×p and 1×(k/p), respectively. Although CGC discards a majority of the edge information, it retains the most credible neighbor relationships. Conversely, SGC captures most of the structural information among nodes but is susceptible to interference from contaminated information. Next, in order to
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_37
http://arxiv.org/abs/2312.15971v1
from contaminated information. Next, in order to amplify the strengths of these graph contexts, we employ self-attention to recalibrate each context and leverage cross-attention in parallel to uncover their shared significant parts. However, both self-attention and cross-attention demand substantial computational resources, especially when N is large. Therefore, before the recalibration of graph contexts, it is imperative to streamline them into {CGC^', SGC^'} through a
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_38
http://arxiv.org/abs/2312.15971v1
graph contexts into {CGC^', SGC^'} through a clustering operation <cit.> to compact vertices in a learnable manner. The detailed operation can be formulated as follows: CGC^', SGC^' = Cluster(CGC, SGC), CGC^',e = (SA(CGC^') ⊕ CA(SGC^', CGC^')), SGC^',e = (SA(SGC^') ⊕ CA(CGC^', SGC^')), where CGC^',e and SGC^',e denote the enhanced graph contexts in a clustered state. SA(·) represents self-attention and CA(·,·) indicates cross-attention, where the query is derived from the preceding input and the
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_39
http://arxiv.org/abs/2312.15971v1
query is derived from the preceding input and the key-value pairs are sourced from the second input. Additionally, ⊕ is the attentional fusion operation <cit.>, which treats transitional graph contexts discriminately to generate a complete graph context with strong characteristics. Finally, the enhanced graph contexts are recovered to their original sizes to preserve permutation invariance and pass through another attentional fusion to combine their respective highlighted advantages. The process can be
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_40
http://arxiv.org/abs/2312.15971v1
highlighted advantages. The process can be described as: CGC^e, SGC^e = Recover(CGC^',e, SGC^',e), GC^e = (CGC^e ⊕ SGC^e), where GC^e is the output of GCET. §.§ Graph Context Guidance Transformer Global consensus serves as convincing evidence to assist in discerning inliers from outliers. Nevertheless, due to the substantial presence and random distribution of outliers, excavating global consensus among inliers is a highly challenging task. To extend the application of the enhanced graph context to the
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_41
http://arxiv.org/abs/2312.15971v1
application of the enhanced graph context to the global realm, we design GCGT to guide the inlier discrimination process by mining the consensus among inliers. The detailed guidance process is shown in Fig. <ref>. Specifically, we first feed the enhanced graph context to a linear layer, assigning a confidence score to each node to generate a score table (ST). Based on ST, we sort the confidence scores in descending order and sample vertices with higher confidence scores to form a set
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_42
http://arxiv.org/abs/2312.15971v1
with higher confidence scores to form a set of candidates. Notably, before delving into the consensus guidance procedure, we expand the candidate set to enhance its expressive capacity and mitigate the potential disruption caused by hidden outliers. Simultaneously, we apply the clustering operation to the enhanced graph context before the sampling phase, streamlining its representation and concurrently reducing the computational load during the guidance process. These preparations can be described
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_43
http://arxiv.org/abs/2312.15971v1
process. These preparations can be described as: ST = LinearLayer(GC^e), GS = Expand(Sample(Sort_dec(ST, sr))), GT = Cluster(GC^e), where Sort_dec denotes sorting in descending order, sr indicates the sampling rate, and GS and GT represent the guiding source and the guiding target, respectively. Expand is the inverse operation of Cluster. Next, we employ the vanilla Transformer to conduct the consensus guidance, which seeks similarities between GS and GT to assign greater attention to inliers.
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_44
http://arxiv.org/abs/2312.15971v1
GS and GT to assign greater attention to inliers. Here, the query is a linear projection of GS and the key-value pairs are sourced from GT. To prevent information loss during the guidance procedure, we apply a skip connection to GS. Besides, we also apply OAFilter <cit.> to the clustered graph context, which captures spatial-wise dependencies complementing the output of the Transformer. Finally, we recover the fused results of the OAFilter output and the consensus guidance and further integrate GC^e to
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_45
http://arxiv.org/abs/2312.15971v1
consensus guidance and further integrate GC^e to harmonize the balance between local context and global consensus. These operations can be formulated as follows: GR = ((TF(GS, GT) + GS) ⊕ OAFilter(GT)), GC_out = (Recover(GR) ⊕ GC^e), where GR denotes the guiding results, TF indicates the vanilla Transformer, and GC_out is the final graph context output by GCGT. §.§ Loss function Following <cit.>, we employ a hybrid loss function to optimize GCT-Net. The loss function is composed of two constituents: L = L_cls(o_i ,
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_46
http://arxiv.org/abs/2312.15971v1
is composed of two constituents: L = L_cls(o_i, y_i) + δ L_reg(Ê, E), where L_cls and L_reg denote the correspondence classification loss and the essential matrix regression loss, respectively. δ represents a parameter utilized to balance these two losses. L_cls can be further formulated as: L_cls(o_i, y_i) = ∑_{i=1}^{K} ℋ(η_i ⊙ o_i, y_i), where ℋ represents the binary cross-entropy loss, o_i signifies the logit values derived from the i-th pruning module, and y_i denotes the ground-truth label set for the i-th
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_47
http://arxiv.org/abs/2312.15971v1
denotes the ground-truth label set for the i-th pruning module, where labels are ascertained by a threshold of 10^-4. ⊙ is the Hadamard product. The parameter η_i is a dynamic temperature vector, strategically leveraged to mitigate the negative effects of label ambiguity <cit.>. K indicates the number of correspondence pruning modules. L_reg can be described as follows <cit.>: L_e(Ê, E) = (p'^T Ê p)^2 / ((Ep)_{[1]}^2 + (Ep)_{[2]}^2 + (E^T p')_{[1]}^2 + (E^T p')_{[2]}^2), where p and p' denote the coordinate sets in image
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_48
http://arxiv.org/abs/2312.15971v1
p and p' denote the coordinate sets of the matched image pair. q_[i] stands for the i-th element of vector q. § EXPERIMENTS §.§ Evaluation Protocols §.§.§ Datasets We conduct experiments on outdoor and indoor datasets (i.e., YFCC100M and SUN3D) to demonstrate the outlier removal capability of GCT-Net. The YFCC100M dataset contains 100 million publicly accessible travel images divided into 71 sequences. The SUN3D dataset, comprising a substantial collection of RGB-D images, has been categorized into
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_49
http://arxiv.org/abs/2312.15971v1
of RGB-D images, has been categorized into 254 sequences. As in <cit.>, these sequences are further divided into a training set, a validation set, and a test set. Some images from the training sequences are held out to serve as known-scene test data. §.§.§ Evaluation metrics We evaluate our proposed GCT-Net on both the inlier/outlier classification and the relative pose estimation tasks. In the inlier/outlier classification task, the network is supposed to remove outliers
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_50
http://arxiv.org/abs/2312.15971v1
task, the network is supposed to remove outliers and preserve as many inliers as possible. Therefore, Precision (P), Recall (R), and F-score (F) are selected as our evaluation metrics. In the relative pose estimation task, the mean average precision (mAP) is adopted as our criterion, which measures the angular differences between the estimated and ground-truth vectors for both rotation and translation. §.§ Implementation Details In the overall framework implementation of our
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_51
http://arxiv.org/abs/2312.15971v1
the overall framework implementation of our network, following <cit.>, we utilize two consecutive pruning modules with a pruning rate of 0.5 each to achieve progressive selection. SIFT is employed to generate an initial set of N = 2000 correspondences, and the channel dimension d is expanded to 128. In GCET, the neighbor number k in the KNN algorithm is set to 9 for constructing the graph network. In GCGT, we configure the sampling rate sr to be 0.2. As for the common components in GCET
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_52
http://arxiv.org/abs/2312.15971v1
to be 0.2. As for the common components in GCET and GCGT, the channel reduction ratio r in attentional fusion <cit.> and the head number h in the Transformer <cit.> are both configured to 4. In alignment with the configuration of <cit.>, we utilize the Adam optimizer <cit.> with a batch size of 32 and a learning rate of 10^-3 to train our network. The training process spans a total of 500k epochs: for the initial 20k epochs, δ in Eq. <ref> is set to 0, and for
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_53
http://arxiv.org/abs/2312.15971v1
20k epochs, δ in Eq. <ref> is set to 0, and for the remaining 480k epochs, δ is fixed to 0.5. §.§ Correspondence Classification We perform a comprehensive comparison between GCT-Net and a selection of classic and cutting-edge works, spanning the traditional method <cit.> as well as learning-based methods <cit.>. Here, we apply a ratio test with a threshold of 0.8 for RANSAC to proactively eliminate certain erroneous matches, preventing a sharp performance decline. Table <ref> showcases the comparative
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_54
http://arxiv.org/abs/2312.15971v1
decline. Table <ref> showcases the comparative results on the correspondence classification task on YFCC100M and SUN3D. We can observe that our network achieves the best performance, except in terms of the Recall metric. The reason lies in our adoption of the progressive correspondence pruning strategy, which, while removing a large number of outliers, inevitably eliminates some hidden inliers as well. Consequently, our method and CLNet exhibit significant improvement in the Precision metric,
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_55
http://arxiv.org/abs/2312.15971v1
significant improvement in the Precision metric, while their Recall is relatively lower compared to other methods. However, considering the overall metric (i.e., F-score), we still obtain the best results, surpassing the second-best method by 2.27% and 0.51% on the YFCC100M and SUN3D datasets, respectively. Fig. <ref> displays the visualization results of classification, which further demonstrate the remarkable ability of our network in removing outliers. After correspondence
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_56
http://arxiv.org/abs/2312.15971v1
in removing outliers. After correspondence classification, inliers are assigned weights to perform the relative pose estimation task. The corresponding experimental results are shown in Table <ref>. Here, we also evaluate the compatibility of various feature matching methods with different feature extraction approaches. In addition to the hand-crafted method SIFT <cit.>, we employ a learning-based feature extraction approach, SuperPoint <cit.>, for testing. In experiments, we select mAP5^∘ and
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_57
http://arxiv.org/abs/2312.15971v1
for testing. In experiments, we select mAP5^∘ and mAP20^∘ to comprehensively evaluate the performance of these methods under low-tolerance and high-tolerance scenarios, respectively. Besides, to assess the generalization capability of the models, we conduct experiments in both known and unknown scenes. From Table <ref>, it is apparent that GCT-Net outperforms all other configurations under the SIFT-based condition. When compared to CLNet, which also employs the progressive pruning framework, our network demonstrates a
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_58
http://arxiv.org/abs/2312.15971v1
pruning framework, our network demonstrates a significant lead in unknown scenes, surpassing CLNet by 13% in mAP5^∘ and 7.1% in mAP20^∘. We also achieve 9.18% and 6.98% improvements compared to ConvMatch on unknown and known scenes under mAP5^∘, respectively. However, when adopting SuperPoint as the feature extraction method, our network only slightly surpasses ConvMatch in mAP20^∘, while trailing behind it in mAP5^∘. This discrepancy might be attributed to SuperPoint generating a large number of high-quality
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_59
http://arxiv.org/abs/2312.15971v1
to SuperPoint generating a large number of high-quality correspondences at the beginning. For our network, pruning such high-quality correspondences could result in the loss of crucial information and thus cause a decrease in estimation accuracy. In contrast, ConvMatch can leverage convolutions to capture additional information effectively. §.§ Ablation Studies We perform ablation experiments on GCT-Net to demonstrate the effectiveness of the individual components. Table <ref> displays the experimental
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_60
http://arxiv.org/abs/2312.15971v1
components. Table <ref> displays the experimental results of integrating the network with various modules. IPS indicates a network composed of only ResNet blocks that adopts the iterative pruning strategy. GCET represents the application of the Graph Context Enhance Transformer. GCGT-P refers to the partial Graph Context Guidance Transformer, where we remove the injected OAFilter to assess the effectiveness of the sampling-to-consensus-guidance process. GCGT-W signifies the whole Graph
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_61
http://arxiv.org/abs/2312.15971v1
guidance. GCGT-W signifies the whole Graph Context Guidance Transformer. From Table <ref>, it is evident that integrating each component has a favorable impact on the network performance compared to using IPS alone. Specifically, the second row of the table, which incorporates GCET into IPS, obtains 13.97% and 10.27% improvements under mAP5^∘ and mAP20^∘. This demonstrates the significance of generating graph contexts and effectively leveraging them. The third row (i.e., IPS +
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_62
http://arxiv.org/abs/2312.15971v1
leveraging them. The third row (i.e., IPS + GCGT-P) validates the effectiveness of the sampling strategy and consensus guidance, which gains an 11.67% improvement under mAP5^∘. Compared to the partial GCGT, utilizing the complete GCGT (the fourth row) results in improvements of 1.3% and 0.84% under mAP5^∘ and mAP20^∘. This highlights the benefit of injecting OAFilter, which enhances the output of the Transformer in a complementary manner. By combining GCET and GCGT, the network achieves the best performance. We also
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_63
http://arxiv.org/abs/2312.15971v1
network achieves the best performance. We also perform ablation studies on different sampling rates. An excessively large sampling rate leads to a substantial computational load, so in our experiments we keep the sampling rate below 0.5. As shown in Fig. <ref>, opting for a low sampling rate (i.e., 0.05) can limit the expressive capacity of the network, whereas selecting an excessively high sampling rate (i.e., 0.5) can make it susceptible
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_64
http://arxiv.org/abs/2312.15971v1
sampling rate (i.e., 0.5) can make it susceptible to disruption by outlier information. Therefore, it is necessary to choose an appropriate sampling rate (i.e., 0.2) to strike a balance between the two aspects. § CONCLUSION In this paper, we propose the effective Graph Context Transformation Network (GCT-Net) for progressive correspondence pruning. The graph network serves as an effective carrier of local context information. Therefore, we propose the Graph Context Enhance Transformer to
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_65
http://arxiv.org/abs/2312.15971v1
propose the Graph Context Enhance Transformer to convert the graph network into multi-branch graph contexts and to enhance both the individual characteristics and the shared significant information of these contexts. This allows the advantages of different graph contexts to be effectively combined and fully utilized. To extend the enhanced graph context to the global domain, we further design the Graph Context Guidance Transformer. This module adopts a score-based sampling strategy to select candidates
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_66
http://arxiv.org/abs/2312.15971v1
sampling strategy to select candidates as the guiding source and regards the unsampled vertices as the guiding target for consensus guidance, which seeks the hidden inliers via consensus similarities. Extensive experiments on the correspondence classification and relative pose estimation tasks demonstrate the superior ability of GCT-Net, surpassing the performance of state-of-the-art methods. § ACKNOWLEDGMENT This work was supported by the National Natural Science
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.15971v1_67
http://arxiv.org/abs/2312.15971v1
was supported by the National Natural Science Foundation of China under Grants 62072223, 62125201 and 62020106007.
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
2312.16325v1_0
http://arxiv.org/abs/2312.16325v1
An efficient approach to characterize spatio-temporal dependence in cortical surface fMRI data. Huy Dang, Dept. of Statistics, Pennsylvania State University, USA; Marzia A. Cremona, Dept. of Operations and Decision Systems, Université Laval, Canada, and CHU de Québec – Université Laval Research Center, Canada; Francesca Chiaromonte, Dept. of Statistics, Pennsylvania State University, USA, and Inst. of Economics and L'EMbeDS, Sant'Anna School of Advanced Studies, Italy; Nicole Lazar, Dept. of Statistics,
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_1
http://arxiv.org/abs/2312.16325v1
Italy; Nicole Lazar, Dept. of Statistics, Pennsylvania State University, USA. January 14, 2024
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_4
http://arxiv.org/abs/2312.16325v1
Petyt-Spriano-Zalloum recently developed the notion of a curtain model, which is a hyperbolic space associated to any CAT(0) space. It plays a similar role for CAT(0) spaces that curve graphs do for mapping class groups of finite-type surfaces. Those authors asked whether this curtain model is a quasi-isometry invariant, namely if quasi-isometric CAT(0) spaces have quasi-isometric curtain models. In this short note, we provide an explicit example answering this question in the negative. §
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_5
http://arxiv.org/abs/2312.16325v1
answering this question in the negative. § INTRODUCTION In <cit.>, Petyt-Spriano-Zalloum introduced a combinatorial tool called a curtain that serves as an analogue in the CAT(0) setting of a hyperplane from the theory of CAT(0) cube complexes. Building off of “hyperplane-separation" metrics introduced by Genevois <cit.>, the authors utilize curtains in a CAT(0) space X to build the curtain model, a hyperbolic space which effectively collapses the “flat" parts of the CAT(0) space. This “coning
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_6
http://arxiv.org/abs/2312.16325v1
“flat" parts of the CAT(0) space. This “coning off" is by design, and gives rise to many similarities between a CAT(0) space and its curtain model that parallel the relationship between mapping class groups and their curve graphs (See <cit.>).Petyt-Spriano-Zalloum asked in <cit.> if a quasi-isometry between CAT(0) spaces always induces a quasi-isometry between their corresponding curtain models. We answer this question in the negative. For a CAT(0) space X, we denote X to be its curtain model.
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_7
http://arxiv.org/abs/2312.16325v1
space X, we denote X to be its curtain model. There exist a CAT(0) space X and a self quasi-isometry ϕ: X ⟶ X such that ϕ does not descend to a quasi-isometry of X. Further, there exist two quasi-isometric CAT(0) spaces W, Z whose curtain models W, Z are not quasi-isometric. Our example is based on an example due to Cashen <cit.>, which he used to show that quasi-isometries of CAT(0) spaces need not induce homeomorphisms of their contracting boundaries when equipped with the Gromov product
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_8
http://arxiv.org/abs/2312.16325v1
boundaries when equipped with the Gromov product topology. Thus, it also follows that we get an analogous result for the curtain models of CAT(0) spaces. There exist quasi-isometric CAT(0) spaces W, Z whose curtain models have non-homeomorphic Gromov boundaries. Acknowledgments: Harry Petyt independently discovered this example. I would like to thank him for his useful comments on an earlier draft of this article and for encouraging me to write it up. Also, the warmest of thanks goes to Matthew
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_9
http://arxiv.org/abs/2312.16325v1
up. Also, the warmest of thanks goes to Matthew Gentry Durham for his constructive feedback on an earlier draft of this paper. § BACKGROUND We now give a brief summary of definitions imported from <cit.>. For background on CAT(0) spaces, we refer the reader to <cit.>. The following is the background required to define the curtain model (Definition <ref>). We always assume X is a CAT(0) space. Let X be a CAT(0) space and let α: I → X be a geodesic. For any number r such that [r-1/2, r+1/2] is in
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_10
http://arxiv.org/abs/2312.16325v1
For any number r such that [r-1/2, r+1/2] is in the interior of I, the curtain dual to α at r is h = h_α = h_{α,r} = π_α^{-1}(α[r-1/2, r+1/2]), where π_α is the closest-point projection to α. We call the segment α[r-1/2, r+1/2] the pole of the curtain, which we denote as P when needed. A curtain h separates sets A, B ⊂ X if A ⊂ h^- and B ⊂ h^+. A set {h_i} is a chain if the h_i are pairwise disjoint and h_i separates h_{i-1} and h_{i+1} for all i. We say a chain {h_i} separates sets A, B ⊂ X if each h_i separates A
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_11
http://arxiv.org/abs/2312.16325v1
separates sets A, B ⊂ X if each h_i separates A and B. Let L ∈ ℕ. Disjoint curtains h and h' are said to be L-separated if every chain meeting both h and h' has cardinality at most L. Two disjoint curtains are said to be separated if they are L-separated for some L. If c is a chain of curtains such that each pair is L-separated, then we refer to c as an L-chain. Denote by X_L the metric space (X, d_L), where d_L is the metric defined as d_L(x, y) = 1 + max{|c| : c is an L-chain separating x from y}
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_12
http://arxiv.org/abs/2312.16325v1
c is an L-chain separating x from y} with d_L(x,x) = 0. Note that, by Remark 2.16 in <cit.>, for any x, y ∈ X we have d_L(x,y) < 1 + d(x,y). Fix a sequence of numbers λ_L ∈ (0,1) such that ∑_{L=1}^∞ λ_L < ∑_{L=1}^∞ Lλ_L < ∑_{L=1}^∞ L^2λ_L < ∞. We consider the space (X, d̂), where the distance between two points x, y ∈ X is defined by d̂(x,y) = ∑_{L=1}^∞ λ_L d_L(x,y) and d_L is the L-metric defined in Definition <ref>. We call (X, d̂) the curtain model of X and denote it as X. Both of the
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_13
http://arxiv.org/abs/2312.16325v1
model of X and denote it as X. Both of the following definitions will also help in the construction of the counterexample. Let X be a CAT(0) space and let α: [0, a] → X and α': [0, a'] → X be two geodesic paths issuing from the same point α(0) = α'(0). Then the comparison angle ∠_𝔼(α(t), α'(t')) is a non-decreasing function of both t, t' ≥ 0, and the Alexandrov angle ∠(α, α') is equal to lim_{t, t' → 0} ∠_𝔼(α(t), α'(t')) = lim_{t → 0} ∠_𝔼(α(t), α'(t)). Hence, we define: ∠(α, α') = lim_{t → 0} 2
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_14
http://arxiv.org/abs/2312.16325v1
α'(t)). Hence, we define: ∠(α, α') = lim_{t → 0} 2 arcsin( (1/(2t)) d(α(t), α'(t)) ). A geodesic α is D-strongly contracting if for any ball B disjoint from α we have diam(π_α(B)) ≤ D, where π_α is the closest-point projection to α. § THE COUNTEREXAMPLE The following counterexample was used in <cit.> to show that two quasi-isometric CAT(0) spaces can have contracting boundaries of different homeomorphism type when equipped with the Gromov product topology. We first introduce this space and its curtain
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_15
http://arxiv.org/abs/2312.16325v1
We first introduce this space and its curtain model. 3.1 The Infinite Parking Lot and its Curtain Model. Let Y be ℝ^2 with a disc of radius one centered at the origin removed. Denote by X the universal cover of Y. We can view X in the following way: take X_i to be a quarter flat with the quarter disc centered at the origin removed. Then X = (∪_i X_i)/∼, where ∼ denotes gluing the y-axis of X_i to the x-axis of X_{i+1} for all i ∈ ℤ. One informally calls X the “infinite parking lot" as it can be viewed as
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_16
http://arxiv.org/abs/2312.16325v1
the “infinite parking lot" as it can be viewed as a collection of quarter flats glued together that spiral up and down, giving the “infinite levels" of a parking lot. See Figure <ref>. X is indeed a CAT(0) space since it is a gluing of CAT(0) spaces along single geodesic lines. A consequence of this construction is that a half flat with a half disc of radius one removed at the origin can be isometrically embedded into each (X_i ∪ X_{i+1})/∼. In fact, we can spiral up by any amount θ and get the same
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_17
http://arxiv.org/abs/2312.16325v1
we can spiral up by any amount θ and get the same isometric embedding of the half flat with a half disc removed at the origin. Parameterize X via its natural polar coordinates ℝ × [1,∞), and define the spiral to be the line ℝ × {1}. We now explain why X's curtain model X is a quasi-line. Take any geodesic ray γ such that γ(0) is on the spiral and the Alexandrov angle between γ and the spiral is π/2. Up to an isometric rotation of X by some θ along the spiral, γ is the y-axis of some X_i. Since γ is the y-axis of
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_18
http://arxiv.org/abs/2312.16325v1
the y-axis of some X_i. Since γ is the y-axis of some isometrically embedded half flat (with a half disc removed), all curtains dual to γ will stay in its half flat, (X_i ∪ X_{i+1})/∼. As seen in Figure <ref>, if h_1, h_2 are two disjoint curtains dual to γ, then h_1, h_2 will be two parallel, infinitely long strips of width one in (X_i ∪ X_{i+1})/∼. All curtains dual to the x-axis of X_i will meet h_1 and h_2, which means h_1 and h_2 are not L-separated for any L. The same is true for any two disjoint
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_19
http://arxiv.org/abs/2312.16325v1
for any L. The same is true for any two disjoint curtains dual to γ. Also, by Lemma 2.21 in <cit.>, the length of any L-chain crossing γ is bounded above by 4L+10. Thus, the diameter of γ in the curtain model is diam(γ) = ∑_{L=1}^∞ λ_L diam_L(γ) ≤ ∑_{L=1}^∞ λ_L(4L+10) < ∞. This is true for any geodesic ray that starts at the spiral and whose Alexandrov angle with the spiral is π/2. In particular, if we denote the spiral as α, then for any x ∈ X, d̂(x, π_α(x)) ≤ ∑_{L=1}^∞ λ_L(4L+10). Now, fix some origin point on α, and let α^+ denote the
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_20
http://arxiv.org/abs/2312.16325v1
fix some origin point on α, and let α^+ denote the positive spiral direction and α^- the negative spiral direction emanating from it. Both directions are π-strongly contracting, as balls disjoint from the axis can only project to half of the circumference of one of the circles in the spiral. By <cit.>, there exists an infinite L-chain dual to α^+ for some L (similarly for α^-). Thus, in the curtain model X, the diameters of α^+ and α^- will both be unbounded. By <cit.>, both α^+ and α^- are
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_21
http://arxiv.org/abs/2312.16325v1
be unbounded. By <cit.>, both α^+ and α^- are unparameterized quasi-geodesics in X. This shows that α is a quasi-line in X. Since d̂(x, π_α(x)) is uniformly bounded for any x ∈ X, this yields that X is a quasi-line. 3.2 A Self Quasi-Isometry Does Not Induce a Quasi-Isometry of Curtain Models. For some origin on the spiral, denote the points of X by (θ, r), where θ is the angle traveled around the spiral starting at the origin, and r is the “radius" distance away from the spiral. Consider the points (i, 2^i) and (0, 2^i) for
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_22
http://arxiv.org/abs/2312.16325v1
Consider the points (i, 2^i) and (0, 2^i) for all i ∈ ℕ. Through a variation of the logarithmic spiral quasi-isometry of the Euclidean plane, ϕ: X ⟶ X, (t, r) ⟼ (t − log_2(r), r), we see that ϕ((i, 2^i)) = (0, 2^i). However, in the curtain model X, {(0, 2^i)}_i represents a quasi-point, and {(i, 2^i)}_i represents a quasi-line. This means that the self quasi-isometry ϕ will not descend to a quasi-isometry of X. 3.3 Upgrading to a Counterexample for Quasi-Isometric Invariance. Now, following the same vein as
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_23
http://arxiv.org/abs/2312.16325v1
Invariance. Now, following the same vein as <cit.>, we construct two quasi-isometric CAT(0) spaces whose curtain models are not quasi-isometric. Construct the space W by gluing a geodesic ray γ_i to X at each point (i, 2^i). Similarly, construct the space Z by gluing a geodesic ray γ_i' to X at each point (0, 2^i). These spaces are quasi-isometric via the quasi-isometry ϕ: W ⟶ Z, (t, r) ⟼ (t − log_2(r), r), γ_i ⟼ γ_i'. However, the curtain models will not be quasi-isometric. See Figure <ref>. Indeed, as
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_24
http://arxiv.org/abs/2312.16325v1
be quasi-isometric. See Figure <ref>. Indeed, as {(0, 2^i)}_i is a quasi-point in Z, each of the geodesic rays in {γ_i'}_i emanates from a point which is within bounded distance of the origin on the quasi-line X. Thus, Z is quasi-isometric to an infinite wedge of rays. On the other hand, {(i, 2^i)}_i represents some sub-quasi-line in X, so the geodesic rays {γ_i}_i have starting points at increasing distance away from the origin in X as i increases. So, W is quasi-isometric to ℝ with a ray attached to each positive
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16325v1_25
http://arxiv.org/abs/2312.16325v1
to ℝ with a ray attached to each positive integer. These two spaces are not quasi-isometric. The same logic also applies to show that W and Z have Gromov boundaries of different homeomorphism type. The sequence {γ_i}_i in the Gromov boundary of W converges to α^+. No such converging sequence exists in Z. This proves the two Gromov boundaries of W, Z are not homeomorphic.
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
2312.16600v1_0
http://arxiv.org/abs/2312.16600v1
Learning from small data sets: Patch-based regularizers in inverse problems for image reconstruction. Moritz Piening^1, Fabian Altekrüger^2, Johannes Hertrich^2, Paul Hagemann^1, Andrea Walther^2, Gabriele Steidl^1. January 14, 2024. Single-cell RNA sequencing (scRNA-seq) enables researchers to analyze gene expression at the single-cell level. One important task in scRNA-seq data
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_1
http://arxiv.org/abs/2312.16600v1
level. One important task in scRNA-seq data analysis is unsupervised clustering, which helps identify distinct cell types, laying the foundation for other downstream analysis tasks. In this paper, we propose a novel method called Cluster-aware Iterative Contrastive Learning (CICL in short) for scRNA-seq data clustering, which utilizes an iterative representation learning and clustering framework to progressively learn the clustering structure of scRNA-seq data with a cluster-aware
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_2
http://arxiv.org/abs/2312.16600v1
structure of scRNA-seq data with a cluster-aware contrastive loss. CICL consists of a Transformer encoder, a clustering head, a projection head, and a contrastive loss module. First, CICL extracts the feature vectors of the original and augmented data with the Transformer encoder. Then, it computes the clustering centroids by K-means and employs Student's t-distribution to assign pseudo-labels to all cells in the clustering head. The projection head uses a Multi-Layer Perceptron (MLP) to
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_3
http://arxiv.org/abs/2312.16600v1
uses a Multi-Layer Perceptron (MLP) to obtain projections of the augmented data. Finally, both pseudo-labels and projections are used in the contrastive loss to guide the model training. This process runs iteratively so that the clustering result improves progressively. Extensive experiments on 25 real-world scRNA-seq datasets show that CICL outperforms the state-of-the-art (SOTA) methods. Concretely, CICL surpasses the existing methods by 14% to 280%, and by 5% to 133% on average
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_4
http://arxiv.org/abs/2312.16600v1
by 14% to 280%, and by 5% to 133% on average in terms of the performance metrics ARI and NMI, respectively. Source code is available at https://github.com/Alunethy/CICL. § INTRODUCTION Each cell possesses unique characteristics and biological functions defined by its gene transcription activities. Conventional bulk RNA sequencing measures the average transcription levels of a multitude of cells, thereby obscuring the heterogeneity among individual cells. In the
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_5
http://arxiv.org/abs/2312.16600v1
the heterogeneity among individual cells. In the past decade, the rapid progress of single-cell RNA sequencing (scRNA-seq) technologies <cit.> has enabled transcriptome-wide gene expression measurement in individual cells, which greatly deepens our understanding of cellular heterogeneity and propels research on cell biology, immunology, and complex diseases <cit.>. Identifying cell types is a fundamental step in unraveling complex biological processes such as cellular differentiation,
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_6
http://arxiv.org/abs/2312.16600v1
processes such as cellular differentiation, lineage commitment, and gene regulation <cit.>. As such, cell clustering is an important task in scRNA-seq analysis. However, the inherent high dimensionality, noise, and sparsity of scRNA-seq data present severe challenges for clustering analysis <cit.>. Up to now, many models and algorithms have been developed for scRNA-seq data clustering. Early scRNA-seq clustering methods mainly rely on traditional dimensionality reduction and
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_7
http://arxiv.org/abs/2312.16600v1
rely on traditional dimensionality reduction and clustering methods. For example, pcaReduce <cit.> combines PCA and K-means, iteratively merging cluster pairs based on the related probability density function. Recognizing the importance of similarity metrics in the clustering task, SIMLR <cit.> amalgamates multiple kernels to learn sample similarity and performs spectral clustering. Seurat <cit.> employs a graph-based community detection algorithm, while Louvain <cit.> is based on the shared nearest
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_8
http://arxiv.org/abs/2312.16600v1
Louvain <cit.> is based on the shared nearest neighbor graph to identify cell types. In the past decade, with the rapid development of deep learning, deep neural networks (DNNs) have been extensively applied to scRNA-seq data clustering to address the limitations of conventional methods <cit.>. DEC <cit.> and IDEC <cit.>, based on autoencoders (AE), use KL divergence as the clustering loss, achieving simultaneous learning of feature representations and cluster assignments. To address the
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_9
http://arxiv.org/abs/2312.16600v1
and cluster assignments. To address the pervasive dropout events in scRNA-seq data, DCA <cit.> proposes a zero-inflated negative binomial (ZINB) model to better characterize the distribution of scRNA-seq data, and uses the negative log-likelihood as the reconstruction loss instead of the mean-squared error (MSE) loss frequently used in autoencoders. scVI <cit.> is a deep generative model based on variational autoencoders, which can perform various scRNA-seq data analyses such as data imputation,
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_10
http://arxiv.org/abs/2312.16600v1
scRNA-seq data analyses such as data imputation, clustering, and visualization. scDeepCluster <cit.> introduces a novel model-based deep learning clustering approach. By combining the ZINB model with the DEC algorithm, it is designed to capture the underlying cluster structure of scRNA-seq data. scDHA <cit.> exploits a stacked Bayesian self-learning network to learn compact and generalized representations of scRNA-seq data. To leverage the relationships between cells, some studies construct the
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_11
http://arxiv.org/abs/2312.16600v1
between cells, some studies construct the cell-cell graph and apply Graph Neural Networks (GNNs) to learn the representations of cells. scDSC <cit.> formulates and aggregates cell-cell relationships with graph neural networks and learns latent gene expression patterns using a ZINB-model-based autoencoder. GraphSCC <cit.> integrates the structural relationships between cells into scRNA-seq clustering by employing a graph convolutional network. It also utilizes a dual self-supervised module to
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_12
http://arxiv.org/abs/2312.16600v1
cluster cells and guide the training process. Furthermore, some other works have tried to train models using manual annotations as supervisory information or prior knowledge, as demonstrated in transfer learning and meta-learning methods <cit.>. While these methods can deliver excellent results on specific datasets, they also face a serious scalability challenge. Contrastive learning (CL) has been widely used in computer vision and natural
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_13
http://arxiv.org/abs/2312.16600v1
language processing <cit.>. There have also been endeavors to incorporate contrastive learning into scRNA-seq data clustering. For instance, contrastive-sc <cit.> proposes a contrastive learning-based method for scRNA-seq data that masks a certain proportion of data features to obtain augmented data. Similar to most practices in contrastive learning, this method designates augmented pairs as positive samples, while considering all other pairs as
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_14
http://arxiv.org/abs/2312.16600v1
negatives. scNAME <cit.> improves the conventional contrastive loss by proposing a new neighborhood contrastive loss combined with an ancillary mask estimation task, which better characterizes feature correlation and pairwise cell similarity. CLEAR <cit.> employs multiple data augmentation methods to simulate different noise types, uses the infoNCE <cit.> loss as the contrastive loss, and generates feature representations for scRNA-seq data with the
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_15
http://arxiv.org/abs/2312.16600v1
momentum update strategy of the encoder. However, these methods mainly apply standard contrastive learning directly, failing to adapt the selection of positive and negative samples to the clustering task. This paper aims to boost the performance of scRNA-seq data clustering by exploring new methods. Our contributions are two-fold. On the one hand, we propose a Cluster-aware Iterative Contrastive Learning (CICL) method for scRNA-seq data clustering.
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_16
http://arxiv.org/abs/2312.16600v1
CICL employs an iterative representation learning and clustering framework with a cluster-aware contrastive loss; it progressively improves the clustering result by comprehensively exploiting the hidden cluster structure of the data during representation learning. On the other hand, we conduct extensive experiments on 25 real-world datasets, which show that our method outperforms the state-of-the-art (SOTA) methods in most cases. § MATERIALS AND METHODS §.§ Datasets and Performance Metrics
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_17
http://arxiv.org/abs/2312.16600v1
The proposed CICL method is evaluated on 25 real scRNA-seq datasets, in which the cell labels are known a priori or have been validated in previous studies. The 25 datasets were derived from 7 different sequencing platforms. The smallest dataset contains only 90 cells, while the largest has 48,266 cells. The number of cell subtypes in these datasets ranges from 2 to 15. Statistics of these datasets are presented in
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_18
http://arxiv.org/abs/2312.16600v1
Table <ref>. We preprocess the scRNA-seq data with the Python package SCANPY <cit.>, following the strategy in <cit.>. Specifically, given the raw read counts (i.e., the gene expression matrix), we first filter out cells and genes without counts. Then, we calculate the library size of each cell as the total number of read counts per cell, and obtain the size factor of each cell by dividing its library size by the median of all library sizes.
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_19
http://arxiv.org/abs/2312.16600v1
Thirdly, we obtain the normalized read count by dividing the raw read count by the size factor of each cell, followed by a natural log transformation. Furthermore, we consider only the top-t highly variable genes according to their normalized dispersion values, and set t to 500 by default in this paper. Finally, we transform the normalized read counts into z-score data.
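A minimal sketch of this preprocessing pipeline with SCANPY is given below, assuming the raw counts are held in a cells-by-genes NumPy array; the function name, the toy input and the use of log1p for the natural-log step are illustrative assumptions rather than details of the CICL reference implementation.

import numpy as np
import scanpy as sc
import anndata as ad

def preprocess(counts, n_top_genes=500):
    adata = ad.AnnData(np.asarray(counts, dtype=np.float32))
    sc.pp.filter_cells(adata, min_counts=1)                 # drop cells without counts
    sc.pp.filter_genes(adata, min_counts=1)                 # drop genes without counts
    library_size = adata.X.sum(axis=1)                      # total read counts per cell
    size_factor = library_size / np.median(library_size)    # size factor per cell
    adata.X = np.log1p(adata.X / size_factor[:, None])      # normalize, then natural log
    sc.pp.highly_variable_genes(adata, n_top_genes=n_top_genes, flavor="seurat")
    adata = adata[:, adata.var["highly_variable"]].copy()   # keep the top-t genes
    sc.pp.scale(adata)                                      # z-score each gene
    return adata.X

X = preprocess(np.random.poisson(1.0, size=(200, 2000)))    # toy raw count matrix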
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_20
http://arxiv.org/abs/2312.16600v1
Two widely used metrics, normalized mutual information (NMI) and adjusted Rand index (ARI), are used to evaluate clustering performance. NMI measures the similarity between the predicted labels and the real labels. Specifically, given the predicted labels U=[u_1, u_2, ..., u_N] ∈ ℝ^N and the real labels V=[v_1, v_2, ..., v_N] ∈ ℝ^N, where N denotes the number of cells, NMI is evaluated as NMI = I(U,V) / max(H(U), H(V)), where I(U,V) = ∑_u ∑_v p(u,v) log [p(u,v) / (p(u)p(v))] calculates the mutual information between U and V, p(u,v) is
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_21
http://arxiv.org/abs/2312.16600v1
the joint distribution of U and V, and p(u) and p(v) are the corresponding marginal distributions. H(U) = -∑_u p(u) log p(u) is the entropy of clustering U; similarly, H(V) = -∑_v p(v) log p(v). ARI also measures the similarity between the clustering result and the true categories; it corrects the Rand index (RI) for chance agreement caused by random assignments. The value of ARI ranges from -1 to 1; the larger the value, the more
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_22
http://arxiv.org/abs/2312.16600v1
similar the clustering result is to the real categories. ARI is defined as ARI = [∑_ij \binom{n_ij}{2} - (∑_i \binom{a_i}{2} ∑_j \binom{b_j}{2}) / \binom{N}{2}] / [(∑_i \binom{a_i}{2} + ∑_j \binom{b_j}{2}) / 2 - (∑_i \binom{a_i}{2} ∑_j \binom{b_j}{2}) / \binom{N}{2}], where n_ij denotes the number of cells in both cluster i of U and cluster j of V, a_i denotes the count of cells assigned to cluster i of U, and b_j indicates the count of cells assigned to cluster j of V.
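Both metrics are available in scikit-learn; the quick check below is a hedged illustration with toy labels, assuming scikit-learn is installed, where average_method="max" matches the max(H(U), H(V)) normalization used above.

from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

true_labels = [0, 0, 1, 1, 2, 2]   # toy ground-truth cell types
pred_labels = [1, 1, 0, 0, 2, 2]   # toy clustering result with permuted cluster ids

nmi = normalized_mutual_info_score(true_labels, pred_labels, average_method="max")
ari = adjusted_rand_score(true_labels, pred_labels)
print(nmi, ari)   # both are 1.0 here, since the two partitions coincide up to relabeling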
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_23
http://arxiv.org/abs/2312.16600v1
§.§ The CICL Method §.§.§ Overview CICL is a cluster-aware iterative contrastive learning method designed for clustering scRNA-seq data; its framework is illustrated in Fig. <ref>. Specifically, in the model training phase, we first generate two augmented views X_aug1 and X_aug2 of the raw data X by adding noise that is randomly sampled from the Gaussian N(0, 1) and mapped to the range [0, 1] via a linear transformation.
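A small sketch of this augmentation step is given below, assuming the preprocessed matrix is a NumPy array; the seed and matrix sizes are arbitrary stand-ins.

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 500))                   # stand-in for the preprocessed expression matrix

def augment(X, rng):
    noise = rng.standard_normal(X.shape)     # noise sampled from N(0, 1)
    noise = (noise - noise.min()) / (noise.max() - noise.min())   # linear map to [0, 1]
    return X + noise

X_aug1, X_aug2 = augment(X, rng), augment(X, rng)   # two independent augmented views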
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_24
http://arxiv.org/abs/2312.16600v1
Then, X, X_aug1 and X_aug2 are input into a transformer encoder <cit.> to obtain their representations H, H_aug1 and H_aug2, respectively. Next, we perform K-means on H to get the centroid matrix C = [c_1, c_2, ..., c_K], where c_i is the centroid vector of cluster i; the number of centroids is equal to the number of cell subtypes (or clusters) in the training dataset. After that, H and C are fed to the clustering-head to generate a pseudo-label for each cell. Meanwhile, the projection-head encodes H_aug1 and H_aug2 to obtain their projections Z_aug1 and Z_aug2.
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_25
http://arxiv.org/abs/2312.16600v1
Finally, in addition to the traditional instance-wise contrastive loss, we propose a novel cluster-aware contrastive loss that aligns positive pairs and contrasts negative pairs simultaneously, taking the projections Z_aug1 and Z_aug2 and the pseudo-labels as input. We construct positive pairs in two ways: an instance-wise way and a pseudo-label based way. In particular, an instance-wise positive pair consists of the representations of the two augmented copies of
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_26
http://arxiv.org/abs/2312.16600v1
each cell, while a pseudo-label positive pair is formed by the representations of the augmented copies of two cells with the same pseudo-label (i.e., belonging to the same cluster). This training process runs iteratively; a condensed sketch is given below. In the clustering phase, the input data X are preprocessed and encoded by the trained transformer encoder to obtain the representation H, which is then clustered by K-means to generate the final clustering result.
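The sketch below summarizes the iterative procedure, assuming PyTorch and scikit-learn; encoder, proj_head, loss_fn and optimizer are placeholders for the components detailed in the following sections, not the authors' implementation.

import torch
from sklearn.cluster import KMeans

def noise_01(shape):
    n = torch.randn(shape)                                 # noise sampled from N(0, 1)
    return (n - n.min()) / (n.max() - n.min())             # rescaled to [0, 1]

def train_and_cluster(X, n_clusters, encoder, proj_head, loss_fn, optimizer, epochs=100):
    X = torch.as_tensor(X, dtype=torch.float32)
    for _ in range(epochs):
        X1, X2 = X + noise_01(X.shape), X + noise_01(X.shape)      # two augmented views
        H, H1, H2 = encoder(X), encoder(X1), encoder(X2)
        # re-cluster the current representations; centroids and pseudo-labels are refreshed each epoch
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(H.detach().cpu().numpy())
        pseudo_labels = torch.as_tensor(km.labels_, dtype=torch.long)
        Z1, Z2 = proj_head(H1), proj_head(H2)
        loss = loss_fn(Z1, Z2, pseudo_labels)               # instance-wise + cluster-aware terms
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    with torch.no_grad():                                   # clustering phase
        H = encoder(X)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(H.cpu().numpy())

In the paper, the pseudo-labels come from the Student's t soft assignment over the K-means centroids (next subsection); since the arg-max of that assignment is the nearest centroid, km.labels_ is used above as an equivalent shortcut.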
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_27
http://arxiv.org/abs/2312.16600v1
In the following sections, we present the major components of our method in detail. §.§.§ Transformer Encoder The raw scRNA-seq data are modeled as a matrix X ∈ ℝ^N × G, where N indicates the number of cells and G denotes the number of genes. To begin with, we construct augmented data by adding Gaussian noise, generating two augmented copies (or views) X_aug1 and X_aug2 of X. Then, we encode X_aug1, X_aug2 and X with a transformer encoder, which has four layers, each
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_28
http://arxiv.org/abs/2312.16600v1
of which consists of two networks: a multi-head self-attention network and a position-wise fully connected feed-forward network, each followed by a residual connection and layer normalization. For example, given the input X of the self-attention layer, the output is as follows: H_MultiHead = Concat(Att_1(XW_1^v), ..., Att_h(XW_h^v)) W^O, where W_i^v ∈ ℝ^G × d and W^O ∈ ℝ^hd × G are learnable parameter matrices and h is the number of heads.
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_29
http://arxiv.org/abs/2312.16600v1
Att_i is evaluated by Att_i = softmax(XW_i^q (XW_i^k)^𝖳 / √(d)), i = 1, 2, ..., h, where W_i^q ∈ ℝ^G × d and W_i^k ∈ ℝ^G × d are learnable parameter matrices. Then, after the residual connection and layer normalization, we have H_res = LayerNorm(X + H_MultiHead). The fully connected feed-forward network consists of two linear layers with a rectified linear unit (ReLU) activation in between, so we have H_fc = ReLU(H_res W_1) W_2, where W_1 and W_2 are
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_30
http://arxiv.org/abs/2312.16600v1
learnable parameter matrices. Finally, the output of the i-th layer of the transformer encoder is H_i = LayerNorm(H_fc + H_res).
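A minimal PyTorch sketch of one such encoder layer is given below, treating the cells in a batch as tokens of dimension G; the head count, head dimension and feed-forward width are illustrative hyperparameters, not values reported for CICL.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class EncoderLayer(nn.Module):
    def __init__(self, G, d=64, h=4, d_ff=256):
        super().__init__()
        self.h, self.d = h, d
        self.Wq = nn.Linear(G, h * d, bias=False)    # stacks W_i^q over all heads
        self.Wk = nn.Linear(G, h * d, bias=False)    # stacks W_i^k
        self.Wv = nn.Linear(G, h * d, bias=False)    # stacks W_i^v
        self.Wo = nn.Linear(h * d, G, bias=False)    # W^O
        self.ln1, self.ln2 = nn.LayerNorm(G), nn.LayerNorm(G)
        self.W1, self.W2 = nn.Linear(G, d_ff), nn.Linear(d_ff, G)

    def forward(self, X):                            # X: (N, G), one cell per row
        N = X.size(0)
        q = self.Wq(X).view(N, self.h, self.d).transpose(0, 1)   # (h, N, d)
        k = self.Wk(X).view(N, self.h, self.d).transpose(0, 1)
        v = self.Wv(X).view(N, self.h, self.d).transpose(0, 1)
        att = F.softmax(q @ k.transpose(1, 2) / math.sqrt(self.d), dim=-1)   # Att_i
        heads = (att @ v).transpose(0, 1).reshape(N, self.h * self.d)        # Concat(...)
        H_res = self.ln1(X + self.Wo(heads))         # residual connection + layer norm
        H_fc = self.W2(F.relu(self.W1(H_res)))       # position-wise feed-forward network
        return self.ln2(H_fc + H_res)                # output of this layer

layer = EncoderLayer(G=500)
H_out = layer(torch.randn(128, 500))                 # the full encoder stacks four such layers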
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_31
http://arxiv.org/abs/2312.16600v1
§.§.§ Clustering-head and Pseudo-label Generation The traditional contrastive loss suffers from sampling bias <cit.>. For example, given a cell i, all the other cells are considered as its negative samples. However, these negative samples contain some cells of the same type as cell i, which will be undesirably pushed away from cell i in the representation space by conventional contrastive learning. To address this problem, CICL employs a cluster-aware contrastive learning strategy. To this end, we cluster the representations H of the training data by K-means, and each cluster is characterized by its centroid c_i in the representation space, which is updated iteratively. Then, in the clustering-head, we use the Student’s t-distribution to compute the probability q_ij that cell i belongs
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_32
http://arxiv.org/abs/2312.16600v1
to the j-th cluster: q_ij = (1 + ‖h_i - c_j‖_2^2 / α)^(-(α+1)/2) / ∑_k=1^K (1 + ‖h_i - c_k‖_2^2 / α)^(-(α+1)/2), where h_i is the representation of cell i in H and α is the degree of freedom of the Student’s t-distribution; we set α = 1 in this paper. Finally, we obtain the pseudo-label l_i of cell i from the probability vector q_i = (q_i1, q_i2, ..., q_iK) as l_i = label_assign(q_i), where label_assign is a function that returns the cluster index corresponding to the maximum q_ij (j ∈ {1, 2, ..., K}).
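A short sketch of this soft assignment and pseudo-label step, assuming H holds the learned representations and C the K-means centroids (toy dimensions, α = 1 as in the text):

import torch

def soft_assign(H, C, alpha=1.0):
    dist2 = torch.cdist(H, C) ** 2                        # squared distances ||h_i - c_j||_2^2
    q = (1.0 + dist2 / alpha) ** (-(alpha + 1.0) / 2.0)   # unnormalized Student's t kernel
    return q / q.sum(dim=1, keepdim=True)                 # normalize over the K clusters

H = torch.randn(6, 8)                                     # toy cell representations
C = torch.randn(3, 8)                                     # toy cluster centroids
q = soft_assign(H, C)
pseudo_labels = q.argmax(dim=1)                           # l_i = label_assign(q_i)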
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_33
http://arxiv.org/abs/2312.16600v1
Thus, we obtain the pseudo-labels L = (l_1, l_2, ..., l_N) of all cells, which are used for contrastive loss computation. Note that each cell and its two augmented copies share the same pseudo-label. We use the term “pseudo-label” because these are just intermediate (not final) cluster labels. §.§.§ Projection-head and Contrastive Learning Losses We project H_aug1 and H_aug2 to obtain the projections Z_aug1 and Z_aug2 by
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_34
http://arxiv.org/abs/2312.16600v1
the projection-head, which is composed of a two-layer perceptron. Formally, Z_aug1 = W_3 ReLU(W_4 H_aug1) and Z_aug2 = W_3 ReLU(W_4 H_aug2), where W_3 and W_4 are learnable parameters and ReLU is the activation function. Let z_i and z^'_i be the i-th rows of Z_aug1 and Z_aug2, respectively, which correspond to the representations of cell i in the two augmented views. For z_i, we treat not only the pair (z_i, z^'_i) but also the pair of z_i and any other sample of the same cluster
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_35
http://arxiv.org/abs/2312.16600v1
(in terms of pseudo-label) as positive, while the pair of z_i and any sample from the other clusters is treated as negative. Given the batch size B, we consider two losses as follows. Instance-wise contrastive loss. CICL computes the infoNCE loss <cit.> for each cell. For cell i with two views z_i and z_i^', its contrastive loss in terms of z_i is l_ins(z_i) = -log [ exp(sim(z_i, z_i^')/T) / (∑_m=1^B 𝕀_i ≠ m exp(sim(z_i, z_m)/T) + ∑_m=1^B exp(sim(z_i, z_m^')/T)) ],
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_36
http://arxiv.org/abs/2312.16600v1
where 𝕀_i ≠ m is an indicator function whose value is 1 if i ≠ m and 0 otherwise, and T is the temperature parameter, set to 0.5 in this paper. The similarity function sim(·,·) adopts the cosine similarity (normalized dot product), i.e., sim(z_i, z_i^') = z_i^𝖳 z_i^' / (‖z_i‖‖z_i^'‖). The overall instance-wise contrastive loss is ℒ_ins = (1/2B) ∑_i=1^B [l_ins(z_i) + l_ins(z^'_i)], where l_ins(z^'_i) is cell i's contrastive loss in terms of z^'_i.
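A compact PyTorch sketch of the projection-head and this instance-wise loss is given below, assuming T = 0.5 and cosine similarity; it evaluates the loss in terms of z_i only, and the symmetric term in z^'_i is obtained by swapping the two arguments. Names and dimensions are illustrative.

import torch
import torch.nn.functional as F

class ProjectionHead(torch.nn.Module):                    # Z = W_3 ReLU(W_4 H)
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W4 = torch.nn.Linear(d_in, d_in)
        self.W3 = torch.nn.Linear(d_in, d_out)
    def forward(self, H):
        return self.W3(F.relu(self.W4(H)))

def instance_loss(Z1, Z2, T=0.5):
    Z1, Z2 = F.normalize(Z1, dim=1), F.normalize(Z2, dim=1)          # dot product = cosine similarity
    B = Z1.size(0)
    sim12 = Z1 @ Z2.t() / T                                          # sim(z_i, z'_m)/T
    sim11 = Z1 @ Z1.t() / T                                          # sim(z_i, z_m)/T
    pos = sim12.diag()                                               # sim(z_i, z'_i)/T
    off_diag = sim11[~torch.eye(B, dtype=torch.bool)].view(B, B - 1) # keeps only terms with i != m
    denom = torch.logsumexp(torch.cat([off_diag, sim12], dim=1), dim=1)
    return (denom - pos).mean()                                      # batch mean of -log(...)

Z1, Z2 = torch.randn(32, 64), torch.randn(32, 64)                    # toy projected views
L_ins = 0.5 * (instance_loss(Z1, Z2) + instance_loss(Z2, Z1))        # symmetrized overall loss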
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_37
http://arxiv.org/abs/2312.16600v1
With this instance-wise contrastive loss, CICL can learn good representations by pulling positive pairs together and pushing negative pairs apart in the cell representation space. Cluster-aware contrastive loss. To mitigate the sampling bias, we propose a novel cluster-aware contrastive loss, which is evaluated with the pseudo-labels L, Z_aug1 and Z_aug2. We treat pairs of representations with the same pseudo-label as positive pairs, and the remaining pairs as negative
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }