arXiv:2305.04501

SEGA: Structural Entropy Guided Anchor View for Graph Contrastive Learning

Published on May 8, 2023

Abstract

In contrastive learning, the choice of "view" controls the information that the representation captures and influences the performance of the model. However, leading graph contrastive learning methods generally produce views via random corruption or learning, which can discard essential information and alter semantic information. An anchor view that preserves the essential information of the input graph for contrastive learning has hardly been investigated. In this paper, based on the theory of the graph information bottleneck, we derive the definition of this anchor view; put differently, the anchor view that carries the essential information of the input graph should have minimal structural uncertainty. Furthermore, guided by structural entropy, we implement this anchor view, termed SEGA, for graph contrastive learning. We extensively validate the proposed anchor view on various graph classification benchmarks under unsupervised, semi-supervised, and transfer learning settings, achieving significant performance boosts over state-of-the-art methods.
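The abstract centers on choosing the view with minimal structural uncertainty, measured by structural entropy. As a rough illustration only, the sketch below computes the one-dimensional structural entropy of an undirected graph (the entropy of the degree distribution weighted by graph volume); SEGA itself works with hierarchical encoding trees rather than this flat form, and the function name and toy edge list here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch, assuming an undirected graph given as an edge list.
# This is only the one-dimensional structural entropy, not the
# encoding-tree-based objective SEGA optimizes.
import math
from collections import defaultdict

def one_dim_structural_entropy(edges):
    """H1(G) = -sum_v (d_v / 2m) * log2(d_v / 2m),
    where d_v is the degree of node v and 2m is the graph volume."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    volume = sum(degree.values())  # equals 2m for an undirected graph
    return -sum((d / volume) * math.log2(d / volume) for d in degree.values())

# Toy example: a 4-cycle, where all degrees are equal,
# so the entropy is log2(4) = 2 bits.
print(one_dim_structural_entropy([(0, 1), (1, 2), (2, 3), (3, 0)]))
```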
