taesiri committed on
Commit
cf4229b
1 Parent(s): 773cd52

Upload papers/2102/2102.06514.tex with huggingface_hub

papers/2102/2102.06514.tex ADDED
@@ -0,0 +1,1471 @@
1
+
2
+
3
+ \documentclass{article}
4
+ \pdfoutput=1
5
+ \usepackage{microtype}
6
+ \usepackage{graphicx,wrapfig}
7
+ \usepackage{tikz}
8
+ \usepackage{amsmath, amssymb,mathtools}
9
+ \usetikzlibrary{positioning,decorations.pathmorphing,calc,shapes}
10
+ \usepackage{placeins}
11
+ \usepackage{subfigure}
12
+ \usepackage{wrapfig}
13
+ \usepackage{booktabs}
14
+
15
+ \usepackage[unicode=true]{hyperref}
16
+
17
+ \newcommand{\theHalgorithm}{\arabic{algorithm}}
18
+
19
+
20
+
21
+ \usepackage{selectp}
22
+
23
+
24
+
25
+ \definecolor{babyblue}{rgb}{0.54, 0.81, 0.94}
26
+ \definecolor{citrine}{rgb}{0.89, 0.82, 0.04}
27
+ \definecolor{misocolor}{rgb}{0.16,0.27,0.86}
28
+ \definecolor{jbcolor}{rgb}{0.9,0.4,0.2}
29
+ \definecolor{bernacolor}{rgb}{0.9608,0.4863,0.00}
30
+ \definecolor{carlcolor}{rgb}{0.0,0.9863,0.30}
31
+ \definecolor{grey}{rgb}{0.3, 0.3, 0.3}
32
+ \newcommand{\todom}[1]{\todo[color=misocolor!30]{#1}\xspace}
33
+ \newcommand{\todomi}[1]{\todo[inline,color=misocolor!30]{#1}}
34
+ \newcommand{\todoinline}[1]{\todo[inline,color=grey!30]{#1}}
35
+
36
+
37
+ \newcommand{\todoco}[1]{\todo[color=citrine!30]{#1}\xspace}
38
+ \newcommand{\todocoi}[1]{\todo[inline,color=citrine!30]{#1}\xspace}
39
+ \newcommand{\todojb}[1]{\todo[color=jbcolor!30]{#1}\xspace}
40
+ \newcommand{\todoberna}[1]{\todo[color=bernacolor!10]{#1}\xspace}
41
+ \newcommand{\todor}[1]{\todo[color=babyblue]{R\' emi: #1}\xspace}
42
+ \newcommand{\todori}[1]{\todo[inline,color=babyblue]{R\' emi: #1}\xspace}
43
+ \newcommand{\todofloriani}[1]{\todo[inline,color=grey!15]{Florian: #1}\xspace}
44
+ \newcommand{\carl}[1]{\todo[color=carlcolor!50]{#1}\xspace}
45
+
46
+ \definecolor{graphicbackground}{rgb}{0.96,0.96,0.8}
47
+ \definecolor{rouge1}{RGB}{226,0,38} \definecolor{orange1}{RGB}{243,154,38} \definecolor{jaune}{RGB}{254,205,27} \definecolor{blanc}{RGB}{255,255,255} \definecolor{rouge2}{RGB}{230,68,57} \definecolor{orange2}{RGB}{236,117,40} \definecolor{taupe}{RGB}{134,113,127} \definecolor{gris}{RGB}{91,94,111} \definecolor{bleu1}{RGB}{38,109,131} \definecolor{bleu2}{RGB}{28,50,114} \definecolor{vert1}{RGB}{133,146,66} \definecolor{vert3}{RGB}{20,200,66} \definecolor{vert2}{RGB}{157,193,7} \definecolor{darkyellow}{RGB}{233,165,0} \definecolor{lightgray}{rgb}{0.9,0.9,0.9}
48
+ \definecolor{darkgray}{rgb}{0.6,0.6,0.6}
49
+ \definecolor{babyblue}{rgb}{0.54, 0.81, 0.94}
50
+ \definecolor{citrine}{rgb}{0.89, 0.82, 0.04}
51
+ \definecolor{misogreen}{rgb}{0.25,0.6,0.0}
52
+ \definecolor{PalePurp}{rgb}{0.66,0.57,0.66}
53
+ \definecolor{todocolor}{rgb}{0.66,0.99,0.99}
54
+ \definecolor{pearOne}{HTML}{2C3E50}
55
+ \definecolor{pearTwo}{HTML}{A9CF54}
56
+ \definecolor{pearTwoT}{HTML}{C2895B}
57
+ \definecolor{pearThree}{HTML}{E74C3C}
58
+ \colorlet{titleTh}{pearOne}
59
+ \colorlet{bull}{pearTwo}
60
+ \definecolor{pearcomp}{HTML}{B97E29}
61
+ \definecolor{pearFour}{HTML}{588F27}
62
+ \definecolor{pearFith}{HTML}{ECF0F1}
63
+ \definecolor{pearDark}{HTML}{2980B9}
64
+ \definecolor{pearDarker}{HTML}{1D2DEC}
65
+
66
+
67
+ \hypersetup{
68
+ colorlinks,
69
+ citecolor=pearDark,
70
+ linkcolor=pearThree,
71
+ breaklinks=true,
72
+ urlcolor=pearDarker}
73
+
74
+ \newcommand{\rcol}[1]{\textcolor{red}{\textit{#1}}}
75
+ \newcommand{\defcol}[1]{\textcolor{vert3}{\textbf{#1}}}
76
+ \newcommand{\yellcol}[1]{\textcolor{babyblue}{\textbf{#1}}}
77
+ \newcommand{\gcol}[1]{\textcolor{vert3}{\textbf{#1}}}
78
+
79
+ \newcommand{\emphcol}[1]{\textcolor{vert3}{\textbf{#1}}}
80
+
81
+ \newcommand{\grcol}[1]{\textcolor{gray}{#1}}
82
+ \newcommand{\notecol}[1]{\textcolor{gray}{#1}}
83
+
84
+ \newcommand{\bcol}[1]{\textcolor{blue}{\textit{#1}}}
85
+ \newcommand{\ycol}[1]{\textcolor{darkyellow}{\textit{#1}}}
86
+ \newcommand{\rcolb}[1]{\textcolor{red}{\textit{\textbf{#1}}}}
87
+ \newcommand{\gcolb}[1]{\textcolor{vert3}{\textit{\textbf{#1}}}}
88
+ \newcommand{\bcolb}[1]{\textcolor{blue}{\textit{\textbf{#1}}}}
89
+ \newcommand{\ycolb}[1]{\textcolor{darkyellow}{\textit{\textbf{#1}}}}
90
+ \newcommand{\mcolA}[1]{{\color{bleu1} #1}}
91
+ \newcommand{\mcolB}[1]{{\color{darkyellow} #1}}
92
+ \newcommand{\mcolC}[1]{{\color{misogreen} #1}}
93
+
94
+
95
+
96
+
97
+
98
+ \newcommand*\diff{\mathop{}\!\mathrm{d}}
99
+ \newcommand*\Diff[1]{\mathop{}\!\mathrm{d^#1}}
100
+ \renewcommand{\d}[1]{\ensuremath{\operatorname{d}\!{#1}}}
101
+
102
+ \newcommand{\set}[1]{\left\{#1\right\}}
103
+
104
+
105
+
106
+
107
+ \newcommand{\II}[1]{\mathds{1}_{\left\{#1\right\}}}
108
+ \newcommand{\I}{{\mathds{1}}}
109
+
110
+
111
+
112
+ \newcommand{\ra}{\rightarrow}
113
+
114
+ \newcommand{\Bernoulli}{\mathrm{Bernoulli}}
115
+
116
+
117
+ \newcommand{\specialcell}[2][c]{\begin{tabular}[#1]{@{}c@{}}#2\end{tabular}}
118
+
119
+ \newtheorem{assumption}{Assumption}
120
+ \newtheorem{lemma}{Lemma}
121
+ \newtheorem{theorem}{Theorem}
122
+ \newtheorem{proposition}{Proposition}
123
+ \newtheorem{definition}{Definition}
124
+ \newtheorem{corollary}{Corollary}
125
+ \newtheorem{remark}{Remark}
126
+
127
+
128
+
129
+ \newcommand{\simclr}{\texttt{SimCLR}\xspace}
130
+ \newcommand{\meanteacher}{\texttt{Mean Teacher}\xspace}
131
+ \newcommand{\moco}{\texttt{MoCo}\xspace}
132
+ \newcommand{\mocoo}{\texttt{MoCo v2}\xspace}
133
+ \newcommand{\pirl}{\texttt{PIRL}\xspace}
134
+ \newcommand{\cmc}{\texttt{CMD}\xspace}
135
+ \newcommand{\dimmm}{\texttt{DIM}\xspace}
136
+ \newcommand{\amdim}{\texttt{AMDIM}\xspace}
137
+ \newcommand{\sela}{\texttt{SELA}\xspace}
138
+
139
+ \newcommand{\BN}{\normalfont\textsc{B\hskip -0.1em N}\xspace}
140
+ \newcommand{\LN}{\normalfont\textsc{L\hskip -0.1em N}\xspace}
141
+ \newcommand{\Id}{\normalfont\textsc{I\hskip -0.1em d}\xspace}
142
+ \newcommand{\BNt}{\normalfont\textsc{BatchNorm}\xspace}
143
+ \newcommand{\LNt}{\normalfont\textsc{LayerNorm}\xspace}
144
+
145
+ \newcommand{\LARS}{\normalfont\texttt{LARS}\xspace}
146
+ \newcommand{\MT}{\normalfont\texttt{MT}\xspace}
147
+ \newcommand{\SGD}{\normalfont\texttt{SGD}\xspace}
148
+ \newcommand{\LBFGS}{\normalfont\texttt{LBFGS}\xspace}
149
+
150
+ \newcommand{\CPC}{\normalfont\texttt{CPC}\xspace}
151
+ \newcommand{\CMC}{\normalfont\texttt{CMC}\xspace}
152
+ \newcommand{\CPCC}{\normalfont\texttt{CPC v2}\xspace}
153
+ \newcommand{\PIRL}{\normalfont\texttt{PIRL}\xspace}
154
+
155
+
156
+ \newcommand{\R}{\mathbb{R}}
157
+ \newcommand{\realset}{\mathbb{R}}
158
+
159
+ \newcommand{\NN}{{\mathbb N}}
160
+ \newcommand{\1}{\mathds{1}}
161
+ \newcommand{\bOne}{{\bf 1}}
162
+ \newcommand{\bZero}{{\bf 0}}
163
+ \newcommand{\E}{\mathbb{E}}
164
+ \newcommand{\EE}[1]{\mathbb{E}\left[#1\right]}
165
+ \newcommand{\EEt}[1]{\mathbb{E}_t\left[#1\right]}
166
+ \newcommand{\EEs}[2]{\mathbb{E}_{#1}\left[#2\right]}
167
+ \newcommand{\EEc}[2]{\mathbb{E}\left[#1\left|#2\right.\right]}
168
+ \newcommand{\EEcc}[2]{\mathbb{E}\left[\left.#1\right|#2\right]}
169
+ \newcommand{\EEcct}[2]{\mathbb{E}_t\left[\left.#1\right|#2\right]}
170
+ \newcommand{\probability}{\mathbb{P}}
171
+ \renewcommand{\P}{\mathbb{P}}
172
+ \newcommand{\PP}[1]{\mathbb{P}\left[#1\right]}
173
+ \newcommand{\PPt}[1]{\mathbb{P}_t\left[#1\right]}
174
+ \newcommand{\PPc}[2]{\mathbb{P}\left[#1\left|#2\right.\right]}
175
+ \newcommand{\PPcc}[2]{\mathbb{P}\left[\left.#1\right|#2\right]}
176
+ \newcommand{\PPct}[2]{\mathbb{P}_t\left[#1\left|#2\right.\right]}
177
+ \newcommand{\PPcct}[2]{\mathbb{P}_t\left[\left.#1\right|#2\right]}
178
+ \newcommand{\pa}[1]{\left(#1\right)}
179
+ \newcommand{\sqpa}[1]{\left[#1\right]}
180
+ \newcommand{\ac}[1]{\left\{#1\right\}}
181
+ \newcommand{\ev}[1]{\left\{#1\right\}}
182
+ \newcommand{\card}[1]{\left|#1\right|}
183
+
184
+
185
+ \let\originalleft\left
186
+ \let\originalright\right
187
+ \renewcommand{\left}{\mathopen{}\mathclose\bgroup\originalleft}
188
+ \renewcommand{\right}{\aftergroup\egroup\originalright}
189
+
190
+
191
+ \newcommand{\normtwo}[1]{\left\|#1\right\|_2}
192
+ \newcommand{\norm}[1]{\left\|#1\right\|}
193
+ \newcommand{\onenorm}[1]{\norm{#1}_1}
194
+ \newcommand{\infnorm}[1]{\norm{#1}_\infty}
195
+ \newcommand{\norminf}[1]{\infnorm{#1}}
196
+
197
+ \newcommand{\abs}[1]{\left|#1\right|}
198
+
199
+ \newcommand{\CommaBin}{\mathbin{\raisebox{0.5ex}{,}}}
200
+ \newcommand*{\eqdef}{\triangleq}
201
+ \newcommand{\transpose}{^\mathsf{\scriptscriptstyle T}}
202
+
203
+ \newcommand{\cA}{\mathcal{A}}
204
+ \newcommand{\cB}{\mathcal{B}}
205
+ \newcommand{\cC}{\mathcal{C}}
206
+ \newcommand{\cD}{\mathcal{D}}
207
+ \newcommand{\cE}{\mathcal{E}}
208
+ \newcommand{\F}{\mathcal{F}}
209
+ \newcommand{\cF}{\mathcal{F}}
210
+ \newcommand{\cG}{\mathcal{G}}
211
+ \newcommand{\cH}{\mathcal{H}}
212
+ \newcommand{\cI}{\mathcal{I}}
213
+ \newcommand{\cJ}{\mathcal{J}}
214
+ \newcommand{\cK}{\mathcal{K}}
215
+ \newcommand{\cL}{\mathcal{L}}
216
+ \newcommand{\calL}{\cL}
217
+ \newcommand{\cM}{\mathcal{M}}
218
+ \newcommand{\cN}{\mathcal{N}}
219
+ \newcommand{\cO}{\mathcal{O}}
220
+ \newcommand{\tcO}{\widetilde{\cO}}
221
+ \newcommand{\OO}{\mathcal{O}}
222
+ \newcommand{\tOO}{\wt{\OO}}
223
+ \newcommand{\cP}{\mathcal{P}}
224
+ \newcommand{\cQ}{\mathcal{Q}}
225
+ \newcommand{\cR}{\mathcal{R}}
226
+ \newcommand{\Sw}{\mathcal{S}}
227
+ \newcommand{\cS}{\mathcal{S}}
228
+ \newcommand{\cT}{\mathcal{T}}
229
+ \newcommand{\T}{\cT}
230
+ \newcommand{\cU}{\mathcal{U}}
231
+ \newcommand{\cV}{\mathcal{V}}
232
+ \newcommand{\cW}{\mathcal{W}}
233
+ \newcommand{\cX}{\mathcal{X}}
234
+ \newcommand{\X}{\cX}
235
+ \newcommand{\cY}{\mathcal{Y}}
236
+ \newcommand{\cZ}{\mathcal{Z}}
237
+
238
+ \newcommand{\ba}{{\bf a}}
239
+ \newcommand{\bA}{{\bf A}}
240
+ \newcommand{\bb}{{\bf b}}
241
+ \newcommand{\bB}{{\bf B}}
242
+ \newcommand{\bc}{{\bf c}}
243
+ \newcommand{\bC}{{\bf C}}
244
+ \newcommand{\bD}{{\bf D}}
245
+ \newcommand{\bg}{{\bf g}}
246
+ \newcommand{\bG}{{\bf G}}
247
+ \newcommand{\bI}{{\bf I}}
248
+ \newcommand{\bH}{{\bf H}}
249
+ \newcommand{\bM}{{\bf M}}
250
+ \newcommand{\bO}{\boldsymbol{O}}
251
+ \newcommand{\bp}{\boldsymbol{p}}
252
+ \newcommand{\bP}{{\bf P}}
253
+ \newcommand{\br}{{\bf r}}
254
+ \newcommand{\bR}{{\bf R}}
255
+ \newcommand{\bQ}{{\bf Q}}
256
+ \newcommand{\be}{{\bf e}}
257
+ \newcommand{\bff}{{\bf f}}
258
+ \newcommand{\bi}{{\bf i}}
259
+ \newcommand{\bk}{{\bf k}}
260
+ \newcommand{\bK}{{\bf K}}
261
+ \newcommand{\bL}{{\bf L}}
262
+ \newcommand{\bs}{{\bf s}}
263
+ \newcommand{\bq}{{\bf q}}
264
+ \newcommand{\bu}{{\bf u}}
265
+ \newcommand{\bU}{{\bf U}}
266
+ \newcommand{\bv}{{\bf v}}
267
+ \newcommand{\bV}{{\bf V}}
268
+ \newcommand{\bw}{{\bf w}}
269
+ \newcommand{\bW}{{\bf W}}
270
+ \newcommand{\by}{{\bf y}}
271
+ \newcommand{\bx}{{\bf x}}
272
+ \newcommand{\bX}{{\bf X}}
273
+ \newcommand{\bZ}{{\bf Z}}
274
+
275
+ \newcommand{\eps}{\varepsilon}
276
+ \renewcommand{\epsilon}{\varepsilon}
277
+ \renewcommand{\hat}{\widehat}
278
+ \renewcommand{\tilde}{\widetilde}
279
+ \renewcommand{\bar}{\overline}
280
+
281
+ \newcommand{\balpha}{{\boldsymbol \alpha}}
282
+ \newcommand{\talpha}{\widetilde{\alpha}}
283
+ \newcommand{\btheta}{{\boldsymbol \theta}}
284
+ \newcommand{\tTheta}{{\widetilde\Theta}}
285
+ \newcommand{\bdelta}{{\boldsymbol \delta}}
286
+ \newcommand{\bDelta}{{\boldsymbol \Delta}}
287
+ \newcommand{\bLambda}{{\boldsymbol \Lambda}}
288
+ \newcommand{\bSigma}{{\boldsymbol \Sigma}}
289
+ \newcommand{\bmu}{{\boldsymbol \mu}}
290
+ \newcommand{\bxi}{{\boldsymbol \xi}}
291
+ \newcommand{\bell}{\boldsymbol \ell}
292
+
293
+ \newcommand{\nothere}[1]{}
294
+ \newcommand{\moveb}{\\ \bigskip}
295
+ \newcommand{\movebb}{\\[-0.25em]}
296
+ \newcommand{\moves}{\\ \smallskip}
297
+ \newcommand{\movess}{\\[-0.5em]}
298
+ \newcommand{\movesss}{\smallskip}
299
+
300
+ \newcommand{\hloss}{\hat\ell}
301
+ \newcommand{\bloss}{\boldsymbol \ell}
302
+ \newcommand{\hbl}{\hat{\bloss}}
303
+ \newcommand{\hbL}{\wh{\bL}}
304
+ \newcommand{\wh}{\widehat}
305
+ \newcommand{\ti}{_{t,i}}
306
+ \newcommand{\wt}{\widetilde}
307
+
308
+
309
+
310
+ \usepackage{xspace}
311
+ \renewcommand{\ttdefault}{lmtt}
312
+ \newcommand{\LP}{\texttt{LP}\xspace}
313
+ \newcommand{\CMG}{\texttt{CMG}\xspace}
314
+ \newcommand{\FPL}{\texttt{FPL}\xspace}
315
+ \newcommand{\TS}{\normalfont \texttt{TS}\xspace}
316
+ \newcommand{\UCB}{\texttt{UCB}\xspace}
317
+ \newcommand{\MOSS}{\texttt{MOSS}\xspace}
318
+ \newcommand{\UCBE}{\texttt{UCB-E}\xspace}
319
+ \newcommand{\ImprovedUCB}{\texttt{ImprovedUCB}\xspace}
320
+ \newcommand{\klucb}{\texttt{KL-UCB}\xspace}
321
+ \newcommand{\CUCB}{\texttt{CUCB}\xspace}
322
+ \newcommand{\EXP}{\texttt{Exp3}\xspace}
323
+ \newcommand{\exph}{\EXP}
324
+ \newcommand{\LinearTS}{\normalfont \texttt{LinearTS}\xspace}
325
+ \newcommand{\ThompsonSampling}{\normalfont \texttt{ThompsonSampling}\xspace}
326
+ \newcommand{\SpectralEliminator}{\normalfont \texttt{\textcolor[rgb]{0.5,0.2,0}{SpectralEliminator}}\xspace}
327
+ \newcommand{\LinearEliminator}{\normalfont \texttt{\textcolor[rgb]{0.5,0.2,0}{LinearEliminator}}\xspace}
328
+ \newcommand{\LinUCB}{\normalfont \texttt{LinUCB}\xspace}
329
+ \newcommand{\LinRel}{\texttt{LinRel}\xspace}
330
+ \newcommand{\KernelUCB}{\texttt{\textcolor[rgb]{0.5,0.2,0}{KernelUCB}}\xspace}
331
+ \newcommand{\SupKernelUCB}{\texttt{\textcolor[rgb]{0.5,0.2,0}{SupKernelUCB}}\xspace}
332
+ \newcommand{\GPUCB}{\texttt{GP-UCB}\xspace}
333
+ \newcommand{\OFUL}{\texttt{OFUL}\xspace}
334
+ \newcommand{\OPM}{\texttt{\textcolor[rgb]{0.5,0.2,0}{OPM}}\xspace}
335
+ \newcommand{\CLUB}{\texttt{CLUB}\xspace}
336
+ \newcommand{\GOBLin}{\texttt{GOB.Lin}\xspace}
337
+ \newcommand{\UCBN}{\texttt{UCB-N}\xspace}
338
+ \newcommand{\UCBmaxN}{\texttt{UCB-MaxN}\xspace}
339
+ \newcommand{\GraphMOSS}{\texttt{\textcolor[rgb]{0.5,0.2,0}{GraphMOSS}}\xspace}
340
+ \newcommand{\SpectralUCB}{\normalfont \texttt{\textcolor[rgb]{0.5,0.2,0}{SpectralUCB}}\xspace}
341
+ \newcommand{\CheapUCB}{\texttt{CheapUCB}\xspace}
342
+ \newcommand{\SpectralTS}{\texttt{\textcolor[rgb]{0.5,0.2,0}{SpectralTS}}\xspace}
343
+ \newcommand{\SupLinRel}{\normalfont \texttt{SupLinRel}\xspace}
344
+ \newcommand{\SupLinUCB}{\normalfont \texttt{SupLinUCB}\xspace}
345
+ \newcommand{\imb}{\texttt{\textcolor[rgb]{0.5,0.2,0}{IMLinUCB}}\xspace}
346
+ \newcommand{\NetBandits}{\texttt{NetBandits}\xspace}
347
+ \newcommand{\BARE}{\texttt{\textcolor[rgb]{0.5,0.2,0}{BARE}}\xspace}
348
+ \newcommand{\ELP}{\texttt{ELP}\xspace}
349
+ \newcommand{\ELPP}{\texttt{ELP.P}\xspace}
350
+ \newcommand{\expix}{\texttt{\textcolor[rgb]{0.5,0.2,0}{Exp3-IX}}\xspace}
351
+ \newcommand{\expset}{\texttt{Exp3-SET}\xspace}
352
+ \newcommand{\expdom}{\texttt{Exp3-DOM}\xspace}
353
+ \newcommand{\expg}{\texttt{Exp3.G}\xspace}
354
+ \newcommand{\fplbgr}{\texttt{FPL-BGR}\xspace}
355
+ \newcommand{\fplix}{\texttt{\textcolor[rgb]{0.5,0.2,0}{FPL-IX}}\xspace}
356
+ \newcommand{\comphedge}{\texttt{Component\-Hedge}\xspace}
357
+ \newcommand{\hedge}{\texttt{Hedge}\xspace}
358
+ \newcommand{\expxxx}{\texttt{\textcolor[rgb]{0.5,0.2,0}{Exp3-WIX}}\xspace}
359
+ \newcommand{\expwix}{\texttt{\textcolor[rgb]{0.5,0.2,0}{Exp3-WIX}}\xspace}
360
+ \newcommand{\expixa}{\texttt{Exp3-IXa}\xspace}
361
+ \newcommand{\expixb}{\texttt{Exp3-IXb}\xspace}
362
+ \newcommand{\expixt}{\texttt{Exp3-IXt}\xspace}
363
+ \newcommand{\expcoop}{\texttt{Exp3-Coop}\xspace}
364
+ \newcommand{\expres}{\texttt{\textcolor[rgb]{0.5,0.2,0}{Exp3-Res}}\xspace}
365
+
366
+ \newcommand{\StoSOO}{\texttt{\textcolor[rgb]{0.5,0.2,0}{StoSOO}}\xspace}
367
+ \newcommand{\POO}{\texttt{\textcolor[rgb]{0.5,0.2,0}{POO}}\xspace}
368
+ \newcommand{\OOB}{\normalfont\texttt{\textcolor[rgb]{0.5,0.2,0}{OOB}}\xspace}
369
+ \newcommand{\DOO}{\texttt{DOO}\xspace}
370
+ \newcommand{\SOO}{\texttt{SOO}\xspace}
371
+ \newcommand{\Zooming}{\texttt{Zooming}\xspace}
372
+ \newcommand{\UCT}{\texttt{UCT}\xspace}
373
+ \newcommand{\HCT}{\texttt{HCT}\xspace}
374
+ \newcommand{\SHOO}{\POO}
375
+ \newcommand{\HOO}{\texttt{HOO}\xspace}
376
+ \newcommand{\ATB}{\texttt{ATB}\xspace}
377
+ \newcommand{\TZ}{\texttt{TaxonomyZoom}\xspace}
378
+ \newcommand{\Direct}{\texttt{DiRect}\xspace}
379
+ \newcommand{\SiRI}{\texttt{\textcolor[rgb]{0.5,0.2,0}{SiRI}}\xspace}
380
+ \newcommand{\olop}{\texttt{OLOP}\xspace}
381
+ \newcommand{\stopalgo}{\texttt{StOP}\xspace}
382
+ \newcommand{\metagrill}{\texttt{\textcolor[rgb]{0.5,0.2,0}{\textup{TrailBlazer}}}\xspace}
383
+ \newcommand{\greedy}{\texttt{Greedy}\xspace}
384
+ \newcommand{\opm}{\texttt{\textcolor[rgb]{0.5,0.2,0}{OPM}}\xspace}
385
+ \newcommand{\pagerank}{\texttt{PageRank}\xspace}
386
+
387
+
388
+
389
+ \newcommand{\maxn}{\texttt{max}\xspace}
390
+ \newcommand{\avgn}{\texttt{avg}\xspace}
391
+
392
+
393
+ \newcommand{\reg}{\gamma}
394
+ \newcommand{\hmu}{\hat{\mu}}
395
+ \newcommand{\hw}{\hat{w}}
396
+ \newcommand{\hth}{\hat{\theta}}
397
+ \newcommand{\hs}{\hat{\sigma}}
398
+ \newcommand{\td}{\tilde{d}}
399
+
400
+ \newcommand{\etat}{\eta_t}
401
+ \newcommand{\gammat}{\gamma_t}
402
+ \newcommand{\nodes}{{\textcolor{misogreen}{N}}}
403
+ \newcommand{\rounds}{{\textcolor[rgb]{0.25,0.0,0.6}{T}}}
404
+ \newcommand{\nodeset}{\cV}
405
+ \newcommand{\edgeset}{\cE}
406
+ \newcommand{\regret}{R_\rounds}
407
+ \newcommand{\cgamma}{c_\gamma}
408
+ \newcommand{\cgammat}{c_{\gamma_t}}
409
+ \newcommand{\sumT}{\sum_{t = 1}^\rounds}
410
+ \newcommand{\sumt}{\sum_{t=1}^\rounds}
411
+ \newcommand{\sumtl}{\sum\limits_{t=1}^\rounds}
412
+ \newcommand{\sumj}{\sum_{j\in \nodes_i^-}}
413
+ \newcommand{\sumtj}{\sum_{j\in \nodes_{t,i}^-}}
414
+ \newcommand{\sumi}{\sum_{i=1}^{\nodes}}
415
+ \newcommand{\sumji}{\sum_{j\in \{\nodes_i^-\cup\{i\}\}}}
416
+ \newcommand{\sumtji}{\sum_{j\in \{\nodes_{t,i}^-\cup\{i\}\}}}
417
+ \newcommand{\dti}{d_{t,i}^-}
418
+ \newcommand{\hdi}{\hat{d}_i^-}
419
+ \newcommand{\hdk}{\hat{d}_k^-}
420
+ \newcommand{\hd}{\hat{d}^-}
421
+ \newcommand{\tti}{_{t+1,i}}
422
+ \newcommand{\tj}{_{t,j}}
423
+ \newcommand{\ji}{_{j,i}}
424
+ \newcommand{\Ii}{_{I_t,i}}
425
+ \newcommand{\pti}{p\ti}
426
+ \newcommand{\pta}{p_{t,a}}
427
+ \newcommand{\qti}{q\ti}
428
+ \newcommand{\hpti}{\hat{p}\ti}
429
+ \newcommand{\hpi}{\hat{p}_i}
430
+ \newcommand{\hp}{\hat{p}}
431
+ \newcommand{\hqti}{\hat{q}\ti}
432
+ \newcommand{\ptj}{p_{t,j}}
433
+ \newcommand{\qtj}{q_{t,j}}
434
+ \newcommand{\hptj}{\hat{p}_{t,j}}
435
+ \newcommand{\hqtj}{\hat{q}_{t,j}}
436
+ \newcommand{\oti}{o\ti}
437
+ \newcommand{\Oti}{O\ti}
438
+ \newcommand{\loss}{\ell}
439
+ \newcommand{\hLoss}{\hat{L}}
440
+ \newcommand{\hL}{\wh{L}}
441
+ \newcommand{\noise}{\xi}
442
+ \newcommand{\dold}{d_{\scriptsize\mbox{old}}}
443
+ \newcommand{\dnew}{d_{\scriptsize\mbox{new}}}
444
+ \newcommand{\gweight}{s}
445
+ \newcommand{\avgalpha}{\alpha^*_{\text{avg}}}
446
+ \newcommand{\rkdual}{r_k^{\circ}}
447
+ \newcommand{\rktdual}{r_{k,t}^{\circ}}
448
+ \newcommand{\rkprimetdual}{r_{k',t}^{\circ}}
449
+ \newcommand{\rkttdual}{r_{k,t+1}^{\circ}}
450
+ \newcommand{\rkprimettdual}{r_{k',t+1}^{\circ}}
451
+ \newcommand{\ridual}{r_i^{\circ}}
452
+ \newcommand{\rstardual}{r_\star^{\circ}}
453
+ \newcommand{\Ddual}{D^{\circ}}
454
+ \newcommand{\Ddualset}{\mathcal D^{\circ}}
455
+ \newcommand{\node}[2]{(#1,#2)}
456
+
457
+
458
+
459
+ \newcommand{\jbrequest}[1]{\begin{tabular}{l} #1 \end{tabular}}
460
+
461
+
462
+
463
+ \usepackage{xspace}
464
+ \renewcommand{\ttdefault}{lmtt}
465
+ \newcommand{\nodevec}{\texttt{node2vec}\xspace}
466
+ \newcommand{\DIM}{\texttt{Deep\,InfoMax}\xspace}
467
+ \newcommand{\deepwalk}{\texttt{DeepWalk}\xspace}
468
+ \newcommand{\BYOL}{\texttt{BYOL}\xspace}
469
+ \newcommand{\DMGI}{\texttt{DMGI}\xspace}
470
+ \newcommand{\MINE}{\texttt{MINE}\xspace}
471
+ \newcommand{\LabelProp}{\texttt{LabelProp}\xspace}
472
+
473
+
474
+
475
+ \newcommand{\GCA}{\texttt{GCA}\xspace}
476
+ \newcommand{\InfoGraph}{\texttt{InfoGraph}\xspace}
477
+ \newcommand{\BYOG}{\texttt{BGRL}\xspace}
478
+ \newcommand{\BGRL}{\texttt{BGRL}\xspace}
479
+ \newcommand{\DGI}{\texttt{DGI}\xspace}
480
+ \newcommand{\DGB}{\texttt{DGB}\xspace}
481
+ \newcommand{\SelfGNN}{\texttt{SelfGNN}\xspace}
482
+ \newcommand{\STDGI}{\texttt{ST-DGI}\xspace}
483
+ \newcommand{\graphCL}{\texttt{GraphCL}\xspace}
484
+ \newcommand{\randominit}{\texttt{Random-Init}\xspace}
485
+ \newcommand{\GRACE}{\texttt{GRACE}\xspace}
486
+ \newcommand{\GRACESUB}{\texttt{GRACE-SUBSAMPLING}\xspace}
487
+ \newcommand{\GCAlite}{\texttt{GCA-T-A}\xspace}
488
+ \newcommand{\GMI}{\texttt{GMI}\xspace}
489
+ \newcommand{\MVGRL}{\texttt{MVGRL}\xspace}
490
+
491
+ \newcommand{\WikiCS}{\texttt{WikiCS}\xspace}
492
+ \newcommand{\AmazonComputers}{\texttt{Amazon-Computers}\xspace}
493
+ \newcommand{\AmazonPhotos}{\texttt{Amazon-Photos}\xspace}
494
+ \newcommand{\CoauthorCS}{\texttt{Coauthor-CS}\xspace}
495
+ \newcommand{\CoauthorPhysics}{\texttt{Coauthor-Physics}\xspace}
496
+ \newcommand{\PPI}{\texttt{PPI}\xspace}
497
+ \newcommand{\ogbnarxiv}{\texttt{ogbn-arxiv}\xspace}
498
+
499
+ \newcommand{\defeq}{\vcentcolon=}
500
+
501
+ \newcommand{\UCBOne}{\texttt{UCB1}\xspace}
502
+ \newcommand{\UCBTwo}{\texttt{UCB2}\xspace}
503
+ \newcommand{\KLUCB}{\texttt{KL-UCB}\xspace}
504
+ \newcommand{\EBA}{\texttt{EBA}\xspace}
505
+ \newcommand{\MPA}{\texttt{MPA}\xspace}
506
+ \newcommand{\EDP}{\texttt{EDP}\xspace}
507
+ \newcommand{\SE}{\texttt{SuccessiveElimination}\xspace}
508
+ \newcommand{\ME}{\texttt{MedianElimination}\xspace}
509
+ \newcommand{\Racing}{\texttt{Racing}\xspace}
510
+ \newcommand{\LUCB}{\texttt{LUCB}\xspace}
511
+ \newcommand{\KLRacing}{\texttt{KL-Racing}\xspace}
512
+ \newcommand{\KLLUCB}{\texttt{KL-LUCB}\xspace}
513
+ \newcommand{\Track}{\texttt{Track-and-Stop}\xspace}
514
+ \newcommand{\DT}{\texttt{D-Tracking}\xspace}
515
+ \newcommand{\CT}{\texttt{C-Tracking}\xspace}
516
+ \newcommand{\EGE}{\texttt{ExponentialGapElimination}\xspace}
517
+ \newcommand{\LIL}{\texttt{lil'UCB}\xspace}
518
+ \newcommand{\SR}{\texttt{SuccessiveReject}\xspace}
519
+ \newcommand{\SHA}{\texttt{SequentialHalving}\xspace}
520
+ \newcommand{\ISHA}{\texttt{ISHA}\xspace}
521
+ \newcommand{\UGapE}{\texttt{UGapE}\xspace}
522
+ \newcommand{\ATLUCB}{\texttt{AT-LUCB}\xspace}
523
+ \newcommand{\TTTS}{\texttt{TTTS}\xspace}
524
+ \newcommand{\TCS}{\texttt{T3S}\xspace}
525
+ \newcommand{\HTTTS}{\hyperref[alg:httts]{\textcolor{red}{\texttt{H-TTTS}}}\xspace}
526
+ \newcommand{\DTTTS}{\hyperref[alg:dttts]{\textcolor{red}{\texttt{D-TTTS}}}\xspace}
527
+ \newcommand{\TCC}{\texttt{T3C}\xspace}
528
+ \newcommand{\TCCG}{\texttt{T3C-Greedy}\xspace}
529
+ \newcommand{\TTPS}{\texttt{TTPS}\xspace}
530
+ \newcommand{\TTVS}{\texttt{TTVS}\xspace}
531
+ \newcommand{\TTEI}{\texttt{TTEI}\xspace}
532
+ \newcommand{\BC}{\texttt{BC}\xspace}
533
+
534
+ \newcommand{\enc}{\mathcal{E}}
535
+
536
+
537
+
538
+
539
+
540
+
541
+
542
+
543
+ \newcommand{\LIBSVM}{\texttt{LIBSVM}\xspace}
544
+ \newcommand{\Scikit}{\texttt{scikit-learn}\xspace}
545
+ \newcommand{\TensorFlow}{\texttt{TensorFlow}\xspace}
546
+ \newcommand{\Theano}{\texttt{Theano}\xspace}
547
+ \newcommand{\PyTorch}{\texttt{PyTorch}\xspace}
548
+ \newcommand{\Spearmint}{\texttt{Spearmint}\xspace}
549
+ \newcommand{\BoTorch}{\texttt{BoTorch}\xspace}
550
+ \newcommand{\RoBO}{\texttt{RoBO}\xspace}
551
+ \newcommand{\flow}{\texttt{GPflowOpt}\xspace}
552
+
553
+
554
+ \newcommand{\pdf}{\texttt{pdf}\xspace}
555
+ \newcommand{\cdf}{\texttt{cdf}\xspace}
556
+ \newcommand{\erf}{\texttt{erf}\xspace}
557
+ \newcommand{\MLE}{\texttt{MLE}\xspace}
558
+ \newcommand{\MAP}{\texttt{MAP}\xspace}
559
+
560
+
561
+
562
+
563
+ \usepackage{iclr2022_conference,times}
564
+ \iclrfinalcopy
565
+ \renewcommand{\ttdefault}{lmtt}
566
+ \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{hyperref} \usepackage{url} \usepackage{booktabs} \usepackage{amsfonts} \usepackage{nicefrac} \usepackage{microtype} \usepackage{xcolor}
567
+
568
+ \usepackage{caption}
569
+
570
+
571
+ \title{Large-Scale Representation Learning on Graphs via Bootstrapping}
572
+
573
+ \author{Shantanu Thakoor\thanks{Correspondence to: Shantanu Thakoor <thakoor@google.com>.} \\
574
+ DeepMind\\
575
+ \And
576
+ Corentin Tallec\\
577
+ DeepMind\\
578
+ \And
579
+ Mohammad Gheshlaghi Azar\\
580
+ DeepMind\\
581
+ \And
582
+ Mehdi Azabou\\
583
+ Georgia Institute of Technology\\
584
+ \And
585
+ Eva Dyer\\
586
+ Georgia Institute of Technology\\
587
+ \And
588
+ R\'emi Munos\\
589
+ DeepMind\\
590
+ \And
591
+ Petar Veli\v{c}kovi\'{c}\\
592
+ DeepMind\\
593
+ \And
594
+ Michal Valko\\
595
+ DeepMind\\
596
+ }
597
+
598
+ \begin{document}
599
+
600
+ \maketitle
601
+
602
+ \begin{abstract}
603
+
604
+
605
+
606
+ Self-supervised learning provides a promising path towards eliminating the need for
607
+ costly label information in representation learning on graphs. However, to achieve state-of-the-art performance, methods often need large numbers of negative examples and rely on complex augmentations. This can be prohibitively expensive, especially for large graphs. To address these challenges, we introduce Bootstrapped Graph Latents (\BGRL), a graph representation learning method that learns by predicting alternative augmentations of the input. \BGRL uses only simple augmentations, alleviates the need for contrasting with negative examples, and is thus \textit{scalable} by design. \BGRL outperforms or matches prior methods on several established benchmarks, while achieving a 2-10x reduction in memory costs.
608
+ Furthermore, we show that \BGRL can be scaled up to extremely large graphs with hundreds of millions of nodes in the semi-supervised regime, achieving state-of-the-art performance and improving over supervised baselines where representations are shaped only through label information. In particular, our solution centered on \BGRL constituted one of the winning entries to the Open Graph Benchmark Large-Scale Challenge at \textit{KDD Cup 2021}, on a graph orders of magnitude larger than all previously available benchmarks, thus demonstrating the scalability and effectiveness of our approach.
609
+
610
+
611
+
612
+
613
+
614
+ \end{abstract}
615
+
616
+
617
+
618
+
619
+ \section{Introduction}
620
+ \label{sec:introduction}
621
+
622
+ Graphs provide a powerful abstraction for complex datasets that arise in a variety of applications such as social networks, transportation networks, and biological sciences \citep{hamilton2018inductive,google_maps,ppiZitnik_2017,catalyst_dataset}. Despite recent advances in graph neural networks (GNNs), when trained with supervised data alone, these networks can easily overfit and may fail to generalize \citep{dropedge}. Thus, finding ways to form simplified representations of graph-structured data without labels is an important yet unsolved challenge.
623
+
624
+
625
+
626
+
627
+
628
+
629
+
630
+
631
+
632
+
633
+
634
+
635
+
636
+
637
+
638
+
639
+
640
+ Current state-of-the-art methods for unsupervised representation learning on graphs \citep{dgi,gmi,mvgrl,grace,grace_adaptive,graphCL} are {\em contrastive}. These methods work by pulling together representations of related objects and pushing apart representations of unrelated ones. For example, the current best methods, \citet{grace} and \citet{grace_adaptive}, learn node representations by creating
641
+ two augmented versions of a graph, pulling together the representation of the same node in the two
642
+ graphs, while pushing apart \textit{every other node pair}.
643
+ As such, they inherently rely on the ability to compare each object to a large
644
+ number of \textit{negatives}.
645
+ In the absence of a principled way of choosing these negatives, this can require computation and memory quadratic in the number of nodes.
646
+ In many cases, the generation of a large number of negatives poses a prohibitive cost, especially for large graphs.
647
+
648
+ In this paper, we introduce a scalable approach for self-supervised representation learning on graphs called
649
+ \emph{Bootstrapped Graph Latents} (\BGRL). Inspired by recent advances in self-supervised learning in vision~\citep{grill2020bootstrap}
650
+ , \BGRL learns node representations by encoding
651
+ two augmented versions of a graph using two distinct graph encoders: an online encoder, and a target encoder. The online encoder is trained through predicting the representation of the target encoder, while the target encoder is updated as an exponential moving average of
652
+ the online network.
653
+ Critically, \BGRL does not require contrasting negative examples, and thus can scale easily to very large graphs.
654
+
655
+ Our main contributions are:
656
+
657
+ \begin{itemize}
658
+
659
+ \item We introduce Bootstrapped Graph Latents (\BGRL), a graph self-supervised learning method that effectively scales to extremely large graphs and outperforms existing methods, while using only simple graph augmentations and not requiring negative examples (Section~\ref{sec:method}).
660
+
661
+ \item We show that contrastive methods face a trade-off between peak performance and memory constraints, due to their reliance on negative examples (Section~\ref{sec:subsampling_quadratic}).
662
+ Because its time and space complexity scales only \textit{linearly} in the size of the input, \BGRL avoids the
663
+ performance-memory trade-off inherent to contrastive methods altogether. \BGRL
664
+ provides performance competitive with the best contrastive methods, while using 2-10x less memory on standard benchmarks (Section~\ref{sec:theory_computation}).
665
+
666
+ \item We show that leveraging the scalability of \BGRL allows us to make full use of the \textit{vast amounts of unlabeled data} present in large graphs via semi-supervised learning. In particular, we find that efficient use of unlabeled data for representation learning prevents representations from overfitting to the classification task, and achieves significantly higher, state-of-the-art performance. This was critical to the success of our solution at \emph{KDD Cup 2021}, where our \BGRL-based entry was named one of the \emph{winners} on the largest publicly available graph dataset, a 360GB graph consisting of 240 million nodes and 1.7 billion edges (Section~\ref{sec:ogb_lsc_exps}).
667
+
668
+
669
+ \end{itemize}
670
+
671
+
672
+ \section{Bootstrapped Graph Latents}
673
+ \label{sec:method}
674
+
675
+ \begin{figure}[t]
676
+ \centering
677
+ \captionsetup{font=small}
678
+ \scalebox{0.65}{
679
+ \begin{tikzpicture}
680
+ \node[circle, thick, draw] (o0) {$\vec{x}$};
681
+ \node[circle, thick, draw, above right=0.1em and 3em of o0] (o1) {};
682
+ \node[circle, thick, draw, above right=0.8em and 0.5em of o0] (o2) {};
683
+ \node[circle, thick, draw, left=of o0] (o3) {};
684
+ \node[circle, thick, draw, below left=0.8em and 1.5em of o0] (o4) {};
685
+ \node[circle, thick, draw, below right=0.8em and 3.3em of o0] (o5) {};
686
+
687
+ \draw[-, thick] (o5) -- (o1);
688
+ \draw[-, thick] (o0) -- (o2);
689
+ \draw[-, thick] (o0) -- (o3);
690
+ \draw[-, thick] (o0) -- (o4);
691
+ \draw[-, thick] (o0) -- (o5);
692
+ \draw[-, thick] (o4) -- (o3);
693
+ \draw[-, thick] (o5) -- (o2);
694
+
695
+ \node[rectangle, draw, dashed, minimum width=11em, minimum height=5.5em, below=0mm of o0.center, anchor=center] (oR) {};
696
+ \node[above=0em of oR] (l1) {$({\bf X}, {\bf A})$};
697
+
698
+ \node[circle, thick, draw, above right = 2em and 14em of o0] (0) {$\vec{\widetilde{x}}_1$};
699
+ \node[circle, thick, draw, above right=0.1em and 3em of 0] (1) {};
700
+ \node[circle, thick, draw, above right=0.8em and 0.5em of 0] (2) {};
701
+ \node[circle, thick, draw, left=of 0] (3) {};
702
+ \node[circle, thick, draw, below left=0.8em and 1.5em of 0] (4) {};
703
+ \node[circle, thick, draw, below right=0.8em and 3.3em of 0] (5) {};
704
+
705
+
706
+ \draw[-, thick] (0) -- (1);
707
+ \draw[-, thick] (0) -- (2);
708
+ \draw[-, thick] (0) -- (3);
709
+ \draw[-, thick] (4) -- (3);
710
+
711
+ \node[rectangle, draw, dashed, minimum width=11em, minimum height=5.5em, below=0mm of 0.center, anchor=center] (RR) {};
712
+ \node[above=0em of RR] (l1) {$({\bf \widetilde{X}_1}, {\bf \widetilde{A}_1})$};
713
+
714
+ \node[circle, thick, draw, below=3.7em of 0] (00) {$\vec{\widetilde{x}}_2$};
715
+ \node[circle, thick, draw, above right=0.1em and 3em of 00] (01) {};
716
+ \node[circle, thick, draw, above right=0.8em and 0.5em of 00] (02) {};
717
+ \node[circle, thick, draw, below left=0.8em and 1.5em of 00] (03) {};
718
+ \node[circle, thick, draw, below right=0.8em and 3.3em of 00] (04) {};
719
+ \node[circle, thick, draw, left=of 00] (05) {};
720
+
721
+ \draw[-, thick] (00) -- (01);
722
+ \draw[-, thick] (00) -- (02);
723
+ \draw[-, thick] (00) -- (03);
724
+ \draw[-, thick] (00) -- (04);
725
+ \draw[-, thick] (03) -- (05);
726
+
727
+ \node[rectangle, draw, dashed, minimum width=11em, minimum height=5.5em, below=0mm of 00.center, anchor=center] (RR2) {};
728
+ \node[below=0em of RR2] (l2) {$({\bf \widetilde{X}_2}, {\bf \widetilde{A}_2})$};
729
+
730
+ \draw[very thick] (oR.east) edge[decoration={snake, pre length=0.01mm, segment length=2mm, amplitude=0.3mm, post length=1.5mm}, decorate,-stealth] node[above=0.19em] (CC) {$\mathcal{T}_1$} (RR.west);
731
+ \draw[very thick] (oR.east) edge[decoration={snake, pre length=0.01mm, segment length=2mm, amplitude=0.3mm, post length=1.5mm}, decorate,-stealth] node[below] (CC) {$\mathcal{T}_2$} (RR2.west);
732
+
733
+ \node[rectangle, thick, draw, minimum width=2em, minimum height=2em, right=13em of 0] (0) {$\vec{\widetilde{h}}_1$};
734
+ \node[rectangle, thick, draw, above right=0.1em and 3em of 0] (1) {};
735
+ \node[rectangle, thick, draw, above right= 0.8em and 0.5em of 0] (2) {};
736
+ \node[rectangle, thick, draw, left=of 0] (3) {};
737
+ \node[rectangle, thick, draw, below left=0.8em and 1.5em of 0] (4) {};
738
+ \node[rectangle, thick, draw, below right=0.8em and 3.3em of 0] (5) {};
739
+
740
+ \draw[-, thick] (0) -- (1);
741
+ \draw[-, thick] (0) -- (2);
742
+ \draw[-, thick] (0) -- (3);
743
+ \draw[-, thick] (4) -- (3);
744
+
745
+ \node[rectangle, draw, dashed, minimum width=11em, minimum height=5.5em, below=0mm of 0.center, anchor=center] (AA) {};
746
+ \node[above=0em of AA] (l1) {$({\bf \widetilde{H}_1}, {\bf \widetilde{A}_1})$};
747
+
748
+ \node[rectangle, thick, draw, minimum width=2em, minimum height=2em, below=4.1em of 0] (00) {$\vec{\widetilde{h}}_2$};
749
+ \node[rectangle, thick, draw, above right=0.1em and 3em of 00] (01) {};
750
+ \node[rectangle, thick, draw, above right=0.8em and .5em of 00] (02) {};
751
+ \node[rectangle, thick, draw, below left=0.8em and 1.5em of 00] (03) {};
752
+ \node[rectangle, thick, draw, below right=0.8em and 3.3em of 00] (04) {};
753
+ \node[rectangle, thick, draw, left=of 00] (05) {};
754
+
755
+ \draw[-, thick] (00) -- (01);
756
+ \draw[-, thick] (00) -- (02);
757
+ \draw[-, thick] (00) -- (03);
758
+ \draw[-, thick] (00) -- (04);
759
+ \draw[-, thick] (03) -- (05);
760
+
761
+ \node[rectangle, draw, dashed, minimum width=11em, minimum height=5.5em, below=0mm of 00.center, anchor=center] (AA2) {};
762
+ \node[below=0em of AA2] (l2) {$({\bf \widetilde{H}_2}, {\bf \widetilde{A}}_2)$};
763
+
764
+ \draw[-stealth, very thick, decoration={snake, pre length=0.01mm, segment length=2mm, amplitude=0.3mm, post length=1.5mm}, decorate,] (RR) -- node[above] {$\mathcal{E}_{\theta}$} (AA);
765
+
766
+ \draw[-stealth, very thick, decoration={snake, pre length=0.01mm, segment length=2mm, amplitude=0.3mm, post length=1.5mm}, decorate,] (RR2) -- node[below] {$\mathcal{E}_{\phi}$} (AA2);
767
+
768
+ \node[rectangle, thick, draw, minimum width=2em, minimum height=2em, right=11em of 0] (p0) {$\vec{\widetilde{z}}_1$};
769
+ \node[rectangle, thick, draw, above right=0.1em and 3em of p0] (p1) {};
770
+ \node[rectangle, thick, draw, above right= 0.8em and 0.5em of p0] (p2) {};
771
+ \node[rectangle, thick, draw, left=of p0] (p3) {};
772
+ \node[rectangle, thick, draw, below left=0.8em and 1.5em of p0] (p4) {};
773
+ \node[rectangle, thick, draw, below right=0.8em and 3.3em of p0] (p5) {};
774
+
775
+ \draw[-, thick] (p0) -- (p1);
776
+ \draw[-, thick] (p0) -- (p2);
777
+ \draw[-, thick] (p0) -- (p3);
778
+ \draw[-, thick] (p4) -- (p3);
779
+
780
+ \node[rectangle, draw, dashed, minimum width=11em, minimum height=5.5em, below=0mm of p0.center, anchor=center] (ZZ) {};
781
+ \node[above=0em of ZZ] (l1) {$({\bf \widetilde{Z}_1}, {\bf \widetilde{A}_1})$};
782
+
783
+ \draw[-stealth, very thick, decoration={snake, pre length=0.01mm, segment length=2mm, amplitude=0.3mm, post length=1.5mm}, decorate,] (AA) -- node[above] {$p_{\theta}$} (ZZ);
784
+
785
+ \coordinate[below=1em of $ (RR) !.5! (AA)$.below] (aux1);
786
+ \coordinate[above=1em of $ (RR2) !.5! (AA2)$.above] (aux2);
787
+ \draw[-stealth, very thick] (aux1) -- node[fill=white]{EMA} (aux2);
788
+
789
+ \node[right=7em of 00, rectangle, draw, thick] (L2) {$ -\frac{2}{N}\sum\limits_{i=0}^{N - 1}\frac{\widetilde{\bZ}_{(1, i)} \widetilde{\bH}_{(2, i)}^{\top}}{\|\widetilde{\bZ}_{(1, i)}\| \|\widetilde{\bH}_{(2, i)}\|}$};
790
+
791
+ \draw[-stealth, very thick] (ZZ) -- (L2);
792
+ \draw[-stealth, very thick] (AA2) -- node{$\|$} (L2);
793
+
794
+
795
+ \end{tikzpicture}
796
+ }
797
+
798
+ \vspace{-.5em}
799
+ \caption{Overview of our proposed \BGRL method. The original graph is first used to derive two different semantically similar views using augmentations $\mathcal{T}_{1,2}$. From these, we use encoders $\enc_{\theta, \phi}$ to form online and target node embeddings. The predictor $p_\theta$ uses the online embedding $\widetilde{\bH}_1$ to form a prediction $\widetilde{\bZ}_1$ of the target embedding $\widetilde{\bH}_2$. The final objective is then computed as the cosine similarity between $\widetilde{\bZ}_1$ and $\widetilde{\bH}_2$, flowing gradients only through $\widetilde{\bZ}_1$. The target parameters $\phi$ are updated as an exponential moving average of $\theta$.}
800
+ \label{tikz:BGRL}
801
+ \vspace{-.5em}
802
+ \end{figure}
803
+
804
+
805
+
806
+
807
+
808
+ \subsection{\BGRL Components}
809
+
810
+
811
+
812
+ \BGRL builds representations through the use of two graph encoders, an online encoder $\enc_\theta$ and a target encoder $\enc_\phi$, where~$\theta$ and $\phi$
813
+ denote two distinct sets of parameters.
814
+ We consider a graph $\bG = (\bX, \bA)$, with \emph{node
815
+ features} $\bX \in \mathbb{R}^{N \times F}$ and
816
+ \emph{adjacency matrix} $\bA \in \mathbb{R}^{N \times N}$.
817
+ \BGRL first produces two alternate views of $\bG$:
818
+ $\bG_1 = (\widetilde{\bX}_1,\widetilde{\bA}_1)$ and $\bG_2 = (\widetilde{\bX}_2, \widetilde{\bA}_2)$, by applying stochastic graph augmentation functions $\mathcal{T}_1$ and $\mathcal{T}_2$ respectively.
819
+ The online encoder produces an online representation from the first augmented graph, $\widetilde{\bH}_1 \coloneqq \enc_\theta(\widetilde{\bX}_1, \widetilde{\bA}_1)$; similarly the target encoder produces
820
+ a target representation of the second augmented graph,
821
+ $\widetilde{\bH}_2 \coloneqq \enc_\phi(\widetilde{\bX}_2, \widetilde{\bA}_2)$.
822
+ The online representation is fed into a node-level predictor $p_\theta$ that outputs a prediction of the target representation, $\widetilde{\bZ}_1 \coloneqq
823
+ p_\theta(\widetilde{\bH}_1)$.
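As a minimal illustration of this pipeline (a sketch only, not the official implementation; \texttt{augment}, \texttt{online\_encoder}, \texttt{target\_encoder}, and \texttt{predictor} are assumed names), one forward pass can be written as:
\begin{verbatim}
import torch

def bgrl_forward(x, edge_index, augment, online_encoder, target_encoder, predictor):
    # Two stochastically augmented views of the same graph (T_1 and T_2).
    x1, e1 = augment(x, edge_index)
    x2, e2 = augment(x, edge_index)
    h1 = online_encoder(x1, e1)          # online representation H_1
    with torch.no_grad():
        h2 = target_encoder(x2, e2)      # target representation H_2 (no gradients)
    z1 = predictor(h1)                   # prediction Z_1 of the target representation
    return z1, h2
\end{verbatim}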
824
+
825
+
826
+ \BGRL differs from prior bootstrapping approaches such as \BYOL~\citep{grill2020bootstrap} in that it \emph{does not use a projector network}.
827
+ Unlike vision tasks, in which a projection step is used by \BYOL for dimensionality reduction,
828
+ common embedding sizes are quite small for graph tasks and so this is not a concern in our case. In fact, we observe that this step can be eliminated altogether without loss in performance (Appendix~\ref{sec:projector_ablations}).
829
+
830
+ The augmentation functions $\mathcal{T}_{1}$ and $\mathcal{T}_2$ used are simple, standard graph perturbations previously explored~\citep{graphCL,grace}. We use a combination of random \textbf{node feature masking} and \textbf{edge masking} with fixed masking probabilities $p_f$ and $p_e$, respectively. More details and background on graph augmentations are provided in Appendix~\ref{sec:augmentation_details}.
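As an illustrative sketch only (assuming a \PyTorch Geometric-style \texttt{edge\_index} in COO format; the exact augmentation code used in our experiments may differ), these two perturbations can be written as:
\begin{verbatim}
import torch

def augment(x, edge_index, p_f=0.2, p_e=0.3):
    # Node feature masking: zero out each feature dimension with probability p_f.
    feat_mask = (torch.rand(x.size(1), device=x.device) > p_f).to(x.dtype)
    x_aug = x * feat_mask
    # Edge masking: drop each edge independently with probability p_e.
    keep = torch.rand(edge_index.size(1), device=edge_index.device) > p_e
    return x_aug, edge_index[:, keep]
\end{verbatim}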
831
+
832
+
833
+
834
+ \subsection{\BGRL update step}
835
+
836
+ \paragraph{Updating the online encoder $\enc_\theta$:}
837
+ The online parameters $\theta$ (and not $\phi$) are updated to make the
838
+ predicted target representations $\widetilde{\bZ}_1$ closer to the true target
839
+ representations $\widetilde{\bH}_2$ for each node, by following the gradient of the cosine similarity w.r.t. $\theta$, i.e.,
840
+ \begin{equation}
841
+ \ell(\theta, \phi) = -\frac{2}{N}\sum\limits_{i=0}^{N - 1}\frac{\widetilde{\bZ}_{(1, i)} \widetilde{\bH}_{(2, i)}^{\top}}{\|\widetilde{\bZ}_{(1, i)}\| \|\widetilde{\bH}_{(2, i)}\|}
842
+ \end{equation}
843
+
844
+ \begin{equation}
845
+ \theta \leftarrow \text{optimize}(\theta,~\eta,~\partial_\theta \ell(\theta, \phi)),
846
+ \end{equation}
847
+ where $\eta$ is the learning rate and the final updates are
848
+ computed from the gradients of the objective with respect to $\theta$
849
+ \textit{only}, using an optimization method such as SGD or Adam \citep{adam}.
850
+ In practice, we symmetrize this loss by also predicting the target representation of the first view from the online representation of the second.
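A minimal sketch of the loss above, assuming \PyTorch and row-wise node embeddings (the symmetrized version simply adds the analogous term with the two views swapped):
\begin{verbatim}
import torch.nn.functional as F

def bgrl_loss(z1, h2):
    # z1: predictions Z_1 from the online branch; h2: target embeddings H_2; both (N, D).
    # Gradients flow only through z1; the target branch is detached.
    return -2.0 * F.cosine_similarity(z1, h2.detach(), dim=-1).mean()
\end{verbatim}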
851
+
852
+ \paragraph{Updating the target encoder $\enc_\phi$:}
853
+ The target parameters $\phi$ are updated as an exponential moving average of the
854
+ online parameters $\theta$, using a decay rate $\tau$, i.e.,
855
+ \begin{equation}
856
+ \phi \leftarrow \tau \phi + (1 - \tau) \theta.
857
+ \end{equation}
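A minimal sketch of this exponential moving average update, assuming the two encoders are \PyTorch modules with identically ordered parameters:
\begin{verbatim}
import torch

@torch.no_grad()
def update_target(online_encoder, target_encoder, tau=0.99):
    # phi <- tau * phi + (1 - tau) * theta, applied parameter-wise.
    for theta, phi in zip(online_encoder.parameters(), target_encoder.parameters()):
        phi.data.mul_(tau).add_(theta.data, alpha=1.0 - tau)
\end{verbatim}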
858
+ Figure~\ref{tikz:BGRL} visually summarizes \BGRL's architecture.
859
+
860
+ Note that although the objective $\ell(\theta, \phi)$ has undesirable or trivial solutions,
861
+ \BGRL does not actually optimize this loss. Only the online parameters $\theta$ are updated to
862
+ reduce this loss, while the target parameters $\phi$ follow a different objective.
863
+ This non-collapsing behavior even without relying on negatives has been studied further~\citep{noncollapse_theory}.
864
+ We provide an empirical analysis of this behavior in Appendix~\ref{sec:appendix_byol_nontrivial}, showing that in practice \BGRL does not collapse to trivial solutions
865
+ and $\ell(\theta, \phi)$ does not converge to $0$.
866
+
867
+
868
+ \paragraph{Scalable non-contrastive objective:}
869
+ Here we note that a contrastive approach would instead encourage $\widetilde{\bZ}_{(1, i)}$ and $\widetilde{\bH}_{(2, j)}$ to be far apart for node pairs $(i, j)$ that are dissimilar.
870
+ In the absence of a principled way of choosing such dissimilar pairs, the na\"ive approach of simply contrasting \textit{all pairs} $\{(i, j) \mid i \neq j\}$ scales \textit{quadratically} in the size of the input.
871
+ As \BGRL does not rely on this contrastive step, \BGRL scales \textit{linearly} in the size of the graph, and thus is scalable by design.
872
+
873
+
874
+
875
+
876
+ \section{Computational Complexity Analysis}
877
+ \label{sec:theory_computation}
878
+ We provide a brief description of the time and space complexities of the \BGRL update step, and illustrate its advantages compared to previous strong contrastive methods such as \GRACE~\citep{grace}, which perform a quadratic all-pairs contrastive computation at each update step. The same analysis applies to variations of the \GRACE method such as \GCA~\citep{grace_adaptive}.
879
+
880
+ Consider a graph with $N$ nodes and $M$ edges, and simple encoders $\enc$ that compute embeddings in time and space $\mathcal{O}(N+M)$. This property is satisfied by most popular GNN architectures such as convolutional \citep{gcnkipf}, attentional \citep{gat}, or message-passing \citep{mpnn} networks.
881
+ \BGRL performs four encoder computations per update step (each of the online and target encoders is applied to both augmented views, due to the symmetrized loss) plus a node-level prediction step; \GRACE performs two encoder computations (once for each augmentation), plus a node-level projection step.
882
+ Both methods backpropagate the learning signal twice (once for each augmentation), and we assume the backward pass to be approximately as costly as a forward pass. We ignore the cost of computing the augmentations in this analysis.
883
+ Thus the total time and space complexity per update step for \BGRL is
884
+ {$6C_{\rm encoder}(M+N) + 4C_{\rm prediction}N + {\color{blue}C_{\rm BGRL}N}$}, compared to
885
+ {$4C_{\rm encoder}(M+N) + 4C_{\rm projection}N + {\color{red}C_{\rm GRACE}N^2}$} for \GRACE, where $C_{\cdot}$ are constants depending on architecture of the different components.
886
+ Table~\ref{tab:computation} shows an empirical comparison of \BGRL and \GRACE's computational requirements on a set of benchmark tasks, with further details in Appendix~\ref{sec:memory_scatterplot}.
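As a rough back-of-the-envelope illustration of the $N^2$ versus $N$ terms (counting only the similarity buffer in float32; this does not reproduce the measurements in Table~\ref{tab:computation}, which also include activations and gradients):
\begin{verbatim}
# Memory for the similarity terms alone, in float32 (4 bytes each).
N = 34_493                      # e.g. Coauthor-Physics
grace_bytes = 4 * N * N         # all-pairs similarities: O(N^2)
bgrl_bytes = 4 * N              # one similarity per node: O(N)
print(f"GRACE: {grace_bytes / 2**30:.1f} GiB, BGRL: {bgrl_bytes / 2**10:.0f} KiB")
\end{verbatim}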
887
+
888
+
889
+
890
+
891
+
892
+
893
+ \begin{table}[ht]
894
+ \captionsetup{font=small}
895
+
896
+ \centering
897
+ \small
898
+ \begin{tabular}{l|l|l|l|l|l}
899
+ \hline
900
+ Dataset & Amazon Photos & WikiCS & Amazon Computers & Coauthor CS & Coauthor Phy \\
901
+ \hline
902
+ \#Nodes & 7,650 & 11,701 & 13,752 & 18,333 & 34,493 \\
903
+ \#Edges & 119,081 & 216,123 & 245,861 & 81,894 & 247,962 \\
904
+ \hline
905
+ \GRACE Memory & 1.81 GB & 3.82 GB & 5.14 GB & 11.78 GB & OOM \\
906
+ \BGRL Memory & \textbf{0.47 GB} & \textbf{0.63 GB} & \textbf{0.58 GB} & \textbf{2.86 GB} & \textbf{5.50 GB} \\
907
+ \hline
908
+ \end{tabular}
909
+ \vspace{1em}
910
+ \caption{Comparison of computational requirements on a set of standard benchmark graphs. OOM indicates running out of memory on a 16GB V100 GPU.}
911
+ \label{tab:computation}
912
+ \end{table}
913
+
914
+
915
+
916
+
917
+ \section{Experimental Analysis}
918
+ \label{sec:experiments}
919
+ We present an extensive empirical study of performance and scalability, showing that
920
+ \BGRL is effective across a wide range of settings from frozen linear evaluation to semi-supervised learning, and both when performing full-graph training and training on subsampled node neighborhoods. We give results across a range of dataset scales and encoder architectures including convolutional, attentional, and message-passing neural networks.
921
+
922
+ We analyze the performance of \BGRL on a set of 7 standard transductive and inductive benchmark tasks, as well as in the very high-data regime by evaluating on the MAG240M dataset~\citep{hu2021ogblsc}. We present results on medium-sized datasets where contrastive objectives can be computed on the entire graph (Section~\ref{sec:full_quadratic}), on larger datasets where this objective must be approximated (Section~\ref{sec:subsampling_quadratic}), and finally on the much larger MAG240M dataset designed to test scalability limits (Section~\ref{sec:ogb_lsc_exps}), showing that \BGRL improves performance across all scales of datasets.
923
+ In Appendix~\ref{sec:cora_exps}, we show that \BGRL achieves state-of-the-art performance even in the low-data regime on a set of 4 small-scale datasets. Dataset sizes are summarized in Table~\ref{tab:dataset_stats} and described further in Appendix~\ref{sec:appendix_dataset_details}.
924
+
925
+
926
+ \paragraph{Evaluation protocol:}
927
+ In most tasks, we follow the standard linear-evaluation protocol on graphs~\citep{dgi}. This involves first training each graph encoder in a fully unsupervised manner and computing embeddings for each node; a simple linear model is then trained on top of these frozen embeddings through a logistic regression loss with $\ell_2$ regularization, without flowing any gradients back to the graph encoder network.
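A minimal sketch of this protocol using \Scikit (the regularization strength and other settings here are illustrative, not the exact values used in our experiments):
\begin{verbatim}
from sklearn.linear_model import LogisticRegression

def linear_evaluation(embeddings, labels, train_idx, test_idx, l2_strength=1.0):
    # embeddings: frozen (N, D) node embeddings from the trained encoder.
    clf = LogisticRegression(penalty="l2", C=1.0 / l2_strength, max_iter=1000)
    clf.fit(embeddings[train_idx], labels[train_idx])
    return clf.score(embeddings[test_idx], labels[test_idx])   # accuracy
\end{verbatim}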
928
+ In the more challenging MAG240M task, we extend \BGRL to the semi-supervised setting by combining our self-supervised representation learning loss with a supervised loss. We show that \BGRL's bootstrapping objective obtains state-of-the-art performance in this hybrid setting, and even improves further when additional unlabeled data is used for representation learning, properties that have not been demonstrated by prior work on self-supervised representation learning on graphs.
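As a hedged sketch of how the two losses can be combined in this setting (\texttt{lambda\_ssl} and the variable names are assumptions for illustration, not the exact MAG240M configuration):
\begin{verbatim}
import torch.nn.functional as F

def semi_supervised_loss(logits, labels, labeled_idx, z1, h2, lambda_ssl=1.0):
    # Supervised cross-entropy on labeled nodes, plus the bootstrapping term on all nodes.
    supervised = F.cross_entropy(logits[labeled_idx], labels[labeled_idx])
    bootstrap = -2.0 * F.cosine_similarity(z1, h2.detach(), dim=-1).mean()
    return supervised + lambda_ssl * bootstrap
\end{verbatim}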
929
+
930
+
931
+ Implementation details including model architectures and hyperparameters are provided in Appendix~\ref{sec:appendix_implementation}.
932
+ Algorithm implementation and experiment code for most tasks
933
+ can be found at \href{https://github.com/nerdslab/bgrl}{https://github.com/nerdslab/bgrl} while code for our solution on MAG240M has been open-sourced
934
+ as part of the \textit{KDD Cup 2021}~\citep{deepmind_ogb_report} at \href{https://github.com/deepmind/deepmind-research/tree/master/ogb\_lsc/mag}{https://github.com/deepmind/deepmind-research/tree/master/ogb\_lsc/mag}.
935
+
936
+
937
+ \begin{table}
938
+ \small
939
+ \captionsetup{font=small}
940
+ \centering
941
+ \begin{tabular}{lrrrrr}
942
+ \hline
943
+ & \textbf{Task} & \textbf{Nodes} & \textbf{Edges} & \textbf{Features} & \textbf{Classes} \\
944
+ \hline
945
+ \textbf{WikiCS} & Transductive & 11,701 & 216,123 & 300 & 10\\
946
+ \textbf{Amazon Computers} & Transductive & 13,752 & 245,861 & 767 & 10\\
947
+ \textbf{Amazon Photos} & Transductive & 7,650 & 119,081 & 745 & 8\\
948
+ \textbf{Coauthor CS} & Transductive & 18,333 & 81,894 & 6,805 & 15\\
949
+ \textbf{Coauthor Physics} & Transductive & 34,493 & 247,962 & 8,415 & 5\\
950
+ \textbf{ogbn-arxiv} &Transductive & 169,343 & 1,166,243 & 128 & 40\\
951
+ \textbf{PPI (24 graphs)} & Inductive & 56,944 & 818,716 & 50 & 121 (multilabel)\\
952
+ \textbf{MAG240M} & Transductive & 244,160,499 & 1,728,364,232 & 768 & 153 \\
953
+ \hline
954
+
955
+ \end{tabular}
956
+
957
+ \label{sec:small_datasets}
958
+
959
+ \caption{Statistics of datasets used in our experiments.}
960
+ \label{tab:dataset_stats}
961
+ \end{table}
962
+ \subsection{Performance and efficiency gains when scalability is not a bottleneck}
963
+ \label{sec:full_quadratic}
964
+
965
+ We first evaluate our method on a set of 5 recent real-world datasets
966
+ --- WikiCS, Amazon-Computers, Amazon-Photos, Coauthor-CS, Coauthor-Physics ---
967
+ in the transductive setting. Note that these are challenging medium-scale datasets specifically proposed for rigorous evaluation of semi-supervised node classification methods \citep{wikicsDataset, pitfallsshchur2019}, but are almost all small enough that contrastive approaches such as \GRACE~\citep{grace} can compute their quadratic objective exactly. Thus, these experiments present a comparison of \BGRL with prior methods in the idealized case where scalability is not a bottleneck. We show that even in this steelmanned setting, our method outperforms or matches prior methods while requiring only a fraction of the memory cost.
968
+
969
+
970
+
971
+ \label{sec:small_results}
972
+ We primarily compare \BGRL against \GRACE, a recent strong contrastive representation learning method on graphs. We also report performances for other commonly used self-supervised graph methods from previously published results \citep{deepwalk,dgi,gmi,mvgrl,grace_adaptive}, as well as \randominit~\citep{dgi}, a baseline using embeddings from a randomly initialized encoder, thus measuring the quality of the inductive biases present in the encoder model.
973
+ We use a 2-layer GCN model \citep{gcnkipf} as our graph encoder $\enc$, and closely follow models, architectures, and graph-augmentation settings used in prior works \citep{grace_adaptive, dgi, grace}.
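A minimal sketch of such an encoder, assuming \PyTorch Geometric (normalization layers and the exact hidden sizes used in our experiments are omitted here):
\begin{verbatim}
import torch.nn as nn
from torch_geometric.nn import GCNConv

class GCNEncoder(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)
        self.act = nn.PReLU()

    def forward(self, x, edge_index):
        # Two rounds of graph convolution with a nonlinearity in between.
        return self.conv2(self.act(self.conv1(x, edge_index)), edge_index)
\end{verbatim}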
974
+
975
+
976
+
977
+ \begin{table}[!htb]
978
+ \captionsetup{font=small}
979
+ \centering
980
+ \small
981
+ \begin{tabular}{lccccc}
982
+ \hline
983
+ & \textbf{WikiCS} & \textbf{Am. Comp.} & \textbf{Am. Photos} & \textbf{Co.CS} & \textbf{Co.Phy} \\
984
+ \hline
985
+ Raw features & 71.98 $\pm$ 0.00 & 73.81 $\pm$ 0.00 & 78.53 $\pm$ 0.00 & 90.37 $\pm$ 0.00 & 93.58 $\pm$ 0.00 \\
986
+ \deepwalk & 74.35 $\pm$ 0.06 & 85.68 $\pm$ 0.06 & 89.44 $\pm$ 0.11 & 84.61 $\pm$ 0.22 & 91.77 $\pm$ 0.15 \\
987
+ \deepwalk + feat. & 77.21 $\pm$ 0.03 & 86.28 $\pm$ 0.07 & 90.05 $\pm$ 0.08 & 87.70 $\pm$ 0.04 & 94.90 $\pm$ 0.09 \\
988
+ \hline
989
+ \DGI & 75.35 $\pm$ 0.14 & 83.95 $\pm$ 0.47 & 91.61 $\pm$ 0.22 & 92.15 $\pm$ 0.63 & 94.51 $\pm$ 0.52 \\
990
+ \GMI & 74.85 $\pm$ 0.08 & 82.21 $\pm$ 0.31 & 90.68 $\pm$ 0.17 & OOM & OOM \\
991
+ \MVGRL & 77.52 $\pm$ 0.08 & 87.52 $\pm$ 0.11 & 91.74 $\pm$ 0.07 & 92.11 $\pm$ 0.12 & 95.33 $\pm$ 0.03 \\
992
+ \randominit\!\!$^\star$ & 78.95 $\pm$ 0.58 & 86.46 $\pm$ 0.38 & 92.08 $\pm$ 0.48 & 91.64 $\pm$ 0.29 & 93.71 $\pm$ 0.29 \\
993
+ \GRACE $^\star$ & \textbf{80.14 $\pm$ 0.48} & 89.53 $\pm$ 0.35 & 92.78 $\pm$ 0.45 & 91.12 $\pm$ 0.20 & OOM \\
994
+ \BGRL\!\!$^\star$ & 79.98 $\pm$ 0.10 & \textbf{90.34 $\pm$ 0.19} &\textbf{93.17 $\pm$ 0.30} & \textbf{93.31 $\pm$ 0.13 } & \textbf{95.73 $\pm$ 0.05} \\
995
+ \hline
996
+ \GCA &\textit{ 78.35 $\pm$ 0.05} &\textit{ 88.94 $\pm$ 0.15 } & \textit{92.53 $\pm$ 0.16 } & \textit{93.10 $\pm$ 0.01 } & \textit{95.73 $\pm$ 0.03 } \\
997
+ Supervised GCN & \textit{77.19 $\pm$ 0.12 } & \textit{86.51 $\pm$ 0.54 } & \textit{92.42 $\pm$ 0.22 } & \textit{93.03 $\pm$ 0.31 } & \textit{95.65 $\pm$ 0.16} \\
998
+ \hline
999
+ \end{tabular}
1000
+ \caption{Performance measured in terms of classification accuracy along with standard deviations. Our experiments, marked as $\star$, are over 20 random dataset splits and model initializations. The other results are taken from previously published reports. OOM indicates running out of memory on a 16GB V100 GPU. We report the best result for GCA out of the proposed GCA-DE, GCA-PR, and GCA-EV models.}
1001
+ \label{tab:results_table}
1002
+ \end{table}
1003
+
1004
+ In Table~\ref{tab:results_table}, we report results of our experiments on these standard benchmark tasks.
1005
+ We see that even when scalability does not prevent the use of contrastive objectives, \BGRL performs competitively with both our unsupervised and fully supervised baselines, achieving state-of-the-art performance on 4 of the 5 datasets. Further, as noted in Table~\ref{tab:computation}, \BGRL achieves this despite using 2-10x less memory. \BGRL provides this improvement in memory efficiency at no cost in performance, demonstrating a useful practical advantage over prior methods such as \GRACE.
1006
+
1007
+ \begin{table}
1008
+
1009
+ \captionsetup{font=small}
1010
+ \centering
1011
+ \small
1012
+ \begin{tabular}{llllll}
1013
+ \hline
1014
+ \textbf{Method} & \textbf{Augmentation} & \textbf{Co.CS}& \textbf{Co.Phy}& \textbf{Am. Comp.}& \textbf{Am. Photos} \\
1015
+ \hline
1016
+ \BGRL & Standard& 93.31 $\pm$ 0.13& 95.73 $\pm$ 0.05& 90.34 $\pm$ 0.19& 93.17 $\pm$ 0.30\\
1017
+ & Degree centrality & 93.34 $\pm$ 0.13 & 95.62 $\pm$ 0.09 & 90.39 $\pm$ 0.22 & 93.15 $\pm$ 0.37\\
1018
+ & Pagerank centrality & 93.34 $\pm$ 0.11 & 95.59 $\pm$ 0.09 & 90.45 $\pm$ 0.25 & 93.13 $\pm$ 0.34\\
1019
+ & Eigenvector centrality & 93.32 $\pm$ 0.15 & 95.62 $\pm$ 0.06 & 90.20 $\pm$ 0.27 & 93.03 $\pm$ 0.39\\
1020
+ \hline
1021
+ \GCA & Standard & 92.93 $\pm$ 0.01 & 95.26 $\pm$ 0.02 & 86.25 $\pm$ 0.25 & 92.15 $\pm$ 0.24 \\
1022
+ & Degree centrality & 93.10 $\pm$ 0.01 & 95.68 $\pm$ 0.05 & 87.85 $\pm$ 0.31 & 92.49 $\pm$ 0.09\\
1023
+ & Pagerank centrality & 93.06 $\pm$ 0.03 & 95.72 $\pm$ 0.03 & 87.80 $\pm$ 0.23 & 92.53 $\pm$ 0.16\\
1024
+ & Eigenvector centrality & 92.95 $\pm$ 0.13 & 95.73 $\pm$ 0.03 & 87.54 $\pm$ 0.49 & 92.24 $\pm$ 0.21\\
1025
+ \hline
1026
+ \end{tabular}
1027
+ \caption{Comparison of \BGRL and \GCA for simple versus complex augmentation heuristics on four benchmark graphs. For \GCA, we report the numbers provided in their original paper.}
1028
+ \label{tab:adaptive}
1029
+ \end{table}
1030
+
1031
+ \paragraph{Effect of more complex augmentations:} In addition to the original \GRACE method, we also highlight \GCA, a variant that shares the same learning objective but uses more expressive, more expensive graph augmentations in exchange for better performance. However, these augmentations often take time \textit{cubic} in the size of the graph, or are otherwise cumbersome to implement on large graphs. As we focus on scalability to the high-data regime, we primarily restrict our comparisons to the base method \GRACE, which uses the same simple, easily scalable augmentations as \BGRL.
1032
+ Nevertheless, for the sake of completeness, in Table~\ref{tab:adaptive} we investigate the effect of these complex augmentations with \BGRL. We see that \BGRL obtains equivalent performance with both simple and complex augmentations, while \GCA requires more expensive augmentations for peak performance. This indicates that \BGRL can safely rely on simple augmentations when scaling to larger graphs without sacrificing performance.
1033
+ \subsection{Scalability-performance trade-offs for large graphs}
1034
+ \label{sec:subsampling_quadratic}
1035
+
1036
+ When scaling up to large graphs, it may not be possible to compare each node's representation to all others. In this case, a natural way to reduce memory is to compare each node with only a subset of nodes in the rest of the graph. To study how the number of negatives impacts performance in this case, we propose an approximation of \GRACE's objective called \GRACESUB, where instead of contrasting every pair of nodes in the graph, we subsample $k$ nodes randomly across the graph to use as negative examples for each node at every gradient step.
1037
+ Note that $k=2$ is the asymptotic equivalent of \BGRL in terms of memory cost, as \BGRL only ever compares each node with itself across the two views; i.e.,~\BGRL faces no such computational difficulty or design choice when scaling up.
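+ As a concrete illustration, a minimal sketch of such a subsampled objective is given below (PyTorch-style; the temperature value and the simplification of drawing negatives only from the second view are our own assumptions, not the exact \GRACE implementation):
+ \begin{verbatim}
+ import torch
+ import torch.nn.functional as F
+ 
+ def grace_sub_loss(z1, z2, k=8, tau=0.5):
+     # z1, z2: [N, D] node embeddings from the two augmented views.
+     # Positive pair: the same node across views; negatives: k random nodes.
+     z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
+     n = z1.shape[0]
+     pos = (z1 * z2).sum(-1) / tau                       # [N]
+     neg_idx = torch.randint(0, n, (n, k))               # [N, k] sampled negatives
+     neg = torch.einsum('nd,nkd->nk', z1, z2[neg_idx]) / tau
+     logits = torch.cat([pos.unsqueeze(1), neg], dim=1)  # [N, 1+k]
+     labels = torch.zeros(n, dtype=torch.long, device=z1.device)
+     return F.cross_entropy(logits, labels)              # positive is class 0
+ \end{verbatim}
+ With $k=2$, the per-node memory of this loss is comparable to that of \BGRL's bootstrap objective, which uses no negatives at all.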
1038
+
1039
+ \subsubsection*{Evaluating on ogbn-arXiv Dataset}
1040
+
1041
+ To study the trade-off between performance and complexity, we consider a node classification task on a much larger dataset from the OGB benchmark \citep{ogbDataset}, ogbn-arXiv. In this case, \GRACE cannot run without subsampling (on a GPU with 16GB of memory).
1042
+ Considering the increased difficulty of this task, we slightly expand our model to use 3 GCN layers, following the baseline model provided by \citet{ogbDataset}. As there has not been prior work on applying GNN-based unsupervised approaches to the ogbn-arXiv task, we implement and compare against two representative contrastive-learning approaches, \DGI and \GRACE. In addition, we report results from \citet{ogbDataset} for \nodevec~\citep{node2vec} and a supervised-learning baseline. We report results on both validation and test sets, as is convention for this task since the dataset is split based on a chronological ordering.
1043
+
1044
+
1045
+ \begin{table}
1046
+ \small
1047
+ \centering
1048
+ \captionsetup{font=small}
1049
+
1050
+ \begin{tabular}{lcc}
1051
+ \hline
1052
+ & Validation & Test \\
1053
+ \hline
1054
+ MLP & 57.65$\pm$ 0.12 & 55.50 $\pm$ 0.23 \\
1055
+ \nodevec & 71.29 $\pm$ 0.13 & 70.07 $\pm$ 0.13 \\
1056
+ \hline
1057
+ \randominit\!\!$^\star$ & 69.90 $\pm$ 0.11 & 68.94 $\pm$ 0.15 \\
1058
+ \DGI\!$^\star$ & 71.26 $\pm$ 0.11 & 70.34 $\pm$ 0.16 \\
1059
+ \GRACE full-graph$^\star$ & OOM & OOM \\
1060
+ \GRACESUB ($k=2$)$^\star$ & 60.49 $\pm$ 3.72 & 60.24 $\pm$ 4.06 \\
1061
+ \GRACESUB ($k=8$)$^\star$ & 71.30 $\pm$ 0.17 & 70.33 $\pm$ 0.18 \\
1062
+ \GRACESUB ($k=32$)$^\star$ & 72.18 $\pm$ 0.16 & 71.18 $\pm$ 0.16 \\
1063
+ \GRACESUB ($k=2048$)$^\star$ & \textbf{72.61 $\pm$ 0.15} & \textbf{71.51 $\pm$ 0.11} \\
1064
+ \BGRL\!\!$^\star$ & \textbf{72.53 $\pm$ 0.09} & \textbf{71.64 $\pm$ 0.12} \\
1065
+ \hline
1066
+ Supervised GCN & \textit{73.00 $\pm$ 0.17} & \textit{71.74 $\pm$ 0.29} \\
1067
+ \hline
1068
+ \end{tabular}
1069
+ \caption{Performance on the ogbn-arXiv task measured in terms of classification accuracy along with standard deviations. Our experiments, marked as $\star$, are averaged over $20$ random model initializations. Other results are taken from previously published reports. OOM indicates running out of memory on a $16$GB V$100$ GPU.}
1070
+ \label{tab:ogb_results}
1071
+ \vspace{-.7em}
1072
+
1073
+ \end{table}
1074
+
1075
+ Our results, summarized in Table~\ref{tab:ogb_results}, show that \BGRL is competitive with the supervised learning baseline.
1076
+ Further, we note that the performance of \GRACESUB is very sensitive to the parameter $k$---requiring a large number of negatives to match the performance of \BGRL. Note that \BGRL far exceeds the performance of \GRACESUB with $k=2$, its asymptotic equivalent in terms of memory; and that larger values of $k$ lead to out-of-memory errors on a 16GB GPU. These results suggest that the performance of contrastive methods such as \GRACE may suffer due to approximations to their objectives that must be made when scaling up.
1077
+
1078
+
1079
+ \subsubsection*{Evaluating on Protein-Protein Interaction Dataset}
1080
+ \label{sec:ppi_experiments}
1081
+
1082
+ Next, we consider the Protein-Protein Interaction (PPI) task---a more challenging inductive task over {\em multiple graphs} where the gap between the best self-supervised methods and fully supervised methods remains significant, as 40\% of the nodes are missing feature information.
1084
+ In addition to simple mean-pooling propagation rules from GraphSage-GCN~\citep{hamilton2018inductive}, we also consider Graph Attention Networks (GAT, \citealp{gat}) where each node aggregates features from its neighbors non-uniformly using a learned attention weight.
1085
+ It has been shown~\citep{gat} that GAT improves over non-attentional models on this dataset when trained in supervised settings, but contrastive techniques have so far been unable to train these models to higher performance than their non-attentional counterparts.
1086
+
1087
+ \begin{table}
1088
+ \small
1089
+ \centering
1090
+ \captionsetup{font=small}
1091
+
1092
+ \begin{tabular}{lr}
1093
+ \hline
1094
+ & \textbf{PPI}\\
1095
+ \hline
1096
+ Raw features & 42.20 \phantom{$\pm$ 0.20} \\
1097
+ \hline
1098
+ \DGI & 63.80 $\pm$ 0.20 \\
1099
+ \GMI & 65.00 $\pm$ 0.02 \\
1100
+ \randominit & 62.60 $\pm$ 0.20 \\
1101
+ \hline
1102
+ \GRACE MeanPooling encoder$^\star$ & 69.66 $\pm$ 0.15 \\
1103
+ \BGRL MeanPooling encoder$^\star$ & 69.41 $\pm$ 0.15 \\
1104
+ \hline
1105
+ \GRACE GAT encoder$^\star$& 69.71 $\pm$ 0.17\\
1106
+ \BGRL GAT encoder$^\star$& \textbf{70.49 $\pm$ 0.05}\\
1107
+ \hline
1108
+ Supervised MeanPooling & \textit{96.90 $\pm$ 0.20} \\
1109
+ Supervised GAT & \textit{97.30 $\pm$ 0.20} \\
1110
+ \hline
1111
+ \end{tabular}
1112
+
1113
+ \caption{Performance on the PPI task measured in terms of Micro-F$_1$ across the 121 labels along with standard deviations. Our experiments, marked as $\star$, are averaged over 20 random model initializations. Other results are taken from previously published reports.}
1114
+ \label{tab:ppi_results}
1115
+ \end{table}
1116
+
1117
+ We report our results in Table~\ref{tab:ppi_results}, showing that \BGRL is competitive with \GRACE when using the simpler MeanPooling networks. Applying \BGRL to a GAT model results in a new state-of-the-art performance, improving over the MeanPooling network. On the other hand, the \GRACE contrastive loss is unable to improve the performance of a GAT model over the non-attentional MeanPooling encoder.
1118
+ \begin{figure}[!htb]
1119
+ \captionsetup{font=small}
1120
+ \minipage{0.5\textwidth}
1121
+ \includegraphics[width=\linewidth]{sections/gat_learning_curves.pdf}
1122
+ \caption{PPI task performance, averaged over 20 seeds.}\label{fig:gat_learning_curves}
1123
+ \endminipage\hfill
1124
+ \minipage{0.5\textwidth}
1125
+ \includegraphics[width=\linewidth]{sections/entropy.pdf}
1126
+ \caption{Histogram of GAT attention entropies.}\label{fig:entropies}
1127
+ \endminipage\hfill
1128
+ \end{figure}
1129
+ We observe that the approximation of the contrastive objective results not only in lower accuracies (Figure~\ref{fig:gat_learning_curves}), but also in qualitatively different behaviors in the GAT models being trained.
1130
+ In Figure~\ref{fig:entropies}, we examine the internals of GAT models trained through both \BGRL and \GRACE by analyzing the entropy of the attention weights learned. For each training node, we compute the average entropy of its attention weights across all GAT layers and attention heads, minus the entropy of a uniform attention distribution as a baseline.
1131
+ We see that GAT models learned using \GRACE, particularly when subsampling few negative examples, tend to have very low attention entropies and perform poorly.
1132
+ On the other hand, \BGRL is able to train the model to have meaningful attention weights, striking a balance between the low-entropy models learned through \GRACE and the maximum-entropy uniform-attention distribution.
1133
+ This aligns with recent observations~\citep{superGAT, wang2019improving} that auxiliary losses must be chosen carefully for the stability of GAT attention weights.
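+ As a rough sketch, the entropy statistic used in Figure~\ref{fig:entropies} can be computed as follows (illustrative only; we assume each node's attention distributions over its neighbors are available as a list of 1-D tensors, one per layer-head pair):
+ \begin{verbatim}
+ import math
+ import torch
+ 
+ def entropy_gap(attn_per_head):
+     # attn_per_head: list of 1-D tensors, one per (layer, head) pair,
+     # each holding a node's attention weights over its neighbors (sums to 1).
+     gaps = []
+     for a in attn_per_head:
+         ent = -(a * torch.log(a.clamp_min(1e-12))).sum()
+         uniform_ent = math.log(a.numel())   # entropy of uniform attention
+         gaps.append(ent - uniform_ent)
+     return torch.stack(gaps).mean()         # averaged over layers and heads
+ \end{verbatim}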
1134
+
1135
+ \subsection{Scaling to extremely large graphs}
1136
+ \label{sec:ogb_lsc_exps}
1137
+
1138
+ To further test the scalability and evaluate the performance of \BGRL in the very high-data regime, we consider the MAG240M node classification task~\citep{hu2021ogblsc}. As a single connected graph of 360GB with over 240 million nodes (of which 1.4 million are labeled) and 1.7 billion edges, this dataset is orders of magnitude larger than previously available public datasets, and poses a significant scaling challenge. Since the test labels for this dataset are (still) hidden,
1139
+ we report performance based on validation accuracies in our experiments. Implementation and experiment details are in Appendix~\ref{sec:mag_240m_appendix}.
1140
+
1141
+ To account for the increased scale and difficulty of the classification task on this dataset, we make a number of changes to our learning setup. First, since we can no longer perform full-graph training due to the sheer size of the graph, we adopt the Neighborhood Sampling strategy proposed by \citet{hamilton2018inductive}, sampling a small number of central nodes at which our loss is applied, together with a fixed-size neighborhood around each of them.
1142
+ Second, we use more expressive Message Passing Neural Networks~\citep{mpnn} as our graph encoders.
1143
+ Finally,
1144
+ as we are interested in pushing performance on this competition dataset, we make use of the available labels for representation learning and shift from evaluating on top of a frozen representation to semi-supervised training, combining supervised and self-supervised signals at each update step. We emphasize that these are significant changes from the standard small-scale evaluation setup for graph representation learning methods studied previously, and more closely resemble the real-world conditions in which these algorithms would be employed.
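+ A simplified sketch of the neighborhood sampling step is given below (the fanouts and the adjacency-list representation are illustrative; the actual pipeline additionally distinguishes paper, author, and institution node types, as detailed in Appendix~\ref{sec:mag_240m_appendix}):
+ \begin{verbatim}
+ import random
+ 
+ def sample_subgraph(adj, central_nodes, fanouts=(20, 10)):
+     # adj: dict mapping node id -> list of neighbor ids.
+     # Starting from the central nodes (where the loss is applied),
+     # sample a fixed-size neighborhood, layer by layer.
+     frontier, nodes = list(central_nodes), set(central_nodes)
+     for fanout in fanouts:
+         next_frontier = []
+         for u in frontier:
+             nbrs = adj[u]
+             sampled = random.sample(nbrs, min(fanout, len(nbrs)))
+             next_frontier.extend(sampled)
+             nodes.update(sampled)
+         frontier = next_frontier
+     return nodes  # the induced subgraph over these nodes is fed to the encoder
+ \end{verbatim}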
1145
+
1146
+
1147
+ \begin{figure}[!htb]
1148
+ \captionsetup{font=small}
1149
+ \minipage{0.46\textwidth}
1150
+ \includegraphics[width=\linewidth]{ogb_lsc_iclr.pdf}
1151
+ \caption{Performance on MAG240M using \BGRL or \GRACESUB as an auxiliary signal, averaged over 5 seeds and run for 50k steps.}\label{fig:ogb_lsc_curves}
1152
+ \endminipage\hfill
1153
+ \minipage{0.46\textwidth}
1154
+ \includegraphics[width=\linewidth]{sections/mixing_unlabeled.pdf}
1155
+ \caption{Mixing varying amounts of unlabeled data for representation learning with \BGRL, averaged over 5 seeds and run for 500k steps.}\label{fig:mixing_unlabeled}
1156
+ \endminipage\hfill
1157
+ \end{figure}
1158
+
1159
+ In Figure~\ref{fig:ogb_lsc_curves}, we see that \BGRL used as an auxiliary signal is able to learn faster and significantly improve final performance over fully supervised learning on this challenging task. Considering the difficulty of this task and the small gap in final performance between winning entries in the \textit{KDD Cup 2021} contest, this is a significant improvement.
1160
+ On the other hand, \GRACESUB provides a much smaller benefit over fully supervised learning, possibly because it can no longer sample sufficiently many negatives across the whole graph. Here we used $k=256$ negatives, the largest value we could run without exhausting memory.
1161
+
1162
+ We further show that we can leverage the high scalability of \BGRL to make use of the vast amounts of \textit{unlabeled} data present in the dataset. Since labeled nodes form only 0.5\% of the graph, unlabeled data offers a rich self-supervised signal for learning better representations and ultimately improving performance on the supervised task. In Figure~\ref{fig:mixing_unlabeled}, we add a number of unlabeled nodes to each minibatch of data and examine the effect on performance as the ratio of unlabeled to labeled data in each batch increases. At each step, we apply the supervised loss only to the labeled nodes in the batch, and \BGRL to all nodes. Note that a ratio of 0 corresponds to applying \BGRL as an auxiliary loss only to the training nodes, as already examined in Figure~\ref{fig:ogb_lsc_curves}. We observe a dramatic increase in both stability and peak performance as this ratio increases, showing that \BGRL can utilize the unlabeled nodes effectively to shape higher-quality representations and prevent early overfitting to the supervised signal. The improvement is steady as we increase the ratio from 1x to 10x unlabeled data, at which point we stop due to the resource cost of running ablations on this large-scale graph; the trend may well continue at higher ratios, as the true ratio of unlabeled to labeled nodes in the graph is 99x.
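+ A minimal sketch of this batch-mixing scheme is shown below (names are illustrative; the relative weighting \texttt{alpha} between the two losses is an assumption, and \texttt{bgrl\_term} stands for the bootstrap loss already computed over all nodes in the mixed batch):
+ \begin{verbatim}
+ import torch
+ import torch.nn.functional as F
+ 
+ def mix_batch(labeled_idx, unlabeled_idx, ratio):
+     # Add `ratio` times as many randomly chosen unlabeled central nodes.
+     k = ratio * len(labeled_idx)
+     perm = torch.randperm(len(unlabeled_idx))[:k]
+     return torch.cat([labeled_idx, unlabeled_idx[perm]])
+ 
+ def semi_supervised_loss(logits, labels, labeled_mask, bgrl_term, alpha=1.0):
+     # logits: [B, C] predictions for every node in the mixed batch;
+     # labeled_mask: [B] bool, True only for the labeled (arXiv) nodes.
+     sup = F.cross_entropy(logits[labeled_mask], labels)
+     return sup + alpha * bgrl_term
+ \end{verbatim}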
1163
+
1164
+ This result of 73.89\% is the \textbf{state-of-the-art} single-model performance for this dataset (i.e., without ensembling): the OGB baselines report a score of 70.02\%~\citep{hu2021ogblsc}, while the first-place solution of the \textit{KDD Cup 2021} contest reported 73.71\% before ensembling.
1165
+
1166
+ \paragraph{KDD Cup 2021:\footnote{Leaderboard at \href{https://ogb.stanford.edu/kddcup2021/results/\#awardees\_mag240m}{https://ogb.stanford.edu/kddcup2021/results/\#awardees\_mag240m}.}
1167
+ } Our solution using \BGRL to shape representations, utilizing unlabeled data in conjunction with a supervised signal for semi-supervised learning, was awarded as one of the winners of the MAG240M track at OGB-LSC~\citep{deepmind_ogb_report}. It finished second overall, achieving 75.19\% accuracy on the test set; the first- and third-place solutions achieved 75.49\% and 74.60\% respectively. Although differences in many other factors such as model architectures, feature engineering, and ensembling strategies prevent a direct comparison\footnote{For example, the first-place solution used a much larger set of 30 ensembled models compared to our 10, and exclusively relied on architectural improvements to improve performance without using self-supervised learning.} between these solutions, these results serve as strong empirical evidence for the effectiveness of \BGRL for learning representations on extremely large-scale datasets.
1168
+ %
1169
+ \section{Related Work}
1170
+ \label{sec:related_work}
1171
+ Early methods in the area relied on \emph{random-walk objectives} such as \deepwalk \citep{deepwalk} and \nodevec \citep{node2vec}. Even though the inductive bias of graph neural networks (GNNs) aligns with these objectives \citep{wu2019simplifying, dgi, gcnkipf}, composing GNNs with random-walk objectives does not work very well and can even degrade performance \citep{hamilton2018inductive}.
1172
+ Earlier combinations of GNNs and self-supervised learning involve \texttt{Embedding Propagation} \citep{garcia2017learning}, \texttt{Variational Graph Autoencoders} \citep{kipf2016variational} and \texttt{Graph2Gauss} \citep{bojchevski2017deep}.
1173
+ \citet{hu2019strategies} leverages BERT \citep{devlin2018bert} for representation learning on graph-structured inputs, assuming specific graph structures and using feature-masking objectives to shape representations.
1174
+
1175
+ Recently, contrastive methods effective on images have also been adapted to graphs using GNNs. This includes \DGI \citep{dgi}, inspired by \DIM \cite{hjelm2018learning}, which contrasts node-local patches against global graph representations. Next, \InfoGraph \citep{sun2019infograph} modified \DGI's pipeline for graph classification tasks.
1176
+ \GMI \cite{gmi} maximizes a notion of \emph{graphical} mutual information inspired by \MINE \citep{belghazi2018mine}, allowing for a more fine-grained contrastive loss than \DGI's. The \simclr method of \citet{chen2020simple,chen2020big} has been specialized for graphs by \GRACE and variants such as \GCA \citep{grace,grace_adaptive} that rely on more complex data-adaptive augmentations. \graphCL \citep{graphCL} adapts \simclr to learn graph-level embeddings using a contrastive objective. \MVGRL \citep{mvgrl} generalizes \CMC \citep{tian2019contrastive} to graphs.
1177
+ Graph Barlow Twins~\citep{graph_barlow_twins} presents a method to learn representations by minimizing correlation between different representation dimensions.
1178
+ Concurrent works
1179
+ \DGB~\citep{che2020self} and \SelfGNN~\citep{selfgnn}, like \BGRL, adapt \BYOL~\citep{grill2020bootstrap} for graph representation learning.
1180
+ However, \BGRL differs from these works in the following ways:
1181
+ \begin{itemize}
1182
+ \item We show that BGRL scales to and attains state-of-the-art results on the very high-data regime on the MAG240M dataset. These results are unprecedented in the graph self-supervised learning literature and demonstrate a high degree of scalability.
1183
+ \item We show that \BGRL is effective even when trained on sampled subgraphs rather than the full graph.
1184
+ \item We provide an extensive analysis of the performance-computation trade-off of \BGRL versus contrastive methods, showing that \BGRL can be more efficient in terms of computation and memory usage as it requires no negative examples.
1185
+ \item We show that \BGRL is effective when performing semi-supervised training, providing further gains when leveraging both labeled and unlabeled data. This is a significant result that had not been demonstrated for neural graph representation learning methods prior to our work.
1186
+ \end{itemize}
1187
+
1188
+
1189
+
1190
+
1191
+
1192
+
1193
+
1194
+
1195
+
1196
+
1197
+ \section*{ICLR Ethics Statement}
1198
+ Our contributions are in developing and evaluating a general method for self-supervised representation learning in graphs. As such, they may be helpful in applications where obtaining labels can be challenging or expensive, thus enabling newer applications potentially in the direction of positive social good.
1199
+
1200
+ On the other hand, as an unsupervised pretraining method, there is a risk of practitioners using it for downstream tasks without carefully considering how these embeddings were originally trained, potentially leading to stereotyping or unfair biases. Further, since the bootstrapping dynamics of \BGRL are not yet fully understood, there is a higher chance of it being used as a blackbox machine learning method and harmful downstream effects being difficult to diagnose and resolve.
1201
+
1202
+ \section*{ICLR Reproducibility Statement}
1203
+ We believe that the results we report in this paper are reproducible and strengthen our empirical contributions.
1204
+
1205
+ We have submitted our algorithm implementation and experimental setup, config, and code for almost all of our experiments as supplementary material. Most experiments finish within 30 minutes on a single V100 GPU, and thus are easy to verify with few resources. In addition, we are providing trained model weights/checkpoints for directly loading and verifying performance without training. Experiments for which code has not been provided are described in detail in the appendices (Appendix~\ref{sec:appendix_dataset_details} and Appendix~\ref{sec:appendix_implementation}) and should allow for reproduction.
1206
+
1207
+ Besides this, code for our large-scale MAG240M solution has been open-sourced as part of the KDD Cup 2021 and has been verified independently by the OGB-LSC contest organizers.
1208
+ \bibliographystyle{iclr2022_conference}
1209
+ \bibliography{library}
1210
+
1211
+
1212
+
1213
+ \appendix
1214
+
1215
+ \appendix
1216
+ \onecolumn
1217
+
1218
+ \section{\BGRL does not converge to trivial solutions}
1219
+ \label{sec:appendix_byol_nontrivial}
1220
+
1221
+ In Figure~\ref{fig:total_bgrl_loss} we show the \BGRL loss curve throughout training for all the datasets considered. As we see, the loss does not converge to zero, indicating that the training does not result in a trivial solution.
1222
+
1223
+ In Figure~\ref{fig:embedding_spread} we plot the spread of the node embeddings, i.e., the standard deviation of the representations learned across all nodes, divided by the average norm. As we see, the embeddings learned across all datasets have a standard deviation that is a similar order of magnitude as the norms of the embeddings themselves, further indicating that the training dynamics do not converge to a constant solution.
1224
+
1225
+ Further, Figure~\ref{fig:embedding_norm} shows that the embeddings do not collapse to zero or blow up as training progresses.
1226
+
1227
+
1228
+ \begin{figure}[!htb]
1229
+ \captionsetup{font=scriptsize}
1230
+
1231
+ \minipage{0.32\textwidth}
1232
+ \includegraphics[width=\linewidth]{bgrl_loss.pdf}
1233
+ \caption{\BGRL Loss}\label{fig:total_bgrl_loss}
1234
+ \endminipage\hfill
1235
+ \minipage{0.32\textwidth}
1236
+ \includegraphics[width=\linewidth]{bgrl_embedding_spread.pdf}
1237
+ \caption{Embedding spread}\label{fig:embedding_spread}
1238
+ \endminipage
1239
+ \minipage{0.32\textwidth}
1240
+ \includegraphics[width=\linewidth]{bgrl_embedding_norm.pdf}
1241
+ \caption{Average embedding norm}\label{fig:embedding_norm}
1242
+ \endminipage\hfill
1243
+ \end{figure}
1244
+
1245
+
1246
+ \section{Ablations on Projector Network}
1247
+ \label{sec:projector_ablations}
1248
+ As noted in Section~\ref{sec:method}, \BGRL does not use a projector network, unlike both \BYOL and \GRACE.
1249
+ Prior works such as \GRACE use a projector network to prevent the embeddings from becoming completely invariant to the augmentations used; in \BGRL, the predictor network can serve the same purpose.
+ On the other hand, \BYOL relies on the projector for dimensionality reduction, simplifying the task of the predictor $p_{\theta}$, since it is challenging to directly predict the very high-dimensional embeddings required for large-scale vision tasks like ImageNet~\citep{imagenet_cvpr09}.
1252
+ Here we empirically verify that even in our most challenging, large-scale task of MAG240M, the projector network is not needed and only slows down learning. In Figure~\ref{fig:ogb_lsc_projector_curves} we can see that adding the projector network leads to both slower learning and a lower final performance.
1253
+
1254
+ \begin{figure}[ht]
1255
+ \centering
1256
+ \includegraphics[scale=0.7]{ogb_lsc_projector.pdf}
1257
+ \caption{Performance on OGB-LSC MAG240M task, averaged over 5 seeds, testing effect of using projector network.}\label{fig:ogb_lsc_projector_curves}
1258
+ \end{figure}
1259
+
1260
+ \section{Comparison on small datasets}
1261
+ \label{sec:cora_exps}
1262
+
1263
+ We perform additional experiments on 4 commonly used small datasets (Cora, CiteSeer, PubMed, and DBLP) \citep{cora_dataset, dblp_dataset} and show that \BGRL's bootstrapping mechanism still performs well in the low-data regime, even attaining new state-of-the-art performance on two of the datasets.
1264
+
1265
+ Note that Table~\ref{tab:small_datasets} reports results averaged over 20 random dataset splits, following \citet{grace}, instead of using the standard fixed splits for these datasets, which are known to be unreliable for evaluating GNN methods \citep{pitfallsshchur2019}.
1266
+
1267
+ \begin{table}[ht]
1268
+ \small
1269
+ \centering
1270
+ \begin{tabular}{|l|l|l|l|l|}
1271
+ \hline
1272
+ & Cora & CiteSeer & PubMed & DBLP \\
1273
+ \hline
1274
+ \GRACE & 83.02 $\pm$ 0.89 & 71.63 $\pm$ 0.64 & 86.06 $\pm$ 0.26 & 84.08 $\pm$ 0.29 \\
1275
+ \BGRL & 83.83 $\pm$ 1.61 & 72.32 $\pm$ 0.89 & 86.03 $\pm$ 0.33 & 84.07 $\pm$ 0.23 \\
1276
+ \hline
1277
+ Supervised & 82.8 & 72.0 & 84.9 & 82.7 \\
1278
+ \hline
1279
+ \end{tabular}
1280
+ \caption{Evaluation on small datasets. Results averaged over 20 dataset splits and model initializations.}
1281
+ \label{tab:small_datasets}
1282
+ \end{table}
1283
+
1284
+
1285
+ \section{Graph augmentation functions}
1286
+ \label{sec:augmentation_details}
1287
+ Generating meaningful augmentations is a much less explored problem in graphs than in other domains such as vision. Further, since we work over entire graphs, complex augmentations can be very expensive to compute and will impact all nodes at once. Our contributions are orthogonal to this problem, and we primarily consider only the standard graph augmentation pipeline that has been used in previous works on representation learning \citep{graphCL, grace}.
1288
+
1289
+
1290
+ In particular, we consider two simple graph augmentation functions --- \textbf{node feature masking} and \textbf{edge masking}. These augmentations are graph-wise: they do not operate on each node independently, and instead leverage graph topology information through edge masking. This contrasts with transformations used in \BYOL, which operate on each image independently.
1291
+ First, we generate a single random binary mask of size $F$, each element of which follows a Bernoulli distribution $\mathcal{B}(1 - p_{f})$, and use it to \textbf{mask features} of all nodes in the graph (i.e., all nodes have the same features masked). Empirically, we found that performance is similar for using different random masks per node or sharing them, and so we use a single mask for simplicity.
1292
+ In addition to this node-level attribute transformation, we also compute a binary mask of size $E$ (where $E$ is the number of edges in the original graph), each element of which follows a Bernoulli distribution $\mathcal{B}(1 - p_{e})$, and use it to \textbf{mask edges} in the augmented graph.
1293
+ To compute our final augmented graphs, we make use of both augmentation functions with different hyperparameters for each graph, i.e. $p_{f_1}$ and $p_{e_1}$ for the first view, and $p_{f_2}$ and $p_{e_2}$ for the second view.
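+ A minimal sketch of these two augmentations, assuming a dense feature matrix \texttt{x} of shape $[N, F]$ and an edge list \texttt{edge\_index} of shape $[2, E]$ (names are illustrative):
+ \begin{verbatim}
+ import torch
+ 
+ def augment(x, edge_index, p_f, p_e):
+     # Feature masking: one Bernoulli(1 - p_f) mask of size F,
+     # shared by all nodes in the graph.
+     feat_mask = (torch.rand(x.size(1)) > p_f).float()
+     x_aug = x * feat_mask                          # broadcasts over nodes
+     # Edge masking: keep each edge with probability 1 - p_e.
+     edge_mask = torch.rand(edge_index.size(1)) > p_e
+     return x_aug, edge_index[:, edge_mask]
+ 
+ # Two views with their own hyperparameters, e.g.:
+ # view1 = augment(x, edge_index, p_f1, p_e1)
+ # view2 = augment(x, edge_index, p_f2, p_e2)
+ \end{verbatim}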
1294
+
1295
+ Beyond these standard augmentations, in Section~\ref{sec:full_quadratic} we also consider more complex \textit{adaptive} augmentations proposed by prior works \citep{grace_adaptive} which use various heuristics to mask different features or edges with different probabilities.
1296
+
1297
+ \section{Dataset details}
1298
+ \label{sec:appendix_dataset_details}
1299
+
1300
+
1301
+
1302
+ \paragraph{WikiCS\footnote{\url{https://github.com/pmernyei/wiki-cs-dataset/raw/master/dataset}}} This graph is constructed from Wikipedia references, with nodes representing articles about Computer Science and edges representing links between them. Articles are classified into 10 classes based on their subfield, and node features are the average of GloVE \citep{pennington-etal-2014-glove} embeddings of all words in the article. This dataset comes with 20 canonical train/valid/test splits, which we use directly.
1303
+
1304
+ \paragraph{Amazon Computers, Amazon Photos\footnote{\url{https://github.com/shchur/gnn-benchmark/tree/master/data/npz}}} These graphs are from the Amazon co-purchase graph \citep{amazonDatasets} with nodes representing products and edges being between pairs of goods frequently purchased together. Products are classified into 10 (for Computers) and 8 (for Photos) classes based on product category, and node features are a bag-of-words representation of a product's reviews. We use a random split of the nodes into (10/10/80\%) train/validation/test nodes respectively as these datasets do not come with a standard dataset split.
1305
+
1306
+ \paragraph{Coauthor CS, Coauthor Physics\footnote{\url{https://github.com/shchur/gnn-benchmark/tree/master/data/npz}}} These graphs are from the Microsoft Academic Graph \citep{mag}, with nodes representing authors and edges between authors who have co-authored a paper. Authors are classified into 15 (for CS) and 5 (for Physics) classes based on the author's research field, and node features are a bag-of-words representation of the keywords of an author's papers. We again use a random (10/10/80\%) split for these datasets.
1307
+
1308
+ \paragraph{ogbn-arXiv:} This is another citation network, where nodes represent CS papers on arXiv indexed by the Microsoft Academic Graph \citep{mag}. In our experiments, we symmetrize this graph and thus there is an edge between any pair of nodes if one paper has cited the other. Papers are classified into $40$ classes based on arXiv subject area. The node features are computed as the average word-embedding of all words in the paper, where the embeddings are computed using a skip-gram model \citep{mikolov2013efficient} over the entire corpus.
1309
+
1310
+ \paragraph{PPI \footnote{\url{https://s3.us-east-2.amazonaws.com/dgl.ai/dataset/ppi.zip}}} is a protein-protein interaction network \citep{ppiZitnik_2017,hamilton2018inductive}, composed of $24$ graphs, each corresponding to a different human tissue. We use the standard dataset split of $20$ graphs for training, $2$ for validation, and $2$ for testing. Each node has $50$ features computed from various biological properties. This is a multilabel classification task, where each node can possess up to $121$ labels.
1311
+
1312
+ \section{Implementation details}
1313
+ \label{sec:appendix_implementation}
1314
+
1315
+ In all our experiments, we use the AdamW optimizer \citep{adam, adamW} with weight decay set to $10^{-5}$, and all models initialized using Glorot initialization~\citep{glorot}. The \BGRL predictor $p_{\theta}$ used to predict the embedding of nodes across views is fixed to be a Multilayer Perceptron (MLP) with a single hidden layer. The decay rate $\tau$ controlling the rate of updates of the \BGRL target parameters~$\phi$ is initialized to $0.99$ and gradually increased to $1.0$ over the course of training following a cosine schedule.
1316
+ Other model architecture and training details vary per dataset and are described further below.
1317
+ The augmentation hyperparameters $p_{f_{1,2}}$ and $p_{e_{1,2}}$ are reported below.
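+ For illustration, a minimal sketch of the target update and the node-wise prediction loss is given below (module names are placeholders, and the sketch omits the symmetrization of the loss over the two views):
+ \begin{verbatim}
+ import torch
+ import torch.nn.functional as F
+ 
+ @torch.no_grad()
+ def update_target(online, target, tau):
+     # Exponential moving average of the online parameters; only the
+     # online encoder and the predictor receive gradients.
+     for p_online, p_target in zip(online.parameters(), target.parameters()):
+         p_target.mul_(tau).add_((1.0 - tau) * p_online)
+ 
+ def bgrl_loss(h_online, h_target, predictor):
+     # Predict the target embedding of the *same* node in the other view
+     # and maximize cosine similarity; no negative examples are needed.
+     p = F.normalize(predictor(h_online), dim=-1)
+     z = F.normalize(h_target.detach(), dim=-1)
+     return -(p * z).sum(dim=-1).mean()
+ \end{verbatim}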
1318
+
1319
+ \paragraph{Graph Convolutional Networks}
1320
+ Formally, the GCN propagation rule \citep{gcnkipf} for a single layer is as follows,
1321
+ \begin{equation}\text{GCN}_i(\bX, \bA) = \sigma\left(\hat{\bD}^{-\frac{1}{2}}\hat{\bA}\hat{\bD}^{-\frac{1}{2}} \bX \bW_i\right),\end{equation}
1322
+ where $\hat{\bA} = \bA + \bI$ is the adjacency matrix with self-loops, $\hat{\bD}$ is the degree matrix, $\sigma$ is a non-linearity such as ReLU, and $\bW_i$ is a learned weight matrix for the $i$'th layer.
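+ As a small illustrative example, this propagation rule can be written with dense matrices as follows (practical implementations use sparse operations):
+ \begin{verbatim}
+ import torch
+ 
+ def gcn_layer(x, adj, weight, act=torch.relu):
+     # x: [N, F] node features, adj: [N, N] adjacency, weight: [F, F_out].
+     a_hat = adj + torch.eye(adj.size(0))         # add self-loops
+     d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)      # diagonal of D^{-1/2}
+     norm_adj = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
+     return act(norm_adj @ x @ weight)
+ \end{verbatim}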
1323
+
1324
+ \paragraph{Mean Pooling Rule}
1325
+ Formally, the Mean Pooling \citep{hamilton2018inductive} rule for a single layer is given by:
1326
+ \begin{equation}
1327
+ \mathrm{MP}_i(\bX, \bA) = \sigma(\hat{\bD}^{-1}\hat{\bA} \bX \bW_i)
1328
+ \end{equation}
1329
+ As proposed by \citet{dgi}, our exact encoder~$\mathcal{E}$ in inductive experiments is a $3$-layer mean-pooling network with skip connections. We use a layer size of $512$ and PReLU~\citep{prelu} activation.
1330
+ Thus, we compute:
1331
+ \begin{align}
1332
+ \bH_1 &= \sigma(\mathrm{MP}_1(\bX, \bA))\\
1333
+ \bH_2 &= \sigma(\mathrm{MP}_2(\bH_1 + \bX \bW_{skip}, \bA))\\
1334
+ \mathcal{E}(\bX, \bA) &= \sigma(\mathrm{MP}_3(\bH_2 + \bH_1 + \bX \bW_{skip'}, \bA))
1335
+ \end{align}
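+ A compact sketch of this encoder is given below (dense matrices for clarity; the nested non-linearities are collapsed into one application per layer, and normalization layers are omitted):
+ \begin{verbatim}
+ import torch
+ import torch.nn as nn
+ 
+ class MeanPoolEncoder(nn.Module):
+     # 3-layer mean-pooling encoder with skip connections, as above.
+     def __init__(self, in_dim, hid_dim=512):
+         super().__init__()
+         self.mp = nn.ModuleList([nn.Linear(in_dim, hid_dim),
+                                  nn.Linear(hid_dim, hid_dim),
+                                  nn.Linear(hid_dim, hid_dim)])
+         self.skip1 = nn.Linear(in_dim, hid_dim)
+         self.skip2 = nn.Linear(in_dim, hid_dim)
+         self.act = nn.PReLU()
+ 
+     def forward(self, x, adj):
+         a_hat = adj + torch.eye(adj.size(0))
+         norm = a_hat / a_hat.sum(dim=1, keepdim=True)   # \hat{D}^{-1}\hat{A}
+         layer = lambda i, h: self.act(norm @ self.mp[i](h))
+         h1 = layer(0, x)
+         h2 = layer(1, h1 + self.skip1(x))
+         return layer(2, h2 + h1 + self.skip2(x))
+ \end{verbatim}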
1336
+
1337
+ \paragraph{Graph Attention Networks}
1338
+ The GAT layer \citep{gat} uses a learned matrix $\bW$ to transform each node's features. We then use self-attention to compute an attention coefficient for a pair of nodes $i$ and $j$ as $e_{ij} = a({\bf h}_i, {\bf h}_j)$. The attention function $a$ is computed as LeakyReLU$(\ba [\bW {\bf h}_i || \bW {\bf h}_j])$, where $\ba$ is a learned weight vector mapping the concatenated pair of transformed features to a single scalar attention logit. The weight of the edge between nodes $i$ and $j$ is computed as $\alpha_{ij} = \textnormal{softmax}_j(e_{ij})$.
1339
+ We follow the architecture proposed by \citet{gat}, including a $3$-layer GAT model (with the first $2$ layers consisting of $4$ heads of size $256$ each and the final layer size $512$ with $6$ output heads), ELU activation \citep{elu}, and skip-connections in intermediate layers.
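+ For a single attention head, the coefficients above can be sketched in dense form as follows (illustrative shapes; \texttt{adj} is assumed to contain self-loops so that every node also attends to itself):
+ \begin{verbatim}
+ import torch
+ import torch.nn.functional as F
+ 
+ def gat_attention(h, adj, W, a):
+     # h: [N, F] features, W: [F, Fp] projection, a: [2*Fp] attention vector.
+     wh = h @ W                                   # [N, Fp]
+     fp = wh.size(1)
+     e_src = wh @ a[:fp]                          # contribution of W h_i
+     e_dst = wh @ a[fp:]                          # contribution of W h_j
+     e = F.leaky_relu(e_src[:, None] + e_dst[None, :], negative_slope=0.2)
+     e = e.masked_fill(adj == 0, float('-inf'))   # attend over edges only
+     return torch.softmax(e, dim=1)               # alpha_ij = softmax_j(e_ij)
+ \end{verbatim}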
1340
+
1341
+ \paragraph{Model architectures}
1342
+ As described in Section~\ref{sec:experiments}, we use GCN \citep{gcnkipf} encoders in our experiments on the smaller transductive tasks, while on the inductive PPI task we use MeanPooling encoders with residual connections. The \BGRL predictor $p_{\theta}$ is implemented as a multilayer perceptron (MLP). We also use stabilization techniques such as batch normalization \citep{batchnorm}, layer normalization \citep{ba2016layer}, and weight standardization \citep{weightStandardization}. The decay rate used for the statistics in batch normalization is fixed to 0.99. We use PReLU activation~\citep{prelu} in all experiments except those using a GAT encoder, where we use the ELU activation~\citep{elu}.
1343
+ In all our models, at each layer including the final layer, we apply first the batch/layer normalization as applicable, and then the activation function.
1344
+ Table~\ref{tab:hypers} describes hyperparameter and architectural details for most of our experimental setups with \BGRL.
1345
+ In addition to these standard settings, we perform additional experiments on the PPI dataset using a GAT \citep{gat} model as the encoder.
1346
+ When using the GAT encoder on PPI, we use 3 attention layers --- the first two with 4 attention heads of size 256 each, and the final with 6 attention heads of size 512, following a very similar model proposed by \citet{gat}. We concatenate the attention head outputs for the first 2 layers, and use the mean for the final output. We also use the ELU activation~\citep{elu}, and skip connections in the intermediate attention layers, as suggested by \citet{gat}.
1347
+
1348
+
1349
+ \begin{table}[ht]
1350
+ \small
1351
+ \centering
1352
+ \begin{tabular}{|l|c|c|c|c|c|c|c|}
1353
+ \hline
1354
+ Dataset & WikiCS & Am.\,Computers & Am.\,Photos & Co.\,CS & Co.\,Physics & ogbn-arXiv & PPI \\
1355
+ \hline
1356
+ $p_{f,1}$ & 0.2 & 0.2 & 0.1 & 0.3 & 0.1 & 0.0 & 0.25 \\
1357
+ $p_{f,2}$ & 0.1 & 0.1 & 0.2 & 0.4 & 0.4 & 0.0 & 0.00 \\
1358
+ $p_{e,1}$ & 0.2 & 0.5 & 0.4 & 0.3 & 0.4 & 0.6 & 0.30 \\
1359
+ $p_{e,2}$ & 0.3 & 0.4 & 0.1 & 0.2 & 0.1 & 0.6 & 0.25 \\
1360
+ \hline
1361
+ $\eta_\text{\,base}$ & $5\cdot10^{-4}$ & $5\cdot10^{-4}$ & $10^{-4}$ & $10^{-5}$ & $10^{-5}$ & $10^{-2}$ & $5\cdot10^{-3}$ \\
1362
+ embedding size & 256 & 128 & 256 & 256 & 128 & 256 & 512 \\
1363
+ $\enc$ hidden sizes & 512 & 256 & 512 & 512 & 256 & 256, 256 & 512, 512 \\
1364
+ $p_{\theta}$ hidden sizes & 512 & 512 & 512 & 512 & 512 & 256 & 512 \\
1365
+ \hline
1366
+ batch norm & Y & Y & Y & Y & Y & N & N \\
1367
+ layer norm & N & N & N & N & N & Y & Y \\
1368
+ weight standard. & N & N & N & N & N & Y & N \\
1369
+ \hline
1370
+
1371
+ \end{tabular}
1372
+
1373
+
1374
+ \caption{Hyperparameter settings for unsupervised \BGRL learning.}
1375
+ \label{tab:hypers}
1376
+ \end{table}
1377
+
1378
+ \paragraph{Augmentation parameters}
1379
+ The hyperparameter settings for graph augmentations, as well as the sizes of the embeddings and hidden layers, very closely follow previous work \citep{grace, grace_adaptive} on all datasets with the exception of ogbn-arXiv. On this dataset, since there has not been prior work on applying self-supervised graph learning methods, we provide the hyperparameters we found through a small grid search.
1380
+
1381
+ \paragraph{Optimization settings}
1382
+ We perform full-graph training at each gradient step on all small-scale experiments, with the exception of experiments using GAT encoders on the PPI dataset. Here, due to memory constraints, we perform training with a batch size of 1 graph. Since the PPI dataset consists of multiple smaller, disjoint subgraphs, we do not have to perform any node subsampling at training time.
1383
+
1384
+ We use Glorot initialization \citep{glorot} and the AdamW optimizer \citep{adam, adamW} with a base learning rate $\eta_\text{\,base}$ and weight decay set to $10^{-5}$. The learning rate is annealed using a cosine schedule over the course of learning of $n_\text{total}$ total steps with an initial warmup period of $n_\text{warmup}$ steps. Hence, the learning rate at step $i$ is computed as
1385
+
1386
+ \[
1387
+ \eta_i \triangleq
1388
+ \begin{cases}
1389
+ \frac{i \times \eta_\text{\,base}}{n_\text{warmup}} & \text{if } i \leq n_\text{warmup},\\
1390
+ \eta_\text{\,base} \times \left(1 + \cos{\frac{(i - n_\text{warmup}) \times \pi}{n_\text{total} - n_\text{warmup}}}\right) \times 0.5 & \text{if } n_\text{warmup} \leq i \leq n_\text{total}.
1391
+ \end{cases}
1392
+ \]
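+ For reference, this schedule corresponds to the following simple function (a direct transcription of the formula above):
+ \begin{verbatim}
+ import math
+ 
+ def learning_rate(i, eta_base, n_warmup, n_total):
+     # Linear warmup followed by cosine annealing.
+     if i <= n_warmup:
+         return i * eta_base / n_warmup
+     progress = (i - n_warmup) / (n_total - n_warmup)
+     return eta_base * (1 + math.cos(progress * math.pi)) * 0.5
+ \end{verbatim}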
1393
+
1394
+ We fix $n_\text{total}$ to 10,000 total steps and $n_\text{warmup}$ to 1,000 warmup steps, with the exception of the GAT-encoder experiments on the PPI dataset, which require a batch size of 1 graph. In this case, we increase the number of total steps to 20,000 and the warmup to 2,000 steps.
1395
+
1396
+
1397
+ The target network parameters $\phi$ are initialized randomly from the same distribution of the online parameters $\theta$ but with a different random seed. The decay parameter $\tau$ is also updated using a cosine schedule starting from an initial value of $\tau_\text{base} = 0.99$ and is computed as
1398
+
1399
+ \[
1400
+ \tau_i \triangleq 1 - \frac{(1-\tau_\text{base})}{2} \times \pa{\cos\pa{\frac{i \times \pi}{n_\text{total}}} + 1}.
1401
+ \]
1402
+
1403
+ These annealing schedules for both $\eta$ and $\tau$ follow the procedure used by \citet{grill2020bootstrap}.
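+ The decay schedule for $\tau$ likewise corresponds to the following direct transcription:
+ \begin{verbatim}
+ import math
+ 
+ def target_decay(i, tau_base, n_total):
+     # Cosine schedule from tau_base (at step 0) up to 1.0 (at the final step).
+     return 1 - (1 - tau_base) / 2 * (math.cos(i * math.pi / n_total) + 1)
+ \end{verbatim}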
1404
+
1405
+ \paragraph{Frozen linear evaluation of embeddings}
1406
+
1407
+ In the linear evaluation protocol, the final evaluation is done by fitting a linear classifier on top of the frozen learned embeddings without flowing any gradients back to the encoder.
1408
+ For the smaller datasets of WikiCS, Amazon Computers/Photos, and Coauthor CS/Physics, we use an $\ell_2$-regularized LogisticRegression classifier from Scikit-Learn \citep{scikit-learn} with the `liblinear' solver, searching the regularization strength over $\{2^{-10}, 2^{-9}, \dots, 2^{9}, 2^{10}\}$.
1409
+
1410
+ For larger PPI and ogbn-arXiv datasets, where the liblinear solver takes too long to converge, we instead perform 100 steps of gradient descent using AdamW with learning rate 0.01, with a smaller hyperparameter search on the weight decay between $\{2^{-10}, 2^{-8}, 2^{-6}, \dots 2^{6}, 2^{8}, 2^{10}\}$.
1411
+
1412
+ In all cases, we $\ell_2$-normalize the frozen learned embeddings over the entire graph before fitting the classifier on top.
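+ A sketch of this evaluation step for the smaller datasets is given below (illustrative; Scikit-Learn's \texttt{C} is the inverse regularization strength, and the exact model-selection protocol is simplified here to cross-validation):
+ \begin{verbatim}
+ import numpy as np
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.model_selection import GridSearchCV
+ 
+ def evaluate_frozen(embeddings, labels, train_idx, test_idx):
+     # l2-normalize the frozen embeddings over the whole graph, then fit a
+     # regularized linear classifier on the training nodes only.
+     z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
+     grid = {'C': [2.0 ** k for k in range(-10, 11)]}
+     clf = GridSearchCV(LogisticRegression(solver='liblinear'), grid)
+     clf.fit(z[train_idx], labels[train_idx])
+     return clf.score(z[test_idx], labels[test_idx])
+ \end{verbatim}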
1413
+
1414
+
1415
+
1416
+ \section{MAG240M Experiment Details}
1417
+ \label{sec:mag_240m_appendix}
1418
+
1419
+ Full implementation and experiment code has been open-sourced as part of the KDD Cup 2021. Key implementation details and hyperparameter descriptions are reproduced below.
1420
+
1421
+ \paragraph{OGB-LSC MAG240M Dataset:} This is a heterogeneous graph introduced for the KDD Cup 2021 \citep{hu2021ogblsc}, comprising 121 million academic papers, 122 million authors, and 26 thousand institutions. Papers are represented by 768-dimensional BERT embeddings~\citep{devlin2018bert}, and the task is to classify the arXiv papers, which make up 1\% of all paper nodes, into one of 153 categories.
1422
+
1423
+
1424
+ \paragraph{Message Passing Neural Networks encoders:} We use a bi-directional version of the standard MPNN~\citep{mpnn} architectures with 4 message passing steps, a hidden size of 256 at each layer, with node and edge update functions represented by Multilayer Perceptrons (MLPs) with 2 hidden layers of size 512 each.
1425
+
1426
+ \paragraph{Node Neighborhood Sampling:} Since we can no longer perform full-graph training, we sample a batch size of 1024 central nodes split across 8 devices, and subsample a fixed-size neighborhood for each. Specifically, we sample a depth-2 neighborhood with different numbers of neighbors sampled per layer depending on the type (paper, author, institution) of each neighbor. We sample up to 80 papers and 20 authors for each paper; and 40 papers and 10 institutions per author.
1427
+
1428
+ \paragraph{Other hyperparameters:} We use an edge masking probability $p_e$ of 0.2 and a feature masking probability $p_f$ of 0.4 for each augmentation. We use a higher decay rate $\tau$, starting at 0.999 and decayed to 1.0 with a cosine schedule. We use the AdamW optimizer with a weight decay of $10^{-5}$, and a learning rate starting at 0.01 and annealed to 0 over the course of learning, with a warmup period equal to 10\% of the total number of learning steps.
1429
+
1430
+ \section{Model Ablations on MAG240M Experiments}
1431
+ \label{sec:gcn_mag}
1432
+
1433
+ In our main experiments on the OGB-LSC MAG240M dataset, we focus on MPNN encoders to \textit{(i)} achieve high accuracy on this challenging dataset, and \textit{(ii)} evaluate the stability of training these more complex models with the \BGRL approach. In this section, we further experiment with simpler GCN encoders and evaluate the benefits of applying \BGRL even on top of these weaker encoders.
1434
+
1435
+ We use a 2-layer GCN encoder, with an embedding size of $128$. All other settings such as learning rate schedules, augmentation parameters, etc. are unchanged.
1436
+
1437
+ In Figure~\ref{fig:gcn_mag}, we see that both \BGRL and \GRACE improve over the performance of a fully supervised approach, with \BGRL learning faster and more stably. Thus the effectiveness of \BGRL even with weaker encoder architectures makes it more applicable in practice.
1438
+
1439
+
1440
+ \begin{figure}[ht]
1441
+ \centering
1442
+ \includegraphics[scale=0.7]{sections/gcn_mag.pdf}
1443
+ \caption{Performance on OGB-LSC MAG240M task, averaged over 3 seeds, using GCN encoders.}\label{fig:gcn_mag}
1444
+ \end{figure}
1445
+
1446
+ \section{Visualization of Scaling Behavior}
1447
+ \label{sec:memory_scatterplot}
1448
+
1449
+
1450
+ \begin{figure}[h!]
1451
+ \centering
1452
+ \includegraphics[scale=0.5]{sections/memory_scatterplot.pdf}
1453
+ \caption{Memory usage of \BGRL and \GRACE across 5 standard datasets.}\label{fig:memory_scatterplot}
1454
+ \end{figure}
1455
+
1456
+
1457
+ In this section, we provide the information contained in Table~\ref{tab:computation} as a scatterplot, to more easily visualize the different scaling properties of \BGRL and \GRACE. We see in Figure~\ref{fig:memory_scatterplot} that the empirical scaling behavior matches the theoretical predictions in Section~\ref{sec:theory_computation}. Note that we do not show the memory usage of \GRACE for the largest dataset, as it runs out of memory there. For the purposes of this visualization, we plot memory usage only as a function of the number of nodes in the graph and ignore the number of edges.
1458
+
1459
+ \section{Frozen Linear Evaluation on MAG240M}
1460
+ \label{sec:frozen_linear_mag}
1461
+
1462
+ We briefly run experiments evaluating \BGRL and \GRACE under the frozen evaluation protocol using an MLP classification layer on the MAG240M dataset.
1463
+ We see that although \GRACE outperforms \BGRL in this setting, both methods perform poorly. In particular, they underperform \LabelProp~\citep{labelprop,hu2021ogblsc}, a simple, parameterless baseline. This, combined with our goal of pushing performance on the competition dataset, motivates our consideration of the semi-supervised learning setting.
1464
+
1465
+ \begin{figure}[ht]
1466
+ \centering
1467
+ \includegraphics[scale=0.7]{sections/frozen_linear_mag.pdf}
1468
+ \caption{Performance on OGB-LSC MAG240M task, averaged over 5 seeds, under frozen evaluation protocol.}\label{fig:ogb_lsc_frozen_linear}
1469
+ \end{figure}
1470
+
1471
+ \end{document}