# Maximally Expressive GNNs For Outerplanar Graphs

Anonymous authors
Paper under double-blind review

## Abstract

We propose a *linear time* graph transformation that enables the Weisfeiler-Leman (WL) algorithm and message passing graph neural networks (MPNNs) to be maximally expressive on *outerplanar* graphs. Our approach is motivated by the fact that most pharmaceutical molecules correspond to outerplanar graphs. Existing research predominantly enhances the expressivity of graph neural networks without specific graph families in mind. This often leads to methods that are impractical due to their computational complexity. In contrast, the restriction to outerplanar graphs enables us to encode the Hamiltonian cycle of each biconnected component in linear time. As the main contribution of the paper we prove that our method achieves maximum expressivity on outerplanar graphs. Experiments confirm that our graph transformation improves the predictive performance of MPNNs on molecular benchmark datasets at negligible computational overhead.

## 1 Introduction

We study graph neural networks (GNNs) for the family of outerplanar graphs and devise a GNN model that can distinguish all non-isomorphic outerplanar graphs after applying a linear time pre-processing step. Morris et al. (2019) and Xu et al. (2019) showed that message passing graph neural networks (MPNNs) have limited *expressivity*, i.e., there exist non-isomorphic graphs on which each MPNN will produce the same embedding. Such graphs exist even within the restricted class of outerplanar graphs (see Figure 1). This led to the development of GNNs that are more expressive than MPNNs, often called *higher-order* GNNs.

However, the increase in expressivity usually comes with a significant increase in computational complexity.

For example, k-GNNs (Morris et al., 2019) have a time complexity of Ω(|V|^k), while other higher-order GNNs count pattern graphs such as cliques (Bodnar et al., 2021b), cycles (Bodnar et al., 2021a;b), and general subgraphs (Bouritsas et al., 2022), which can take time exponential in the pattern size. For certain domains of interest, however, the graph structure can be exploited to build efficient higher-order GNNs.

In this work, we focus on the pharmaceutical domain and on graphs that represent molecules. Between 92% and 97% of the graphs in widely used benchmark datasets in this domain are *outerplanar* (see Table 1). The properties of outerplanar graphs have been exploited by algorithms for graph mining (Horváth et al., 2010) and molecular similarity computation (Schietgat et al., 2013; Droschinsky et al., 2017). However, no linear time GNNs with expressivity guarantees on outerplanar graphs have been proposed. We focus on this class of graphs and devise a linear time transformation that allows MPNNs to become maximally expressive on outerplanar graphs.

We propose to decompose outerplanar graphs into biconnected outerplanar components and trees. Using the fact that each biconnected outerplanar component has a unique Hamiltonian cycle that can be computed in linear time, we split each component into the two directions of the Hamiltonian cycle and prove that MPNNs are maximally expressive on biconnected outerplanar graphs transformed in this way. Taking advantage of the well-known fact that MPNNs are maximally expressive on labeled trees (Arvind et al., 2015; Kiefer, 2020), we extend our result into a linear time graph transformation called Cyclic Adjacency Transform (CAT) that works on all outerplanar graphs. We benchmark CAT with common MPNNs on a variety of molecular graph benchmarks and show that CAT consistently boosts the performance of MPNNs.

Figure 1: The molecules bicyclopentyl (left) and decalin (right), which correspond to outerplanar graphs that cannot be distinguished by 1-WL.

Main contributions. We propose CAT, a linear time graph transformation that renders MPNNs maximally expressive on outerplanar graphs. Experimentally, CAT consistently improves the performance of MPNNs with little increase in runtime.

## 2 Discussion And Related Work

Since the expressivity of MPNNs is bounded by the 1-WL algorithm (Morris et al., 2019; Xu et al., 2019), any pair of non-isomorphic graphs that cannot be distinguished by 1-WL will get mapped to the same embedding by MPNNs. One such pair of graphs are the molecules decalin and bicyclopentyl (see Fig. 1). As these two graphs are outerplanar, it follows that MPNNs are not sufficiently expressive for outerplanar graphs.
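To make this failure mode concrete, the following self-contained check (plain 1-WL on unlabeled, undirected graphs, using networkx; the graph constructions are our own minimal encodings of the two molecules' carbon skeletons) produces the same stable color multiset for both graphs:

```python
import networkx as nx

# Decalin: two fused 6-cycles sharing an edge; bicyclopentyl: two 5-cycles
# joined by a bridge edge. Both are outerplanar graphs on 10 nodes.
decalin = nx.cycle_graph(10)
decalin.add_edge(0, 5)                    # chord fusing the two rings
bicyclopentyl = nx.disjoint_union(nx.cycle_graph(5), nx.cycle_graph(5))
bicyclopentyl.add_edge(0, 5)              # bridge joining the two rings

for G in (decalin, bicyclopentyl):
    colors = {v: 0 for v in G}            # unlabeled start
    for _ in range(10):                   # 1-WL color refinement
        keys = {v: (colors[v], tuple(sorted(colors[u] for u in G[v]))) for v in G}
        ids = {k: i for i, k in enumerate(sorted(set(keys.values())))}
        colors = {v: ids[keys[v]] for v in G}
    print(sorted(colors.values()))        # identical multisets for both graphs
```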

Furthermore, in the graph mining community it is well known that many pharmaceutical molecules are outerplanar (Horváth et al., 2006; Horváth & Ramon, 2010). Outerplanarity has also been discussed in the context of reconstruction with GNNs (Cotta et al., 2021). This motivates the need for GNNs that are highly expressive on outerplanar graphs. Outerplanar graphs have treewidth at most two (Bodlaender, 1998), and Kiefer (2020) showed that 3-WL is sufficiently expressive to distinguish all outerplanar graphs. Hence, any GNN that matches the expressivity of 3-WL, such as 3-IGN (Maron et al., 2019) or 3-GNN (Morris et al., 2019), is capable of solving our main goal of distinguishing all outerplanar graphs. However, the runtime of the 3-WL algorithm is O(n³ log n), which can be infeasible for even medium-sized real-world graphs (Immerman & Lander, 1990; Kiefer, 2020). Similarly, 3-GNN and 3-IGN run in roughly O(n³) time (Maron et al., 2019; Morris et al., 2019). Even when additionally restricting the graph class to *biconnected* outerplanar graphs, MPNNs are not sufficiently expressive. Furthermore, Zhang et al. (2023b) have shown that most GNNs cannot even detect simple properties associated with biconnectivity, such as articulation vertices.

They find that only their distance-based GNN and specific GNNs based on subgraphs (Bevilacqua et al., 2021; Frasca et al., 2022) are able to detect some of these properties. Again, these approaches have at least quadratic worst-case runtime. It is not straightforward to use outerplanarity to efficiently improve the expressivity of GNNs, as even finding a subgraph remains NP-hard for outerplanar graphs (Sysło, 1982). Thus, methods like the graph structural network (Bouritsas et al., 2022) that rely on counting subgraphs remain computationally expensive even on outerplanar graphs. Subgraph GNNs model graphs as a collection of subgraphs (Frasca et al., 2022), which usually requires pre-processing with at least quadratic runtime, depending on the method used to extract subgraphs. For example, Node-delete (Bevilacqua et al., 2021) creates all subgraphs obtained by deleting a single node, which always creates O(|V|²) nodes. k-ego-net (Bevilacqua et al., 2021) extracts the k-hop neighborhood of each node, which for k ≥ 2 can create O(|V|²) nodes in the worst case, for example for star graphs, which are also outerplanar.

Recently, Dimitrov et al. (2023) have proposed PlanE, a GNN architecture that can distinguish all planar graphs in O(|V|²) time. As a consequence, PlanE is also able to distinguish all outerplanar graphs. However, this comes at the cost of (1) runtime, (2) flexibility, and—counter-intuitively—(3) expressivity. (1) PlanE requires a quadratic time pre-processing, whereas CAT+MPNN runs in linear time. (2) PlanE is a GNN architecture while CAT is a graph transformation. This means that CAT can be combined with any GNN, while PlanE requires specialized GNN layers which are not easily combined with other architectures. (3) Without changes to PlanE's pre-processing, PlanE is incapable of operating on graphs with non-planar components, whereas CAT still achieves at least 1-WL expressivity on such graphs. This leads to the counter-intuitive result that vanilla PlanE is incomparable in expressivity to CAT (see Prop. 2 and 3 in the Appendix).

Finally, while there exist many GNNs that are provably more expressive than WL, little is known about the precise class of graphs for which such GNNs are provably maximally expressive. Furthermore, proving an upper bound on the expressivity of an architecture is considered difficult and requires significant effort, as demonstrated by Zhang et al. (2023a). In contrast, we identify outerplanar graphs as a large practical graph family that our proposed method CAT can distinguish.

Figure 2: A graph and both directions of its directed Hamiltonian cycle. Nodes are annotated with their HALs, the distances on the Hamiltonian cycle to their neighbors (Colbourn & Booth, 1981).

## 3 Preliminaries

A *graph* G = (V, E, µ, ν) consists of a set of nodes V, a set of edges E ⊆ V × V, and attributes (also called features) for the nodes, µ: V → X, and edges, ν: E → X, respectively, where X is a set of arbitrary attributes. We refer to an edge from u to v by uv, and in the case of undirected graphs uv = vu. The *in-neighbors* of a node u ∈ V are denoted by Nin(u) = {v | vu ∈ E}. The *out-neighbors* of a node u ∈ V are denoted by Nout(u) = {v | uv ∈ E}, and in the case of undirected graphs, Nin = Nout. In this paper, the input graphs are undirected and are transformed into directed ones. A graph G′ = (V′, E′, µ′, ν′) is a *subgraph* of a graph G, denoted by G′ ⊆ G, iff V′ ⊆ V, E′ ⊆ E, ∀v ∈ V′: µ′(v) = µ(v), and ∀e ∈ E′: ν′(e) = ν(e). A (directed) *cycle* (v1, . . . , vk) is a sequence of k ≥ 3 distinct nodes with vivi+1 ∈ E for all i ∈ {1, . . . , k − 1} and vkv1 ∈ E. A graph is *acyclic* if it does not contain a cycle. Given a graph G, we denote the shortest path distance between two nodes u and v by dG(u, v), or d(u, v) if G is clear from the context. We denote the *diameter* of a graph G by Φ(G) = max{d(u, v) | u, v ∈ V(G)}.

A graph is *outerplanar* if it can be drawn in the plane without edge crossings and with all nodes belonging to the exterior face (see Felsner (2012) for more details). We call an undirected graph with at least three vertices biconnected if the removal of any single node does not disconnect the graph. A *biconnected component* is a maximal biconnected subgraph. We refer to the outerplanar biconnected components of a graph as *blocks*.

Two graphs G and H are *isomorphic* if there exists a bijection ψ: V(G) → V(H) such that ∀u, v ∈ V(G): µ(v) = µ(ψ(v)), uv ∈ E(G) ⇔ ψ(u)ψ(v) ∈ E(H), and ∀uv ∈ E(G): ν(uv) = ν(ψ(u)ψ(v)). We call ψ an *isomorphism* between G and H. An *in-tree* T is a directed, acyclic graph with a distinct *root* having no outgoing edges, in which every other node has exactly one outgoing edge and exactly one path to the root.

Weisfeiler-Leman. The 1-dimensional Weisfeiler-Leman algorithm (WL) assigns colors (usually represented by numbers) to nodes. The color of a node v ∈ V(G) is updated iteratively according to ci+1(v) = h(ci(v), {{(ν(uv), ci(u)) | u ∈ Nin(v)}}), where h is an injective function and c0 = µ. Note that this variant of WL makes use of edge features and works on directed graphs. While traditionally WL is defined for undirected and unlabeled graphs, this is a common assumption in similar lines of work.
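The update rule can be sketched in a few lines of Python; a minimal sketch, assuming nodes, in-neighbor lists, and attribute maps are given (the input representation is our own choice, and the dictionary `compress` plays the role of the injective function h):

```python
def wl_refinement(nodes, in_edges, node_attr, edge_attr, iterations):
    """1-WL color refinement with edge features on a directed graph.

    in_edges[v] lists the in-neighbors of v; node_attr and edge_attr map
    nodes and directed edges (u, v) to hashable, comparable attributes.
    """
    colors = {v: node_attr[v] for v in nodes}
    for _ in range(iterations):
        compress, new_colors = {}, {}
        for v in nodes:
            # Multiset of (edge feature, in-neighbor color) pairs,
            # canonicalized (and made hashable) by sorting.
            msg = tuple(sorted((edge_attr[(u, v)], colors[u]) for u in in_edges[v]))
            # Injectively map each distinct (old color, multiset) pair to a new color.
            new_colors[v] = compress.setdefault((colors[v], msg), len(compress))
        colors = new_colors
    return colors
```

To compare two graphs, one would run the refinement on their disjoint union so that compressed color ids are shared between them.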

The *unfolding tree* F_i^v with height i of a node v ∈ V(G) is defined as the in-tree F_i^v = (v, {{F_{i−1}^u | u ∈ Nin(v)}}), where F_0^v = ({v}, ∅). The unfolding trees F_i^v and F_i^w of two nodes v and w are isomorphic iff the colors of the nodes in iteration i are equal. For more details and a full proof see, e.g., Kriege (2022). The Weisfeiler-Leman algorithm has historically been used as a heuristic for graph isomorphism. Let WL(G) = {{c∞(v) | v ∈ V(G)}} be the multiset of node colors in the stable coloring (Arvind et al., 2015). Two graphs G and H are not isomorphic if WL(G) ≠ WL(H). However, non-isomorphic graphs G and H with WL(G) = WL(H) exist. WL, for example, cannot distinguish the molecular graphs in Figure 1 or a 6-cycle from two triangles.
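As a small illustration of the connection between colors and unfolding trees (the encoding below is ours, and edge features are omitted for brevity), an unfolding tree can be represented as a nested tuple whose root carries the node's label; two nodes then receive equal encodings exactly when their unfolding trees are isomorphic:

```python
def unfolding_tree(v, in_edges, labels, height):
    """Encode the unfolding tree of height `height` rooted at v as
    (label of v, sorted child subtrees).

    Sorting the children canonicalizes the multiset, so two encodings are
    equal iff the corresponding unfolding trees are isomorphic
    (labels are assumed to be mutually comparable).
    """
    if height == 0:
        return (labels[v], ())
    children = tuple(sorted(unfolding_tree(u, in_edges, labels, height - 1)
                            for u in in_edges[v]))
    return (labels[v], children)
```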

Figure 3: Biconnected outerplanar graph G, CAT∗(G), and unfolding tree of node b.

Table 1: Number of graphs and percentage of outerplanar graphs in widely used molecular benchmark datasets.

| Dataset       | #Graphs | Outerplanar |
|---------------|---------|-------------|
| ZINC          | 12000   | 98 %        |
| PCQM-Contact  | 529434  | 98 %        |
| MOLESOL       | 1128    | 97 %        |
| MOLTOXCAST    | 8576    | 96 %        |
| MOLTOX21      | 7831    | 96 %        |
| MOLLIPO       | 4200    | 96 %        |
| MOLCLINTOX    | 1477    | 94 %        |
| NCI-2000      | 250251  | 94 %        |
| peptides-func | 15535   | 93 %        |
| MOLBACE       | 1513    | 93 %        |
| MOLSIDER      | 1427    | 92 %        |
| MOLBBBP       | 2039    | 92 %        |
| MOLHIV        | 41127   | 92 %        |

Table 2: Pre-processing time of CAT on the training splits of all datasets and relative additional training/evaluation time with CAT.

| Dataset    | CAT Runtime | Train + Evaluation Time Increase |
|------------|-------------|----------------------------------|
| MOLESOL    | 2 ± 1 s     | 26 %                             |
| MOLBBBP    | 5 ± 1 s     | 36 %                             |
| MOLSIDER   | 6 ± 1 s     | 21 %                             |
| MOLBACE    | 6 ± 1 s     | 42 %                             |
| MOLLIPO    | 14 ± 1 s    | 38 %                             |
| MOLTOX21   | 15 ± 1 s    | 27 %                             |
| MOLTOXCAST | 16 ± 1 s    | 13 %                             |
| ZINC       | 44 ± 1 s    | 27 %                             |
| MOLHIV     | 152 ± 1 s   | 31 %                             |

Hamiltonian adjacency lists. A Hamiltonian cycle of a graph is a cycle containing each node exactly once. Biconnected outerplanar graphs have a unique Hamiltonian cycle that can be found in linear time (Mitchell, 1979). Hamiltonian adjacency lists (HALs) are derived by annotating each node with the
sorted distances dC to all its neighbors on the two directed variants of the Hamiltonian cycle C. Figure 2 shows a graph annotated with its HALs in both directions of the Hamiltonian cycle. Following the Hamiltonian cycle in one direction and concatenating the HALs gives a HAL sequence S (and a reverse sequence R, for the other direction).
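Assuming the Hamiltonian cycle is already known as an ordered node list, HALs and the HAL sequence are straightforward to compute; a sketch using networkx (function name ours):

```python
import networkx as nx

def hal_sequence(G, cycle):
    """HAL sequence of a biconnected outerplanar networkx graph G whose unique
    Hamiltonian cycle is given as the ordered node list `cycle`.

    The HAL of a node is the sorted tuple of distances, along the directed
    cycle, to each of its neighbors; the HAL sequence concatenates the HALs
    in cycle order. Passing list(reversed(cycle)) yields the reverse sequence R.
    """
    n = len(cycle)
    pos = {v: i for i, v in enumerate(cycle)}
    return tuple(
        tuple(sorted((pos[u] - pos[v]) % n for u in G.neighbors(v)))
        for v in cycle
    )
```

By Lemma 1 below, comparing such sequences up to cyclic shifts and reversal decides isomorphism of biconnected outerplanar graphs.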

A sequence S of length n is a *cyclic shift* of another sequence S′ of length n if there exists an ℓ ∈ ℕ such that Si = S′j for all i ∈ {1, . . . , n}, where j = i + ℓ mod n. The HAL sequence uniquely identifies a biconnected outerplanar graph (if both directions and cyclic shifts are considered):

Lemma 1 (Colbourn & Booth, 1981). Two biconnected outerplanar graphs G and H with HAL and reverse sequences SG, SH and RG, RH are isomorphic iff SG is a cyclic shift of SH or RH.
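The cyclic-shift condition of Lemma 1 can be tested with the standard doubling trick; a sketch (helper names ours) operating on HAL sequences represented as tuples of per-node HAL tuples:

```python
def is_cyclic_shift(s, t):
    """True iff sequence s is a cyclic shift of sequence t.

    Doubling trick: every cyclic shift of t occurs as a contiguous window of t + t.
    """
    if len(s) != len(t):
        return False
    doubled = t + t
    return any(doubled[i:i + len(s)] == s for i in range(len(t)))

def hal_isomorphic(S_G, S_H, R_H):
    # Lemma 1: G and H are isomorphic iff S_G is a cyclic shift of S_H or R_H.
    return is_cyclic_shift(S_G, S_H) or is_cyclic_shift(S_G, R_H)
```

The naive window comparison is quadratic; a linear-time variant would run a string-matching algorithm such as Knuth-Morris-Pratt over the doubled sequence.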

## 4 Identifying Outerplanar Graphs Using Weisfeiler-Leman

We develop a graph transformation called the *cyclic adjacency transform* (CAT) that enables WL to distinguish all outerplanar graphs. We first introduce CAT∗, enabling WL to distinguish any pair of non-isomorphic biconnected outerplanar graphs, and then extend it to all outerplanar graphs.

In CAT∗ (see Section 4.1), nodes are duplicated to represent the Hamiltonian cycle in both directions. We annotate edges not in the Hamiltonian cycle with the distance between their endpoints on the Hamiltonian cycle. This allows the Weisfeiler-Leman algorithm to encode HAL sequences in the unfolding trees of the nodes and, in turn, to distinguish pairs of non-isomorphic biconnected outerplanar graphs.

To extend our transformation to all outerplanar graphs (see Section 4.2), we need to ensure that the biconnected components keep their unique encoding. It should also be ensured that non-isomorphic graphs with the same biconnected components (but arranged or rotated differently) can be distinguished. We address this by introducing articulation and block pooling vertices. These nodes contain all the information about their respective blocks and their position in these blocks. Together with nodes outside of blocks, this forms a forest which encodes the entire original graph. As it is known that WL is maximally expressive on forests, it follows that WL is maximally expressive on such graphs.

## 4.1 Identifying Biconnected Outerplanar Graphs Using Weisfeiler-Leman

We first present a graph transformation called CAT∗ that allows the Weisfeiler-Leman algorithm to distinguish any two non-isomorphic *biconnected* outerplanar graphs. Figure 3 shows an example of CAT∗. Note that CAT∗(G) consists of two disjoint copies of G, with directed and annotated edges. We first describe the transformation informally for ease of understanding, followed by the formal definition.

We start with an empty graph and add the (unique) Hamiltonian cycle of the original graph twice, directed in opposite ways (steps 1 and 2). Then, we add the remaining edges of the original graph to both cycles, directed in both directions each (step 3). Finally, we set node and edge features to the original features, and extend edge features with the distance of their endpoints in the Hamiltonian cycle (step 4).

Definition 1. The transformation CAT∗(G) = G′ maps a biconnected outerplanar graph G = (V, E, µ, ν) to a new graph G′ = (V′, E′, µ′, ν′) as follows:

1. Let C = (v1, . . . , vn) be the (directed) Hamiltonian cycle of G and ←C be its reverse.

2. Add node-disjoint copies of C and ←C to an empty graph G′ and set ν′(e) = (1, ν(e)) for all e ∈ E′.

3. Let D ⊆ E be the edges of G *not* on the (undirected) Hamiltonian cycle. For each edge in D, add edges in both directions to G′ for the copies of C and ←C: E′ = E′ ∪ Ed ∪ E←d, with Ed = ⋃vivj∈D {v′iv′j, v′jv′i} and E←d = ⋃vivj∈D {v″iv″j, v″jv″i}, where v′i denotes the copy of vi in C (resp. v″i in ←C).

4. For all v′i, v″i ∈ V′ set µ′(v′i) = µ′(v″i) = µ(vi). For all v′iv′j ∈ Ed set ν′(v′iv′j) = (dC(vj, vi), ν(vivj)), and for all v″iv″j ∈ E←d set ν′(v″iv″j) = (d←C(vj, vi), ν(vjvi)).
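For concreteness, the construction can be sketched with networkx under one plausible reading of the distance annotations in step 4 (the function name `cat_star`, the node tags, and the `dist` edge key are our own; we assume G carries no conflicting `dist` attribute):

```python
import networkx as nx

def cat_star(G, cycle):
    """Sketch of CAT* for a biconnected outerplanar networkx graph G whose
    unique Hamiltonian cycle is the ordered node list `cycle`.

    Returns a DiGraph with two disjoint copies of G, one per direction of the
    Hamiltonian cycle. Cycle edges carry distance 1 (step 2); chords are added
    in both directions and carry the cyclic distance of their endpoints in the
    respective direction (steps 3-4, with original features copied verbatim).
    """
    n = len(cycle)
    pos = {v: i for i, v in enumerate(cycle)}
    H = nx.DiGraph()
    for tag, step in (("fwd", 1), ("rev", -1)):        # C and its reverse
        for v in cycle:                                # copies keep node features
            H.add_node((tag, v), **G.nodes[v])
        for i, v in enumerate(cycle):                  # directed Hamiltonian cycle
            w = cycle[(i + step) % n]
            H.add_edge((tag, v), (tag, w), dist=1, **G.edges[v, w])
        for u, v in G.edges:                           # chords of the cycle
            if (pos[u] - pos[v]) % n in (1, n - 1):
                continue                               # already added as cycle edge
            d = (step * (pos[v] - pos[u])) % n         # distance u -> v in this direction
            H.add_edge((tag, u), (tag, v), dist=d, **G.edges[u, v])
            H.add_edge((tag, v), (tag, u), dist=n - d, **G.edges[u, v])
    return H
```

An MPNN run on such a transformed graph sees, at every node, the 1-annotated cycle edge plus the annotated chords from which the node's HAL can be read off, mirroring the reconstruction argument in the proof of Theorem 1 below.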

Using CAT∗ we prove our first main result.

Theorem 1. Two biconnected outerplanar graphs G and H are isomorphic if and only if WL(CAT∗(G)) = WL(CAT∗(H)).

Proof. Two graphs are distinguished by WL if and only if the multisets of node colors of their stable colorings differ. Trivially, |V(G)| ≠ |V(H)| ⇒ |V(CAT∗(G))| ≠ |V(CAT∗(H))| ⇒ WL(CAT∗(G)) ≠ WL(CAT∗(H)), so we only focus on graphs with |V(G)| = |V(H)|. Two nodes only get the same color if their unfolding trees are isomorphic. The first number in the HAL of each node is always 1, so it can be ignored, and the last number is always |V(G)| − 1, so it can simply be reconstructed from |V(CAT∗(G))|. The rest of the HAL sequence and the node labels of G can be reconstructed from the unfolding tree of any node in CAT∗(G): trivially, each node has two direct neighbors in the Hamiltonian cycle. In the unfolding tree, these are the parent and the single child with the 1-annotated edge. All other neighbors in the HAL can be reconstructed by looking at the weights of the edges that do not have weight 1. Figure 4 shows an example. Hence, for any two biconnected outerplanar graphs with n nodes, the Weisfeiler-Leman algorithm will distinguish them after at most n iterations iff they are non-isomorphic: since the HAL sequence is encoded in the unfolding trees from all starting points (cyclic shift) and in both directions (reverse direction), this decides isomorphism by Lemma 1.

## 4.2 Extending CAT∗ To All Outerplanar Graphs

While CAT∗ is defined for single blocks (biconnected outerplanar components), an outerplanar graph can contain multiple blocks as well as additional connections. We define the CAT transformation by applying CAT∗ to the blocks of the graph and adding additional nodes and edges, which enable WL to distinguish any two non-isomorphic outerplanar graphs. We first give an informal description to provide an intuition for the various steps before stating the formal definition.

Figure 4: One part of the CAT∗ transformation of the graph from Figure 3 and an example unfolding tree of one of its nodes, from which the HAL sequence of the original graph can be reconstructed.
Starting from an empty graph, add the graph induced by all edges not in blocks and nodes in more than one block (steps 1 and 2). For each block, add the result of CAT∗ applied to the block (step 3.1). We refer to nodes created in this step as Hamiltonian cycle nodes. Add pooling nodes connecting the two Hamiltonian cycle nodes corresponding to the same original node (step 3.2). Add a node for each block (block nodes) and connect it to the pooling nodes of that block (step 3.3). Relabel the nodes from step 2 that belong to at least one block (articulation nodes) and connect them to the pooling nodes corresponding to the same original node (step 3.6). Finally, create a global (block) pooling node and connect it to the block nodes (step 4). For each node we add, its initial features (if present) are extended by a label indicating its type (Hamiltonian cycle node, pooling node, articulation node, etc.).

Definition 2. The transformation CAT(G) = G′ maps a graph G to a new graph G′ as follows:

1. Let B1, . . . , Bℓ be the blocks of G and let F be the graph induced by the edges of G that are not in any block, plus the nodes that are present in more than one block. Let {⊥, □, ▷◁, ⋆, △} be distinct node labels not in X.

2. Add F to G′ with labels µ′(v) = (⊥, µ(v)) for all v ∈ F.

3. For each block Bi in G:

   3.1. Let B′i, ←B′i be the two connected components in CAT∗(Bi). Add B′i and ←B′i to G′.

   3.2. For all pairs (v, ←v) of corresponding nodes in B′i and ←B′i, add a node pv with µ′(pv) = (⋆, µ(v)) and edges pvv and pv←v to G′.

   3.3. Add a node bi to G′ with µ′(bi) = □. For all v ∈ V(Bi) add an edge bipv.

   3.4. Let Ai = V(Bi) ∩ V(F) be the nodes of Bi in F.

   3.5. Let γi: Ai → V(B′i) map nodes of F to their copy in B′i and ←γi to the copy in ←B′i.

   3.6. For each a ∈ Ai, let µ′(a) = (▷◁, µ(a)) and add edge pγi(a)a to G′.

4. Add a node g with µ′(g) = △ to G′ and for all nodes bi, add an edge gbi to G′.

5. Let CAT(G) = G′.
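The scaffolding of Definition 2 translates directly into code. Below is a simplified networkx sketch (all names ours); to keep it short, the CAT∗ encoding inside each block is replaced by a single plain copy, so only the tree/pooling/block structure of the definition is shown:

```python
import networkx as nx

def cat_scaffold(G):
    """Simplified sketch of Definition 2 on an undirected networkx graph.

    Blocks are biconnected components with at least three nodes; the labels
    ⊥, □, ▷◁, ⋆, △ are encoded in the 'kind' attribute. The CAT* encoding of
    each block is omitted: every block contributes a single plain copy.
    """
    blocks = [B for B in nx.biconnected_components(G) if len(B) >= 3]
    in_block = set().union(*blocks)
    cut = set(nx.articulation_points(G))
    H = nx.Graph()
    # Steps 1-2: the graph F, i.e., the tree part plus articulation nodes.
    for v in G.nodes:
        if v not in in_block or v in cut:
            H.add_node(("F", v), kind="articulation" if v in in_block else "tree")
    for u, v in G.edges:
        if not any(u in B and v in B for B in blocks):
            H.add_edge(("F", u), ("F", v))
    g = "global"
    H.add_node(g, kind="global")                  # step 4: label △
    for i, B in enumerate(blocks):
        b = ("block", i)
        H.add_node(b, kind="block")               # label □
        H.add_edge(g, b)
        for v in B:
            cv, pv = ("copy", i, v), ("pool", i, v)
            H.add_node(cv, kind="cycle")          # stand-in for the CAT* copies
            H.add_node(pv, kind="pool")           # step 3.2: label ⋆
            H.add_edge(pv, cv)
            H.add_edge(b, pv)                     # step 3.3
            if ("F", v) in H:
                H.add_edge(pv, ("F", v))          # step 3.6
        for u, v in G.subgraph(B).edges:
            H.add_edge(("copy", i, u), ("copy", i, v))
    return H
```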

An example of the CAT transformation can be seen in Figure 5. Appendix D contains additional visualizations of the transformation on real-life molecular graphs.

Theorem 2. Two outerplanar graphs G and H are isomorphic if and only if WL(CAT(G)) = WL(CAT(H)).

Proof. Following Theorem 1, each block will be uniquely identified by WL. Since the additional nodes have distinct labels, they will not cause WL to falsely report two blocks as isomorphic when they are not. The information about the entire HAL sequence of each block is stored in the block nodes b. The pooling nodes p connect the block and block nodes to the rest of the graph (through the articulation nodes a), determining the orientation of the block. Note that the graph returned by CAT without the CAT∗ blocks and the global pooling node g is a tree. Relying on the labels of the pooling and block nodes, we can reconstruct the original graph from this tree. As WL can distinguish non-isomorphic labeled trees (Arvind et al., 2015; Kiefer, 2020), it can thus distinguish non-isomorphic outerplanar graphs using CAT. For the other direction, note that CAT is permutation-invariant: for two isomorphic graphs G and H, the graphs CAT(G) and CAT(H) are isomorphic and WL will give the same coloring for both.

Importantly, we can compute CAT(G) in linear time. The computational complexity is dominated by the computation of the blocks (Tarjan, 1972) and their Hamiltonian cycles (Mitchell, 1979), which both require linear time. Note that we only add a linear number of nodes and edges. From Morris et al. (2019) and Xu et al. (2019) it follows that MPNNs that are as expressive as 1-WL can distinguish CAT(G) and CAT(H) for non-isomorphic outerplanar graphs G and H.

Figure 5: A graph and its CAT transformation. Original node labels are represented by letters, edge and node labels from the CAT transform are represented by colors. Note that CAT(G) looks like a cat.

Note that our proof used an important property of the WL algorithm: adding uniquely labeled nodes and edges to WL-distinguishable graphs never leads to WL-indistinguishable graphs. We use this property to add a global pooling node in step 4 of CAT, which is connected to all block nodes. This allows messages to be passed between block nodes in fewer iterations in the subsequent MPNN step.

CAT can also be applied to non-outerplanar graphs. In this case, our graph transformation performs the steps described in Definition 2. However, if a non-outerplanar biconnected component Bi is encountered, only one copy B′i is created in CAT(G) and its vertices are connected to the corresponding pooling nodes. An example is depicted in Appendix B. While this never reduces expressivity, it is also not guaranteed to improve expressivity on non-outerplanar graphs. Note that it can be determined in linear time whether a block is outerplanar while attempting to compute its Hamiltonian cycle (Mitchell, 1979). Hence, the CAT transformation always requires only linear time.

## 4.3 Influence Of CAT On Graph Connectivity

As poor graph connectivity may negatively influence the predictive performance of GNNs (Alon & Yahav, 2021), we investigate the effects of CAT on different measures of graph connectivity such as the diameter Φ(G) of a graph. Here, we refer to the shortest path distance between two nodes as the *distance* between them.

Observation 1. For a block B of a graph G, it holds that Φ(CAT(B)) ≤ 4.

Proof. Let u, v ∈ V(CAT(B)). By definition, all nodes in CAT(B) are either from a Hamiltonian cycle created by CAT∗, a pooling node, or a block node. If both nodes are from a Hamiltonian cycle, then there is a path u, pu, b, pv, v between them, where pu, pv are pooling nodes and b is the block node. Hence, dCAT(G)(u, v) ≤ 4. If u or v is a pooling or a block node, then the above path implies that dCAT(G)(u, v) < 4.

Table 3: Diameter Φ, average pair-wise effective resistance ρ, and maximum pair-wise effective resistance max ρ with and without CAT; mean ± standard deviation across all graphs in the datasets. In all cases, smaller is better.

| Dataset    | Φ(G)        | Φ(CAT(G))  | ρ(G)      | ρ(CAT(G)) | max ρ(G)    | max ρ(CAT(G)) |
|------------|-------------|------------|-----------|-----------|-------------|---------------|
| ZINC       | 12.5 ± 2.6  | 9.9 ± 1.6  | 4.0 ± 0.7 | 2.6 ± 0.4 | 10.0 ± 2.0  | 7.7 ± 1.9     |
| MOLESOL    | 6.6 ± 3.3   | 6.9 ± 3.8  | 2.3 ± 1.0 | 2.1 ± 0.9 | 5.5 ± 2.3   | 6.0 ± 2.8     |
| MOLTOXCAST | 8.5 ± 4.7   | 8.4 ± 4.0  | 3.0 ± 1.5 | 2.6 ± 1.3 | 7.2 ± 4.3   | 7.4 ± 3.9     |
| MOLTOX21   | 8.8 ± 4.6   | 8.7 ± 4.0  | 3.1 ± 1.5 | 2.7 ± 1.3 | 7.5 ± 4.2   | 7.7 ± 3.9     |
| MOLLIPO    | 13.8 ± 4.0  | 9.9 ± 2.1  | 4.3 ± 1.2 | 2.6 ± 0.5 | 10.7 ± 3.4  | 7.9 ± 2.3     |
| MOLBACE    | 15.1 ± 3.2  | 11.5 ± 2.8 | 5.0 ± 1.3 | 2.9 ± 0.7 | 12.5 ± 3.4  | 9.1 ± 2.6     |
| MOLSIDER   | 12.6 ± 11.8 | 10.4 ± 7.3 | 4.1 ± 3.8 | 2.9 ± 2.2 | 10.4 ± 11.0 | 8.9 ± 6.8     |
| MOLBBBP    | 10.7 ± 3.7  | 9.1 ± 2.6  | 3.4 ± 1.1 | 2.4 ± 0.6 | 8.3 ± 3.8   | 7.5 ± 2.5     |
| MOLHIV     | 11.9 ± 5.2  | 9.9 ± 3.8  | 3.9 ± 1.7 | 2.7 ± 1.2 | 9.3 ± 4.7   | 8.2 ± 3.8     |

Observation 2. Let Bi and Bj be two blocks of a graph G. In CAT(G), the maximum distance between any node in CAT(Bi) and any node in CAT(Bj) is 6.

Proof. Let u ∈ V(CAT(Bi)) and v ∈ V(CAT(Bj)). If Bi = Bj, then Observation 1 implies dCAT(G)(u, v) ≤ 4. If Bi ≠ Bj, then there exists a path u, pu, bi, g, bj, pv, v, where pu, pv are pooling nodes, bi, bj are the block nodes of Bi and Bj, and g is the global block pooling node. Thus, dCAT(G)(u, v) ≤ 6.

Observations 1 and 2 show that for some subgraphs, CAT provides constant upper bounds on the diameter. On the graph level, CAT can only lead to a small increase of the diameter.

Proposition 1. For an outerplanar graph G, Φ(CAT(G)) ≤ Φ(G) + 7.

Proof sketch. To prove the proposition, we bound the distance between any pair of nodes in CAT(G) by a case analysis based on the type of the nodes. We defer the full proof to Appendix A.

In most practical cases, the short-cutting inside or between blocks leads to CAT reducing the graph diameter (see Observations 1 and 2). In Table 3, we demonstrate this on molecular benchmark datasets. Besides the diameter, another useful graph connectivity measure is the *effective resistance*. The notion of effective resistance originates in electrical engineering (Kirchhoff, 1847) and has implications for several graph properties. For example, the effective resistance between two nodes is proportional to the commute time between them (Chandra et al., 1989). Intuitively, a large effective resistance between two nodes suggests that information propagation between the nodes is hindered. Recently, effective resistance has in fact been linked to *over-squashing* (Black et al., 2023) in GNNs, a negative effect that leads to long-range interactions having little impact on the predictions of a GNN. Effective resistance as introduced by Kirchhoff (1847) is naturally only defined for undirected graphs. As CAT produces directed graphs, we therefore use an extension of effective resistance introduced by Young et al. (2015) that is applicable to directed graphs. We refer to Young et al. (2015) for more details. In Table 3 we demonstrate that CAT reduces the pair-wise effective resistance on molecular benchmark datasets.
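For reference, the classical undirected effective resistance can be computed from the pseudoinverse of the graph Laplacian; the sketch below illustrates this standard computation (the directed extension of Young et al. (2015) used for Table 3 follows a similar pattern but is not reproduced here):

```python
import numpy as np
import networkx as nx

def effective_resistances(G):
    """All pair-wise effective resistances of a connected undirected graph.

    Uses R(u, v) = L+[u, u] + L+[v, v] - 2 * L+[u, v], where L+ is the
    Moore-Penrose pseudoinverse of the graph Laplacian.
    """
    nodes = list(G.nodes)
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    R = d[:, None] + d[None, :] - 2 * Lp
    return nodes, R  # R[i, j] is the resistance between nodes[i] and nodes[j]

# Example: maximum and average pair-wise resistance of a 6-cycle.
nodes, R = effective_resistances(nx.cycle_graph(6))
print(R.max(), R[np.triu_indices_from(R, k=1)].mean())
```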

## 5 Experimental Evaluation

Table 4: Predictive performance of MPNNs with and without CAT. Bold marks each result of an MPNN with CAT that outperforms the same MPNN without CAT.

| Model   | ZINC (MAE ↓)      | ZINC250k (MAE ↓)  | MOLHIV (ROC-AUC ↑) | MOLBACE (ROC-AUC ↑) | MOLBBBP (ROC-AUC ↑) |
|---------|-------------------|-------------------|--------------------|---------------------|---------------------|
| GIN     | 0.168 ± 0.007     | 0.033 ± 0.003     | 77.9 ± 1.0         | 74.6 ± 3.2          | 66.0 ± 2.1          |
| CAT+GIN | **0.101 ± 0.004** | 0.034 ± 0.003     | 76.7 ± 1.8         | **79.5 ± 2.5**      | **67.2 ± 1.8**      |
| GCN     | 0.184 ± 0.013     | 0.067 ± 0.005     | 76.7 ± 1.4         | 77.9 ± 1.7          | 66.1 ± 2.4          |
| CAT+GCN | **0.123 ± 0.008** | **0.034 ± 0.003** | **77.1 ± 1.6**     | **79.2 ± 1.5**      | **68.3 ± 1.7**      |
| GAT     | 0.375 ± 0.013     | 0.103 ± 0.004     | 76.6 ± 2.0         | 81.7 ± 2.3          | 66.2 ± 1.4          |
| CAT+GAT | **0.201 ± 0.022** | **0.046 ± 0.004** | 75.3 ± 1.6         | 79.3 ± 1.6          | 66.0 ± 1.9          |

| Model   | MOLSIDER (ROC-AUC ↑) | MOLESOL (RMSE ↓)  | MOLTOXCAST (ROC-AUC ↑) | MOLLIPO (RMSE ↓) | MOLTOX21 (ROC-AUC ↑) |
|---------|----------------------|-------------------|------------------------|------------------|----------------------|
| GIN     | 56.6 ± 1.0           | 1.105 ± 0.077     | 65.3 ± 0.6             | 0.717 ± 0.016    | 75.8 ± 0.7           |
| CAT+GIN | **58.2 ± 0.9**       | **0.985 ± 0.055** | **65.6 ± 0.5**         | 0.798 ± 0.031    | 74.8 ± 1.0           |
| GCN     | 56.7 ± 1.5           | 1.053 ± 0.087     | 64.4 ± 0.4             | 0.748 ± 0.018    | 76.4 ± 0.3           |
| CAT+GCN | **57.9 ± 1.8**       | **1.006 ± 0.036** | **66.2 ± 0.8**         | 0.771 ± 0.023    | 74.9 ± 0.8           |
| GAT     | 58.4 ± 1.0           | 1.037 ± 0.063     | 63.8 ± 0.8             | 0.728 ± 0.024    | 76.3 ± 0.6           |
| CAT+GAT | 58.3 ± 1.3           | 1.09 ± 0.048      | **64.5 ± 0.8**         | 0.754 ± 0.021    | 75.4 ± 0.7           |

We investigate whether our proposed method CAT can improve the predictive performance of MPNNs on molecular benchmark datasets (our code can be found at https://anonymous.4open.science/r/outerplanarGNNs). We utilize three commonly used MPNNs: GIN (Xu et al., 2019), GCN (Kipf & Welling, 2017), and GAT-v2 (Veličković et al., 2018; Brody et al., 2022). We train on the commonly used
ZINC (Gómez-Bombarelli et al., 2018; Sterling & Irwin, 2015) and MOLHIV (Hu et al., 2020) datasets, which contain graphs representing molecules. We supplement these with 7 small datasets (see Table 4) from the OGB collection (Hu et al., 2020). In addition to the ZINC dataset with 12k graphs, we also use the larger ZINC250k variant with 250k graphs. In total, we train three MPNNs on ten datasets with and without CAT.

For each configuration, we separately tune hyperparameters on the validation set, train a model with the best hyperparameters ten times on the training set, and evaluate it on the test set. The only exception is ZINC250k, where we evaluate the final hyperparameter configuration five times due to the large dataset size. For each dataset we report the mean and standard deviation of the most common evaluation metric on the test set in the epoch with the best validation performance. For ZINC we use a batch size of 128 and an initial learning rate of 10⁻³ that is halved if the validation metric does not improve for 20 epochs. Training stops after 500 epochs or once the learning rate drops below 10⁻⁵. For all other datasets we train with a fixed learning rate for 100 epochs and a batch size of 128. We use the same hyperparameter grid for all models and provide more details in Appendix C. Besides measuring the predictive performance, we also measure the time needed for applying CAT (averaged over ten runs), and the training and evaluation time for GIN and GIN+CAT with the same hyperparameters on all datasets (averaged over five runs). Finally, we report the values for the diameters and effective resistances as described in Section 4.3.
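Since CAT is a plain graph transformation, it slots into standard pipelines as a one-off pre-processing step. A usage sketch with PyTorch Geometric, where `cat_transform` is a placeholder for an implementation of Definition 2 on `Data` objects (it is not the paper's released code):

```python
from torch_geometric.datasets import ZINC

def cat_transform(data):
    # Placeholder: a real implementation would rebuild edge_index, x, and
    # edge_attr of the Data object following Definition 2.
    return data

# PyG applies `pre_transform` once and caches the result on disk, so the
# linear-time CAT pre-processing is paid only when the dataset is first built.
train_data = ZINC(root="data/ZINC", subset=True, split="train",
                  pre_transform=cat_transform)
```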

Results. Table 2 shows the pre-processing time of CAT. Note that this is the performance of running CAT on only a single CPU core; faster runtimes are possible by simply parallelizing different graphs over different cores. This negligible runtime of around 5 ms per graph on MOLHIV allows the transformation to be applied even in realistic high-throughput screening applications (Krasoulis et al., 2022). Training and prediction time on CAT-transformed graphs increases by 29% on average.

Table 4 shows the predictive performance of all models. Note that our baseline models obtain very strong results, often surpassing the performance of (higher-order) GNNs reported in the literature, and that we train each MPNN and MPNN+CAT with exactly the same sets of hyperparameters. Overall, CAT improves the predictive performance of GIN and GCN in the majority of datasets (6 / 9 and 7 / 9, respectively); for GIN and GCN, performance increases reliably on all datasets except MOLLIPO and MOLTOX21 (and, for GIN, MOLHIV). Surprisingly, CAT does not work well with GAT and only improves its performance in 2 / 9 datasets. Most notably, on ZINC, CAT achieves very strong results, boosting the predictive performance of MPNNs by between 33% (GCN) and 46% (GAT). This is only surpassed by higher-order GNNs such as CW Networks (Bodnar et al., 2021a), which obtain an MAE of 0.079 ± 0.006 at the cost of potentially exponential pre-processing runtime due to enumerating cycles in the graph. Table 3 shows that CAT reduces both the graph diameter and the maximum pair-wise resistance on most datasets. Furthermore, CAT reduces the average pair-wise resistance on all datasets. This suggests that CAT is effective at improving graph connectivity in real-life molecular graphs.

## 6 Conclusion

We proposed CAT, a graph transformation that enables the Weisfeiler-Leman algorithm to be maximally expressive on outerplanar graphs. We rely on the fact that biconnected outerplanar graphs can be uniquely identified by their Hamiltonian adjacency list sequences, which CAT encodes in unfolding trees. By combining MPNNs with CAT, we enable them to distinguish all outerplanar graphs as well. We achieved promising empirical results on standard molecular benchmark datasets, where CAT improved the performance of GIN and GCN in most cases, while for GAT we could not observe this benefit. Computing CAT takes linear time and our implementation of CAT typically runs in the order of seconds. Motivated by the recent interest in the over-squashing phenomenon, we also studied the effect of CAT on graph connectivity. We proved that in the worst case, CAT increases the diameter of outerplanar graphs by a small additive constant.

However, inspecting CAT on real-world data, we find that the diameter decreases most of the time. Similarly, we observed that the maximum and average pair-wise effective resistance, which are associated with over-squashing, typically decrease after applying CAT.

In future work, we intend to further extend our approach to more general graph families. Promising candidates are k-outerplanar graphs, which are known to capture even more molecular graphs (Horváth et al., 2010). A major challenge in going beyond outerplanar graphs is that non-outerplanar biconnected components can have multiple or even no Hamiltonian cycles, making an extension of CAT to such graphs non-trivial. A possible approach could be to split a graph into trees and components with unique Hamiltonian cycles. We hypothesize that such a transformation would lead to maximally expressive MPNNs at the cost of potentially exponential pre-processing time, depending on the graph type.

## References

Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In ICLR, 2021.

Vikraman Arvind, Johannes Köbler, Gaurav Rattan, and Oleg Verbitsky. On the power of color refinement.

In *Fundamentals of Computation Theory*, 2015.

Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. In *ICLR*,
2021.

Lukas Biewald. Experiment tracking with weights and biases, 2020. URL https://www.wandb.com/. Software available from wandb.com.

Mitchell Black, Zhengchao Wan, Amir Nayyeri, and Yusu Wang. Understanding oversquashing in GNNs through the lens of effective resistance. In *ICML*, 2023.

Hans L. Bodlaender. A partial k-arboretum of graphs with bounded treewidth. *Theoretical Computer* Science, 209(1-2):1–45, 1998.

Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yu Guang Wang, Pietro Liò, Guido Montúfar, and Michael Bronstein. Weisfeiler and Lehman go cellular: CW networks. In *NeurIPS*, 2021a.

Cristian Bodnar, Fabrizio Frasca, Yu Guang Wang, Nina Otter, Guido Montufar, Pietro Lio, and Michael Bronstein. Weisfeiler and Lehman go topological: Message passing simplicial networks. In *ICML*, 2021b.

Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. *Transactions on Pattern Analysis and Machine* Intelligence, 45(1):657–668, 2022.

Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? In *ICLR*, 2022.

Ashok K Chandra, Prabhakar Raghavan, Walter L Ruzzo, and Roman Smolensky. The electrical resistance of a graph captures its commute and cover times. In *STOC*, 1989.

Charles J. Colbourn and Kellogg S. Booth. Linear time automorphism algorithms for trees, interval graphs, and planar graphs. *SIAM Journal on Computing*, 10(1):203–225, 1981.

Leonardo Cotta, Christopher Morris, and Bruno Ribeiro. Reconstruction for powerful graph representations. In *NeurIPS*, 2021.

Radoslav Dimitrov, Zeyang Zhao, Ralph Abboud, and Ismail Ceylan. PlanE: Representation learning over planar graphs. In *NeurIPS*, 2023.

Andre Droschinsky, Nils Kriege, and Petra Mutzel. Finding largest common substructures of molecules in quadratic time. In *International Conference on Current Trends in Theory and Practice of Informatics*,
2017.

Stefan Felsner. *Geometric graphs and arrangements: some chapters from combinatorial geometry*. Springer, 2012.

Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR
Workshop on Representation Learning on Graphs and Manifolds, 2019.

Fabrizio Frasca, Beatrice Bevilacqua, Michael M. Bronstein, and Haggai Maron. Understanding and extending subgraph GNNs by rethinking their symmetries. In *NeurIPS*, 2022.

Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. *ACS Central Science*, 2018.

Tamás Horváth and Jan Ramon. Efficient frequent connected subgraph mining in graphs of bounded treewidth. *Theoretical Computer Science*, 411(31-33):2784–2797, 2010.

Tamás Horváth, Jan Ramon, and Stefan Wrobel. Frequent subgraph mining in outerplanar graphs. In KDD,
2006.

Tamás Horváth, Jan Ramon, and Stefan Wrobel. Frequent subgraph mining in outerplanar graphs. *Data* Mining and Knowledge Discovery, 21(3):472–508, 2010.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for machine learning on graphs. In *NeurIPS*, 2020.

Neil Immerman and Eric Lander. *Describing graphs: A first-order approach to graph canonization*. Yale University Press, 1990.

Sandra Kiefer. *Power and limits of the Weisfeiler-Leman algorithm*. PhD thesis, RWTH Aachen University, 2020.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.

Gustav Kirchhoff. Ueber die auflösung der gleichungen, auf welche man bei der untersuchung der linearen vertheilung galvanischer ströme geführt wird. *Annalen der Physik*, 148(12):497–508, 1847.

Agamemnon Krasoulis, Nick Antonopoulos, Vassilis Pitsikalis, and Stavros Theodorakis. DENVIS: Scalable and high-throughput virtual screening using graph neural networks with atomic and surface protein pocket features. *Journal of Chemical Information and Modeling*, 62(19):4642–4659, 2022.

Nils M Kriege. Weisfeiler and Leman go walking: Random walk kernels revisited. In *NeurIPS*, 2022.

Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In *NeurIPS*, 2019.

Sandra L. Mitchell. Linear algorithms to recognize outerplanar and maximal outerplanar graphs. Information Processing Letters, 9(5):229–232, 1979.

Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In *AAAI*, 2019.

Leander Schietgat, Jan Ramon, and Maurice Bruynooghe. A polynomial-time maximum common subgraph algorithm for outerplanar graphs and its application to chemoinformatics. *Annals of Mathematics and* Artificial Intelligence, 69(4):343–376, 2013.

Teague Sterling and John J. Irwin. ZINC 15 – ligand discovery for everyone. *Journal of Chemical Information and Modeling*, 2015.

Maciej M. Sysło. The subgraph isomorphism problem for outerplanar graphs. *Theoretical Computer Science*,
17(1):91–97, 1982.

Robert Endre Tarjan. Depth-first search and linear graph algorithms. *SIAM Journal of Computing*, 1(2):
146–160, 1972.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio.

Graph Attention Networks. In *ICLR*, 2018.

Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019.

George Forrest Young, Luca Scardovi, and Naomi Ehrich Leonard. A new notion of effective resistance for directed graphs—part i: Definition and properties. *Transactions on Automatic Control*, 61(7):1727–1736, 2015.

Bohang Zhang, Guhao Feng, Yiheng Du, Di He, and Liwei Wang. A complete expressiveness hierarchy for subgraph GNNs via subgraph Weisfeiler-Lehman tests. In *ICML*, 2023a.

Bohang Zhang, Shengjie Luo, Liwei Wang, and Di He. Rethinking the expressive power of GNNs via graph biconnectivity. In *ICLR*, 2023b.

## A Proof Of Proposition 1

We prove Proposition 1 which states that Φ(CAT(G)) ≤ Φ(G) + 7 for every outerplanar graph G.

Proof. Let u, v ∈ V(CAT(G)) such that dCAT(G)(u, v) = Φ(CAT(G)). We call a node a *tree node* if it was not part of a block in G and was not created by CAT. A node that is not a tree node is either a Hamiltonian cycle node, a pooling node, a block pooling node, or a global block pooling node.

Case 1: Neither u nor v is a tree node. By Observation 2: dCAT(G)(u, v) ≤ 6.

Case 2: Node u is a tree node and v is not. Let a ∈ V(G) be the closest articulation node to u in CAT(G). Then, there is a path of length dG(u, a) in CAT(G) from u to a. We can extend this path by one node to reach a pooling node. By Definition 2, there exists a path of length at most 6 from this pooling node to v. Thus dCAT(G)(u, v) ≤ dG(u, a) + 7 ≤ Φ(G) + 7.

Case 3: Both u and v are tree nodes.

Case 3a: Suppose that the shortest path between u and v in G does not contain any edge inside an outerplanar block. Then dCAT(G)(u, v) = dG(u, v) ≤ Φ(G).

Case 3b: Suppose that the shortest path between u and v in G contains one or more edges inside exactly one block. Then we can enter and exit this block in CAT(G) through a path a1, p1, b, p2, a2, where a1, a2 are articulation nodes, p1, p2 are pooling nodes, and b is a block node. Note that the articulation nodes were part of the path in G, which implies dCAT(G)(u, v) ≤ dG(u, a1) + dG(a2, v) + 4 = dG(u, v) − dG(a1, a2) + 4. Furthermore, we do not need to take the one or more edges inside the block to go from a1 to a2. Using dG(a1, a2) ≥ 1, we obtain dCAT(G)(u, v) ≤ dG(u, v) − dG(a1, a2) + 4 ≤ dG(u, v) + 3 ≤ Φ(G) + 3.

Case 3c: Suppose that the shortest path between u and v in G contains two or more edges that are contained in two or more different blocks. Then, in CAT(G) we can shortcut from the first to the last block node of the shortest path between u and v through a path a1, p1, b1, g, b2, p2, a2, where a1, a2 are articulation nodes, p1, p2 are pooling nodes, b1, b2 are block nodes, and g is the global block pooling node. Note that the articulation nodes were part of the path in G, which implies dCAT(G)(u, v) ≤ dG(u, a1) + dG(a2, v) + 6 = dG(u, v) − dG(a1, a2) + 6. By assumption, dG(a1, a2) ≥ 2, which implies dCAT(G)(u, v) ≤ dG(u, v) − dG(a1, a2) + 6 ≤ dG(u, v) + 4 ≤ Φ(G) + 4.

## B Incomparable Expressivity Of CAT And PlanE

The expressivity of CAT and PlanE is incomparable. This follows from Propositions 2 and 3, which show that there are pairs of graphs that are distinguishable by CAT and not PlanE, and vice versa.

Proposition 2. There exist non-planar graphs that CAT can distinguish and PlanE cannot distinguish.

Proof. The claim follows from Figure 6.

Proposition 3. There exist planar graphs that PlanE can distinguish and CAT cannot distinguish.

Proof. The claim follows from Figure 7.

## C Details For Experimental Evaluation

Our models are implemented in PyTorch-Geometric (Fey & Lenssen, 2019) and trained on a single NVIDIA GeForce RTX 3080 GPU. We use WandB (Biewald, 2020) for tracking. The server used has 64 GB of RAM and an 11th Gen Intel(R) Core(TM) i9-11900KF CPU running at 3.50 GHz. Table 5 shows the hyperparameters for our MPNNs on the different datasets. We use the same hyperparameter grid for MPNNs combined with CAT. We used a smaller hyperparameter grid for MOLHIV than for ZINC, as MOLHIV is larger than ZINC, meaning that training takes much longer. When benchmarking the speed of GIN against GIN+CAT, we train for 100 epochs with a batch size of 128 on all datasets with the same hyperparameters for both models (see Table 5).

Figure 6: Two non-planar graphs which PlanE cannot distinguish. However, CAT can distinguish the graphs by their two central outerplanar biconnected components.

Figure 7: Two planar (but not outerplanar) graphs which CAT cannot distinguish.

CAT implementation. CAT adds an additional feature to each node which encodes the type of that node, i.e., nodes from Hamiltonian cycles, block nodes, pooling nodes, articulation nodes, or global block nodes.

Furthermore, we create additional edge features encoding the types of the nodes incident to an edge; i.e., an edge between two nodes in a Hamiltonian cycle has a different type than an edge from a pooling node to a block node. For newly created nodes and edges, we set their remaining features to the features of the node / edge they are based on; for example, a pooling node has the features of the node for which it performs the pooling operation. For nodes that have no natural counterpart in the original graph (block and block pooling nodes), we set these features to 0. To ensure that only these nodes get assigned 0-features, we shift the values of these features for all other nodes by 1. Note that our MPNNs treat the distance on edges in blocks as a categorical feature. Representing the distances as numerical features did not improve performance in preliminary experiments.
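A sketch of this shifting scheme (names and tensor layout ours, assuming integer categorical features as in PyTorch):

```python
import torch

def with_type_feature(x, node_type_id):
    """Prepend a categorical node-type column and shift the original integer
    features by 1, so that 0 is reserved for synthetic nodes (block and
    block pooling nodes), which receive all-zero original features."""
    shifted = x + 1  # reserve the value 0 for synthetic nodes
    type_col = torch.full((x.size(0), 1), node_type_id, dtype=shifted.dtype)
    return torch.cat([type_col, shifted], dim=1)

atoms = torch.tensor([[2, 0], [1, 3]])       # original categorical features
print(with_type_feature(atoms, node_type_id=1))
# tensor([[1, 3, 1],
#         [1, 2, 4]])
```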

Table 5: Hyperparameter grids for GIN, GCN and GAT on the different datasets; the last column gives the fixed configuration used when benchmarking GIN (+CAT).

| Parameter              | All datasets except MOLHIV | MOLHIV    | Benchmarking GIN (+CAT), all datasets |
|------------------------|----------------------------|-----------|---------------------------------------|
| Message passing layers | 2, 3, 4, 5                 | 4, 5      | 4                                     |
| Final MLP layers       | 2                          | 2         | 2                                     |
| Pooling operation      | mean, sum                  | mean, sum | mean                                  |
| Embedding dimension    | 64, 128, 256               | 64, 128   | 64                                    |
| Jumping knowledge      | last                       | concat    | concat                                |
| Dropout rate           | 0, 0.5                     | 0.5       | 0                                     |

## D Additional Figures

We provide additional visualizations of the CAT transformation. In all figures, the colors of the vertices in the transformed graph have the following meaning: red nodes are from Hamiltonian cycles, blue nodes correspond to blocks, yellow nodes pool the nodes from Hamiltonian cycles, orange nodes correspond to articulation nodes, and the gray node pools block nodes. Figure 8 shows a synthetic example with a non-outerplanar graph. Figure 9 demonstrates CAT on various synthetic graphs. Figure 10 shows the result of CAT on molecular graphs from ZINC and MOLHIV. Somewhat ironically, CAT often generates frog graphs on MOLHIV, as can be seen in Figure 11.

Figure 8: Left: example non-outerplanar graph. Right: result of applying CAT to the graph. Colors indicate the type of node (see Appendix D).


Figure 9: Left: example graphs. Right: result of applying CAT to these graphs. Colors indicate the type of node (see Appendix D).


Figure 10: Left: example graphs from MOLHIV (top) and ZINC (bottom). Right: result of applying CAT to these graphs. Colors indicate the type of node (see Appendix D).

Figure 11: Left: example graphs from MOLHIV. Right: result of applying CAT to these graphs. Colors indicate the type of node (see Appendix D).