# ISR: Invertible Symbolic Regression

Anonymous authors

Paper under double-blind review

## Abstract

We introduce an Invertible Symbolic Regression (ISR) method, a machine learning technique that generates analytical relationships between inputs and outputs of a given dataset via invertible maps (or architectures). The proposed ISR method naturally combines the principles of Invertible Neural Networks (INNs) and Equation Learner (EQL), a neural network-based symbolic architecture for function learning. In particular, we transform the affine coupling blocks of INNs into a symbolic framework, resulting in an end-to-end differentiable symbolic invertible architecture that allows for efficient gradient-based learning.

The proposed ISR framework also relies on sparsity promoting regularization, allowing the discovery of concise and interpretable invertible expressions. We show that ISR can serve as a (symbolic) normalizing flow for density estimation tasks. Furthermore, we highlight its practical applicability in solving inverse problems, including a benchmark inverse kinematics problem, and notably, a geoacoustic inversion problem in oceanography aimed at inferring posterior distributions of underlying seabed parameters from acoustic signals.

## 1 Introduction

In many applications in engineering and science, experts have developed theories about how measurable quantities result from system parameters, known as forward modeling. In contrast, the *Inverse Problem* aims to find unknown parameters of a system that lead to desirable observable quantities. A typical challenge is that numerous configurations of these parameters yield the same observable quantity, especially with underlying complicated nonlinear governing equations and where hidden parameters outnumber the observable variables. To tackle challenging and ill-posed inverse problems, a common method involves estimating a posterior distribution on the unknown parameters, given the observations. Such a probabilistic approach facilitates the uncertainty quantification by analyzing the diversity of potential inverse solutions. An established computationally expensive approach in finding the posterior distribution is to directly generate samples using acceptance/rejection. In this scope, the Markov Chain Monte Carlo (MCMC) methods (Brooks et al., 2011; Andrieu et al., 2003; Doucet & Wang, 2005; Murphy, 2012; Goodfellow et al., 2016; Korattikara et al., 2014; Atchadé & Rosenthal, 2005; Kungurtsev et al., 2023) offer a strong alternative for achieving near-optimum Bayesian estimation (Constantine et al., 2016; Conrad et al., 2016; Dosso & Dettmer, 2011).

However, MCMC methods can be inefficient (MacKay, 2003) as the number of unknown parameters increases.

When the likelihood is unknown or intractable, the Approximate Bayesian Computation (ABC) Csilléry et al. (2010) is often used to estimate the posterior distribution. However, similar to MCMC, this method also suffers from poor scalability Cranmer et al. (2020a); Papamakarios et al. (2019). A more efficient alternative is to approximate the posterior using a tractable distribution, i.e. the variational method Blei et al. (2017);
Salimans et al. (2015); Wu et al. (2018). However, the performance of the variational method deteriorates as the true posterior becomes more complicated.

Neural networks have become popular for solving inverse problems due to their ability to effectively handle complex relationships. They can be used not only for finding point estimates but also in the Bayesian framework to estimate the posterior distribution. For instance, in the non-Bayesian setting, by leveraging the rich mathematical and physical theories behind the inverse problems, the works in (Ying, 2022; Fan &
Ying, 2019; Khoo & Ying, 2019) developed novel neural network architectures for solving these problems while reducing the reliance on large amounts of data. Moreover, in the Bayesian setting, learning-enabled methods for sampling distributions have gained attention, as they have been shown to outperform traditional methods.

One approach is to utilize learned surrogate models within traditional sampling methods such as MCMC.

Since direct sampling of the posterior distribution requires many runs of the forward map, a trained and efficient surrogate model is often used instead of the exact model (Li et al., 2023). Surrogate models are an efficient representation of the forward map, trained on data. Popular approaches include the recently introduced Physics-Informed Neural Networks (Raissi et al., 2019). Invertible architectures have also been shown to be well-suited for solving inverse problems. Unlike classical Bayesian neural networks (Kendall & Gal, 2017), which directly tackle the ambiguous inverse problem, Invertible Neural Networks (INNs) learn the forward process and utilize latent variables to capture otherwise lost information. The invertible structure of INNs implicitly learns the inverse process, providing the full posterior over parameter space and circumventing the challenge of defining a supervised loss in the inverse direction for direct posterior learning (Ardizzone et al., 2019a; Zhang & Curtis, 2021; Luce et al., 2023; Putzky & Welling, 2019; Guan et al.).

Due to the black-box nature of neural networks, it is beneficial to express the forward map symbolically instead for several reasons. First, Symbolic Regression (SR) can provide model interpretability, while understanding the inner workings of deep Neural Networks is challenging (Kim et al., 2020; Gilpin et al., 2018).

Second, studying the symbolic outcome can lead to valuable insights and provide nontrivial relations and/or physical laws (Udrescu & Tegmark, 2020; Udrescu et al., 2020; Liu & Tegmark, 2021; Keren et al., 2023).

Third, they may achieve better results than Neural Networks in out-of-distribution generalization Cranmer et al. (2020b). Fourth, unlike conventional regression methods, such as least squares (Wild & Seber, 1989),
likelihood-based (Edwards, 1984; Pawitan, 2001; Tohme et al., 2023b), and Bayesian regression techniques
(Lee, 1997; Leonard & Hsu, 2001; Vanslette et al., 2020; Tohme et al., 2020; Tohme, 2020), SR does not rely on a fixed parametric model structure. The attractive properties of SR, such as interpretability, often come at high computational cost compared to standard Neural Networks. This is because SR optimizes for model structure and parameters simultaneously.

Therefore, SR is thought to be NP-hard (Petersen et al., 2021; Virgolin & Pissis, 2022). However, tractable solutions exist that approximate the optimal solution well enough for practical applications. For instance, genetic algorithms (Koza & Koza, 1992; Schmidt & Lipson, 2009; Tohme et al., 2023a; Orzechowski et al., 2018; La Cava et al., 2021) and machine learning algorithms, such as neural networks and transformers (Sahoo et al., 2018; Jin et al., 2019; Udrescu et al., 2020; Cranmer et al., 2020b; Kommenda et al., 2020; Burlacu et al., 2020; Biggio et al., 2021; Mundhenk et al., 2021; Petersen et al., 2021; Valipour et al., 2021; Zhang et al., 2022; Kamienny et al., 2022), are used to solve SR efficiently.

Related Works. In recent years, a branch of Machine Learning methods has emerged that is dedicated to finding data-driven invertible maps. While such maps are ideal for data generation and inverse problems, they lack interpretability. On the other hand, several methods have been developed to achieve interpretability in representing the forward map via Symbolic Regression. Hence, it is natural to incorporate SR into the invertible map for the inverse problem. Next, we review the related works in the scope of this work.

Normalizing Flows: The idea of this class of methods is to train an invertible map such that, in the forward problem, the input samples are mapped to a known distribution, e.g. the normal distribution. Then, the unknown distribution is recovered by inverting the trained map with the normal distribution as the input. This procedure is called the normalizing flow technique (Rezende & Mohamed, 2015; Dinh et al., 2016; Kingma & Dhariwal, 2018; Durkan et al., 2019; Tzen & Raginsky, 2019; Kobyzev et al., 2020; Wang & Marzouk, 2022). This method has been used for re-sampling unknown distributions, e.g. Boltzmann generators (Noé et al., 2019), as well as density recovery such as AI-Feynman (Udrescu & Tegmark, 2020; Udrescu et al., 2020).

Invertible Neural Networks (INNs): This method can be categorized in the class of normalizing flows. The invertibility of INNs is rooted in their architecture. The most popular design is constructed by concatenating affine coupling blocks (Kingma & Dhariwal, 2018; Dinh et al., 2014; 2016), which limits the architecture of the neural network. INNs have been shown to be effective in estimating the posterior of probabilistic inverse problems while outperforming MCMC, ABC, and variational methods. Applications include epidemiology (Radev et al., 2021), astrophysics (Ardizzone et al., 2019a), medicine (Ardizzone et al., 2019a), optics (Luce et al., 2023), geophysics (Zhang & Curtis, 2021; Wu et al., 2023), and reservoir engineering (Padmanabha & Zabaras, 2021). Compared to classical Bayesian neural networks for solving inverse problems, INNs lead to more accurate and reliable solutions as they leverage the better-understood forward process, avoiding the challenge of defining a supervised loss for direct posterior learning (Ardizzone et al., 2019a). However, similar to standard neural networks, INNs lead to a model that cannot be evaluated with an interpretable mathematical formula.

Equation Learner (EQL): Among SR methods, the EQL network is attractive since it incorporates gradient descent in the symbolic regression task for better efficiency (Martius & Lampert, 2016; Sahoo et al., 2018; Kim et al., 2020). EQL devises a neural network-based architecture for the SR task by replacing commonly used activation functions with a dictionary of operators and uses back-propagation for training. However, in order to obtain a symbolic estimate for the inverse problem efficiently, it is necessary to merge such an efficient SR method with an invertible architecture, which is the goal of this paper.

Our Contributions. We present Invertible Symbolic Regression (ISR), a machine learning technique that identifies mathematical relationships that best describe the forward and inverse map of a given dataset through the use of invertible maps. ISR is based on an invertible symbolic architecture that bridges the concepts of Invertible Neural Networks (INNs) and Equation Learner (EQL), i.e. a neural network-based symbolic architecture for function learning. In particular, we transform the affine coupling blocks of INNs into a symbolic framework, resulting in an end-to-end differentiable symbolic inverse architecture. This allows for efficient gradient-based learning. The symbolic invertible architecture is easily invertible with a tractable Jacobian, which enables explicit computation of posterior probabilities. The proposed ISR method, equipped with sparsity promoting regularization, captures complex functional relationships with concise and interpretable invertible expressions. In addition, as a byproduct, we naturally extend ISR into a conditional ISR (cISR) architecture by integrating the EQL network within conditional INN (cINN) architectures present in the literature. We further demonstrate that ISR can also serve as a symbolic normalizing flow (for density estimation) in a number of test distributions. We demonstrate the applicability of ISR in solving inverse problems, and compare it with INN on a benchmark inverse kinematics problem, as well as a geoacoustic inversion problem in oceanography (see (Chapman & Shang, 2021) for further information). Here, we aim to characterize the undersea environment, such as water-sediment depth, sound speed, etc., from acoustic signals. To the best of our knowledge, this work is the first attempt towards finding interpretable solutions to general nonlinear inverse problems by establishing analytical relationships between measurable quantities and unknown variables via symbolic invertible maps.

The remainder of the paper is organized as follows. In Section 2, we go through an overall background about Symbolic Regression (SR) and review the Equation Learner (EQL) network architecture. In Section 3, we introduce and present the proposed Invertible Symbolic Regression (ISR) method. We then show our results in Section 4, where we demonstrate the versatility of ISR as a density estimation method on a variety of examples (distributions), and then show its applicability in inverse problems on an inverse kinematics benchmark problem and through a case study in ocean geoacoustic inversion. Finally in Section 5, we provide our conclusions and outlook.

## 2 Background

Before diving into the proposed ISR method, we first delve into a comprehensive background of the Symbolic Regression (SR) task, as well as the Equation Learner (EQL) network architecture.

Symbolic Regression. Given a dataset $\mathcal{D} = \{\mathbf{x}_i, \mathbf{y}_i\}_{i=1}^{N}$ consisting of N independent and identically distributed (i.i.d.) paired examples, where $\mathbf{x}_i \in \mathbb{R}^{d_x}$ represents the input variables and $\mathbf{y}_i \in \mathbb{R}^{d_y}$ the corresponding output for the i-th observation, the objective of SR is to find an analytical (symbolic) expression f that best maps inputs to outputs, i.e. $\mathbf{y}_i \approx f(\mathbf{x}_i)$. SR seeks to identify the functional form of f from the space of functions $\mathcal{S}$ defined by a set of given arithmetic operations (e.g. +, −, ×, ÷) and mathematical functions (e.g. sin, cos, exp, etc.) that minimizes a predefined loss function $\mathcal{L}(f, \mathcal{D})$, which measures the discrepancy between the true outputs $\mathbf{y}_i$ and the predictions $f(\mathbf{x}_i)$ over all observations in the dataset. Unlike conventional regression methods that fit parameters within a predefined model structure, SR dynamically constructs the model structure itself, offering a powerful means to uncover underlying physical laws and/or nontrivial relationships.

![3_image_0.png](3_image_0.png)

Figure 1: EQL network architecture for symbolic regression. For visual simplicity, we only show 2 hidden layers and 5 activation functions per layer (identity or "id", square, sine, exponential, and multiplication).

Equation Learner Network. The Equation Learner (EQL) network is a multi-layer feed-forward neural network that is capable of performing symbolic regression by substituting traditional nonlinear activation functions with elementary functions. The EQL network was initially introduced by Martius & Lampert (2016) and Sahoo et al. (2018), and further explored by Kim et al. (2020). As shown in Figure 1, the EQL network architecture is based on a fully connected neural network where the output $\mathbf{h}^{(i)}$ of the i-th layer is given by

$$\mathbf{g}^{(i)}=\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}\tag{1}$$
$$\mathbf{h}^{(i)}=f\big(\mathbf{g}^{(i)}\big)\tag{2}$$

where $\mathbf{W}^{(i)}$ is the weight matrix of the i-th layer, f denotes the nonlinear (symbolic) activation functions, and $\mathbf{h}^{(0)} = \mathbf{x}$ represents the input data. In regression tasks, the final layer does not typically have an activation function, so the output for a network with L hidden layers is given by

$$\mathbf{y}=\mathbf{h}^{(L+1)}=\mathbf{g}^{(L+1)}=\mathbf{W}^{(L+1)}\mathbf{h}^{(L)}.\tag{3}$$
In traditional neural networks, activation functions such as ReLU, tanh, or sigmoid are typically employed.

However, for the EQL network, the activation function f(g) may consist of a separate primitive function for each component of g (e.g. the square function, sine, exponential, etc.), and may include functions that take multiple arguments (e.g. the multiplication function). In addition, the primitive functions may be duplicated within each layer (to reduce the training's sensitivity to random initializations).

It is worth mentioning that, for visual simplicity, the schematic in Figure 1 shows an EQL network with only two hidden layers, where each layer has only five primitive functions, i.e., the activation function

$$f(\mathbf{g})\in\big\{\text{identity},\ \text{square},\ \text{sine},\ \text{exponential},\ \text{multiplication}\big\}\,.$$

However, the EQL network can in fact include other functions or more hidden layers to fit a broader range (or class) of functions. Indeed, the number of hidden layers can dictate the complexity of the resulting symbolic expression and plays a similar role to the maximum depth of expression trees in genetic programming techniques. Although the EQL network may not offer the same level of generality as traditional symbolic regression methods, it is adequately capable of representing the majority of functions commonly encountered in scientific and engineering contexts. Crucially, the parametrized nature of the EQL network enables efficient optimization via gradient descent (and backpropagation). After training the EQL network, the identified equation can be directly derived from the network weights.

![4_image_0.png](4_image_0.png)

Figure 2: (Left) The proposed ISR framework learns a bijective symbolic transformation that maps the (unknown) variables x to the (observed) quantities y while transforming the lost information into latent variables z. (Right) The conditional ISR (cISR) framework learns a bijective symbolic map that transforms x directly to a latent representation z given the observation y. As we will show, both the forward and inverse mappings are efficiently computable and possess a tractable Jacobian, allowing explicit computation of posterior probabilities.

To avoid reaching overly complex symbolic expressions and to maintain interpretability, it is essential to guide the network towards learning the simplest expression that accurately represents the data. In methods based on genetic programming, this simplification is commonly achieved by restricting the number of terms in the expression. For the EQL network, this is attained by applying sparsity regularization to the network weights, which sets as many of these weights to zero as possible (e.g. L1 regularization (Tibshirani, 1996),
L0.5 regularization (Xu et al., 2010)). In this work, we use a smoothed L0.5 regularization (Wu et al., 2014; Fan et al., 2014), which was also adopted by Kim et al. (2020). Further details can be found in Appendix B.
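To make the construction above concrete, the following is a minimal sketch of an EQL-style network in PyTorch. It is an illustration under assumptions, not the exact configuration used in this work: the layer sizes, the primitive dictionary (identity, square, sine, exponential, and one multiplication unit, as in Figure 1), and the simple smooth surrogate for the L0.5 penalty are all placeholder choices.

```python
import torch
import torch.nn as nn

class EQLLayer(nn.Module):
    """One EQL hidden layer: a linear map g = W h followed by a fixed dictionary of
    primitive activations (unary: identity, square, sin, exp; binary: multiplication)."""
    def __init__(self, in_dim, n_unary=4, n_binary=1):
        super().__init__()
        self.n_unary = n_unary
        # The linear map produces one argument per unary unit and two per binary unit.
        self.linear = nn.Linear(in_dim, n_unary + 2 * n_binary)

    def forward(self, h):
        g = self.linear(h)                                   # Eq. (1)
        u, b = g[..., :self.n_unary], g[..., self.n_unary:]
        unary = torch.stack([u[..., 0], u[..., 1] ** 2,
                             torch.sin(u[..., 2]), torch.exp(u[..., 3])], dim=-1)
        binary = (b[..., 0] * b[..., 1]).unsqueeze(-1)       # multiplication unit
        return torch.cat([unary, binary], dim=-1)            # Eq. (2)

class EQLNet(nn.Module):
    """Two EQL hidden layers followed by a linear read-out, as in Figure 1 and Eq. (3)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # Each EQL layer above outputs 4 unary + 1 binary = 5 features.
        self.layers = nn.Sequential(EQLLayer(in_dim), EQLLayer(5), nn.Linear(5, out_dim))

    def forward(self, x):
        return self.layers(x)

def sparsity_penalty(model, eps=1e-6):
    # Simple smooth surrogate for an L0.5 penalty, sum_w (w^2 + eps)^(1/4); the paper
    # uses the smoothed L0.5 regularization of Wu et al. (2014) described in Appendix B.
    return sum(((w ** 2) + eps).pow(0.25).sum() for w in model.parameters())
```

After training with a data-fit loss plus a small multiple of `sparsity_penalty`, the symbolic expression can be read off from the (mostly zero) weights of each linear layer.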

## 3 Invertible Symbolic Regression

In this section, we delineate the problem setup for inverse problems, and then describe the proposed ISR
approach.

## 3.1 Problem Specification

In various engineering and natural systems, the theories developed by experts describe how measurable (or observable) quantities $\mathbf{y} \in \mathbb{R}^{d_y}$ result from the unknown (or hidden) properties $\mathbf{x} \in \mathbb{R}^{d_x}$, known as the forward process $\mathbf{x} \to \mathbf{y}$. The goal of inverse prediction is to predict the unknown variables x from the observable variables y, through the *inverse process* $\mathbf{y} \to \mathbf{x}$. As critical information is lost during the forward process (i.e. $d_x \geq d_y$), the inversion is usually intractable. Given that $f^{-1}(\mathbf{y})$ does not yield a uniquely defined solution, an effective inverse model should instead estimate the *posterior* probability distribution $p(\mathbf{x}\,|\,\mathbf{y})$ of the hidden variables x, conditioned on the observed variable y.

Invertible Symbolic Regression (ISR). Assume we are given a training dataset $\mathcal{D} = \{\mathbf{x}_i, \mathbf{y}_i\}_{i=1}^{N}$, collected using the forward model $\mathbf{y} = s(\mathbf{x})$ and prior $p(\mathbf{x})$. To counteract the loss of information during the forward process, we introduce latent random variables $\mathbf{z} \in \mathbb{R}^{d_z}$ drawn from a multivariate standard normal distribution, i.e. $\mathbf{z} \sim p_Z(\mathbf{z}) = \mathcal{N}(\mathbf{0}, I_{d_z})$, where $d_z = d_x - d_y$. These latent variables are designed to capture the information related to x that is not contained in y (Ardizzone et al., 2019a). In ISR, we aim to learn a bijective symbolic function $f: \mathbb{R}^{d_x} \to \mathbb{R}^{d_y} \times \mathbb{R}^{d_z}$ from the space of functions defined by a set of mathematical functions (e.g. sin, cos, exp, log) and arithmetic operations (e.g. +, −, ×, ÷), such that

$$[\mathbf{y},\mathbf{z}]=f(\mathbf{x})=\left[f_{\mathbf{y}}(\mathbf{x}),f_{\mathbf{z}}(\mathbf{x})\right],\qquad\qquad\mathbf{x}=f^{-1}(\mathbf{y},\mathbf{z})\tag{4}$$

where $f_{\mathbf{y}}(\mathbf{x}) \approx s(\mathbf{x})$ is an approximation of the forward process s(x). As discussed later, we will learn f (and hence $f^{-1}$) through an invertible symbolic architecture with bi-directional training. The solution of the inverse problem (i.e. the posterior $p(\mathbf{x}\,|\,\mathbf{y}^*)$) can then be found by calling $f^{-1}$ for a fixed observation $\mathbf{y}^*$ while randomly (and repeatedly) sampling the latent variable z from the same standard Gaussian distribution.

![5_image_0.png](5_image_0.png)

Figure 3: The proposed ISR method integrates EQL within the affine coupling blocks of the INN invertible architecture.¹ This results in a bijective symbolic transformation that is both easily invertible and has a tractable Jacobian. Indeed, the forward and inverse directions possess identical computational cost. Here, ⊙ and ⊘ denote element-wise multiplication and division, respectively.

Conditional Invertible Symbolic Regression (cISR). Inspired by works on conditional invertible neural networks (cINNs) (Ardizzone et al., 2019b; 2021; Kruse et al., 2021), instead of training ISR to predict y from x while transforming the lost information into latent variables z, we train it to transform x directly to latent variables z given the observed variables y. This is achieved by incorporating y as an additional input within the bijective symbolic architecture during both the forward and inverse passes (see Figure 2). cISR works with larger latent spaces than ISR since $d_z = d_x$ regardless of the dimension $d_y$ of the observed quantities y. Further details are provided in the following section.

In addition to approximating the forward model via mathematical relations, ISR also identifies an interpretable inverse map via analytical expressions (see Figure 2). Such interpretable mappings are of particular interest in the physical sciences, where an ambitious objective involves creating intelligent machines capable of generating novel scientific findings (Udrescu & Tegmark, 2020; Udrescu et al., 2020; Liu & Tegmark, 2021; Keren et al., 2023; Liu et al., 2024). As described next, the ISR architecture is both easily invertible and has a tractable Jacobian, allowing for explicit computation of posterior probabilities.

## 3.2 Invertible Symbolic Architecture

The general SR problem is to search the space of functions to find the optimal analytical expression given data. As discussed in Section 3.1, the objective of ISR is to analytically learn a bijection (more specifically, a diffeomorphism (Teshima et al., 2020)) via symbolic distillation. In other words, the objective of ISR is essentially similar to that of the general SR problem, with the additional constraint that the resulting model has to be invertible. Traditionally, many SR methods rely on Genetic Programming (Tohme et al., 2023a) to search for the optimal symbolic expression. While various methods can be used to learn the bijective symbolic function f in Eq. (4), as discussed next, we resort to coupling-based invertible architectures combined with EQL networks, whose parametrized nature enhances computational efficiency.

Inspired by the architectures proposed by Dinh et al. (2016); Kingma & Dhariwal (2018); Ardizzone et al. (2019a), we adopt a fully invertible architecture mainly defined by a sequence of n reversible blocks, where each block consists of two complementary affine coupling layers. In particular, we first split the block's input $\mathbf{u} \in \mathbb{R}^{d_u}$ into $\mathbf{u}_1 \in \mathbb{R}^{d_{u_1}}$ and $\mathbf{u}_2 \in \mathbb{R}^{d_{u_2}}$ (where $d_{u_1} + d_{u_2} = d_u$), which are fed into the coupling layers as follows:

$$\begin{bmatrix}\mathbf{v}_{1}\\ \mathbf{v}_{2}\end{bmatrix}=\begin{bmatrix}\mathbf{u}_{1}\odot\exp\left(s_{1}(\mathbf{u}_{2})\right)+t_{1}(\mathbf{u}_{2})\\ \mathbf{u}_{2}\end{bmatrix},\qquad\qquad\begin{bmatrix}\mathbf{o}_{1}\\ \mathbf{o}_{2}\end{bmatrix}=\begin{bmatrix}\mathbf{v}_{1}\\ \mathbf{v}_{2}\odot\exp\left(s_{2}(\mathbf{v}_{1})\right)+t_{2}(\mathbf{v}_{1})\end{bmatrix},\tag{5}$$

where ⊙ denotes the Hadamard product or element-wise multiplication. The outputs [o1, o2] are then concatenated again and passed to the next coupling block. The internal mappings $s_1$ and $t_1$ are functions from $\mathbb{R}^{d_{u_2}} \to \mathbb{R}^{d_{u_1}}$, and $s_2$ and $t_2$ are functions from $\mathbb{R}^{d_{u_1}} \to \mathbb{R}^{d_{u_2}}$. In general, $s_i$ and $t_i$ can be arbitrarily complicated functions (e.g. neural networks as in Ardizzone et al. (2019a)). In our proposed ISR approach, they are represented by EQL networks (see Figure 3), resulting in a fully symbolic invertible architecture. Moving forward, we shall refer to them as the EQL *subnetworks* of the block.

The transformations above result in upper and lower triangular Jacobians:

$$J_{\mathbf{u}\mapsto\mathbf{v}}=\begin{bmatrix}\text{diag}\big(\exp\big(s_{1}(\mathbf{u}_{2})\big)\big)&\frac{\partial\mathbf{v}_{1}}{\partial\mathbf{u}_{2}}\\ 0&I\end{bmatrix},\qquad\qquad J_{\mathbf{v}\mapsto\mathbf{o}}=\begin{bmatrix}I&0\\ \frac{\partial\mathbf{o}_{2}}{\partial\mathbf{v}_{1}}&\text{diag}\big(\exp\big(s_{2}(\mathbf{v}_{1})\big)\big)\end{bmatrix}.\tag{6}$$

Hence, their determinants can be trivially computed:

$$\det\bigl{(}J_{\mathbf{u}\rightarrow\mathbf{v}}\bigr{)}=\prod_{i=1}^{d_{u_{1}}}\exp\Big{(}\left[s_{1}(\mathbf{u}_{2})\right]_{i}\Big{)}=\exp\Big{(}\sum_{i=1}^{d_{u_{1}}}\left[s_{1}(\mathbf{u}_{2})\right]_{i}\Big{)},$$ $$\det\bigl{(}J_{\mathbf{v}\rightarrow\mathbf{o}}\bigr{)}=\prod_{i=1}^{d_{u_{2}}}\exp\Big{(}\left[s_{2}(\mathbf{v}_{1})\right]_{i}\Big{)}=\exp\Big{(}\sum_{i=1}^{d_{u_{2}}}\left[s_{2}(\mathbf{v}_{1})\right]_{i}\Big{)}.\tag{7}$$

Then, the resulting Jacobian determinant of the coupling block is given by

$$\begin{aligned}\det\big(J_{\mathbf{u}\mapsto\mathbf{o}}\big)&=\det\big(J_{\mathbf{u}\mapsto\mathbf{v}}\big)\cdot\det\big(J_{\mathbf{v}\mapsto\mathbf{o}}\big)\\ &=\exp\Big(\sum_{i=1}^{d_{u_{1}}}\left[s_{1}(\mathbf{u}_{2})\right]_{i}\Big)\cdot\exp\Big(\sum_{i=1}^{d_{u_{2}}}\left[s_{2}(\mathbf{v}_{1})\right]_{i}\Big)\\ &=\exp\Big(\sum_{i=1}^{d_{u_{1}}}\left[s_{1}(\mathbf{u}_{2})\right]_{i}+\sum_{i=1}^{d_{u_{2}}}\left[s_{2}(\mathbf{v}_{1})\right]_{i}\Big)\\ &=\exp\Big(\sum_{i=1}^{d_{u_{1}}}\left[s_{1}(\mathbf{u}_{2})\right]_{i}+\sum_{i=1}^{d_{u_{2}}}\left[s_{2}\big(\mathbf{u}_{1}\odot\exp\left(s_{1}(\mathbf{u}_{2})\right)+t_{1}(\mathbf{u}_{2})\big)\right]_{i}\Big)\end{aligned}\tag{8}$$

which can be efficiently calculated. Indeed, the Jacobian determinant of the whole map $\mathbf{x} \to [\mathbf{y}, \mathbf{z}]$ is the product of the Jacobian determinants of the n underlying coupling blocks (see Figure 3).
Given the output o = [o1, o2], the expressions in Eqs. (5) are clearly invertible:

$$\mathbf{u}_{2}=\big{(}\mathbf{o}_{2}-t_{2}(\mathbf{o}_{1})\big{)}\oslash\exp\big{(}s_{2}(\mathbf{o}_{1})\big{)},\qquad\qquad\mathbf{u}_{1}=\big{(}\mathbf{o}_{1}-t_{1}(\mathbf{u}_{2})\big{)}\oslash\exp\big{(}s_{1}(\mathbf{u}_{2})\big{)}\tag{9}$$

where ⊘ denotes element-wise division. Crucially, even when the coupling block is inverted, the EQL subnetworks $s_i$ and $t_i$ need not themselves be invertible; they are only ever evaluated in the forward direction. We denote the whole ISR map $\mathbf{x} \to [\mathbf{y}, \mathbf{z}]$ as $f(\mathbf{x};\theta) = \left[f_{\mathbf{y}}(\mathbf{x};\theta), f_{\mathbf{z}}(\mathbf{x};\theta)\right]$, parameterized by the EQL subnetwork parameters θ, and the inverse as $f^{-1}(\mathbf{y},\mathbf{z};\theta)$.
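To illustrate the mechanics of Eqs. (5)–(9), here is a minimal sketch of a single coupling block in PyTorch. The subnetworks $s_1, t_1, s_2, t_2$ are passed in as arbitrary callables (EQL subnetworks in ISR); the toy symbolic stand-ins at the bottom, the clamping value, and the tensor shapes are illustrative assumptions only.

```python
import torch

def coupling_forward(u1, u2, s1, t1, s2, t2, clamp=2.0):
    """One affine coupling block (Eq. 5); s1, t1, s2, t2 are only evaluated forwards."""
    # Clip the scale outputs before exponentiation for numerical stability (cf. footnote 1).
    a1 = torch.clamp(s1(u2), -clamp, clamp)
    v1 = u1 * torch.exp(a1) + t1(u2)
    a2 = torch.clamp(s2(v1), -clamp, clamp)
    o1, o2 = v1, u2 * torch.exp(a2) + t2(v1)
    log_det = a1.sum(dim=-1) + a2.sum(dim=-1)        # log of Eq. (8) for this block
    return o1, o2, log_det

def coupling_inverse(o1, o2, s1, t1, s2, t2, clamp=2.0):
    """Exact inverse of the block (Eq. 9); same subnetworks, same forward evaluations."""
    a2 = torch.clamp(s2(o1), -clamp, clamp)
    u2 = (o2 - t2(o1)) * torch.exp(-a2)
    a1 = torch.clamp(s1(u2), -clamp, clamp)
    u1 = (o1 - t1(u2)) * torch.exp(-a1)
    return u1, u2

# Round-trip check with simple symbolic stand-ins for the EQL subnetworks:
s1 = t2 = lambda w: torch.sin(w)
t1 = s2 = lambda w: 0.5 * w
u1, u2 = torch.randn(8, 1), torch.randn(8, 1)
o1, o2, _ = coupling_forward(u1, u2, s1, t1, s2, t2)
r1, r2 = coupling_inverse(o1, o2, s1, t1, s2, t2)
assert torch.allclose(u1, r1, atol=1e-5) and torch.allclose(u2, r2, atol=1e-5)
```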

Remark 3.1. The proposed ISR architecture consists of a sequence of these symbolic reversible blocks. To enhance the model's predictive and expressive capability, we can: i) increase the number of reversible coupling blocks, ii) increase the number of hidden layers in each underlying EQL network, iii) increase the number of hidden neurons per layer in each underlying EQL network, or iv) increase the complexity of the symbolic activation functions used in the EQL network. However, it is worth noting that these enhancements come with a trade-off, as they inevitably lead to a decrease in the model's interpretability.

Remark 3.2. To further improve the model capacity, as in Ardizzone et al. (2019a), we incorporate (random, but fixed) permutation layers between the coupling blocks, which shuffle the input elements for subsequent coupling blocks. This effectively randomizes the configuration of splits $\mathbf{u} = [\mathbf{u}_1, \mathbf{u}_2]$ across different blocks, thereby enhancing the interplay between variables.
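The permutation of Remark 3.2 can be implemented as a fixed index shuffle; a sketch (with an assumed seed) follows. Its Jacobian determinant has absolute value 1, so it does not affect the log-determinant computation above.

```python
import torch

def make_permutation(d, seed=0):
    """Fixed (but randomly drawn) permutation used between coupling blocks; the
    inverse pass simply applies the inverse index order."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(d, generator=g)
    inv_perm = torch.argsort(perm)
    forward = lambda u: u[..., perm]      # shuffle features before the next block
    inverse = lambda u: u[..., inv_perm]  # undo the shuffle when running backwards
    return forward, inverse
```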

¹As direct division can lead to numerical issues, we apply the exponential function to $s_i$ (after clipping its extreme values) in the formulation described in Eq. (5). This also guarantees non-zero diagonal entries in the Jacobian matrices.

Remark 3.3. Inspired by Ardizzone et al. (2019a), we split the coupling block's input vector $\mathbf{u} \in \mathbb{R}^{d_u}$ into two halves, i.e. $\mathbf{u}_1 \in \mathbb{R}^{d_{u_1}}$ and $\mathbf{u}_2 \in \mathbb{R}^{d_{u_2}}$, where $d_{u_1} = \lfloor d_u / 2 \rfloor$ and $d_{u_2} = d_u - d_{u_1}$. In the case where u is one-dimensional (or scalar), i.e. $d_u = 1$ and $u \in \mathbb{R}$, we pad it with an extra zero (so that $d_u = 2$), along with a loss term that prevents the encoding of information in the extra dimension (e.g. we use the L2 loss to maintain those values near zero).

Remark 3.4. The proposed ISR architecture is also compatible with the conditional ISR (cISR) framework proposed in the previous section. In essence, cISR identifies a bijective symbolic transformation directly between x and z given the observation y. This is attained by feeding y as an extra input to each coupling block, during both the forward and inverse passes. In particular, and as suggested by Ardizzone et al. (2019b; 2021); Kruse et al. (2021), we adapt the same coupling layers given by Eqs. (5) and (9) to produce a conditional coupling block. Since the subnetworks $s_i$ and $t_i$ are never inverted, we enforce the condition on the observation by concatenating y to their inputs without losing the invertibility, i.e. we replace $s_1(\mathbf{u}_2)$ with $s_1(\mathbf{u}_2, \mathbf{y})$, etc. In complex settings, the condition y is first fed into a separate feed-forward conditioning network, resulting in higher-level conditioning features that are then injected into the conditional coupling blocks. Although cISR can have better generative properties (Kruse et al., 2021), it leads to more complex symbolic expressions and less interpretability as it explicitly conditions the map on the observation within the symbolic formulation. We denote the entire cISR forward map $\mathbf{x} \to \mathbf{z}$ as $f(\mathbf{x}; \mathbf{y}, \theta)$, parameterized by θ, and the inverse as $f^{-1}(\mathbf{z}; \mathbf{y}, \theta)$.

## 3.3 Maximum Likelihood Training of ISR

We train the proposed ISR model to learn a bijective symbolic transformation $f: \mathbb{R}^{d_x} \to \mathbb{R}^{d_y} \times \mathbb{R}^{d_z}$. There are various choices for defining the loss functions, with different advantages and disadvantages (Grover et al., 2018; Ren et al., 2020; Ardizzone et al., 2019a; 2021; Kruse et al., 2021). As reported in Kruse et al. (2021), there are two main training approaches:

i) A standard supervised L2 loss for fitting the model's y predictions to the training data, combined with a Maximum Mean Discrepancy (MMD) loss (Gretton et al., 2012; Ardizzone et al., 2019a) for fitting the latent distribution $p_Z(\mathbf{z})$ to $\mathcal{N}(\mathbf{0}, I_{d_z})$, given samples.

ii) A Maximum Likelihood Estimate (MLE) loss that enforces z to be standard Gaussian, i.e. $\mathbf{z} \sim p_Z(\mathbf{z}) = \mathcal{N}(\mathbf{0}, I_{d_z})$, and approximates the distribution on y with a Gaussian distribution around the ground truth values $\mathbf{y}_{\mathrm{gt}}$ with very low variance $\sigma^2$ (Dinh et al., 2016; Ren et al., 2020; Kruse et al., 2021).

Given that MLE has been shown to perform well in the literature (Ardizzone et al., 2019a), we apply it here. Next, we demonstrate how this approach is equivalent to minimizing the forward Kullback-Leibler (KL) divergence as the cost (cf. Papamakarios et al. (2021)). We note that given the map $f(\mathbf{x};\theta) \mapsto [\mathbf{z}, \mathbf{y}]$, parameterized by θ, and assuming y and z are independent, the density $p_X$ relates to $p_Y$ and $p_Z$ through the change-of-variables formula

$$p_{X}(\mathbf{x};\theta)=p_{Y}\left(\mathbf{y}=f_{\mathbf{y}}(\mathbf{x};\theta)\right)\,p_{Z}\left(\mathbf{z}=f_{\mathbf{z}}(\mathbf{x};\theta)\right)\cdot\left|\det\left(J_{\mathbf{x}\mapsto[\mathbf{z},\mathbf{y}]}(\mathbf{x};\theta)\right)\right|,\tag{10}$$

where $J_{\mathbf{x}\mapsto[\mathbf{z},\mathbf{y}]}(\mathbf{x};\theta)$ denotes the Jacobian of the map f parameterized by θ. This expression is then used to define the loss function, which we derive by following the work in Papamakarios et al. (2021). In particular, we aim to minimize the forward KL divergence between a target distribution $p_X^*(\mathbf{x})$ and our *flow-based* model $p_X(\mathbf{x};\theta)$, given by

$$\begin{aligned}\mathcal{L}(\theta)&=D_{\mathrm{KL}}\big[\,p_{X}^{*}(\mathbf{x})\,\big\|\,p_{X}(\mathbf{x};\theta)\,\big]\\ &=-\mathbb{E}_{p_{X}^{*}(\mathbf{x})}\big[\log p_{X}(\mathbf{x};\theta)\big]+\text{const.}\\ &=-\mathbb{E}_{p_{X}^{*}(\mathbf{x})}\big[\log p_{Y}\left(f_{\mathbf{y}}(\mathbf{x};\theta)\right)+\log p_{Z}\left(f_{\mathbf{z}}(\mathbf{x};\theta)\right)+\log\left|\det\left(J_{\mathbf{x}\mapsto[\mathbf{z},\mathbf{y}]}(\mathbf{x};\theta)\right)\right|\big]+\text{const.}\end{aligned}\tag{11}$$

The forward KL divergence is particularly suitable for cases where we have access to samples from the target distribution, but we cannot necessarily evaluate the target density $p_X^*(\mathbf{x})$. Assuming we have a set of samples $\{\mathbf{x}_i\}_{i=1}^{N}$ from $p_X^*(\mathbf{x})$, we can approximate the expectation in Eq. (11) using Monte Carlo integration as

$$\mathcal{L}(\theta)\approx-\frac{1}{N}\sum_{i=1}^{N}\Big(\log p_{Y}\left(f_{\mathbf{y}}(\mathbf{x}_{i};\theta)\right)+\log p_{Z}\left(f_{\mathbf{z}}(\mathbf{x}_{i};\theta)\right)+\log\left|\det\left(J_{\mathbf{x}\mapsto[\mathbf{z},\mathbf{y}]}(\mathbf{x}_{i};\theta)\right)\right|\Big)+\text{const.}\tag{12}$$
As we can see, minimizing the above Monte Carlo approximation of the KL divergence is equivalent to maximizing likelihood (or minimizing the negative log-likelihood). Assuming $p_Z$ is standard Gaussian and $p_Y$ is a multivariate normal distribution around $\mathbf{y}_{\mathrm{gt}}$, the negative log-likelihood (NLL) loss in Eq. (12) becomes

$$\mathcal{L}_{\mathrm{NLL}}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{2}\cdot\frac{\left(f_{\mathbf{y}}(\mathbf{x}_{i};\theta)-\mathbf{y}_{\mathrm{gt}}\right)^{2}}{\sigma^{2}}+\frac{1}{2}\cdot f_{\mathbf{z}}(\mathbf{x}_{i};\theta)^{2}-\log\left|\det\left(J_{\mathbf{x}\mapsto[\mathbf{z},\mathbf{y}]}(\mathbf{x}_{i};\theta)\right)\right|\right)\,.\tag{13}$$

In other words, we find the optimal ISR parameters θ by minimizing the NLL loss in Eq. (13), and the resulting bijective symbolic expression can be directly extracted from these optimal parameters.
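As a concrete illustration of Eq. (13), the following is a minimal PyTorch sketch of the NLL objective; the variance σ², the batch handling, and the assumed `model` interface (returning the y- and z-outputs together with log |det J|) are illustrative assumptions, not the exact implementation used here.

```python
import torch

def isr_nll_loss(y_pred, z_pred, log_det_J, y_gt, sigma=0.1):
    """Negative log-likelihood of Eq. (13): Gaussian fit of the y-outputs around the
    ground truth, standard-normal prior on z, minus the log-determinant of the Jacobian."""
    fit_y = 0.5 * ((y_pred - y_gt) ** 2).sum(dim=-1) / sigma**2
    fit_z = 0.5 * (z_pred ** 2).sum(dim=-1)
    return (fit_y + fit_z - log_det_J).mean()

# Hypothetical training step, assuming model(x) -> (y_pred, z_pred, log_det_J),
# e.g. a stack of the coupling blocks sketched earlier:
#   loss = isr_nll_loss(*model(x_batch), y_gt=y_batch)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```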

Remark 3.5. We note that cISR is also suited for maximum likelihood training. Given the conditioning observation y, the density $p_{X\,|\,Y}$ relates to $p_Z$ through the change-of-variables formula

$$p_{X\,|\,Y}(\mathbf{x}\,|\,\mathbf{y},\theta)=p_{Z}\left(\mathbf{z}=f(\mathbf{x};\mathbf{y},\theta)\right)\cdot\left|\det\left(J_{\mathbf{x}\mapsto\mathbf{z}}(\mathbf{x};\mathbf{y},\theta)\right)\right|,\tag{14}$$

where $J_{\mathbf{x}\mapsto\mathbf{z}}(\mathbf{x};\mathbf{y},\theta)$ indicates the Jacobian of the map f conditioned on y and parameterized by θ. Following the same procedure as above, the cISR model can be trained by minimizing the following NLL loss function

$$\mathcal{L}_{\mathrm{NLL}}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{2}\cdot f(\mathbf{x}_{i};\mathbf{y}_{i},\theta)^{2}-\log\left|\det\left(J_{\mathbf{x}\mapsto\mathbf{z}}(\mathbf{x}_{i};\mathbf{y}_{i},\theta)\right)\right|\right).\tag{15}$$

As we will show in the next section, if we ignore the condition on the observation y, the loss in Eq. 15 can also be used for training ISR as a normalizing flow for the unsupervised learning task of approximating a target probability density function from samples (cf. Eq. 18).

## 4 Results

We evaluate our proposed ISR method on a variety of problems. We first show how ISR can serve as a normalizing flow for density estimation tasks on several test distributions. We then demonstrate the capabilities of ISR in solving inverse problems by considering two synthetic problems and then a more challenging application in ocean acoustics (Jensen et al., 2011; Ali et al., 2023; Huang et al., 2006; Dosso &
Dettmer, 2011; Bianco et al., 2019; Holland et al., 2005; Benson et al., 2000). We mainly compare our ISR
approach against INN (Ardizzone et al., 2019a) throughout our experiments. Further experimental details can be found in Appendix B.

## 4.1 Leveraging ISR for Density Estimation via Normalizing Flow

Given N independently and identically distributed (i.i.d.) samples, i.e. $\{\mathbf{X}_i\}_{i=1}^{N} \sim p_X^{\mathrm{target}}$, we would like to estimate the target density $p_X^{\mathrm{target}}$ and generate new samples from it. This is a density estimation problem, for which non-parametric estimators, e.g. Kernel Density Estimation (Sheather, 2004), and parametric estimators, e.g. the Maximum Entropy Distribution as the least biased estimator (Tohme et al., 2024), are classically used. In recent years, this problem has been approached using normalizing flows equipped with invertible maps, which have gained a great deal of interest in generative AI tasks. In an attempt to introduce interpretability in the trained model, we extend the invertible normalizing flow to a symbolic framework using the proposed ISR architecture.

![9_image_0.png](9_image_0.png)

Figure 4: Samples from four different target densities (first row), and their estimated distributions using INN (second row) and the proposed ISR method (third row).

In order to use the proposed ISR method as a normalizing flow for the unsupervised task of resampling from an intractable target distribution, we drop y and enforce $d_x = d_z$.² In this case, we aim to learn an invertible and symbolic map $f: \mathbb{R}^{d_x} \to \mathbb{R}^{d_z}$, parameterized by θ, such that

$$\mathbf{z}=f(\mathbf{x};\theta),\qquad\qquad\mathbf{x}=f^{-1}(\mathbf{z};\theta),\tag{16}$$

where $\mathbf{z} \sim p_Z$ is the standard normal distribution, which is easy to sample from. Using the change-of-variables formula, the density $p_X$ relates to the density $p_Z$ via

$$p_{X}(\mathbf{x};\theta)=p_{Z}\left(\mathbf{z}=f(\mathbf{x};\theta)\right)\cdot\left|\det\left(J_{\mathbf{x}\mapsto\mathbf{z}}(\mathbf{x};\theta)\right)\right|,\tag{17}$$

where $J_{\mathbf{x}\mapsto\mathbf{z}}$ indicates the Jacobian of the map f parameterized by θ. Following the same procedure outlined in Section 3.3, the model can be trained by minimizing the following NLL loss function

$$\mathcal{L}_{\mathrm{NLL}}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{2}\cdot f(\mathbf{x}_{i};\theta)^{2}-\log\left|\det\left(J_{\mathbf{x}\mapsto\mathbf{z}}(\mathbf{x}_{i};\theta)\right)\right|\right).\tag{18}$$

²In the absence of y, cISR and ISR are equivalent, so we simply refer to them as ISR. Similarly, INN and cINN become equivalent, and we simply refer to them as INN.
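As a brief sketch of how this unsupervised setting is used in practice (mirroring Eq. (13) but without the y-term), assume a trained bijection whose forward pass returns z and log |det J| and whose inverse is available as a callable; both names below are hypothetical handles, not part of a specific library.

```python
import torch

def flow_nll(z_pred, log_det_J):
    """Unsupervised NLL of Eq. (18): standard-normal prior on z minus log |det J|."""
    return (0.5 * (z_pred ** 2).sum(dim=-1) - log_det_J).mean()

def sample_from_flow(inverse_fn, n, d, seed=0):
    """Generate new samples from the learned density: draw z ~ N(0, I_d) and invert."""
    g = torch.Generator().manual_seed(seed)
    z = torch.randn(n, d, generator=g)
    return inverse_fn(z)   # x = f^{-1}(z; theta)
```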

We compare the proposed ISR approach with INN in recovering several two-dimensional target distributions (i.e. $d_x = d_z = 2$). First, we consider a fairly simple multivariate normal distribution $\mathcal{N}(\boldsymbol{\mu}, \Sigma)$ with mean $\boldsymbol{\mu} = [0, 3]$ and covariance matrix $\Sigma = \frac{1}{10}\cdot I_2$ as the target density. Then, we consider more challenging distributions: the "Banana," "Mixture of Gaussians (MoG)," and "Ring" distributions that are also considered in Jaini et al. (2019); Wenliang et al. (2019). For each of these target distributions, we draw $N_s = 10^4$ i.i.d. samples and train an invertible map that transports the samples to a standard normal distribution. This is the normalizing flow setting, in which we compare the standard INN with the proposed ISR architecture. Here, we use a single coupling block for the Gaussian and banana cases, and two coupling blocks for the ring and MoG test cases.

As shown in Figure 4, the proposed ISR method finds and generates samples of the considered target densities with slightly better accuracy than INN. We report the bijective symbolic expressions in Table 1 of Appendix A. For instance, the first target distribution in Figure 4 is the two-dimensional multivariate Gaussian distribution $\mathcal{N}(\boldsymbol{\mu}, \Sigma)$ with mean $\boldsymbol{\mu} = [0, 3]$ and covariance matrix $\Sigma = \frac{1}{10}\cdot I_2$. This is indeed a shifted and scaled standard Gaussian distribution where we know the analytical solution to the true map:

$$\mathbf{X}=\begin{bmatrix}X_{1}\\ X_{2}\end{bmatrix}\sim\mathcal{N}\left(\boldsymbol{\mu},\frac{1}{10}\cdot\boldsymbol{I}_{2}\right)\sim\boldsymbol{\mu}+\sqrt{\frac{1}{10}}\cdot\mathcal{N}\left(\mathbf{0},\boldsymbol{I}_{2}\right)=\boldsymbol{\mu}+\frac{1}{\sqrt{10}}\cdot\mathbf{Z}=\begin{bmatrix}0\\ 3\end{bmatrix}+0.316\cdot\begin{bmatrix}Z_{1}\\ Z_{2}\end{bmatrix}=\begin{bmatrix}0.316\,Z_{1}\\ 3+0.316\,Z_{2}\end{bmatrix}\;.\tag{19}$$

![10_image_0.png](10_image_0.png)

Figure 5: Results for the inverse kinematics benchmark problem. The faint colored lines indicate sampled arm configurations x taken from each model's predicted posterior $\hat{p}(\mathbf{x}\,|\,\mathbf{y}^*)$, conditioned on the target end point $\mathbf{y}^*$, which is indicated by a gray cross. The contour lines around the target end point enclose the regions containing 97% of the sampled arms' end points. We emphasize the arm with the highest estimated likelihood as a bold line.

As shown in Table 1 of Appendix A, for this Gaussian distribution example, the proposed ISR method finds the following invertible expression:

$$\begin{aligned}z_1&=x_1\cdot e^{1.16}=3.19\,x_1&&\Longleftrightarrow&x_1&=3.19^{-1}\,z_1=0.313\,z_1\\ z_2&=x_2\cdot e^{1.14}-9.39=3.13\,x_2-9.39&&\Longleftrightarrow&x_2&=3.13^{-1}\left(z_2+9.39\right)=0.319\,z_2+3.00\,.\end{aligned}$$
In other words, the proposed ISR method identifies the true underlying transformation given by Eq. (19) with high accuracy. As discussed in Appendix B, the user can indeed add other operators (e.g. log, etc.) when necessary or when domain knowledge is available. Appendix D explores a more challenging and noteworthy example through a toy inverse problem.

## 4.2 Inverse Kinematics

We now consider a geometrical benchmark example used by Ardizzone et al. (2019a); Kruse et al. (2021), which simulates an inverse kinematics problem in a two-dimensional space: a multi-jointed 2D arm moves vertically along a rail and rotates at three joints. In this problem, we are interested in the configurations (i.e. the four degrees of freedom) of the arm that place the arm's end point at a given position. The forward process computes the coordinates of the end point $\mathbf{y} \in \mathbb{R}^2$, given a configuration $\mathbf{x} \in \mathbb{R}^4$ (i.e. $d_x = 4$, $d_y = 2$, and hence $d_z = 2$). In particular, the forward process takes $\mathbf{x} = [x_1, x_2, x_3, x_4]$ as argument, where $x_1$ denotes the arm's starting height and $x_2, x_3, x_4$ are its three joint angles, and returns the coordinates of its end point $\mathbf{y} = [y_1, y_2]$ given by

$$\begin{array}{l}{{y_{1}=\ell_{1}\sin(x_{2})+\ell_{2}\sin(x_{2}+x_{3})+\ell_{3}\sin(x_{2}+x_{3}+x_{4})+x_{1}}}\\ {{y_{2}=\ell_{1}\cos(x_{2})+\ell_{2}\cos(x_{2}+x_{3})+\ell_{3}\cos(x_{2}+x_{3}+x_{4})}}\end{array}\tag{20}$$

where the segment lengths are $\ell_1 = 0.5$, $\ell_2 = 0.5$, and $\ell_3 = 1$. The parameters x follow a Gaussian prior $\mathbf{x} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\sigma}^2 \cdot I_4)$ with $\boldsymbol{\sigma}^2 = [0.25^2, 0.25, 0.25, 0.25]$, which favors a configuration with a centered origin and 180° joint angles (see Figure 5). We consider a training dataset of size $10^6$, constructed using this Gaussian prior and the forward process in Eq. (20). The inverse problem here asks to find the posterior distribution $p(\mathbf{x}\,|\,\mathbf{y}^*)$ of all possible configurations (or parameters) x that result in the arm's end point being positioned at a given location $\mathbf{y}^*$. This inverse kinematics problem, being low-dimensional, offers a computationally inexpensive forward (and backward) process, which enables fast training, intuitive visualizations, and an approximation of the true posterior estimates via rejection sampling.³ An example of a challenging end point $\mathbf{y}^*$ is shown in Figure 5, where we compare the proposed ISR method against the approximate true posterior (obtained via rejection sampling), as well as INN. The chosen $\mathbf{y}^*$ is particularly challenging, since this end point is unlikely under the prior p(x), and results in a strongly bi-modal posterior $p(\mathbf{x}\,|\,\mathbf{y}^*)$ (Ardizzone et al., 2019a; Kruse et al., 2021). As we can observe in Figure 5, compared to rejection sampling, all the considered architectures (i.e. INN, cINN, ISR, and cISR) are able to capture the two symmetric modes well. However, we can clearly see that they all generate x-samples whose resulting end points miss the target $\mathbf{y}^*$ by a wider margin. Quantitative results are also provided in Appendix C.

![11_image_0.png](11_image_0.png)

Figure 6: The SWellEx-96 experiment environment. The acoustic source is towed by a research vessel and transmits signals at various frequencies. The acoustic sensor consists of a vertical line array (VLA). Based on the measurements collected at the VLA, the objective is to estimate posterior distributions over parameters of interest (e.g. water depth, sound speed at the water-sediment interface, source range and depth, etc.).
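For reference, a small numerical sketch of the forward process in Eq. (20) and of the rejection-sampling baseline used to approximate the true posterior (footnote 3) is given below; the acceptance threshold, batch size, and example target end point are illustrative assumptions only.

```python
import numpy as np

L1, L2, L3 = 0.5, 0.5, 1.0                       # segment lengths from Eq. (20)
SIGMA = np.sqrt([0.25**2, 0.25, 0.25, 0.25])     # prior standard deviations (sigma^2 above)

def forward(x):
    """Forward kinematics of Eq. (20): x = [height, angle1, angle2, angle3] -> end point y."""
    x1, x2, x3, x4 = x[..., 0], x[..., 1], x[..., 2], x[..., 3]
    y1 = L1*np.sin(x2) + L2*np.sin(x2 + x3) + L3*np.sin(x2 + x3 + x4) + x1
    y2 = L1*np.cos(x2) + L2*np.cos(x2 + x3) + L3*np.cos(x2 + x3 + x4)
    return np.stack([y1, y2], axis=-1)

def rejection_sample_posterior(y_star, n_samples=1000, eps=0.05, batch=100_000, seed=0):
    """Approximate p(x | y*): draw x from the Gaussian prior and keep the samples whose
    simulated end points land within eps of the target y* (as described in footnote 3)."""
    rng = np.random.default_rng(seed)
    accepted = []
    while sum(len(a) for a in accepted) < n_samples:
        x = rng.normal(0.0, SIGMA, size=(batch, 4))
        keep = np.linalg.norm(forward(x) - y_star, axis=-1) < eps
        accepted.append(x[keep])
    return np.concatenate(accepted)[:n_samples]

# Illustrative target end point (not the one used in Figure 5):
samples = rejection_sample_posterior(np.array([1.5, 0.5]), n_samples=500)
```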

## 4.3 Application: Geoacoustic Inversion

Predicting acoustic propagation at sea is vital for various applications, including sonar performance forecasting and mitigating noise pollution at sea. The ability to predict sound propagation in a shallow water environment depends on understanding the seabed's geoacoustic characteristics. Inferring those characteristics from ocean acoustic measurements (or signals) is known as geoacoustic inversion (GI). GI involves several components: (i) representation of the ocean environment, (ii) selection of the inversion method, including the forward propagation model implemented, and (iii) quantification of the uncertainty related to the parameter estimates.

³**Rejection sampling.** Assume we require $N_s$ samples of x from the posterior $p(\mathbf{x}\,|\,\mathbf{y}^*)$ given some observation $\mathbf{y}^*$. After setting some acceptance threshold ϵ, we iteratively generate x-samples from the prior. For each sample, we simulate the corresponding y-values and only keep those with $\mathrm{dist}(\mathbf{y}, \mathbf{y}^*) < \epsilon$. The process is repeated until $N_s$ samples are collected (or accepted). Indeed, the smaller the threshold ϵ, the more x-sample candidates (and hence the more simulations) have to be generated. Hence, we adopt this approach in this low-dimensional inverse kinematics problem, where we can afford to run the forward process (or simulation) a huge number of times.

We start by describing the ocean environment. We consider the setup of SWellEx-96 Yardim et al. (2010);
Meyer & Gemba (2021), which was an experiment done off the coast of San Diego, CA, near Point Loma.

This experimental setting is one of the most used, documented, and understood studies in the undersea acoustics community.⁴ As depicted in Figure 6, the data is collected via a vertical line array (VLA). The specification of the 21 hydrophones of the VLA and the sound speed profile (SSP) in the water column are provided in the SWellEx-96 documentation. The SSP and sediment parameters are considered to be range-independent. Water depth refers to the depth of the water at the array. The source is towed by a research vessel and transmits a comb signal comprising frequencies of 49, 79, 112, 148, 201, 283, and 388 Hz. While in the SWellEx-96 experiment the position of the source changes with time, for this task we consider the instant when the source depth is 60 m and the distance (or range) between the source and the VLA is 3 km.

The sediment layer is modeled with the following properties. The seabed consists initially of a sediment layer that is 23.5 meters thick, with a density of 1.76 g/cm³ and an attenuation of 0.2 dB/kmHz. The sound speed at the bottom of this layer is assumed to be 1593 m/s. The second layer is mudstone that is 800 meters thick, possessing a density of 2.06 g/cm³ and an attenuation of 0.06 dB/kmHz. The top and bottom sound speeds of this layer are 1881 m/s and 3245 m/s, respectively. The description of the geoacoustic model of the SWellEx-96 experiment is complemented by a half-space featuring a density of 2.66 g/cm³, an attenuation of 0.020 dB/kmHz, and a sound speed of 5200 m/s. Here, we consider two geoacoustic inversion tasks:
Task 1. Based on the measurements at the VLA, the objective of this task is to infer the posterior distribution over the water depth as well as the sound speed at the water-sediment interface. For this task, we assume all the quantities above to be known. The unknown parameters m1 (the water depth) and m2
(the sound speed at the water-sediment interface) follow a uniform prior in [200.5, 236.5] m and [1532, 1592]
m/s, i.e. m1 ∼ U([200.5, 236.5]) and m2 ∼ U([1532, 1592]), where U(Ω) denotes a uniform distribution in the domain Ω.

Task 2. In addition to the two parameters considered in *Task 1* (i.e. the water depth m1 and the sound speed at the water-sediment interface m2), we also estimate the posterior distribution over the VLA tilt m3, as well as the thickness of the first (sediment) layer m4. All other quantities provided above are assumed to be known. As in *Task 1*, the unknown parameters follow a uniform prior, i.e. m1 ∼ U([200.5, 236.5]),
m2 ∼ U([1532, 1592]), m3 ∼ U([−2, 2]), and m4 ∼ U([18.5, 28.5]).

The received pressure y on each hydrophone and for each frequency is a function of unknown parameters m (e.g. water depth, sound speed at the water-sediment interface, etc.) and additive noise ϵ as follows

$$\mathbf{y}=s(\mathbf{m},\boldsymbol{\epsilon})=F(\mathbf{m})+\boldsymbol{\epsilon},\qquad\qquad\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\Sigma)\tag{21}$$

where Σ is the covariance matrix of the data noise. Here, s(m, ϵ) is a known forward model that, assuming an additive noise model, can be rewritten as F(m) + ϵ, where F(m) represents the undersea acoustic model (Jensen et al., 2011). The SWellEx-96 experimental setup involves a complicated environment, and no closed-form analytical solution is available for F(m). In this case, F(m) can only be evaluated numerically, and we use the normal-modes program KRAKEN (Porter, 1992) for this purpose.

Recently, machine learning algorithms have gained attention in the ocean acoustics community (Bianco et al., 2019; Ying, 2022) for their notable performance and efficiency, especially when compared to traditional methods such as MCMC. In this work, for the first time, we use the concept of invertible networks to estimate posterior distributions in GI. Particularly appealing is that invertible architectures can replace both the forward propagation model and the inversion method.

We now discuss the training of the invertible architectures. We use the uniform prior on the parameters m (described in *Task 1* and *Task 2* above) and the forward model in Eq. (21) to construct a synthetic data set for the SWellEx-96 experimental setup using the normal-modes program KRAKEN. For each parameter vector m, we have 7 × 21 = 147 values for the pressure y received at the hydrophones, corresponding to the source's 7 frequencies and the 21 active hydrophones. For inference, we use a test acoustic signal $\mathbf{y}^*$ that corresponds to the actual parameter values from the SWellEx-96 experiment, where the source is 60 m deep and its distance from the VLA is 3 km (i.e. $m_1^* = 216.5$, $m_2^* = 1572.368$, $m_3^* = 0$, $m_4^* = 23.5$). Also, the signal-to-noise ratio (SNR) is 15 dB.

⁴See http://swellex96.ucsd.edu/

![13_image_0.png](13_image_0.png)

![13_image_1.png](13_image_1.png)

Figure 7: A conceptual figure of ISR (left) and cISR (right) for the geoacoustic inversion task. The posterior distribution of the parameters of interest m can be obtained by sampling z (e.g. from a standard Gaussian distribution) for a fixed observation $\mathbf{y}^*$ and running the trained bijective model backwards. To appropriately account for noise in the data, we include random data noise ϵ as additional model parameters.

![13_image_2.png](13_image_2.png)

Figure 8: Task 1. For a fixed observation $\mathbf{y}^*$, we compare the estimated posteriors $p(\mathbf{x}\,|\,\mathbf{y}^*)$ of INN, cINN, and the proposed ISR and cISR methods. Vertical dashed red lines show the ground truth values $\mathbf{x}^*$.

Inspired by Zhang & Curtis (2021), for the invertible architectures, we include the data noise ϵ as additional model parameters to be learned. In this context, as depicted in Figure 7, the input of the network is obtained by augmenting the unknown parameters m with the additive noise ϵ, i.e. x = [m, ϵ]. There are several ways to use the measurements collected across the 21 hydrophones for training. For instance, one can stack all hydrophones' data and treat them as a single quantity at the network's output. Alternatively, one can treat each hydrophone measurement independently as an individual training example. For the former, the additive noise will be learned separately for each hydrophone pressure y, while for the latter, we essentially learn the effective additive noise over all hydrophones simultaneously. In this experiment, we adopt the latter training approach, which disregards the inter-hydrophone variations, thereby reducing computational overhead.

The pressures received at the hydrophones are considered in the frequency domain, and hence they can be complex numbers. While the invertible architectures can be constructed to handle complex numbers, in this case study, we stack the real and imaginary parts of the pressure field at the network's output. That is, the pressure y = Re{y} + i Im{y} will be represented as [Re{y}, Im{y}] at the network's output. In short, the 7 pressures (corresponding to the source's 7 frequencies) received at each hydrophone are replaced by 14

![14_image_0.png](14_image_0.png)

Figure 9: Task 2. For a fixed observation y*, we compare the estimated posteriors p(x | y*) of INN, cINN, and the proposed ISR and cISR methods. Vertical dashed red lines show the ground truth values x*.
real numbers at the output of the network. Also, the 14 corresponding additive noise terms are concatenated with the parameters m at the network's input. Since the dimension of the network's input is 14 + d_m, the latent variables z at the network's output will be d_m-dimensional for ISR and INN, and (14 + d_m)-dimensional for cINN and cISR. We compare the performance of the proposed ISR and cISR algorithms against INN and cINN in solving GI.
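This bookkeeping can be summarized in a short sketch (illustrative only; the noise level is left as a parameter and would be set from the 15 dB SNR):

```python
# Sketch of assembling one training example under the per-hydrophone convention: the 7 complex
# pressures at a hydrophone become 14 real outputs, and 14 additive-noise variables are appended
# to m at the input, so that input and output both have dimension 14 + d_m for ISR/INN.
import numpy as np

def make_example(m, pressures, noise_std, rng):
    """m: (d_m,) parameters; pressures: complex (7,) pressures at one hydrophone."""
    y_clean = np.concatenate([pressures.real, pressures.imag])   # 14 real numbers
    eps = rng.normal(0.0, noise_std, size=14)                    # additive data noise
    x = np.concatenate([m, eps])                                 # network input, shape (d_m + 14,)
    y = y_clean + eps                                            # noisy observation, shape (14,)
    return x, y
```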

The inferred posterior distributions via INN, cINN, ISR, and cISR for the GI Task 1 and Task 2 are depicted in Fig. 8 and Fig. 9, respectively. The performance of the ISR and cISR architectures is similar to that of the INN and cINN architectures. All methods produce point estimates, namely Maximum a Posteriori (MAP) estimates, close to the ground truth values, showcasing the efficacy of invertible architectures in addressing cumbersome inversion tasks.

## 5 Conclusion

In this work, we introduce Invertible Symbolic Regression (ISR), a novel technique that identifies the relationships between the inputs and outputs of a given dataset using invertible architectures. This is achieved by bridging and integrating concepts from Invertible Neural Networks (INNs) and the Equation Learner (EQL). This integration transforms the affine coupling blocks of INNs into a symbolic framework, resulting in an end-to-end differentiable symbolic inverse architecture that allows for efficient gradient-based learning. The proposed ISR method, equipped with sparsity-promoting regularization, not only captures complex functional relationships but also yields concise and interpretable invertible expressions. We demonstrate the versatility of ISR as a normalizing flow for density estimation and its applicability to inverse problems, particularly in the context of ocean acoustics, where it shows promising results in inferring posterior distributions of underlying parameters. This work is a first attempt toward creating interpretable symbolic invertible maps. While we mainly focused on introducing the ISR architecture and showing its applicability in density estimation tasks and inverse problems, an interesting research direction would be to explore the practicality of ISR in challenging generative modeling tasks (e.g. image or text generation).

## References

Wael H Ali, Aaron Charous, Chris Mirabito, Patrick J Haley, and Pierre FJ Lermusiaux. MSEAS-ParEq for coupled ocean-acoustic modeling around the globe. In *OCEANS 2023-MTS/IEEE US Gulf Coast*, pp. 1–10. IEEE, 2023.

Christophe Andrieu, Nando De Freitas, Arnaud Doucet, and Michael I Jordan. An introduction to MCMC
for machine learning. *Machine learning*, 50:5–43, 2003.

Lynton Ardizzone, Jakob Kruse, Carsten Rother, and Ullrich Köthe. Analyzing inverse problems with invertible neural networks. In *International Conference on Learning Representations*, 2019a.

Lynton Ardizzone, Carsten Lüth, Jakob Kruse, Carsten Rother, and Ullrich Köthe. Guided image generation with conditional invertible neural networks. *arXiv preprint arXiv:1907.02392*, 2019b.

Lynton Ardizzone, Jakob Kruse, Carsten Lüth, Niels Bracher, Carsten Rother, and Ullrich Köthe. Conditional invertible neural networks for diverse image-to-image translation. In Zeynep Akata, Andreas Geiger, and Torsten Sattler (eds.), *Pattern Recognition*, pp. 373–387, Cham, 2021. Springer International Publishing. ISBN 978-3-030-71278-5.

Yves F Atchadé and Jeffrey S Rosenthal. On adaptive Markov chain Monte Carlo algorithms. *Bernoulli*, 11
(5):815–828, 2005.

Jeremy Benson, N Ross Chapman, and Andreas Antoniou. Geoacoustic model inversion using artificial neural networks. *Inverse Problems*, 16(6):1627, 2000.

Michael J Bianco, Peter Gerstoft, James Traer, Emma Ozanich, Marie A Roch, Sharon Gannot, and Charles-Alban Deledalle. Machine learning in acoustics: Theory and applications. *The Journal of the Acoustical Society of America*, 146(5):3590–3628, 2019.

Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, and Giambattista Parascandolo. Neural symbolic regression that scales. In *International Conference on Machine Learning*, 2021.

David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. *Journal of the American Statistical Association*, 112(518):859–877, 2017.

Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng. *Handbook of Markov chain Monte Carlo*. CRC Press, 2011.

Bogdan Burlacu, Gabriel Kronberger, and Michael Kommenda. Operon C++: An efficient genetic programming framework for symbolic regression. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, GECCO '20, pp. 1562–1570, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450371278. doi: 10.1145/3377929.3398099.

N Ross Chapman and Er Chang Shang. Review of geoacoustic inversion in underwater acoustics. *Journal* of Theoretical and Computational Acoustics, 29(03):2130004, 2021.

Patrick R Conrad, Youssef M Marzouk, Natesh S Pillai, and Aaron Smith. Accelerating asymptotically exact MCMC for computationally intensive models via local approximations. Journal of the American Statistical Association, 111(516):1591–1607, 2016.

Paul G Constantine, Carson Kent, and Tan Bui-Thanh. Accelerating Markov chain Monte Carlo with active subspaces. *SIAM Journal on Scientific Computing*, 38(5):A2779–A2805, 2016.

Kyle Cranmer, Johann Brehmer, and Gilles Louppe. The frontier of simulation-based inference. Proceedings of the National Academy of Sciences, 117(48):30055–30062, 2020a.

Miles Cranmer, Alvaro Sanchez Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel, and Shirley Ho. Discovering symbolic models from deep learning with inductive biases. *Advances in Neural* Information Processing Systems, 33:17429–17442, 2020b.

Katalin Csilléry, Michael GB Blum, Oscar E Gaggiotti, and Olivier François. Approximate Bayesian computation (abc) in practice. *Trends in ecology & evolution*, 25(7):410–418, 2010.

Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. *arXiv preprint arXiv:1410.8516*, 2014.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. *arXiv preprint* arXiv:1605.08803, 2016.

Stan E Dosso and Jan Dettmer. Bayesian matched-field geoacoustic inversion. *Inverse Problems*, 27(5):055009, 2011.

Arnaud Doucet and Xiaodong Wang. Monte Carlo methods for signal processing: a review in the statistical signal processing context. *IEEE Signal Processing Magazine*, 22(6):152–170, 2005.

Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. Advances in neural information processing systems, 32, 2019.

Anthony William Fairbank Edwards. *Likelihood*. CUP Archive, 1984.

Qinwei Fan, Jacek M Zurada, and Wei Wu. Convergence of online gradient method for feedforward neural networks with smoothing l1/2 regularization penalty. *Neurocomputing*, 131:208–216, 2014.

Yuwei Fan and Lexing Ying. Solving inverse wave scattering with deep learning. *arXiv preprint* arXiv:1911.13202, 2019.

Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on data science and advanced analytics (DSAA), pp. 80–89. IEEE, 2018.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep learning*. MIT press, 2016.

Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012.

Aditya Grover, Manik Dhar, and Stefano Ermon. Flow-GAN: Combining maximum likelihood and adversarial learning in generative models. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 2018.

Xiaofei Guan, Xintong Wang, Hao Wu, Zihao Yang, and Peng Yu. Efficient Bayesian inference using physics-informed invertible neural networks for inverse problems. *Machine Learning: Science and Technology*.

Charles W Holland, Jan Dettmer, and Stan E Dosso. Remote sensing of sediment density and velocity gradients in the transition layer. *The Journal of the Acoustical Society of America*, 118(1):163–177, 2005.

Chen-Fen Huang, Peter Gerstoft, and William S Hodgkiss. Uncertainty analysis in matched-field geoacoustic inversions. *The Journal of the Acoustical Society of America*, 119(1):197–207, 2006.

Priyank Jaini, Kira A Selby, and Yaoliang Yu. Sum-of-squares polynomial flow. In International Conference on Machine Learning, pp. 3009–3018. PMLR, 2019.

Finn B Jensen, William A Kuperman, Michael B Porter, Henrik Schmidt, and Alexandra Tolstoy. *Computational ocean acoustics*, volume 2011. Springer, 2011.

Ying Jin, Weilin Fu, Jian Kang, Jiadong Guo, and Jian Guo. Bayesian symbolic regression. arXiv preprint arXiv:1910.08892, 2019.

Pierre-Alexandre Kamienny, Stéphane d'Ascoli, Guillaume Lample, and François Charton. End-to-end symbolic regression with transformers. *arXiv preprint arXiv:2204.10532*, 2022.

Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision? *Advances in neural information processing systems*, 30, 2017.

Liron Simon Keren, Alex Liberzon, and Teddy Lazebnik. A computational framework for physics-informed symbolic regression with straightforward integration of domain knowledge. *Scientific Reports*, 13(1):1249, 2023.

Yuehaw Khoo and Lexing Ying. Switchnet: a neural network model for forward and inverse scattering problems. *SIAM Journal on Scientific Computing*, 41(5):A3182–A3201, 2019.

Samuel Kim, Peter Y Lu, Srijon Mukherjee, Michael Gilbert, Li Jing, Vladimir Čeperić, and Marin Soljačić. Integration of neural network-based symbolic regression in deep learning for scientific discovery. *IEEE Transactions on Neural Networks and Learning Systems*, 2020.

Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. *Advances* in neural information processing systems, 31, 2018.

Ivan Kobyzev, Simon JD Prince, and Marcus A Brubaker. Normalizing flows: An introduction and review of current methods. *IEEE transactions on pattern analysis and machine intelligence*, 43(11):3964–3979, 2020.

Michael Kommenda, Bogdan Burlacu, Gabriel Kronberger, and Michael Affenzeller. Parameter identification for symbolic regression using nonlinear least squares. *Genetic Programming and Evolvable Machines*, 21
(3):471–501, 2020.

Anoop Korattikara, Yutian Chen, and Max Welling. Austerity in MCMC land: Cutting the Metropolis-Hastings budget. In *International conference on machine learning*, pp. 181–189. PMLR, 2014.

John R Koza. *Genetic programming: on the programming of computers by means of natural selection*, volume 1. MIT press, 1992.

Jakob Kruse, Lynton Ardizzone, Carsten Rother, and Ullrich Köthe. Benchmarking invertible architectures on inverse problems. *arXiv preprint arXiv:2101.10763*, 2021.

Vyacheslav Kungurtsev, Adam Cobb, Tara Javidi, and Brian Jalaian. Decentralized Bayesian learning with Metropolis-adjusted Hamiltonian Monte Carlo. *Machine Learning*, pp. 1–29, 2023.

William La Cava, Patryk Orzechowski, Bogdan Burlacu, Fabricio Olivetti de Franca, Marco Virgolin, Ying Jin, Michael Kommenda, and Jason H Moore. Contemporary symbolic regression methods and their relative performance. 2021.

Peter M Lee. *Bayesian statistics*. Arnold Publication, 1997.

Thomas Leonard and John SJ Hsu. *Bayesian methods: an analysis for statisticians and interdisciplinary researchers*, volume 5. Cambridge University Press, 2001.

Yongchao Li, Yanyan Wang, and Liang Yan. Surrogate modeling for bayesian inverse problems based on physics-informed neural networks. *Journal of Computational Physics*, 475:111841, 2023.

Ziming Liu and Max Tegmark. Machine learning conservation laws from trajectories. *Physical Review Letters*,
126(18):180604, 2021.

Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljačić, Thomas Y Hou, and Max Tegmark. Kan: Kolmogorov-arnold networks. *arXiv preprint arXiv:2404.19756*, 2024.

Alexander Luce, Ali Mahdavi, Heribert Wankerl, and Florian Marquardt. Investigation of inverse design of multilayer thin-films with conditional invertible neural networks. *Machine Learning: Science and Technology*, 4(1):015014, feb 2023. doi: 10.1088/2632-2153/acb48d. URL https://dx.doi.org/10.1088/2632-2153/acb48d.

David JC MacKay. *Information theory, inference and learning algorithms*. Cambridge university press, 2003.

Georg Martius and Christoph H Lampert. Extrapolation and learning equations. *arXiv preprint* arXiv:1610.02995, 2016.

Florian Meyer and Kay L. Gemba. Probabilistic focalization for shallow water localization. *J. Acoust. Soc. Am.*, 150(2):1057–1066, 08 2021.

T Nathan Mundhenk, Mikel Landajuela, Ruben Glatt, Claudio P Santiago, Daniel M Faissol, and Brenden K
Petersen. Symbolic regression via neural-guided genetic programming population seeding. *arXiv preprint* arXiv:2111.00053, 2021.

Kevin P Murphy. *Machine learning: a probabilistic perspective*. MIT press, 2012.

Frank Noé, Simon Olsson, Jonas Köhler, and Hao Wu. Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. *Science*, 365(6457):eaaw1147, 2019.

Patryk Orzechowski, William La Cava, and Jason H Moore. Where are we now? A large benchmark study of recent symbolic regression methods. In *Proceedings of the Genetic and Evolutionary Computation* Conference, pp. 1183–1190, 2018.

Govinda Anantha Padmanabha and Nicholas Zabaras. Solving inverse problems using conditional invertible neural networks. *Journal of Computational Physics*, 433:110194, 2021.

George Papamakarios, David Sterratt, and Iain Murray. Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 837–848. PMLR, 2019.

George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, 22(57):1–64, 2021.

Yudi Pawitan. *In all likelihood: statistical modelling and inference using likelihood*. Oxford University Press, 2001.

Brenden K Petersen, Mikel Landajuela Larma, Terrell N. Mundhenk, Claudio Prata Santiago, Soo Kyung Kim, and Joanne Taery Kim. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. In *International Conference on Learning Representations*, 2021.

Michael B Porter. The KRAKEN normal mode program. *Naval Research Laboratory, Washington DC*, 1992.

Patrick Putzky and Max Welling. Invert to learn to invert. *Advances in neural information processing systems*, 32, 2019.

Stefan T Radev, Frederik Graw, Simiao Chen, Nico T Mutters, Vanessa M Eichel, Till Bärnighausen, and Ullrich Köthe. Outbreakflow: Model-based Bayesian inference of disease outbreak dynamics with invertible neural networks and its application to the covid-19 pandemics in germany. *PLoS computational biology*,
17(10):e1009472, 2021.

Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. *Journal of Computational Physics*, 378:686–707, 2019.

Simiao Ren, Willie Padilla, and Jordan Malof. Benchmarking deep inverse models over time, and the neural-adjoint method. *Advances in Neural Information Processing Systems*, 33:38–48, 2020.

Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In *International conference on machine learning*, pp. 1530–1538. PMLR, 2015.

Subham Sahoo, Christoph Lampert, and Georg Martius. Learning equations for extrapolation and control. In *International Conference on Machine Learning*, pp. 4442–4450. PMLR, 2018.

Tim Salimans, Diederik Kingma, and Max Welling. Markov chain Monte Carlo and variational inference:
Bridging the gap. In *International conference on machine learning*, pp. 1218–1226. PMLR, 2015.

Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. *science*, 324
(5923):81–85, 2009.

Simon J Sheather. Density estimation. *Statistical science*, pp. 588–597, 2004.

Takeshi Teshima, Isao Ishikawa, Koichi Tojo, Kenta Oono, Masahiro Ikeda, and Masashi Sugiyama. Coupling-based invertible neural networks are universal diffeomorphism approximators. *Advances in Neural Information Processing Systems*, 33:3362–3373, 2020.

Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society:
Series B (Methodological), 58(1):267–288, 1996.

Tony Tohme. *The Bayesian validation metric: a framework for probabilistic model calibration and validation*. PhD thesis, Massachusetts Institute of Technology, 2020.

Tony Tohme, Kevin Vanslette, and Kamal Youcef-Toumi. A generalized Bayesian approach to model calibration. *Reliability Engineering & System Safety*, 204:107141, 2020.

Tony Tohme, Dehong Liu, and Kamal Youcef-Toumi. GSR: A generalized symbolic regression approach. *Transactions on Machine Learning Research*, 2023a. ISSN 2835-8856. URL https://openreview.net/forum?id=lheUXtDNvP.

Tony Tohme, Kevin Vanslette, and Kamal Youcef-Toumi. Reliable neural networks for regression uncertainty estimation. *Reliability Engineering & System Safety*, 229:108811, 2023b.

Tony Tohme, Mohsen Sadr, Kamal Youcef-Toumi, and Nicolas Hadjiconstantinou. MESSY Estimation:
Maximum-entropy based stochastic and symbolic density estimation. *Transactions on Machine Learning* Research, 2024. ISSN 2835-8856. URL https://openreview.net/forum?id=Y2ru0LuQeS.

Belinda Tzen and Maxim Raginsky. Neural stochastic differential equations: Deep latent Gaussian models in the diffusion limit. *arXiv preprint arXiv:1905.09883*, 2019.

Silviu-Marian Udrescu and Max Tegmark. AI Feynman: A physics-inspired method for symbolic regression. *Science Advances*, 6(16):eaay2631, 2020.

Silviu-Marian Udrescu, Andrew Tan, Jiahai Feng, Orisvaldo Neto, Tailin Wu, and Max Tegmark. AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity. Advances in Neural Information Processing Systems, 33:4860–4871, 2020.

Mojtaba Valipour, Bowen You, Maysum Panju, and Ali Ghodsi. Symbolicgpt: A generative transformer model for symbolic regression. *arXiv preprint arXiv:2106.14131*, 2021.

Kevin Vanslette, Tony Tohme, and Kamal Youcef-Toumi. A general model validation and testing tool. *Reliability Engineering & System Safety*, 195:106684, 2020.

Marco Virgolin and Solon P Pissis. Symbolic regression is NP-hard. *arXiv preprint arXiv:2207.01018*, 2022.

Sven Wang and Youssef Marzouk. On minimax density estimation via measure transport. *arXiv preprint arXiv:2207.10231*, 2022.

Li Wenliang, Danica J Sutherland, Heiko Strathmann, and Arthur Gretton. Learning deep kernels for exponential family densities. In *International Conference on Machine Learning*, pp. 6737–6746. PMLR,
2019.

CJ Wild and GAF Seber. *Nonlinear regression*. New York: Wiley, 1989.

Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E Turner, José Miguel Hernández-Lobato, and Alexander L Gaunt. Deterministic variational inference for robust bayesian neural networks. In *International* Conference on Learning Representations, 2018.

Sihong Wu, Qinghua Huang, and Li Zhao. Fast Bayesian inversion of airborne electromagnetic data based on the invertible neural network. *IEEE Transactions on Geoscience and Remote Sensing*, 61:1–11, 2023.

Wei Wu, Qinwei Fan, Jacek M Zurada, Jian Wang, Dakun Yang, and Yan Liu. Batch gradient method with smoothing l1/2 regularization for training of feedforward neural networks. *Neural Networks*, 50:72–78, 2014.

Zongben Xu, Hai Zhang, Yao Wang, XiangYu Chang, and Yong Liang. L 1/2 regularization. Science China Information Sciences, 53:1159–1169, 2010.

Caglar Yardim, Peter Gerstoft, and William S Hodgkiss. Geoacoustic and source tracking using particle filtering: Experimental results. *The Journal of the Acoustical Society of America*, 128(1):75–87, 2010.

Lexing Ying. Solving inverse problems with deep learning. In *Proc. Int. Cong. Math*, volume 7, pp. 5154–5175, 2022.

Hengzhe Zhang, Aimin Zhou, Hong Qian, and Hu Zhang. PS-Tree: A piecewise symbolic regression tree. *Swarm and Evolutionary Computation*, 71:101061, 2022.

Xin Zhang and Andrew Curtis. Bayesian geophysical inversion using invertible neural networks. Journal of Geophysical Research: Solid Earth, 126(7):e2021JB022320, 2021.

## A Invertible Symbolic Expressions Recovered By ISR For The Considered Distributions

Table 1: Invertible symbolic expressions recovered by our ISR method for density estimation of several distributions (see Figure 4) in Section 4.1. Here, MoGs denotes "Mixture of Gaussians."

| distributions (see Figure 4) in Section 4.1. Here, MoGs denotes "Mixture of Gaussians." Example Expression Gaussian u = x, i.e. u1 = x1, u2 = x2 s1(u2) = 1.16 t1(u2) = 0 v1 = u1 · exp (s1(u2)) + t1(u2) v2 = u2 s2(v1) = 1.14 t2(v1) = −9.39 o1 = v1 o2 = v2 · exp (s2(v1)) + t2(v1) z = o, i.e. z1 = o1, z2 = o2 Banana u = x, i.e. u1 = x1, u2 = x2 s1(u2) = 0.52 sin (1.86 u2) + 0.084 sin (5.5 sin (1.86 u2) + 2.55) + 1.7 t1(u2) = 0.74 − 0.12 sin (3.65 sin (2.45 u2) − 0.82) v1 = u1 · exp (s1(u2)) + t1(u2) v2 = u2 s2(v1) = 1.72 (0.025 − 0.47 sin (0.33 v1)) (0.23 sin (0.33 v1) − 0.29) + 2.24 t2(v1) = −3.74 (0.022 sin (0.62 v1) + 0.035 sin (0.63 v1) − 0.76) (0.45 sin (0.62 v1) + 0.7 sin (0.63 v1) + 0.38) + 0.027 o1 = v1 o2 = v2 · exp (s2(v1)) + t2(v1) z = o, i.e. z1 = o1, z2 = o2 Ring u = x, i.e. u1 = x1, u2 = x2 s1(u2) = −3.14 −0.15 u 2 2 − 0.26 sin (1.26 u2) + 0.094 sin (3.25 u2) − 0.3  0.098 u 2 2 + 0.39 sin (1.26 u2) + 0.014 sin (3.25 u2) + 0.16 +0.09 sin  0.17 u 2 2 + 0.69 sin (1.26 u2) + 0.76 sin (3.25 u2) + 0.25 − 0.30 sin  0.94 u 2 2 + 2.35 sin (1.26 u2) − 1.31 sin (3.25 u2) + 1.57 + 0.053 t1(u2) = −0.012 v1 = u1 · exp (s1(u2)) + t1(u2) v2 = u2 s2(v1) = 0.037 sin (0.22 sin (0.22 v1) + 0.22 sin (0.22 v1) − 0.27) + 0.052 sin (0.23 sin (0.22 v1) + 0.23 sin (0.22 v1) − 0.39) − 0.11 t2(v1) = 0.13 sin (0.13 sin (0.37 v1) + 0.13 sin (0.59 v1) − 1.16) + 0.65 sin (0.021 v1 + 1.34 sin (0.37 v1) + 2.29 sin (0.59 v1) + 1.44) + 0.33 o1 = v1 o2 = v2 · exp (s2(v1)) + t2(v1) u = o, i.e. u1 = o1, u2 = o2 s1(u2) = −3.65 (−0.47 sin (1.38 u2) − 0.026 sin (1.93 u2) − 0.1) (−0.014 sin (1.38 u2) − 0.39 sin (1.93 u2) − 0.35) − 0.44 sin (1.74 sin (1.38 u2) + 1.71 sin (1.93 u2) + 5.55)     |                                                                  |
|-----|------------------------------------------------------------------|
|     | −0.46 sin (3.31 sin (1.38 u2) + 0.44 sin (1.93 u2) − 0.7) + 0.38 |
|     | t1(u2) = 0.11 sin (0.85 sin (1.38 u2) + 0.86 sin (1.38 u2)) + 0.12 sin (0.87 sin (1.38 u2) + 0.87 sin (1.38 u2)) + 0.053 v1 = u1 · exp (s1(u2)) + t1(u2) v2 = u2 2 2 s2(v1) = 0.25 v 1 − 0.0071 sin (1.67 v1) − 0.1 sin (5.29 v1) − 0.63 sin −1.075 v 1 + 0.55 sin (1.67 v1) + 0.77 sin (5.29 v1)  +0.46 sin  1.89 v 1 − 0.27 sin (1.67 v1) − 1.69 sin (5.29 v1) + 0.9  + 1.22 2 t2(v1) = 0.62 sin (1.56 sin (0.43 v1) + 1.57 sin (0.43 v1) − 1.68) + 0.61 sin (1.56 sin (0.43 v1) + 1.57 sin (0.43 v1) − 1.68) − 0.48 o1 = v1 o2 = v2 · exp (s2(v1)) + t2(v1) z = o, i.e. z1 = o1, z2 = o2                                                                  |
| MoG | u = x, i.e. u1 = x1, u2 = x2 s1(u2) = −0.039 (−0.032 sin (1.44 u2) − sin (1.48 u2) + 0.94)2 − 3.03 (0.18 sin (1.44 u2) + 0.14 sin (1.48 u2) − 0.63) (0.26 sin (1.44 u2) + 0.28 sin (1.48 u2) − 0.31) +0.047 sin (1.44 u2) + 0.13 sin (1.48 u2) − 0.029 sin (0.15 sin (1.44 u2) + 1.4 sin (1.48 u2) − 3.47) − 0.11 sin (0.45 sin (1.44 u2) + 1.46 sin (1.48 u2) + 2.65) − 0.17 t1(u2) = 0.052 − 0.12 sin (1.13 sin (3.14 u2) + 1.64) v1 = u1 · exp (s1(u2)) + t1(u2) v2 = u2 s2(v1) = 0.34 (0.15 sin (1.024 v1) + 0.13 sin (2.022 v1)) sin (2.022 v1) + 0.014 sin2 (2.022 v1) + 0.15 sin (0.98 v1 sin (1.024 v1) + 1.9 sin (2.022 v1) − 1.41) −0.22 sin (3.38 sin (1.024 v1) + 1.83 sin (2.022 v1) + 1.5) + 0.39 t2(v1) = −0.46 sin (0.89 v1 + 0.21 sin (1.38 v1) − 1.56) + 0.33 sin (1.23 v1 + 1.69 sin (1.38 v1) − 0.44 sin (1.58 v1) + 1.75 v1) + 0.094 o1 = v1 o2 = v2 · exp (s2(v1)) + t2(v1) u = o, i.e. u1 = o1, u2 = o2 s1(u2) = 3.36 (−0.36 sin (3.1 u2) − 0.29) (0.26 sin (3.1 u2) + 0.19) − 0.45 sin (3.88 sin (3.1 u2) − 1.83) − 0.38 t1(u2) = 0 v1 = u1 · exp (s1(u2)) + t1(u2) v2 = u2 s2(v1) = 0.0035 v 1 − 2.88 −0.079 v 2 1 − 0.33 0.042 v 2 1 + 0.4  + 2.6 (−0.34 sin (1.7 v1) − 0.088 sin (1.71 v1)) (−0.053 sin (1.7 v1) − 0.38 sin (1.71 v1)) + 1.61 2 t2(v1) = −0.14 o1 = v1 o2 = v2 · exp (s2(v1)) + t2(v1) z = o, i.e. z1 = o1, z2 = o2                                                                  |

## B Details Of Network Architectures

In our experiments, we train all models using the Adam optimizer with a dynamic learning rate decaying from 10−2 to 10−4. In addition, for INN, all (hidden) neurons of the subnetworks are followed by Leaky ReLU activations, while for ISR, each neuron in the hidden layers is followed by an activation function from the following library:

$$\left\{1,\mathrm{id},\circ^{2}(\times4),\sin(2\pi\circ),\sigma,\circ_{1}\times\circ_{2}\right\}$$

where 1 represents the constant function, "id" is the identity operator, σ denotes the sigmoid function, and ∘ denotes a placeholder operand, e.g. ∘² corresponds to the square operator. Also, ∘₁ × ∘₂ denotes the multiplication operator, and each activation function may be duplicated within each layer. We adopt a regularization coefficient of 5 × 10−3. The library above is used for illustrative purposes; indeed, additional arithmetic operators (e.g. ÷) or mathematical functions (e.g. log, cos, exp, etc.) can be included.
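As an illustration of how such a library can be wired into a differentiable layer, below is a minimal PyTorch sketch of one EQL-style hidden layer; the layer width and duplication counts are illustrative rather than the exact configuration used in our experiments.

```python
# Minimal sketch of an EQL-style hidden layer using the activation library above: constant 1,
# identity, squares (duplicated 4 times), sin(2*pi*.), sigmoid, and one pairwise product unit.
import math
import torch
import torch.nn as nn

class SymbolicLayer(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        # 7 unary pre-activations (identity, 4 squares, sine, sigmoid) + 2 operands for one product
        self.linear = nn.Linear(in_dim, 9)

    def forward(self, x):
        z = self.linear(x)
        out = [
            torch.ones_like(z[..., :1]),            # constant function 1
            z[..., 0:1],                            # identity
            z[..., 1:5] ** 2,                       # square operator, duplicated 4 times
            torch.sin(2 * math.pi * z[..., 5:6]),   # sin(2*pi*.)
            torch.sigmoid(z[..., 6:7]),             # sigmoid
            z[..., 7:8] * z[..., 8:9],              # multiplication of two operands
        ]
        return torch.cat(out, dim=-1)               # layer output width: 9
```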

The architecture details are provided below.

Density estimation via normalizing flow. In Section 4.1, we train INN and ISR using a batch size of 64. For the "Gaussian" and "Banana" distributions, we adopt 1 affine coupling block with 2 fully connected (hidden)
layers per subnetwork. For the "Ring" and "Mixture of Gaussians (MoGs)" distributions, we use 2 invertible blocks with 2 fully connected layers for each subnetwork.
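For reference, a plain-PyTorch sketch of a single affine coupling block with two-hidden-layer subnetworks is given below (the hidden width is illustrative); the two half-updates mirror the u, v, o structure of the expressions in Table 1.

```python
# Minimal sketch of one affine coupling block with two-hidden-layer subnetworks (width is
# illustrative). log_det accumulates the log-Jacobian needed for the normalizing-flow likelihood.
import torch
import torch.nn as nn

def subnet(d_in, d_out, width=64):
    return nn.Sequential(nn.Linear(d_in, width), nn.LeakyReLU(),
                         nn.Linear(width, width), nn.LeakyReLU(),
                         nn.Linear(width, d_out))

class AffineCouplingBlock(nn.Module):
    def __init__(self, dim=2):
        super().__init__()
        self.d1, self.d2 = dim // 2, dim - dim // 2
        self.s1, self.t1 = subnet(self.d2, self.d1), subnet(self.d2, self.d1)
        self.s2, self.t2 = subnet(self.d1, self.d2), subnet(self.d1, self.d2)

    def forward(self, x):
        u1, u2 = torch.split(x, [self.d1, self.d2], dim=-1)
        s1 = self.s1(u2)
        v1 = u1 * torch.exp(s1) + self.t1(u2)       # update first half given the second
        s2 = self.s2(v1)
        o2 = u2 * torch.exp(s2) + self.t2(v1)       # update second half given the new first
        log_det = s1.sum(-1) + s2.sum(-1)
        return torch.cat([v1, o2], dim=-1), log_det
```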

Inverse Kinematics. In Section 4.2, we train all models using a batch size of 100. For all models, we adopt 6 reversible blocks with 3 fully connected layers per subnetwork.

Geoacoustic Inversion. In Section 4.3, we train all models using a batch size of 200. For all models, we adopt 5 invertible blocks with 4 fully connected layers for each subnetwork.

![22_image_0.png](22_image_0.png)

Figure 10: Comparison between the normal L0.5 regularization and its smoothed version given by Eq. (B.1). For ease of visualization, we set the threshold to a = 0.1; however, in our experiments, we adopt a threshold of a = 0.05.

Smoothed L0.5 regularization. As discussed in Section 2, we use a smoothed L0.5 regularization (Wu et al., 2014; Fan et al., 2014; Kim et al., 2020) during training. Figure 10 compares the L0.5 regularization and its smoothed version. The original (or normal) L0.5 regularization creates a gradient singularity as weights approach zero, which can make training more challenging for gradient-based techniques. The smoothed regularization resolves this issue by applying a piecewise function that smooths out the penalty at small magnitudes, i.e.

$$L_{0.5}(w)=\begin{cases}|w|^{1/2}&|w|\geq a\\ \left(-{\frac{w^{4}}{8a^{3}}}+{\frac{3w^{2}}{4a}}+{\frac{3a}{8}}\right)^{1/2}&|w|<a\end{cases}\tag{B.1}$$
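A direct transcription of Eq. (B.1) into a penalty on a weight tensor reads as follows (a minimal sketch; the threshold a = 0.05 and the coefficient 5 × 10−3 follow the values reported above):

```python
# Sketch of the smoothed L0.5 penalty of Eq. (B.1), applied elementwise and summed.
import torch

def smoothed_l05(w, a=0.05):
    absw = w.abs()
    poly = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    smooth = poly.clamp_min(0.0).sqrt()   # clamp guards the sqrt outside |w| < a
    plain = (absw + 1e-12).sqrt()         # small eps avoids an infinite gradient at w = 0
    return torch.where(absw >= a, plain, smooth).sum()

# e.g. loss = nll + 5e-3 * sum(smoothed_l05(p) for p in model.parameters())
```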

## C Quantitative Evaluation - Inverse Kinematics

Here, we quantitatively evaluate the quality of the posteriors estimated by the different models considered in the inverse kinematics and geoacoustic inversion experiments. To ensure a fair comparison among all methods, we use the same training data, train all models for the same number of epochs, and use identical batches and architectures (as provided in the previous section).

## C.1 Inverse Kinematics

As suggested by Kruse et al. (2021), we evaluate the correctness of the estimated posteriors using two metrics.

First, we use the Maximum Mean Discrepancy (MMD) introduced by Gretton et al. (2012), which computes the *posterior mismatch* between the distribution p̂(x | y∗) produced by a model and a ground truth estimate p_gt(x | y∗), which in this case is obtained via rejection sampling (see Section 4.2), i.e.

$$\mathrm{Err}_{\mathrm{post}}=\mathrm{MMD}\left({\hat{p}}(\mathbf{x}\,|\,\mathbf{y}^{*}),p_{\mathrm{gt}}(\mathbf{x}\,|\,\mathbf{y}^{*})\right)\tag{C.1}$$
Second, we measure the *re-simulation error*, which applies the true forward process f in Eq. (20) to the generated samples x and computes the mean squared distance to the target y∗, i.e.

$$\mathrm{Err}_{\mathrm{resim}}=\mathbb{E}_{\mathbf{x}\sim{\hat{p}}(\mathbf{x}\,|\,\mathbf{y}^{*})}\left[||f(\mathbf{x})-\mathbf{y}^{*}||_{2}^{2}\right]\tag{C.2}$$
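Both diagnostics can be sketched as follows; Eq. (C.1) does not fix the MMD kernel, so the Gaussian kernel and bandwidth below are illustrative choices rather than the settings used for Table 2.

```python
# Sketch of the two diagnostics: a (biased, V-statistic) MMD^2 estimate between posterior samples
# and ground-truth samples for Eq. (C.1), and the re-simulation error of Eq. (C.2).
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """x: (n, d) model samples, y: (m, d) ground-truth samples."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth**2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def resim_error(x_samples, forward_f, y_star):
    """Mean squared distance between f(x) and the fixed target observation y*."""
    y_hat = np.stack([forward_f(x) for x in x_samples])
    return np.mean(np.sum((y_hat - np.asarray(y_star)) ** 2, axis=-1))
```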

| Method   | Errpost   | Errresim   |
|----------|-----------|------------|
| INN      | 0.0259    | 0.0163     |
| cINN     | 0.0162    | 0.0087     |
| ISR      | 0.0286    | 0.0196     |
| cISR     | 0.0221    | 0.0134     |

Table 2: Quantitative results for the inverse kinematics benchmark experiment.

## C.2 Geoacoustic Inversion

To evaluate the estimated posteriors of the unknown parameters (for both *Task 1* and *Task 2* from Section 4.3), we focus on the correctness of the Maximum a Posteriori (MAP) estimates m̂ of the unknown parameters by computing their root-mean-square distance from the ground-truth values m∗ over the test set observations y∗, i.e.

$${\bf Err}_{\rm MAP}=\sqrt{{\bf E}_{\bf y^{*}}[||{\bf\hat{m}}-{\bf m^{*}}||_{2}^{2}]}\tag{C.3}$$

| Method   | Task 1 (2-D) ErrMAP   | Task 2 (4-D) ErrMAP   |
|----------|-----------------------|-----------------------|
| INN      | 3.822                 | 3.981                 |
| cINN     | 3.503                 | 3.743                 |
| ISR      | 5.101                 | 5.574                 |
| cISR     | 4.597                 | 4.993                 |

Table 3: Quantitative results for the geoacoustic inversion experiment.

## D Illustrative Example

In this section, we consider an interesting toy inverse problem to illustrate the challenges and opportunities of the INN and ISR methods. We consider the following forward model

$$y=x^{2}+\epsilon\tag{D.1}$$

![24_image_0.png](24_image_0.png)
where the input x and output y are scalar quantities. Here, we also consider the additive noise ϵ ∼ N (0, 0.1),
and we assume a standard normal prior on the input x. A closed-form inverse solution exists for this toy inverse problem and, as shown in Figure 11, the posterior p(x | y) is bimodal.

Figure 11: True posterior p(x | y∗), conditioned on y∗ = 1, for the forward model described in Eq. (D.1).

Indeed, for y∗ = 1, two inverse solutions are possible, i.e. x = ±1, which explains the bimodal shape.
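To see this numerically, the unnormalized true posterior can be evaluated on a grid (a sketch that takes the 0.1 in N(0, 0.1) to be the noise variance):

```python
# Sketch: evaluate the unnormalized true posterior p(x | y* = 1) for the toy model of Eq. (D.1),
# assuming the 0.1 in N(0, 0.1) denotes the noise variance. The density has two symmetric modes
# near x = +/- 1, as in Figure 11.
import numpy as np

def log_unnorm_posterior(x, y_star=1.0, noise_var=0.1):
    log_prior = -0.5 * x**2                              # standard normal prior on x
    log_lik = -0.5 * (y_star - x**2) ** 2 / noise_var    # Gaussian likelihood from y = x^2 + eps
    return log_prior + log_lik

grid = np.linspace(-3.0, 3.0, 2001)
post = np.exp(log_unnorm_posterior(grid))
post /= post.sum() * (grid[1] - grid[0])                 # normalize on the grid
print(grid[np.argmax(post)])                             # one of the two symmetric modes (about 0.97 in magnitude)
```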
As discussed in Section 3.2, to use coupling-based invertible architectures with one-dimensional (or scalar)
input, we pad the input with an extra zero and utilize a loss term that prevents the encoding of information in the extra dimension. The expressive power of ISR (and cISR) depends on the library of arithmetic operators and mathematical functions being used, which is a design choice. Here, we adopt the library given in Appendix B. Indeed, one can use a different library when necessary or when domain knowledge is available. The estimated posteriors using INN, cINN, ISR, and cISR, are depicted in Figure 12. The invertible symbolic expressions generated by ISR and cISR are reported in Table 4. Interestingly, as illustrated in Figure 12, initial observations indicate that all methods provide a solution to the specified inverse problem. Additionally, it is noted that the INN, cINN, and cISR methods capture the bimodal shape of the distribution, whereas the ISR method identifies only one mode. Notably, the posterior predicted by cISR appears broader than those predicted by INN and cINN, which align closely with the true posterior. This phenomenon can be attributed to the inherent trade-off between accuracy and interpretability in SR methods. In this instance, the application of sparsity-promoting smoothed L0.5 regularization in training cISR yields a relatively simple and interpretable model, as demonstrated in Table 4.

However, this simplicity comes at the expense of accuracy, as evidenced by the broader posterior distribution observed with cISR.

Another interesting observation is that the solution recovered by ISR produces a posterior with only one mode. Examination of Table 4 reveals that ISR accurately recovered the analytical expression for the forward model, specifically y = x². However, given that ISR employs a fixed invertible architecture characterized by affine coupling blocks, the exact analytical expression for the inverse function may not always be attainable. The inverse map approximation that ISR generates is straightforward and interpretable, yet it does not precisely match the expected expression (i.e., x = ±√y). This discrepancy likely contributes to the unimodal shape observed in the predicted posterior distribution.

Indeed, the choice of library used for implementation can significantly influence the results. Additionally, the L0.5 regularization coefficient plays a critical role in determining the sparsity of the resulting symbolic solution. It is important to note that even with the same library and regularization settings, different potential symbolic expressions (or solutions to the inverse problem) may be derived from those shown, and these expressions could exhibit either unimodal or bimodal distributions based on the produced approximations.

![25_image_0.png](25_image_0.png)

Figure 12: For a fixed observation y∗ = 1, we compare the predicted posteriors p̂(x | y∗) of INN, cINN, and the proposed ISR and cISR methods.
Table 4: Invertible symbolic expressions recovered by our ISR and cISR methods for the toy example above.

| Method   |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | Expression   |
|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|
| ISR      | u = [0, x], i.e. u1 = 0, u2 = x s1(u2) = 0.0018u 2 2 + 0.012u2 + 0.107942 sin (0.51u2) − 0.039 t1(u2) = u 2 2 v1 = u1 · exp (s1(u2)) + t1(u2) v2 = u2 s2(v1) = 0.0024v 1 + 0.011v1 − 0.13 sin (1.69v1) − 0.17 2 t2(v1) = −0.0052v 1 − 0.0092v1 − 0.101 sin (0.92v1) + 0.039 sin (1.38v1) − 0.036 2 o1 = v1 o2 = v2 · exp (s2(v1)) + t2(v1) [y, z] = o, i.e. y = o1, z = o2                                                                                                                                             |              |
| cISR     | u = [0, x], i.e. u1 = 0, u2 = x s1(u2, y) = 5.11 t1(u2, y) = 0.65u2 − 0.013y − 0.01 (−u2 − 0.61y) 2 + 0.025 (−u2 + 0.35y) 2 + 0.0087 v1 = u1 · exp (s1(u2, y)) + t1(u2, y) v2 = u2 s2(v1, y) = 0.3v 1 + 0.00071 (−0.19v1 − y) 2 2 + 4.48 t2(v1, y) = −134.037v1 − 0.39y − 4.56 (−v1 − 0.37y) 2 − 3.79 (−v1 − 0.37y) 2 − 6.38 (−v1 − 0.37y) 2 +8.64 (v1 − 0.36y) 2 + 4.97 (v1 − 0.35y) 2 + 4.703 (v1 − 0.35y) 2 − 1.89 (v1 + 0.37y) 2 + 0.15 o1 = v1 o2 = v2 · exp (s2(v1, y)) + t2(v1, y) z = o, i.e. z1 = o1, z2 = o2 |              |