# Inexact Alternating Direction Method Of Multipliers With Efficient Local Termination Criterion For Cross-Silo Federated Learning

Anonymous authors Paper under double-blind review

## Abstract

Federated learning has attracted increasing attention in the machine learning community over the past five years. In this paper, we propose a new cross-silo federated learning algorithm with a fast convergence guarantee for training machine learning models with nonsmooth regularizers. To solve this type of problem, we design an inexact federated alternating direction method of multipliers (ADMM). This method enables each agent to solve a strongly convex local problem. We introduce a new local termination criterion that can be quickly satisfied when using efficient solvers such as stochastic variance reduced gradient (SVRG). We prove that our method has faster convergence than existing methods. Moreover, we show that our proposed method has sequential convergence guarantees under the Kurdyka-Łojasiewicz (KL) assumption. We conduct experiments using both synthetic and real datasets to demonstrate the superiority of our new method over existing algorithms.

## 1 Introduction

Federated learning (FL) is an emerging research paradigm in which multiple agents collaborate to solve a machine learning problem. Cross-silo FL is an important subclass where the participating agents are pre-defined silos, such as organizations or institutions (e.g., hospitals and banks) (Kairouz et al., 2021a).

Typically, there are around 2-100 agents in this setting. Cross-silo federated learning finds significant applications in many domains such as medicine and healthcare, finance, and manufacturing (Nandury et al., 2021; Huang et al., 2022; Yang et al., 2019). In a cross-silo federated learning (FL) task, each agent possesses a specific portion of the data, which it uses to train a machine learning model locally. Once the local training is completed, all agents send their outputs to a central server. The server then aggregates these outputs and sends an update back to the participating agents. Most FL works focus on the following federated composite optimization problem (Kairouz et al., 2021b; McMahan et al., 2017b; Pathak & Wainwright, 2020):

$$\min_{x\in\mathbb{R}^{n}}\ \sum_{i=1}^{p}f_{i}(x)+g(x),\tag{1}$$
where each $f_i:\mathbb{R}^n\to\mathbb{R}$ is $L_i$-smooth (possibly nonconvex) and $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ is a proper closed convex regularizer. In machine learning applications, $f_i$ is the loss function on agent $i$'s local data set, and $g$ can be the $\ell_1$-regularizer, the grouped $\ell_1$-regularizer, the nuclear-norm regularizer (for matrix variables) (Candès & Recht, 2009; Bao et al., 2022), the indicator function of a convex constraint (Yuan et al., 2021; Bao et al., 2022), etc. Problem (1) is called federated composite optimization in Yuan et al. (2021), where the federated dual averaging method (FedDualAvg) was proposed as an early attempt to deal with a nonsmooth $g$. Bao et al. (2022) proposed a fast federated dual averaging method for problem (1) with a strongly convex $f$. Although FedAvg, FedProx, FedDualAvg, and their variants offer intuitive ways to distribute tasks and aggregate local outputs, they face limitations in both theory and practice.

Table 1: Comparison in the inner updates of federated splitting methods. SC = strongly convex, NC = nonconvex, NS = nonsmooth.

| Method | $f_i$ | $g$ | Local Termination Criterion | Assumptions on $\epsilon_i^t$ | Local Solver | Local Complexity |
|---|---|---|---|---|---|---|
| FedSplit (Pathak & Wainwright, 2020) | SC | $0$ | $\Vert x_i^{t+1}-\mathrm{Prox}_{f_i}(\tilde{x}_i^t)\Vert\le\epsilon_i^t$ | $\epsilon_i^t\le O(\epsilon)$ | GD | $\log(\epsilon^{-1})$ |
| FedPD (Zhang et al., 2021) | NC | $0$ | $\mathbb{E}\Vert\nabla L_i(x_i^{t+1})\Vert^2\le\epsilon_i^t$ | $\epsilon_i^t\le O(\epsilon)$ | GD (SGD) | $\log(\epsilon^{-1})$ ($\epsilon^{-1}$) |
| FedDR (Tran-Dinh et al., 2021) | NC | NS | $\Vert x_i^{t+1}-\mathrm{Prox}_{f_i}(\tilde{x}_i^t)\Vert\le\epsilon_i^t$ | $\frac{1}{p}\sum_{i=1}^{p}\sum_{t=0}^{T}\epsilon_i^t\le O(1)$ | - | - |
| FedDR (alternative criterion) | NC | NS | $\Vert x_i^{t+1}-\mathrm{Prox}_{f_i}(\tilde{x}_i^t)\Vert\le r\Vert x_i^{t+1}-x_i^t\Vert$ | None | - | - |
| FedADMM1 (Gong et al., 2022) | NC | $0$ | $\Vert\nabla L_i(x_i^{t+1})\Vert^2\le\epsilon_i^t$ | $\epsilon_i^t\le O(\epsilon)$ | - | - |
| FedADMM2 (Zhou & Li, 2022) | NC | $0$ | $\Vert\nabla L_i(x_i^{t+1})\Vert^2\le\epsilon_i^t$ | $\epsilon_i^{t+1}\le\nu_i\epsilon_i^t$, $\nu_i\in[1/2,1)$ | - | $\log[(\epsilon_i^{t+1})^{-1}]$ |
| FedADMM3 (Wang et al., 2022) | NC | NS | $\Vert x_i^{t+1}-\mathrm{Prox}_{f_i}(\tilde{x}_i^t)\Vert\le\epsilon_i^t$ | $\frac{1}{p}\sum_{i=1}^{p}\sum_{t=0}^{T}\epsilon_i^t\le O(1)$ | - | - |
| FIAELT (ours) | NC | NS | $\mathbb{E}_t\Vert x_i^{t+1}-\mathrm{Prox}_{f_i}(\tilde{x}_i^t)\Vert^2\le r_i\Vert x_i^t-\mathrm{Prox}_{f_i}(\tilde{x}_i^t)\Vert^2$ | None | SVRG | $O(1)$ |

Table 2: Comparison in the server updates of the federated splitting methods in Table 1. SC = strongly convex, NC = nonconvex, NS = nonsmooth. $\epsilon$ is the same as in Table 1.

| Method | $f_i$ | $g$ | Convergence (gradient) | Convergence (sequence) |
|---|---|---|---|---|
| FedSplit | SC | $0$ | - | Linear |
| FedPD | NC | $0$ | $O(T^{-1})+\epsilon$ | - |
| FedDR | NC | NS | $O(T^{-1})$ | - |
| FedADMM1 | NC | $0$ | $O(T^{-1})+\epsilon$ | - |
| FedADMM2 | NC | $0$ | $O(T^{-1})$ | - |
| FedADMM3 | NC | NS | $O(T^{-1})$ | - |
| FIAELT (ours) | NC | NS | $O(T^{-1})$ | **Linear** when the KL exponent $\alpha\in(0,\tfrac{1}{2}]$ |

For instance, McMahan et al. (2017a) demonstrated that FedAvg can diverge in certain scenarios. Even when FedAvg converges, as shown in Pathak & Wainwright (2020), the resulting fixed points may not necessarily be stationary points of the original problem. Additionally, the analyses in Yuan et al. (2021); Li et al. (2020a); Reddi et al. (2021) often assume that the dissimilarity between agents is bounded, which may not hold in real-world applications. These shortcomings of existing methods motivate the exploration of federated splitting methods for solving (1). In general, the idea behind splitting methods in federated learning is to establish a connection between (1) and a constrained problem of the form:

$$\min_{X}\ \sum_{i=1}^{p}f_{i}(x_{i})+g(x_{1})\quad\text{s.t.}\ x_{1}=x_{2}=\cdots=x_{p},\tag{2}$$

where $X=(x_{1},x_{2},\ldots,x_{p})$.
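To spell out this connection (a short check added here for clarity): if $X=(x_1,\ldots,x_p)$ is feasible for (2), then $x_1=\cdots=x_p=:x$ and

$$\sum_{i=1}^{p}f_{i}(x_{i})+g(x_{1})=\sum_{i=1}^{p}f_{i}(x)+g(x),$$

so (2) has the same optimal value as (1), and solutions of (1) correspond exactly to the consensus points $(x,\ldots,x)$ of (2).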

Popular splitting methods in federated learning include FedSplit (Pathak & Wainwright, 2020), FedDR (Tran-Dinh et al., 2021), FedPD (Zhang et al., 2021), and ADMM-based federated learning methods (Gong et al., 2022; Zhou & Li, 2021; Zhang et al., 2021; Yue et al., 2021; Zhou & Li, 2022). FedDR considers a nonzero regularizer $g$, while FedSplit, FedPD, and FedADMM deal with the unregularized case $g=0$ and therefore cannot handle applications where regularizers are needed to induce sparse parameters (Zou & Hastie, 2005; Yuan et al., 2021) or low-rank matrices (Candès & Recht, 2009; Bao et al., 2022).

At each round $t$ of a federated splitting method, each agent needs to find $x_i^{t+1}$ approximating the proximal operator of $f_i$ at the current point $\tilde{x}_i^t$ (denoted $\mathrm{Prox}_{f_i}(\tilde{x}_i^t)$) via a number of local updates governed by a certain termination criterion. However, the number of local updates (defined as the local complexity) required by existing criteria is either unexplored or tends to infinity as the tolerance $\epsilon$ becomes infinitesimal with a growing number $T$ of server updates, as shown in Table 1. Therefore, a more advanced criterion that leads to a known, constant number of efficient local updates is much desired; this is an important goal of this work.

Moreover, existing federated splitting methods for nonconvex optimization with a nonsmooth regularizer $g$ focus only on the convergence rate of the gradient and ignore the convergence of the generated sequences to a desired critical point. Zhou & Li (2022) and Yue et al. (2021) prove that accumulation points are critical points in the unregularized case ($g=0$), but the convergence rate remains unknown. Obtaining a sequential convergence rate for a nonsmooth regularizer $g\neq 0$ is another important goal of this work.

## 1.1 Our Contributions

To fulfill the above two goals, we propose a novel splitting method called Federated Inexact ADMM with Efficient Local Termination (FIAELT) for the nonconvex nonsmooth composite optimization problem (1) in the context of cross-silo federated learning, based on the equivalence between (1) and an $np$-dimensional constrained problem (4). Compared with existing works on federated splitting methods, we summarize our contributions as follows.

- For the local update of our algorithm, we propose the new criterion $\mathbb{E}_t^i\|x_i^{t+1}-\mathrm{Prox}_{f_i}(\tilde{x}_i^t)\|^2\le r_i\|x_i^t-\mathrm{Prox}_{f_i}(\tilde{x}_i^t)\|^2$ (see Algorithm 1 for details), where the tolerance $r_i\in(0,1)$ does not need to shrink as the number $T$ of communication rounds grows. Hence, our local complexity can be $O(1)$, which improves upon existing splitting methods whose number of local updates is either unexplored or large (see Table 1 for a comparison). At the same time, we keep the state-of-the-art gradient convergence rate $O(1/T)$ in the server updates (see Table 2).

- Furthermore, we demonstrate that FIAELT has sequential convergence properties in the deterministic case. Specifically, we prove that any accumulation point of the sequence generated at the server of FIAELT is a stationary point of (1). Moreover, we prove that FIAELT achieves global convergence under Kurdyka-Łojasiewicz (KL) geometry, which covers a wide range of functions in practice. In particular, the server updates and the outputs of the local agents converge in finitely many communications when the KL exponent $\alpha$ of the potential function is $0$; these sequences converge linearly when $\alpha\in(0,\tfrac{1}{2}]$ and sublinearly when $\alpha\in(\tfrac{1}{2},1)$. In this analysis, our proposed new criterion plays a key role. To the best of our knowledge, FIAELT is the first federated learning method with a sequential convergence rate in nonconvex nonsmooth settings.

- Finally, we conduct experiments on training fully-connected neural networks, comparing our method against existing splitting methods as well as other state-of-the-art federated methods. The experimental results reveal that our method consistently outperforms the other approaches in terms of training loss, training accuracy, and test accuracy, indicating the effectiveness of our proposed method for this task.

## 1.2 Related Work

The literature on federated learning is rich. In this work, we focus only on splitting methods in federated learning. A comparison between our method and existing splitting methods is summarized in Table 1.

In Pathak & Wainwright (2020), FedSplit was proposed; it implements the Peaceman-Rachford splitting method for (2). Pathak & Wainwright (2020) analyzed the proposed method in the case where $g=0$ and $\sum_i f_i$ is strongly convex. They showed that when the error between the local output and $\mathrm{Prox}_{f_i}$ is below a threshold $\epsilon$, the sequence generated at the server by FedSplit converges linearly to an inexact solution of (1), up to an error determined by $\epsilon$. They also applied FedSplit to a strongly convex majorization of the original problem and, in this setting, showed a complexity of $\tilde{O}(\sqrt{\epsilon})$ to obtain an $\epsilon$-optimal function value. However, in general convex settings, the analysis assumes that FedSplit computes $\mathrm{Prox}_{f_i}$ exactly at each agent, which is unrealistic when the local problems are large-scale.

When $g=0$, there are several works on federated ADMM (Zhang et al., 2021; Gong et al., 2022; Zhou & Li, 2022; Elgabli et al., 2022). Gong et al. (2022) proposed FedADMM, which randomly selects agents to participate in each round. The $i$th agent terminates its local iterations when the norm of the local gradient at the current iterate is below a threshold $\epsilon_i$. When there is an upper bound $\epsilon$ for $\{\epsilon_i\}$, they showed that FedADMM has a complexity of $O(\epsilon^{-1})+O(\epsilon)$ to reach an $\epsilon$-surrogate stationary point. When the $f_i$'s are twice differentiable, ADMM is applied in designing a second-order FL method in Elgabli et al. (2022). Zhou & Li (2022) proposed an inexact ADMM for federated learning problems; at round $t$, the $i$th agent terminates the local updates when the norm of the local gradient is below a threshold $\epsilon_i^t$. They assume $\{\epsilon_i^t\}_t$ decreases exponentially, i.e., $\epsilon_i^{t+1}\le\nu_i\epsilon_i^t$ with $\nu_i\in[\tfrac{1}{2},1)$, and showed that the generated sequence accumulates at stationary points.

By further assuming that the accumulation point of the generated sequence is isolated, they showed that the generated sequence converges globally. In contrast, we do not assume the accumulation point of the generated sequence to be isolated when we analyze the sequential convergence of our method.

When $g\neq 0$, Tran-Dinh et al. (2021) proposed FedDR, which applies the Douglas-Rachford (DR) splitting algorithm to (2). They combined the DR method with randomized block-coordinate strategies and an asynchronous implementation, and estimated the complexity of FedDR under different termination criteria for the local updates. The termination criteria in Tran-Dinh et al. (2021) test whether the distance between the prox of $f_i$ and its approximation is bounded by a certain value; however, this distance cannot be checked in practice, especially when stochastic gradient methods are used for the local updates. Yue et al. (2021) also considered the case $g\neq 0$, specifically when $g$ is a Bregman distance. Assuming the Hessians of the $f_i$'s in (1) are Lipschitz continuous, Yue et al. (2021) showed that any accumulation point of the generated sequence is a stationary point, and that the proposed method has a complexity of $O(\epsilon^{-1})$ to reach an $\epsilon$-stationary point.

## 2 Preliminaries

In this paper, we denote by $\mathbb{R}^n$ the $n$-dimensional Euclidean space with inner product $\langle\cdot,\cdot\rangle$ and Euclidean norm $\|\cdot\|$. We denote the set of all positive numbers by $\mathbb{R}_{++}$ and the distance from a point $a$ to a set $A$ by $d(a,A)$. For a random variable $\xi$ defined on a probability space $(\Xi,\Sigma,P)$, we denote its expectation by $\mathbb{E}\xi$. Given an event $A$, the conditional expectation of $\xi$ is denoted by $\mathbb{E}(\xi|A)$.

An extended-real-valued function $f:\mathbb{R}^n\to[-\infty,\infty]$ is said to be proper if $\mathrm{dom}f=\{x\in\mathbb{R}^n: f(x)<\infty\}$ is not empty and $f$ never equals $-\infty$. We say a proper function $f$ is closed if it is lower semicontinuous. We define the indicator function of a closed set $A$ as $\delta_A(x)$, which is zero when $x\in A$ and $\infty$ otherwise.

We define the regular subdifferential of a proper function $f:\mathbb{R}^n\to[-\infty,\infty]$ at $x\in\mathrm{dom}f$ as

$$\hat{\partial}f(x):=\Big\{\xi\in\mathbb{R}^n:\ \liminf_{z\to x,\,z\neq x}\frac{f(z)-f(x)-\langle\xi,z-x\rangle}{\|z-x\|}\ge 0\Big\}.$$

The (limiting) subdifferential of $f$ at $x\in\mathrm{dom}f$ is defined as

$$\partial f(x):=\Big\{\xi\in\mathbb{R}^n:\ \exists\,x^k\xrightarrow{f}x,\ \xi^k\to\xi\ \text{with}\ \xi^k\in\hat{\partial}f(x^k)\ \forall k\Big\},$$

where $x^k\xrightarrow{f}x$ means both $x^k\to x$ and $f(x^k)\to f(x)$. For $x\notin\mathrm{dom}f$, we define $\hat{\partial}f(x)=\partial f(x)=\emptyset$, and we denote $\mathrm{dom}\,\partial f:=\{x:\partial f(x)\neq\emptyset\}$. For a differentiable function $h:\mathbb{R}^m\times\mathbb{R}^n\to\mathbb{R}^l$, we denote by $\nabla_x h(x,y)$ and $\nabla_y h(x,y)$ the partial derivatives with respect to $x$ and $y$, respectively. We define the normal cone of a set $A$ at $x$ as $N_A(x):=\partial\delta_A(x)$. For a proper function $f:\mathbb{R}^n\to[-\infty,\infty]$, we denote the proximal operator of $f$ as $\mathrm{Prox}_{\alpha f}(x)=\operatorname*{Arg\,min}_{z\in\mathbb{R}^n}\big\{f(z)+\frac{1}{2\alpha}\|z-x\|^2\big\}$.

Consider a problem $\min f+g$, where $f$ is a smooth function and $g$ is proper closed convex. We say $x$ is a stationary point of this problem when $0\in\nabla f(x)+\partial g(x)$, and $x$ is an $\varepsilon$-stationary point if $d^2(0,\nabla f(x)+\partial g(x))\le\varepsilon$.
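To make these definitions concrete, the following sketch (our illustration, not code from the paper) evaluates $\mathrm{Prox}_{\alpha g}$ for $g=\lambda\|\cdot\|_1$, which has the closed-form soft-thresholding solution, and computes the $\varepsilon$-stationarity measure $d^2(0,\nabla f(x)+\partial g(x))$ for this choice of $g$; the function names are ours.

```python
import numpy as np

def prox_l1(x, alpha, lam):
    """Prox_{alpha*g} with g = lam*||.||_1, i.e. argmin_z lam*||z||_1 + ||z - x||^2 / (2*alpha).
    The minimizer is given coordinate-wise by soft-thresholding at level alpha*lam."""
    return np.sign(x) * np.maximum(np.abs(x) - alpha * lam, 0.0)

def stationarity_residual_l1(x, grad_f, lam):
    """d^2(0, grad_f(x) + subdiff(lam*||.||_1)(x)) for the problem min f + lam*||.||_1.
    The nearest subgradient is computed coordinate-wise in closed form."""
    g = grad_f(x)
    res = np.empty_like(x)
    nonzero = x != 0
    res[nonzero] = g[nonzero] + lam * np.sign(x[nonzero])   # subgradient is lam*sign(x_i)
    zero = ~nonzero
    res[zero] = g[zero] - np.clip(g[zero], -lam, lam)        # project -g_i onto [-lam, lam]
    return float(np.sum(res ** 2))
```

For example, `prox_l1(np.array([3.0, -0.2]), 1.0, 0.5)` returns `[2.5, 0.]`.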

We next introduce the KL property used in analyzing the sequential convergence. Let $\Psi_a$ be the set of concave functions $\psi:[0,a)\to[0,\infty)$ satisfying $\psi(0)=0$ that are continuously differentiable on $(0,a)$ with $\psi'>0$ on $(0,a)$.

Definition 1 (**Kurdyka-Łojasiewicz property and exponent**). *A proper closed function $f:\mathbb{R}^n\to(-\infty,\infty]$ is said to satisfy the Kurdyka-Łojasiewicz (KL) property at an $\hat{x}\in\mathrm{dom}\,\partial f$ if there are $a\in(0,\infty]$, a neighborhood $V$ of $\hat{x}$, and a $\psi\in\Psi_a$ such that for any $x\in V$ with $f(\hat{x})<f(x)<f(\hat{x})+a$, it holds that $\psi'(f(x)-f(\hat{x}))\,\mathrm{dist}(0,\partial f(x))\ge 1$. If $f$ satisfies the KL property at $\hat{x}\in\mathrm{dom}\,\partial f$ and $\psi$ can be chosen as $\psi(\nu)=a_0\nu^{1-\alpha}$ for some $a_0>0$ and $\alpha\in[0,1)$, then we say that $f$ satisfies the KL property at $\hat{x}$ with exponent $\alpha$. A proper closed function $f$ satisfying the KL property with exponent $\alpha\in[0,1)$ at every point in $\mathrm{dom}\,\partial f$ is called a KL function with exponent $\alpha$.*

Functions satisfying the KL property include proper closed semi-algebraic functions and the quadratic loss plus possibly nonconvex piecewise-linear regularizers (Attouch et al., 2010; 2013; Li & Pong, 2018; Zeng et al., 2021).
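As a concrete illustration (added here; not an example from the paper), a strongly convex quadratic is a KL function with exponent $\tfrac{1}{2}$; this is the Polyak-Łojasiewicz inequality (see also Karimi et al., 2016). Take $f(x)=\tfrac{1}{2}x^{\top}Qx$ with $Q\succ 0$, so that $\hat{x}=0$ and $f(\hat{x})=0$. Then

$$\|\nabla f(x)\|^{2}=x^{\top}Q^{2}x\ \ge\ \lambda_{\min}(Q)\,x^{\top}Qx\ =\ 2\lambda_{\min}(Q)\,f(x),$$

so the KL inequality $\psi'(f(x)-f(\hat{x}))\,\mathrm{dist}(0,\partial f(x))\ge 1$ holds with $\psi(\nu)=\sqrt{2/\lambda_{\min}(Q)}\,\nu^{1/2}$, i.e., with exponent $\alpha=\tfrac{1}{2}$.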

## 3 Federated Inexact ADMM With Efficient Termination Criterion

We now relate problem (1) to (2). We view (2) as the following $np$-dimensional problem:

$$\min_{X\in\mathbb{R}^{np}}\ F(X)+G(X),\tag{3}$$

where $X=(x_{1},x_{2},\ldots,x_{p})$ with each $x_i\in\mathbb{R}^n$, $F(X):=\sum_{i=1}^{p}f_i(x_i)$ with the $f_i$'s from (1), and $G(X):=g(x_1)+\delta_{\mathcal{C}}(X)$ with $\mathcal{C}:=\{X: x_1=\cdots=x_p\}$ and $g$ from (1).

The following proposition establishes the relation between (3) and (1).

Proposition 1. *If $X^*=(x_1^*,\ldots,x_p^*)$ is a stationary point of (3), then $x_1^*$ is a stationary point of (1). Furthermore, if $X=(x_1,\ldots,x_p)$ is an $\varepsilon$-stationary point of (3), then $x_1$ is a $p\varepsilon$-stationary point of (1).*

Based on this relation, we consider ADMM to solve (3). Rewrite (3) as the following equivalent problem:

$$\min_{X,Y\in\mathbb{R}^{np}}\ F(X)+G(Y)\quad\text{s.t.}\ X=Y.\tag{4}$$

The augmented Lagrangian function of (4) is defined as

$$L_{\beta}(X,Y,Z):=F(X)+G(Y)+\langle X-Y,Z\rangle+\frac{\beta}{2}\|X-Y\|^{2}.\tag{5}$$

Given a starting point $(X^0,Y^0,Z^0)\in\mathbb{R}^{np}\times\mathbb{R}^{np}\times\mathbb{R}^{np}$ and $\tau,\beta>0$, the ADMM for (3) is as follows:

$$\begin{cases}X^{t+1}=\arg\min_{X}L_{\beta}(X,Y^{t},Z^{t}),\\ Z^{t+1}=Z^{t}+\tau\beta(X^{t+1}-Y^{t}),\\ Y^{t+1}=\arg\min_{Y}L_{\beta}(X^{t+1},Y,Z^{t+1}).\end{cases}\tag{6}$$
Now we give an equivalent form of the third equation in (6) as follows.

Proposition 2. *Consider (3). Let $\{(X^{t+1},Y^{t+1},Z^{t+1})\}$ be generated by (6) and suppose $\beta>\max_i L_i$. Then the solution of the problem in the third equation of (6) is $(y_1,\ldots,y_1)$ with $y_1=\mathrm{Prox}_{\frac{1}{\beta p}g}\big(\frac{1}{p}\sum_{i=1}^{p}(x_i^{t+1}+\frac{1}{\beta}z_i^{t+1})\big)$.*
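As an illustration of Proposition 2 (a sketch under our own naming, not the authors' code), the server-side $Y$-update reduces to a single proximal step on averaged local quantities; here `prox_g` stands for any implementation of $v\mapsto\mathrm{Prox}_{\alpha g}(v)$, e.g. soft-thresholding when $g=\lambda\|\cdot\|_1$.

```python
import numpy as np

def server_y_update(x_list, z_list, beta, prox_g):
    """Compute y1 = Prox_{(1/(beta*p)) g}( (1/p) * sum_i (x_i + z_i / beta) ),
    as given by Proposition 2; the Y-block of (6) is then (y1, ..., y1)."""
    p = len(x_list)
    avg = sum(x + z / beta for x, z in zip(x_list, z_list)) / p
    return prox_g(avg, 1.0 / (beta * p))

# example choice: g = lam*||.||_1, whose prox is soft-thresholding
lam = 0.01
prox_g = lambda v, alpha: np.sign(v) * np.maximum(np.abs(v) - alpha * lam, 0.0)
```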

On the other hand, since $F(X)$ in (3) is separable, we can write $L_\beta(X,Y,Z)$ in (5) as $L_\beta(X,Y,Z)=\sum_{i=1}^{p}L_{\beta,i}(x_i,y_i,z_i)$, where

$$L_{\beta,i}(x_{i},y_{i},z_{i}):=f_{i}(x_{i})+\langle x_{i}-y_{i},z_{i}\rangle+\frac{\beta}{2}\|x_{i}-y_{i}\|^{2}.$$

Therefore, the first equality in (6) can be rewritten as $x_i^{t+1}=x_{i,*}^{t+1}$, where

$$x_{i,*}^{t+1}:=\operatorname*{arg\,min}_{x_{i}}\ L_{\beta,i}(x_{i},y^{t},z_{i}^{t}),\quad i=1,\ldots,p.\tag{7}$$

In practice, (7) cannot be solved exactly, as $f_i$ is usually a nonconvex loss function involving a large amount of training data. Hence, existing federated splitting methods solve (7) inexactly up to a certain local criterion. However, the computational complexities of the local updates required by these criteria are either unexplored or very large (see Table 1). To address this limitation, we propose the following criterion:

$$\mathbb{E}_{t}^{i}\|x_{i}^{t+1}-x_{i,*}^{t+1}\|^{2}\leq r_{i}\|x_{i}^{t}-x_{i,*}^{t+1}\|^{2},\tag{9}$$

where $\mathbb{E}_t^i$ denotes the conditional expectation given the past trajectory $\{(x_i^s,y^s,z_i^s): s=0,1,\ldots,t\}$, and the tolerance $r_i\in(0,1)$ does not need to be arbitrarily small to ensure $O(1)$ local complexity, even with stochastic gradients, as will be shown in the convergence analysis.
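For reference, the per-agent objective $L_{\beta,i}$ that the local solver minimizes, and its gradient, look as follows in a minimal sketch (ours); `f_i` and `grad_f_i` are placeholders for agent $i$'s loss and its gradient.

```python
import numpy as np

def local_objective(x_i, y_i, z_i, beta, f_i):
    """L_{beta,i}(x_i, y_i, z_i) = f_i(x_i) + <x_i - y_i, z_i> + (beta/2) * ||x_i - y_i||^2."""
    return f_i(x_i) + np.dot(x_i - y_i, z_i) + 0.5 * beta * np.sum((x_i - y_i) ** 2)

def local_gradient(x_i, y_i, z_i, beta, grad_f_i):
    """Gradient of L_{beta,i} with respect to x_i; strongly convex whenever beta > L_i."""
    return grad_f_i(x_i) + z_i + beta * (x_i - y_i)
```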

Algorithm 1 Federated Inexact ADMM with Efficient Local Termination (FIAELT) for (1)

1: **Input:** $\beta,\tau>0$, $r_i>0$, $m_i\in\mathbb{N}_+$, $\eta_i>0$, and $(x_i^0,y_i^0,z_i^0)$ for agents $i=1,\ldots,p$; set $\bar{x}^0=\frac{1}{p}\sum_i x_i^0$ and $\bar{z}^0=\frac{1}{p}\sum_i z_i^0$.
2: **for** iteration $t=0,1,\ldots,T-1$ **do**
3: **for** agent $i=1,\ldots,p$ in parallel **do**
4: Find $x_i^{t+1}$ that approximately solves
$$x_{i}^{t+1}\approx\operatorname*{arg\,min}_{x_{i}}\ L_{\beta,i}(x_{i},y_{i}^{t},z_{i}^{t})=:x_{i,\star}^{t+1}\tag{8}$$
such that criterion (9) is satisfied. Upload $\Delta x_{i,t+1}=x_i^{t+1}-x_i^t$ and $\Delta z_{i,t+1}=\tau\beta(x_i^{t+1}-y_i^t)$ to the server.
5: **end for**
6: The server calculates $\bar{x}^{t+1}=\bar{x}^t+\frac{1}{p}\sum_i\Delta x_{i,t+1}$, $\bar{z}^{t+1}=\bar{z}^t+\frac{1}{p}\sum_{i=1}^{p}\Delta z_{i,t+1}$, and $y^{t+1}=\mathrm{Prox}_{\frac{1}{\beta p}g}\big(\bar{x}^{t+1}+\frac{1}{\beta}\bar{z}^{t+1}\big)$, and broadcasts these variables to each agent.
7: **end for**
We propose Algorithm 1, which implements the ADMM rule (6) in a federated way, where $x_i^{t+1}$ inexactly solves (7) with stochastic gradient methods.
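The following sketch mirrors one communication round of Algorithm 1 in plain Python. It is our illustration rather than the authors' implementation: `local_solve` stands for any inexact solver of (8) that satisfies (9) (e.g., the SVRG sketch given after Remark 2), and `prox_g` for $\mathrm{Prox}_{\frac{1}{\beta p}g}$.

```python
def fiaelt_round(agents, x, z, y, x_bar, z_bar, beta, tau, local_solve, prox_g):
    """One communication round of Algorithm 1. `x` and `z` are dicts of per-agent iterates,
    `y` is the broadcast variable, `x_bar` and `z_bar` are the running server averages."""
    p = len(agents)
    dx_sum, dz_sum = 0.0, 0.0
    for i in agents:                                       # run in parallel on the agents in practice
        x_new = local_solve(i, x[i], y, z[i], beta)        # step 4: approximate argmin of L_{beta,i}
        dx = x_new - x[i]                                  # only the increments are uploaded
        dz = tau * beta * (x_new - y)
        x[i], z[i] = x_new, z[i] + dz                      # dual update (12) kept locally as well
        dx_sum, dz_sum = dx_sum + dx, dz_sum + dz
    x_bar = x_bar + dx_sum / p                             # step 6: server aggregation
    z_bar = z_bar + dz_sum / p
    y = prox_g(x_bar + z_bar / beta, 1.0 / (beta * p))     # new y broadcast to every agent
    return x, z, y, x_bar, z_bar
```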

When $\beta>L:=\max_i L_i$, the local problem (8) amounts to minimizing a strongly convex smooth function with a Lipschitz continuous gradient. Hence, using the stochastic method SVRG of Johnson & Zhang (2013), we obtain an $x_i^{t+1}$ that satisfies the following property.

Proposition 3. *Consider (1). Set $\beta>L:=\max_i L_i$ and let $\{(x_i^t,y_i^t,z_i^t)\}$ be generated by Algorithm 1. Suppose (8) is solved with SVRG (Johnson & Zhang, 2013) with Option II, update frequency $m_i$, learning rate $\eta_i$, and initialization $x_i^t$, where $m_i$ and $\eta_i$ satisfy*

$$\frac{1}{\eta_{i}(\beta-L_{i})(1-2\eta_{i}(\beta+L_{i}))m_{i}}+\frac{2\eta_{i}(\beta+L_{i})}{1-2\eta_{i}(\beta+L_{i})}=:\rho_{i}<1.\tag{10}$$

*Then criterion (9) is satisfied in at most $k_i^t=\log_{1/\rho_i}\frac{\beta+L_i}{r_i(\beta-L_i)}$ iterations of SVRG.*

Remark 1. The above proposition shows that fixing any ri ∈ (0, 1), SVRG outputs an inexact solution of the local subproblem (8) *within* O(1) steps, independent of the number of communication rounds T. In contrast, the number of local updates required by other existing federated splitting methods is either unexplored or increases to infinity with T.
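To make the constants in Proposition 3 concrete, the following small helper (ours, for illustration) evaluates $\rho_i$ from (10) and the iteration bound $k_i^t$ for given $\beta$, $L_i$, $m_i$, $\eta_i$, $r_i$.

```python
import math

def svrg_rate_and_bound(beta, L_i, m_i, eta_i, r_i):
    """Return (rho_i, k): the contraction factor of (10) and the bound
    k = ceil(log_{1/rho_i}((beta + L_i) / (r_i * (beta - L_i)))) from Proposition 3."""
    denom = 1.0 - 2.0 * eta_i * (beta + L_i)
    assert beta > L_i and denom > 0, "need beta > L_i and eta_i < 1/(2*(beta + L_i))"
    rho = 1.0 / (eta_i * (beta - L_i) * denom * m_i) + 2.0 * eta_i * (beta + L_i) / denom
    assert rho < 1, "increase m_i or adjust eta_i so that (10) holds"
    k = math.ceil(math.log((beta + L_i) / (r_i * (beta - L_i))) / math.log(1.0 / rho))
    return rho, k
```

With the choices of Corollary 2 and $L_i=L$ (e.g. `svrg_rate_and_bound(5.0, 1.0, 200, 1/40, 0.005)` for $L=1$), this gives $\rho_i=0.5$ and $k=9$, consistent with the 10 SVRG iterations quoted there.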

Remark 2. *When (9) is required to hold deterministically, the subproblem is still the minimization of a strongly convex function. By well-known results, minimizing a strongly convex smooth function with plain gradient descent produces a linearly convergent sequence of iterates. Following the same analysis as in the proof of Proposition 3, the local complexity is again of order $O(1)$.*
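A minimal sketch (ours) of the SVRG local solver referenced in Proposition 3, written for a finite-sum local loss $f_i=\frac{1}{N}\sum_{j}\ell_j$; the strongly convex quadratic part of $L_{\beta,i}$ is added analytically, and names such as `grad_samples` are our own.

```python
import numpy as np

def svrg_local_solve(x_init, y, z, beta, grad_samples, m, eta, num_outer, rng=None):
    """Inexactly solve the local subproblem (8) with SVRG (Johnson & Zhang, 2013), Option II:
    each outer stage computes a full gradient at the snapshot, runs m variance-reduced inner
    steps, and draws the next snapshot uniformly from the inner iterates."""
    rng = rng or np.random.default_rng(0)
    N = len(grad_samples)                      # per-sample gradients of f_i

    def penalty_grad(x):                       # gradient of <x - y, z> + (beta/2)||x - y||^2
        return z + beta * (x - y)

    def full_grad(x):
        return sum(g(x) for g in grad_samples) / N + penalty_grad(x)

    x_tilde = np.asarray(x_init, dtype=float).copy()
    for _ in range(num_outer):                 # Proposition 3: O(1) outer stages suffice
        mu = full_grad(x_tilde)                # full gradient at the snapshot
        w = x_tilde.copy()
        inner = []
        for _ in range(m):
            j = rng.integers(N)
            g_w = grad_samples[j](w) + penalty_grad(w)
            g_s = grad_samples[j](x_tilde) + penalty_grad(x_tilde)
            w = w - eta * (g_w - g_s + mu)     # unbiased variance-reduced step
            inner.append(w.copy())
        x_tilde = inner[rng.integers(m)]       # Option II: random inner iterate
    return x_tilde
```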

## 4 Convergence Analysis Of Algorithm 1

We analyze the convergence properties of the variables $X^t:=[x_1^t;\ldots;x_p^t]$, $Y^t:=[y_1^t;\ldots;y_p^t]$, $Z^t:=[z_1^t;\ldots;z_p^t]$ generated by Algorithm 1. We also denote $L:=\max_i L_i$, $r:=\max_i r_i$, $X_*^{t+1}:=[x_{1,*}^{t+1};\ldots;x_{p,*}^{t+1}]$, and $W:=\inf_X F(X)+\inf_Y G(Y)>-\infty$ throughout the paper. First, the update rules of Algorithm 1 can be rewritten in terms of the combined vectors $X^t,Y^t,Z^t$ as follows.

We first show the following property.

Proposition 4. *The update rules in Algorithm 1 satisfy*

$$\mathbb{E}\|X^{t+1}-X_{\star}^{t+1}\|^{2}\leq r\|X^{t}-X_{\star}^{t+1}\|^{2},\tag{11}$$
$$Z^{t+1}=Z^{t}+\tau\beta(X^{t+1}-Y^{t}),\tag{12}$$
$$Y^{t+1}=\arg\min_{Y}L_{\beta}(X^{t+1},Y,Z^{t+1}).\tag{13}$$

With Proposition 4, we can study $\{(X^t,Y^t,Z^t)\}$ to establish the convergence properties of Algorithm 1. For $\{(X^t,Y^t,Z^t)\}$, we have the following proposition, which is central to our main convergence results.

Proposition 5. *Select hyperparameters $\beta\ge 5L$, $r_i\in(0,0.01]$, $\tau\in[1/2,1)$. Denote $\Gamma:=\frac{1-\tau}{\tau}$, $\Theta:=2\beta^2+4L^2$, $\Lambda:=4L^2$, $\Upsilon:=\frac{\Theta}{\tau\beta}\cdot\frac{4r}{1-2r}$, and $\delta:=\frac{1}{4}(\beta-L)-2\Upsilon$. Define*

$$H(X,Y,Z,X^{\prime},Z^{\prime}):=L_{\beta}(X,Y,Z)+\frac{\Gamma}{\tau\beta}\|Z-Z^{\prime}\|^{2}+\Upsilon\|X-X^{\prime}\|^{2}$$

*and $H_{t+1}:=\mathbb{E}H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t})$. Then for $t\ge 1$, it holds that $\delta\ge 0.1L$ and*

$$H_{t+1}\leq H_{t}-\delta\mathbb{E}\|X^{t+1}-X^{t}\|^{2}-\frac{\beta}{2}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}.\tag{14}$$

*Hence, the sequence $\{H_t\}$ converges to some $H_*\ge W$.*

Thanks to Proposition 5, we have the following property of the successive changes.

Corollary 1. *Consider (1) and let $(X^t,Y^t,Z^t)$ be defined as in Proposition 4. Suppose the assumptions in Proposition 5 hold. Then $\lim_t\mathbb{E}\|X^{t+1}-X^{t}\|^2=\lim_t\mathbb{E}\|Y^{t+1}-Y^{t}\|^2=\lim_t\mathbb{E}\|Z^{t+1}-Z^{t}\|^2=\lim_t\mathbb{E}\|Y^{t}-X^{t}\|^2=0$.*

Remark 3. *Corollary 1 together with Propositions 1 and 4 shows that the expected successive changes of $\{(x_1^t,\ldots,x_p^t,y^t,z_1^t,\ldots,z_p^t)\}$ generated by Algorithm 1 also converge to $0$.*

Based on Proposition 5, {(Xt, Y t, Zt)} has the following convergence property.

Theorem 1. *Select hyperparameters as in Proposition 5 and let $H_*$ be defined as in Proposition 5. Then there exist $\xi^{t+1}\in\partial G(Y^{t+1})$, $t=0,\ldots,T$, such that*

$$\sum_{t=0}^{T}\mathbb{E}\|\nabla F(Y^{t+1})+\xi^{t+1}\|^{2}\leq D\left(\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+\|X^{0}-Y^{0}\|^{2}\right)+D\left(L_{\beta}(X^{0},Y^{0},Z^{0})-W\right),\tag{15}$$

where

$$D:=\max\{3(L+\beta)^{2}\frac{2r}{1-2r},\left(\frac{L}{\tau\beta}+1\right)^{2},(L+\beta)^{2}\}\cdot\max\{D_{1},D_{2},D_{3}\}\tag{16}$$

with $D_1:=\frac{2\Gamma+\Theta\frac{8r}{1-2r}+2}{\min\{\delta,\frac{1}{2}\beta\}}$, $D_2:=(1+\Gamma)\frac{3(r+1)}{(L-\beta)^{2}}+\frac{4D_1}{(L-\beta)^{2}}\cdot\frac{L+\beta+1}{2}+2\tau\beta(\Gamma+1)+\Upsilon+\frac{(L-\beta)^{2}}{8}$, $D_3:=\max\{3,\,2D_1\tau\beta(\Gamma+1)\}$, and $\Gamma$, $\Upsilon$, $\Theta$ defined as in Proposition 5.
Combining Theorem 1 with Proposition 1 and Proposition 3, we immediately obtain the following convergence rate of Algorithm 1.

Corollary 2. *Select hyperparameters $\beta=5L$, $r_i=0.005$, $\tau=1/2$ in Algorithm 1. Then the following convergence rate holds:*

$$\frac{1}{1+T}\sum_{t=0}^{T}\mathbb{E}\,d^{2}\Big(0,\sum_{i}\nabla f_{i}(y^{t+1})+\partial g(y^{t+1})\Big)\leq\frac{pD}{1+T}\left(\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+\|X^{0}-Y^{0}\|^{2}\right)+\frac{pD}{1+T}\left(L_{\beta}(X^{0},Y^{0},Z^{0})-W\right),$$

where $D$ is defined in Theorem 1. Furthermore, criterion (9) can be satisfied by implementing 10 iterations of SVRG (Johnson & Zhang, 2013) with Option II, frequency $m_i=200$, learning rate $\eta_i=\frac{1}{40L}$, and initialization $x_i^t$ for (8).

Remark 4. *Corollary 2 indicates that, compared with existing federated methods, we keep the same state-of-the-art convergence rate $O(1/T)$, with $T$ being the number of communication rounds, while only $O(1)$ local update steps are required for the local problem (8).*

![7_image_0.png](7_image_0.png)

Figure 1: Results on Synthetic-{(0,0), (0.5, 0.5), (1,1)} dataset.

## 4.1 Sequential Convergence In The Deterministic Case

In this section, we further investigate the convergence of the sequence {(Xt, Y t, Zt)} generated by Algorithm 1 when (9) holds deterministically, i.e., holds without the expectation. We first show the properties of the set of accumulation points of {(Xt, Y t, Zt, Xt−1, Zt−1)}.

Proposition 6. *Consider (1) and let $\{(X^t,Y^t,Z^t)\}$ be generated by Algorithm 1 with (9) holding deterministically. Suppose the assumptions in Proposition 5 hold and $\{(X^t,Y^t,Z^t)\}$ is bounded. Then any accumulation point of $\{Y^t\}$ is a stationary point of (3).*

Combining Proposition 6 with Proposition 1 and Proposition 2, we immediately have the subsequential convergence of the sequence generated by FIAELT.

Corollary 3. *Let $\{(x_1^t,\ldots,x_p^t,y^t,z_1^t,\ldots,z_p^t)\}$ be generated by Algorithm 1 with (9) holding deterministically. Let $(X^t,Y^t,Z^t)$ be defined as in Proposition 4 and suppose the assumptions in Proposition 6 hold. Then any accumulation point of $\{y^t\}$ is a stationary point of (1).*

Next, we present the convergence rate of $(X^t,Y^t,Z^t)$.

Theorem 2. *Consider (1) and Algorithm 1 with (9) holding deterministically. Let $(X^t,Y^t,Z^t)$ be defined as in Proposition 4 and suppose the assumptions in Proposition 5 hold. Let $H$ be defined as in Proposition 5 and suppose $H$ is a KL function with exponent $\alpha\in[0,1)$. Then $\{(X^t,Y^t,Z^t)\}$ converges globally. Denoting $(X^*,Y^*,Z^*):=\lim_t(X^t,Y^t,Z^t)$ and $d_s^t:=\|(X^t,Y^t,Z^t)-(X^*,Y^*,Z^*)\|$, the following hold. If $\alpha=0$, then $\{d_s^t\}$ converges finitely. If $\alpha\in(0,\tfrac{1}{2}]$, then there exist $b>0$, $t_1\in\mathbb{N}$, and $\rho_1\in(0,1)$ such that $d_s^t\le b\rho_1^t$ for $t\ge t_1$. If $\alpha\in(\tfrac{1}{2},1)$, then there exist $t_2$ and $c>0$ such that $d_s^t\le c\,t^{-\frac{1}{4\alpha-2}}$ for $t\ge t_2$.*

Remark 5. *Proposition 3 and Theorem 2 jointly show that the local outputs $\{x_i^t\}_t$ and the server updates $y^t$ achieve global linear convergence towards a stationary point of (1) when the Kurdyka-Łojasiewicz (KL) exponent of the function $H$ is $\tfrac{1}{2}$. The precise determination of the KL exponent of $H$ is tied to the study of error bounds, which is beyond the scope of the present paper; interested readers are referred to Attouch et al. (2010; 2013), Li & Pong (2018), and Zeng et al. (2021) for deeper insights.*

![8_image_0.png](8_image_0.png)

Figure 4: Results of our algorithm on FEMNIST dataset with different learning rates. (L1-norm regularizer.)

## 5 Experimental Results

To evaluate the performance of our proposed FIAELT algorithm, we conduct experiments on both real and synthetic datasets. When $g=0$ in (1), we compare our algorithm with FedDR (Tran-Dinh et al., 2021), FedPD (Zhang et al., 2021), FedAvg (McMahan et al., 2017b), and FedADMM (Zhou & Li, 2022). When $g=\lambda\|\cdot\|_1$ for some $\lambda\in\mathbb{R}_{++}$, we compare our algorithm with FedMid (Yuan et al., 2021), FedDualAvg (Yuan et al., 2021), and FedDR. Following FedDR (Tran-Dinh et al., 2021), we choose neural networks as our models; the details are deferred to the supplementary materials. For FedDR and FedPD, we use the code provided in Tran-Dinh et al. (2021), and we re-implement FedADMM based on it. All experiments are run on a Linux-based server with 8 A6000 GPUs with 48GB memory each. In accordance with the theoretical analysis, our algorithm samples all clients to perform updates in each communication round. We tune hyper-parameters carefully and report the best results for each algorithm. For evaluation metrics, we use training loss, training accuracy, and test accuracy. Our code is available at https://anonymous.4open.science/r/FIAELT_TMLR-D6C7/.

**Results on synthetic datasets.** Following the data generation process in Li et al. (2020a); Tran-Dinh et al. (2021), we generate three datasets: synthetic-{(0,0), (0.5, 0.5), (1,1)}. All agents perform updates at each communication round, and the algorithms are compared in both iid and non-iid settings. The performance of the five algorithms on the non-iid synthetic datasets is shown in Figure 1. Our algorithm achieves better results than FedPD, FedADMM, FedAvg, and FedDR on all three synthetic datasets.

**Results on the FEMNIST dataset.** The FEMNIST dataset (Cohen et al., 2017; Caldas et al., 2018) is a more complex, federated extension of MNIST. It has 62 classes (26 upper-case and 26 lower-case letters, 10 digits), and the data is distributed across 200 devices. Figure 2 depicts the results of all five algorithms on FEMNIST. FIAELT achieves training accuracy and loss comparable to FedDR, and a significant improvement over FedADMM, FedPD, and FedAvg in both training accuracy and loss. Our algorithm also achieves much better test accuracy than the other four algorithms.

**Results with the L1 norm.** Following FedDR (Tran-Dinh et al., 2021), we also consider the composite setting with $g(x):=0.01\|x\|_1$ to verify our algorithm under different learning rates and numbers of local SGD epochs. We conduct this experiment on the FEMNIST dataset and show the results in Figure 3. In terms of training loss and training accuracy, FIAELT is as efficient as FedDR and outperforms FedDualAvg and FedMid. In addition, in test accuracy, FIAELT outperforms all the other methods. Figure 5 shows how different learning rates affect the performance of FIAELT on the FEMNIST dataset.

## 6 Conclusion

In this paper, we propose a federated inexact ADMM with a new local termination criterion. The criterion is efficient and can be satisfied within a number of local iterations that does not depend on the number of communication rounds, particularly when stochastic gradient methods are used as the local solver. Our new method attains the best-known complexity while having efficient local updates. Additionally, we prove that the proposed method has sequential convergence guarantees in the deterministic case: under KL assumptions, the whole generated sequence converges sublinearly, linearly, or even finitely. Our experiments demonstrate that the proposed method consistently outperforms state-of-the-art methods, especially in terms of test accuracy.

## References

Hédy Attouch and Jérôme Bolte. On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. *Math. Program.*, 116(1-2):5–16, 2009.

Hédy Attouch, Jérôme Bolte, Patrick Redont, and Antoine Soubeyran. Proximal alternating minimization and projection methods for nonconvex problems: An approach based on the kurdyka-lojasiewicz inequality. Math. Oper. Res., 35(2):438–457, 2010.

Hédy Attouch, Jérôme Bolte, and Benar Fux Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized gauss-seidel methods.

Math. Program., 137(1-2):91–129, 2013.

Yajie Bao, Michael Crawshaw, Shan Luo, and Mingrui Liu. Fast composite optimization and statistical recovery in federated learning. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, 2022.

Jérôme Bolte, Shoham Sabach, and Marc Teboulle. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. *Math. Program.*, 146(1-2):459–494, 2014.

Jonathan M. Borwein, Guoyin Li, and Matthew K. Tam. Convergence rate analysis for averaged fixed point iterations in common fixed point problems. *SIAM J. Optim.*, 27(1):1–33, 2017.

Sebastian Caldas, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith, and Ameet Talwalkar. Leaf: A benchmark for federated settings. *CoRR*, abs/1812.01097, 2018.

Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Found.

Comput. Math., 9(6):717–772, 2009.

Gregory Cohen, Saeed Afshar, Jonathan Tapson, and Andre Van Schaik. Emnist: Extending mnist to handwritten letters. In *2017 international joint conference on neural networks (IJCNN)*, pp. 2921–2926.

IEEE, 2017.

Anis Elgabli, Chaouki Ben Issaid, Amrit Singh Bedi, Ketan Rajawat, Mehdi Bennis, and Vaneet Aggarwal.

Fednew: A communication-efficient and privacy-preserving newton-type method for federated learning.

In *International Conference on Machine Learning, ICML 2022, 17-23 July , Baltimore, Maryland, USA*,
2022.

Ziqing Fan, Yanfeng Wang, Jiangchao Yao, Lingjuan Lyu, Ya Zhang, and Qi Tian. Fedskip: Combatting statistical heterogeneity with federated skip aggregation. In Xingquan Zhu, Sanjay Ranka, My T.

Thai, Takashi Washio, and Xindong Wu (eds.), *IEEE International Conference on Data Mining, ICDM,*
Orlando, FL, USA, November 28 - Dec. 1, pp. 131–140. IEEE, 2022.

Yonghai Gong, Yichuan Li, and Nikolaos M. Freris. Fedadmm: A robust federated deep learning framework with adaptivity to system heterogeneity. In 38th IEEE International Conference on Data Engineering, ICDE 2022, Kuala Lumpur, Malaysia, May 9-12,, 2022.

Chao Huang, Jianwei Huang, and Xin Liu. Cross-silo federated learning: Challenges and opportunities.

CoRR, abs/2206.12949, 2022.

Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction.

In *Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information* Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pp. 315–323, 2013.

Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. *Found. Trends Mach. Learn.*, 14
(1-2):1–210, 2021a.

Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. *Found. Trends Mach. Learn.*, 14
(1-2):1–210, 2021b.

Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Paolo Frasconi, Niels Landwehr, Giuseppe Manco, and Jilles Vreeken (eds.), *Machine Learning and Knowledge Discovery in Databases - European Conference,* ECML PKDD 2016, Riva del Garda, Italy, September 19-23, Proceedings, Part I, 2016.

Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, and Ananda Theertha Suresh. SCAFFOLD: stochastic controlled averaging for federated learning. In *Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July, Virtual Event*,
2020.

Guoyin Li and Ting Kei Pong. Douglas-rachford splitting for nonconvex optimization with application to nonconvex feasibility problems. *Math. Program.*, 159(1-2):371–401, 2016.

Guoyin Li and Ting Kei Pong. Calculus of the exponent of kurdyka-łojasiewicz inequality and its applications to linear convergence of first-order methods. *Found. Comput. Math.*, 18(5):1199–1232, 2018.

Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. In Inderjit S. Dhillon, Dimitris S. Papailiopoulos, and Vivienne Sze (eds.), Proceedings of Machine Learning and Systems 2020, MLSys 2020, Austin, TX, USA, March 2-4, 2020a.

Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. On the convergence of fedavg on non-iid data. In 8th International Conference on Learning Representations, ICLR, Addis Ababa, Ethiopia, April 26-30, 2020b.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas.

Communication-efficient learning of deep networks from decentralized data. In *Proceedings of the 20th* International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April, Fort Lauderdale, FL, USA, 2017a.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas.

Communication-efficient learning of deep networks from decentralized data. In *Proceedings of the 20th* International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April, Fort Lauderdale, FL, USA, 2017b.

Kishore Nandury, Anand Mohan, and Frederick Weber. Cross-silo federated training in the cloud with diversity scaling and semi-supervised learning. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021, pp. 3085–3089. IEEE, 2021.

Reese Pathak and Martin J. Wainwright. FedSplit: an algorithmic framework for fast federated optimization.

In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin
(eds.), *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information* Processing Systems 2020, NeurIPS 2020, December 6-12, 2020.

Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and Hugh Brendan McMahan. Adaptive federated optimization. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021.

R. Tyrrell Rockafellar and Roger J.-B. Wets. *Variational Analysis*, volume 317 of *Grundlehren der mathematischen Wissenschaften*. Springer, 1998.

Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, and Lam M. Nguyen. FedDR - randomized douglasrachford splitting algorithms for nonconvex federated composite optimization. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021.

Han Wang, Siddartha Marella, and James Anderson. Fedadmm: A federated primal-dual algorithm allowing partial participation. In *2022 IEEE 61st Conference on Decision and Control (CDC)*, pp. 287–294. IEEE,
2022.

Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. Federated machine learning: Concept and applications. *ACM Trans. Intell. Syst. Technol.*, 10(2):12:1–12:19, 2019.

| Dataset   | Size(Input x FC layer x Output)   |
|-----------|-----------------------------------|
| Synthetic | 60 x 32 x 10                      |
| MNIST     | 784 x 128 x 10                    |
| FEMNIST   | 784 x 128 x 26                    |

Table 3: The details of the neural networks in our numerical experiments.

![12_image_0.png](12_image_0.png)

Figure 5: Results of our algorithm on FEMNIST dataset with different learning rates. (L1-norm regularize.)
Honglin Yuan, Manzil Zaheer, and Sashank J. Reddi. Federated composite optimization. In *Proceedings of* the 38th International Conference on Machine Learning, ICML 2021, 18-24 July, 2021.

Sheng Yue, Ju Ren, Jiang Xin, Sen Lin, and Junshan Zhang. Inexact-admm based federated meta-learning for fast and continual edge learning. In *MobiHoc '21: The Twenty-second International Symposium on Theory,*
Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, Shanghai, China, 26-29 July, 2021.

Liaoyuan Zeng, Peiran Yu, and Ting Kei Pong. Analysis and algorithms for some compressed sensing models based on L1/L2 minimization. *SIAM J. Optim.*, 31(2):1576–1603, 2021.

Xinwei Zhang, Mingyi Hong, Sairaj V. Dhople, Wotao Yin, and Yang Liu. Fedpd: A federated learning framework with adaptivity to non-iid data. *IEEE Trans. Signal Process.*, 69:6055–6070, 2021.

Shenglong Zhou and Geoffrey Ye Li. Communication-efficient admm-based federated learning. *CoRR*,
abs/2110.15318, 2021. URL https://arxiv.org/abs/2110.15318.

Shenglong Zhou and Geoffrey Ye Li. Federated learning via inexact ADMM. *CoRR*, abs/2204.10607, 2022.

H. Zou and T. Hastie. Regularization and variable selection via the elastic net. *J. R. Statist. Soc. B*, 67(2): 301–320, 2005.

## A Supplement For Experiment

**The details of the training models.** For all datasets, we use neural networks with only fully-connected (FC) layers as training models. The sizes of the models are shown in Table 3. Our code is available at https://anonymous.4open.science/r/FIAELT-8CC5/.

**Hyperparameter choosing.** The learning rates are 0.012 for the synthetic datasets and 0.009 for FEMNIST. For FedPD, FedDR, and FedProx, we follow Tran-Dinh et al. (2021) to select the hyper-parameters, including $\mu$ for FedProx, $\eta$ for FedPD, and $\eta,\alpha$ for FedDR. For FedMid (Yuan et al., 2021) and FedDualAvg (Yuan et al., 2021), we also select the hyper-parameters that work best when plotting the performance for comparison.

**Additional results with different learning rates.** Figure 5 shows how different learning rates affect the performance of FIAELT on the FEMNIST dataset.
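For reference, a minimal PyTorch sketch of the fully-connected architecture described above (our reconstruction of the sizes in Table 3; the ReLU activation is an assumption, as the activation is not specified).

```python
import torch.nn as nn

def make_fc_model(in_dim, hidden_dim, out_dim):
    """One-hidden-layer fully-connected network matching the sizes in Table 3,
    e.g. make_fc_model(784, 128, 26) for FEMNIST."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, out_dim),
    )
```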

![13_image_0.png](13_image_0.png)

![13_image_1.png](13_image_1.png)

Figure 6: Results on Synthetic-{(0,0), (0.5, 0.5), (1,1)} dataset.

![13_image_2.png](13_image_2.png)

Figure 7: Results on FEMNIST dataset.

## A.1 Additional Results Comparing FIAME With Non-ADMM Based FL Algorithms

We compare our method with FedAvg (Li et al., 2020b), SCAFFOLD (Karimireddy et al., 2020), and FedSkip (Fan et al., 2022).

**Results on synthetic datasets.** Following the data generation process in Li et al. (2020a); Tran-Dinh et al. (2021), we generate three datasets: synthetic-{(0,0), (0.5, 0.5), (1,1)}. All agents perform updates at each communication round. We compare our algorithm on the synthetic datasets in both iid and non-iid settings. The performance of the 4 algorithms on the non-iid synthetic datasets is shown in Figure 6. Our algorithm achieves better results than FedAvg, SCAFFOLD, and FedSkip on all three synthetic datasets.

**Results on FEMNIST dataset.** The FEMNIST dataset (Cohen et al., 2017; Caldas et al., 2018) is a more complex, federated extension of MNIST. It has 62 classes (26 upper-case letters, 26 lower-case letters, and 10 digits), and the data is distributed to 200 devices. Figure 7 depicts the results of all 4 algorithms on FEMNIST. As it shows, compared with the other 3 methods, FIAME achieves a significant improvement in both training accuracy and loss value. Our algorithm also attains much better test accuracy than the other 3 algorithms.

## B Convergence Analysis Of Algorithm 1

Proposition 1. *If* $X^* = (x_1^*, \ldots, x_p^*)$ *is a stationary point of* (3)*, then* $x_1^*$ *is a stationary point of* (1). *Furthermore, if* $X = (x_1, \ldots, x_p)$ *is an* $\varepsilon$*-stationary point of* (3)*, then* $x_1$ *is a* $p\varepsilon$*-stationary point of* (1).

Proof. Note that

$\mathfrak{C}=\{(x_{1},\ldots,x_{p}):\ x_{1}-x_{2}=0,\ x_{2}-x_{3}=0,\ \ldots,\ x_{p-1}-x_{p}=0\}\,.$
Using Theorem 6.14 of Rockafellar & Wets (1998), we have

$$N_{\mathfrak{C}}=\left\{\sum_{i=1}^{p-1}\lambda_{i}(0,\ldots,0,\underbrace{1}_{\text{the$i_{\text{th}}$coordinate}},-1,0,\ldots,0):\ (\lambda_{1},\ldots,\lambda_{p-1})\in\mathbb{R}^{p-1}\right\},$$

where $\mathbf{1}$ is the vector in $\mathbb{R}^{p}$ whose coordinates are all one.

This together with Corollary 10.9 and Proposition 10.5 of Rockafellar & Wets (1998) shows that for any $Y \in \operatorname{dom}\partial G$, $\partial G(Y)$ can be represented as

$$\left\{(\xi,0,\ldots,0)+\sum_{i=1}^{p-1}\lambda_{i}(0,\ldots,0,\underbrace{1}_{i\mathrm{th}},-1,0,\ldots,0):\ \xi\in\partial g(y_{1}),\ (\lambda_{1},\ldots,\lambda_{p-1})\in\mathbb{R}^{p-1}\right\}.\tag{17}$$
Suppose $Y^{*} = (y_{1}^{*}, \ldots, y_{p}^{*})$ is a stationary point of (3). Then $Y^{*} \in \operatorname{dom}\partial G \subseteq \operatorname{dom}G$. Thus, $y_{1}^{*} = \cdots = y_{p}^{*}$.

In addition, it holds that

$$0\in\nabla F(Y^{*})+\partial G(Y^{*})=(\nabla f_{1}(y^{*}),\ldots,\nabla f_{p}(y^{*}))+(\partial g(y_{1}^{*}),0,\ldots,0)+\sum_{i=1}^{p-1}\lambda_{i}(0,\ldots,0,\underbrace{1}_{i\mathrm{th}},-1,0,\ldots,0),\tag{18}$$

where the equality uses (17) together with Exercise 8.8 and Proposition 10.5 of Rockafellar & Wets (1998). The above relation is equivalent to

$$\begin{aligned}
0&\in\nabla f_{1}(y^{*})+\partial g(y_{1}^{*})+\lambda_{1}\mathbf{1},\\
0&=\nabla f_{2}(y^{*})-\lambda_{1}\mathbf{1}+\lambda_{2}\mathbf{1},\\
&\ \,\vdots\\
0&=\nabla f_{p-1}(y^{*})-\lambda_{p-2}\mathbf{1}+\lambda_{p-1}\mathbf{1},\\
0&=\nabla f_{p}(y^{*})-\lambda_{p-1}\mathbf{1}.
\end{aligned}\tag{19}$$
Eliminating the multipliers $\lambda_{1},\ldots,\lambda_{p-1}$ in (19), for instance by summing all of its relations, we have that

$$0\in\sum_{i}\nabla f_{i}(y^{*})+\partial g(y_{1}^{*}).$$

Thus $y^{*}$ is a stationary point of (1).

Now, suppose $Y = (y_{1}, \ldots, y_{p})$ is an $\varepsilon$-stationary point of (3). Then $Y \in \operatorname{dom}\partial G \subseteq \operatorname{dom}G$. Thus, $y_{1} = \cdots = y_{p}$ and

$$\varepsilon\geq d^{2}(0,\nabla F(Y)+\partial G(Y)).\tag{20}$$

Using (17) and Proposition 10.5 of Rockafellar & Wets (1998), we have that

$$\begin{aligned}
d^{2}(0,\nabla F(Y)+\partial G(Y))
&=\min_{\xi\in\partial g(y_{1}),\,\lambda\in\mathbb{R}^{p-1}}\Big(\|\nabla f_{1}(y_{1})+\xi+\lambda_{1}\mathbf{1}\|^{2}+\sum_{i=2}^{p-1}\|\nabla f_{i}(y_{1})+\lambda_{i}\mathbf{1}-\lambda_{i-1}\mathbf{1}\|^{2}+\|\nabla f_{p}(y_{1})-\lambda_{p-1}\mathbf{1}\|^{2}\Big)\\
&\geq\min_{\xi\in\partial g(y_{1}),\,\lambda\in\mathbb{R}^{p-1}}\frac{1}{p}\Big\|\sum_{i}\nabla f_{i}(y_{1})+\xi\Big\|^{2}
=\min_{\xi\in\partial g(y_{1})}\frac{1}{p}\Big\|\sum_{i}\nabla f_{i}(y_{1})+\xi\Big\|^{2}
=\frac{1}{p}\,d^{2}\Big(0,\sum_{i}\nabla f_{i}(y_{1})+\partial g(y_{1})\Big).
\end{aligned}\tag{21}$$
This together with (20) shows that $y_{1}$ is a $p\varepsilon$-stationary point of (1). $\square$
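For concreteness, the following sketch evaluates the stationarity measure $d^{2}\big(0, \sum_i \nabla f_i(y) + \partial g(y)\big)$ in the special case $g = \lambda\|\cdot\|_1$ (the L1 regularizer used in our experiments); the aggregated gradient `grad_sum` and the weight `lam` are assumed inputs rather than quantities defined above.

```python
import numpy as np

def dist_sq_to_stationarity(grad_sum: np.ndarray, y: np.ndarray, lam: float) -> float:
    """Squared distance d^2(0, grad_sum + subdiff g(y)) for g = lam * ||.||_1.

    grad_sum plays the role of sum_i grad f_i(y); it is an assumed input here.
    """
    active = y != 0
    res = np.zeros_like(grad_sum)
    # Where y_j != 0 the subdifferential is the single point lam * sign(y_j).
    res[active] = grad_sum[active] + lam * np.sign(y[active])
    # Where y_j == 0 the subdifferential is [-lam, lam]; the nearest point to
    # -grad_sum_j leaves a soft-thresholded residual.
    res[~active] = np.sign(grad_sum[~active]) * np.maximum(np.abs(grad_sum[~active]) - lam, 0.0)
    return float(np.sum(res ** 2))
```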

## B.1 Proofs Of Proposition 2

The problem of updating $Y^{t+1}$ in (6) is the constrained problem

$$\min_{Y}\ g(y_{1})+\langle Z^{t},X^{t+1}-Y\rangle+\frac{\beta}{2}\|X^{t+1}-Y\|^{2}\quad\text{s.t.}\ \ y_{2}=y_{3}=\cdots=y_{p}=y_{1}.\tag{22}$$

Since $\beta > L$, the objective in the above problem is strongly convex. Thus, there exists a unique solution $(y_{1}, y_{2}, \ldots, y_{p})$ to (22). Denote the Lagrange multipliers for the above problem as $W = (w_{1}, w_{2}, \ldots, w_{p})$.

Then the Karush-Kuhn-Tucker conditions for the above problem are

$$0\in\partial g(y_{1})-z_{1}^{t+1}-\beta(x_{1}^{t+1}-y_{1})-\sum_{i=2}^{p}w_{i},\tag{23}$$
$$0=-z_{i}^{t+1}+w_{i}-\beta(x_{i}^{t+1}-y_{i}),\ i=2,\ldots,p,\tag{24}$$
$$y_{i}=y_{1},\ \,i=2,\ldots,p.\tag{25}$$
Combining (24) with (25) gives

$$\sum_{i=2}^{p}w_{i}=\beta\sum_{i=2}^{p}(x_{i}^{t+1}-y_{i})+\sum_{i=2}^{p}z_{i}^{t+1}=\beta\sum_{i=2}^{p}x_{i}^{t+1}-(p-1)\beta y_{1}+\sum_{i=2}^{p}z_{i}^{t+1}.$$
This together with (23) shows that

$$\beta\sum_{i=2}^{p}x_{i}^{t+1}-(p-1)\beta y_{1}+\sum_{i=2}^{p}z_{i}^{t+1}+z_{1}^{t+1}+\beta x_{1}^{t+1}\in\partial g(y_{1})+\beta y_{1},$$

which is equivalent to

$${\frac{1}{p}}\sum_{i=1}^{p}(x_{i}^{t+1}+{\frac{1}{\beta}}z_{i}^{t+1})\in{\frac{1}{\beta p}}\partial g(y_{1})+y_{1}.$$

This implies that $y_{1}\in\mathrm{Prox}_{\frac{1}{\beta p}g}\big(\frac{1}{p}\sum_{i=1}^{p}(x_{i}^{t+1}+\frac{1}{\beta}z_{i}^{t+1})\big)$. Recalling (25), we deduce that the solution of the problem in the third equation of (6) is $(y_{1},\ldots,y_{1})$ with $y_{1}=\mathrm{Prox}_{\frac{1}{\beta p}g}\big(\frac{1}{p}\sum_{i=1}^{p}(x_{i}^{t+1}+\frac{1}{\beta}z_{i}^{t+1})\big)$. $\square$

Proposition 3. *Consider* (1). *Set* $\beta > L := \max_i L_i$. *Let* $\{(x_i^t, y_i^t, z_i^t)\}$ *be generated by Algorithm 1. Use SVRG (Johnson & Zhang, 2013) with Option II, with frequency* $m_i$*, learning rate* $\eta_i$*, and initialization* $x_i^t$ *for* (8)*, such that*

$$\frac{1}{\eta_{i}(\beta-L_{i})(1-2\eta_{i}(\beta+L_{i}))m_{i}}+\frac{2\eta_{i}(\beta+L_{i})}{1-2\eta_{i}(\beta+L_{i})}=:\rho_{i}<1.\tag{10}$$

*Then criterion* (9) *is satisfied in at most* $k_t^i = \log_{1/\rho_i}\frac{\beta+L_i}{r_i(\beta-L_i)}$ *iterations of SVRG.*

Proof. Note that $L(x, y_i^t, z_i^t)$ is strongly convex with modulus $\beta - L_i$ and $\nabla L(x, y_i^t, z_i^t)$ is Lipschitz continuous with modulus $L_i + \beta$. Let $\rho_i := \frac{1}{(\beta-L_i)\eta_i(1-2\eta_i(\beta+L_i))m_i} + \frac{2\eta_i(\beta+L_i)}{1-2\eta_i(\beta+L_i)}$, where $m_i$ and $\eta_i$ are the frequency and learning rate in SVRG, respectively. Using Theorem 1 of Johnson & Zhang (2013), after $k_t^i$ iterations of SVRG it holds that

$$\mathbb{E}_{i}^{t}\big[L_{\beta,i}(x_{i}^{t+1},y^{t},z_{i}^{t})-L_{\beta,i}(x_{i,\star}^{t+1},y^{t},z_{i}^{t})\big]\leq\rho_{i}^{k_{t}^{i}}\big(L_{\beta,i}(x_{i}^{t},y^{t},z_{i}^{t})-L_{\beta,i}(x_{i,\star}^{t+1},y^{t},z_{i}^{t})\big).\tag{26}$$

Combining this with the strong convexity of $L(x, y_i^t, z_i^t)$ and the Lipschitz continuity of $\nabla L(x, y_i^t, z_i^t)$, we have that

$$\mathbb{E}_{i}^{t}\|x_{i}^{t+1}-x_{i,\star}^{t+1}\|^{2}\leq\frac{\beta+L_{i}}{\beta-L_{i}}\rho_{i}^{k_{t}^{i}}\|x_{i}^{t}-x_{i,\star}^{t+1}\|^{2}\leq r_{i}\|x_{i}^{t}-x_{i,\star}^{t+1}\|^{2},\tag{27}$$

where the second inequality is based on $\frac{\beta+L_{i}}{\beta-L_{i}}\rho_{i}^{k_{t}^{i}}\leq r_{i}$. This completes the proof. $\square$
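For reference, a minimal sketch of such an SVRG (Option II) local solver for (8) is given below. The per-sample gradient oracle `grad_sample`, the step size `eta`, the frequency `m`, and the epoch count `n_epochs` are assumed inputs, with `n_epochs` playing the role of the bound $k_t^i$ above and the initialization taken as $x_i^t$.

```python
import numpy as np

def local_svrg(grad_sample, n_samples, x0, y, z, beta, eta, m, n_epochs, rng):
    """Sketch of an SVRG (Option II) solver for the local subproblem
        min_x f_i(x) + <z, x - y> + (beta/2) ||x - y||^2,
    with f_i(x) = (1/n) sum_j loss_j(x) and grad_sample(x, j) = grad loss_j(x).
    """
    def comp_grad(x, j):
        # Gradient of one SVRG component: loss_j plus the smooth ADMM terms.
        return grad_sample(x, j) + z + beta * (x - y)

    def full_grad(x):
        return sum(comp_grad(x, j) for j in range(n_samples)) / n_samples

    snapshot = np.array(x0, dtype=float)
    for _ in range(n_epochs):
        mu = full_grad(snapshot)            # full gradient at the snapshot
        x = snapshot.copy()
        inner_iterates = []
        for _ in range(m):
            j = int(rng.integers(n_samples))
            g = comp_grad(x, j) - comp_grad(snapshot, j) + mu  # variance-reduced gradient
            x = x - eta * g
            inner_iterates.append(x.copy())
        # Option II: the next snapshot is a uniformly chosen inner iterate.
        snapshot = inner_iterates[int(rng.integers(m))]
    return snapshot

# Example usage with an assumed gradient oracle:
# rng = np.random.default_rng(0)
# x_new = local_svrg(grad_sample, n_samples, x_i_t, y_t, z_i_t, beta, eta_i, m_i, k_t_i, rng)
```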

## C Proof For Convergence Analysis

To prove the results in the section on the convergence analysis of Algorithm 1, we first present the following well-known fact for strongly convex functions; see, for example, Theorem 2 in Karimi et al. (2016).

Proposition 7. *Let* $f: \mathbb{R}^n \to \mathbb{R}$ *be a strongly convex function with modulus* $\mu$*. Suppose in addition that* $f$ *is smooth and has Lipschitz continuous gradient with modulus* $L$*. Then there exists a unique minimizer* $x^{*}$ *of* $f$ *and it holds that*

$$\|\nabla f(x)\|^{2}\geq2\mu\left(f(x)-f(x^{*})\right)\geq\mu^{2}\|x-x^{*}\|^{2}.$$
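For instance, for the quadratic $f(x) = \frac{\mu}{2}\|x\|^{2}$ (where $L = \mu$ and $x^{*} = 0$), both inequalities hold with equality:

$$\|\nabla f(x)\|^{2}=\mu^{2}\|x\|^{2}=2\mu\Big(\tfrac{\mu}{2}\|x\|^{2}-0\Big)=\mu^{2}\|x-x^{*}\|^{2}.$$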

Proposition 2. *Consider* (3). *Let* $\{(X^{t+1}, Y^{t+1}, Z^{t+1})\}$ *be generated by* (6). *Suppose* $\beta > \max_i L_i$. *Then the solution of the problem in the third equation of* (6) *is* $(y_{1}, \ldots, y_{1})$ *with* $y_{1}=\mathrm{Prox}_{\frac{1}{\beta p}g}\big(\frac{1}{p}\sum_{i=1}^{p}(x_{i}^{t+1}+\frac{1}{\beta}z_{i}^{t+1})\big)$.

We next verify the relations in Proposition 4. The second and third relations there are obvious; we only need to show that $X^{t}$ satisfies (11). Using (9) and the definition $r = \max_i r_i$, we have

$$\mathbb{E}_{i}^{t}\|x_{i}^{t+1}-x_{i,\star}^{t+1}\|^{2}\leq r_{i}\|x_{i}^{t}-x_{i,\star}^{t+1}\|^{2}\leq r\|x_{i}^{t}-x_{i,\star}^{t+1}\|^{2}.$$

Summing over $i = 1, \ldots, p$, we obtain (11).

## C.1 Details And Proofs Of Proposition 5

Before proving Proposition 5, we first present several properties of the problem

$$\min_{X}L_{\beta}(X,Y^{t},Z^{t}),\tag{28}$$

where $Y^{t}$ and $Z^{t}$ are defined as in Proposition 4.

Proposition 8. *Consider* (1). *Let* $(X^t, Y^t, Z^t)$ *be defined as in Proposition 4. Let* $\beta \geq \sum_i L_i$. *Denote* $X^{t+1}_{\star} := \arg\min_X L_{\beta}(X, Y^t, Z^t)$.¹ *Then the following statements hold:*

(i) *Denote* $e^{t+1}=X^{t+1}-X^{t+1}_{\star}$. *Then there exists* $\xi^{t+1}\in\partial G(Y^{t+1})$ *such that*

$$0=\nabla F(X_{\star}^{t+1})+Z^{t}+\beta(X_{\star}^{t+1}-Y^{t})\ \Leftrightarrow\ -Z^{t}-\beta(X^{t+1}-e^{t+1}-Y^{t})=\nabla F(X_{\star}^{t+1})\tag{29}$$

*and*

$$0=\xi^{t+1}-Z^{t+1}-\beta(X^{t+1}-Y^{t+1}).\tag{30}$$

(ii) *It holds that*

$$Z^{t+1}=(1-\tau)Z^{t}+\beta\tau e^{t+1}+\tau\nabla F(X_{\star}^{t+1}).\tag{31}$$

(iii) *Let* $r = \max_i r_i$. *It holds that*

$$\mathbb{E}\|e^{t}\|^{2}\leq{\frac{2r}{1-2r}}\mathbb{E}\|X^{t}-X^{t-1}\|^{2}.\tag{32}$$

¹The existence and uniqueness of $X^{t+1}_{\star}$ are thanks to $\beta \geq \max_i L_i$ and Proposition 7.

Proof. (i) follows from the first-order optimality conditions of (28) and (13). Combining (29) with (12), we have that

$$-Z^{t}-\frac{1}{\tau}(Z^{t+1}-Z^{t})+\beta e^{t+1}=-Z^{t}-\beta(X^{t+1}-e^{t+1}-Y^{t})=\nabla F(X_{\star}^{t+1})\ \Leftrightarrow\ Z^{t+1}=(1-\tau)Z^{t}+\beta\tau e^{t+1}+\tau\nabla F(X_{\star}^{t+1}).$$

Now, we bound $\mathbb{E}\|e^{t}\|^{2}$. Denote $e_{i}^{t} := x_{i}^{t} - x_{i,\star}^{t}$. Then using (27), we have that

$$\mathbb{E}_{t-1}\|e_{i}^{t}\|^{2}\leq r_{i}\|x_{i}^{t-1}-x_{i,\star}^{t}\|^{2}\leq2r_{i}\big(\|x_{i}^{t}-x_{i}^{t-1}\|^{2}+\|e_{i}^{t}\|^{2}\big),$$
where $c_{i}^{\prime} := \frac{\beta+L_{i}}{\beta-L_{i}}$. Denote $c^{\prime} = \max_i c_{i}^{\prime}$, $\rho := \max_i \rho_i$, $k_t := \max_i k_t^i$ and $r = \max_i r_i$. Summing both sides of the above inequality over $i = 1, \ldots, p$, we obtain that

$$\mathbb{E}_{t-1}\|e^{t}\|^{2}\leq2r\big(\|X^{t}-X^{t-1}\|^{2}+\mathbb{E}_{t-1}\|e^{t}\|^{2}\big).$$

Taking expectation on both sides over all randomness and rearranging the above inequality we obtain (32).

Now, we are ready to prove Proposition 5.

Proposition 5. *Select hyperparameters* $\beta \geq 5L$, $r_i \in (0, 0.01]$, $\tau \in [1/2, 1)$. *Denote* $\Gamma := \frac{1-\tau}{\tau}$, $\Theta := 2\beta^2 + 4L^2$, $\Lambda := 4L^2$, $\Upsilon := \frac{\Theta}{\tau\beta}\frac{4r}{1-2r}$ *and* $\delta := \frac{1}{4}(\beta - L) - 2\Upsilon$. *Define*

$$H(X,Y,Z,X^{\prime},Z^{\prime}):=L_{\beta}(X,Y,Z)+\frac{\Gamma}{\tau\beta}\|Z-Z^{\prime}\|^{2}+\Upsilon\|X-X^{\prime}\|^{2}$$

*and* $H_{t+1} := \mathbb{E}H(X^{t+1}, Y^{t+1}, Z^{t+1}, X^{t}, Z^{t})$. *Then for* $t \geq 1$*, it holds that* $\delta \geq 0.1L$ *and*

$$H_{t+1}\leq H_{t}-\delta\mathbb{E}\|X^{t+1}-X^{t}\|^{2}-\frac{\beta}{2}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}.\tag{14}$$

*Hence, the sequence* $\{H_t\}$ *converges to some* $H_* \geq W$.

Proof. Note that

$$\begin{aligned}
&\mathbb{E}_{t}L_{\beta}(X^{t+1},Y^{t},Z^{t})-L_{\beta}(X^{t},Y^{t},Z^{t})\\
&=L_{\beta}(X^{t+1},Y^{t},Z^{t})-L_{\beta}(X_{\star}^{t+1},Y^{t},Z^{t})+L_{\beta}(X_{\star}^{t+1},Y^{t},Z^{t})-L_{\beta}(X^{t},Y^{t},Z^{t})\\
&\leq\rho^{k_{t}}\big(L_{\beta}(X^{t},Y^{t},Z^{t})-L_{\beta}(X_{\star}^{t+1},Y^{t},Z^{t})\big)+L_{\beta}(X_{\star}^{t+1},Y^{t},Z^{t})-L_{\beta}(X^{t},Y^{t},Z^{t})\\
&\leq\rho^{k_{t}}\big(L_{\beta}(X^{t},Y^{t},Z^{t})-L_{\beta}(X_{\star}^{t+1},Y^{t},Z^{t})\big)-\frac{\beta-L}{2}\|X^{t}-X_{\star}^{t+1}\|^{2}\\
&\leq\rho^{k_{t}}\big(L_{\beta}(X^{t},Y^{t},Z^{t})-L_{\beta}(X_{\star}^{t+1},Y^{t},Z^{t})\big)-\frac{\beta-L}{4}\mathbb{E}_{t}\|X^{t}-X^{t+1}\|^{2}+\frac{\beta-L}{2}\mathbb{E}_{t}\|X^{t+1}-X_{\star}^{t+1}\|^{2}\\
&\leq\rho^{k_{t}}\frac{\beta+L}{2}\|X^{t}-X_{\star}^{t+1}\|^{2}-\frac{\beta-L}{4}\mathbb{E}_{t}\|X^{t}-X^{t+1}\|^{2}+\frac{\beta-L}{2}\mathbb{E}_{t}\|e^{t+1}\|^{2},
\end{aligned}\tag{33}$$

where the first inequality makes use of (26), the second inequality is because $L_{\beta}(X, Y^{t}, Z^{t})$ is strongly convex with modulus $\beta - \max_i L_i$ and $X^{t+1}_{\star}$ is the minimizer of $\min_X L_{\beta}(X, Y^{t}, Z^{t})$, the third inequality uses Young's inequality, and the last inequality uses the Lipschitz continuity of $\nabla_X L_{\beta}(X, Y^{t}, Z^{t})$.

Using the fact that $\|X^{t}-X^{t+1}_{\star}\|^{2}\leq2\mathbb{E}_{t}\|X^{t}-X^{t+1}\|^{2}+2\mathbb{E}_{t}\|e^{t+1}\|^{2}$, (33) can be further passed to

$$\begin{aligned}
&\mathbb{E}_{t}L_{\beta}(X^{t+1},Y^{t},Z^{t})-L_{\beta}(X^{t},Y^{t},Z^{t})\\
&\leq2\rho^{k_{t}}\frac{\beta+L}{2}\mathbb{E}_{t}\|X^{t}-X^{t+1}\|^{2}+2\rho^{k_{t}}\frac{\beta+L}{2}\mathbb{E}_{t}\|e^{t+1}\|^{2}-\frac{\beta-L}{4}\mathbb{E}_{t}\|X^{t}-X^{t+1}\|^{2}+\frac{\beta-L}{2}\mathbb{E}_{t}\|e^{t+1}\|^{2}\\
&=\Big(2\rho^{k_{t}}\frac{\beta+L}{2}-\frac{\beta-L}{4}\Big)\mathbb{E}_{t}\|X^{t}-X^{t+1}\|^{2}+\Big(2\rho^{k_{t}}\frac{\beta+L}{2}+\frac{\beta-L}{2}\Big)\mathbb{E}_{t}\|e^{t+1}\|^{2}\\
&\leq\Big(2\rho^{k_{t}}\frac{\beta+L}{2}-\frac{\beta-L}{4}+\Big(2\rho^{k_{t}}\frac{\beta+L}{2}+\frac{\beta-L}{2}\Big)\frac{2r}{1-2r}\Big)\mathbb{E}_{t}\|X^{t}-X^{t+1}\|^{2}\\
&=\Big(\frac{\rho^{k_{t}}}{1-2r}(\beta+L)-\Big(\frac{1}{4}-\frac{r}{1-2r}\Big)(\beta-L)\Big)\mathbb{E}_{t}\|X^{t}-X^{t+1}\|^{2},
\end{aligned}\tag{34}$$

where the second inequality uses (32).
Next, using (12), we have

$$L_{\beta}(X^{t+1},Y^{t},Z^{t+1})-L_{\beta}(X^{t+1},Y^{t},Z^{t})=\frac{1}{\tau\beta}\|Z^{t+1}-Z^{t}\|^{2}\tag{35}$$

When $\tau \in (0, 1)$, combining (31) and the convexity of $\|\cdot\|^{2}$, we have that

$$\begin{array}{l}{{\|Z^{t+1}-Z^{t}\|^{2}\leq(1-\tau)\|Z^{t}-Z^{t-1}\|^{2}+\tau\|\beta(e^{t+1}-e^{t})+\nabla(F(X_{*}^{t+1})-F(X_{*}^{t}))\|^{2}}}\\ {{\leq(1-\tau)\|Z^{t}-Z^{t-1}\|^{2}+2\tau\beta^{2}\|e^{t+1}-e^{t}\|^{2}+2\tau\|\nabla(F(X_{*}^{t+1})-F(X_{*}^{t}))\|^{2}}}\\ {{\leq(1-\tau)\|Z^{t}-Z^{t-1}\|^{2}+2\tau\beta^{2}\|e^{t+1}-e^{t}\|^{2}+2\tau L^{2}\|X_{*}^{t+1}-X_{*}^{t}\|^{2},}}\end{array}$$

where the second inequality uses Young's inequality for products, and the last inequality uses the Lipschitz continuity of $\nabla F$. Rearranging the above inequality, we have that

$$\begin{aligned}
\|Z^{t+1}-Z^{t}\|^{2}
&\leq\frac{1-\tau}{\tau}\big(\|Z^{t}-Z^{t-1}\|^{2}-\|Z^{t+1}-Z^{t}\|^{2}\big)+2\beta^{2}\|e^{t+1}-e^{t}\|^{2}+2L^{2}\|X_{\star}^{t+1}-X_{\star}^{t}\|^{2}\\
&\leq\frac{1-\tau}{\tau}\big(\|Z^{t}-Z^{t-1}\|^{2}-\|Z^{t+1}-Z^{t}\|^{2}\big)+2\beta^{2}\|e^{t+1}-e^{t}\|^{2}+2L^{2}\big((1+\kappa^{2})\|X^{t+1}-X^{t}\|^{2}+(1+\kappa^{-2})\|e^{t+1}-e^{t}\|^{2}\big)\\
&=\frac{1-\tau}{\tau}\big(\|Z^{t}-Z^{t-1}\|^{2}-\|Z^{t+1}-Z^{t}\|^{2}\big)+\big(2\beta^{2}+4L^{2}\big)\|e^{t+1}-e^{t}\|^{2}+4L^{2}\|X^{t+1}-X^{t}\|^{2},
\end{aligned}\tag{36}$$
where κ > 0 and the last inequality uses the definition of e t+1 and Young's inequality for products.

Using the definition of $\Gamma$, $\Theta$ and $\Lambda$, (36) becomes

$$\left\|Z^{t+1}-Z^{t}\right\|^{2}\leq\Gamma\left(\left\|Z^{t-1}-Z^{t}\right\|^{2}-\left\|Z^{t+1}-Z^{t}\right\|^{2}\right)+\Theta\|e^{t}-e^{t+1}\|^{2}+\Lambda\left\|X^{t}-X^{t+1}\right\|^{2}.\tag{37}$$
Now, combining (34), (35) and (37), we obtain that

$$\begin{aligned}
\mathbb{E}_{t}L_{\beta}(X^{t+1},Y^{t},Z^{t+1})
&\leq L_{\beta}(X^{t},Y^{t},Z^{t})+\Big(\frac{\rho^{k_{t}}}{1-2r}(\beta+L)-\Big(\frac{1}{4}-\frac{r}{1-2r}\Big)(\beta-L)\Big)\mathbb{E}_{t}\|X^{t}-X^{t+1}\|^{2}\\
&\quad+\frac{\Gamma}{\tau\beta}\big(\|Z^{t-1}-Z^{t}\|^{2}-\mathbb{E}_{t}\|Z^{t+1}-Z^{t}\|^{2}\big)+\frac{\Theta}{\tau\beta}\mathbb{E}_{t}\|e^{t}-e^{t+1}\|^{2}+\frac{\Lambda}{\tau\beta}\mathbb{E}_{t}\|X^{t}-X^{t+1}\|^{2}.
\end{aligned}$$
Taking expectations with respect to $X^{t}$, the above inequality implies

$$\begin{aligned}
\mathbb{E}L_{\beta}(X^{t+1},Y^{t},Z^{t+1})&\leq\mathbb{E}L_{\beta}(X^{t},Y^{t},Z^{t})+\left(\frac{\rho^{k_{t}}}{1-2r}(\beta+L)-\left(\frac{1}{4}-\frac{r}{1-2r}\right)(\beta-L)\right)\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\\
&\quad+\frac{\Gamma}{\tau\beta}\left(\mathbb{E}\|Z^{t-1}-Z^{t}\|^{2}-\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}\right)+\frac{\Theta}{\tau\beta}\mathbb{E}\|e^{t}-e^{t+1}\|^{2}.
\end{aligned}\tag{38}$$
Combining (32) with (38), we obtain that

$$\begin{aligned}
\mathbb{E}L_{\beta}(X^{t+1},Y^{t},Z^{t+1})&\leq\mathbb{E}L_{\beta}(X^{t},Y^{t},Z^{t})+\left(\frac{\rho^{k_{t}}}{1-2r}(\beta+L)-\left(\frac{1}{4}-\frac{r}{1-2r}\right)(\beta-L)\right)\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\\
&\quad+\frac{\Gamma}{\tau\beta}\left(\mathbb{E}\|Z^{t-1}-Z^{t}\|^{2}-\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}\right)+\frac{\Theta}{\tau\beta}\frac{4r}{1-2r}\mathbb{E}\|X^{t}-X^{t-1}\|^{2}+\frac{\Theta}{\tau\beta}\frac{4r}{1-2r}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}.
\end{aligned}\tag{39}$$

Recall that $k_{t} = \min_i k_t^i$, $L = \max_i L_i$, $\rho = \max_i \rho_i$, $r = \max_i r_i$ and $k_t^i$ satisfies $\frac{\beta+L}{\beta-L}\rho^{k_t^i} \leq r_i$. This implies

$$\rho^{k_{t}}\leq\frac{\beta-L}{\beta+L}r.$$

This together with (39) shows that

$$\begin{aligned}
\mathbb{E}L_{\beta}(X^{t+1},Y^{t},Z^{t+1})&\leq\mathbb{E}L_{\beta}(X^{t},Y^{t},Z^{t})-\frac{1}{4}(\beta-L)\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\\
&\quad+\frac{\Gamma}{\tau\beta}\left(\mathbb{E}\|Z^{t-1}-Z^{t}\|^{2}-\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}\right)+\underbrace{\frac{\Theta}{\tau\beta}\frac{4r}{1-2r}}_{\Upsilon}\mathbb{E}\|X^{t}-X^{t-1}\|^{2}+\frac{\Theta}{\tau\beta}\frac{4r}{1-2r}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}.
\end{aligned}\tag{40}$$
Finally, using the definition of δ and Υ, (40) further implies

$$\begin{aligned}
\mathbb{E}L_{\beta}(X^{t+1},Y^{t},Z^{t+1})&\leq\mathbb{E}L_{\beta}(X^{t},Y^{t},Z^{t})-\delta\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\\
&\quad+\frac{\Gamma}{\tau\beta}\left(\mathbb{E}\|Z^{t-1}-Z^{t}\|^{2}-\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}\right)+\Upsilon\left(\mathbb{E}\|X^{t}-X^{t-1}\|^{2}-\mathbb{E}\|X^{t+1}-X^{t}\|^{2}\right).
\end{aligned}\tag{41}$$

Next, noting that $Y^{t+1}$ is the minimizer of (13), which is $\beta$-strongly convex, it holds that

$$\mathbb{E}L_{\beta}(X^{t+1},Y^{t+1},Z^{t+1})\leq\mathbb{E}L_{\beta}(X^{t+1},Y^{t},Z^{t+1})-\frac{\beta}{2}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}.\tag{42}$$
Summing (42) and (41), we have that

$$\begin{aligned}
\mathbb{E}L_{\beta}(X^{t+1},Y^{t+1},Z^{t+1})&\leq\mathbb{E}L_{\beta}(X^{t},Y^{t},Z^{t})-\delta\mathbb{E}\|X^{t}-X^{t+1}\|^{2}+\frac{\Gamma}{\tau\beta}\left(\mathbb{E}\|Z^{t-1}-Z^{t}\|^{2}-\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}\right)\\
&\quad+\Upsilon\left(\mathbb{E}\|X^{t}-X^{t-1}\|^{2}-\mathbb{E}\|X^{t+1}-X^{t}\|^{2}\right)-\frac{\beta}{2}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}.
\end{aligned}$$

Rearranging the above inequality and recalling the definition of $H(X,Y,Z,X^{\prime},Z^{\prime})$, we have that

$$\mathbb{E}H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t})\leq\mathbb{E}H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1})-\delta\mathbb{E}\|X^{t}-X^{t+1}\|^{2}-\frac{\beta}{2}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}.$$
Now we prove that $\{H_t\}$ is convergent. Inequality (14) implies that $\{H_t\}$ is nonincreasing. Since $F$ and $G$ are bounded from below, we denote $W = \inf F + \inf G$. Now we show that $H_t \geq W$ for all $t$. Suppose to the contrary that there exists $t_0$ such that $H_{t_0} < W$. Since (14) implies that $\{H_t\}$ is nonincreasing, it holds that

$$\sum_{t=1}^{T}(H_{t}-W)\leq\sum_{t=1}^{t_{0}-1}(H_{t}-W)+(T-t_{0}+1)(H_{t_{0}}-W).$$

Thus

$$\lim_{T\to\infty}\sum_{t=1}^{T}(H_{t}-W)=-\infty.\tag{43}$$

On the other hand, using (41), for t ≥ 1, it holds that

$$\begin{aligned}
H_{t}-W&\geq\mathbb{E}H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t})-W\\
&\overset{(a)}{\geq}\mathbb{E}L_{\beta}(X^{t+1},Y^{t},Z^{t+1})-W\\
&\geq\mathbb{E}\big[F(X^{t+1})+G(Y^{t})+\langle X^{t+1}-Y^{t},Z^{t+1}\rangle\big]-W\\
&\geq\mathbb{E}\langle X^{t+1}-Y^{t},Z^{t+1}\rangle\\
&\overset{(b)}{=}\frac{1}{\tau\beta}\mathbb{E}\langle Z^{t+1}-Z^{t},Z^{t+1}\rangle=\frac{1}{\tau\beta}\big(\mathbb{E}\|Z^{t+1}\|^{2}-\mathbb{E}\|Z^{t}\|^{2}+\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}\big)\\
&\geq\frac{1}{\tau\beta}\big(\mathbb{E}\|Z^{t+1}\|^{2}-\mathbb{E}\|Z^{t}\|^{2}\big),
\end{aligned}$$

where (a) makes use of the definition of $H_t$ and $L_{\beta}$, and (b) uses (12). Summing the above inequality over $t$ and taking $T$ to infinity, we have that

$$\begin{array}{l}{{\operatorname*{lim}_{T\to\infty}\sum_{t=1}^{T}(H_{t}-W)\geq\operatorname*{lim}_{T\to\infty}\sum_{t=1}^{T}\frac{1}{\tau\beta}(\|Z^{t+1}\|^{2}-\|Z^{t}\|^{2})}}\\ {{\ =\frac{1}{\tau\beta}\operatorname*{lim}_{T\to\infty}(\mathbb{E}\|Z^{T+1}\|^{2}-\mathbb{E}\|Z^{0}\|^{2})\geq-\frac{1}{\tau\beta}\|Z^{0}\|^{2}>-\infty,}}\end{array}$$

which contradicts (43). Therefore, $\{H_t\}$ is bounded from below. This together with (14) gives that $\{H_t\}$ is convergent.

## C.2 Details And Proofs Of Corollary 1

Thanks to Proposition 5, we have the following properties with respect to the successive changes.

Corollary 4. *Consider* (1) *and let* $(X^t, Y^t, Z^t)$ *be defined as in Proposition 4. Suppose the assumptions in Proposition 5 hold. Then the following statements hold.*

(i) *It holds that*

$$\sum_{t=0}^{T}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}+\sum_{t=0}^{T}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}\leq\frac{L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{*}}{\min\{\delta,\frac{\beta}{2}\}}\tag{44}$$

*and*

$$\sum_{t=0}^{T}\mathbb{E}\|Z^{t}-Z^{t+1}\|^{2}\leq(1+\Gamma)\frac{3(r+1)}{(L-\beta)^{2}}\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+3\|X^{0}-Y^{0}\|^{2}+2\left(\Gamma+2\Theta\frac{2r}{1-2r}\right)\frac{L_{\beta}(X^{0},Y^{0},Z^{0})+C-W}{\min\{\delta,\frac{\beta}{2}\}},\tag{45}$$

*where* $C := 2\tau\beta(\Gamma+1)\|X^{0}-Y^{0}\|^{2}+\frac{4}{(L-\beta)^{2}}\Big(\frac{L+\beta+1}{2}+2\tau\beta(\Gamma+1)+\Upsilon+\frac{(L-\beta)^{2}}{8}\Big)\|\nabla_{X}L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}$, *with* $\Theta$ *and* $\Gamma$ *being defined as in Proposition 5.*

(ii) *It holds that*

$$\lim_{t}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}=\lim_{t}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}=\lim_{t}\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}=\lim_{t}\mathbb{E}\|Y^{t}-X^{t}\|^{2}=0.\tag{46}$$

Proof. Summing (14) from $t = 1$ to $T$, it holds that

$$H_{T}\leq H_{1}-\delta\sum_{t=1}^{T}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}-\frac{\beta}{2}\sum_{t=1}^{T}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}\leq H_{1}-\delta\sum_{t=1}^{T-1}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}-\frac{\beta}{2}\sum_{t=1}^{T-1}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}.\tag{47}$$

Now we bound $H_{1}$. Note that

$$\begin{aligned}
H_{1}&=\mathbb{E}L_{\beta}(X^{1},Y^{1},Z^{1})+\frac{\Gamma}{\tau\beta}\mathbb{E}\|Z^{1}-Z^{0}\|^{2}+\Upsilon\mathbb{E}\|X^{1}-X^{0}\|^{2}\\
&\overset{(i)}{\leq}\mathbb{E}L_{\beta}(X^{1},Y^{0},Z^{1})+\frac{\Gamma}{\tau\beta}\mathbb{E}\|Z^{1}-Z^{0}\|^{2}+\Upsilon\mathbb{E}\|X^{1}-X^{0}\|^{2}\\
&\overset{(ii)}{\leq}\mathbb{E}L_{\beta}(X^{1},Y^{0},Z^{0})+\frac{\Gamma+1}{\tau\beta}\mathbb{E}\|Z^{1}-Z^{0}\|^{2}+\Upsilon\mathbb{E}\|X^{1}-X^{0}\|^{2}\\
&\overset{(iii)}{\leq}\mathbb{E}\Big[L_{\beta}(X^{0},Y^{0},Z^{0})+\nabla_{X}L_{\beta}(X^{0},Y^{0},Z^{0})^{\top}(X^{1}-X^{0})+\frac{L+\beta}{2}\|X^{1}-X^{0}\|^{2}\Big]+\tau\beta(\Gamma+1)\mathbb{E}\|X^{1}-Y^{0}\|^{2}+\Upsilon\mathbb{E}\|X^{1}-X^{0}\|^{2}\\
&\leq L_{\beta}(X^{0},Y^{0},Z^{0})+\frac{1}{2}\|\nabla_{X}L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+2\tau\beta(\Gamma+1)\|X^{0}-Y^{0}\|^{2}+\Big(\frac{L+\beta+1}{2}+2\tau\beta(\Gamma+1)+\Upsilon\Big)\mathbb{E}\|X^{1}-X^{0}\|^{2}\\
&\overset{(iv)}{\leq}L_{\beta}(X^{0},Y^{0},Z^{0})+2\tau\beta(\Gamma+1)\|X^{0}-Y^{0}\|^{2}+\frac{4}{(L-\beta)^{2}}\Big(\frac{L+\beta+1}{2}+2\tau\beta(\Gamma+1)+\Upsilon+\frac{(L-\beta)^{2}}{8}\Big)\|\nabla_{X}L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2},
\end{aligned}\tag{48}$$

where (i) uses (42), (ii) uses (35), (iii) uses the property that $L_{\beta}(X, Y, \cdot)$ is $(L+\beta)$-smooth, and (iv) uses the following inequality:

$$\mathbb{E}\|X^{1}-X^{0}\|^{2}\leq2\mathbb{E}\|X^{1}-X^{1}_{\star}\|^{2}+2\mathbb{E}\|X^{0}-X^{1}_{\star}\|^{2}\leq4\mathbb{E}\|X^{0}-X^{1}_{\star}\|^{2}\leq\frac{4}{(L-\beta)^{2}}\|\nabla_{X}L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}.$$

Thus, combining (47) and (48), we have

$$H_{T}\leq L_{\beta}(X^{0},Y^{0},Z^{0})+C-\delta\sum_{t=1}^{T-1}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}-\frac{\beta}{2}\sum_{t=1}^{T-1}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}.$$

Rearranging the above inequality, we have that

$$\delta\sum_{t=1}^{T-1}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}+\frac{\beta}{2}\sum_{t=1}^{T-1}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}\leq L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{T}\leq L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{*},\tag{49}$$

where the second inequality is because $\{H_t\}$ is nonincreasing and convergent. This implies (44).

Taking T in the above inequality to infinity, we deduce that

$$\delta\sum_{t=0}^{\infty}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}+\frac{\beta}{2}\sum_{t=0}^{\infty}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}<\infty.$$

where the last inequality is because $\{H_t\}$ is convergent. Therefore, $\{\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\}$ and $\{\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}\}$ are summable and

$$\lim_{t}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}=\lim_{t}\mathbb{E}\|Y^{t+1}-Y^{t}\|^{2}=0.\tag{50}$$

In addition, summing (37) from t = 1 to T, we have that

$$\begin{aligned}
\sum_{t=0}^{T}\mathbb{E}\|Z^{t}-Z^{t+1}\|^{2}
&\leq(1+\Gamma)\|Z^{0}-Z^{1}\|^{2}+\Theta\sum_{t=1}^{T}\mathbb{E}\|e^{t}-e^{t+1}\|^{2}+\Gamma\sum_{t=1}^{T}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\\
&\leq(1+\Gamma)\|Z^{0}-Z^{1}\|^{2}+2\Theta\frac{2r}{1-2r}\sum_{t=1}^{T}\mathbb{E}\|X^{t}-X^{t-1}\|^{2}+\Big(\Gamma+2\Theta\frac{2r}{1-2r}\Big)\sum_{t=0}^{T}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\\
&\leq(1+\Gamma)\|Z^{0}-Z^{1}\|^{2}+2\Big(\Gamma+2\Theta\frac{2r}{1-2r}\Big)\sum_{t=0}^{T}\mathbb{E}\|X^{t}-X^{t+1}\|^{2}\\
&\leq(1+\Gamma)\|Z^{0}-Z^{1}\|^{2}+2\Big(\Gamma+2\Theta\frac{2r}{1-2r}\Big)\frac{L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{*}}{\min\{\delta,\frac{\beta}{2}\}},
\end{aligned}\tag{51}$$

where the second inequality uses (32). Recalling the definition of $Z^{1}$, we have that
$$\mathbb{E}\|Z^{1}-Z^{0}\|^{2}=\mathbb{E}\|X^{1}-Y^{0}\|^{2}\leq3\mathbb{E}\|X^{1}-X^{1}_{\star}\|^{2}+3\|X^{1}_{\star}-X^{0}\|^{2}+3\|X^{0}-Y^{0}\|^{2}$$ $$\leq3r\|X^{0}-X^{1}_{\star}\|^{2}+3\|X^{1}_{\star}-X^{0}\|^{2}+3\|X^{1}_{\star}-Y^{0}\|^{2}$$ $$\leq\frac{3(r+1)}{(L-\beta)^{2}}\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+3\|X^{0}-Y^{0}\|^{2}.$$
This together with (51) gives

$$\sum_{t=0}^{T}\mathbb{E}\|Z^{t}-Z^{t+1}\|^{2}\leq(1+\Gamma)\frac{3(r+1)}{(L-\beta)^{2}}\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+3\|X^{0}-Y^{0}\|^{2}\tag{52}$$ $$+2\left(\Gamma+2\Theta\frac{2r}{1-2r}\right)\frac{L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{\star}}{\min\{\delta,\frac{\beta}{2}\}}.$$

Taking $T$ in the above inequality to infinity, we deduce that $\{\mathbb{E}\|Z^{t}-Z^{t+1}\|^{2}\}$ is summable and, using (12), we have that
$$\operatorname*{lim}\mathbb{E}\|Y^{t}-X^{t+1}\|^{2}=\operatorname*{lim}_{t}\mathbb{E}\|Z^{t}-Z^{t+1}\|^{2}=0.$$

This together with (50) gives that

$$\operatorname*{lim}\mathbb{E}\|Y^{t}-X^{t}\|^{2}=0.$$

## C.3 Details And Proofs Of Theorem 1

Here, we prove Theorem 1.

Theorem 3. *Consider* (1). *Let* $\{(x_1^t, \ldots, x_p^t, y^t, z_1^t, \ldots, z_p^t)\}$ *be generated by Algorithm 1. Let* $(X^t, Y^t, Z^t)$ *be defined as in Proposition 4. Suppose the assumptions in Proposition 5 hold. Then the following statements hold.*

(i) *There exists* $E > 0$ *such that*

$$\|\nabla F(Y^{t+1})+\xi^{t+1}\|\leq E\left(\|X^{t+1}-X^{t}\|+\|Z^{t+1}-Z^{t}\|+\|Y^{t}-Y^{t+1}\|\right),\tag{53}$$

*where* $\xi^{t+1}\in\partial G(Y^{t+1})$.
(ii) *It holds that*

$$\begin{aligned}
\frac{1}{1+T}\sum_{t=0}^{T}\mathbb{E}\,d^{2}(0,\nabla F(Y^{t+1})+\partial G(Y^{t+1}))
&\leq\frac{1}{T+1}R\left((1+\Gamma)\frac{3(r+1)}{(L-\beta)^{2}}\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+3\|X^{0}-Y^{0}\|^{2}\right)\\
&\quad+\frac{1}{T+1}R\left(2\Gamma+\Theta\frac{8r}{1-2r}+2\right)\frac{L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{*}}{\min\{\delta,\frac{\beta}{2}\}},
\end{aligned}$$

*where* $\Gamma$ *and* $\Theta$ *are defined in Proposition 5,* $H_{*}$ *and* $C$ *are defined in Proposition 5 and Corollary 4, respectively, and* $R := \max\{3(L+\beta)^{2}\frac{2r}{1-2r},\,(\frac{L}{\tau\beta}+1)^{2},\,(L+\beta)^{2}\}$.

Proof. Using (29), it holds that

$$0=\nabla F(Y^{t+1})+\nabla F(X_{\star}^{t+1})-\nabla F(Y^{t+1})+Z^{t}+\beta(X_{\star}^{t+1}-Y^{t}).$$

Summing this with (30), we have that

$0=\nabla F(Y^{t+1})+\xi^{t+1}+\nabla F(X^{t+1}_{*})-\nabla F(Y^{t+1})+Z^{t}-Z^{t+1}+\beta(X^{t+1}_{*}-X^{t+1})-\beta(Y^{t+1}-Y^{t})$.  
This implies that

$$\begin{aligned}
\|\nabla F(Y^{t+1})+\xi^{t+1}\|
&\leq\|\nabla F(X_{\star}^{t+1})-\nabla F(Y^{t+1})\|+\|Z^{t}-Z^{t+1}\|+\beta\|X_{\star}^{t+1}-X^{t+1}\|+\beta\|Y^{t+1}-Y^{t}\|\\
&\leq L\|X_{\star}^{t+1}-Y^{t+1}\|+\|Z^{t}-Z^{t+1}\|+\beta\|X_{\star}^{t+1}-X^{t+1}\|+\beta\|Y^{t+1}-Y^{t}\|\\
&\leq L\|X_{\star}^{t+1}-X^{t+1}\|+L\|X^{t+1}-Y^{t}\|+(L+\beta)\|Y^{t}-Y^{t+1}\|+\|Z^{t}-Z^{t+1}\|+\beta\|X_{\star}^{t+1}-X^{t+1}\|\\
&=(L+\beta)\|X_{\star}^{t+1}-X^{t+1}\|+\Big(\frac{L}{\tau\beta}+1\Big)\|Z^{t+1}-Z^{t}\|+(L+\beta)\|Y^{t}-Y^{t+1}\|,
\end{aligned}\tag{54}$$
where the last equality uses (12). Using (32), we have that $\mathbb{E}\|X_{\star}^{t+1}-X^{t+1}\|^{2}\leq\sqrt{\frac{2r}{1-2r}}\,\mathbb{E}\|X^{t+1}-X^{t}\|^{2}$.

Using this, (54) can be further passed to

$$\mathbb{E}\|\nabla F(Y^{t+1})+\xi^{t+1}\|^2\leq(L+\beta)\sqrt{\frac{2r}{1-2r}}\,3\mathbb{E}\|X^{t+1}-X^t\|^2+\left(\frac{L}{\tau\beta}+1\right)3\mathbb{E}\|Z^{t+1}-Z^t\|^2+(L+\beta)\,3\mathbb{E}\|Y^t-Y^{t+1}\|^2.$$
This together with Cauchy-Schwarz inequality, we have that

$$\mathbb{E}\|\nabla F(Y^{t+1})+\xi^{t+1}\|^2\leq3(L+\beta)^2\frac{2r}{1-2r}\mathbb{E}\|X^{t+1}-X^t\|^2+\left(\frac{L}{\tau\beta}+1\right)^2\mathbb{E}\|Z^{t+1}-Z^t\|^2+(L+\beta)^2\mathbb{E}\|Y^t-Y^{t+1}\|^2.\tag{55}$$
This proves (53). Summing the above inequality from t = 0 to T, it holds that

$$\begin{aligned}
\sum_{t=0}^{T}\mathbb{E}\|\nabla F(Y^{t+1})+\xi^{t+1}\|^{2}
&\leq3(L+\beta)^{2}\frac{2r}{1-2r}\sum_{t=0}^{T}\mathbb{E}\|X^{t+1}-X^{t}\|^{2}+\Big(\frac{L}{\tau\beta}+1\Big)^{2}\sum_{t=0}^{T}\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}+(L+\beta)^{2}\sum_{t=0}^{T}\mathbb{E}\|Y^{t}-Y^{t+1}\|^{2}\\
&\leq\max\Big\{3(L+\beta)^{2}\frac{2r}{1-2r},\Big(\frac{L}{\tau\beta}+1\Big)^{2},(L+\beta)^{2}\Big\}\cdot\sum_{t=0}^{T}\Big(\mathbb{E}\|X^{t+1}-X^{t}\|^{2}+\mathbb{E}\|Y^{t}-Y^{t+1}\|^{2}+\mathbb{E}\|Z^{t+1}-Z^{t}\|^{2}\Big)\\
&\leq\max\Big\{3(L+\beta)^{2}\frac{2r}{1-2r},\Big(\frac{L}{\tau\beta}+1\Big)^{2},(L+\beta)^{2}\Big\}\cdot\Big((1+\Gamma)\frac{3(r+1)}{(L-\beta)^{2}}\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+3\|X^{0}-Y^{0}\|^{2}\\
&\qquad+\Big(2\Gamma+\Theta\frac{8r}{1-2r}+2\Big)\frac{L_{\beta}(X^{0},Y^{0},Z^{0})+C-H_{*}}{\min\{\delta,\frac{\beta}{2}\}}\Big),
\end{aligned}$$
where $C:=2\tau\beta(\Gamma+1)\|X^{0}-Y^{0}\|^{2}+\frac{4}{(L-\beta)^{2}}\big(\frac{L+\beta+1}{2}+2\tau\beta(\Gamma+1)+\Upsilon+\frac{(L-\beta)^{2}}{8}\big)\|\nabla_{X}L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}$, and the last inequality uses (44) and (45). Dividing both sides by $T+1$ and recalling $\xi^{t+1}\in\partial G(Y^{t+1})$, we obtain the conclusion. Grouping the constants of $\|X^{0}-Y^{0}\|^{2}$, $\|\nabla_{X}L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}$ and $L_{\beta}(X^{0},Y^{0},Z^{0})$, we have that

$$\sum_{t=0}^{T}\mathbb{E}\|\nabla F(Y^{t+1})+\xi^{t+1}\|^{2}\tag{56}$$ $$\leq D\left(\|\nabla L_{\beta}(X^{0},Y^{0},Z^{0})\|^{2}+\|X^{0}-Y^{0}\|^{2}+L_{\beta}(X^{0},Y^{0},Z^{0})-W\right),$$

where

$$D:=\max\Big\{3(L+\beta)^{2}\frac{2r}{1-2r},\Big(\frac{L}{\tau\beta}+1\Big)^{2},(L+\beta)^{2}\Big\}\cdot\max\{D_{1},D_{2},D_{3}\}\tag{57}$$

with $D_{1}:=\frac{2\Gamma+\Theta\frac{8r}{1-2r}+2}{\min\{\delta,\frac{\beta}{2}\}}$, $D_{2}:=(1+\Gamma)\frac{3(r+1)}{(L-\beta)^{2}}+D_{1}\frac{4}{(L-\beta)^{2}}\Big(\frac{L+\beta+1}{2}+2\tau\beta(\Gamma+1)+\Upsilon+\frac{(L-\beta)^{2}}{8}\Big)$ and $D_{3}:=\max\{3,\,D_{1}\,2\tau\beta(\Gamma+1)\}$. $\square$

## C.3.1 Proofs Of Proposition 6 And Corollary 3

We provide the detailed version of Proposition 6 as follows.

Proposition 9. *Consider* (1). *Let* $\{(x_1^t, \ldots, x_p^t, y^t, z_1^t, \ldots, z_p^t)\}$ *be generated by Algorithm 1. Let* $(X^t, Y^t, Z^t)$ *be defined as in Proposition 4. Suppose the assumptions in Proposition 5 hold. Suppose* $\{(X^t, Y^t, Z^t)\}$ *is bounded and denote the set of accumulation points of* $\{(X^t, Y^t, Z^t, X^{t-1}, Z^{t-1})\}$ *as* $\Omega$*. The following statements hold:*

(i) $\lim_t d((X^t, Y^t, Z^t, X^{t-1}, Z^{t-1}), \Omega) = 0$.

(ii) *Any accumulation point of* $\{Y^t\}$ *is a stationary point of* (1).

(iii) $H \equiv H_*$ *on* $\Omega$.

Proof. For (ii), let $Y^{*}$ be an accumulation point of $\{Y^{t}\}$ with $Y^{t_i} \to Y^{*}$. Using (29) and (30), there exists $\xi^{t_i} \in \partial G(Y^{t_i})$ such that

$$0=\nabla F(X_{\star}^{t_{i}})+Z^{t_{i}-1}+\beta(X_{\star}^{t_{i}}-Y^{t_{i}-1})=\nabla F(Y^{t_{i}})+\nabla F(X_{\star}^{t_{i}})-\nabla F(Y^{t_{i}})+Z^{t_{i}-1}+\beta(X_{\star}^{t_{i}}-Y^{t_{i}-1})$$

and

$$0=\xi^{t_{i}}-Z^{t_{i}}-\beta(X^{t_{i}}-Y^{t_{i}}).$$

The above relations show that

$$\begin{aligned}
0&=\nabla F(Y^{t_{i}})+\xi^{t_{i}}+\nabla F(X_{\star}^{t_{i}})-\nabla F(Y^{t_{i}})+Z^{t_{i}-1}-Z^{t_{i}}+\beta(X_{\star}^{t_{i}}-Y^{t_{i}-1})-\beta(X^{t_{i}}-Y^{t_{i}})\\
&=\nabla F(Y^{t_{i}})+\xi^{t_{i}}+\nabla F(X_{\star}^{t_{i}})-\nabla F(Y^{t_{i}})+\tau\beta(X^{t_{i}}-Y^{t_{i}-1})+\beta(X_{\star}^{t_{i}}-Y^{t_{i}-1})-\beta(X^{t_{i}}-Y^{t_{i}}),
\end{aligned}\tag{58}$$
where the second equality makes use of (12). Now we show that $\lim_i \|X_{\star}^{t_i} - X^{t_i}\| = 0$. Using Proposition 7 and (11), we have that

$$\|e^{t}\|^{2}=\|X_{\star}^{t}-X^{t}\|^{2}\leq{\frac{2r}{1-2r}}\|X^{t}-X^{t-1}\|^{2}.$$

Since $\lim_t \|X^{t}-X^{t-1}\| = 0$, we have that

$$\lim_{i}\|X_{\star}^{t_{i}}-X^{t_{i}}\|=0.\tag{59}$$
Next, we show that $\lim_t \|X^{t}-Y^{t-1}\| = 0$. Using (12), it holds that

$$\begin{aligned}
\left\|Z^{t}-Z^{t-1}\right\|^{2}
&\leq\Gamma\left(\left\|Z^{t-2}-Z^{t-1}\right\|^{2}-\left\|Z^{t}-Z^{t-1}\right\|^{2}\right)+\Theta\|e^{t-1}-e^{t}\|^{2}+\Lambda\left\|X^{t-1}-X^{t}\right\|^{2}\\
&\leq\Gamma\left(\left\|Z^{t-2}-Z^{t-1}\right\|^{2}-\left\|Z^{t}-Z^{t-1}\right\|^{2}\right)+\Theta\frac{4r}{1-2r}\|X^{t-1}-X^{t-2}\|^{2}+\Big(\Lambda+\frac{4r}{1-2r}\Big)\left\|X^{t-1}-X^{t}\right\|^{2},
\end{aligned}$$

where the first inequality uses (37) and the second inequality is due to (32). Summing the above inequality
from t = 1 to T, we have that
$$\begin{aligned}
\sum_{t=1}^{T}\left\|Z^{t}-Z^{t-1}\right\|^{2}
&\leq\Gamma\big(\|Z^{t_{1}-2}-Z^{t_{1}-1}\|^{2}-\|Z^{T}-Z^{T-1}\|^{2}\big)+\Theta\frac{4r}{1-2r}\sum_{t=1}^{T}\|X^{t-1}-X^{t-2}\|^{2}+\Big(\Lambda+\frac{4r}{1-2r}\Big)\sum_{t=1}^{T}\left\|X^{t-1}-X^{t}\right\|^{2}\\
&\leq\Gamma\|Z^{t_{1}-2}-Z^{t_{1}-1}\|^{2}+\Theta\frac{4r}{1-2r}\sum_{t=1}^{T}\|X^{t-1}-X^{t-2}\|^{2}+\Big(\Lambda+\frac{4r}{1-2r}\Big)\sum_{t=1}^{T}\left\|X^{t-1}-X^{t}\right\|^{2}.
\end{aligned}$$

Taking $T$ in the above inequality to infinity and recalling that $\{\|X^{t-1}-X^{t}\|^{2}\}$ is summable, we deduce that $\sum_{t=1}^{\infty}\|Z^{t}-Z^{t-1}\|^{2}<\infty$. This together with (12) shows that

$$\lim_{t}\|X^{t}-Y^{t-1}\|={\frac{1}{\tau\beta}}\lim_{t}\|Z^{t}-Z^{t-1}\|=0.\tag{60}$$
Next, we show that $\lim_t \|Y^{t}-Y^{t-1}\| = 0$. Using (12) again, we have that

$$Y^{t}-Y^{t-1}=X^{t+1}-X^{t}-\frac{1}{\tau\beta}(Z^{t+1}-Z^{t})-\frac{1}{\tau\beta}(Z^{t}-Z^{t-1}).$$

This together with the fact that $\lim_t \|X^{t}-X^{t-1}\| = \lim_t \|Z^{t}-Z^{t-1}\| = 0$ implies that $\lim_t \|Y^{t}-Y^{t-1}\| = 0$.

Since $Y^{t_i} \to Y^{*}$, combining (59), (60) and (46), we have that

$$\lim_{i}Y^{t_{i}-1}=\lim_{i}X_{\star}^{t_{i}}=\lim_{i}X^{t_{i}}=\lim_{i}Y^{t_{i}}=Y^{*}.$$
This together with the continuity of $\nabla F$, the closedness of $\partial G$ and (58) shows that

$$0\in\nabla F(Y^{*})+\partial G(Y^{*}).$$

This completes the proof of (ii).

Now we prove (iii). Fix any $(X^{*}, Y^{*}, Z^{*}, \bar{X}^{*}, \bar{Z}^{*}) \in \Omega$. Then there exists $\{t_i\}_i$ such that $(X^{t_i}, Y^{t_i}, Z^{t_i}, X^{t_i-1}, Z^{t_i-1})$ converges to $(X^{*}, Y^{*}, Z^{*}, \bar{X}^{*}, \bar{Z}^{*})$. Thanks to Proposition 5 (ii), we know that

$$H_{*}=\lim_{i}H(X^{t_{i}},Y^{t_{i}},Z^{t_{i}},X^{t_{i}-1},Z^{t_{i}-1})\tag{61}$$
and

$$H(X^{*},Y^{*},Z^{*},\bar{X}^{*},\bar{Z}^{*})=L_{\beta}(X^{*},Y^{*},Z^{*})=F(X^{*})+G(Y^{*})+\langle X^{*}-Y^{*},Z^{*}\rangle+\frac{\beta}{2}\|X^{*}-Y^{*}\|^{2}.\tag{62}$$

Since $Y^{t_i}$ is the minimizer of (13), it holds that

$$G(Y^{t_{i}})+\left\langle X^{t_{i}}-Y^{t_{i}},Z^{t_{i}}\right\rangle+{\frac{\beta}{2}}\|X^{t_{i}}-Y^{t_{i}}\|^{2}\leq G(Y^{*})+\left\langle X^{t_{i}}-Y^{*},Z^{t_{i}}\right\rangle+{\frac{\beta}{2}}\|X^{t_{i}}-Y^{*}\|^{2}.$$

Taking $i$ in the above inequality to infinity, we have that

$$\limsup_{i}G(Y^{t_{i}})+\langle X^{*}-Y^{*},Z^{*}\rangle+\frac{\beta}{2}\|X^{*}-Y^{*}\|^{2}$$ $$=\limsup_{i}G(Y^{t_{i}})+\langle X^{t_{i}}-Y^{t_{i}},Z^{t_{i}}\rangle+\frac{\beta}{2}\|X^{t_{i}}-Y^{t_{i}}\|^{2}$$ $$\leq G(Y^{*})+\langle X^{*}-Y^{*},Z^{*}\rangle+\frac{\beta}{2}\|X^{*}-Y^{*}\|^{2}.$$

This together with the closedness of $G$ shows that $\lim_i G(Y^{t_i}) = G(Y^{*})$. Combining this with the continuity of $F$, Corollary 4 (ii) and (61), we obtain that

$$H_{*}=\lim_{i}H(X^{t_{i}},Y^{t_{i}},Z^{t_{i}},X^{t_{i}-1},Z^{t_{i}-1})=F(X^{*})+G(Y^{*})+\langle X^{*}-Y^{*},Z^{*}\rangle+\frac{\beta}{2}\|X^{*}-Y^{*}\|^{2}=H(X^{*},Y^{*},Z^{*},\bar{X}^{*},\bar{Z}^{*}),$$
where the second equality uses (62).

Corollary 3. *Let* $\{(x_1^t, \ldots, x_p^t, y^t, z_1^t, \ldots, z_p^t)\}$ *be generated by Algorithm 1 with* (9) *holding deterministically. Let* $(X^t, Y^t, Z^t)$ *be defined as in Proposition 4. Suppose the assumptions in Proposition 6 hold. Then any accumulation point of* $\{y^t\}$ *is a stationary point of* (1).

Proof. From Proposition 2, we know that $Y^{t} = (y^{t}, \ldots, y^{t})$ for any $t$. Let $y^{*}$ be any accumulation point of $\{y^{t}\}$. Then $Y^{*} = (y^{*}, \ldots, y^{*})$ is an accumulation point of $\{Y^{t}\}$. Proposition 6 shows that $Y^{*}$ is a stationary point of (3). Applying Proposition 1, we deduce that $y^{*}$ is a stationary point of (1). $\square$

## C.3.2 Details And Proofs For Theorem 2

To show the global convergence of the generated sequence, we first need to bound the subdifferential $\partial H(X^{t+1}, Y^{t+1}, Z^{t+1}, X^{t}, Z^{t})$.


Lemma 1. *Consider* (1). *Let* $\{(x_1^t, \ldots, x_p^t, y^t, z_1^t, \ldots, z_p^t)\}$ *be generated by Algorithm 1. Let* $(X^t, Y^t, Z^t)$ *be defined as in Proposition 4. Suppose* (9) *is satisfied deterministically (satisfied without expectation). Suppose the assumptions in Proposition 5 hold. Then there exists* $D > 0$ *such that*

$$d(0,\partial H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t}))\leq D\left(\|X^{t+1}-X^{t}\|+\|Y^{t+1}-Y^{t}\|+\|Z^{t+1}-Z^{t}\|\right).$$

Proof. Using Exercise 8.8, Proposition 10.5 and Corollary 10.9 of Rockafellar & Wets (1998), it holds that

$$\partial H(X,Y,Z,X^{\prime},Z^{\prime})\supseteq\begin{pmatrix}\nabla F(X)\\ \partial G(Y)\\ 0\\ 0\\ 0\end{pmatrix}+\begin{pmatrix}Z+\beta(X-Y)+\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X-X^{\prime})\\ -Z-\beta(X-Y)\\ X-Y+\frac{2\Gamma}{\tau\beta}(Z-Z^{\prime})\\ -\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X-X^{\prime})\\ -\frac{2\Gamma}{\tau\beta}(Z-Z^{\prime})\end{pmatrix}.$$

Thus,

$$\partial H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t})\supseteq\begin{pmatrix}\nabla F(X^{t+1})+Z^{t+1}+\beta(X^{t+1}-Y^{t+1})+\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X^{t+1}-X^{t})\\ \partial G(Y^{t+1})-Z^{t+1}-\beta(X^{t+1}-Y^{t+1})\\ X^{t+1}-Y^{t+1}+\frac{2\Gamma}{\tau\beta}(Z^{t+1}-Z^{t})\\ -\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X^{t+1}-X^{t})\\ -\frac{2\Gamma}{\tau\beta}(Z^{t+1}-Z^{t})\end{pmatrix}\supseteq\begin{pmatrix}\nabla F(X^{t+1})+Z^{t+1}+\beta(X^{t+1}-Y^{t+1})+\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X^{t+1}-X^{t})\\ 0\\ X^{t+1}-Y^{t+1}+\frac{2\Gamma}{\tau\beta}(Z^{t+1}-Z^{t})\\ -\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X^{t+1}-X^{t})\\ -\frac{2\Gamma}{\tau\beta}(Z^{t+1}-Z^{t})\end{pmatrix},\tag{63}$$

where the second inclusion follows from (30).

Now, we bound each coordinate on the right-hand side of the above relation. For the first one, we denote $\mathcal{A}^{t+1} := \nabla F(X^{t+1}) + Z^{t+1} + \beta(X^{t+1}-Y^{t+1}) + \frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X^{t+1}-X^{t})$. Using (29), we have that

$$\begin{array}{l}{{{\mathcal{A}}^{t+1}\ni\nabla F(X^{t+1})-\nabla F(X_{\star}^{t+1})+(Z^{t+1}-Z^{t})}}\\ {{{}}}\\ {{{}+\beta(X^{t+1}-Y^{t+1}-X_{\star}^{t+1}+Y^{t})+\frac{\Theta}{\tau\beta}\frac{16r}{1-2r}(X^{t+1}-X^{t}).}}\end{array}$$

Thus, we deduce that $d^{2}(0, \mathcal{A}^{t+1})$ is bounded above by

$$4(L+\beta)^{2}\|X^{t+1}-X_{\star}^{t+1}\|^{2}+4\|Z^{t+1}-Z^{t}\|^{2}+4\beta^{2}\|Y^{t}-Y^{t+1}\|^{2}+\frac{4\Theta^{2}}{\tau^{2}\beta^{2}}\frac{64r^{2}}{(1-2r)^{2}}\|X^{t+1}-X^{t}\|^{2},\tag{64}$$

where we also make use of the Lipschitz continuity of $\nabla F$.

For the third coordinate in (63), using (12), it holds that

$$\left\|X^{t+1}-Y^{t+1}+\frac{2\Gamma}{\tau\beta}(Z^{t+1}-Z^{t})\right\|^{2}=\left\|\frac{1}{\tau\beta}(Z^{t+1}-Z^{t})+Y^{t}-Y^{t+1}+\frac{2\Gamma}{\tau\beta}(Z^{t+1}-Z^{t})\right\|^{2}$$ $$\leq2\|Y^{t}-Y^{t+1}\|^{2}+\frac{(1+2\Gamma)^{2}}{\tau^{2}\beta^{2}}\|Z^{t+1}-Z^{t}\|^{2}$$

Combining this with (63) and (64), we deduce that

$$\begin{aligned}
&d^{2}(0,\partial H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t}))\\
&\leq4(L+\beta)^{2}\|X^{t+1}-X_{\star}^{t+1}\|^{2}+4\|Z^{t+1}-Z^{t}\|^{2}+4\beta^{2}\|Y^{t}-Y^{t+1}\|^{2}+\frac{4\Theta^{2}}{\tau^{2}\beta^{2}}\frac{64\cdot4r^{2}}{(1-2r)^{2}}\|X^{t+1}-X^{t}\|^{2}\\
&\quad+2\|Y^{t}-Y^{t+1}\|^{2}+\frac{(1+2\Gamma)^{2}}{\tau^{2}\beta^{2}}\|Z^{t+1}-Z^{t}\|^{2}+\frac{\Theta^{2}}{\tau^{2}\beta^{2}}\frac{64\cdot4r^{2}}{(1-2r)^{2}}\|X^{t+1}-X^{t}\|^{2}+\frac{4\Gamma^{2}}{\tau^{2}\beta^{2}}\|Z^{t+1}-Z^{t}\|^{2}.\tag{65}
\end{aligned}$$
Note that, using (32), we have that

$$\|X^{t+1}-X^{t+1}_{*}\|^{2}\leq\frac{2r}{1-2r}\|X^{t+1}-X^{t}\|^{2}.\tag{66}$$

Combining (65) with (66), we have that

$$\begin{aligned}
&d^{2}(0,\partial H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t}))\\
&\leq4(L+\beta)^{2}\frac{2r}{1-2r}\|X^{t+1}-X^{t}\|^{2}+4\|Z^{t+1}-Z^{t}\|^{2}+4\beta^{2}\|Y^{t}-Y^{t+1}\|^{2}+\frac{4\Theta^{2}}{\tau^{2}\beta^{2}}\frac{64\cdot4r^{2}}{(1-2r)^{2}}\|X^{t+1}-X^{t}\|^{2}\\
&\quad+2\|Y^{t}-Y^{t+1}\|^{2}+\frac{(1+2\Gamma)^{2}}{\tau^{2}\beta^{2}}\|Z^{t+1}-Z^{t}\|^{2}+\frac{\Theta^{2}}{\tau^{2}\beta^{2}}\frac{64\cdot4r^{2}}{(1-2r)^{2}}\|X^{t+1}-X^{t}\|^{2}+\frac{4\Gamma^{2}}{\tau^{2}\beta^{2}}\|Z^{t+1}-Z^{t}\|^{2}\\
&\leq D^{\prime}\big(\|X^{t+1}-X^{t}\|^{2}+\|Y^{t}-Y^{t+1}\|^{2}+\|Z^{t+1}-Z^{t}\|^{2}\big),
\end{aligned}$$
where $D^{\prime}$ is the maximum of the coefficients of $\|X^{t+1}-X^{t}\|^{2}$, $\|Y^{t}-Y^{t+1}\|^{2}$ and $\|Z^{t+1}-Z^{t}\|^{2}$ on the right-hand side of the above inequality. Finally, using the fact that $\sum_{i=1}^{3}a_{i}^{2}\leq\big(\sum_{i=1}^{3}a_{i}\big)^{2}$ for any $a_{1}, a_{2}, a_{3}\geq0$, the above inequality can be further passed to

$$d^{2}(0,\partial H(X^{t+1},Y^{t+1},Z^{t+1},X^{t},Z^{t}))\leq D^{\prime}\left(\|X^{t+1}-X^{t}\|+\|Y^{t}-Y^{t+1}\|+\|Z^{t+1}-Z^{t}\|\right)^{2}.$$

Taking the square root on both sides of the above inequality, we obtain the conclusion (with $D = \sqrt{D^{\prime}}$). $\square$

Now we are ready to prove Theorem 2. In fact, we have already established the key properties that will be needed: Proposition 5, Corollary 4, Proposition 9 and Lemma 1. The remaining steps are routine. We follow the proofs in Borwein et al. (2017); Bolte et al. (2014); Li & Pong (2016) and include them only for completeness.

Theorem 2. *Consider* (1) *and Algorithm 1 with* (9) *holding deterministically. Let* $(X^t, Y^t, Z^t)$ *be defined as in Proposition 4. Suppose the assumptions in Proposition 5 hold. Let* $H$ *be defined as in Proposition 5 and suppose* $H$ *is a KL function with exponent* $\alpha \in [0, 1)$*. Then* $\{(X^t, Y^t, Z^t)\}$ *converges globally. Denoting* $(X^{*}, Y^{*}, Z^{*}) := \lim_t (X^t, Y^t, Z^t)$ *and* $d_s^t := \|(X^t, Y^t, Z^t)-(X^{*}, Y^{*}, Z^{*})\|$*, the following hold. If* $\alpha = 0$*, then* $\{d_s^t\}$ *converges finitely. If* $\alpha \in (0, \frac{1}{2}]$*, then there exist* $b > 0$*,* $t_1 \in \mathbb{N}$ *and* $\rho_1 \in (0, 1)$ *such that* $d_s^t \leq b\rho_1^t$ *for* $t \geq t_1$*. If* $\alpha \in (\frac{1}{2}, 1)$*, then there exist* $t_2$ *and* $c > 0$ *such that* $d_s^t \leq ct^{-\frac{1}{4\alpha-2}}$ *for* $t \geq t_2$.

Proof. We first show that $\{(X^t, Y^t, Z^t)\}$ is convergent. Suppose first that there exists $t_0$ such that $H_{t_0} = H_*$. Since $\{H_t\}$ is nonincreasing thanks to (14), we deduce that $H_t = H_*$ for all $t \geq t_0$. Using (14) again, we have that for all $t \geq t_0$, $X^t = X^{t-1} = \cdots = X^{t_0-1}$ and $Y^t = Y^{t-1} = \cdots = Y^{t_0}$. Recalling from (46) that $\lim_t (X^t - Y^t) = 0$, we have that $Y^{t_0} = X^{t_0-1}$. Thus, $X^{t+1} - Y^t = Y^{t_0} - X^{t_0-1} = 0$ for all $t \geq t_0$. This together with (12) shows that $Z^{t+1} = Z^t = \cdots = Z^{t_0}$ for all $t \geq t_0$. Therefore, when there exists $t_0$ such that $H_{t_0} = H_*$, $\{(X^t, Y^t, Z^t)\}$ converges finitely.

Next, we consider the case where $H_t > H_*$ for all $t$. Thanks to Proposition 9 (iii) and Lemma 6 of Bolte et al. (2014), there exist $r > 0$, $a > 0$ and $\psi \in \Psi_a$ such that

$$\psi^{\prime}(H(X,Y,Z,X^{\prime},Z^{\prime})-H_{*})\,d(0,\partial H(X,Y,Z,X^{\prime},Z^{\prime}))\geq1$$

whenever $d((X, Y, Z, X^{\prime}, Z^{\prime}), \Omega) \leq r$ and $H_* < H(X, Y, Z, X^{\prime}, Z^{\prime}) < H_* + a$. Thanks to Corollary 4 and Theorem 5, we know that there exists $t_1$ such that for $t > t_1$, $d((X^t, Y^t, Z^t, X^{t-1}, Z^{t-1}), \Omega) \leq r$ and $H_* < H(X^t, Y^t, Z^t, X^{t-1}, Z^{t-1}) < H_* + a$. Thus, it holds that

$$\psi^{\prime}(H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1})-H_{*})\,d(0,\partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1}))\geq1.\tag{67}$$

Recalling (14) and using the concavity of $\psi$ together with the above inequality, we have that

$$\begin{aligned}
\delta\|X^{t+1}-X^{t}\|^{2}+\frac{\beta}{2}\|Y^{t+1}-Y^{t}\|^{2}&\leq H_{t}-H_{t+1}\\
&\leq\psi^{\prime}(H_{t}-H_{*})\,d(0,\partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1}))\left(H_{t}-H_{t+1}\right)\\
&\leq\Delta_{\psi}^{t+1}\,d(0,\partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1})),
\end{aligned}\tag{68}$$

where the second inequality uses (67) and the last inequality uses the concavity of $\psi$. Using Lemma 1, we have from (68) that

$$\begin{aligned}
\frac{1}{2}\min\{\delta,\tfrac{\beta}{2}\}\left(\|X^{t+1}-X^{t}\|+\|Y^{t+1}-Y^{t}\|\right)^{2}&\leq\min\{\delta,\tfrac{\beta}{2}\}\left(\|X^{t+1}-X^{t}\|^{2}+\|Y^{t+1}-Y^{t}\|^{2}\right)\\
&\leq\delta\|X^{t+1}-X^{t}\|^{2}+\frac{\beta}{2}\|Y^{t+1}-Y^{t}\|^{2}\\
&\leq\Delta_{\psi}^{t+1}D\left(\|X^{t}-X^{t-1}\|+\|Y^{t}-Y^{t-1}\|+\|Z^{t}-Z^{t-1}\|\right),
\end{aligned}\tag{69}$$
where the first inequality uses the fact that $\frac{1}{2}(a+b)^{2} \leq a^{2}+b^{2}$ for any $a, b \in \mathbb{R}$. Now we bound $\|Z^{t}-Z^{t-1}\|$. Using (31), we have that

$$\begin{aligned}
\|Z^{t+1}-Z^{t}\|&\leq|1-\tau|\,\|Z^{t}-Z^{t-1}\|+\beta\tau\|e^{t+1}-e^{t}\|+\tau\|\nabla F(X_{\star}^{t+1})-\nabla F(X_{\star}^{t})\|\\
&\leq|1-\tau|\,\|Z^{t}-Z^{t-1}\|+\beta\tau\|e^{t+1}-e^{t}\|+\tau L\|X_{\star}^{t+1}-X_{\star}^{t}\|\\
&\leq|1-\tau|\,\|Z^{t}-Z^{t-1}\|+(\beta+L)\tau\|e^{t+1}-e^{t}\|+\tau L\|X^{t+1}-X^{t}\|\\
&\leq|1-\tau|\,\|Z^{t}-Z^{t-1}\|+(\beta+L)\tau\frac{4}{(\beta-L)^{2}}\|X^{t}-X^{t-1}\|+\tau L\|X^{t+1}-X^{t}\|,
\end{aligned}$$

where the second inequality uses the definition of $e^{t}$ and the last inequality uses (32). Rearranging the above inequality, it holds that

$$\|Z^{t}-Z^{t-1}\|\leq\frac{1+|1-\tau|}{1-|1-\tau|}\left(\|Z^{t}-Z^{t-1}\|-\|Z^{t}-Z^{t+1}\|\right)-\|Z^{t}-Z^{t+1}\|$$ $$+\frac{2}{1-|1-\tau|}(\beta+L)\tau\frac{4}{(\beta-L)^{2}}\|X^{t}-X^{t-1}\|+\frac{2}{1-|1-\tau|}\tau L\|X^{t+1}-X^{t}\|.$$

Plugging this bound into (69), we have that

$$\begin{aligned}
&\frac{1}{2}\min\{\delta,\tfrac{\beta}{2}\}\left(\|X^{t+1}-X^{t}\|+\|Y^{t+1}-Y^{t}\|\right)^{2}\\
&\leq\Delta_{\psi}^{t+1}D\left(\|X^{t}-X^{t-1}\|+\|Y^{t}-Y^{t-1}\|\right)+\Delta_{\psi}^{t+1}D\Big(\frac{1+|1-\tau|}{1-|1-\tau|}\big(\|Z^{t}-Z^{t-1}\|-\|Z^{t}-Z^{t+1}\|\big)-\|Z^{t}-Z^{t+1}\|\Big)\\
&\quad+\Delta_{\psi}^{t+1}D\Big(\frac{2(\beta+L)\tau}{1-|1-\tau|}\frac{4}{(\beta-L)^{2}}\|X^{t}-X^{t-1}\|+\frac{2\tau L}{1-|1-\tau|}\|X^{t+1}-X^{t}\|\Big)\\
&\leq\Delta_{\psi}^{t+1}DD_{1}\left(\Delta_{t}^{1}+\Delta_{t}^{2}\right),
\end{aligned}$$
where

$$\begin{aligned}
\Delta_{\psi}^{t+1}&:=\psi(H_{t}-H_{*})-\psi(H_{t+1}-H_{*}),\\
D_{1}&:=\max\Big\{1+\frac{2(\beta+L)\tau}{1-|1-\tau|}\frac{4}{(\beta-L)^{2}},\ \frac{2\tau L}{1-|1-\tau|},\ 1,\ \frac{1+|1-\tau|}{1-|1-\tau|}\Big\},\\
\Delta_{t}^{1}&:=\|X^{t}-X^{t-1}\|+\|X^{t+1}-X^{t}\|+\|Y^{t}-Y^{t-1}\|,\\
\Delta_{t}^{2}&:=\left(\|Z^{t}-Z^{t-1}\|-\|Z^{t}-Z^{t+1}\|\right)-\|Z^{t}-Z^{t+1}\|.
\end{aligned}$$

Rearranging the above inequality and taking the square root on both sides, we obtain that

$$\|X^{t+1}-X^{t}\|+\|Y^{t+1}-Y^{t}\|\leq\sqrt{\frac{2}{\min\{\delta,\frac{\beta}{2}\}}\Delta_{\psi}^{t+1}DD_{1}\left(\Delta_{t}^{1}+\Delta_{t}^{2}\right)}\leq\frac{2}{\min\{\delta,\frac{\beta}{2}\}}\Delta_{\psi}^{t+1}DD_{1}+\frac{1}{4}\left(\Delta_{t}^{1}+\Delta_{t}^{2}\right),$$

where the second inequality uses the fact that $\sqrt{ab} \leq \frac{1}{2}(a+b)$ for any $a, b > 0$. Recalling the definitions of $\Delta_{t}^{1}$ and $\Delta_{t}^{2}$, and rearranging the above inequality, we have that

$$\begin{aligned}
\|X^{t+1}-X^{t}\|+\|Y^{t+1}-Y^{t}\|
&\leq\frac{2}{\min\{\delta,\frac{\beta}{2}\}}\Delta_{\psi}^{t+1}DD_{1}+\frac{1}{4}\left(\|X^{t}-X^{t-1}\|+\|X^{t+1}-X^{t}\|+\|Y^{t}-Y^{t-1}\|\right)\\
&\quad+\frac{1}{4}\left(\|Z^{t}-Z^{t-1}\|-\|Z^{t}-Z^{t+1}\|-\|Z^{t}-Z^{t+1}\|\right).
\end{aligned}$$
Further rearranging the above inequality, we have

$$\begin{aligned}
&\frac{1}{4}\|X^{t+1}-X^{t}\|+\frac{3}{4}\|Y^{t+1}-Y^{t}\|+\frac{1}{4}\|Z^{t}-Z^{t+1}\|\\
&\leq\frac{2}{\min\{\delta,\frac{\beta}{2}\}}\Delta_{\psi}^{t+1}DD_{1}+\frac{1}{4}\left(\|X^{t}-X^{t-1}\|-\|X^{t+1}-X^{t}\|+\|Y^{t}-Y^{t-1}\|-\|Y^{t}-Y^{t+1}\|\right)\\
&\quad+\frac{1}{4}\left(\|Z^{t}-Z^{t-1}\|-\|Z^{t}-Z^{t+1}\|\right).
\end{aligned}\tag{70}$$

Then, denoting $\Delta_{t+1} := \|X^{t+1}-X^{t}\|+\|Y^{t+1}-Y^{t}\|+D_{2}\|Z^{t+1}-Z^{t}\|$, (70) can be further passed to

$$\frac{1}{4}\Delta_{t+1}\leq\frac{2}{\min\{\delta,\frac{\beta}{2}\}}\Delta_{\psi}^{t+1}DD_{1}+\frac{1}{4}\left(\Delta_{t}-\Delta_{t+1}\right).\tag{71}$$

Summing the above inequality from t = t1 + 1 to T, we have that

$$\begin{array}{l}{{\frac{1}{4}\sum_{t=t_{1}+1}^{T}\Delta_{t+1}\leq\frac{2}{\operatorname*{min}\{\delta,\frac{\beta}{2}\}}\Delta_{\psi}^{t+1}D D_{1}+\frac{1}{4}\left(\Delta_{t_{1}+1}-\Delta_{T+1}\right)}}\\ {{\leq\frac{2}{\operatorname*{min}\{\delta,\frac{\beta}{2}\}}\psi(H_{t}-H_{*})D D_{1}+\frac{1}{4}\Delta_{t_{1}+1}}}\end{array}$$

where the last inequality uses the fact that $\psi > 0$. Taking $T$ in the above inequality to infinity, we see that $\sum_{t=t_{1}+1}^{\infty}\Delta_{t+1}<\infty$. Thus $\{(X^{t},Y^{t},Z^{t})\}$ is convergent.

Next, we show the convergence rate of the generated sequence. Denote the limit of $(X^{t},Y^{t},Z^{t})$ as $(X^{*},Y^{*},Z^{*})$. Define $S_{t}=\sum_{i=t+1}^{\infty}\Delta_{i}$. Noting that $\|X^{*}-X^{t}\|+\|Y^{*}-Y^{t}\|+\|Z^{t}-Z^{*}\|\leq\sum_{i=t}^{\infty}\Delta_{i}=S_{t-1}$, it suffices to show the convergence rate of $S_{t}$. Using (71), there exists $D_{2}>0$ such that

$$S_{t}=\sum_{i=t}^{\infty}\Delta_{i}\leq D_{2}\left(\psi(H_{t}-H_{*})-\psi(H_{t+1}-H_{*})\right)+\left(\Delta_{t}-\Delta_{t+1}\right)\tag{72}$$ $$\leq D_{2}\psi(H_{t}-H_{*})+\Delta_{t}=D_{2}\psi(H_{t}-H_{*})+\left(S_{t-1}-S_{t}\right).$$

Now we bound $\psi(H_{t}-H_{*})$. From the KL assumption, $\psi(w) = cw^{1-\theta}$ for some $c > 0$. Thanks to Theorem 5 (ii) and (14), the KL inequality gives that

$$c(1-\theta)\,d(0,\partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1}))\geq(H_{t}-H_{*})^{\theta}.\tag{73}$$

Combining this with Lemma 1, we have that

$$c(1-\theta)D(S_{t-1}-S_{t})\geq(H_{t}-H_{*})^{\theta}.$$

This is equivalent to

$$c\left(c(1-\theta)D(S_{t-1}-S_{t})\right)^{\frac{1-\theta}{\theta}}\geq c(H_{t}-H_{*})^{1-\theta}=\psi(H_{t}-H_{*}).$$

Using this, (72) can be further passed to

$$S_{t}\leq D_{3}(S_{t-1}-S_{t})^{\frac{1-\theta}{\theta}}+(S_{t-1}-S_{t}),\tag{74}$$
where $D_{3} := D_{2}c\left(c(1-\theta)D\right)^{\frac{1-\theta}{\theta}}$. Now we claim:

1. When $\theta = 0$, $\{(X^{t},Y^{t},Z^{t})\}$ converges finitely.

2. When $\theta \in (0, \frac{1}{2}]$, there exist $a > 0$ and $\rho_{1} \in (0,1)$ such that $S_{t} \leq a\rho_{1}^{t}$.

3. When $\theta \in (\frac{1}{2}, 1)$, there exists $c > 0$ such that $S_{t} \leq ct^{-\frac{1-\theta}{2\theta-1}}$ for large $t$.

When $\theta = 0$, we claim that there exists $t$ such that $H_{t} = H_{*}$. Suppose to the contrary that $H_{t} > H_{*}$ for all $t$. Then, for large $t$, (73) holds, i.e., $d(0, \partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1})) \geq \frac{1}{c(1-\theta)} > 0$. However, thanks to Lemma 1 and Corollary 4, we know that $\lim_t d(0, \partial H(X^{t},Y^{t},Z^{t},X^{t-1},Z^{t-1})) = 0$, a contradiction. Therefore, there exists $t$ such that $H_{t} = H_{*}$. From the argument at the beginning of this proof, we see that $\{(X^{t},Y^{t},Z^{t})\}$ converges finitely.

When $\theta \in (0, \frac{1}{2}]$, we have $\frac{1-\theta}{\theta} \geq 1$. Thanks to Corollary 4, we know that there exists $t_{2}$ such that $S_{t-1} - S_{t} < 1$. Thus, (74) can be further passed to $S_{t} \leq D_{3}(S_{t-1}-S_{t}) + (S_{t-1}-S_{t})$. This implies that

$$S_{t}\leq{\frac{D_{3}+1}{D_{3}+2}}S_{t-1}.$$

Thus there exist $a > 0$ and $\rho_{1} \in (0,1)$ such that $S_{t} \leq a\rho_{1}^{t}$.

When $\theta \in (\frac{1}{2}, 1)$, it holds that $\frac{1-\theta}{\theta} < 1$. From the last case, we know that $S_{t-1} - S_{t} < 1$ when $t > t_{2}$. Using (74), we have that $S_{t} \leq D_{3}(S_{t-1}-S_{t})^{\frac{1-\theta}{\theta}} + (S_{t-1}-S_{t})^{\frac{1-\theta}{\theta}} = (D_{3}+1)(S_{t-1}-S_{t})^{\frac{1-\theta}{\theta}}$. This implies that

$$S_{t}^{\frac{\theta}{1-\theta}}\leq(D_{3}+1)^{\frac{\theta}{1-\theta}}(S_{t-1}-S_{t}).$$

With this inequality, following the arguments in Theorem 2 of Attouch & Bolte (2009), starting from Equation (13) therein, there exists $c > 0$ such that $S_{t} \leq ct^{-\frac{1-\theta}{2\theta-1}}$ for large $t$. Thus, $\{S_{t}\}$ converges sublinearly. $\square$