# New Guarantees for Learning Revenue Maximizing Menus of Lotteries and Two-Part Tariffs

Maria-Florina Balcan (ninamf@cs.cmu.edu), Carnegie Mellon University

Hedyeh Beyhaghi (hedyeh@cmu.edu), Carnegie Mellon University

Reviewed on OpenReview: *https://openreview.net/forum?id=mhawjZcmrJ*

## Abstract

We advance a recently flourishing line of work at the intersection of learning theory and computational economics by studying the learnability of two classes of mechanisms prominent in economics, namely *menus of lotteries* and *two-part tariffs*. The former is a family of randomized mechanisms designed for selling multiple items, known to achieve revenue beyond deterministic mechanisms, while the latter is designed for selling multiple units (copies) of a single item with applications in real-world scenarios such as car or bike-sharing services.

We focus on learning high-revenue mechanisms of this form from buyer valuation data in both distributional settings, where we have access to buyers' valuation samples up-front, and the more challenging and less-studied online settings, where buyers arrive one-at-a-time and no distributional assumption is made about their values. We provide a suite of results with regard to these two families of mechanisms. We provide the first online learning algorithms for menus of lotteries and two-part tariffs with strong regret-bound guarantees.

Since the space of parameters is infinite and the revenue functions have discontinuities, the known techniques do not readily apply. However, we are able to provide a reduction to online learning over a finite number of *experts*, in our case, a finite number of parameters.

Furthermore, in the case of limited buyer types, we show a reduction to online linear optimization, which allows us to obtain no-regret guarantees by presenting buyers with menus that correspond to a barycentric spanner. In addition, we provide algorithms with improved running times over prior work for the distributional settings. Finally, we demonstrate how techniques from the recent literature in data-driven algorithm design are insufficient for our studied problems.

## 1 Introduction

Overview. In recent years, a growing body of work has emerged in the field of machine learning for pricing and mechanism design problems. These problems involve selling items to buyers with the objective of maximizing revenue. The majority of the existing research has primarily concentrated on *distributional settings*, i.e., when the buyers' values for the items are drawn from an unknown distribution. Less attention has been paid to the more challenging *online setting*, where buyers arrive one by one and no distributional assumption about buyers' values is made. In this case, the previous literature has mostly focused on simple mechanisms such as posted pricing or, more generally, mechanisms that sell the items separately (Blum et al., 2004; Kleinberg and Leighton, 2003; Blum and Hartline, 2005; Balcan and Blum, 2006; Bubeck et al., 2017; Cesa-Bianchi et al., 2014; Balcan et al., 2018b; 2020a). We advance this line of work by studying the learnability of two prominent classes of mechanisms, both represented as menus providing the buyers a list of allocation and payment options to choose from, namely menus of two-part tariffs and lotteries. These mechanisms go beyond selling the items separately, resulting in potentially higher revenue guarantees with applications to modern real-world scenarios. We provide a collection of results for these mechanisms while discovering technical surprises compared to prior work in data-driven algorithm and mechanism design. Our results include the first online learning guarantees for menus of two-part tariffs and lotteries and improved guarantees for distributional learning. In the process, we establish a data-independent discretization method, despite the drastic failure of this technique in problems with a similar utility function (Balcan et al., 2017; 2018a; 2023a;b). In addition, we demonstrate the inadequacy of recently developed techniques in data-driven algorithm design for our settings. In particular, for the first time, we provide evidence for the failure of the dispersion property (Balcan et al., 2018b; 2020a), a sufficient condition for a no-regret algorithm under the smooth distributional assumption that is widely applied to parametric algorithm and mechanism design problems, for a specific problem (menus of lotteries).

Problem Setup. The first class we study is *menus of two-part tariffs* (Lewis, 1941), used for selling multiple units (i.e., copies) of a single item. In this family of mechanisms, the buyer is presented with a list (menu) of *two-part tariffs*, where *tariff* i is a pair consisting of an up-front fee, p_1^{(i)}, and a per-unit fee, p_2^{(i)}. If the buyer wishes to buy k ≥ 1 units under tariff i, she pays in total p_1^{(i)} + k·p_2^{(i)}, and if she does not want to buy anything, she does not pay anything. The buyer has the freedom to select any of the tariffs. In particular, the cost for purchasing k ≥ 1 units is the minimum cost among all the tariffs, i.e., min_i (p_1^{(i)} + k·p_2^{(i)}). Various products in the real world are sold via menus of two-part tariffs; for example, car or bike-sharing services and delivery service subscriptions.

The second class we study is *menus of lotteries* for selling multiple items. In this context, the buyer is presented with a list (menu) of *lotteries*, where *lottery* i is defined as a pair consisting of a vector of probabilities for allocating each item, ϕ^{(i)}, and a price, p^{(i)}. If the buyer chooses lottery i, she receives each item j with probability ϕ^{(i)}[j] and pays p^{(i)}. Menus of lotteries are a crucial family of mechanisms because (1) this family captures all possible mechanisms, including the optimal one (Dasgupta et al., 1979; Guesnerie and Oddou, 1981), and (2) menus of lotteries achieve revenue beyond other well-studied families of mechanisms such as posted pricing and, more generally, any deterministic mechanism (Briest et al., 2010; Hart and Nisan, 2019).
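
As a concrete illustration of how a risk-neutral additive buyer interacts with a menu of lotteries, the following is a minimal Python sketch (our own illustration, not code from the paper); the menu encoding and the example numbers are assumptions made only for this snippet.

```python
from typing import List, Tuple

def lottery_revenue(menu: List[Tuple[List[float], float]], values: List[float]) -> float:
    """Seller's revenue against an additive buyer: the price of the utility-maximizing lottery.

    Lottery i is a pair (phi, price), where phi[j] is the probability of receiving item j.
    The buyer's expected utility for lottery i is sum_j phi[j] * values[j] - price; she buys
    nothing (revenue 0) if every lottery has negative expected utility.
    """
    best_utility, revenue = 0.0, 0.0          # outside option: buy nothing, pay nothing
    for phi, price in menu:
        utility = sum(f * v for f, v in zip(phi, values)) - price
        if utility > best_utility:
            best_utility, revenue = utility, price
    return revenue

# Example: two items, two lotteries.
menu = [([1.0, 1.0], 1.5), ([0.5, 0.5], 0.7)]
print(lottery_revenue(menu, values=[1.0, 0.8]))   # buyer takes the first lottery; revenue 1.5
```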

We study menus of two-part tariffs and lotteries in the context of parameter optimization, where the objective function (revenue) depends on parameter vectors. In menus of two-part tariffs, the parameters determining the mechanisms are the up-front fees and per-unit fees for each tariff, while for menus of lotteries, the allocation probability vectors and the prices for the lotteries determine the mechanism. In the parameter space, each point corresponds to a mechanism. A common approach in learning algorithms involves considering the objective function for a fixed buyer's valuation (Balcan et al., 2017; 2018c;b). In our context, the mechanism designer faces a utility-maximizing buyer, who, given the parameters determining the menu, chooses the entry, i.e., a lottery or a two-part tariff, in the menu that maximizes her utility. Therefore, the revenue function at any parameter vector is equal to the payment corresponding to the entry selected by the buyer.

## 1.1 Our Contributions

We study the learnability of menus of two-part tariffs and lotteries in both online and distributional settings.

We advance the state-of-the-art in several aspects.

**Technical Challenges, Structural Properties, and a Revenue-Preserving Cover.** "Discretization" is a natural technique in data-driven algorithm design. In this approach, a finite set of parameter vectors, each representing a menu in the parameter space, is selected, and the algorithms optimize over that set.

The smaller the set, the better the generalization guarantees will be in the distributional setting, and the better the regret guarantees will be in the online setting, with respect to the best menu in the set. In our setting, a proper data-independent discretization scheme would guarantee that, independent of the buyer's valuation, this set always contains a nearly optimal menu. More specifically, for any arbitrary parameter vector representing a menu, a menu in the set should generate almost as much revenue, independent of the buyer's valuation. However, due to sharp discontinuities of the revenue in the parameter space, devising such a discretization can be challenging. For instance, consider a menu with two high-utility entries for a buyer such that these entries have similar utility for the buyer but very different prices (e.g., one with high allocation and high price, the other with low allocation and low price). Minor changes in the parameters of these entries, e.g., rounding the parameters down to multiples of ϵ, may alter their utility order, causing the buyer to switch between them, resulting in an arbitrary loss in revenue. By extracting structural properties for menus of **two-part tariffs**, we develop a novel discretization method that identifies a finite set of menus that approximate the revenue of any arbitrary limited-length menu (Theorem 1). At a high level, in finding a corresponding menu with an approximate revenue guarantee, the options (tariff and quantity pairs) with higher prices need to experience a more significant decrease in price (compared to the lower-priced ones) so that no buyer switches from a high-price to a low-price option. In menus of **lotteries**, we extend the discretization of menus of lotteries developed by Dughmi et al. (2014) (Theorem 27). Our extension is three-fold: we remove the lower-bound assumption on the value distribution, support additive valuations, and provide improved regret bounds and running times when the size of the menu is limited. In both settings (two-part tariffs and lotteries), our discretization is data-independent; e.g., the set of discretized menus consists of all menus with parameters that are multiples of ε or powers of (1 − ε). The novelty of the result, however, lies in the analysis, which illustrates that, despite the challenges discussed above, for each arbitrary menu and valuation, this set contains a corresponding approximately revenue-preserving menu.
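
To make the rounding pitfall described above concrete, here is a small illustrative sketch (the menu, the buyer value, and ε are hypothetical numbers chosen only to exhibit the switch): naively rounding the prices of a single-item menu of lotteries down to multiples of ε flips the buyer's choice and collapses the revenue.

```python
def revenue(menu, v):
    """Buyer picks the lottery maximizing phi * v - price (or buys nothing); return the price paid."""
    best_utility, rev = 0.0, 0.0
    for phi, price in menu:
        utility = phi * v - price
        if utility > best_utility:
            best_utility, rev = utility, price
    return rev

v = 1.0
menu = [(1.0, 0.90), (0.25, 0.151)]     # buyer utilities 0.100 vs 0.099 -> buys the 0.90-price lottery
rounded = [(1.0, 0.90), (0.25, 0.10)]   # both prices rounded down to multiples of eps = 0.1
print(revenue(menu, v), revenue(rounded, v))   # 0.9 -> 0.1: an ~0.8 revenue loss from a 0.1 rounding
```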

Online Learning (adversarial inputs and smooth distributional assumptions). For menus of **two-part tariffs**, we provide the first no-regret online learning algorithms under adversarial (worst-case) inputs and also smooth distributional assumptions. For the full information setting, both settings lead to similar regret terms; however, the comparison of their running time depends on the support of the distribution and the maximum number of units available (Theorems 7 and 10). In the bandit setting, again, the regrets of both settings are similar. However, the comparison between the efficiencies of the algorithms depends on the smoothness factor of the distributions (Theorems 8 and 11). Furthermore, we provide the first no-regret algorithm for a *semi-bandit* setting (Theorem 12) with a polynomial running time in the number of discontinuities in the parameter space. This setting lies between the full-information and bandit settings, and the learner observes the revenue function for a set of menus containing the menu used. For menus of **lotteries**, we provide the first no-regret online learning algorithms under adversarial inputs (Theorems 28 to 30). In addition, we provide evidence that menus of lotteries may not satisfy *dispersion* (Balcan et al., 2018b; 2020a), a sufficient condition for a no-regret algorithm under smooth distributional assumptions, without assuming extra structures about the optimal solution (Theorem 33). Menus of lotteries are the first family of mechanisms for which there is evidence of a potential failure of the dispersion property.

Distributional Learning. We also provide novel distributional learning algorithms for menus of two-part tariffs and lotteries. Our algorithms choose several menus in a data-independent way (via data-independent discretization) and then select the best of them based on the data. In the context of **two-part tariffs**, our algorithm is much simpler than prior ones for the same problem, yet it enjoys improved worst-case runtime guarantees compared to them (Balcan et al., 2018c; 2020b) when the length of the menu is more than one (Theorem 26). We note that for other data-driven algorithm design problems, such as data-driven clustering and data-driven learning to branch, it was proven that algorithms that use data-independent discretization could perform very poorly (Balcan et al., 2017; 2018a; 2023a). Thus, by contrast, our work shows the power of data-independent discretization for data-driven mechanism design and algorithm design more generally. In the context of **lotteries**, compared to the previous distributional learning results for fixed-length menus (Balcan et al., 2018c), our algorithm requires similar sample complexity; however, it has an efficient implementation (Theorems 34 and 57).

Limited Buyer Types. For limited buyer types, we provide improved regret bounds for both the full-information and partial-information (bandit) settings for both menus of two-part tariffs and lotteries (Theorems 24, 25, 31 and 32). The high-level idea is as follows. Consider the revenue function in the parameter space for a fixed buyer. The parameter space is partitioned into regions where, within each region, the buyer selects the same option in the menu, e.g., the same lottery, resulting in a continuous revenue function.

Discontinuity occurs across regions. For limited-type buyers, by superimposing the revenue functions for all types, the parameter space divides into more (albeit still a limited number of) regions. Regardless of the buyer type at hand, the revenue function is continuous within each region and in our case, linear. Therefore, it is sufficient to only consider the corner points as potential parameter vectors that maximize the revenue.

We show that in the full information case, running the weighted majority algorithm on the set of menus corresponding to the regions' corner points results in sublinear regret.

In the partial information setting, we show a reduction to online linear optimization, allowing us to obtain no-regret guarantees by presenting buyers with menus corresponding to a barycentric spanner. Our reduction is inspired by Balcan et al. (2015); however, we apply the reduction in the different context of pricing schemes. In the partial information setting, in each round, we only observe the revenue of the current menu. To estimate the revenue from all the menus efficiently, or in other words, to find an *unbiased estimator* with a *bounded range*, we employ the notion of *barycentric spanners* in online learning introduced by Awerbuch and Kleinberg (2008). By utilizing this concept, we provide algorithms with a regret bound that is sublinear in the number of timesteps and polynomial in other parameters. This is the first time that the barycentric spanner notion has been applied to an auction design setting.

## 1.2 Summary Of Contributions

First, we overview the results related to **menus of two-part tariffs**.

- By extracting structural properties, we develop a novel discretization method that identifies a finite set of menus that approximate the revenue of any arbitrary menu, including the optimum for any valuation. This allows the development of new no-regret online learning algorithms as well as improved distributional learning algorithms (see the two bullet points below).

- We provide the first no-regret online learning algorithms under adversarial inputs, smooth distributional assumptions, and limited buyer-type assumptions (under full information, bandit setting, and semi-bandit setting).

- We also provide a novel distributional learning algorithm for menus of two-part tariffs. Our algorithm chooses several menus of two-part tariffs in a data-independent way (via data-independent discretization) and then selects the best of them based on data. This is much simpler than previous algorithms (Balcan et al., 2018c; 2020b) for the same problem, yet it enjoys improved runtime guarantees in the worst-case scenario when the length of the menu is more than one.

- For limited buyer types, we provide improved regret bounds for both the full-information and bandit settings. We show a reduction to online linear optimization, which allows us to obtain no-regret guarantees by presenting buyers with menus that correspond to a barycentric spanner.

Next, we overview our results related to **menus of lotteries**.

- We extend the discretization of menus of lotteries developed by Dughmi et al. (2014). Our extension is three-fold: we remove the lower bound assumption on value distribution, support additive valuations, and provide improved regret bounds and running times when the size of the menu is limited.

- We provide the first no-regret online learning algorithms under adversarial inputs.

- Compared to the previous distributional learning results for fixed-length menus (Balcan et al., 2018c), our algorithm requires similar sample complexity; however, it has an efficient implementation.

- We provide evidence that menus of lotteries may not satisfy dispersion—a sufficient condition to provide a no-regret algorithm under smooth distributional assumption—without assuming extra structures about the optimal solution. Menus of lotteries are the first family of mechanisms where there is evidence for potential failure of the dispersion property.

- For limited buyer types, we provide improved regret bounds for both the full-information and bandit settings. We show a reduction to online linear optimization, which allows us to obtain no-regret guarantees by presenting buyers with menus that correspond to a barycentric spanner.

## 1.3 Related Work

Studying the learnability of classes of mechanisms for the revenue maximization objective has been of great interest in recent years (Alon et al., 2017a; Cole and Roughgarden, 2014; Devanur et al., 2016; Elkind, 2007; Gonczarowski and Nisan, 2017; Guo et al., 2019; Roughgarden and Schrijvers, 2016). These mechanisms have been studied mostly in a distributional setting, where buyers' values are drawn from an unknown distribution, and the online setting, where there is no distributional assumption on the buyers' values, has been explored less.1 In the distributional setting, various mechanism classes, including posted-price mechanisms, second-price auctions with reserves, menus of two-part tariffs, and menus of lotteries, are known to be learnable (Morgenstern and Roughgarden, 2015; 2016; Balcan et al., 2016; 2018c; 2021a; Dughmi et al.,
2014; Gonczarowski and Weinberg, 2021; Mohri and Medina, 2016; Syrgkanis, 2017; Dütting et al., 2019). In the online setting, under adversarial input (Blum et al., 2004; Kleinberg and Leighton, 2003; Blum and Hartline, 2005; Balcan and Blum, 2006; Roughgarden and Wang, 2016; Bubeck et al., 2017), and also under stochastic input (Cesa-Bianchi et al., 2014; Balcan et al., 2018b; 2020a) mostly simple mechanisms such as posted pricing and second-price auction are considered where both mechanisms sell the items separately.

An exception is Roughgarden and Wang (2016), who study the Vickrey-Clarke-Groves (VCG) mechanism with multiple reserves; however, the algorithms provided are not no-regret in the classic sense but are bounded-regret compared to a constant approximation of the optimal solution.

Two of the prominent approaches used for developing distributional results are pseudo-dimension-based and discretization-based. In the first approach, despite the discontinuity present in the utility of buyers as a function of the parameters used in the mechanism, it is shown that the pseudo-dimension of the family is bounded by using smoothness assumptions on the distribution. This approach applies to all the mechanisms mentioned above. In the discretization approach, a finite set of parameters is identified such that limiting the search space to this set is approximately optimal. This approach has been used for a limited number of mechanisms, such as item-pricing for combinatorial auctions for unrestricted supply (Balcan et al., 2008)
and menus of lotteries in a limited setting (Dughmi et al., 2014). In the online setting, Balcan et al. (2018b) and Balcan et al. (2020a) introduce *dispersion* as a sufficient condition for online learnability of families of mechanisms. They show several classes of mechanisms, such as posted-price mechanisms and second-price auctions with reserves, satisfy dispersion and, therefore, establish strong regret bounds for online learning.

Discretization-based techniques in online learning scenarios have been used for the simple cases of item-pricing (Blum et al., 2004) and second-price auctions (Cesa-Bianchi et al., 2014).

Two-Part Tariffs. Two-part tariff pricing schemes were first introduced by Lewis (1941) and later analyzed by Oi (1971). Menus of two-part tariffs have been studied recently in the context of distributional learning (Balcan et al., 2018c; 2020b; 2022b). A recent work (Balcan et al., 2022b) provides improved running time bounds over Balcan et al. (2020b) for distributional learning of two-part tariffs in the case where the sum of the utility functions u(x_i, ·) over all problem instances has a small number of continuous pieces (as defined in Section 3.2.2, the utility function u(x_i, ·) measures the performance of our two-part tariff mechanisms on a fixed problem instance x_i as a function of its parameters). However, for the case where the menu length is strictly greater than 1, the approach of Balcan et al. (2022b) does not lead to improved running time over Balcan et al. (2020b) for worst-case instances. So, for worst-case instances and menu length greater than 1, our approach for distributional learning improves over the previously best-known results.

Menus of Lotteries. Menus of lotteries capture all possible mechanisms, including the optimal one, for selling items to buyers. The Taxation Principle (Dasgupta et al., 1979; Guesnerie and Oddou, 1981) asserts that any mechanism for a single buyer can be represented as a menu of lotteries, where the buyer selects their favorite lottery (that is, the one that maximizes the buyer's expected value for the randomized allocation minus the price paid). Furthermore, menus of lotteries achieve revenue beyond other well-studied families of mechanisms such as posted pricing and, more generally, any deterministic mechanism. For a correlated buyer (a buyer whose values for the items are correlated), even in the simple cases where the buyer is additive (their value for a bundle of items is the sum of their values for the individual items) or unit-demand (their value for a bundle of items is the maximum value of an item in the bundle), the gap between the optimal randomized mechanism (lotteries) and item-pricing is infinite (Briest et al., 2010; Hart and Nisan, 2019).

Daskalakis et al. (2014) show that even for an independent additive buyer (the values for the items are independent), lotteries (randomized mechanisms) are necessary and provide strictly more revenue compared to any deterministic mechanism, including pricing mechanisms.

1 Some online learning algorithms, including those proved via the dispersion method, explained later, still make distributional assumptions; however, unlike the distributional learning setting, the draws are not necessarily from identical distributions.

Failure of data-independent discretization-based learning. Discretization is a natural approach for designing algorithms to tune parameters (e.g., prices for menus of two-part tariffs and allocation probabilities and prices for menus of lotteries) and is commonly used in applied fields such as applied machine learning. However, recent work has shown that in tuning parameters of algorithms for solving discrete combinatorial problems, discretization in the context of data-driven algorithm design does not always work if discretization is done in a data-independent way. For the case of tuning parameters for linkage-based algorithms, Balcan et al. (2017) showed that for several natural parameterized families of clustering procedures, for any data-independent discretization, there exists an infinite family of clustering instances such that any of the discrete parameters will output a clustering that is an Ω(n) factor worse than the optimal parameter, where n is the input size. Here, the quality of clustering can be defined according to several well-known objectives, including k-median, k-means, and k-center. Balcan et al. (2018a; 2023a) show that for the data-driven problem of learning to branch for solving mixed integer linear programs (MILPs), data-independent discretization will not work either. More specifically, for any discretization of the parameter space [0, 1], there exists an infinite family of distributions over MILP problem instances such that for any parameter in the discretization, the expected tree size is exponential in the input size. Yet, there exists an infinite number of parameters such that the tree size is just a constant (with probability 1). Remarkably, we show that in our context, even data-independent discretization works.

Dispersion and Online Data-Driven Algorithm Design. Dispersion is a recently developed notion for families of algorithmic and mechanism design problems and serves as a sufficient condition for the existence of bounded-regret online learning algorithms (Balcan et al., 2018b; 2020a; Balcan, 2020) and differentially private distributional learning algorithms (Balcan et al., 2018b). Generally speaking, this condition bounds the concentration of discontinuities of the objective function in any small region of the parameter space. Dispersion-based techniques have been established successfully for a variety of algorithms (Balcan and Sharma, 2021; Balcan et al., 2021b; 2022a), among which is tuning parameters in combinatorial problems, such as the clustering problems discussed above (Balcan et al., 2018b). For menus of two-part tariffs, we show that the dispersion condition is satisfied, immediately implying no-regret online learning algorithms and differentially private algorithms for distributional learning. Surprisingly, we present evidence that dispersion might not apply to menus of lotteries. In particular, we show that in menus of lotteries the objective function might have sharp discontinuities concentrated in a small region. This structural property is in stark contrast with menus of two-part tariffs and other mechanism and algorithm families satisfying dispersion. Despite this evidence, we show that a simple discretization-based approach leads to no-regret online learning algorithms for menus of lotteries.

Sample Complexity for Menus of Lotteries. The sample complexity for menus of lotteries has been studied under two different assumptions: independence of valuations across items, as studied by Gonczarowski and Weinberg (2021), and correlated valuations across items, as studied by Dughmi et al. (2014) and Brustle et al. (2020). By assuming independence simultaneously among the buyers and the items, a significant improvement over the sample complexity is possible (Gonczarowski and Weinberg, 2021). However, when the values for the items are possibly correlated, Dughmi et al. show a lower bound on the sample complexity, verifying an exponential gap in the dependence on the number of items compared to Gonczarowski and Weinberg.

Brustle et al. (2020) study a setting between the two extremes of arbitrary correlation and independence where they assume structured dependence across items, generalizing the results of Gonczarowski and Weinberg (2021) and improving the sample complexity over Dughmi et al. (2014) for special cases of correlation.

Similar to Dughmi et al. and in contrast with Gonczarowski and Weinberg, we do not assume independence (or structured dependence) across items and only assume independence among the buyers.

## 2 Model And Preliminaries

We consider selling items to a single buyer for the revenue objective through parameterized families of mechanisms. In this paper, the family of mechanisms is either the set of menus of two-part tariffs or lotteries. To put our notations in context, in this section, we focus on menus of two-part tariffs as our running example. The discussed settings also hold for menus of lotteries - we defer the discussion related to menus of lotteries to Section 4.

Menus of two-part tariffs are used for selling multiple units (i.e., copies) of a single item through a list of up-front and per-unit fee pairs that the buyer can choose from. Menu M = {(p_1^{(1)}, p_2^{(1)}), . . . , (p_1^{(ℓ)}, p_2^{(ℓ)})} ⊆ R^{2ℓ} is a length-ℓ menu of two-part tariffs. Each menu M is parameterized by ρ, which in this case is 2ℓ-dimensional and contains all p_1^{(j)} and p_2^{(j)}, where all p_1^{(j)}, p_2^{(j)} ∈ [0, H]. p_1^{(j)} and p_2^{(j)} are called the up-front fee (price) and per-unit fee (price) of tariff j, respectively. We denote a buyer's valuations for all 1, 2, . . . , K units by v = (v(1), . . . , v(K)), where the values are nonnegative, monotonically increasing, belong to [0, H], and v(0) = 0. Under the tariff j, denoted by (p_1^{(j)}, p_2^{(j)}), and the number of units k ∈ {1, . . . , K} that the buyer selects, she receives k units of the item and pays p_1^{(j)} + k·p_2^{(j)}. The buyer's utility is her value for the number of units bought, v(k), less the payment. Each buyer has the option of buying their utility-maximizing tariff and number of units. In other words, the buyer will buy k units using the tariff j that maximizes v(k) − p_1^{(j)} − k·p_2^{(j)}, or does not buy and does not pay anything.
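
As a concrete reading of this definition, the following is a minimal Python sketch (our own illustration) of the revenue u(v, ρ) of a length-ℓ menu of two-part tariffs against a single buyer with valuation curve v; the example menu and values are assumptions made only for this snippet.

```python
from typing import List, Tuple

def tariff_revenue(menu: List[Tuple[float, float]], v: List[float]) -> float:
    """Revenue of a menu of two-part tariffs [(p1, p2), ...] against a buyer whose value for
    k units is v[k - 1] (nonnegative and nondecreasing). The buyer picks the tariff j and
    quantity k maximizing v(k) - p1 - k * p2, or buys nothing if no option has positive utility."""
    best_utility, revenue = 0.0, 0.0            # outside option: buy nothing, pay nothing
    for p1, p2 in menu:
        for k in range(1, len(v) + 1):
            payment = p1 + k * p2
            utility = v[k - 1] - payment
            if utility > best_utility:
                best_utility, revenue = utility, payment
    return revenue

# Example: K = 3 units and a length-2 menu of (up-front fee, per-unit fee) pairs.
menu = [(0.5, 0.25), (0.125, 0.75)]
print(tariff_revenue(menu, v=[1.0, 1.75, 2.5]))  # buyer takes 3 units on the first tariff, pays 1.25
```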

Let M be an infinite set of mechanisms parameterized by a set C ⊆ R^d. In this paper, M is either the set of two-part tariff menus or lottery menus. Consider the case where M is the set of two-part tariff menus for selling multiple units of a single item to a buyer with value v while the menu corresponds to parameter ρ ∈ C. Next, let Π be a set of problem instances for M, such as a set of buyer valuations v, and let u : Π × C → [0, H] be a utility function where u(x, ρ) measures the performance of the mechanism with parameters ρ on problem instance x ∈ Π. In our case, u(x, ρ) is the revenue of the mechanism (a menu of two-part tariffs or lotteries) with parameters ρ on input x. For example, for menus of two-part tariffs, M is the set of possible menus M, and since each menu is 2ℓ-dimensional with each dimension in [0, H], C = [0, H]^{2ℓ} ⊆ R^{2ℓ}. Π is the set of buyer valuations v, and u : Π × C → [0, H] is a utility function where u(v, ρ) measures the revenue of the menu with parameters ρ on buyer valuations v ∈ Π.

Online Setting. In this setting, a sequence of functions u1, . . . , uT : C → [0, H] arrives one by one. Unlike u, ut only takes the parameter ρt as input and is defined as ut(ρt) := u(xt, ρt), where xt is the problem instance at timestep t. At time t, the no-regret learning algorithm chooses a parameter vector ρt and then observes either the function ut in the full information setting, the scalar ut(ρt) in the bandit setting, or ut(ρ) for a set of ρ in the semi-bandit setting. The goal is to minimize the expected regret, E[max_{ρ∈C} Σ_t ut(ρ) − Σ_t ut(ρt)].

We study the online setting both under adversarial input, where the ut(·) are selected adversarially, and under smoothed distributional input, which assumes more structure. The expectation in the regret formula is taken over the randomness of the algorithm in the adversarial setting, and over the randomness of the algorithm and the distribution of buyers in the smoothed distributional setting.

Distributional Setting. In the distributional setting, the algorithm receives samples from an unknown distribution D over problem instances Π. The goal is to find a parameter vector ρ̂ that nearly maximizes the expected utility, i.e., max_{ρ∈C} E_{x∼D}[u(x, ρ)], similar to statistical learning theory (Vapnik, 1998) or PAC learning (Valiant, 1984).
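
As a sketch of the distributional approach used later in the paper (pick a data-independent discretized set of menus and select the empirically best one on the samples), the snippet below enumerates all menus of two-part tariffs whose prices are multiples of α (the grid behind Theorem 1 in Section 3.1) and maximizes average revenue on a sample; it reuses the `tariff_revenue` helper sketched above, and the sample values are hypothetical.

```python
import itertools

def best_discretized_menu(samples, H, alpha, length, revenue):
    """Distributional learning sketch: search the data-independent grid of menus whose prices
    are all multiples of alpha and return the menu with the highest average sample revenue."""
    grid = [i * alpha for i in range(int(H / alpha) + 1)]
    tariffs = list(itertools.product(grid, grid))              # all (p1, p2) pairs on the grid
    best_menu, best_avg = None, -1.0
    for menu in itertools.combinations(tariffs, length):        # all menus of the given length
        avg = sum(revenue(list(menu), v) for v in samples) / len(samples)
        if avg > best_avg:
            best_menu, best_avg = list(menu), avg
    return best_menu, best_avg

# Three sampled buyers, each with values for K = 2 units.
samples = [[1.0, 1.5], [0.5, 1.25], [1.5, 2.0]]
print(best_discretized_menu(samples, H=2.0, alpha=0.5, length=1, revenue=tariff_revenue))
# -> ([(1.0, 0.0)], 1.0): a flat price of 1.0 with no per-unit fee is empirically best on this sample.
```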

## 3 Menus Of Two-Part Tariffs

In this section, we consider M = {(p_1^{(1)}, p_2^{(1)}), . . . , (p_1^{(ℓ)}, p_2^{(ℓ)})} ⊆ R^{2ℓ} as a length-ℓ menu of two-part tariffs.

See Section 2 for a detailed description.

## 3.1 Discretization Procedure

This section shows a discretization procedure for menus of two-part tariffs. Given any menu and a value 0 < α < 1, we provide an alternate menu such that all the price elements, p_1^{(i)} and p_2^{(i)} for all i, are multiples of α, and the alternate menu provides nearly as much revenue as the given menu, up to a term that depends on α. The main result of this section is summarized in the following statement.

Theorem 1. *Given a menu of two-part tariffs M and parameter 0 < α < 1, Algorithm 1 outputs menu M′ whose revenue is at least the revenue of M minus 2Kαℓ, for any buyer's valuation. Furthermore, for all i, all p_1^{(i)} and p_2^{(i)} are multiples of α. The set of potential outcomes constitutes a space with at most min{(H/α)^{2ℓ}, 2^{H²/α²}} menus, where H is the maximum value for any number of units.*
Correctness of the rounding procedure (Algorithm 1), as shown in the proof of Theorem 1, implies that the set of menus whose prices, i.e., p_1^{(i)} and p_2^{(i)}, are multiples of α constitutes an (almost) revenue-preserving set of menus.

Algorithm 1: (Almost) revenue-preserving rounding for menus of two-part tariffs

Input: Menu M, discretization parameter α.

1: Let M′ be the menu of *Pareto frontier tariffs* in M, derived by deleting, one by one, tariffs i for which there exists a tariff j ̸= i such that p_1^{(i)} ≥ p_1^{(j)} and p_2^{(i)} ≥ p_2^{(j)}.

2: Reindex the tariffs in M′ in increasing order of p_1 (and hence, decreasing order of p_2).

3: For each tariff i, decrease p_1^{(i)} and p_2^{(i)} by (i − 1)α. ▷ The revenue-preserving step.

4: Round down all p_1^{(i)} and p_2^{(i)} to the closest multiple of α.

5: Remove the duplicate tariffs.

Output: Menu M′.

Proof idea of Theorem 1 and intuition behind Algorithm 1. At a high level, in finding corresponding menus through Algorithm 1, the options (tariff and quantity pairs) with higher prices need to experience a larger decrease in price so that no buyer switches from a high-price to a low-price option. The main structural ideas underlying the algorithm and the proof of the revenue guarantee are as follows: (i) for a fixed number of units k to be purchased, the utility-maximizing tariff is the same across all the buyer's valuations, namely, the tariff that has the smallest overall price (up-front price plus k times the per-unit price), and (ii) as the number of units to be purchased increases, the per-unit price of the utility-maximizing tariff decreases. The main idea of the rounding algorithm is decreasing the corresponding prices of tariffs with lower per-unit fees by a larger amount (Line 3). By doing so, for each buyer, the total price of buying more units decreases more than the total price of buying fewer units. This step ensures that the buyer does not switch from purchasing more units to fewer units after the rounding. This property is sufficient for the revenue guarantees. The other steps of the algorithm delete redundant tariffs (Lines 1 and 5) and ensure the final prices are multiples of α (Line 4). The theorem provides two upper bounds for the size of the discretized space. By Line 4, all the prices are multiples of α. Therefore, the 2ℓ price components in a length-ℓ menu each have H/α options. This gives the first bound. On the other hand, if we consider a single tariff, each of the up-front fee and the per-unit fee has H/α possibilities; therefore, the total number of possible unique tariffs is H²/α². Each of these possible tariffs may or may not be on the menu, giving the second bound. The full proof is provided below.

Remark. Our rounding scheme (Algorithm 1) is only described for the purpose of the proof to argue that the multiples of α provide (an almost) revenue-preserving set of menus. Algorithmically, we only need to enumerate the multiples of α.
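
The following is a minimal Python rendering of Algorithm 1 (our own sketch of the pseudocode above); the clamping of shifted prices at zero and the small epsilon inside the floor are safeguards we add against negative prices and floating-point error, not part of the pseudocode.

```python
import math

def round_menu(menu, alpha):
    """(Almost) revenue-preserving rounding of a menu of two-part tariffs (Algorithm 1 sketch)."""
    # Step 1: keep only Pareto frontier tariffs (delete any tariff weakly dominated in both prices).
    frontier = [
        (p1, p2) for i, (p1, p2) in enumerate(menu)
        if not any(q1 <= p1 and q2 <= p2 and (q1, q2) != (p1, p2)
                   for j, (q1, q2) in enumerate(menu) if j != i)
    ]
    # Step 2: reindex in increasing order of p1 (hence decreasing order of p2).
    frontier.sort(key=lambda tariff: tariff[0])
    rounded = []
    for i, (p1, p2) in enumerate(frontier):
        # Step 3: decrease both prices by (i - 1) * alpha (here i is 0-based): the revenue-preserving step.
        p1, p2 = p1 - i * alpha, p2 - i * alpha
        # Step 4: round both prices down to the closest multiple of alpha.
        p1 = max(0.0, alpha * math.floor(p1 / alpha + 1e-9))
        p2 = max(0.0, alpha * math.floor(p2 / alpha + 1e-9))
        rounded.append((p1, p2))
    # Step 5: remove duplicate tariffs.
    return sorted(set(rounded))

print(round_menu([(1.0, 0.30), (0.40, 0.90), (0.90, 0.35)], alpha=0.25))
# -> [(0.25, 0.75), (0.5, 0.0)]
```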

## 3.1.1 Proof Of Theorem 1

Before providing the proof of the discretization procedure, we provide intuition as to why discretization is a nontrivial procedure for menus of two-part tariffs. For this family of mechanisms, standard procedures, such as rounding down the prices to multiples of α, may result in arbitrary revenue loss because the price parameters of each tariff decrease by different amounts, causing unpredictable changes in the utilities of selecting each tariff and number of units. It is possible that the utility-maximizing choice for a buyer switches from a higher-price tariff and more units (that originally has slightly higher utility for the buyer) to a low-price tariff and fewer units (that originally has slightly lower utility for the buyer) after a simple rounding.

Now, we provide structural results that enable us to design a discretization procedure. Given a menu of two-part tariffs, the following definition deletes the dominated tariffs (independent of the valuation).

Definition 2 (Pareto frontier tariffs). *Given menu M with distinct tariffs, the Pareto frontier M′ of M is derived by deleting all tariffs i for which there exists a tariff j ̸= i such that p_1^{(j)} ≤ p_1^{(i)} and p_2^{(j)} ≤ p_2^{(i)}.*

Lemma 3. *Given a menu of tariffs, a user only selects a tariff in the Pareto frontier.*
Lemma 4. Sorting the tariffs in the Pareto frontier in increasing order of p1 is equivalent to sorting them in decreasing order of p2.

Lemma 5. *For any fixed number of units k, the highest-utility tariff in M is argmin_i (p_1^{(i)} + k·p_2^{(i)}). This is independent of the buyers' values.*

The following lemma states that as we increase the number of units the utility-maximizing tariff has higher p1 and lower p2.

Lemma 6. *Let M′ be the menu of Pareto frontier tariffs derived from menu M. Suppose the tariffs in M′ are reindexed in increasing order of p_1. Consider the index of the utility-maximizing tariff for each number of units. This index is increasing as a function of the number of units.*

Proof of Theorem 1. First, we reason about the length of the outcome menu. Let ℓ and ℓ′ be the length of the original menu and the outcome menu, respectively. Note that ℓ′ is also the length of the menu after rounding down p_1^{(i)} and p_2^{(i)} to their closest multiples of α. Observe that ℓ′ is at most ℓ (because we never add extra tariffs) and also at most H²/α², because there are H/α distinct options for each of p_1 and p_2. Therefore, ℓ′ ≤ min{ℓ, H²/α²}.

Then, we reason about the maximum loss in revenue. First, note that for any fixed tariff and number of units, the total price decreases by at most 2Kℓ′α. We only need to show that the buyer does not switch from buying more units to fewer. Switching in the opposite order does not decrease the revenue by more than 2Kℓ′α. The reason is that the total price of each tariff is an increasing function of the number of units; therefore, the minimum total price is increasing as a function of the number of units.

Next, we prove that a buyer never switches from buying more units to fewer. We show two cases: switching between tariffs and staying with the same tariff. In the first case, by Lemma 6, this means that a buyer never switches from a tariff with higher p_1 (lower p_2) to one with lower p_1 (higher p_2). Since, in the discretization procedure, the price of tariffs with higher p_1 decreases more than that of tariffs with lower p_1, the lower-p_1 tariffs do not become utility-maximizing if they were not before. In the second case, by the rounding procedure, the total price of more units in the same tariff always decreases more; therefore, the lower number of units never becomes utility-maximizing. Therefore, we conclude that the payment of each tariff, and therefore the revenue, decreases by at most 2Kℓ′α. Thus, Rev(M′) ≥ Rev(M) − 2Kαℓ.

Finally, we find the total number of possible menus. After the discretization, all p_1^{(i)} and p_2^{(i)} are multiples of α. Therefore, when restricted to length-ℓ menus, there are H/α choices for each of the 2ℓ parameters of the menu, giving an upper bound of (H/α)^{2ℓ}. On the other hand, there are at most H²/α² possible tariffs, and each one of them may appear or not in the menu. Therefore, the number of menus is also bounded by 2^{H²/α²}.

Technical contribution. The establishment of a data-independent discretization (and the subsequent online learning and distributional learning algorithms) is in contrast with previous findings. For other data-driven algorithm design problems, such as data-driven clustering and data-driven learning to branch, which share a similar piecewise structure in the utility functions, it has been proven that algorithms that use data-independent discretization could perform very poorly (Balcan et al., 2017; 2018a; 2023a). Thus, by contrast, our work shows the power of data-independent discretization for data-driven mechanism design and algorithm design more generally.

## 3.2 Online Learning

We provide bounded-regret online learning algorithms in full and partial information settings. Sections 3.2.1 to 3.2.3 provide online algorithms under adversarial input, under smooth distributions, and for limited type buyers, respectively. No online learning algorithms have been known previously for menus of two-part tariffs.

## 3.2.1 Online Learning Under Adversarial Inputs

The main statements are Theorems 7 and 8, which provide regret guarantees for the full-information and partial-information cases, respectively. Using the discretization in Section 3.1, we show a reduction to a finite number of experts and run standard learning algorithms (weighted majority and Exp3) over the menus in the discretized set. Similar ideas were used in previous papers, for example, Blum et al. (2004) and Balcan et al. (2018b).

Full Information. In the full information setting, the seller sees the revenue generated for all the possible menus. To design an online algorithm in this case, we use a variant of the weighted majority algorithm by Auer et al. (1995). The experts in our case are the discretized menus from the previous section, denoted in the algorithm by the set X = {m1, . . . , mn}. Furthermore, vt is the valuation of the buyer at time t, and Revk(v1, . . . , vt) is the cumulative revenue of menu mk for the buyers up to time step t.

Algorithm 2: Full-information (Weighted majority on discretized menus)

Input: Set of menus (experts) X = {m1, . . . , mn}, learning rate β ∈ (0, 1].

1: **Initialize:** For each menu mk, initialize Revk() = 0 and wk(0) = 1.

2: **for** *buyer* t = 1, . . . , T **do**
   Select the menu at time t to be mk with probability πk[t] = wk(t − 1) / Σ_{j=1}^{n} wj(t − 1);
   Observe the valuation of buyer t as vt;
   For each menu mk, update Revk(v1, . . . , vt) and wk(t) = (1 + β)^{Revk(v1, v2, ..., vt)/H}.
Theorem 7. *In the full information case for length-ℓ menus of two-part tariffs, running Algorithm 2 over the discretized set of menus specified in Theorem 1 for α = β = 1/√T has regret bounded by Õ(ℓ(K + H ln H)√T), and running time O(TℓK · min{H^{2ℓ}T^{ℓ}, 2^{H²T}}).*

The proof follows by combining the guarantees of the discretization procedure (Theorem 1) and previously known results (specifically Auer et al. (1995), Theorem 3.2) and is deferred to Appendix A.
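
For concreteness, here is a minimal Python sketch of Algorithm 2 (our own illustration); `revenue(menu, v)` is an assumed helper, e.g., the `tariff_revenue` function sketched in Section 2, and `menus` is the discretized set from Theorem 1.

```python
import random

def weighted_majority(menus, valuations, beta, H, revenue):
    """Algorithm 2 sketch: weighted majority over a discretized set of menus (full information).

    menus: the candidate menus (experts); valuations: the buyer valuations v_1, ..., v_T;
    revenue(menu, v): the revenue of a menu against valuation v, assumed to lie in [0, H].
    Returns the total revenue collected by the learner.
    """
    cum_rev = [0.0] * len(menus)       # Rev_k(v_1, ..., v_t) for each menu k
    weights = [1.0] * len(menus)       # w_k(t), initialized to 1
    total = 0.0
    for v in valuations:
        # Offer menu k with probability proportional to its current weight.
        k = random.choices(range(len(menus)), weights=weights)[0]
        total += revenue(menus[k], v)
        # Full information: the valuation is observed, so every expert can be updated.
        for j, menu in enumerate(menus):
            cum_rev[j] += revenue(menu, v)
            weights[j] = (1.0 + beta) ** (cum_rev[j] / H)
    return total
```

With β = α = 1/√T and the grid of Theorem 1 as the expert set, this is the scheme analyzed in Theorem 7.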

Partial Information (Bandit Setting). In the partial information setting, the seller does not see the outcome for all the possible menus and only observes the outcome of the menu used (the tariff and number of units chosen by the buyer). To design an online algorithm in this case, we use a version of the Exp3 algorithm in Auer et al. (1995). This variant of the Exp3 algorithm contains the weighted majority algorithm (Algorithm 2) as a subroutine. At each step, we mix the probability distribution π, used by the weighted majority algorithm, with the uniform distribution to obtain a modified probability distribution π̂, which is then used to select a menu from our discretized set. Following the tariff and the number of units chosen by buyer t, we use the price paid (the gain from the chosen menu) to formulate a simulated gain vector, which is then used to update the weights maintained by the weighted majority algorithm.

Theorem 8. *In the partial information case for length-ℓ menus of two-part tariffs, running Algorithm 3 over the discretized set of menus in Theorem 1 for α = T^{−1/(2(1+ℓ))} and β = γ = T^{−1/(4(1+ℓ))} has regret bound Õ(T^{1 − 1/(2(1+ℓ))} ℓ(K + H^{2ℓ+1})), and running time O(T · min{H^{2ℓ}T^{ℓ}, 2^{H²T}}).*

The proof follows by combining the guarantees of the discretization procedure (Theorem 1) and previously known results (specifically (Auer et al., 1995), Theorem 4.1) and is deferred to Appendix A. Algorithm 3: Partial-information (Exp3 on discretized menus)
Input: Set of menus (experts) X = m1*, . . . , m*n, learning rate β ∈ (0, 1], parameter γ ∈ (0, 1].

1: **Initialize:** For each menu mk, initialize Revk() = 0, wk(0) = 1 2: for *buyer* t = 1*, . . . , T* do Select menu at time t to be mk with probability πk(t) = (1 − γ)πk(t) + γ/n where πk[t] = Pwk(t−1)
n j=1 wj (t−1) ;
For the selected menu k
∗, set gk∗ (t) to be the price paid by buyer t (i.e., gk∗ (t) is equal to p j 1 + kpj2
, where j and k are the tariff index and quantity chosen by buyer t). Set gk∗ (t) = γn gk∗ (t)
πk∗ (t)
;
For all other menus k, set gk
(t) = 0; For all menus k, update Revk(t) = Revk(t − 1) + gk
(t) and wk(t) = (1 + β)
Revk(t)/H;
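
Similarly, here is a minimal Python sketch of Algorithm 3 (our own illustration, following the Exp3 scaling of Auer et al. (1995)); as before, `revenue(menu, v)` is an assumed helper returning the price paid by the buyer, and only the offered menu's revenue is observed.

```python
import random

def exp3_menus(menus, valuations, beta, gamma, H, revenue):
    """Algorithm 3 sketch: Exp3 over a discretized set of menus (bandit feedback)."""
    n = len(menus)
    cum_rev = [0.0] * n                # Rev_k(t): cumulative simulated gains of menu k
    weights = [1.0] * n                # w_k(t)
    total = 0.0
    for v in valuations:
        w_sum = sum(weights)
        probs = [(1 - gamma) * w / w_sum + gamma / n for w in weights]   # the mixed distribution
        k = random.choices(range(n), weights=probs)[0]
        gain = revenue(menus[k], v)    # only the revenue of the offered menu is observed
        total += gain
        g_hat = (gamma / n) * gain / probs[k]    # simulated gain for the offered menu
        cum_rev[k] += g_hat                      # every other menu gets simulated gain 0
        weights[k] = (1.0 + beta) ** (cum_rev[k] / H)
    return total
```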

## 3.2.2 Online Learning Under Smooth Distributions

Recent papers studying online learning of mechanisms, e.g., Balcan et al. (2018b; 2020a), studied the problem in a restricted setting where, at each point in time, instead of a worst-case value, the value is drawn from a bounded-density distribution. This assumption is in the same spirit as the "smoothed analysis" paradigm of Spielman and Teng (Spielman and Teng, 2004) and is used in similar contexts in papers including Cohen-Addad and Kanade (2017) and Gupta and Roughgarden (2017). Specifically, we assume the buyers' valuations come from $\kappa$-bounded distributions, where the density function is bounded at all points by $\kappa$. This assumption has proved to be sufficient for a few classes of mechanisms, including posted-pricing and second-price mechanisms, to establish *dispersion*. At a high level, dispersion ensures that the number of discontinuities in a small ball in the parameter space is limited with high probability and is a sufficient condition for obtaining bounded-regret online algorithms. We prove that menus of two-part tariffs satisfy dispersion and use it to derive bounded-regret algorithms for the full-information, bandit, and semi-bandit settings. The main difference between the algorithms used in this section and the adversarial input setting in Section 3.2.1 is that we previously needed to go through a careful data-independent discretization step (Section 3.1) to reduce the problem to a finite number of experts. However, under smooth distributions, the assumed properties of the distribution influence the set of experts chosen.

We provide the main results in this setting, followed by a discussion of the key ideas behind the algorithms and proofs. After establishing the dispersion condition for menus of two-part tariffs, it suffices to employ previously known algorithms designed for dispersed settings to achieve no-regret guarantees. The primary purpose of this section is to compare the regret guarantees obtained from the recently developed online learning technique of dispersion and the discretization approach discussed in the previous section. The formal definition of dispersion and technical descriptions of the algorithms and proofs are deferred to the appendix. The main results are as follows²:
Definition 9 ($\kappa$-bounded). *A density function $f : \mathbb{R} \to \mathbb{R}$ corresponds to a $\kappa$-bounded distribution if $\max_x f(x) \leq \kappa$.*

Theorem 10. *Let $u_1, \ldots, u_T : C \to [0, H]$ be the revenue functions of two-part tariff menus such that $u_t(\rho)$ denotes the revenue of the mechanism associated with menu parameters $\rho$ for the buyer arriving at time $t$. Let the samples of buyers' values be drawn from $S \sim \mathcal{D}^{(1)} \times \cdots \times \mathcal{D}^{(T)}$. Suppose $v(k) \in [0, H]$ for any number of units $k \in [K]$. Also, suppose that for each distribution $\mathcal{D}^{(t)}$, and every pair of numbers of units $k$ and $k'$, $v(k)$ and $v(k')$ have a $\kappa$-bounded joint distribution. An efficient implementation of the exponentially weighted forecaster with $\lambda = \sqrt{2\ell \ln(2H^2\kappa\sqrt{T})/T}\,/H$ (Algorithm 4) has expected regret bounded by $\tilde{O}\left((H\ell^2K^2\sqrt{\log \kappa} + 1/(H\kappa))\sqrt{T}\right)$ and runs in time $\tilde{O}\left((T+1)^{\mathrm{poly}(\ell,K)}\,\mathrm{poly}(\ell, \sqrt{T}) + KT\sqrt{T}\right)$.*

Theorem 11. *Let $u_1, \ldots, u_T : C \to [0, H]$ be the revenue functions of two-part tariff menus such that $u_t(\rho)$ denotes the revenue of the mechanism associated with menu parameters $\rho$ for the buyer arriving at time $t$. Let the samples of buyers' values be drawn from $S \sim \mathcal{D}^{(1)} \times \cdots \times \mathcal{D}^{(T)}$. Suppose $v(k) \in [0, H]$ for any number of units $k \in [K]$. Also, suppose that for each distribution $\mathcal{D}^{(t)}$, and every pair of numbers of units $k$ and $k'$, $v(k)$ and $v(k')$ have a $\kappa$-bounded joint distribution. There is a bandit-feedback online optimization algorithm with expected regret $\tilde{O}\left(T^{(2\ell+1)/(2\ell+2)}\left(H^2K\sqrt{\ell}\,\kappa^{d/2}\sqrt{\log \kappa} + 1/(H\kappa) + H\ell^2K^2\right)\right)$. The per-round running time is $O(H^{4\ell}\kappa^{2\ell}T^{\ell})$.*

Footnote 2: The regret term of the semi-bandit algorithm (Theorem 12) is smaller than that of the full-information algorithm (Theorem 10) since different notions of dispersion are used. Also, the stated running times of both algorithms are the same; however, this is in the worst case, and the semi-bandit algorithm potentially performs fewer computations.

Theorem 12. *Suppose the buyers' values are drawn from $\mathcal{D}^{(1)} \times \cdots \times \mathcal{D}^{(T)}$, where each $\mathcal{D}^{(t)}$ is $\kappa$-bounded for $\kappa = \tilde{o}(T)$. Then, running the continuous Exp3-SET algorithm (Algorithm 7) for menus of two-part tariffs under semi-bandit feedback has expected regret bounded by $\tilde{O}(H\sqrt{\ell T})$. An efficient implementation has the same regret bound and running time $\tilde{O}\left((T+1)^{\mathrm{poly}(\ell,K)}\,\mathrm{poly}(\ell, \sqrt{T}) + KT\sqrt{T}\right)$.*

Smoothed Distributional Assumptions. In an online setting under smoothed distributions, the algorithm receives samples $S \sim \mathcal{D}^T$, where $\mathcal{D}$ is an arbitrary distribution over problem instances $\Pi$ (which in our case are the buyer valuations). The goal is to find $\hat{\rho}$ that nearly maximizes $\sum_{v \in S} u(v, \rho)$. In this setting, the goal is to find a value $\rho$ that is nearly optimal in hindsight over a stream $v_1, \ldots, v_T$ of instances, or equivalently, over a stream $u_1 = u(v_1, \cdot), \ldots, u_T = u(v_T, \cdot)$ of functions. Each $v_t$ is drawn from a distribution $\mathcal{D}^{(t)}$, which may be adversarial. Therefore, $\{v_1, \ldots, v_T\} \sim \mathcal{D}^{(1)} \times \cdots \times \mathcal{D}^{(T)}$.

Dispersion. Let $u_1, \ldots, u_T$ be a set of functions mapping a set $C \subseteq \mathbb{R}^d$ to $[0, H]$. In this paper, we study the mechanism selection setting: given a collection of problem instances $v_1, \ldots, v_T \in \Pi$ and a utility function $u : \Pi \times C \to [0, H]$, each function $u_i(\cdot)$ might equal the function $u(v_i, \cdot)$, measuring a mechanism's performance on a fixed problem instance as a function of its parameters. Informally, dispersion is a constraint on the functions $u_1, \ldots, u_T$ guaranteeing that although each function $u_i$ may have discontinuities, they do not concentrate in a small region of the parameter space. We study two definitions of dispersion previously introduced in algorithm and mechanism selection problems, $(w, k)$-dispersion (Definition 15) and $\beta$-dispersion (Definition 38), and show that menus of two-part tariffs satisfy both. We use the first to establish online learning results for the full-information and bandit settings and the second for the semi-bandit setting. In order to prove that menus of two-part tariffs satisfy dispersion under smoothed assumptions, we show this family of mechanisms satisfies certain structural properties. Balcan et al. (2018c) show that for two-part tariff menus, for each function $u_i$, the parameter space $C$ is partitioned into sets $\mathcal{P}_1, \ldots, \mathcal{P}_n$ such that $u_i$ is $L$-Lipschitz on each piece, but $u_i$ may have discontinuities at the boundaries between pieces.³ We refine this structural property and show that multisets of parallel hyperplanes, corresponding to the stream of buyer valuations, partition the parameter space $C$ into convex polytopes with bounded-degree linear utility functions inside each polytope. Later, we show this property is sufficient for proving dispersion and employing the related algorithms.

Partitioning of the parameter space into convex regions with linear utilities (Balcan et al., 2018c). Consider the sequence of buyer valuations $\boldsymbol{b}$. At each time step, a buyer is presented with a menu, and based on the menu and their valuation, they select the tariff index and number of units that maximize their utility.

Formally, given menu $\rho$, buyer $i$ with valuation $b_i$ selects option $(j, k)$, where $j$ is the tariff index and $k$ is the number of units, if this option produces at least as much utility for the buyer as any other option. Concretely,

$$b_{i}(k)-\mathds{1}\{k\geq1\}\left(p_{1}^{(j)}(\mathbf{\rho})+kp_{2}^{(j)}(\mathbf{\rho})\right)\geq b_{i}(k^{\prime})-\mathds{1}\{k^{\prime}\geq1\}\left(p_{1}^{(j^{\prime})}(\mathbf{\rho})+k^{\prime}p_{2}^{(j^{\prime})}(\mathbf{\rho})\right)\quad\forall j^{\prime},k^{\prime}\tag{1}$$

where $p_1^{(j)}(\rho)$ and $p_2^{(j)}(\rho)$ are the up-front fee and per-unit fee of tariff $j$ in menu $\rho$. The above inequalities identify a convex polytope of parameter vectors (menus $\rho$) with hyperplane boundaries. Since the tariff index and the number of units that $b_i$ selects are fixed in the region, the revenue, $\mathbb{I}\{k \geq 1\}\left(p_1^{(j)}(\rho) + kp_2^{(j)}(\rho)\right)$, is continuous and more specifically linear in the region (formally proved in Lemma 20). Following the same argument for the buyers in the sequence, the parameter space for each buyer is partitioned into convex polytopes where the revenue for the buyer's valuation is linear inside the polytopes. By superimposing these partitionings, since intersections of convex regions are also convex and the sum of linear functions (here revenues) is linear, the parameter space $C$ is partitioned into convex regions such that the cumulative revenue for the sequence is linear in each region. Inside each region, the utility-maximizing choice of each buyer is fixed; therefore, each region is associated with a *mapping* from buyer valuations to their corresponding utility-maximizing tariff index and number of units. We may use the mapping, formally defined in Section 3.2.3, to denote the region, e.g., region $\mathcal{P}_\mu$ corresponding to mapping $\mu$, or simply use cardinal indices for the regions $\mathcal{P}_1, \mathcal{P}_2, \ldots$.

Footnote 3: This previously-known structural result suffices for the techniques used in the setting with a limited number of buyer types (Section 3.2.3 and Appendix A.1.3); however, we need a refined statement for proving dispersion.
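To make the partition concrete, the sketch below computes a single buyer's utility-maximizing option $(j, k)$ and the resulting revenue for a given menu; a region of the parameter space is precisely a maximal set of menus on which this choice is the same for every buyer in the sequence. Representing a menu as a list of (p1, p2) pairs is an illustrative assumption, not the paper's notation.

```python
def best_response(values, menu):
    """Return the buyer's utility-maximizing option (j, k) and the seller's revenue.

    values: list where values[k] is the buyer's value v(k) for k units (values[0] = 0).
    menu:   list of (p1, p2) pairs, one two-part tariff (up-front fee, per-unit fee) per index j.
    """
    K = len(values) - 1
    best_utility, best_option, revenue = 0.0, (None, 0), 0.0   # opting out: k = 0, pay nothing
    for j, (p1, p2) in enumerate(menu):
        for k in range(1, K + 1):
            payment = p1 + k * p2
            utility = values[k] - payment
            if utility > best_utility:
                best_utility, best_option, revenue = utility, (j, k), payment
    return best_option, revenue
```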


Figure 1: The figure is an abstraction of the regions of the parameter space of two-part tariffs, drawn in two dimensions for illustration. The coordinates are the up-front and per-unit fees for the tariff indices. The dashed hyperplanes correspond to a buyer valuation having the same utility under two pairs of tariff index and number of units; see Equation (1). The colored region is defined by hyperplane boundaries. Inside each such region, any buyer valuation selects a fixed tariff index and number of units, resulting in a linear cumulative revenue function.

Lemma 13. *Consider the sequence of buyer valuations $\boldsymbol{v}$ arriving until time $t$. For menus of two-part tariffs, the parameter space $C$ is partitioned into convex polytopes $\mathcal{P}_1, \ldots, \mathcal{P}_n$ by multisets of parallel hyperplanes, such that the utility function at each time step inside each region $\mathcal{P}_j$ is a linear function satisfying $(K+1)$-Lipschitz continuity.*

Proof. Part of the proof that identifies the regions with linear utilities has been shown previously in Balcan et al. (2018c), Lemma 3.15. We reiterate that part for completeness and also prove the extra structural properties, i.e., parallel hyperplanes and $(K+1)$-Lipschitz continuity. Consider the set of menus for which the buyer with valuation $v^{(i)}$, arriving at time $i$, selects the tariff index $j$ and the number of units $k$. The buyer selects this option for menu $\rho$ if it produces more utility for the buyer than any other option. Formally,

$$v^{(i)}(k)-\mathds{I}\{k\geq1\}\left(p_{1}^{(j)}(\mathbf{\rho})+kp_{2}^{(j)}(\mathbf{\rho})\right)\geq v^{(i)}(k^{\prime})-\mathds{I}\{k^{\prime}\geq1\}\left(p_{1}^{(j^{\prime})}(\mathbf{\rho})+k^{\prime}p_{2}^{(j^{\prime})}(\mathbf{\rho})\right).\ \ \forall j^{\prime},k^{\prime}\tag{2}$$

The above inequalities identify a convex polytope of parameter vectors (menus $\rho$) with hyperplane boundaries. Considering all the possible selections $(j, k)$ (the tariff index and the number of units), the parameter space for $v^{(i)}$ is partitioned into convex polytopes where inside each polytope the payment of $v^{(i)}$ is linear, i.e., $\mathbb{I}\{k \geq 1\}\left(p_1^{(j)}(\rho) + kp_2^{(j)}(\rho)\right)$. Considering the same analysis for all the buyers' valuations in the sequence, for each buyer, the parameter space is partitioned into convex polytopes where inside each polytope the revenue function is linear and $(K+1)$-Lipschitz. Since convex polytopes are closed under intersection, superimposing the partitions for $i = 1, \ldots, t$ results in polytopes with the properties in the statement.

For a fixed valuation vector $v^{(i)}$, the discontinuities in the utility function are defined by at most $\ell^2K^2$ hyperplanes: $v^{(i)}(k) - \mathbb{I}\{k \geq 1\}\left(p_1^{(j)}(\rho) + kp_2^{(j)}(\rho)\right) = v^{(i)}(k') - \mathbb{I}\{k' \geq 1\}\left(p_1^{(j')}(\rho) + k'p_2^{(j')}(\rho)\right)$. Let $\Psi_v$ be the multiset union of all these hyperplanes. Consider a set $S = \{v^{(1)}, \ldots, v^{(t)}\}$ with corresponding multisets $\Psi_{v^{(1)}}, \ldots, \Psi_{v^{(t)}}$ of hyperplanes. We now partition the multiset union of $\Psi_{v^{(1)}}, \ldots, \Psi_{v^{(t)}}$ into at most $\ell^2K^2$ multisets $B_{j,k,j',k'}$ for all $j, j' \in [\ell]$ and $k, k' \in [K]$ such that for each $B_{j,k,j',k'}$, the hyperplanes in $B_{j,k,j',k'}$ are parallel with probability 1 over the draw of $S$. To this end, define a single multiset $B_{j,k,j',k'}$ to consist of the hyperplanes

$$\Big\{\, v^{(i)}(k) - \mathbb{I}\{k \geq 1\}\left(p_{1}^{(j)}(\boldsymbol{\rho}) + kp_{2}^{(j)}(\boldsymbol{\rho})\right) = v^{(i)}(k') - \mathbb{I}\{k' \geq 1\}\left(p_{1}^{(j')}(\boldsymbol{\rho}) + k'p_{2}^{(j')}(\boldsymbol{\rho})\right) \;:\; i = 1, \ldots, t \,\Big\},$$
where the only variables are the coordinates of $\rho$. The hyperplanes inside each multiset are parallel, and the utility on the regions defined by the hyperplanes is linear and $(K+1)$-Lipschitz.⁴

Next, we establish an upper bound on the number of regions with continuous (linear) utility.

Lemma 14. *The partitioning of the parameter space for menus of two-part tariffs explained in Lemma 13 after $T$ rounds results in $O\left((T+1)^{\ell^2K^2}\right)$ regions, with a linear cumulative utility function inside each region.*

Proof. Lemma 13 identifies multisets $B_{j,k,j',k'}$ of size $T$ for each $j, k, j', k'$ such that the hyperplanes inside the multisets are parallel. Therefore, each multiset divides the parameter space into $T + 1$ parts. Thus, each region with continuous utility can be defined as the intersection of at most $\ell^2K^2$ parts, where each part corresponds to a distinct multiset. This results in at most $O\left((T+1)^{\ell^2K^2}\right)$ such regions.

Dispersion for menus of two-part tariffs. We provide intuition as to why menus of two-part tariffs for bounded-density distributions satisfy dispersion; that is, the discontinuities in the revenue function do not concentrate, with high probability. To prove this, we focus on Equation (1) for fixed values of $j, k, j', k'$, i.e., fixed pairs of tariffs and unit counts, and for all $b_i \in \boldsymbol{b}$. The equalities for all of these equations are met at parallel hyperplanes because, for each $\rho$ and fixed pairs of tariffs and units, the other parameters, i.e., $k, k', p_1^{(j)}, p_2^{(j)}, p_1^{(j')}, p_2^{(j')}$, are fixed, and the equations differ only in $b_i$. Assuming independence of distributions among buyers and $\kappa$-bounded joint distributions over $b_i(k)$ and $b_i(k')$, with high probability the intersections of the multisets of parallel hyperplanes defined by Equation (1) do not concentrate, implying dispersion.
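As a toy numerical illustration of this non-concentration argument (not part of the paper's formal development), the sketch below samples the offsets of one multiset of parallel hyperplanes from a $\kappa$-bounded distribution and counts how many fall within distance $w$ of a point; the uniform distribution used here is an assumption chosen to have the maximum allowed density $\kappa$.

```python
import random

def count_offsets_in_ball(T, kappa, w, center=0.5, seed=0):
    """Sample T hyperplane offsets from a kappa-bounded distribution on [0, 1]
    (here: uniform on an interval of length 1/kappa, so the density equals kappa)
    and count how many fall within distance w of `center`."""
    rng = random.Random(seed)
    lo = max(0.0, center - 0.5 / kappa)                 # support of length 1/kappa near center
    offsets = [lo + rng.random() / kappa for _ in range(T)]
    return sum(abs(x - center) <= w for x in offsets)

# With w = 1/(2*H*kappa*sqrt(T)) (i.e., alpha = 1/2), the expected count is about sqrt(T)/H.
```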

We first provide the formal definition of $(w, k)$-dispersion. Recall that $\Pi$ is a set of instances, $C \subset \mathbb{R}^d$ is a parameter space, and $u$ is an abstract utility function. We use the $\ell_2$ distance and let $B(\rho, r) = \{\rho' \in \mathbb{R}^d : \|\rho - \rho'\|_2 \leq r\}$ denote the ball of radius $r$ centered at $\rho$. We use this notion of dispersion to derive our full-information and bandit-setting results.

Definition 15 ((Balcan et al., 2018b), $(w, k)$-dispersion). *Let $u_1, \ldots, u_T : C \to [0, H]$ be a collection of functions where $u_i$ is piecewise Lipschitz over a partition $\mathcal{P}_i$ of $C$. We say that $\mathcal{P}_i$ splits a set $A$ if $A$ intersects with at least two sets in $\mathcal{P}_i$. The collection of functions is $(w, k)$-dispersed if every ball of radius $w$ is split by at most $k$ of the partitions $\mathcal{P}_1, \ldots, \mathcal{P}_T$. More generally, the functions are $(w, k)$-dispersed at a maximizer if there exists a point $\rho^* \in \operatorname{argmax}_{\rho \in C} \sum_{i=1}^{T} u_i(\rho)$ such that the ball $B(\rho^*, w)$ is split by at most $k$ of the partitions $\mathcal{P}_1, \ldots, \mathcal{P}_T$.*

We now prove that menus of two-part tariffs satisfy $(w, k)$-dispersion and use it to derive no-regret online learning results for the full-information and bandit settings.

Proposition 16. *Suppose that $u(v, \rho)$ is the revenue of the two-part tariff menu mechanism with prices $\rho$ and buyer's values $v$. With probability at least $1 - \zeta$ over the draw $S \sim \mathcal{D}^{(1)} \times \cdots \times \mathcal{D}^{(T)}$, for any $\alpha \geq 1/2$ the following statement holds: Suppose $v(k) \in [0, H]$ for any number of units $k \in [K]$. Also, suppose that for each distribution $\mathcal{D}^{(t)}$, and every pair of numbers of units $k$ and $k'$, $v(k)$ and $v(k')$ have a $\kappa$-bounded joint distribution. Then $u$ is*

$$\left(\frac{1}{2H\kappa T^{1-\alpha}},\; O\left(\ell^{2}K^{2}T^{\alpha}\sqrt{\ln\frac{\ell K}{\zeta}}\right)\right)\text{-dispersed}$$

*with respect to $S$.*

Proof. Lemma 13 gives multisets of parallel hyperplanes that partition the parameter space into regions with $(K+1)$-Lipschitz continuous utility functions. Since the samples are drawn independently from $\kappa$-bounded distributions with support $[0, H]$, the offsets of the hyperplanes in each multiset $B_{j,k,j',k'}$ are independent random variables with $H\kappa$-bounded distributions. Furthermore, the number of multisets is at most $\ell^2K^2$. Using these properties, Theorem 32 of Balcan et al. (2018b) gives the statement.

Footnote 4: Partitioning of the parameter space by parallel multisets of hyperplanes has been established before for other families of mechanisms, such as posted pricing (Balcan et al., 2018b). We extend this idea to the more complicated case of two-part tariffs.

After establishing dispersion and showing that the parameter space is partitioned into convex regions with linear cumulative utility inside each region, the no-regret guarantees and the corresponding running times are implied by prior results.

Algorithm 4: Full-information online learning of two-part tariffs under smoothed distributional assumptions (adapted to two-part tariffs from (Balcan et al., 2018b), Algorithm 4)

Input: $\lambda \in (0, 1/H]$, $\eta, \zeta \in (0, 1)$.

1: Set $u_0(\cdot) = 0$ (the constant 0 function over $C$).
2: **for** buyer $t = 1, 2, \ldots, T$ **do**
   Present to the buyer a menu $\rho_t$ sampled with probability approximately proportional to $e^{g(\rho_t)}$, where $g(\cdot) = \lambda\sum_{s=0}^{t-1} u_s(\cdot)$ (use Algorithm 6, with approximation parameter $\eta/4$ and confidence parameter $\zeta/T$);
   Observe the revenue for all the potential menus as the function $u_t(\cdot)$. Receive payment $u_t(\rho_t) = \mathbb{I}\{k \geq 1\}\left(p_1^{(i)}(\rho_t) + kp_2^{(i)}(\rho_t)\right)$, where $i$ and $k$ are the tariff index and the number of units chosen by buyer $t$, respectively, given menu $\rho_t$.
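The following simplified sketch of the exponentially weighted forecaster replaces the approximate continuous sampling step (Algorithm 6) with sampling from a fixed finite grid of menus; this loses the efficient-implementation guarantees but shows the weighting rule. The grid and the `menu_revenue` helper are illustrative assumptions.

```python
import math
import random

def exponential_forecaster(grid, buyer_values, menu_revenue, lam):
    """Sketch of an exponentially weighted forecaster over a finite grid of menus.

    grid:         finite list of candidate menus (stand-in for sampling from C).
    buyer_values: stream of buyer valuations, revealed each round (full information).
    lam:          step size lambda.
    """
    cum = [0.0] * len(grid)                          # cumulative revenue of each grid point
    total = 0.0
    for v in buyer_values:
        weights = [math.exp(lam * c) for c in cum]   # weight proportional to exp(lam * past revenue)
        chosen = random.choices(range(len(grid)), weights=weights)[0]
        total += menu_revenue(grid[chosen], v)
        for i, menu in enumerate(grid):              # full information: update every grid point
            cum[i] += menu_revenue(menu, v)
    return total
```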

Overview of Algorithms. We provide high-level ideas of the full-information, bandit, and semi-bandit setting algorithms used for Theorems 10 to 12, respectively. Generic forms of these algorithms were devised by Balcan et al. (2018b; 2020a) for dispersed families of algorithms. The full information algorithm considers the cumulative revenue function up until time $t-1$ over the parameter space and samples the menu to present at time $t$ proportionally to an exponential function of its cumulative revenue. In order to have an efficient implementation, they use techniques from high-dimensional geometry and approximately sample menu $\rho_t$. Let $\mathcal{P}_1, \ldots, \mathcal{P}_n$ be the partition of $C$ until time $t$. The algorithm picks $\mathcal{P}_i$ with probability approximately proportional to the region's cumulative weight and outputs a sample from the conditional distribution over menus in $\mathcal{P}_i$. The bandit-setting algorithm considers a grid over the parameter space, whose granularity depends on the dispersion parameters, and runs the Exp3 algorithm over the menus corresponding to the grid. The semi-bandit setting algorithm is a continuous version of the Exp3-SET algorithm of Alon et al. (2017b). At each time step, the algorithm learns the revenue function (only) inside the region $\mathcal{P}_i$ that the presented menu belongs to and updates the menu weights for the next round accordingly.

Comparison to the results in Section 3.2.1. Although the discretization-based algorithms work under adversarial inputs and are therefore more general, they provide similar regret bounds and even improved running times in some cases. In the full information case, the dependence of the regret bound on the parameter $T$ is similar for both algorithms. In terms of running time, the discretization-based algorithm suffers a worse dependence on $H$, but enjoys a better dependence on $T$ and $K$ (the maximum number of units) compared to the dispersion-based algorithm. In the bandit setting, the regret bounds are likewise similar in their dependence on $T$, while the running-time comparison depends on the value of $\kappa$ (the maximum density under the smoothness assumption), and lower-density distributions may result in better running times.

Comparison to prior work. For menus of two-part tariffs, it has been shown in Balcan et al. (2018b) that, based on the values observed from buyers until time $t$, the parameter space is partitioned into convex regions with hyperplane boundaries such that the utility inside each region satisfies Lipschitz continuity. We give a more refined characterization by showing that (1) the utility function inside each region is linear, and (2) the boundary hyperplanes constitute multisets of parallel hyperplanes. Properties (1) and (2) are important for establishing dispersion and obtaining no-regret online learning algorithms under smooth distributional assumptions, as in Theorems 10 and 11. After establishing dispersion, we use previously developed results, i.e., regret bounds for dispersed settings, from prior work (e.g., Theorem 1 in Balcan et al. (2018b) for the full information setting and Theorem 3 in Balcan et al. (2018b) for the bandit setting). The algorithms for the full-information, semi-bandit, and bandit settings were previously developed in a general form (Balcan et al., 2018b; 2020a) for any problem setting satisfying the dispersion property. We adapt those algorithms to our settings in Algorithms 4, 6 and 7.

## 3.2.3 Limited Buyer Types

In this section, we assume that there are a finite number of known buyer types. This information provides extra structure compared to the general setting considered previously. In particular, the mechanism designer now knows where the potential discontinuities happen as a function of the parameters. We provide algorithms with bounded regret for both the full information and partial information settings specific to limited types. These algorithms improve the regret bounds significantly when the number of buyer types is small. This section is inspired by Balcan et al. (2015) and includes similar algorithms and notation. Balcan et al. (2015) study a security games setting, in which at each time step the *defender* has a mixed strategy (a probability distribution) for protecting the *attack targets*. Knowing this mixed strategy, the attacker selects a target to attack that maximizes the attacker's utility (depending on the attacker's type). Considering the target selected by each attacker type as a function of the defender's mixed strategy, the mixed strategy space is partitioned into regions where the action of each attacker type is fixed throughout each region. This is very similar to our setting, where the parameter space is partitioned into regions such that inside each region, each buyer type selects a fixed tariff index and number of units (see the discussion on partitioning the parameter space in Section 3.2.2). Balcan et al. use the linear structure of the utility function inside each region to develop a no-regret full-information algorithm. In the partial information setting, beyond the linearity of the utility functions, they use the dependence of an agent's (in their case, attacker; in our case, buyer) actions across different regions and identify a limited number of mixed strategies (corresponding to menus in our case) such that observing the agent's response to them suffices to estimate the utility of the other strategies. We use similar machinery in both the full and partial information settings. However, the source of linearity of the utility differs across the two settings. In the security games context, the attacker's action corresponds to a fixed coordinate axis in the parameter space, and the utility is defined as a fixed linear function of that coordinate. In our setting, however, the utility depends on multiple coordinates, and its formula depends on the buyer's choice. Nevertheless, we show the cumulative utility is a linear function of the coordinates (see Lemma 20). For completeness and to make the paper self-contained, we include a full description of the algorithms and techniques adapted to our setting and using our terminology.

In this setting, we utilize the knowledge of the potential buyer types to design a limited number of menus and optimize over this set. In contrast to the previous section, where the valuations were realized after the arrival of the buyers, here we have access to all potential buyer types up-front. Similarly to the discussion in Section 3.2.2, the piecewise linear structure of the buyers' utilities partitions the parameter space such that each part has a linear cumulative utility (Balcan et al., 2018c). This partitioning is equivalent to dividing the parameter space into convex regions such that in each region, there is a fixed *mapping* from the buyer types to the menu options that each buyer selects. We show that in each region, we need to consider only a limited number of menus, namely the extreme points.

Consider $v_1, \ldots, v_V$ as the set of all potential buyer valuations; $V$ denotes the number of buyer types. In order to define the behavior of buyers in each region, we need to define a concept called *menu options*, which determines the buyers' choices.

Definition 17 (menu option for menus of two-part tariffs, $\mathcal{O}$). *A pair $(j, k)$, where $j$ is the tariff index $1, \ldots, \ell$, and $k$ is the number of units $0, 1, \ldots, K$, is a menu option. We denote the set of all menu options by $\mathcal{O}$. This set identifies all potential actions of a buyer when presented with a menu.*

Definition 18 (mapping $\mu$, feasible mappings, $\mathcal{P}_\mu$). *A mapping $\mu$ is a function from buyer types $v_1, \ldots, v_V$ to menu options $(j, k)$, where $j$ and $k$ are the tariff index and the number of units assigned to the buyer type, respectively. Mapping $\mu$ is feasible if there is a menu corresponding to the mapping, i.e., a menu such that, if presented to the buyers, each buyer selects their corresponding option in the mapping as their utility-maximizing option. $\mathcal{P}_\mu$ denotes the region of the parameter space corresponding to $\mu$, i.e., the set of menus inducing mapping $\mu$.*

Using the discussion in Section 3.2.2, the parameter space is partitioned into convex polytopes, each with a linear utility function for any sequence of buyer types. We reiterate this result in Lemmas 19 and 20, adapting the statements to the limited buyer type setting and the corresponding notation.
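The sketch below computes the mapping $\mu$ induced by a given menu over the $V$ known buyer types; two menus lie in the same region $\mathcal{P}_\mu$ exactly when they induce the same mapping. The list-of-(p1, p2) menu representation is an illustrative assumption.

```python
def induced_mapping(buyer_types, menu):
    """Return the mapping mu from buyer-type index to the chosen option (j, k).

    buyer_types: list of valuation vectors (values[k] = value for k units, values[0] = 0).
    menu:        list of (p1, p2) two-part tariffs.
    """
    mapping = []
    for values in buyer_types:
        best_u, choice = 0.0, (None, 0)          # opting out (k = 0) yields utility 0
        for j, (p1, p2) in enumerate(menu):
            for k in range(1, len(values)):
                u = values[k] - (p1 + k * p2)
                if u > best_u:
                    best_u, choice = u, (j, k)
        mapping.append(choice)
    return tuple(mapping)
```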

Lemma 19. For each feasible mapping µ, as defined in Definition 18, Pµ is a convex polytope with hyperplane boundaries.

Proof. The statement is a corollary of Lemma 13. For a fixed buyer type $i$ and option $(j, k)$, let $\mathcal{P}^{(i)}_{(j,k)}$ be the set of all parameter vectors $\rho$, corresponding to length-$\ell$ menus, for which buyer type $i$ selects option $(j, k)$. The buyer selects option $(j, k)$ for menu $\rho$ if this option produces more utility for the buyer than any other option. Formally,

$$v_{i}(k)-\mathbb{I}\{k\geq1\}\left(p_{1}^{(j)}(\boldsymbol{\rho})+kp_{2}^{(j)}(\boldsymbol{\rho})\right)\geq v_{i}(k^{\prime})-\mathbb{I}\{k^{\prime}\geq1\}\left(p_{1}^{(j^{\prime})}(\boldsymbol{\rho})+k^{\prime}p_{2}^{(j^{\prime})}(\boldsymbol{\rho})\right)\quad\forall j^{\prime},k^{\prime}.$$

The above inequalities identify a convex polytope of parameter vectors (menus $\rho$) with hyperplane boundaries. $\mathcal{P}_\mu$ is the intersection of $\mathcal{P}^{(i)}_{\mu(i)}$ for $i = 1, \ldots, V$. Therefore, $\mathcal{P}_\mu$ is also a convex region with hyperplane boundaries.

Lemma 20. *For each feasible mapping $\mu$ and any sequence of buyer valuations $\boldsymbol{b}$, the cumulative utility, $\sum_i u(b_i, \rho)$, is linear in $\mathcal{P}_\mu$.*

Proof. Before presenting the proof, we point out the difference between the proof of linearity in Balcan et al. (2015) and in this lemma. In Balcan et al. (2015), in each region, the attacker (corresponding to the buyer in our case) chooses a target. There is a one-to-one correspondence between targets and coordinate indices of the parameter space. The utility is defined as a fixed linear function of the corresponding coordinate, immediately implying its linearity in the parameter space in each region. In our setting, however, the utility depends on multiple coordinates and its formula depends on the buyer's choice.

The proof builds on Lemma 13. We show that for any buyer valuation $v_i$ in the sequence, $u(v_i, \rho)$ is linear in the region. Proving this claim is sufficient for concluding the statement. Let $(j, k) = \mu(v_i)$, i.e., $j$ is the tariff index and $k$ is the number of units that buyer valuation $v_i$ selects under $\mu$. Therefore, the utility for the mechanism designer for menu $\rho \in \mathcal{P}_\mu$ is $\mathbb{I}\{k \geq 1\}\left(p_1^{(j)}(\rho) + kp_2^{(j)}(\rho)\right)$. Both $p_1^{(j)}(\rho)$ and $p_2^{(j)}(\rho)$ grow linearly as a function of $\rho$. Therefore, since the option that each buyer valuation selects (the tariff index and the number of units) is fixed inside $\mathcal{P}_\mu$, the utility is also linear.

After establishing the partitioning of the parameter space into convex polytopes with linear utilities, for optimization purposes it seems enough to consider only the menus corresponding to the extreme points. This intuition is accurate up to a small tweak. Depending on the tie-breaking rule of buyers among menu options producing the same utility, the polytopes $\mathcal{P}_\mu$ may not be closed. Therefore, depending on the tie-breaking rule, we consider a menu in the proximity of the extreme point but inside the polytope.

Definition 21 ($\mathcal{E}$, extended set of extreme points (Balcan et al., 2015)). *For a given $\varepsilon > 0$, the set $\mathcal{E}$ is the set of menus defined as follows: for any $\mu$ and any $\rho$ that is an extreme point of the closure of $\mathcal{P}_\mu$, if $\rho \in \mathcal{P}_\mu$, then $\rho \in \mathcal{E}$; otherwise, there exists $\rho' \in \mathcal{E}$ such that $\rho' \in \mathcal{P}_\mu$ and $\|\rho - \rho'\|_1 \leq \varepsilon$. From now on, we may refer to $\mathcal{E}$ as the extreme points.*

Lemma 22. *The number of extreme points, $|\mathcal{E}|$, is at most $(V\ell^2K^2/4)^{2\ell}$.*

Proof. Length-$\ell$ menus of two-part tariffs occupy a $2\ell$-dimensional parameter space. In a $d$-dimensional space, an extreme point is the intersection of $d$ linearly independent hyperplanes. The total number of hyperplanes defining the regions is $\mathcal{H} = V\ell^2\binom{K}{2}$, counting, for each buyer type, a comparison of the utility of every pair of options, i.e., numbers of units $0, \ldots, K$ and tariff indices $1, \ldots, \ell$. Out of these hyperplanes, we need $2\ell$ of them to intersect to form an extreme point. Therefore, the number of extreme points is at most $\binom{\mathcal{H}}{2\ell}$, implying the statement.

The following lemma bounds the loss in utility when the set of menus is limited to the extreme points $\mathcal{E}$. The proof is similar to Balcan et al. (2015); however, the loss depends on the problem-specific utility functions.

Lemma 23. *Let $\mathcal{E}$ be as defined in Definition 21. Then for any sequence of buyer valuations $\boldsymbol{b} = b_1, \ldots, b_T$, and $\rho^*$ the optimal menu in hindsight:*

$$\max_{\boldsymbol{\rho}\in\mathcal{E}}\sum_{t=1}^{T}u(\boldsymbol{b}_{t},\boldsymbol{\rho})\geq\sum_{t=1}^{T}u(\boldsymbol{b}_{t},\boldsymbol{\rho}^{*})-2K\varepsilon T.$$

Proof. The proof consists of a few simple steps: (i) since the mappings partition the space into regions with a fixed mapping, there exists a mapping $\mu$ such that $\rho^* \in \mathcal{P}_\mu$; (ii) the revenue of the buyer valuation sequence is linear in $\mathcal{P}_\mu$, as shown in Lemma 20; (iii) the closure of $\mathcal{P}_\mu$ is a convex polytope whose extreme points contain the maximizers of the linear function $\sum_{b_i \in \boldsymbol{b}} u(b_i, \rho)$; (iv) one of the maximizers has cumulative utility at least as large as that of $\rho^*$; (v) the parameter vectors in $\varepsilon$-proximity of the extreme point inside $\mathcal{P}_\mu$ approximately preserve the revenue of the extreme points; (vi) since by the definition of $\mathcal{E}$ the $L_1$ distance of each member to an extreme point is at most $\varepsilon$, there is at most an $\varepsilon$ difference in the up-front fee and per-unit fee of any tariff, resulting in the bound in the statement.

Full Information. We first provide an algorithm for the full information case specific to a finite number of buyer types. The main result of this section is provided below. The algorithm achieving this regret guarantee is a weighted majority algorithm (Algorithm 2) on the set of menus corresponding to the extreme points $\mathcal{E}$.

Theorem 24. *In the full information case for length-$\ell$ menus of two-part tariffs, when there are $V$ types of buyers, running Algorithm 2 over the set of menus corresponding to the set $\mathcal{E}$ for $\beta = 1/\sqrt{T}$ has regret bounded by $\tilde{O}\left(H\ell\sqrt{T\ln(V\ell K)}\right)$.*

The proof follows from Lemma 23 and the guarantee of the weighted majority algorithm and is deferred to the appendix.

Partial Information (bandit). In the partial information setting, in each time step $t$, we present the arriving buyer with a menu and only observe the option selected by the buyer (i.e., the tariff and the number of units) in the presented menu. A natural approach in this setting is running the Exp3 algorithm and using the weighted majority algorithm for the full information case as a subroutine. However, this approach leads to a regret bound that is exponential in the size of the menu (this result is presented formally in Appendix A). An alternative to this approach is estimating the revenue of the other menus, more technically finding an *unbiased estimator* with *bounded range* for the revenue of all the menus, and then running the full information algorithm with the estimates, as introduced by Awerbuch and Mansour (2003). We take the latter approach and find the estimates by employing the notion of *barycentric spanners* (Awerbuch and Kleinberg, 2008). A barycentric spanner is a basis in a vector space such that any vector can be represented as a linear combination of the basis vectors with bounded coefficients. By utilizing this concept, we provide algorithms with a regret bound that is sublinear in the number of time steps and polynomial in the other parameters. Similar ideas were employed in Balcan et al. (2015).

There are two main ideas underlying our bounded-regret algorithm. The first is a reduction from the partial information case to the full information case assuming oracle access to *proper estimates* of the utilities of all the menus, and the second is deriving these estimates. The first idea was introduced by Awerbuch and Mansour (2003), and we directly use a theorem of Balcan et al. (2015), inspired by it, that suits our setting more accurately. For the second, we also use machinery similar to Balcan et al. (2015). We first show how to estimate the utility of any menu by only using the responses of the buyers to a limited number of menus. In doing so, we take advantage of the dependence between the responses of the buyers to different menus to obtain estimates for unused menus. In order to estimate the expected revenue of each menu over a time interval, it is sufficient to estimate the probability of selection of each option in the menu (tariff index and number of units) by the buyers. Since the price of each option is determined by the menu, we can infer the expected revenue using these probabilities. Note that the option that each buyer type selects is fixed throughout each region. Balcan et al. (2015) use the dependence between these probabilities across regions to find a limited set of menus from which the estimates can be inferred. An analogous argument to theirs in our setting is as follows. Let $\mathcal{I}$ be the set of length-$V$ indicator vectors that, for each region $\mathcal{P}_\mu$ and each option $(j, k)$, indicate the (maximal set of) buyer types that select the option $(j, k)$ given menus in $\mathcal{P}_\mu$. The algorithm presents the menus corresponding to the barycentric spanner of $\mathcal{I}$ to buyers at random times and records whether the buyer selects the corresponding option. We show that the utility of each menu can be represented as a linear function of its corresponding vectors in $\mathcal{I}$ and, therefore, as a linear function of the barycentric spanner vectors of $\mathcal{I}$. This is enough to derive the estimates.

Now, we describe the overall structure of the algorithm. The algorithm operates in time blocks, with each block consisting of exploitation and exploration time steps. The exploration time steps are selected uniformly at random within the block and are limited in number. In an exploitation step, the menu used is the output of the full information algorithm, employing the unbiased estimators from the previous time block. These menus are always among the extreme points $\mathcal{E}$. During exploration time steps, the menus corresponding to the barycentric spanner are used. At the end of each time block, the algorithm refines the estimators of all extreme points using the information gathered in the exploration phases. The uniform random selection of time steps ensures that under any arbitrary sequence of valuations, the values observed in exploration time steps are selected uniformly at random, and thus the estimator is unbiased (the expected value of the utility estimator for each menu is equal to the utility of that menu). A detailed description and proof of the theorem are provided in the appendix.

Theorem 25. *In the partial information (bandit) case for length-$\ell$ menus of two-part tariffs, when there are $V$ different types of buyers, there is an algorithm with regret bound $\tilde{O}\left(T^{2/3}\ell(HKV)^{1/3}\log^{1/3}(V\ell K)\right)$.*

Technical contribution. Although the general structure of the algorithm is similar to Balcan et al. (2015),
the problem settings are quite different, and whether similar ideas could work in both settings is not apparent.

We are able to adapt the ideas to provide no-regret algorithms for menus of two-part tariffs. This adaptation requires establishing new properties and definitions for our problem settings.

## 3.3 Distributional Learning For Two-Part Tariffs

We present distributional learning results for menus of two-part tariffs. The learning algorithm simply considers all menus in the discretized set specified by Theorem 1 and outputs the empirical revenue-maximizing menu given the samples. More specifically, for each menu in the discretized set, the algorithm computes the cumulative revenue achieved from the samples and outputs the menu with the maximum cumulative revenue.

The revenue from each sample (buyer) for a fixed menu is the total payment corresponding to the buyer's utility-maximizing option (tariff index and number of units). This approach differs substantially from the previous line of work, e.g., Balcan et al. (2018c; 2020b; 2022b), which did not use a discretization and instead optimized over the infinite parameter space.
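The following sketch spells out this empirical-revenue-maximization step over a given finite set of menus; constructing the discretized set itself (Theorem 1) is not shown, and the menu and sample representations are illustrative assumptions.

```python
def erm_over_discretized_menus(discretized_menus, samples):
    """Return the menu from the discretized set with maximum empirical revenue.

    discretized_menus: finite list of menus, each a list of (p1, p2) tariffs.
    samples:           list of sampled buyer valuation vectors (values[k] = value for k units).
    """
    def revenue(menu, values):
        best_u, pay = 0.0, 0.0                    # the buyer may opt out and pay nothing
        for p1, p2 in menu:
            for k in range(1, len(values)):
                u = values[k] - (p1 + k * p2)
                if u > best_u:
                    best_u, pay = u, p1 + k * p2
        return pay

    return max(discretized_menus,
               key=lambda menu: sum(revenue(menu, v) for v in samples))
```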

Theorem 26. *In the distributional setting, for length-$\ell$ menus of two-part tariffs, there exists a learning algorithm with sample complexity $\frac{H^2}{2\varepsilon^2}\left(2\ell\ln\left(\frac{2KH\ell}{\varepsilon}\right) + \ln(2/\delta)\right)$ and running time $\frac{H^2}{2\varepsilon^2}\left(2\ell\ln\left(\frac{2KH\ell}{\varepsilon}\right) + \ln(2/\delta)\right) K\ell\left(\frac{2HK\ell}{\varepsilon}\right)^{2\ell}$.*

Remark. For menus of length larger than one, i.e., $\ell > 1$, Theorem 26 provides a much simpler algorithm whose running time is roughly the square root of the running time of the previous results (Balcan et al., 2020b; 2022b) in the worst case in terms of the parameters $H$, $K$, and $1/\varepsilon$. Under extra structural assumptions, Balcan et al. (2022b) may result in better running times (see Appendix B for more details). Furthermore, in real-world applications of menus of two-part tariffs, the length of the menu is often a small number; for example, there are a limited number of gym membership or delivery subscription options. Therefore, the exponential dependence on the length of the menu might not be a significant issue in such settings.

Technical comparison to prior work. For both menus of lotteries and two-part tariffs, distributional learning results were presented before (Balcan et al., 2018c; 2020b). Our discretization-based techniques lead to improvements over the previously best-known algorithms. Our algorithms choose a set of menus in a data-independent way (via data-independent discretization) and then select the best of them based on the data (empirical risk minimization over a cover); the prior algorithms, in contrast, optimize over the infinite space based on the sampled data (empirical risk minimization over the entire space, utilizing the geometric structure of the utility functions). In the context of two-part tariffs, our algorithm is much simpler than prior ones for the same problem, yet it enjoys improved worst-case runtime guarantees compared to them (Balcan et al., 2018c; 2020b) when the length of the menu is more than one (Theorem 26).

## 4 Menus Of Lotteries

Consider selling $m$ items to a buyer. A set $M = \left\{\left(\phi^{(0)}, p^{(0)}\right), \left(\phi^{(1)}, p^{(1)}\right), \ldots, \left(\phi^{(\ell)}, p^{(\ell)}\right)\right\} \subseteq \mathbb{R}^m \times \mathbb{R}$, where $\phi^{(0)} = \mathbf{0}$ and $p^{(0)} = 0$, is a length-$\ell$ menu of lotteries. Each $\phi^{(j)}$ is a vector of length $m$. Under the lottery $\left(\phi^{(j)}, p^{(j)}\right)$, a buyer receives each item $i$ with probability $\phi^{(j)}[i]$ and pays a price of $p^{(j)}$. The buyer's expected utility for the lottery $\left(\phi^{(j)}, p^{(j)}\right)$ is their expected value for the lottery less their payment. We consider additive and unit-demand buyers. For additive buyers, their value for lottery $j$ is $\sum_{i=1}^{m} v(e_i)\cdot\phi^{(j)}[i]$, where $v(e_i)$ is their value for item $i$. The buyer's expected utility is $\sum_{i=1}^{m} v(e_i)\cdot\phi^{(j)}[i] - p^{(j)}$. Note that for additive buyers, due to linearity of expectation, it does not matter whether the allocations of the items in a lottery are independent or correlated. For unit-demand buyers, without loss of generality, we only consider lotteries such that $\sum_{i=1}^{m} \phi^{(j)}[i] \leq 1$. Under this constraint, for each lottery $j$, the allocations of the items are dependent, and the buyer never receives more than one item. In this case, the utility for lottery $j$ has the same expression as for additive buyers. Presented with a menu of lotteries, the buyer selects a utility-maximizing lottery $\left(\phi^{(j^*)}, p^{(j^*)}\right)$ and the mechanism achieves revenue $p^{(j^*)}$.
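For concreteness, the sketch below computes the choice of an additive buyer facing a menu of lotteries and the resulting revenue; representing each lottery as an (allocation vector, price) pair is an illustrative assumption.

```python
def lottery_revenue_additive(item_values, menu):
    """Revenue from an additive buyer facing a menu of lotteries.

    item_values: list of the buyer's values v(e_i) for the m items.
    menu:        list of (phi, p) pairs, phi a list of allocation probabilities, p the price.
                 The null lottery (all-zero allocation, price 0) is assumed to be included.
    """
    def utility(phi, p):
        return sum(v * q for v, q in zip(item_values, phi)) - p

    # The buyer picks a utility-maximizing lottery; the seller's revenue is its price.
    best_phi, best_p = max(menu, key=lambda entry: utility(*entry))
    return best_p
```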

Putting the problem formulation in the context of Section 2, $\mathcal{M}$ is the set of all menus of lotteries, each parameterized by $\rho$, which in this case contains all $\phi^{(j)}$ and $p^{(j)}$, where each $\phi^{(j)}[i] \in [0, 1]$ and $p^{(j)} \in [0, mH]$ for the additive setting (and $\in [0, H]$ for the unit-demand setting). $\Pi$ is the set of buyer valuations and $u : \Pi \times C \to [0, mH]$ is a utility function where $u(v, \rho)$ measures the revenue of the menu with parameters $\rho$ on buyer valuation $v \in \Pi$.

## 4.1 Discretization Procedure

In this section, we introduce a rounding procedure for menus of lotteries. In this procedure, given any vector of parameters (representing a menu) with arbitrary coordinates, we find a transformation to another vector that has two properties: first, the revenue of the output is nearly as high as that of the original menu for any valuation; second, the coordinates corresponding to allocation probabilities and prices belong to a finite set of values. This rounding procedure, performed on all possible menus, results in a final set of outcomes.

We perform the learning algorithms over this finite set.

Theorem 27. *Given a menu of lotteries $M$ and parameters $0 < \alpha < 1$, $0 < \delta < 1$, and $K$, an arbitrary natural number, Algorithm 5 outputs a menu $M'$ such that $\mathrm{Rev}(M') \geq \mathrm{Rev}(M)(1-\delta)(1-\alpha)^K - (2K+1)\alpha - mH(1-\delta)^K$. The set of possible allocation probabilities is $\{0, (1-\alpha)^{K'}, (1-\alpha)^{K'-1}, \ldots, (1-\alpha)^0 = 1\}$, where $K' = \lfloor 1/\alpha \ln(Hm/\alpha)\rfloor$, and the set of possible prices is $\{0, Hm\alpha, 2Hm\alpha, \ldots, Hm\}$. This constitutes a space with at most $O\left((1/\alpha)^{\ell m + \ell}(\ln(Hm/\alpha))^{\ell m}\right)$ discrete points when limiting to length-$\ell$ menus, and $O\left(2^{(1/\alpha)^{m+1}(\ln(Hm/\alpha))^{m}}\right)$ discrete points for arbitrary-length menus.*
Overview of Algorithm 5. The algorithm consists of three main steps, and its logic is similar to that of Dughmi et al. (2014). In Step 1, we divide the lotteries in the menu exceeding a minimum price into $K$ levels based on their price (and remove the ones below the minimum). The division in prices is proportional to powers of $(1-\delta)$, with a higher level $k$ having a higher price than a lower level $k' < k$. Step 2 rounds down the allocation probability coordinates to a finite set. By multiplying $\phi$ by $(1-\alpha)^{K-k}$ and then rounding to integer powers of $(1-\alpha)$, the allocation probabilities of lower-price levels decrease by a larger factor, making lower-price levels less desirable. Step 3 rounds down the prices, first by multiplying all prices by the same factor $(1-\alpha)^K$, then by rounding to multiples of $\alpha$, and finally by subtracting $2k\alpha$, which subtracts more from the price of originally higher-price entries. The main insight behind nearly preserving the revenue of the original menu (and circumventing the issue with simple rounding) is that the prices of the more expensive lotteries (higher-price levels) are decreased more than those of the lower-price ones, while their allocations decrease by a smaller factor. This ensures that no buyer, *with any valuation*, switches from a higher-price level to a lower-price one after the rounding.

Algorithm 5: (Almost) revenue-preserving rounding for menus of lotteries

Input: Menu of lotteries $M$ with entries of pairs $(\phi, p)$, $K \in \mathbb{N}$, and $\alpha, \delta$ such that $0 < \alpha < 1$ and $0 < \delta < 1$.

Step 1: Partition the entries $(\phi, p)$ of the menu $M$ into levels, where each level $k$, for $k = 1, \ldots, K$, contains all entries whose price is in the range $mH(1-\delta)^{K-k+1} < p \leq mH(1-\delta)^{K-k}$. For every entry $(\phi, p)$ in level $k$, put an entry $(\phi', p')$ in $M'$, where $\phi'$ is the outcome of Step 2 and $p'$ is obtained by Step 3.

Step 2: Multiply $\phi$ by $(1-\alpha)^{K-k}$, and round down all allocation probabilities to the set consisting of zero and all integer powers of $(1-\alpha)$ in the range $\left[\frac{\alpha}{Hm}, 1\right]$.

Step 3: First multiply $p$ by a factor of $(1-\alpha)^K$, then round $p$ down to an integer multiple of $\alpha$, and then subtract $2k\alpha$.

Output: $M'$, the modified menu.

Before providing the proof of the discretization step, we note that this procedure for menus of lotteries needs extra care: common rounding of the parameters may result in arbitrarily lower revenue. For example, if there are two lotteries with similar utility for the buyer but a large difference in prices, minor changes in the probabilities of allocation or the prices may make the buyer switch from the high-price lottery to the low-price one. What follows is a concrete example of why standard rounding procedures fail.

Original menu:

| alloc. prob. | price | utility |
|--------------|-------|---------|
| 0            | 0     | 0       |
| 0.26         | 0.24  | -0.084  |
| 0.95         | 0.52  | 0.05    |

Allocation probabilities and prices rounded down:

| alloc. prob. | price | utility |
|--------------|-------|---------|
| 0            | 0     | 0       |
| 0.25         | 0.125 | 0.025   |
| 0.5          | 0.5   | -0.2    |

Allocation probabilities rounded up, prices rounded down:

| alloc. prob. | price | utility |
|--------------|-------|---------|
| 0            | 0     | 0       |
| 0.5          | 0.125 | 0.175   |
| 1            | 0.5   | 0.1     |

Allocation probabilities and prices rounded up:

| alloc. prob. | price | utility |
|--------------|-------|---------|
| 0            | 0     | 0       |
| 0.5          | 0.25  | 0.05    |
| 1            | 1     | -0.4    |

Example 1. *Consider a menu of three lotteries for a single item, and a buyer who has value 0.6 for the item. The first table above shows the original menu. Under this menu, the buyer's highest-utility option is the last lottery, which yields the highest revenue, i.e., $\mathrm{Rev} = 0.52$. The following tables show the new menus after rounding down the allocation probabilities and prices, rounding up the allocation probabilities and rounding down the prices, and rounding up the allocation probabilities and prices (all to powers of 1/2), respectively. All these transformations result in the highest-utility lottery changing to the middle lottery, which yields smaller revenue.*

Proof of Theorem 27. Most of this proof is identical to that of Dughmi et al. (2014). Note that in the algorithm, the original entries in a menu are divided into levels $k = 1, \ldots, K$ such that $k = 1$ is the lowest-price level and $k = K$ is the highest-price one. First, we show that if a buyer's utility-maximizing lottery is in level $k$ given $M$, their utility-maximizing lottery in $M'$ is never in a lower-price level $k' < k$.

Intuitively, the reason is that the lotteries with lower-level prices have their allocations reduced more and their prices reduced less than the ones in higher levels. More formally, let $(x, p)$ be at level $k$ and $(y, q)$ at level $k' < k$. Also, let $(x', p')$ and $(y', q')$ be the corresponding transformed lotteries in the output of the algorithm. Then, $p' - q' < \left((1-\alpha)^K p - 2k\alpha\right) - \left((1-\alpha)^K q - 2k'\alpha - \alpha\right) \leq (1-\alpha)^K(p - q) - \alpha$, and for every valuation $v$, $x'\cdot v - y'\cdot v > \left((1-\alpha)^{K-k+1}x\cdot v - \alpha\right) - (1-\alpha)^{K-k'}y\cdot v \geq (1-\alpha)^K(x\cdot v - y\cdot v) - \alpha$. Now, consider an arbitrary valuation $v$ that obtains higher utility choosing $(x, p)$ than $(y, q)$. Therefore $x\cdot v - p \geq y\cdot v - q$, and therefore $p - q \leq x\cdot v - y\cdot v$. Combining this inequality with the ones above implies $x'\cdot v - p' \geq y'\cdot v - q'$.

Secondly, we compute an upper bound on the loss incurred. Suppose the original utility-maximizing lottery was $(x, p)$ in $M$. Also, suppose that in $M'$, the utility-maximizing lottery is $(y', q')$, which is the transformation of $(y, q)$. The first scenario is when $p \geq mH(1-\delta)^K$. Note that in this case $q$ may be smaller than $p$ by a factor of $(1-\delta)$; then, to obtain $q'$, we first lose a multiplicative factor of $(1-\alpha)^K$ and then an additive factor of at most $(2K+1)\alpha$ (including the rounding). Thus $q' \geq (1-\delta)(1-\alpha)^K p - (2K+1)\alpha$. In the second case, where $p < mH(1-\delta)^K$, the loss is at most $mH(1-\delta)^K$. Therefore, in any case, $q' \geq (1-\delta)(1-\alpha)^K p - (2K+1)\alpha - mH(1-\delta)^K$.

Thirdly, the set of possible prices is $\{0, Hm\alpha, 2Hm\alpha, \ldots, Hm\}$, which is of size $1/\alpha$, and the set of possible allocation probabilities is $\{0, (1-\alpha)^{K'}, (1-\alpha)^{K'-1}, \ldots, (1-\alpha)^0 = 1\}$ for $K' = \lfloor 1/\alpha \ln(Hm/\alpha)\rfloor$, which is of size $1/\alpha \ln(Hm/\alpha)$. In length-$\ell$ menus, there are $\ell$ prices and $m\ell$ allocation probabilities in total. In unlimited-length menus, we consider the possibility that each potential lottery (each distinct vector of parameters) belongs to the menu or not. This analysis gives us the final size of the set of discrete points.
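A minimal sketch of the three rounding steps of Algorithm 5 is given below, assuming the same (allocation vector, price) menu representation as before; it mirrors the steps stated above and is not an optimized implementation.

```python
import math

def round_menu(menu, m, H, K, alpha, delta):
    """Sketch of Algorithm 5: (almost) revenue-preserving rounding of a lottery menu.

    menu: list of (phi, p) pairs, phi a list of m allocation probabilities, p the price.
    """
    def round_alloc_down(q):
        # Round down to 0 or an integer power of (1 - alpha) in [alpha / (H * m), 1].
        if q < alpha / (H * m):
            return 0.0
        i = math.ceil(math.log(q, 1 - alpha))        # smallest i with (1-alpha)^i <= q
        return (1 - alpha) ** i

    rounded = []
    for phi, p in menu:
        if p <= m * H * (1 - delta) ** K:
            continue                                 # Step 1: drop entries below the minimum price
        # Step 1: level k satisfies mH(1-d)^(K-k+1) < p <= mH(1-d)^(K-k).
        k = next(lvl for lvl in range(1, K + 1)
                 if p <= m * H * (1 - delta) ** (K - lvl))
        # Step 2: scale allocations by (1-alpha)^(K-k), then round down.
        new_phi = [round_alloc_down(q * (1 - alpha) ** (K - k)) for q in phi]
        # Step 3: scale the price by (1-alpha)^K, round down to a multiple of alpha, subtract 2*k*alpha.
        new_p = math.floor(p * (1 - alpha) ** K / alpha) * alpha - 2 * k * alpha
        rounded.append((new_phi, new_p))
    return rounded
```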

Technical contribution. Our discretization scheme (Algorithm 5) extends that of Dughmi et al. (2014)
in the following aspects: (i) We remove the lower bound assumption on value distribution: Dughmi et al. (2014) assume values belong to [1, H], and we extend the discretization scheme to work when there is no lower bound on value distributions; i.e., values are in [0, H]. (ii) Supporting additive valuations: The original discretization in Dughmi et al. (2014) works for unit-demand valuations. (iii) We also modify the algorithm to support limited-length menus. As a consequence, we are able to provide improved regret bounds and running times when the size of the menu is limited. These extensions are done by small modifications to the algorithm and expand the scope of the application of the scheme.

## 4.2 Online Learning

We provide bounded-regret online learning algorithms in the full and partial information settings for fixed and arbitrary-length menus of lotteries. The setting considered is as follows. In each round, a new buyer arrives, and a length-$\ell$ lottery menu is presented to the buyer. The buyer selects her utility-maximizing lottery $j$ and pays $p^{(j)}$. The mechanism achieves revenue $p^{(j)}$. Missing proofs and explicit descriptions of the algorithms are deferred to Appendix B.

In the full information setting, the seller sees the revenue generated for all the possible menus. Similar to the previous section, we run Algorithm 2 (a weighted majority algorithm) over the discretized set as the outcome of Algorithm 5 and derive the following results for the length-ℓ menus and arbitrary length menus.

Theorem 28. *In the full information case for length-$\ell$ menus of lotteries, running Algorithm 2 over the discretized set of menus specified in Theorem 27 for $\alpha = T^{-1}$, $\beta = T^{-0.5}$, $K = T^{0.5}$, and $\delta = T^{-0.5}$ has regret $\tilde{O}(m^2H\ell\sqrt{T})$.*

Theorem 29. *In the full information case for arbitrary-length menus of lotteries, running Algorithm 2 on the menus specified in Theorem 27 for $\alpha = T^{-1/(2m+2)}$, $\beta = T^{-1/(m+1)}$, $K = T^{1/(m+1)}$, and $\delta = T^{-1/(m+1)}$ has regret $\tilde{O}\left(mHT^{1-1/(2m+4)}\ln^m(mHT)\right)$.*

In the partial information setting, the seller only observes the revenue generated for the menu at hand. Similar to the previous section, we run Algorithm 3 (EXP3 algorithm) over the discretized set as the outcome of Algorithm 5 and derive the following result for length ℓ menus.

Theorem 30. *In the partial information case for length-$\ell$ menus of lotteries, running Algorithm 3 over the discretized set of menus in Theorem 27 for $\alpha = T^{-1/(\ell m+2)}$, $\beta = \gamma = T^{-1/(4\ell m+8)}$, $K = T^{1/(2\ell m+4)}$, and $\delta = T^{-1/(2\ell m+4)}$ has regret $\tilde{O}\left(m^2H\ell T^{1-1/(2\ell m+4)}\ln^{\ell m+1}(mHT)\right)$.*

For the case with $V$ buyer types, we use machinery similar to Section 3.2.3 to derive bounded-regret algorithms in the full and partial information settings. The discussion of how to adapt it to the lotteries setting, both for the full information and the partial information case, is deferred to the appendix.

Theorem 31. *In the full information case for length-$\ell$ menus of lotteries, when there are $V$ types of buyers, there is an algorithm with regret bound $O\left(m^2H\ell\sqrt{T\ln(V\ell)}\right)$.*

Theorem 32. *In the partial information (bandit) case for length-$\ell$ menus of lotteries, when there are $V$ different types of buyers, there is an algorithm with regret bound $O\left(T^{2/3}(\ell m)^{4/3}(HV)^{1/3}\log^{1/3}(V\ell)\right)$.*

Remark. The above results hold under adversarial input. Unlike menus of two-part tariffs (and many other families of algorithms and mechanisms discussed in Balcan et al. (2018b; 2020a)), for menus of lotteries, we provide evidence that *dispersion*, a sufficient condition for online learning under smooth distributions, may not hold. A formal result is stated as Theorem 33.

## 4.2.1 Failure Of Dispersion For Menus Of Lotteries

In this section, we prove that without making extra assumptions about optimal menus of lotteries, both definitions of dispersion (Definitions 15 and 38) fail. In particular, we show that the failure of both conditions happens if the optimal menu (maximizer) has two lotteries close to each other (similar coordinates) satisfying some additional properties. Example 2 illustrates a setting where there are lotteries with arbitrarily close coordinates in the optimal menu.

Theorem 33. Let the maximizer $\rho^*$ have the following properties, where $\phi^{(1)}_{\rho^*}, p^{(1)}_{\rho^*}, \phi^{(2)}_{\rho^*}, p^{(2)}_{\rho^*}$ are the coordinates of $\rho^*$ respectively denoting the probability of allocating item one in lottery 1, the price of lottery 1, the probability of allocating item one in lottery 2, and the price of lottery 2, and the allocation probabilities for the other items are the same across these two lotteries:

1. $p_{\rho^{*}}^{(1)}-p_{\rho^{*}}^{(2)}=(L+1/2)\varepsilon$, where $L$ is the Lipschitz parameter.

2. $\phi_{\rho^{*}}^{(1)}-\phi_{\rho^{*}}^{(2)}=(L+1)\varepsilon/c+\varepsilon/2$.

3. $c$ is a constant such that $c \leq H$.

In this case, for every $\kappa$-bounded distribution whose density is also lower-bounded by $1/\kappa$, the conditions of Definitions 15 and 38 are violated. In particular, in Definition 15, the probability of a hyperplane crossing the $\varepsilon$-radius ball centered at the maximizer is a constant depending on $c$; and in Definition 38, there exists a pair of points such that the expected number of times that their loss-function difference violates the Lipschitz condition for Lipschitz constant $L' = L/2$ is a constant depending on $c$.

Proof. We first show why Definition 15 fails. Consider a ball of radius $\varepsilon$ centered at the maximizer $\rho^*$; let this ball be $B$. We show that the probability of a hyperplane crossing $B$ is constant. Consider a point $\rho \in B$. We first find the probability density of hyperplanes going through $\rho$. Then, we integrate it to find the probability of crossing the ball. The following equation shows for what value of $v$ (the value for the item) the hyperplane goes through $\rho$.

$$\begin{array}{c}{{v\phi_{\rho}^{(1)}-p_{\rho}^{(1)}=v\phi_{\rho}^{(2)}-p_{\rho}^{(2)}}}\\ {{v=\frac{p_{\rho}^{(1)}-p_{\rho}^{(2)}}{\phi_{\rho}^{(1)}-\phi_{\rho}^{(2)}}}}\end{array}$$

Let $v_B^{\min}$ and $v_B^{\max}$ be, respectively, the minimum and maximum values of $v$ for which the hyperplane crosses the ball (i.e., there is $\rho \in B$ such that $v_B^{\min} = \frac{p_{\rho}^{(1)} - p_{\rho}^{(2)}}{\phi_{\rho}^{(1)} - \phi_{\rho}^{(2)}}$, and similarly for the maximum). The probability that the hyperplane crosses the ball is $\int_{v_B^{\min}}^{v_B^{\max}} f(v)\,dv$, where $f(v)$ is the density function of the value for the item.

We consider the following points. These points are all within $\varepsilon$ proximity of $\rho^*$ and therefore fall in a ball of radius $\varepsilon$ centered at $\rho^*$. Consider points with $p^{(2)} = p^{(2)}_{\rho^*}$ and $\phi^{(2)} = \phi^{(2)}_{\rho^*}$. Let $p^{(1)}$ be in $[p^{(2)}_{\rho^*} + L\varepsilon,\; p^{(2)}_{\rho^*} + (L+1)\varepsilon]$. Let $\phi^{(1)}$ be in $[\phi^{(2)}_{\rho^*} + (L+1)\varepsilon/c,\; \phi^{(2)}_{\rho^*} + (L+1)\varepsilon/c + \varepsilon]$.

With the above construction, the numerator ranges from $L\varepsilon$ to $(L+1)\varepsilon$, and the denominator ranges from $(L+1)\varepsilon/c$ to $(L+1)\varepsilon/c + \varepsilon$. Therefore, $v_B^{\min} = \frac{Lc}{L+c+1}$ and $v_B^{\max} = c$. For a κ-bounded distribution with support [0, 1] whose density is lower-bounded by 1/κ, $\int_{v_B^{\min}}^{v_B^{\max}} f(v)\,dv$ is at least

$${\frac{c-{\frac{L c}{L+c+1}}}{\kappa}}={\frac{{\frac{c(c+1)}{L+c+1}}}{\kappa}};$$
which is constant for a constant c.
Now, we show that Definition 38 fails. To do so, we consider the pair of points $\rho$ and $\rho'$ that correspond to $v_B^{\min}$ and $v_B^{\max}$, respectively. If we consider the line segment connecting $\rho$ and $\rho'$, the probability of the hyperplane crossing between these two points is still $\int_{v_B^{\min}}^{v_B^{\max}} f(v)\,dv$, which again, for a κ-bounded distribution with support [0, 1] whose density is also lower-bounded by 1/κ, is at least

$${\frac{c-{\frac{L c}{L+c+1}}}{\kappa}}={\frac{{\frac{c(c+1)}{L+c+1}}}{\kappa}};$$

which is constant for a constant c. Note that $|p^{(1)}_{\rho} - p^{(2)}_{\rho'}| \ge L\varepsilon$ and $|p^{(2)}_{\rho} - p^{(1)}_{\rho'}| \ge L\varepsilon$, which implies that any time the hyperplane crosses between $\rho$ and $\rho'$, the difference in the loss, $|\ell_t(\rho) - \ell_t(\rho')|$, is at least $L\varepsilon$. Also, the Euclidean distance between $\rho$ and $\rho'$ is less than $2\varepsilon$. Therefore, the Lipschitz condition for the constant $L' = L/2$ is violated a constant fraction of the time in expectation.
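To make the construction concrete, the short sketch below (our own illustration, not part of the proof) plugs hypothetical values of $L$, $c$, and $\varepsilon$ into the bounds above and checks numerically that the crossing probability $v_B^{\max} - v_B^{\min} = c - \frac{Lc}{L+c+1}$ does not shrink as $\varepsilon \to 0$, which is exactly why dispersion fails.

```python
# Hypothetical numerical check of the Theorem 33 construction (illustration only).
# For a uniform value distribution on [0, 1] (kappa = 1), the probability that the
# separating hyperplane crosses the epsilon-ball is v_max - v_min, which does not
# depend on epsilon and therefore stays constant as epsilon shrinks.

def crossing_probability(L: float, c: float) -> float:
    """Length of the interval [v_B^min, v_B^max] from the construction."""
    v_min = L * c / (L + c + 1)   # min numerator / max denominator
    v_max = c                     # max numerator / min denominator
    return v_max - v_min

L, c = 1.0, 0.5                   # example Lipschitz parameter and constant c <= H
for eps in [1e-1, 1e-3, 1e-6]:
    # The ball of radius eps shrinks, but the crossing probability
    # stays c(c+1)/(L+c+1).
    print(f"eps={eps:g}: crossing probability >= {crossing_probability(L, c):.4f}")
```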

The following example shows that in the optimal menu of lotteries, lottery pairs can be arbitrarily close to each other.

Example 2 ((Daskalakis et al., 2014)). Consider the case of two items, when the buyer's value for each item is drawn i.i.d. from the distribution supported on [0, 1] with density function $f(x) = 2(1-x)$. Daskalakis et al. prove for this example that the unique (up to differences of measure zero) optimal mechanism has uncountable menu complexity. That is, the number of distinct options available for the buyer to purchase is uncountable. They show that the optimal mechanism contains the following four kinds of options: (a) the buyer can receive item one with probability 1, and item two with probability $\frac{2}{(4-5x)^2}$, paying the price $\frac{2-3x}{4-5x} + \frac{2x}{(4-5x)^2}$, for any $x \in [0, \approx .0618)$; (b) the buyer can receive item two with probability 1, and item one with probability $\frac{2}{(4-5x)^2}$, paying the price $\frac{2-3x}{4-5x} + \frac{2x}{(4-5x)^2}$, for any $x \in [0, \approx .0618)$; (c) the buyer can receive both items and pay $\approx .5535$; and (d) the buyer can receive neither item and pay nothing.
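As a quick, informal numerical illustration of why this example defeats dispersion, the sketch below evaluates the option family (a) above at two nearby values of $x$; the resulting allocation probabilities and prices become arbitrarily close as the two values of $x$ approach each other, so the optimal menu contains lottery pairs with arbitrarily close coordinates.

```python
# Evaluate the continuum of options of kind (a) from Example 2 (Daskalakis et al., 2014)
# at two nearby parameters x; illustration only.

def option_a(x: float):
    """Allocation probability of item two and the price for option family (a)."""
    prob_item_two = 2 / (4 - 5 * x) ** 2
    price = (2 - 3 * x) / (4 - 5 * x) + 2 * x / (4 - 5 * x) ** 2
    return prob_item_two, price

for x in [0.01, 0.0100001]:       # two parameters inside [0, ~0.0618), arbitrarily close
    prob, price = option_a(x)
    print(f"x={x:.7f}: P[item two]={prob:.7f}, price={price:.7f}")
```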
Technical contribution compared to prior work. The dispersion property has been shown to hold for various algorithm and mechanism design problems (Balcan et al., 2018b; 2020a; Balcan and Sharma, 2021; Balcan et al., 2022a). This section provides the first evidence of the failure of the dispersion property.

## 4.3 Distributional Learning

In the distributional setting, we have sample access to buyers' valuations. The value of the buyer for item i is drawn from distribution $D_i$, with joint support $[0, H]^m$; we do not assume independence among items. Similar to the distributional learning algorithm for menus of two-part tariffs, the algorithm simply considers all menus in the discretized set specified by Theorem 27 and outputs the empirical revenue-maximizing menu given the samples. The revenue from each sample (buyer) for a fixed menu is the payment corresponding to the buyer's utility-maximizing lottery in the menu.

Theorem 34. For length-ℓ menus of lotteries, there is a discretization-based distributional learning algorithm with sample complexity $\tilde{O}\big(\frac{m^2H^2}{\varepsilon^2}(\ell m + \ln(2/\delta))\big)$ and running time $\tilde{O}\big(2\,(m^2H^2/\varepsilon^2)^{\ell m+\ell+1}\,\ell\,(\ell m + \ln(2/\delta))\,\ln^{\ell m}\!\big(\tfrac{mH}{\varepsilon}\ln\tfrac{mH}{\varepsilon}\big)\big)$.
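The following sketch illustrates the general shape of such a discretization-based algorithm (our own simplified illustration with hypothetical helper functions, not the exact procedure behind Theorem 34): enumerate a finite set of candidate menus, evaluate the empirical revenue of each menu on the samples, and return the empirical maximizer.

```python
# Simplified empirical-risk-minimization sketch over a finite set of candidate menus.
# `candidate_menus` stands in for the discretized set of Theorem 27 and
# `revenue(menu, valuation)` for the payment of the buyer's utility-maximizing
# lottery; both are hypothetical placeholders.

from typing import Callable, Sequence, TypeVar

Menu = TypeVar("Menu")
Valuation = TypeVar("Valuation")

def erm_over_menus(
    candidate_menus: Sequence[Menu],
    samples: Sequence[Valuation],
    revenue: Callable[[Menu, Valuation], float],
) -> Menu:
    """Return the menu with the highest average revenue on the samples."""
    def avg_revenue(menu: Menu) -> float:
        return sum(revenue(menu, v) for v in samples) / len(samples)
    return max(candidate_menus, key=avg_revenue)
```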

Remark. For menus of limited length, the sample complexity of Theorem 34 is roughly the same as that of Balcan et al. (2018c), but the advantage is that we provide an efficient algorithm when m and ℓ are constant. The analysis for arbitrary-length menus is provided in the appendix as Theorem 57. The sample complexity and running time are similar to those of Dughmi et al. (2014); however, Theorem 57 works for a more general setting. Dughmi et al. (2014) provide a lower bound on the sample complexity, establishing an exponential dependence on the number of items.

Technical contribution compared to prior work. Similar to menus of two-part tariffs, our algorithms choose several menus in a data-independent way (via data-independent discretization) and then select the best of them based on the data (empirical risk minimization over a cover); however, the prior algorithms (Balcan et al., 2018c; 2020b) optimize over the infinite space based on the sampled data (empirical risk minimization over the entire space utilizing the geometric structure of utility functions). In the context of lotteries, compared to the previous distributional learning results for fixed-length menus (Balcan et al., 2018c), our algorithm requires similar sample complexity; however, it has an efficient implementation. For arbitrary-length menus, our algorithm provides similar sample complexity and running time compared to Dughmi et al. (2014); however, it works for a slightly more general setting.

## 5 Discussion

This paper contributes to both learning theory and mechanism design by studying prominent families of mechanisms from a learning perspective. Our work is focused on learning menu mechanisms that go beyond selling the items separately. Menus of lotteries provide a list of randomized allocations and their corresponding prices to the buyers and are specifically advantageous for selling multiple items. Menus of two-part tariffs, on the other hand, are employed for selling multiple units (copies) of an item by presenting a list of up-front fees and per-unit fees to the buyer. The two families of mechanisms are pricing schemes commonly studied for revenue maximization in the sale of goods. Both are presented as lists (menus) of options from which potential buyers can choose. From a structural perspective, both problems involve a utility function (in this case, revenue) that is piecewise linear in the parameter space. Our findings suggest that similar techniques can be applied to both problems.

We provide a suite of results with regard to these two families of mechanisms. By leveraging the structure of menus of two-part tariffs and lotteries, we provide a revenue-preserving reduction to a finite number of menus (discretization). Using this approach, we provide the first online learning algorithms for menus of lotteries and two-part tariffs with strong regret-bound guarantees and propose algorithms with significantly improved running times over prior work for the distributional settings. When there is a limited number of buyer types, we provide a reduction to online linear optimization, which enables us to obtain no-regret guarantees by presenting buyers with menus that correspond to a barycentric spanner. Finally, for the first time, we provide evidence of the failure of the "dispersion" property (Balcan et al., 2018b; 2020a)—a sufficient condition to provide a no-regret algorithm under smooth distributional assumption, which is widely applied to parametric algorithm and mechanism design problems—for a specific problem (menus of lotteries).

Discretization versus Dispersion. The majority of the paper focuses on online learning of these families of mechanisms. Two of the commonly used techniques for this setting are the (more traditional) discretization-based and the (recently developed) dispersion-based techniques. Menus of lotteries and two-part tariffs are examples of parametric algorithm or mechanism design, where the objective function, here revenue, has sharp discontinuities in the parameter space, and standard procedures, such as rounding the parameters down to multiples of ε, may result in arbitrary revenue loss. A discretization scheme means that there exists a grid in the parameter space such that for any arbitrary parameter vector, there is a nearby parameter vector on the grid generating similar revenue. However, finding the corresponding parameter vector (the direction to move from the original parameter vector in the space) requires extra care, and moving in an arbitrary direction may cause a large revenue loss. In contrast to the discretization scheme, another method developed for proving online learnability of parameterized algorithms, called *dispersion* (Balcan et al., 2018b; 2020a), asserts that under smoothness assumptions, moving within a small ball of parameter vectors does not face sharp discontinuities with high probability. This means that with high probability, moving in any direction preserves similar revenue. Nevertheless, we show evidence that dispersion may not hold for menus of lotteries (Theorem 33), and while dispersion holds for menus of two-part tariffs (Propositions 16 and 39), it heavily uses the smoothness assumption. In conclusion, although a small but arbitrary modification may change the revenue drastically when starting from a parameter vector, in designing our discretization scheme we exhibit a specific direction such that a small modification along that direction preserves the revenue. See Theorems 1 and 27.

Lower bound for regret terms. In the full information case, the dependence of the regret bounds on T is tight according to Nisan et al. (2007), Theorem 4.8. In the bandit setting, our dependence on T matches a lower bound provided by Kleinberg et al. (2008) for general globally Lipschitz functions, even though the utility functions in our case are only piecewise Lipschitz (not globally Lipschitz). The construction in Kleinberg et al. (2008) does not immediately imply a lower bound for our case since, on the one hand, learning piecewise Lipschitz functions is harder than learning globally Lipschitz ones, and on the other hand, in our case the utility functions have more structure beyond Lipschitzness in each piece. It is a nontrivial open question whether the dependence is tight for our case. Finding a lower bound for the dependence on the other parameters is an interesting open problem. Similar dependence appears both in our discretization-based and dispersion-based algorithms. The question of whether such dependencies can be avoided motivated us to study more structured settings, such as the limited buyer type setting. In the limited buyer types case, where we utilize the knowledge of the potential buyer types and the interdependence of utilities across experts, the dependence is improved.

Computational Efficiency. Our discretization-based learning approach has the strength of not relying on any extra assumptions about the data and results in no-regret learning algorithms in the online learning setting without any extra assumptions.
However, the drawback of this approach is that the algorithms may not be computationally efficient. Concerning known efficient algorithms, for menus of two-part tariffs, even in the simpler distributional learning problem, prior results were not computationally efficient either (Balcan et al., 2020b), and our discretization-based algorithms improve upon those and satisfy the best known worst-case computational guarantees. Exploring the computational complexity and the existence of more efficient algorithms is an interesting open direction. We have also taken steps to explore the possibility of more efficient algorithms by adding extra structure, e.g., smooth distributional assumptions or the limited buyer type assumption, to our settings. Establishing the "dispersion" property under smooth distributional assumptions enables us to use more refined online learning algorithms (a continuous version of the multiplicative weights update algorithm that uses the geometric structure of utility functions), but this property had not previously been studied for menus of lotteries or two-part tariffs. One of our contributions is establishing this property for menus of two-part tariffs and obtaining more efficient algorithms. Surprisingly, however, we provide evidence that this property may fail for menus of lotteries. In the limited buyer type setting, we utilize the knowledge of the potential buyer types and the interdependence of utilities across menus and provide algorithms with improved running time.

Open Directions. An open question is whether there is a generalization capturing the techniques applied to both menus of lotteries and two-part tariffs; it is unclear whether such a generalization exists. The key difficulty we face in generalizing the techniques we used for menus of two-part tariffs and lotteries stems from the difference in the structure of the utility functions, specifically, the shape of the discontinuity hyperplanes. As shown in Theorem 33, we provide evidence of the failure of the dispersion technique for lotteries; however, this technique works for two-part tariffs (Propositions 16 and 39).

Furthermore, the two mechanisms needed different discretization methods.

## 6 Acknowledgement

The authors would like to thank Avrim Blum, Misha Khodak, Rattana Pukdee, Dravyansh Sharma, and anonymous reviewers for helpful feedback and comments. This material is based on work supported in part by the National Science Foundation under grant CCF-1910321 and a Simons Investigator Award.

## References

Noga Alon, Moshe Babaioff, Yannai A. Gonczarowski, Yishay Mansour, Shay Moran, and Amir Yehudayoff.

Submultiplicative glivenko-cantelli and uniform convergence of revenues. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, *Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information* Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 1656–1665, 2017a.

Noga Alon, Nicolò Cesa-Bianchi, Claudio Gentile, Shie Mannor, Yishay Mansour, and Ohad Shamir. Nonstochastic multi-armed bandits with graph-structured feedback. *SIAM J. Comput.*, 46(6):1785–1826, 2017b. doi: 10.1137/140989455.

Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of IEEE 36th annual foundations of computer science, pages 322–331. IEEE, 1995.

Baruch Awerbuch and Robert Kleinberg. Online linear optimization and adaptive routing. J. Comput. Syst.

Sci., 74(1):97–114, 2008. doi: 10.1016/j.jcss.2007.04.016.

Baruch Awerbuch and Yishay Mansour. Adapting to a reliable network path. In Elizabeth Borowsky and Sergio Rajsbaum, editors, *Proceedings of the Twenty-Second ACM Symposium on Principles of Distributed* Computing, PODC 2003, Boston, Massachusetts, USA, July 13-16, 2003, pages 360–367. ACM, 2003. doi:
10.1145/872035.872090.

Maria-Florina Balcan. Data-driven algorithm design. In Tim Roughgarden, editor, *Beyond the Worst-Case* Analysis of Algorithms, pages 626–645. Cambridge University Press, 2020. doi: 10.1017/9781108637435.

036. URL https://doi.org/10.1017/9781108637435.036.

Maria-Florina Balcan and Avrim Blum. Approximation algorithms and online mechanisms for item pricing.

In Joan Feigenbaum, John C.-I. Chuang, and David M. Pennock, editors, *Proceedings 7th ACM Conference* on Electronic Commerce (EC-2006), Ann Arbor, Michigan, USA, June 11-15, 2006, pages 29–35. ACM, 2006. doi: 10.1145/1134707.1134711.

Maria-Florina Balcan and Dravyansh Sharma. Data driven semi-supervised learning. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing* Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 14782–14794, 2021.

Maria-Florina Balcan, Avrim Blum, Jason D Hartline, and Yishay Mansour. Reducing mechanism design to algorithm design via machine learning. *Journal of Computer and System Sciences*, 74(8):1245–1270, 2008.

Maria-Florina Balcan, Avrim Blum, Nika Haghtalab, and Ariel D. Procaccia. Commitment without regrets: Online learning in stackelberg security games. In Tim Roughgarden, Michal Feldman, and Michael Schwarz, editors, Proceedings of the Sixteenth ACM Conference on Economics and Computation, EC '15, Portland, OR, USA, June 15-19, 2015, pages 61–78. ACM, 2015. doi: 10.1145/2764468.2764478.

Maria-Florina Balcan, Tuomas Sandholm, and Ellen Vitercik. Sample complexity of automated mechanism design. *Advances in Neural Information Processing Systems*, 29, 2016.

Maria-Florina Balcan, Vaishnavh Nagarajan, Ellen Vitercik, and Colin White. Learning-theoretic foundations of algorithm configuration for combinatorial partitioning problems. In *Conference on Learning* Theory, pages 213–274. PMLR, 2017.

Maria-Florina Balcan, Travis Dick, Tuomas Sandholm, and Ellen Vitercik. Learning to branch. In *International conference on machine learning*, pages 344–353. PMLR, 2018a.

Maria-Florina Balcan, Travis Dick, and Ellen Vitercik. Dispersion for data-driven algorithm design, online learning, and private optimization. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 603–614. IEEE, 2018b.

Maria-Florina Balcan, Tuomas Sandholm, and Ellen Vitercik. A general theory of sample complexity for multi-item profit maximization. In *Proceedings of the 2018 ACM Conference on Economics and Computation*, pages 173–174, 2018c.

Maria-Florina Balcan, Travis Dick, and Wesley Pegden. Semi-bandit optimization in the dispersed setting.

In *Conference on Uncertainty in Artificial Intelligence*, pages 909–918. PMLR, 2020a.

Maria-Florina Balcan, Siddharth Prasad, and Tuomas Sandholm. Efficient algorithms for learning revenue-maximizing two-part tariffs. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, 2020b.

Maria-Florina Balcan, Dan DeBlasio, Travis Dick, Carl Kingsford, Tuomas Sandholm, and Ellen Vitercik.

How much data is sufficient to learn high-performing algorithms? generalization guarantees for data-driven algorithm design. In *Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing*, pages 919–932, 2021a.

Maria-Florina Balcan, Mikhail Khodak, Dravyansh Sharma, and Ameet Talwalkar. Learning-to-learn nonconvex piecewise-lipschitz functions. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 15056–15069, 2021b.

Maria-Florina Balcan, Misha Khodak, Dravyansh Sharma, and Ameet Talwalkar. Provably tuning the elasticnet across instances. In *NeurIPS*, 2022a.

Maria-Florina Balcan, Christopher Seiler, and Dravyansh Sharma. Faster algorithms for learning to link, align sequences, and price two-part tariffs. *arXiv preprint arXiv:2204.03569*, 2022b.

Maria-Florina Balcan, Travis Dick, Tuomas Sandholm, and Ellen Vitercik. Learning to branch: Generalization guarantees and limits of data-independent discretization. *J. ACM*, dec 2023a. ISSN 0004-5411.

Maria-Florina Balcan, Tuomas Sandholm, and Ellen Vitercik. Generalization guarantees for multi-item profit maximization: Pricing, auctions, and randomized mechanisms. Operations Research, 2023b.

Raef Bassily, Adam D. Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In 55th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2014, Philadelphia, PA, USA, October 18-21, 2014, pages 464–473. IEEE Computer Society, 2014.

doi: 10.1109/FOCS.2014.56.

Avrim Blum and Jason D. Hartline. Near-optimal online auctions. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2005, Vancouver, British Columbia, Canada, January 23-25, 2005, pages 1156–1163. SIAM, 2005. URL http://dl.acm.org/citation.cfm?id=1070432.1070597.

Avrim Blum, Vijay Kumar, Atri Rudra, and Felix Wu. Online learning in online auctions. Theoretical Computer Science, 324(2-3):137–146, 2004.

Patrick Briest, Shuchi Chawla, Robert Kleinberg, and S Matthew Weinberg. Pricing randomized allocations.

In *Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms*, pages 585–597.

SIAM, 2010.

Johannes Brustle, Yang Cai, and Constantinos Daskalakis. Multi-item mechanisms without itemindependence: Learnability via robustness. In Péter Biró, Jason D. Hartline, Michael Ostrovsky, and Ariel D. Procaccia, editors, EC '20: The 21st ACM Conference on Economics and Computation, Virtual Event, Hungary, July 13-17, 2020, pages 715–761. ACM, 2020. doi: 10.1145/3391403.3399541.

Sébastien Bubeck, Nikhil R. Devanur, Zhiyi Huang, and Rad Niazadeh. Online auctions and multi-scale online learning. In Constantinos Daskalakis, Moshe Babaioff, and Hervé Moulin, editors, Proceedings of the 2017 ACM Conference on Economics and Computation, EC '17, Cambridge, MA, USA, June 26-30, 2017, pages 497–514. ACM, 2017. doi: 10.1145/3033274.3085145.

Nicolo Cesa-Bianchi, Claudio Gentile, and Yishay Mansour. Regret minimization for reserve prices in secondprice auctions. *IEEE Transactions on Information Theory*, 61(1):549–564, 2014.

Vincent Cohen-Addad and Varun Kanade. Online optimization of smoothed piecewise constant functions. In Aarti Singh and Xiaojin (Jerry) Zhu, editors, *Proceedings of the 20th International Conference on Artificial* Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA, volume 54 of Proceedings of Machine Learning Research, pages 412–420. PMLR, 2017.

Richard Cole and Tim Roughgarden. The sample complexity of revenue maximization. In David B. Shmoys, editor, *Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014*, pages 243–252. ACM, 2014.

Partha Dasgupta, Peter Hammond, and Eric Maskin. The implementation of social choice rules: Some general results on incentive compatibility. *The Review of Economic Studies*, 46(2):185–216, 1979.

Constantinos Daskalakis, Alan Deckelbaum, and Christos Tzamos. The complexity of optimal mechanism design. In *Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms*, pages 1302–1318. SIAM, 2014.

Nikhil R. Devanur, Zhiyi Huang, and Christos-Alexandros Psomas. The sample complexity of auctions with side information. In Daniel Wichs and Yishay Mansour, editors, *Proceedings of the 48th Annual ACM* SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 426–439. ACM, 2016.

Shaddin Dughmi, Li Han, and Noam Nisan. Sampling and representation complexity of revenue maximization. In *International Conference on Web and Internet Economics*, pages 277–291. Springer, 2014.

Paul Dütting, Zhe Feng, Harikrishna Narasimhan, David Parkes, and Sai Srivatsa Ravindranath. Optimal auctions through deep learning. In *International Conference on Machine Learning*, pages 1706–1715.

PMLR, 2019.

Edith Elkind. Designing and learning optimal finite support auctions. In Nikhil Bansal, Kirk Pruhs, and Clifford Stein, editors, Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2007, New Orleans, Louisiana, USA, January 7-9, 2007, pages 736–745. SIAM, 2007.

Yannai A. Gonczarowski and Noam Nisan. Efficient empirical revenue maximization in single-parameter auction environments. In Hamed Hatami, Pierre McKenzie, and Valerie King, editors, Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 856–868. ACM, 2017.

Yannai A Gonczarowski and S Matthew Weinberg. The sample complexity of up-to-ε multi-dimensional revenue maximization. *Journal of the ACM (JACM)*, 68(3):1–28, 2021.

Roger Guesnerie and Claude Oddou. Second best taxation as a game. *Journal of Economic Theory*, 25(1):
67–91, 1981. ISSN 0022-0531. doi: https://doi.org/10.1016/0022-0531(81)90017-X.

Chenghao Guo, Zhiyi Huang, and Xinzhi Zhang. Settling the sample complexity of single-parameter revenue maximization. In Moses Charikar and Edith Cohen, editors, *Proceedings of the 51st Annual ACM SIGACT* Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019, pages 662–673. ACM, 2019.

Rishi Gupta and Tim Roughgarden. A PAC approach to application-specific algorithm selection. *SIAM J.*
Comput., 46(3):992–1017, 2017. doi: 10.1137/15M1050276.

Sergiu Hart and Noam Nisan. Selling multiple correlated goods: Revenue maximization and menu-size complexity. *Journal of Economic Theory*, 183:991–1029, 2019.

Robert Kleinberg, Aleksandrs Slivkins, and Eli Upfal. Multi-armed bandits in metric spaces. In *Proceedings* of the fortieth annual ACM symposium on Theory of computing, pages 681–690, 2008.

Robert D. Kleinberg and Frank Thomson Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In *44th Symposium on Foundations of Computer Science (FOCS 2003),*
11-14 October 2003, Cambridge, MA, USA, Proceedings, pages 594–605. IEEE Computer Society, 2003.

doi: 10.1109/SFCS.2003.1238232.

W Arthur Lewis. The two-part tariff. *Economica*, 8(31):249–270, 1941.

László Lovász and Santosh S. Vempala. Fast algorithms for logconcave functions: Sampling, rounding, integration and optimization. In 47th Annual IEEE Symposium on Foundations of Computer Science
(FOCS 2006), 21-24 October 2006, Berkeley, California, USA, Proceedings, pages 57–68. IEEE Computer Society, 2006. doi: 10.1109/FOCS.2006.28.

Mehryar Mohri and Andrés Munoz Medina. Learning algorithms for second-price auctions with reserve. The Journal of Machine Learning Research, 17(1):2632–2656, 2016.

Jamie Morgenstern and Tim Roughgarden. Learning simple auctions. In *Conference on Learning Theory*,
pages 1298–1318. PMLR, 2016.

Jamie H Morgenstern and Tim Roughgarden. On the pseudo-dimension of nearly optimal auctions. Advances in Neural Information Processing Systems, 28, 2015.

Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay V. Vazirani. *Algorithmic Game Theory*. Cambridge University Press, 2007.

Walter Y Oi. A disneyland dilemma: Two-part tariffs for a mickey mouse monopoly. *The Quarterly Journal* of Economics, 85(1):77–96, 1971.

Tim Roughgarden and Okke Schrijvers. Ironing in the dark. In Vincent Conitzer, Dirk Bergemann, and Yiling Chen, editors, Proceedings of the 2016 ACM Conference on Economics and Computation, EC '16, Maastricht, The Netherlands, July 24-28, 2016, pages 1–18. ACM, 2016.

Tim Roughgarden and Joshua R. Wang. Minimizing regret with multiple reserves. In Vincent Conitzer, Dirk Bergemann, and Yiling Chen, editors, *Proceedings of the 2016 ACM Conference on Economics and* Computation, EC '16, Maastricht, The Netherlands, July 24-28, 2016, pages 601–616. ACM, 2016. doi: 10.1145/2940716.2940792.

Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. *J. ACM*, 51(3):385–463, 2004. doi: 10.1145/990308.990310. URL https://doi.org/10.1145/990308.990310.

Vasilis Syrgkanis. A sample complexity measure with applications to learning optimal auctions. *Advances* in Neural Information Processing Systems, 30, 2017.

Leslie G Valiant. A theory of the learnable. *Communications of the ACM*, 27(11):1134–1142, 1984.

Vladimir Vapnik. *Statistical Learning Theory*. Wiley, 1998.

## A Missing Proofs Of Section 3

## A.1 Online Learning

## A.1.1 Online Learning Under Adversarial Inputs

## Full Information

Proposition 35 ((Auer et al., 1995), Theorem 3.2). For any sequence of valuations $\bar{v}$,

$$\mathrm{Rev}_{\mathrm{WM}}\left({\bar{v}}\right)\geq\mathrm{OPT}_{X}\left({\bar{v}}\right)-{\frac{\beta}{2}}\mathrm{OPT}_{X}\left({\bar{v}}\right)-{\frac{H\ln n}{\beta}},$$

where $X = \{m_1, \ldots, m_n\}$ is the set of experts (two-part tariff menus), $\mathrm{Rev}_{\mathrm{WM}}(\bar{v})$ is the expected revenue of Algorithm 2, and $\mathrm{OPT}_X(\bar{v})$ is the revenue of the optimal menu in $X$.

Theorem 7. In the full information case for length-ℓ menus of two-part tariffs, running Algorithm 2 over the discretized set of menus specified in Theorem 1 for $\alpha = \beta = 1/\sqrt{T}$ has regret bounded by $\tilde{O}\big(\ell(K + H\ln H)\sqrt{T}\big)$, and running time $O\big(T\ell K \min\{H^{2\ell}T^{\ell}, 2^{H^2T}\}\big)$.

Proof. Let n be the number of menus resulting from the discretization procedure in Section 3.1. Let vi be the valuation of the buyer at step i, and v¯ be the vector of valuation of all buyers in rounds 1 through T.

We denote RevM′ () as the maximum revenue obtained in the set of menus resulting from the discretization procedure, OPT() as the optimal revenue, and RevWM() as the revenue obtained from the weighted majority algorithm discussed above on the set of outcome menus of the discretization procedure. Then,

$$\begin{array}{c}{{n=\left(H/\alpha\right)^{2\ell},}}\\ {{\mathrm{Rev}_{\mathrm{WM}}\left(\bar{v}\right)\geq\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)-\frac{\beta}{2}\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)-\frac{H\ln n}{\beta},}}\\ {{\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)=\sum_{i=1}^{T}\mathrm{Rev}_{M^{\prime}}\left(\mathbf{v}_{i}\right),}}\\ {{\mathrm{Rev}_{M^{\prime}}\left(\mathbf{v}_{i}\right)\geq\mathrm{OPT}\left(\mathbf{v}_{i}\right)-2K\ell\alpha;}}\end{array}$$

where the first expression is a result of the discretization procedure, the second expression uses Proposition 35, the third expands the revenue over T terms, and the last uses Theorem 1. Rearranging the terms, we have:

$$\begin{array}{l}{{\mathrm{Rev}_{M^{\prime}}\left(\mathbf{v}_{i}\right)\geq\mathrm{OPT}\left(\mathbf{v}_{i}\right)-2K\ell\alpha}}\\ {{\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)\geq\mathrm{OPT}\left(\bar{v}\right)-2K\ell\alpha T}}\end{array}$$ $$\mathrm{Rev}_{\mathrm{WM}}\left(\bar{v}\right)\geq\mathrm{OPT}\left(\bar{v}\right)-2K\ell\alpha T-\frac{\beta H T}{2}-\frac{H\ln n}{\beta}$$ $$\mathrm{Rev}_{\mathrm{WM}}\left(\bar{v}\right)\geq\mathrm{OPT}\left(\bar{v}\right)-2K\ell\alpha T-\frac{\beta H T}{2}-\frac{2H\ell\left(\ln\left(H/\alpha\right)\right)}{\beta}$$

We set variables α and β to minimize the exponent of T in the regret. By substituting n, the regret is upper bounded by

$$2K\ell\alpha T+{\frac{\beta H T}{2}}+{\frac{2H\ell\left(\ln H-\ln\alpha\right)}{\beta}}.$$

By setting $\alpha = \beta = \frac{1}{\sqrt{T}}$, the regret is $\tilde{O}\big(\ell(K + H\ln H)\sqrt{T}\big)$. Based on the parameters chosen, the number of menus is $O(\min\{H^{2\ell}T^{\ell}, 2^{H^2T}\})$. The algorithm needs to maintain the weights for these menus and update them based on the revenue at each time step. The revenue of each menu can be calculated in time $O(K\ell)$ given the buyer's valuation, resulting in the stated running time: the running time in each round is the number of menus times the time to calculate the revenue of each menu.
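For intuition, the following is a minimal sketch (our own illustration, with hypothetical placeholder functions) of the weighted-majority scheme of Algorithm 2 run over a finite set of discretized menus: each menu is an expert, its weight is multiplicatively boosted in proportion to the revenue it would have earned from the current buyer, and the menu shown each round is drawn in proportion to the weights.

```python
import random

# Minimal multiplicative-weights sketch over a finite expert set of menus.
# `menus` stands for the discretized set of Theorem 1 and `revenue(menu, valuation)`
# for the revenue of a menu on one buyer (both hypothetical placeholders).
def weighted_majority(menus, valuations, revenue, H, beta):
    weights = [1.0] * len(menus)
    total_revenue = 0.0
    for v in valuations:
        # Sample a menu with probability proportional to its weight.
        chosen = random.choices(range(len(menus)), weights=weights, k=1)[0]
        total_revenue += revenue(menus[chosen], v)
        # Full information: every expert's revenue is observed and rewarded.
        for i, menu in enumerate(menus):
            weights[i] *= (1.0 + beta) ** (revenue(menu, v) / H)
    return total_revenue
```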

## Partial Information

Proposition 36 ((Auer et al., 1995), Theorem 4.1). For any sequence of valuations $\bar{v}$,

$$\mathrm{Rev}_{\mathrm{Exp3}}\left({\bar{v}}\right)\geq\mathrm{OPT}_{X}-\left(\gamma+{\frac{\beta}{2}}\right)\mathrm{OPT}_{X}-{\frac{H n\ln n}{\beta\gamma}},$$

where $X = \{m_1, \ldots, m_n\}$ is the set of experts (two-part tariff menus), $\mathrm{Rev}_{\mathrm{Exp3}}(\bar{v})$ is the expected revenue of Algorithm 3, and $\mathrm{OPT}_X(\bar{v})$ is the revenue of the optimal menu in $X$.

Theorem 8. In the partial information case for length-ℓ menus of two-part tariffs, running Algorithm 3 over the discretized set of menus in Theorem 1 for $\alpha = T^{-1/(2(1+\ell))}$ and $\beta = \gamma = T^{-1/(4(1+\ell))}$ has regret bound $\tilde{O}\big(T^{1-\frac{1}{2(1+\ell)}}\,\ell(K + H^{2\ell+1})\big)$, and running time $O\big(T\min\{H^{2\ell}T^{\ell}, 2^{H^2T}\}\big)$.

Proof. The proof follows the same logic as that of Theorem 7. We denote RevExp3() as the revenue obtained from the Exp3 algorithm described above on the set of outcome menus of the discretization procedure.

Similar to the proof of Theorem 7, in what follows n denotes the number of menus resulting from the discretization procedure in Section 3.1, $v_i$ is the valuation of the buyer at step i, and $\bar{v}$ is the sequence of valuations of all buyers in rounds 1 through T. $\mathrm{Rev}_{M'}()$ is the maximum revenue obtained in the set of menus resulting from the discretization procedure and $\mathrm{OPT}()$ is the optimal revenue.

$$\begin{array}{c}{{n=\left(H/\alpha\right)^{2\ell},}}\\ {{\mathrm{Rev}_{\mathrm{Exp3}}\left(\bar{v}\right)\geq\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)-\left(\gamma+\frac{\beta}{2}\right)\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)-\frac{Hn\ln n}{\beta\gamma},}}\\ {{\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)=\sum_{i=1}^{T}\mathrm{Rev}_{M^{\prime}}\left(\mathbf{v}_{i}\right),}}\\ {{\mathrm{Rev}_{M^{\prime}}\left(\mathbf{v}_{i}\right)\geq\mathrm{OPT}\left(\mathbf{v}_{i}\right)-2K\ell\alpha;}}\end{array}$$
where the first expression is a result of the discretization procedure, the second expression uses Proposition 36, the third expands the revenue over T terms, and the last uses Theorem 1. Rearranging the terms gives:

$$\begin{array}{c}{{\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)\geq\mathrm{OPT}\left(\bar{v}\right)-2K\ell\alpha T}}\\ {{\mathrm{Rev}_{\mathrm{Exp3}}\left(\bar{v}\right)\geq\mathrm{OPT}\left(\bar{v}\right)-2K\ell\alpha T-\left(\gamma+\frac{\beta}{2}\right)H T-\frac{H n\ln n}{\beta\gamma}}}\\ {{\mathrm{Rev}_{\mathrm{Exp3}}\left(\bar{v}\right)\geq\mathrm{OPT}\left(\bar{v}\right)-2K\ell\alpha T-\left(\gamma+\frac{\beta}{2}\right)H T-\frac{2H(H/\alpha)^{2\ell}\ell\left(\ln H-\ln\alpha\right)}{\beta\gamma}}}\end{array}$$

We set the variables α, β, and γ as functions of T to minimize the exponent of T in the regret. By setting $\alpha = T^{-1/(2(1+\ell))}$ and $\beta = \gamma = T^{-1/(4(1+\ell))}$, the regret is $O\big(T^{1-\frac{1}{2(1+\ell)}}\ln(T)\,\ell(K + H^{2\ell+1}\ln H)\big)$. The algorithm maintains weights for all the menus in the discretized set at each time step; therefore, the running time at each time step is proportional to the number of menus, which is determined by the parameter α.
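As a rough illustration of the bandit counterpart (again a hypothetical sketch, not the exact Algorithm 3), the Exp3-style update below only uses the revenue of the single menu that was shown, rescaled by the probability of showing it, which is why the number of discretized menus enters the regret bound.

```python
import math
import random

# Minimal Exp3-style sketch over a finite set of discretized menus (bandit feedback).
# `menus`, `revenue(menu, valuation)`, and the learning rate `eta` are hypothetical
# placeholders; `gamma` is the uniform exploration rate.
def exp3(menus, valuations, revenue, H, gamma, eta):
    n = len(menus)
    weights = [1.0] * n
    total_revenue = 0.0
    for v in valuations:
        total_w = sum(weights)
        # Mix the weight-proportional distribution with uniform exploration.
        probs = [(1 - gamma) * w / total_w + gamma / n for w in weights]
        chosen = random.choices(range(n), weights=probs, k=1)[0]
        reward = revenue(menus[chosen], v)      # only this menu's revenue is observed
        total_revenue += reward
        # Importance-weighted reward estimate for the chosen menu only.
        estimate = (reward / H) / probs[chosen]
        weights[chosen] *= math.exp(eta * estimate)
    return total_revenue
```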

## A.1.2 Online Learning Under Smooth Distributions

## Full Information

For completeness, we include previously established algorithms for the full information setting under the dispersion condition, adapted to our setting.

Overview of Algorithms 4 and 6, related to Theorem 10. Algorithm 4 (Balcan et al., 2018b) is an efficient algorithm for online learning in the full-information setting under smoothed distributional assumptions that uses Algorithm 6 (Balcan et al., 2018b) as a subroutine. The algorithm considers the cumulative revenue function up until time $t-1$ over the parameter space, $\sum_{s=0}^{t-1} u_s$, and samples the menu to be presented at time t approximately proportionally to an exponential function of its cumulative revenue, i.e., $e^{g(\rho_t)}$, where $g = \lambda\sum_{s=0}^{t-1} u_s$. In order to have an efficient implementation for sampling menu $\rho_t$ approximately from the distribution $\mu$ with density $f_\mu(\rho) \propto e^{g(\rho)}$, techniques from high-dimensional geometry are used in Algorithm 6. This algorithm is used when g is piecewise concave (in our case, linear) and each piece is a convex set (in our case, convex polytopes in which each buyer already in the sequence selects a fixed tariff index and number of units), as shown in Lemma 13. Let $\mathcal{P}_1, \ldots, \mathcal{P}_n$ be the partition of $\mathcal{C}$ until time t. The algorithm first picks $\mathcal{P}_i$ with probability proportional to the integral of $f_\mu$ on that region and then outputs a sample from the conditional distribution of menus in $\mathcal{P}_i$. The algorithm assumes access to two procedures for approximate integration and sampling, namely Aintegrate($h, \alpha, \zeta$) and Asample($h, \beta, \zeta$). Aintegrate($h_i, \alpha, \zeta$) is a polynomial-time procedure that approximately integrates any logconcave function $h_i$ restricted to region $\mathcal{P}_i$ with accuracy parameter α and failure probability ζ. Asample($h_i, \beta, \zeta$) is a polynomial-time procedure that approximately samples a menu with probability distribution according to $h_i$ in the region $\mathcal{P}_i$ with accuracy parameter β and failure probability ζ.

Algorithm 6: Multi-dimensional sampling algorithm ((Balcan et al., 2018b), Algorithm 2)

Input: Function g, partition with regions $\mathcal{P}_1, \ldots, \mathcal{P}_n$, approximation parameter η, confidence parameter ζ.

1: Define α = β = η/3.

2: Let $h(\rho) = \exp(g(\rho))$ and $h_i(\rho) = \mathbb{I}\{\rho \in \mathcal{P}_i\}h(\rho)$ be h restricted to $\mathcal{P}_i$.

3: For each $i \in [n]$, let $\hat{Z}_i = $ Aintegrate($h_i, \alpha, \zeta/(2n)$).

4: Choose a random partition index I = i with probability $\hat{Z}_i / \sum_j \hat{Z}_j$.

5: Let $\hat{\rho}$ be the sample output by Asample($h_I, \beta, \zeta/2$).

Output: $\hat{\rho}$

Definition 37 (Aintegrate($h,\alpha,\zeta$) and Asample($h,\beta,\zeta$) (Balcan et al., 2018b)). For any logconcave function $h: \mathbb{R}^d \to \mathbb{R}$, any accuracy parameter $\alpha > 0$, and any failure probability $\zeta > 0$, Aintegrate($h,\alpha,\zeta$) outputs a number Z that with probability at least $1-\zeta$ satisfies $e^{-\alpha}\int h \leq Z \leq e^{\alpha}\int h$. For any logconcave function $h: \mathbb{R}^d \to \mathbb{R}$, any accuracy parameter $\beta > 0$, and any failure probability $\zeta > 0$, Asample($h,\beta,\zeta$) outputs a sample X drawn from a distribution $\hat{\mu}_h$ such that with probability at least $1-\zeta$, $D_\infty(\mu, \hat{\mu}) \leq \beta$, where $D_\infty(\mu, \hat{\mu})$ is the relative (multiplicative) distance between probability measures $\mu$ and $\hat{\mu}$. Formally, $D_\infty(\mu, \hat{\mu}) = \sup_{\rho} \left|\log \frac{d\mu}{d\hat{\mu}}\right|$, where $\frac{d\mu}{d\hat{\mu}}$ denotes the Radon-Nikodym derivative.

Similar to Balcan et al. (2018b), in Algorithm 6 we use the implementation of Aintegrate by Lovász and Vempala (2006) and of Asample by Bassily et al. (2014). These implementations satisfy the conditions in Definition 37. The first runs in time $\mathrm{poly}(d, \frac{1}{\alpha}, \log\frac{1}{\zeta}, \log\frac{R}{r})$, where the domain of the function h is a subset of a ball of radius R and its level set of probability mass 1/8 is a superset of a ball of radius r. The second succeeds with probability 1 and runs in time $\mathrm{poly}(d, L, \frac{1}{\beta}, \log\frac{R}{r})$.
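A minimal sketch of the two-stage sampling idea (our own illustration; the `approx_integrate` and `approx_sample` callables are hypothetical stand-ins for Aintegrate and Asample) is given below: estimate the mass of $e^{g}$ on each region, pick a region proportionally to these estimates, then sample within the chosen region.

```python
import math
import random

# Sketch of the multi-dimensional sampling idea behind Algorithm 6 (illustration only).
# `regions` is the list of convex pieces P_1, ..., P_n on which g is concave;
# `approx_integrate(h, region)` and `approx_sample(h, region)` are hypothetical
# stand-ins for the Aintegrate / Asample procedures of Definition 37.
def sample_menu(g, regions, approx_integrate, approx_sample):
    h = lambda rho: math.exp(g(rho))          # density proportional to e^{g}
    masses = [approx_integrate(h, region) for region in regions]
    total = sum(masses)
    # Stage 1: pick a region with probability proportional to its (approximate) mass.
    chosen_region = random.choices(regions, weights=[m / total for m in masses], k=1)[0]
    # Stage 2: sample a menu from the conditional distribution inside that region.
    return approx_sample(h, chosen_region)
```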

Theorem 10. Let $u_1, \ldots, u_T : \mathcal{C} \to [0, H]$ be the revenue functions of two-part tariff menus such that $u_t(\rho)$ denotes the revenue of a mechanism associated with menu parameters $\rho$ for the buyer arriving at time t. Let the samples of buyers' values be drawn from $S \sim D^{(1)} \times \cdots \times D^{(T)}$. Suppose $v(k) \in [0, H]$ for any number of units $k \in [K]$. Also, suppose that for each distribution $D^{(t)}$, and every pair of numbers of units k and k', $v(k)$ and $v(k')$ have a κ-bounded joint distribution. An efficient implementation of the exponentially weighted forecaster with $\lambda = \sqrt{2\ell\ln(2H^2\kappa\sqrt{T})/T}\,/H$ (Algorithm 4) has expected regret bounded by $\tilde{O}\big((H\ell^2K^2\sqrt{\log\kappa} + 1/(H\kappa))\sqrt{T}\big)$ and runs in time $\tilde{O}\big((T+1)^{\mathrm{poly}(\ell,K)}\mathrm{poly}(\ell,\sqrt{T}) + KT\sqrt{T}\big)$.

Proof. Proposition 16 determines the dispersion for two-part tariff menus with probability 1−ζ. Theorem 1 in Balcan et al. (2018b) relates dispersion to a regret bound for full information online learning algorithms.

It states that if a sequence of piecewise L-Lipschitz functions in d dimensions is $(w, k)$-dispersed, there is an exponentially weighted forecaster with expected regret $O\big(H(\sqrt{Td\log(R/w)} + k) + TLw\big)$. Since dispersion holds with probability $1-\zeta$, the final regret bound is $O\big((1-\zeta)(H(\sqrt{Td\log(R/w)} + k) + TLw) + \zeta HT\big)$.

Substituting w and k with the dispersion parameters found in Proposition 16 gives:

$$O\left(H\left(\sqrt{2T\ell\log(2H^{2}\kappa T^{1-\alpha})}+\ell^{2}K^{2}T^{\alpha}\sqrt{\ln\frac{\ell K}{\zeta}}\right)+\frac{T^{\alpha}}{2H\kappa}+\zeta H T\right)$$

For all rounds $t \in [T]$, the sum of utilities is linear over at most $(T+1)^{\ell^2K^2}$ pieces, and all the pieces are convex. In this case, we may use Algorithm 6 as a subroutine to Algorithm 4 for a more efficient but approximate implementation. Setting the dispersion parameters $\zeta = 1/\sqrt{T}$ and $\alpha = 0.5$, and the approximation parameters $\eta = \zeta = 1/\sqrt{T}$, and using Theorem 1 in Balcan et al. (2018b) gives the statement's regret bound and running time.

## Bandit Setting

The bandit-setting algorithm considers a grid over the parameter space, whose granularity depends on the dispersion parameters, and runs the Exp3 algorithm over menus corresponding to the grid.

Theorem 11. Let $u_1, \ldots, u_T : \mathcal{C} \to [0, H]$ be the revenue functions of two-part tariff menus such that $u_t(\rho)$ denotes the revenue of a mechanism associated with menu parameters $\rho$ for the buyer arriving at time t. Let the samples of buyers' values be drawn from $S \sim D^{(1)} \times \cdots \times D^{(T)}$. Suppose $v(k) \in [0, H]$ for any number of units $k \in [K]$. Also, suppose that for each distribution $D^{(t)}$, and every pair of numbers of units k and k', $v(k)$ and $v(k')$ have a κ-bounded joint distribution. There is a bandit-feedback online optimization algorithm with expected regret $\tilde{O}\big(T^{(2\ell+1)/(2\ell+2)}\big(H^2K\sqrt{\ell}\,\kappa^{d/2}\sqrt{\log\kappa} + 1/(H\kappa) + H\ell^2K^2\big)\big)$. The per-round running time is $O(H^{4\ell}\kappa^{2\ell}T^{\ell})$.

Proof. Proposition 39 establishes dispersion for two-part tariff menus with probability $1-\zeta$. Theorem 3 in Balcan et al. (2018b) relates dispersion to a regret bound for the bandit setting. It states that for a sequence of piecewise L-Lipschitz functions that is $(w, k)$-dispersed, when the parameter space is contained in a ball of radius R, running the Exp3 algorithm has regret

$$O\left(H{\sqrt{T d\left({\frac{3R}{w}}\right)^{d}\log{\frac{R}{w}}}}+T L w+H k\right).$$

The per-round running time is $O((3R/w)^{d})$. Note that dispersion holds only with probability $1-\zeta$, and with probability ζ the regret is bounded by HT. In our case, $L = K + 1$, $R = H$, and $d = 2\ell$. Substituting these terms along with w and k, and setting $\alpha = \frac{2\ell+1}{2\ell+2}$ and $\zeta = 1/\sqrt{T}$, gives the regret bound and running time in the theorem statement.

## Semi-Bandit Setting

For the semi-bandit setting, we need to invoke a more recent definition of dispersion.

Definition 38 ((Balcan et al., 2020a), β-point-dispersion). The sequence of loss functions $l_1, l_2, \ldots$ is β-point-dispersed for the Lipschitz constant L if for all T and for all $\varepsilon \ge T^{-\beta}$, we have that, in expectation, the maximum number of functions among $l_1, \ldots, l_T$ that fail the L-Lipschitz condition for any pair of points at distance ε in $\mathcal{C}$ is at most $\tilde{O}(\varepsilon T)$. That is, for all T and for all $\varepsilon \ge T^{-\beta}$, we have $\mathbb{E}\big[\max_{\rho,\rho'}\,|\{t \in [T] : |l_t(\rho) - l_t(\rho')| > L\|\rho - \rho'\|_2\}|\big] = \tilde{O}(\varepsilon T)$, where the max is taken over all $\rho, \rho' \in \mathcal{C}$ with $\|\rho - \rho'\|_2 \le \varepsilon$.

Proposition 39. Suppose $l_t(\rho) = H - u_t(\rho)$, where $u_t(\rho)$ is the revenue of the two-part tariff menu mechanism with prices $\rho$ and buyer's values $v_t$ at time t, and where the buyers' values are drawn from $D^{(1)} \times \cdots \times D^{(T)}$. If the $D^{(i)}$ are κ-bounded, where $\kappa = \tilde{o}(T)$, and K and ℓ, the maximum number of units and the number of tariffs, are polynomial in T, then these loss functions are β-point-dispersed for $\beta = 1/2$.

Proof. We use the following statement from Balcan and Sharma (2021), Theorem 7.

Proposition 40 (Balcan and Sharma, 2021). Let $l_1, \ldots, l_T : \mathbb{R}^d \to \mathbb{R}$ be independent piecewise L-Lipschitz functions, each having discontinuities specified by a collection of at most K' algebraic hypersurfaces of bounded degree. Let P denote the set of axis-aligned paths between pairs of points in $\mathbb{R}^d$, and for each $s \in P$ define $D(T, s) = |\{1 \le t \le T \mid l_t \text{ has a discontinuity along } s\}|$. Then we have $\mathbb{E}[\sup_{s \in P} D(T, s)] \le \sup_{s \in P} \mathbb{E}[D(T, s)] + O(\sqrt{T\log(TK')})$.

The number of hyperplanes, denoted K' in the theorem, is at most $T\ell^2K^2$, and the $l_t$ are piecewise $(K+1)$-Lipschitz functions (by Lemma 52), where T is the number of buyers (rounds), ℓ is the number of tariffs, and K is the maximum number of units. Note that the discontinuity boundaries are hyperplanes, as shown in Lemma 13. The independence of the $l_t$ comes from the assumptions of this setting, where the buyer valuations for each round are drawn independently. Definition 38 counts the number of times (over T time steps) that the difference in utility of a pair of points violates the L-Lipschitz condition, and takes the worst pair for this property. Proposition 40 counts the number of times that the utility function has discontinuities along an axis-aligned path. Therefore, $\sup_{s \in P}\mathbb{E}[D(T, s)] + O(\sqrt{T\log(TK')})$ is an upper bound on $\mathbb{E}\big[\max_{\rho,\rho'}|\{t \in [T] : |u_t(\rho) - u_t(\rho')| > L\|\rho - \rho'\|_2\}|\big]$. To establish dispersion, we need to bound $\sup_{s \in P}\mathbb{E}[D(T, s)]$.

Recall from the proof of Proposition 16 that the discontinuities can be partitioned into $\ell^2K^2$ multisets of parallel hyperplanes, such that multiset $B_{j,k,j',k'}$ corresponds to the pair of tariffs and numbers of units $(j, k)$ and $(j', k')$. In addition, since we assume the buyers' valuations are in the range [0, H] and are drawn from pairwise κ-bounded joint distributions, the offsets of the hyperplanes are independent draws from an Hκ-bounded distribution. The number of multisets is $\ell^2K^2$, and the size of each multiset is T. The hyperplanes within each multiset are well-dispersed. For a multiset $B_{j,k,j',k'}$, let $\Theta_{j,k,j',k'}$ be the multiset of the hyperplanes' offsets. By assumption, the elements of $\Theta_{j,k,j',k'}$ are independently drawn from Hκ-bounded distributions. Since the offsets are Hκ-bounded, the probability that an offset falls in any interval of length ε is O(Hκε). The expected number of hyperplanes crossed from each multiset over a distance ε along each axis is at most $H\kappa\varepsilon|B_{j,k,j',k'}|$, and since there are 2ℓ dimensions, the total expected number of crossings is $2\ell H\kappa\varepsilon|B_{j,k,j',k'}|$. Using the upper bound on $|B_{j,k,j',k'}|$, in total, for any pair of points at distance ε, $\sup_{s \in P}\mathbb{E}[D(T, s)] = O(\ell^3K^2H\kappa\varepsilon T)$. By Proposition 40, $\mathbb{E}[\sup_{s \in P} D(T, s)] \le \sup_{s \in P}\mathbb{E}[D(T, s)] + O(\sqrt{T\log(TK\ell)})$, which in our case is upper bounded by $O(\ell^3K^2H\kappa\varepsilon T + \sqrt{T\log(TK\ell)})$. For $\kappa = \tilde{o}(T)$, $K = O(\mathrm{poly}(T))$, and $\ell = O(\mathrm{poly}(T))$, $\mathbb{E}[\sup_{s \in P} D(T, s)] = \tilde{O}(\varepsilon T)$. Therefore, these loss functions are β-point-dispersed for $\beta = 1/2$, satisfying the statement.

Overview of Algorithm 7. The generic algorithm for the semi-bandit case was previously developed in Balcan et al. (2020a). We adapt it to our setting and consider an efficient implementation using the approximate integration and sampling from Balcan et al. (2018b) discussed in Definition 37. The semi-bandit-setting algorithm is a continuous version of the Exp3-SET algorithm of Alon et al. (2017b). At each time step, the algorithm learns the revenue function (only) inside the region $\mathcal{P}^{(t)} \ni \rho_t$ that the presented menu belongs to, and updates the menu weights for the next round accordingly.

Algorithm 7: Semi-bandit two-part tariff under smoothed distributional assumptions (Adapted from (Balcan et al., 2020a), Algorithm 1, for two-part tariffs)
Input: Step size λ ∈ [0, 1]

1: Let $w_1(\rho) = 1$ for all $\rho \in \mathcal{C}$.

2: for buyer $t = 1, \ldots, T$ do:
Let $p_t(\rho) = \frac{w_t(\rho)}{W_t}$, where $W_t = \int_{\mathcal{C}} w_t(\rho)\, d\rho$;
Sample $\rho_t$ from $p_t$, present it to buyer t, and observe the tariff index j and the number of units k selected by the buyer, as well as the region $\mathcal{P}^{(t)}$ for which the buyer takes this action; the revenue inside $\mathcal{P}^{(t)}$ is $u_t(\rho) = \mathbb{I}\{k \ge 1\}\big(p_1^{(j)}(\rho) + kp_2^{(j)}(\rho)\big)$ and the normalized loss is $l_t(\rho) = \frac{H - u_t(\rho)}{H}$ for all $\rho \in \mathcal{P}^{(t)}$;
Let $\hat{l}_t(\rho) = \frac{\mathbb{I}\{\rho \in \mathcal{P}^{(t)}\}}{p_t(\mathcal{P}^{(t)})}\, l_t(\rho)$, where we define $p_t(\mathcal{P}^{(t)}) = \int_{\mathcal{P}^{(t)}} p_t(\rho)\, d\rho$;
Let $w_{t+1}(\rho) = w_t(\rho)\exp\big(-\lambda\hat{l}_t(\rho)\big)$ for all ρ.
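The discretized sketch below (our own illustration, not the actual continuous implementation) conveys the semi-bandit update: after observing the buyer's choice, the loss estimate is importance-weighted by the probability mass of the entire region on which the same choice would have been made, and only that region's weights are updated.

```python
import math
import random

# Discretized illustration of the continuous Exp3-SET update of Algorithm 7.
# `grid` is a finite set of menu parameter tuples, `region_of(menu, valuation)` is a
# hypothetical helper returning the grid points on which the buyer makes the same
# selection, and `revenue(menu, valuation)` returns the corresponding payment.
def exp3_set(grid, valuations, revenue, region_of, H, step):
    weights = {rho: 1.0 for rho in grid}
    for v in valuations:
        total = sum(weights.values())
        probs = {rho: w / total for rho, w in weights.items()}
        chosen = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
        region = region_of(chosen, v)                   # grid points with the same buyer choice
        region_mass = sum(probs[rho] for rho in region)
        for rho in region:                              # semi-bandit: update the whole region
            loss = (H - revenue(rho, v)) / H
            est = loss / region_mass                    # importance-weighted loss estimate
            weights[rho] *= math.exp(-step * est)
    return weights
```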

Theorem 12. Suppose the buyers' values are drawn from $D^{(1)} \times \cdots \times D^{(T)}$, where each $D^{(t)}$ is κ-bounded for $\kappa = \tilde{o}(T)$. Then, running the continuous Exp3-SET algorithm (Algorithm 7) for menus of two-part tariffs under semi-bandit feedback has expected regret bounded by $\tilde{O}(H\sqrt{\ell T})$. An efficient implementation has the same regret bound and running time $\tilde{O}\big((T+1)^{\mathrm{poly}(\ell,K)}\mathrm{poly}(\ell,\sqrt{T}) + KT\sqrt{T}\big)$.

Proof. For the regret bound, we invoke Theorem 2 of Balcan et al. (2020a), stating that if the loss functions are Lipschitz functions satisfying β-point-dispersion, running Algorithm 7 has expected regret bounded by $\tilde{O}(\sqrt{dT} + T^{1-\beta})$ when the loss function is in [0, 1]. In our case, d, the number of dimensions, is 2ℓ, the dispersion parameter is $\beta = 1/2$, and the loss function is in [0, H]. This implies the regret bound. Now, we discuss the running time of the algorithm. At each time t, using the buyer's valuation vector, the tariff j, and the number of units k selected by the buyer, we can determine the region $\mathcal{P}^{(t)}$, where the buyer makes the same selection and the utility function is linear, by solving a linear program (the inequalities in Equation (2)). This computation is done in time $\mathrm{poly}(\ell, K)$. Next, for the integration procedures inside the algorithm, we use the approximate version introduced in Definition 37, and for sampling, we use the efficient implementation demonstrated in Algorithm 6. In particular, we consider $\eta = \zeta = 1/(3\sqrt{T})$. For $\int_{\mathcal{C}} w_t(\rho)\, d\rho$, we use lines 1 through 3 of Algorithm 6 and take the sum of the integration outcomes of line 3, for $\eta' = \eta/4$ and $\zeta' = \zeta/T$. For $p_t(\mathcal{P}^{(t)}) = \int_{\mathcal{P}^{(t)}} p_t(\rho)\, d\rho$ we do the same, except that now we do the integration operations in line 3 only for the regions inside $\mathcal{P}^{(t)}$. For sampling $\rho_t$ from $p_t$, we use the complete procedure of Algorithm 6 applied to the regions with linear cumulative utility, with $\lambda = \sqrt{2\ell\ln(2H^2\kappa\sqrt{T})/T}\,/H$, $g = \lambda\sum_{s=0}^{t-1} u_s$, and $\eta = \zeta = 1/(3\sqrt{T})$. Note that since the loss is only updated for $\mathcal{P}^{(t)}$, for any regions outside this part we do not need to repeat the integration operations in Algorithm 6. This may result in a potentially better running time for the semi-bandit setting compared to full information; however, we do not quantify the improvement. Using a union bound, with probability at least $1 - 1/\sqrt{T}$, all the approximate integration and sampling operations performed in the algorithm succeed and the density function of the approximate distribution used for sampling is always within a $(1-\eta)$ fraction of the exact distribution. Using these parameters together with Theorem 1 in Balcan et al. (2018b) concludes that the same regret bound is achievable from the approximate operations and gives the running time in the statement.

## A.1.3 Limited Buyer Types

## Full Information Setting

Theorem 24. In the full information case for length-ℓ menus of two-part tariffs, when there are V types of buyers, running Algorithm 2 over the set of menus corresponding to the set $\mathcal{E}$ for $\beta = 1/\sqrt{T}$ has regret bounded by $\tilde{O}(H\ell\sqrt{T}\ln(V\ell K))$.

Proof. We run the weighted majority algorithm, Algorithm 2, with parameter $\beta = 1/\sqrt{T}$ on the set $\mathcal{E}$ as the set of menus (experts). The proof directly follows from Lemma 23 and Proposition 35. Let $n = |\mathcal{E}|$. Let $b_i$ be the valuation of the buyer at step i, and $\bar{b}$ be the vector of valuations of all buyers in rounds 1 through T. We denote by $\mathrm{Rev}_{\mathcal{E}}()$ the maximum revenue obtained in the set $\mathcal{E}$, by $\mathrm{OPT}()$ the optimal revenue, and by $\mathrm{Rev}_{\mathrm{WM}}()$ the revenue obtained from Algorithm 2 on the set of experts $X = \mathcal{E}$. Then,

$$\begin{array}{c}{{n\leq\left(V\ell^{2}K^{2}/4\right)^{2\ell},}}\\ {{\mathrm{Rev}_{\mathrm{WM}}\left(\bar{b}\right)\geq\mathrm{Rev}(\mathcal{E})\left(\bar{b}\right)-\frac{\beta}{2}\mathrm{Rev}(\mathcal{E})\left(\bar{b}\right)-\frac{H\ln n}{\beta},}}\\ {{\mathrm{Rev}_{\mathcal{E}}\left(\bar{b}\right)=\sum_{i=1}^{T}\mathrm{Rev}_{\mathcal{E}}\left(\mathbf{b}_{i}\right),}}\\ {{\mathrm{Rev}_{\mathcal{E}}\left(\mathbf{b}_{i}\right)\geq\mathrm{OPT}\left(\mathbf{b}_{i}\right)-2K\varepsilon;}}\end{array}$$

where the first expression uses the size of E in Lemma 22, the second expression uses Proposition 35, the third expands the revenue over T terms, and the last uses Lemma 23. Rearranging the terms, we have:

$$\begin{array}{l}{{\mathrm{Rev}_{\mathcal{E}}\left(\mathbf{b}_{i}\right)\geq\mathrm{OPT}\left(\mathbf{b}_{i}\right)-2K\varepsilon}}\\ {{\mathrm{Rev}_{\mathcal{E}}\left(\bar{b}\right)\geq\mathrm{OPT}\left(\bar{b}\right)-2K\varepsilon T}}\\ {{\mathrm{Rev}_{\mathrm{WM}}\left(\bar{b}\right)\geq\mathrm{OPT}\left(\bar{b}\right)-2K\varepsilon T-\frac{\beta H T}{2}-\frac{H\ln n}{\beta}}}\\ {{\mathrm{Rev}_{\mathrm{WM}}\left(\bar{b}\right)\geq\mathrm{OPT}\left(\bar{b}\right)-2K\varepsilon T-\frac{\beta H T}{2}-\frac{2\ell H\left(\ln\left(V\ell K\right)\right)}{\beta}}}\end{array}$$

We set the variables ε and β to minimize the exponent of T in the regret. By setting $\beta = \frac{1}{\sqrt{T}}$ and $\varepsilon = 1/(K\sqrt{T})$, the regret is $O(H\ell\sqrt{T}\ln(V\ell K))$.

## Partial Information Setting

We first show how to estimate the utility of any menu using only the buyer's responses to a limited number of menus. In doing so, we take advantage of the interdependence of the buyers' responses to different menus to obtain estimates for unused menus. In particular, using the barycentric spanner concept from Awerbuch and Kleinberg (2008), we devise a basis for the menus such that observing buyers' responses to them suffices for estimating the revenue of the other menus.

Let $\mathcal{I}$ be a set of length-V indicator vectors, such that for each feasible mapping µ and each option $(j, k)$ to select, where j is the tariff index and k is the number of units, there is a vector in $\mathcal{I}$. This vector indicates the (maximal) set of buyer types that select this option in mapping µ. As an example, if in mapping µ, $\{v_2, v_3\}$ is the exact set of valuation types that select the same option $(j, k)$, then the vector $(0, 1, 1, 0, \ldots)$ belongs to $\mathcal{I}$. For $I \in \mathcal{I}$, $\mu_I$ and $(j, k)_I$ denote the mapping and option corresponding to I, respectively. Similarly, $I_{\mu,(j,k)}$ is the vector in $\mathcal{I}$ corresponding to mapping µ and option $(j, k)$. Using principles from linear algebra, since the vectors are V-dimensional, there is a set of at most V vectors in $\mathcal{I}$ such that any other vector in $\mathcal{I}$ is a linear combination of the vectors in this set. Awerbuch and Kleinberg make this property stronger and show that there is a set of V vectors in $\mathcal{I}$, called the barycentric spanner, or *spanner* for short, which we denote by S, such that any member of $\mathcal{I}$ can be written as a linear combination of vectors in S with coefficients in [−1, 1].

Lemma 41. There exists a set S in $\mathcal{I}$ such that, for all $I \in \mathcal{I}$, there exist coefficients $\lambda_1, \ldots, \lambda_V \in [-1, 1]$ so that $I = \sum_{j=1}^{V} \lambda_j s_j$.

Proof. The statement is a direct corollary of Awerbuch and Kleinberg (2008), Proposition 2.2.

Here is the main idea of how to find estimates for the utility of all the menus by presenting only the menus corresponding to the spanner S to the buyers. First, similar to Balcan et al. (2015), we define a function $f_\tau(\cdot)$ on the vectors in $\mathcal{I}$ that will be instrumental in computing the utility of all the menus based on the spanner. Recall that each vector I in $\mathcal{I}$ corresponds to a mapping $\mu_I$ and an option $(j, k)_I$. Let $f_\tau(I)$ be the number of times during a time block τ that, given a menu in $\mathcal{P}_{\mu_I}$, the arriving buyer selects option $(j, k)_I$. We first show how the value of this function on inputs from the spanner is sufficient for finding the revenue of arbitrary menus, and then show how to estimate it.

Lemma 42. For each menu $\rho$ and any time block $\tau: t+1, \ldots, t+\ell_\tau$, let $u_\tau(\rho)$ represent the average utility of $\rho$ for the buyer types arriving in τ. Then,
$$u_{\tau}(\mathbf{\rho})=\frac{1}{\ell_{\tau}}\sum_{(j,k)\in\mathcal{O}}\mathbb{I}\{k\geq1\}\left(p_{1}^{(j)}(\mathbf{\rho})+k p_{2}^{(j)}(\mathbf{\rho})\right)\sum_{i=1}^{V}\lambda_{i}(\mathbf{I}_{\mu_{\rho},(j,k)})f_{\tau}(\mathbf{s}_{i})\ ,$$

Proof. By definition, $u_\tau(\rho)$ is the average utility of menu $\rho$ for the buyers arriving in τ. Menu $\rho$ corresponds to a feasible mapping $\mu_\rho$. By definition, the buyers in time block τ select option $(j, k)$ exactly $f_\tau(I_{\mu_\rho,(j,k)})$ times. By Lemma 41, $I_{\mu_\rho,(j,k)}$ can be written as a linear combination of the vectors in the spanner. Furthermore, $f_\tau(\cdot)$ is a linear function, as it is equivalent to the dot product of the function input with a vector indicating the frequency, i.e., the number of arrivals, of each buyer type during τ. Therefore,

$$u_{\tau}(\mathbf{\rho})=\frac{1}{\ell_{\tau}}\sum_{(j,k)\in\mathcal{O}}\mathbb{I}\{k\geq1\}\left(p_{1}^{(j)}(\mathbf{\rho})+kp_{2}^{(j)}(\mathbf{\rho})\right)f_{\tau}(\mathbf{I}_{\mu_{\rho},(j,k)})$$ $$=\frac{1}{\ell_{\tau}}\sum_{(j,k)\in\mathcal{O}}\mathbb{I}\{k\geq1\}\left(p_{1}^{(j)}(\mathbf{\rho})+kp_{2}^{(j)}(\mathbf{\rho})\right)\sum_{i=1}^{V}\lambda_{i}(\mathbf{I}_{\mu_{\rho},(j,k)})f_{\tau}(\mathbf{s}_{i}).$$
$$\square$$

Let $\hat{f}_\tau(s_i)$ be the estimator of $f_\tau(s_i)/\ell_\tau$ for the spanner vectors. Let $\mu_{s_i}$ be the mapping corresponding to $s_i$. Recall that $f_\tau(s_i)$ is the number of times during τ that, given a menu in $\mathcal{P}_{\mu_{s_i}}$, the arriving buyer selects option $(j, k)_{s_i}$. In order to estimate this quantity, we present a menu corresponding to $s_i$, i.e., a menu in $\mathcal{P}_{\mu_{s_i}}$, once, uniformly at random during the time block τ. If the buyer selects option $(j, k)_{s_i}$, we set $\hat{f}_\tau(s_i)$ to 1, and otherwise set it to 0. The next lemma shows that $\hat{f}_\tau(s_i)$ has expected value $f_\tau(s_i)/\ell_\tau$ and has range [0, 1]. Intuitively, the reason is that, due to the uniform random selection of the time step, the estimator has the correct expected value.

Lemma 43 (Adapted from Balcan et al. (2015) Lemma 6.3). For any $s \in S$, $\mathbb{E}[\hat{f}_\tau(s)]\,\ell_\tau = f_\tau(s)$.

Proof. Note that $\hat{f}_\tau(s) = 1$ if and only if, at the time step at which menu $\rho_s$ was presented, option $(j,k)_s$ was selected. Since $\rho_s$ is presented once, at a time step chosen uniformly at random and independently of the sequence of buyers, the buyer presented with $\rho_s$ is also picked uniformly at random over the time steps of the block. Therefore, $\mathbb{E}[\hat{f}_\tau(s)]$ is the probability that a randomly chosen buyer from time block $\tau$ selects $(j,k)_s$, which equals $f_\tau(s)/\ell_\tau$.

Now, we prove that the expected value of the utility estimator for each menu is equal to the utility of that menu, i.e., the estimator is unbiased, and moreover that it has a bounded range. The utility estimator is defined as follows, where $f_\tau(s_i)/\ell_\tau$ in the utility formula is replaced by its estimator $\hat{f}_\tau(s_i)$.

$$\hat{u}_{\tau}(\mathbf{\rho})=\sum_{(j,k)\in\mathcal{O}}\mathbb{I}\{k\geq1\}\left(p_{1}^{(j)}(\mathbf{\rho})+kp_{2}^{(j)}(\mathbf{\rho})\right)\sum_{i=1}^{V}\lambda_{i}(\mathbf{I}_{\mu_{\rho},(j,k)})\hat{f}_{\tau}(\mathbf{s}_{i})\,.$$

Lemma 44. For any menu $\mathbf{\rho}$, $\mathbb{E}[\hat{u}_{\tau}(\mathbf{\rho})]=u_{\tau}(\mathbf{\rho})$ and $\hat{u}_{\tau}(\mathbf{\rho})\in[-\ell K V H,\ \ell K V H]$.

Proof. The equality in expectation follows directly from the definitions of $\hat{u}_\tau(\rho)$ and $u_\tau(\rho)$ and Lemma 43. Now, we prove the range of the estimator. Since $S$ is a barycentric spanner, for any $I \in \mathcal{I}$, $\lambda_i(I) \in [-1, 1]$. Also, $\hat{f}_\tau(\cdot)$ belongs to $\{0, 1\}$, and the utility obtained when a buyer selects an option in the menu, e.g., $p_1^{(j)}(\rho) + k\,p_2^{(j)}(\rho)$, is always in $[0, H]$. Therefore, by the formula of the estimator, it is bounded in absolute value by $H$ times the number of options times the number of buyer types.

We use the algorithm below along with the weighted majority algorithm for the full-information setting (similar to Algorithm 2), which uses the utility (revenue) estimates. We use $\mathcal{E}$ as the set of experts (menus) and obtain a distribution $q$ over the set $\mathcal{E}$ as the weight vector.
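
For concreteness, here is a minimal sketch of how the estimate $\hat{u}_\tau(\rho)$ above could be assembled from the spanner coefficients and the sampled indicators. The container names (`options`, `price_of_option`, `lambda_coeffs`, `f_hat`) are illustrative and not part of the formal algorithm.

```python
def estimate_menu_utility(options, price_of_option, lambda_coeffs, f_hat):
    """Sketch of the unbiased utility estimate for a fixed menu rho.

    options         : iterable of options (j, k) with k >= 1
    price_of_option : dict mapping (j, k) -> p1_j(rho) + k * p2_j(rho)
    lambda_coeffs   : dict mapping (j, k) -> list of spanner coefficients
                      lambda_i(I_{mu_rho,(j,k)}), i = 1..V
    f_hat           : list of sampled indicators f_hat_tau(s_i), i = 1..V
    """
    u_hat = 0.0
    for jk in options:
        # frequency estimate of option jk, expressed through the spanner
        freq = sum(lam * f for lam, f in zip(lambda_coeffs[jk], f_hat))
        u_hat += price_of_option[jk] * freq
    return u_hat
```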

**Overview of Algorithm 8** First, we provide the high-level structure of the algorithm and then discuss the details. The algorithm operates in time blocks, with each block consisting of exploitation and exploration time steps. The exploration time steps are selected uniformly at random within the block and are limited in number. In an exploitation step, the menu used is the output of the full-information algorithm, employing the utility estimators from the previous time block. These menus are always extreme points of the continuity regions, as discussed at the beginning of the section. During exploration time steps, the menu corresponding to a vector in the spanner is used. At the end of each time block, the algorithm refines the unbiased estimators of the utility of all extreme points using the information gathered in the exploration steps. $Z$ is the number of time blocks, each consisting of $T/Z$ time steps. The algorithm picks time steps $t_1, \ldots, t_V$ uniformly at random in the current time block, together with a random permutation $\pi$ of $[V]$. Whenever the time step equals $t_i$, the algorithm runs an exploration step; otherwise, it runs an exploitation step. In the exploration step at time step $t_i$, a menu corresponding to $s_{\pi(i)}$, namely $\rho_{s_{\pi(i)}}$, is presented to the arriving buyer, and the estimator $\hat{f}_\tau(s_{\pi(i)})$ is set to 1 if the buyer selects $(j,k)_{s_{\pi(i)}}$ and to 0 otherwise. At the end of the time block, we update the estimates of the revenue of the menus corresponding to the extreme points.

Lemma 45 ((Balcan et al., 2015) Lemma 6.2). Let $\mathcal{M}$ be the set of all actions. For any time block (set of consecutive time steps) $T'$ and action $j \in \mathcal{M}$, let $c_{T'}(j)$ be the average loss of action $j$ over $T'$. Assume that $S \subseteq \mathcal{M}$ is such that, by sampling all actions in $S$, we can compute $\hat{c}_{T'}(j)$ for all $j \in \mathcal{M}$ with the following properties: $\mathbb{E}[\hat{c}_{T'}(j)] = c_{T'}(j)$ and $\hat{c}_{T'}(j) \in [-\kappa, \kappa]$. Then there is an algorithm with loss $L_{\mathrm{alg}} \leq L_{\min} + O\!\left(T^{2/3}|S|^{1/3}\kappa^{1/3}\log^{1/3}(|\mathcal{M}|)\right)$, where $L_{\min}$ is the loss of the best action in hindsight.

Algorithm 8: Partial-Information Algorithm for Limited Buyer Types (adapted from (Balcan et al., 2015) Algorithm 1)

Input: $V$: the number of buyer types, $\mathcal{O}$: the set of menu options ($|\mathcal{O}| = \ell(K+1)$)

1: $Z \leftarrow \left(T^2 |\mathcal{O}|^2 V \log(|\mathcal{O}|V)\right)^{1/3}$ ▷ the number of time blocks

2: Create the set $\mathcal{I} = \{I_{\mu,(j,k)} \mid \text{for all options } (j,k) \text{ and feasible mappings } \mu\}$ such that the $i$th component of $I_{\mu,(j,k)}$ is 1 iff $v_i$ selects $(j,k)$ in $\mu$ and is 0 otherwise.

3: Find a barycentric spanner $S = \{s_1, \ldots, s_V\}$ for $\mathcal{I}$. For every $s \in S$, let $\mu_s$ be the corresponding mapping, $(j,k)_s$ the corresponding option, and $\rho_s$ a menu in $P_{\mu_s}$.

4: for all $I \in \mathcal{I}$ do let $\lambda(I)$ be the representation of $I$ in spanner $S$, i.e., $\sum_{i=1}^{V} \lambda_i(I)\, s_i = I$.

5: Let $q_1$ be the uniform distribution over $\mathcal{E}$. ▷ initial weight vector over menus in $\mathcal{E}$

6: for $\tau = 1, \ldots, Z$ do ▷ time blocks

7: &nbsp;&nbsp; Choose a random permutation $\pi$ over $[V]$ and $t_1, \ldots, t_V$ from $[T/Z]$.

8: &nbsp;&nbsp; for $t = (\tau-1)(T/Z)+1, \ldots, \tau(T/Z)$ do ▷ time steps in a time block

9: &nbsp;&nbsp;&nbsp;&nbsp; if $t = t_i$ for some $i \in [V]$ then ▷ exploration time step

10: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\rho_t \leftarrow \rho_{s_{\pi(i)}}$; if $(j,k)_{s_{\pi(i)}}$ is selected, then $\hat{f}_\tau(s_{\pi(i)}) \leftarrow 1$, otherwise $\hat{f}_\tau(s_{\pi(i)}) \leftarrow 0$.

11: &nbsp;&nbsp;&nbsp;&nbsp; else ▷ exploitation time step

12: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; draw $\rho_t$ at random from distribution $q_\tau$.

13: &nbsp;&nbsp; for all $\rho \in \mathcal{E}$ and $\mu$ such that $\rho \in P_\mu$ do $\hat{u}_\tau(\rho) = \sum_{(j,k)\in\mathcal{O}} \mathbb{I}\{k \geq 1\}\left(p_1^{(j)}(\rho) + k\,p_2^{(j)}(\rho)\right)\sum_{i=1}^{V}\lambda_i(I_{\mu_\rho,(j,k)})\,\hat{f}_\tau(s_i)$.

14: &nbsp;&nbsp; Call Algorithm 2 for experts $\mathcal{E}$ with $(\hat{u}_\tau)$ as their revenue function, and receive $q_{\tau+1}$ as a distribution over all mixed strategies in $\mathcal{E}$.
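
The following is a minimal Python skeleton of the block structure of Algorithm 8, under the assumption that the problem-specific pieces (presenting a menu, computing the per-menu estimates, and the full-information update) are supplied as callables; all names, including the `target_option` attribute, are placeholders rather than the paper's notation.

```python
import random

def run_partial_info_blocks(T, Z, spanner_menus, extreme_menus,
                            present_menu, estimate_utilities,
                            full_information_update):
    """Skeleton of the block-based explore/exploit loop (Algorithm 8 style)."""
    V = len(spanner_menus)
    block_len = T // Z
    q = {rho: 1.0 / len(extreme_menus) for rho in extreme_menus}  # uniform q_1

    for tau in range(Z):
        explore_times = random.sample(range(block_len), V)   # t_1, ..., t_V
        perm = random.sample(range(V), V)                     # random permutation pi
        f_hat = [0.0] * V
        for t in range(block_len):
            if t in explore_times:                            # exploration step
                i = explore_times.index(t)
                s = perm[i]
                choice = present_menu(spanner_menus[s])       # buyer's chosen option
                f_hat[s] = 1.0 if choice == spanner_menus[s].target_option else 0.0
            else:                                             # exploitation step
                menus = list(q)
                rho = random.choices(menus, weights=[q[m] for m in menus])[0]
                present_menu(rho)
        u_hat = estimate_utilities(extreme_menus, f_hat)      # per-menu estimates
        q = full_information_update(q, u_hat)                 # weighted-majority step
    return q
```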

We are now ready to prove the main result of this section.

Theorem 25. In the partial information (bandit) case for length-$\ell$ menus of two-part tariffs, when there are $V$ different types of buyers, there is an algorithm with regret bound of $\tilde{O}\!\left(T^{2/3}\,\ell\,(HKV)^{1/3}\log^{1/3}(V\ell K)\right)$.

Proof. In Lemma 45, $|S|$ is the number of dimensions (the size of the barycentric spanner set), $\kappa$ is the maximum revenue times the number of buyer types times the number of their options (entries in the menu), and $|\mathcal{M}|$ is the number of extreme points. In our case, $|S| = 2\ell$, $\kappa = H\ell K V$, and $|\mathcal{M}| \leq (V\ell^2K^2/4)^{2\ell}$, so $\log(|\mathcal{M}|) \leq 2\ell\log(V\ell^2K^2/4) = O(\ell\log(V\ell K))$. By Lemma 44, the expected value of the estimated utility is equal to the exact value of the utility, with range $[-H\ell KV, H\ell KV]$.

Using Lemma 45, the regret for menus of two-part tariffs is bounded by
$$O\!\left(T^{2/3}\,\ell^{1/3}\,(H\ell K V)^{1/3}\,\ell^{1/3}\log^{1/3}(V\ell K)\right) \in O\!\left(T^{2/3}\,\ell\,(HKV)^{1/3}\log^{1/3}(V\ell K)\right).$$

The following quantifies the regret of simply running the Exp3 algorithm on the set of extreme points.

Proposition 46. In the partial information case for length-$\ell$ menus of two-part tariffs when there are $V$ buyer types, running Algorithm 3 over menus corresponding to $\mathcal{E}$ with $\beta = \gamma = T^{-1/3}$ has regret bound $O\!\left(T^{2/3}\,\ell H (V\ell^2K^2/4)^{2\ell}\ln(V\ell K)\right)$.

Proof. The proof is similar to that of Theorem 11. We denote by $\mathrm{Rev}_{\mathrm{Exp3}}(\cdot)$ the revenue obtained from the Exp3 algorithm, as presented in Algorithm 3, on the set of menus corresponding to $\mathcal{E}$. Let $n$ denote the number of such menus. $b_i$ is the valuation of the buyer at step $i$, and $\bar{b}$ is the sequence of valuations of all buyers in rounds 1 through $T$. $\mathrm{Rev}_{\mathcal{E}}(\cdot)$ is the maximum revenue obtained in the set $\mathcal{E}$ and $\mathrm{OPT}(\cdot)$ is the optimal revenue.

$$\begin{array}{c}{{n\leq(V\ell^{2}K^{2}/4)^{2\ell},}}\\ {{\mathrm{Rev}_{\mathrm{Exp3}}\left(\bar{b}\right)\geq\mathrm{Rev}(\mathcal{E})\left(\bar{b}\right)-\left(\gamma+\frac{\beta}{2}\right)\mathrm{Rev}(\mathcal{E})\left(\bar{b}\right)-\frac{H n\ln n}{\beta\gamma},}}\\ {{\mathrm{Rev}_{\mathcal{E}}\left(\bar{b}\right)=\sum_{i=1}^{T}\mathrm{Rev}_{\mathcal{E}}\left(\mathbf{b}_{i}\right),}}\\ {{\mathrm{Rev}_{\mathcal{E}}\left(\mathbf{b}_{i}\right)\geq\mathrm{OPT}\left(\mathbf{b}_{i}\right)-2K\varepsilon;}}\end{array}$$

where the first expression uses the size of E in Lemma 22, the second expression uses Proposition 36, the third expands the revenue over T terms, and the last uses Lemma 23. Rearranging the terms, we have:

$$\begin{array}{l}
\mathrm{Rev}_{\mathcal{E}}\left(\mathbf{b}_{i}\right)\geq\mathrm{OPT}\left(\mathbf{b}_{i}\right)-2K\varepsilon\\
\mathrm{Rev}_{\mathcal{E}}\left(\bar{b}\right)\geq\mathrm{OPT}\left(\bar{b}\right)-2K\varepsilon T\\
\mathrm{Rev}_{\mathrm{Exp3}}\left(\bar{b}\right)\geq\mathrm{OPT}\left(\bar{b}\right)-2K\varepsilon T-\left(\gamma+\frac{\beta}{2}\right)HT-\frac{Hn\ln n}{\beta\gamma}\\
\mathrm{Rev}_{\mathrm{Exp3}}\left(\bar{b}\right)\geq\mathrm{OPT}\left(\bar{b}\right)-2K\varepsilon T-\left(\gamma+\frac{\beta}{2}\right)HT-\frac{2\ell H(V\ell^{2}K^{2}/4)^{2\ell}\ln\left(V\ell K\right)}{\beta\gamma}
\end{array}$$

We set the variable $\varepsilon$ in $\mathcal{E}$ and $\beta = \gamma$ as functions of $T$ to minimize the exponent of $T$ in the regret. By setting $\beta = \gamma = T^{-1/3}$ and $\varepsilon = T^{-1/2}$, the regret is $O\!\left(T^{2/3}\,\ell H (V\ell^2K^2/4)^{2\ell}\ln(V\ell K)\right)$.

Remark. The standard technique for the partial-information setting, running the Exp3 algorithm on the extreme points, leads to a regret bound that is exponential in the size of the menu, as stated in Proposition 46; however, Algorithm 8 has a regret bound polynomial in the size of the menu. Therefore, the new technique results in a significant improvement.

## A.2 Distributional Learning

Theorem 26. In the distributional setting, for length-$\ell$ menus of two-part tariffs, there exists a learning algorithm with sample complexity $\frac{H^{2}}{2\varepsilon^{2}}\left(2\ell \ln\left(\frac{2KH\ell}{\varepsilon}\right)+\ln(2/\delta)\right)$ and running time $\frac{H^{2}}{2\varepsilon^{2}}\left(2\ell \ln\left(\frac{2KH\ell}{\varepsilon}\right)+\ln(2/\delta)\right) K\ell\left(\frac{2HK\ell}{\varepsilon}\right)^{2\ell}$.

Proof. We need to find the number of samples such that with probability 1 − δ, the difference between the expected revenue of our algorithm and the optimal revenue is at most ε. Note that since our algorithm uses discretization of possible menus, we face two types of errors: the discretization error, and the usual empirical error in a PAC learning setting. We find the sample complexity and discretization parameters such that the total error is bounded by ε.

The possible number of menus after discretization using parameter $\alpha$ is given by the following formula:

$$|\mathcal{H}|=(H/\alpha)^{2\ell}.$$

Using uniform convergence in the PAC learning setting, the sample complexity for empirical error $\varepsilon'$ is as follows:

$$|S|\geq\frac{H^{2}}{2\varepsilon^{\prime2}}\left(\ln|\mathcal{H}|+\ln\left(2/\delta\right)\right).$$

Replacing $\ln|\mathcal{H}|$, we have
$$|S|\geq\frac{H^{2}}{2\varepsilon^{\prime2}}\left(2\ell\ln\left(H/\alpha\right)+\ln\left(2/\delta\right)\right).$$

Also, the revenue loss compared to the optimum for an arbitrary buyer $i$ with valuation $v_i$ is:

$$\mathrm{Rev}_{M'}\left(\mathbf{v}_i\right)\geq\mathrm{OPT}\left(\mathbf{v}_i\right)-2K\ell\alpha.$$

The total error (from discretization and empirical error), when the empirical error is set to $\varepsilon'$, is

$$2K\ell\alpha+\varepsilon^{\prime}.$$

By setting $2K\ell\alpha=\varepsilon'$, we have

$$\alpha=\frac{\varepsilon^{\prime}}{2K\ell}.$$

Replacing α gives the following sample complexity:

$$\begin{array}{l l}{{|S|\geq\frac{H^{2}}{2\varepsilon^{\prime2}}\left(2\ell\ln\left(H/\alpha\right)+\ln\left(2/\delta\right)\right)}}\\ {{}}&{{\geq\frac{H^{2}}{2\varepsilon^{\prime2}}\left(2\ell\ln\left(2K\ell H/\varepsilon^{\prime}\right)+\ln\left(2/\delta\right)\right)}}\end{array}$$

which, by replacing $\varepsilon'$ with $\varepsilon/2$, results in total error $\varepsilon$.

The computational complexity of finding the empirical optimal menu for |S| buyers and menu of size ℓ is:

$$O(|S|K\ell|\mathcal{H}|)=|S|\,K\ell\left(\frac{2HK\ell}{\varepsilon}\right)^{2\ell}.$$

This implies the efficiency of the algorithm. $\square$
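
To illustrate the discretization-based algorithm analyzed above, here is a minimal sketch for the simplest case of a length-1 menu. The representation of a buyer as a vector of cumulative values for $k$ units, and all identifiers, are assumptions made for the example.

```python
import itertools
import numpy as np

def buyer_choice_revenue(values, p1, p2):
    """Revenue from a single two-part tariff (p1 upfront, p2 per unit) when a
    buyer with cumulative values values[k] for k units (values[0] = 0) buys
    the utility-maximizing quantity, or nothing at utility 0."""
    best_u, best_rev = 0.0, 0.0           # outside option: buy nothing
    for k in range(1, len(values)):
        u = values[k] - p1 - k * p2
        if u > best_u:
            best_u, best_rev = u, p1 + k * p2
    return best_rev

def erm_single_tariff(samples, H, alpha):
    """Sketch of discretization-based ERM for a length-1 menu: enumerate the
    alpha-grid of (p1, p2) pairs in [0, H]^2 and return the empirically best."""
    grid = np.arange(0.0, H + alpha, alpha)
    best_menu, best_rev = None, -1.0
    for p1, p2 in itertools.product(grid, grid):
        rev = np.mean([buyer_choice_revenue(v, p1, p2) for v in samples])
        if rev > best_rev:
            best_menu, best_rev = (p1, p2), rev
    return best_menu, best_rev
```

For a menu of length $\ell$, the same enumeration would run over the $(H/\alpha)^{2\ell}$ grid points, which is what drives the running time in the statement above.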

Lemma 47. *The running time of distributional learning algorithm for two-part tariffs in (Balcan et al.,*
2020b) is at least
$$\left(c\left(\frac{H}{\varepsilon}\right)^{2}\left(18\ell\log\left(8^{2}K^{2}\ell^{3}\right)+\log\frac{1}{\delta}\right)\right)^{2\ell+1}K^{4\ell+2}(2\ell)^{2+1/18}.$$
Proof. The algorithm involves computing $N^{2\ell}K^{4\ell}$ regions, where $N$ is $c(H/\varepsilon)^{2}\left(18\ell\log(8K^{2}\ell^{3})+\log\frac{1}{\delta}\right)$, and solving a linear program for each region with $2\ell$ variables and $NK^{2}$ constraints, which takes $\tilde{O}\!\left((2\ell)^{2+1/18}NK^{2}\right)$.

**Comparison with previous results.** The sample complexity using the pseudo-dimension method of (Balcan et al., 2018c) is $O\!\left(H^{2}/\varepsilon^{2}\left(\ell\log(K\ell)+\log(1/\delta)\right)\right)$, and the best previously-known running time (Balcan et al., 2022b) is $O\!\left(R^{2}(2\ell)^{2\ell+1}KH^{2}/\varepsilon^{2}\left(\ell\log(K\ell)+\log(1/\delta)\right)\right)$, where $R$, the number of discontinuity regions, is bounded by $O\!\left(\left[H^{2}/\varepsilon^{2}\left(\ell\log(K\ell)+\log(1/\delta)\right)\right]^{3}K\right)$, resulting in the worst-case running time of $O\!\left(\left[H^{2}/\varepsilon^{2}\left(\ell\log(K\ell)+\log(1/\delta)\right)\right]^{2\ell+1}K^{4\ell+2}(2\ell)^{2+1/18}\right)$ due to (Balcan et al., 2020b; 2022b) (see Lemma 47).

## B Missing Proofs of Section 4

## B.1 Online Learning

Similar to the section on two-part tariffs, using the outcome of the discretization summarized in Theorem 27, we show a reduction to a finite number of experts and run standard learning algorithms (weighted majority and Exp3) over the menus in the discretized set.

## B.1.1 Full Information

In the full information setting, the seller sees the revenue generated for all the possible menus. To design an online algorithm in this case, we use a variant of the weighted majority algorithm by (Auer et al., 1995).

The experts in our case are the discretized menus from the previous section, denoted in the algorithm by the set $X = \{m_1, \ldots, m_n\}$. Furthermore, $v_t$ is the valuation of the buyer at time $t$ and $\mathrm{Rev}_k(v_1, \ldots, v_t)$ is the cumulative revenue of menu $m_k$ for the buyers up to time step $t$.

Similar to two-part tariffs, we use Algorithm 2 for the full information case. The only difference is that since the maximum revenue in lotteries is $mH$, as opposed to two-part tariffs where it is $H$, in the algorithm we need to replace $H$ with $mH$.

Proposition 48 ((Auer et al., 1995), Theorem 3.2). For any sequence of valuations $\bar{v}$,

$$\mathrm{Rev}_{\mathrm{WM}}\left({\bar{v}}\right)\geq\left(1-{\frac{\beta}{2}}\right)\mathrm{OPT}_{X}\left({\bar{v}}\right)-{\frac{m H\ln n}{\beta}},$$

where $X = m_1, \ldots, m_n$ is the set of experts (lottery menus), $\mathrm{Rev}_{\mathrm{WM}}(\bar{v})$ is the expected revenue outcome of Algorithm 2 where $H$ is replaced with $mH$, and $\mathrm{OPT}_X(\bar{v})$ is the revenue of the optimal menu in $X$.
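
As a rough illustration of the full-information update used here (Algorithm 2 with $H$ replaced by $mH$), the following is a minimal Hedge-style multiplicative-weights sketch. It is a generic stand-in rather than the exact algorithm of Auer et al. (1995), and `max_gain` would be $mH$ for lottery menus.

```python
import numpy as np

def hedge_update(weights, gains, beta, max_gain):
    """One multiplicative-weights step over menu experts.
    gains[i] is the revenue expert i would have earned this round,
    assumed to lie in [0, max_gain]."""
    return weights * (1.0 + beta) ** (np.asarray(gains) / max_gain)

def hedge_distribution(weights):
    """Probability distribution proportional to the current weights."""
    return weights / weights.sum()
```

Each round, a full-information learner would evaluate every discretized menu's revenue on the arriving buyer, call `hedge_update`, and sample the next menu from `hedge_distribution`.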

Theorem 28. In the full information case for length-$\ell$ menus of lotteries, running Algorithm 2 over the discretized set of menus specified in Theorem 27 with $\alpha = T^{-1}$, $\beta = T^{-0.5}$, $K = T^{0.5}$, and $\delta = T^{-0.5}$ has regret $\tilde{O}(m^2H\ell\sqrt{T})$.

Proof. Let $n$ be the number of menus resulting from Algorithm 5. Let $v_i$ be the valuation of the buyer at step $i$, and $\bar{v}$ be the vector of valuations of all buyers in rounds 1 through $T$. We denote by $\mathrm{Rev}_{M'}(\cdot)$ the maximum revenue obtained in the set of menus resulting from Algorithm 5, by $\mathrm{OPT}(\cdot)$ the optimal revenue, and by $\mathrm{Rev}_{\mathrm{WM}}(\cdot)$ the revenue obtained from the weighted majority algorithm discussed above on the set of outcome menus of Algorithm 5. We have

$$n=\left(1/\alpha^{\ell m+\ell}\right)\left(\ln\left(H m/\alpha\right)\right)^{\ell m},$$
$$\mathrm{Rev}_{\mathrm{WM}}\left(\bar{v}\right)\geq\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)-\frac{\beta}{2}\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)-\frac{m H\ln n}{\beta},$$
$\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)=\sum_{i=1}^{T}\mathrm{Rev}_{M^{\prime}}\left(\mathbf{v}_{i}\right),$  $\mathrm{Rev}_{M^{\prime}}\left(\mathbf{v}_{i}\right)\geq\mathrm{OPT}\left(\mathbf{v}_{i}\right)\left(1-\delta\right)\left(1-\alpha\right)^{K}-\left(2K+1\right)\alpha-mH(1-\delta)^{K};$
where the first expression is a result of Algorithm 5, the second expression uses Proposition 48, the third expands the revenue over T terms, and the last uses Theorem 27. Rearranging the terms, we have:

$$\begin{array}{l}
\mathrm{Rev}_{M^{\prime}}\left(\mathbf{v}_{i}\right)\geq\mathrm{OPT}\left(\mathbf{v}_{i}\right)(1-\delta)(1-\alpha)^{K}-(2K+1)\alpha-mH(1-\delta)^{K}\\
\qquad\geq\mathrm{OPT}\left(\mathbf{v}_{i}\right)-\mathrm{OPT}\left(\mathbf{v}_{i}\right)\left(1-(1-\delta)(1-\alpha)^{K}\right)-(2K+1)\alpha-mH(1-\delta)^{K}\\
\qquad\geq\mathrm{OPT}\left(\mathbf{v}_{i}\right)-mH\left(1-(1-\delta)(1-\alpha)^{K}\right)-(2K+1)\alpha-mH(1-\delta)^{K}\\
\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)\geq\mathrm{OPT}\left(\bar{v}\right)-mHT\left(1-(1-\delta)(1-\alpha)^{K}\right)-T(2K+1)\alpha-mHT(1-\delta)^{K}\\
\mathrm{Rev}_{\mathrm{WM}}\left(\bar{v}\right)\geq\mathrm{OPT}\left(\bar{v}\right)-mHT\left(1-(1-\delta)(1-\alpha)^{K}\right)-T(2K+1)\alpha-mHT(1-\delta)^{K}-\frac{\beta mHT}{2}-\frac{mH\ln n}{\beta}
\end{array}$$

We set the variables $K$, $\alpha$, $\delta$, and $\beta$ as functions of $T$ to minimize the exponent of $T$ in the regret. The regret is upper bounded by

$$\begin{array}{l}
mHT\left(1-(1-\delta)(1-\alpha)^{K}\right)+T(2K+1)\alpha+mHT(1-\delta)^{K}+\frac{\beta mHT}{2}+\frac{mH\ln n}{\beta}\\
\quad\leq mHT\left(1-(1-\delta)(1-\alpha)^{K}\right)+T(2K+1)\alpha+mHT(1-\delta)^{K}+\frac{\beta mHT}{2}+\frac{mH\,O\!\left(\ell m\ln\left(Hm/\alpha\right)\right)}{\beta},
\end{array}$$

where the inequality follows by upper bounding $n$. By setting $\alpha = T^{-1}$, $\beta = T^{-0.5}$, $K = T^{0.5}$, and $\delta = T^{-0.5}$, the regret is bounded by $\tilde{O}(m^2H\ell\sqrt{T})$.

Theorem 29. In the full information case for arbitrary-length menus of lotteries, running Algorithm 2 on the menus specified in Theorem 27 with $\alpha = T^{-1/(2m+2)}$, $\beta = T^{-1/(m+1)}$, $K = T^{1/(m+1)}$, and $\delta = T^{-1/(m+1)}$ has regret $\tilde{O}\!\left(mHT^{1-1/(2m+4)}\ln^{m}(mHT)\right)$.

Proof. The proof follows the same argument as Theorem 28. The only difference in the parameters is $n$, the number of experts, which in this case is $n = 2^{(1/\alpha^{m+1})(\ln(Hm/\alpha))^{m}}$. We set the variables $K$, $\alpha$, $\delta$, and $\beta$ as functions of $T$ to minimize the exponent of $T$ in the regret. The regret is upper bounded by the formula below after substituting $n$:

$$\begin{array}{l}{{m H T\left(1-(1-\delta)(1-\alpha)^{K}\right)+T(2K+1)\alpha+m H T(1-\delta)^{K}}}\\ {{+\,\frac{\beta m H T}{2}+\frac{m H(1/\alpha^{m+1})(\ln{(H m/\alpha)})^{m}l n2}{\beta}}}\end{array}$$

By setting $\alpha = T^{-1/(2m+2)}$, $\beta = T^{-1/(m+1)}$, $K = T^{1/(m+1)}$, and $\delta = T^{-1/(m+1)}$, the regret is bounded by $\tilde{O}\!\left(mHT^{1-1/(2m+4)}\ln^{m}(mHT)\right)$.

## B.1.2 Bandit Setting

In the partial information setting, the seller does not see the outcome for all the possible menus and only observes the outcome of the menu used (the lottery chosen by the buyer). Similar to the two-part tariffs results, to design an online algorithm in this case, we use a version of the Exp3 algorithm in (Auer et al., 1995).

This variant of the Exp3 algorithm contains the weighted majority algorithm (Algorithm 2) as a subroutine. At each step, we mix the probability distribution $\pi$ used by the weighted majority algorithm with the uniform distribution to obtain a modified probability distribution, which is then used to select a menu from our discretized set. Following the lottery chosen by buyer $t$, we use the price paid (the gain from the chosen menu) to formulate a simulated gain vector, which is then used to update the weights maintained by the weighted majority algorithm.
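
The following is a minimal sketch of the mixing and simulated-gain step just described; `revenue_of_chosen` is a placeholder callable, and the exact weight update of Algorithm 3 is omitted.

```python
import numpy as np

def exp3_step(weights, gamma, revenue_of_chosen, rng=None):
    """One bandit round: mix the weight-based distribution with the uniform
    one, play a menu, and form an importance-weighted simulated gain vector."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(weights)
    pi = np.asarray(weights, dtype=float)
    pi = pi / pi.sum()
    pi_mixed = (1.0 - gamma) * pi + gamma / n          # mixed distribution
    chosen = int(rng.choice(n, p=pi_mixed))
    gain = revenue_of_chosen(chosen)                    # observed price paid
    simulated = np.zeros(n)
    simulated[chosen] = gain / pi_mixed[chosen]         # unbiased gain estimate
    return chosen, simulated, pi_mixed
```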

Similar to two-part tariffs, we use Algorithm 3 for the bandit case. The only difference is that since the maximum revenue in lotteries is $mH$, as opposed to two-part tariffs where it is $H$, in the algorithm we need to replace $H$ with $mH$.

Proposition 49 ((Auer et al., 1995), Theorem 4.1). For any sequence of valuations $\bar{v}$,

$$\mathrm{Rev}_{\mathrm{Exp3}}\left({\bar{v}}\right)\geq\mathrm{OPT}_{X}-\left(\gamma+{\frac{\beta}{2}}\right)\mathrm{OPT}_{X}-{\frac{m H n\ln n}{\beta\gamma}},$$

where $X = m_1, \ldots, m_n$ is the set of experts (lottery menus), $\mathrm{Rev}_{\mathrm{Exp3}}(\bar{v})$ is the expected revenue outcome of Algorithm 3 where $H$ is replaced with $mH$, and $\mathrm{OPT}_X(\bar{v})$ is the revenue of the optimal menu in $X$.

Theorem 30. In the partial information case for length-$\ell$ menus of lotteries, running Algorithm 3 over the discretized set of menus in Theorem 27 with $\alpha = T^{-1/(\ell m+2)}$, $\beta = \gamma = T^{-1/(4\ell m+8)}$, $K = T^{1/(2\ell m+4)}$, and $\delta = T^{-1/(2\ell m+4)}$ has regret $\tilde{O}\!\left(m^{2}H\ell\,T^{1-1/(2\ell m+4)}\ln^{\ell m+1}(mHT)\right)$.

Proof. The proof follows the same logic as that of Theorem 28. We denote by $\mathrm{Rev}_{\mathrm{Exp3}}(\cdot)$ the revenue obtained from the Exp3 algorithm described above on the set of outcome menus of Algorithm 5. Similar to the proof of Theorem 28, in what follows $n$ denotes the number of menus resulting from the procedure in Algorithm 5, $v_i$ is the valuation of the buyer at step $i$, and $\bar{v}$ is the vector of valuations of all buyers in rounds 1 through $T$. $\mathrm{Rev}_{M'}(\cdot)$ is the maximum revenue obtained in the set of menus resulting from Algorithm 5 and $\mathrm{OPT}(\cdot)$ is the optimal revenue.

$$\begin{array}{c}
n=\left(1/\alpha^{\ell m+\ell}\right)\left(\ln\left(H m/\alpha\right)\right)^{\ell m},\\
\mathrm{Rev}_{\mathrm{Exp3}}\left(\bar{v}\right)\geq\mathrm{Rev}_{M^{\prime}}-\left(\gamma+\frac{\beta}{2}\right)\mathrm{Rev}_{M^{\prime}}-\frac{m H n\ln n}{\beta\gamma},\\
\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)=\sum_{i=1}^{T}\mathrm{Rev}_{M^{\prime}}\left(v_{i}\right),\\
\mathrm{Rev}_{M^{\prime}}\left(v_{i}\right)\geq\mathrm{OPT}\left(v_{i}\right)(1-\delta)(1-\alpha)^{K}-(2K+1)\alpha-m H(1-\delta)^{K};
\end{array}$$

where the first expression is a result of Algorithm 5, the second expression uses Proposition 49, the third expands the revenue over T terms, and the last uses Theorem 27. Rearranging the terms, we have:

$$\mathrm{Rev}_{\mathrm{Exp3}}\left(\bar{v}\right)\geq\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)-\left(\gamma+\frac{\beta}{2}\right)\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)-\frac{mHn\ln n}{\beta\gamma}$$ $$\geq\mathrm{Rev}_{M^{\prime}}\left(\bar{v}\right)-\left(\gamma+\frac{\beta}{2}\right)mHT-\frac{mHn\ln n}{\beta\gamma}$$ $$\geq\mathrm{OPT}\left(\bar{v}\right)-mHT\left(1-(1-\delta)(1-\alpha)^{K}\right)-T(2K+1)\alpha-mHT(1-\delta)^{K}$$ $$\qquad-\left(\gamma+\frac{\beta}{2}\right)mHT-\frac{mHn\ln n}{\beta\gamma}$$

We set the variables $K$, $\alpha$, $\delta$, $\beta$, and $\gamma$ as functions of $T$ to minimize the exponent of $T$ in the regret. After substituting $n$, the regret is upper bounded by

$$\begin{array}{c}
mHT\left(1-(1-\delta)(1-\alpha)^{K}\right)+T(2K+1)\alpha+mHT(1-\delta)^{K}+\left(\gamma+\frac{\beta}{2}\right)mHT\\
+\frac{2\ell m^{2}H\left(1/\alpha^{\ell m+\ell}\right)\left(\ln\left(Hm/\alpha\right)\right)^{\ell m+1}}{\beta\gamma}
\end{array}$$

By setting $\alpha = T^{-1/(\ell m+2)}$, $\beta = \gamma = T^{-1/(4\ell m+8)}$, $K = T^{1/(2\ell m+4)}$, and $\delta = T^{-1/(2\ell m+4)}$, the regret is bounded by $\tilde{O}\!\left(m^{2}H\ell\,T^{1-1/(2\ell m+4)}\ln^{\ell m+1}(mHT)\right)$.

## B.2 Limited Buyer Types

The ideas for designing an algorithm specific to limited buyer types for menus of lotteries are similar to those for menus of two-part tariffs. There are a few changes, which we overview here.

One of the main differences is the set of menu options $\mathcal{O}$. Unlike two-part tariffs, where given a menu the buyer selects the tariff and number of units that maximize the buyer's utility, for menus of lotteries the options are exactly aligned with the menu entries, and $|\mathcal{O}| = \ell+1$ for length-$\ell$ menus. The mechanism designer's utility (revenue) given menu $\rho$ is equal to $p^{(j)}(\rho)$ if the buyer selects entry $j$. The buyer selects entry $j$ if this entry results in higher utility than any other entry in menu $\rho$. These inequalities identify regions $P_\mu$, where the buyer's utility-maximizing option is aligned with $\mu$.
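
As a small illustration of the buyer's choice rule just described (and of the inequalities defining $P_\mu$ below), the following sketch computes the entry a single buyer type selects. The encoding of valuations and lotteries as arrays is an assumption for the example, and entry 0 is taken to be the no-purchase option.

```python
import numpy as np

def chosen_lottery(values, phi, prices):
    """Index of the utility-maximizing lottery for one buyer type.

    values : array of item values v(e_1), ..., v(e_m)
    phi    : (ell x m) array; phi[j][k] is the allocation probability of
             item k+1 in lottery j+1 of the menu
    prices : length-ell array of lottery prices p^{(j)}
    Option 0 (buy nothing, utility 0) is assumed always available."""
    utilities = np.asarray(phi) @ np.asarray(values) - np.asarray(prices)
    j_best = int(np.argmax(utilities))
    return j_best + 1 if utilities[j_best] > 0 else 0   # 0 = outside option
```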

Definition 50 (menu option for menus of lotteries, $\mathcal{O}$). An index $j$ with $0 \leq j \leq \ell$, indicating a lottery index in the menu, is a menu option. We denote the set of all menu options by $\mathcal{O}$. This set identifies all potential actions of a buyer when presented with a menu.

Definition 51 (mapping $\mu$, feasible mappings, $P_\mu$). A mapping $\mu$ is a function from buyer types $v_1, \ldots, v_V$ to menu options $j = 0, 1, \ldots, \ell$, where $j$ is the lottery index assigned to the buyer type. Mapping $\mu$ is feasible if there is a menu corresponding to the mapping, i.e., a menu such that, if presented to the buyers, each buyer selects their corresponding option in the mapping as their utility-maximizing option. $P_\mu$ denotes the region of the parameter space corresponding to $\mu$, i.e., the set of menus inducing mapping $\mu$.

Lemma 52. For each feasible mapping µ, as defined in Definition 51, Pµ is a convex polytope with hyperplane boundaries.

Proof. For a fixed buyer type $i$ and option $j = 0, \ldots, \ell$, let $P^{(i)}_j$ be the set of all parameter vectors $\rho$, corresponding to length-$\ell$ menus, for which buyer type $i$ selects option $j$. The buyer selects option $j$ for menu $\rho$ if this option produces more utility for the buyer than any other option. Formally,

$$\sum_{k=1}^{m}v(e_{k})\phi^{(j)}[k](\rho)-p^{(j)}(\rho)\geq\sum_{k=1}^{m}v(e_{k})\phi^{(j^{\prime})}[k](\rho)-p^{(j^{\prime})}(\rho);\quad\forall j^{\prime}.$$

The above inequalities identify a convex polytope of parameter vectors (menus $\rho$) with hyperplane boundaries. $P_\mu$ is the intersection of $P^{(i)}_{\mu(i)}$ for $i = 1, \ldots, V$. Therefore, $P_\mu$ is also a convex region with hyperplane boundaries.

Lemma 53. For each feasible mapping $\mu$ and any sequence of buyer valuations $b$, the cumulative utility $\sum_i u(b_i, \rho)$ is linear in $P_\mu$.

Proof. We show that for any buyer valuation $v_i$ in the sequence, $u(v_i, \rho)$ is linear in the region; proving this claim is sufficient to conclude the statement. Let $j = \mu(v_i)$, i.e., $j$ is the lottery index that buyer valuation $v_i$ selects under $\mu$. Therefore, the utility for this buyer for a menu $\rho \in P_\mu$ is $\sum_{k=1}^{m} v(e_k)\phi^{(j)}[k](\rho) - p^{(j)}(\rho)$. Note that $\phi^{(j)}[k](\rho)$ is a coordinate of $\rho$ and therefore has a linear dependence on $\rho$. Since the option that each buyer valuation selects is fixed inside $P_\mu$, the utility is also linear.

Lemma 54. The number of extreme points for menus of lotteries, $|\mathcal{E}|$, is at most $(V\ell^2)^{m(\ell+1)}$.

Proof. Length-$\ell$ menus of lotteries occupy an $\ell(m+1)$-dimensional parameter space. In each $d$-dimensional space, an extreme point is the intersection of $d$ linearly independent hyperplanes. The total number of hyperplanes defining the regions is at most $V\ell^2$, since for each buyer type we compare the utilities of two menu entries. Out of these hyperplanes, we need $\ell(m+1)$ of them to intersect to form an extreme point. Therefore, the number of extreme points is at most $\binom{V\ell^2}{\ell(m+1)}$, implying the statement.

The following lemma bounds the loss in utility when the set of menus is limited to the extreme points $\mathcal{E}$. The proof is similar to Balcan et al. (2015); however, the loss depends on the problem-specific utility functions.

Lemma 55. Let $\mathcal{E}$ be as defined in Definition 21. Then for any sequence of buyer valuations $b = b_1, \ldots, b_T$ and $\rho^*$ the optimal menu in hindsight:

$$\max_{\rho\in\mathcal{E}}\sum_{t=1}^{T}u(\mathbf{b}_{t},\rho)\geq\sum_{t=1}^{T}u(\mathbf{b}_{t},\rho^{*})-\varepsilon T.$$

Proof. The proof is similar to that of Lemma 23. The only difference is in step (vi), which computes the loss in revenue between menus that are at $L_1$ distance $\varepsilon$. In menus of lotteries, this distance implies a price difference of at most $\varepsilon$ in any of the lotteries in the menu, and therefore causes a total loss of at most $\varepsilon$ per time step.

## Full Information Setting

Theorem 31. In the full information case for length-$\ell$ menus of lotteries, when there are $V$ types of buyers, there is an algorithm with regret bound of $O\!\left(m^{2}H\ell\sqrt{T}\ln(V\ell)\right)$.

Proof. The proof follows the same logic as that of Theorem 24. We run the weighted majority algorithm (Algorithm 2, where $H$ is replaced by $mH$) with parameter $\beta = 1/\sqrt{T}$ on the set $\mathcal{E}$ as the set of menus (experts). The proof directly follows from Lemma 55 and Proposition 48. Let $n = |\mathcal{E}|$, let $b_i$ be the valuation of the buyer at step $i$, and let $\bar{b}$ be the vector of valuations of all buyers in rounds 1 through $T$. We denote by $\mathrm{Rev}_{\mathcal{E}}(\cdot)$ the maximum revenue obtained in the set $\mathcal{E}$, by $\mathrm{OPT}(\cdot)$ the optimal revenue, and by $\mathrm{Rev}_{\mathrm{WM}}(\cdot)$ the revenue obtained from Algorithm 2 on the set of experts $X = \mathcal{E}$. Then,

$$n\leq(V\ell^{2})^{m(\ell+1)},$$  $$\mathrm{Rev}_{\mathrm{WM}}\left(\bar{b}\right)\geq\mathrm{Rev}(\mathcal{E})\left(\bar{b}\right)-\frac{\beta}{2}\mathrm{Rev}(\mathcal{E})\left(\bar{b}\right)-\frac{mH\ln n}{\beta},$$
$$\operatorname{Rev}_{\mathcal{E}}\left({\bar{b}}\right)=\sum_{i=1}^{T}\operatorname{Rev}_{\mathcal{E}}\left(\mathbf{b}_{i}\right),$$ $$\operatorname{Rev}_{\mathcal{E}}\left(\mathbf{b}_{i}\right)\geq\operatorname{OPT}\left(\mathbf{b}_{i}\right)-\varepsilon;$$

where the first expression uses the size of E in Lemma 54, the second expression uses Proposition 48, the third expands the revenue over T terms, and the last uses Lemma 55. Rearranging the terms, we have:

$$\begin{array}{l}{{\mathrm{Rev}_{\mathcal{E}}\left(\mathbf{b}_{i}\right)\geq\mathrm{OPT}\left(\mathbf{b}_{i}\right)-\varepsilon}}\\ {{\mathrm{Rev}_{\mathcal{E}}\left(\bar{b}\right)\geq\mathrm{OPT}\left(\bar{b}\right)-\varepsilon T}}\\ {{\mathrm{Rev}_{\mathrm{WM}}\left(\bar{b}\right)\geq\mathrm{OPT}\left(\bar{b}\right)-\varepsilon T-\frac{\beta m H T}{2}-\frac{m H\ln n}{\beta}}}\\ {{\mathrm{Rev}_{\mathrm{WM}}\left(\bar{b}\right)\geq\mathrm{OPT}\left(\bar{b}\right)-\varepsilon T-\frac{\beta m H T}{2}-\frac{m^{2}(\ell+1)H\left(\ln\left(V\ell\right)\right)}{\beta}}}\end{array}$$

We set the variables $\varepsilon$ and $\beta$ to minimize the exponent of $T$ in the regret. By setting $\beta = 1/\sqrt{T}$ and $\varepsilon = 1/\sqrt{T}$, the regret is $O\!\left(m^{2}H\ell\sqrt{T}\ln(V\ell)\right)$.

**Partial Information (Bandit) Setting** In the partial information setting, the change in the menu options also affects the definition of the set $\mathcal{I}$, which consists of indicator vectors over the buyer types that select the same menu entry $j$ in a mapping $\mu$. The changes that need to be made in Algorithm 8 so that it works for menus of lotteries include changing $|\mathcal{O}|$ to $\ell+1$, using option (menu entry) $j$ instead of $(j,k)$, and changing the utility from $\mathbb{I}\{k\geq 1\}\left(p_{1}^{(j)}(\rho)+k\,p_{2}^{(j)}(\rho)\right)$ to $p^{(j)}(\rho)$. After making these changes, we can run the modified algorithm to achieve a bounded regret.

Lemma 56. For any menu $\rho$, $\mathbb{E}[\hat{u}_\tau(\rho)] = u_\tau(\rho)$ and $\hat{u}_\tau(\rho) \in [-mH(\ell+1)V,\ mH(\ell+1)V]$.

Proof. The proof is similar to that of Lemma 44. The equality in expectation follows directly from the definitions of $\hat{u}_\tau(\rho)$ and $u_\tau(\rho)$ and Lemma 43. Now, we prove the range of the estimator. Since $S$ is a barycentric spanner, for any $I \in \mathcal{I}$, $\lambda_i(I) \in [-1, 1]$. Also, $\hat{f}_\tau(\cdot)$ belongs to $\{0, 1\}$. Additionally, the utility obtained when a buyer selects an option in the menu, e.g., $p^{(j)}(\rho)$, is always in $[0, mH]$. Therefore, by the formula of the estimator, it is bounded in absolute value by $mH$ times the number of options times the number of buyer types.

Theorem 32. In the partial information (bandit) case for length-$\ell$ menus of lotteries, when there are $V$ different types of buyers, there is an algorithm with regret bound of $O\!\left(T^{2/3}(\ell m)^{4/3}(HV)^{1/3}\log^{1/3}(V\ell)\right)$.

Proof. The proof follows the same logic as that of Theorem 25. In Lemma 45, $|S|$ is the number of dimensions (the size of the barycentric spanner set), $\kappa$ is the maximum revenue times the number of buyer types times the number of their options (entries in the menu), and $|\mathcal{M}|$ is the number of extreme points. In our case, $|S| = \ell(m+1)$, $\kappa = mHV(\ell+1)$, and $|\mathcal{M}| \leq (V\ell^2)^{m(\ell+1)}$. By Lemma 56, the expected value of the estimated utility is equal to the exact value of the utility, with range $[-mH(\ell+1)V,\ mH(\ell+1)V]$.

Using Lemma 45, the regret for menus of lotteries is bounded by

$$O(T^{2/3}(\ell m)^{4/3}(H V)^{1/3}\log^{1/3}(V\ell)).$$

## B.3 Distributional Learning

Theorem 34. For length-$\ell$ menus of lotteries, there is a discretization-based distributional learning algorithm with sample complexity $\tilde{O}\!\left(m^{2}H^{2}/\varepsilon^{2}\left(\ell m+\ln(2/\delta)\right)\right)$ and running time $\tilde{O}\!\left(\left(2m^{2}H^{2}/\varepsilon^{2}\right)^{\ell m+\ell+1}\ell\left(\ell m+\ln(2/\delta)\right)\ln^{\ell m}\left(mH/\varepsilon\,\ln(mH/\varepsilon)\right)\right)$.

Proof. We need to find the number of samples such that with probability 1 − δ, the difference between the expected revenue of our algorithm and the optimal revenue is at most ε. Note that since our algorithm uses discretization of possible menus, we face two types of errors: the discretization error, and the usual empirical error in a PAC learning setting. We find the sample complexity and discretization parameters such that the total error is bounded by ε. The possible number of menus after discretization using Algorithm 5 with parameter α is computed by the following formula.

$$|\mathcal{H}|=(1/\alpha^{\ell m+\ell})\left(\ln\left(H m/\alpha\right)\right)^{\ell m}.$$

Using uniform convergence in the PAC learning setting, the sample complexity for empirical error $\varepsilon'$ is as follows:

$$|S|\geq\frac{m^{2}H^{2}}{2\varepsilon^{\prime2}}\left(\ln|\mathcal{H}|+\ln\left(2/\delta\right)\right).$$

Replacing $\ln|\mathcal{H}|$, we have

$$|S|\geq\frac{m^{2}H^{2}}{2\varepsilon^{\prime2}}\left(\ell m\left(\ln(1/\alpha)+\ln\ln\left(m H\right)\right)+\ln\left(2/\delta\right)\right).$$

Also, the revenue loss compared to the optimum for an arbitrary buyer $i$ with valuation $v_i$, when using Algorithm 5 with parameters $\alpha$, $K$, and $d$ (we use $d$ instead of $\delta$ in Algorithm 5 and reserve $\delta$ for $(\varepsilon, \delta)$-learning), is given by the following formula:

$$\mathrm{Rev}_{M^{\prime}}\left(\mathbf{v}_{i}\right)\geq\mathrm{OPT}(\mathbf{v}_{i})(1-d)(1-\alpha)^{K}-(2K+1)\alpha-m H(1-d)^{K}.$$

The total error (from discretization and empirical error), when the empirical error is set to $\varepsilon'$, is

$$mH\left[1-(1-d)(1-\alpha)^{K}\right]+(2K+1)\alpha+mH(1-d)^{K}+\varepsilon^{\prime}.$$

By setting $d = \varepsilon'/(2mH)$, $K = 2mH/\varepsilon'\,\ln(mH/\varepsilon')$, and $\alpha = \varepsilon'/(2m^{2}H^{2}\ln(mH/\varepsilon'))$, the total error is less than $4\varepsilon'$.

Replacing these parameters and substituting $\varepsilon'$ with $\varepsilon/4$ to satisfy total error $\varepsilon$, we have the following sample complexity:

$$\begin{split}|S|&\geq\frac{m^{2}H^{2}}{2\varepsilon^{2}}\left(\ell m\left(\ln(1/\alpha)+\ln\ln\left(mH\right)\right)+\ln\left(2/\delta\right)\right)\\ &=\tilde{O}\left(\frac{m^{2}H^{2}}{\varepsilon^{2}}(\ell m+\ln\left(2/\delta\right))\right)\end{split}$$

Also, replacing the parameters we have:

$$|{\mathcal{H}}|=O\left(\left({\frac{2m^{2}H^{2}}{\varepsilon^{2}}}\right)^{\ell m+\ell}\ln^{\ell m}\left(m H/\varepsilon\ln\left(m H/\varepsilon\right)\right)\right)$$

The computational complexity of finding the empirical optimal menu for |S| buyers and menu of size ℓ is:

$$|S|\,\ell\,|\mathcal{H}|=\tilde{O}\left(\left(\frac{2m^{2}H^{2}}{\varepsilon^{2}}\right)^{\ell m+\ell+1}\ell(\ell m+\ln{(2/\delta)})\ln^{\ell m}{\left(mH/\varepsilon\,\ln{(mH/\varepsilon)}\right)}\right).$$

This implies the computational complexity of the algorithm.

Theorem 57. For arbitrary-length menus of lotteries, there is a discretization-based distributional learning algorithm with sample complexity

$$O\left(\frac{m^{2}H^{2}}{\varepsilon^{2}}\left((32m^{2}H^{2}/\varepsilon^{2})^{m+1}\ln^{m}\left(m H/\varepsilon\ln(m H/\varepsilon)\right)\ln^{m+1}\left(m H/\varepsilon\right)+\ln\left(1/\delta\right)\right)\right),$$

and running time

$$O\left(2^{\left(32m^{2}H^{2}/\varepsilon^{2}\right)^{m+1}\ln^{m}\left(m H/\varepsilon\ln(m H/\varepsilon)\right)\ln^{m+1}\left(m H/\varepsilon\right)}\right).$$

Proof. This proof follows the same line as the proof of Theorem 34. We need to find the number of samples such that with probability 1 − δ, the difference between the expected revenue of our algorithm and the optimal revenue is at most ε. Note that since our algorithm uses discretization of possible menus, we face two types of errors: the discretization error, and the usual empirical error in a PAC learning setting. We find the sample complexity and discretization parameters such that the total error is bounded by ε. The possible number of menus after discretization using Algorithm 5 with parameter α is computed by the following formula.

$$|{\mathcal{H}}|=O\left(2^{(1/\alpha^{m+1})(\ln{(H m/\alpha)})^{m}}\right)$$

Using uniform convergence in the PAC learning setting, the sample complexity for empirical error $\varepsilon'$ is as follows:

$$|S|\geq{\frac{m^{2}H^{2}}{2\varepsilon^{\prime2}}}\left(\ln|{\mathcal{H}}|+\ln\left(2/\delta\right)\right)$$
Replacing $\ln|\mathcal{H}|$, we have
$$|S|\geq{\frac{m^{2}H^{2}}{2\varepsilon^{\prime2}}}\left({\frac{\ln^{m}\left(H m/\alpha\right)}{\alpha^{m+1}}}+\ln\left(2/\delta\right)\right)$$

Also, the revenue loss compared to the optimum for an arbitrary buyer $i$ with valuation $v_i$, when using Algorithm 5 with parameters $\alpha$, $K$, and $d$ (we use $d$ instead of $\delta$ in Algorithm 5 and reserve $\delta$ for $(\varepsilon, \delta)$-learning), is given by the following formula:

$$\mathrm{Rev}_{M^{\prime}}\left(\mathbf{v}_{i}\right)\geq\mathrm{OPT}(\mathbf{v}_{i})(1-d)(1-\alpha)^{K}-(2K+1)\alpha-m H(1-d)^{K}.$$

The total error (from discretization and empirical error), when the empirical error is set to $\varepsilon'$, is

$$mH\left[1-(1-d)(1-\alpha)^{K}\right]+(2K+1)\alpha+mH(1-d)^{K}+\varepsilon^{\prime}.$$

By setting $d = \varepsilon'/(2mH)$, $K = 2mH/\varepsilon'\,\ln(mH/\varepsilon')$, and $\alpha = \varepsilon'/(2m^{2}H^{2}\ln(mH/\varepsilon'))$, the total error is less than $4\varepsilon'$.

Replacing these parameters and substituting $\varepsilon'$ with $\varepsilon/4$ to satisfy total error $\varepsilon$, we have the following sample complexity:

$$\begin{array}{l}{{|S|\geq\frac{m^{2}H^{2}}{2\varepsilon^{2}}\left(\frac{\ln^{m}\left(m H/\alpha\right)}{\alpha^{m+1}}+\ln\left(2/\delta\right)\right)}}\\ {{=O\left(\frac{m^{2}H^{2}}{\varepsilon^{2}}\left(\left(32m^{2}H^{2}/\varepsilon^{2}\right)^{m+1}\ln^{m}\left(m H/\varepsilon\ln(m H/\varepsilon)\right)\ln^{m+1}\left(m H/\varepsilon\right)+\ln\left(1/\delta\right)\right)\right)}}\end{array}$$

Also, replacing the parameters we have:

$$\mathcal{H}|=O\left(2^{(1/\alpha^{m+1})\ln^{m}\left(Hm/\alpha\right)}\right)$$ $$=O\left(2^{(32m^{2}H^{2}/\varepsilon^{2})^{m+1}\ln^{m}\left(mH/\varepsilon\ln(mH/\varepsilon)\right)\ln^{m+1}\left(mH/\varepsilon\right)}\right).$$

The computational complexity of finding the empirical optimal menu for $|S|$ buyers is the number of potential menus $|\mathcal{H}|$ times $|S|$ times the maximum size of a menu, which is $O(\ln|\mathcal{H}|)$.

Lemma 58. The sample complexity of length ℓ menus of lotteries using the techniques in (Balcan et al., 2018b) is bounded by

$$c\left(\frac{H}{\varepsilon}\right)^{2}\left(9\ell(m+1)\log\left(4\ell(m+1)\left((\ell+1)^{2}+m\ell\right)\right)+\log\frac{1}{\delta}\right).$$

Proof. Balcan et al. (2018c) introduce *delineability* as a condition to upper bound the pseudo-dimension and, therefore, the sample complexity. They show the class of lotteries is $(\ell(m+1),\ (\ell+1)^{2}+m\ell)$-delineable. Also, if $M$ is a mechanism class that is $(d, t)$-delineable, then the pseudo-dimension of $M$ is at most $9d\log(4dt)$. Therefore, the pseudo-dimension for menus of lotteries is bounded by $9\ell(m+1)\log\left(4\ell(m+1)\left((\ell+1)^{2}+m\ell\right)\right)$. Furthermore, the sample complexity is at most $c(H/\varepsilon)^{2}\left(\mathrm{Pdim}(\mathcal{H})+\log(1/\delta)\right)$, which, by replacing the pseudo-dimension for this class of mechanisms, completes the proof.
2(Pdim(H) + log (1/δ)), which by replacing pseudo dimension for this class of mechanism completes the proof.