arxiv:2301.11260

Maximum Optimality Margin: A Unified Approach for Contextual Linear Programming and Inverse Linear Programming

Published on Jan 26, 2023
Authors:

Abstract

In this paper, we study the predict-then-optimize problem, where the output of a machine learning prediction task is used as the input of a downstream optimization problem, say, the objective coefficient vector of a linear program. The problem is also known as predictive analytics or contextual linear programming. Existing approaches largely suffer from either (i) optimization intractability (a non-convex objective function) or statistical inefficiency (a suboptimal generalization bound), or (ii) strong conditions such as the absence of constraints or loss calibration. We develop a new approach to the problem, called maximum optimality margin, which designs the machine learning loss function from the optimality condition of the downstream optimization. The max-margin formulation enjoys both computational efficiency and good theoretical properties for the learning procedure. More importantly, our new approach only requires observations of the optimal solution in the training data rather than the objective function, which makes it a new and natural approach to the inverse linear programming problem under both contextual and context-free settings. We also analyze the proposed method under both offline and online settings, and demonstrate its performance using numerical experiments.
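The abstract's key ingredient, a loss built from the optimality condition of the downstream linear program, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration only, assuming a standard-form LP min c^T x subject to Ax = b, x >= 0, and using the textbook reduced-cost optimality condition with a hinge-style margin penalty; the function name optimality_margin_loss, the margin parameter, and the toy data are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def optimality_margin_loss(c_pred, A, basis, margin=1.0):
    # LP: min c^T x  s.t.  A x = b, x >= 0.
    # For a basic optimal solution with basis index set `basis`, optimality
    # requires non-negative reduced costs on the non-basic variables:
    #     r_N = c_N - A_N^T y >= 0,   where y solves A_B^T y = c_B.
    # A max-margin surrogate penalizes reduced costs below `margin`
    # (an illustrative hinge penalty, not necessarily the paper's exact loss).
    m, n = A.shape
    nonbasis = np.setdiff1d(np.arange(n), basis)
    A_B, A_N = A[:, basis], A[:, nonbasis]
    c_B, c_N = c_pred[basis], c_pred[nonbasis]
    y = np.linalg.solve(A_B.T, c_B)                 # dual-style multipliers
    reduced_costs = c_N - A_N.T @ y
    return np.maximum(0.0, margin - reduced_costs).sum()

# Toy example: a 2x4 equality-constrained LP whose observed optimal basis is {0, 1}.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
basis = np.array([0, 1])
c_pred = np.array([1.0, 1.0, 2.0, 0.5])            # a predicted objective vector
print(optimality_margin_loss(c_pred, A, basis))    # 1.5: column 3's reduced cost violates the margin

In a contextual setting, c_pred would come from a model of the context features; since the penalty above is piecewise linear in c_pred, minimizing it over a linear predictor is a convex problem, which is consistent with the computational-efficiency claim in the abstract.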

