LearningToOptimize
1. Introduction
LearningToOptimize is an organization dedicated to learning to optimize (L2O) — an emerging paradigm where machine learning models learn to solve optimization problems efficiently. This approach is also known as using optimization proxies or amortized optimization. Our mission is to serve as a hub for sharing open datasets, pre-trained models, and tools that accelerate research and practical applications of L2O methods.
This organization is closely linked to the LearningToOptimize.jl Julia package, which provides the core functionality for fitting ML-based surrogate models (proxies) to complex optimization problems. Here you will find:
- Datasets: Collections of problem instances and their optimal solutions, useful for training and benchmarking.
- Trained Models: Ready-to-use optimization proxies for various tasks, enabling rapid inference on new problem instances.
- Benchmarking Tools: Utilities for comparing learned proxies against traditional solvers in terms of speed, feasibility, and performance.
2. What are Optimization Proxies?
High-Level Explanation
Optimization proxies are machine learning models that approximate or replace traditional optimization solvers. By observing many instances of a problem (and possibly their solutions), a proxy learns to predict near-optimal solutions in a single forward pass. This amortized approach can reduce or eliminate the need to run a time-consuming solver from scratch for each new instance, delivering major speed-ups in real-world applications such as power systems, resource allocation, and beyond.
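To make the "single forward pass" idea concrete, here is a minimal sketch in Julia using Flux.jl. It is purely illustrative and does not use the LearningToOptimize.jl API: the dimensions, network architecture, and variable names are all assumptions. The point is that, once trained, a proxy is just a neural network that maps a vector of problem parameters to a candidate solution.

```julia
using Flux

# Hypothetical sizes: 10 problem parameters, 5 decision variables (illustrative assumption).
n_params, n_vars = 10, 5

# The proxy is an ordinary neural network mapping parameters to a candidate solution.
proxy = Chain(Dense(n_params => 64, relu),
              Dense(64 => 64, relu),
              Dense(64 => n_vars))

θ = rand(Float32, n_params)   # parameters of one new problem instance
x̂ = proxy(θ)                  # near-optimal solution predicted in a single forward pass
```

Compared with calling a solver on every new instance, inference here is a handful of matrix multiplications, which is where the amortized speed-up comes from.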
Technical Explanation
In more technical terms, amortized optimization seeks to learn a function that maps problem parameters to solutions that (approximately) minimize a given objective function subject to constraints. Modern methods leverage techniques like differentiable optimization layers, input-convex neural networks, or constraint-enforcing architectures (e.g., DC3) to ensure that the learned proxy solutions are both feasible and performant. By coupling the solver and the model in an end-to-end pipeline, these approaches let the training objective directly reflect downstream metrics, improving speed and reliability.
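As a hedged sketch of how "the training objective directly reflects downstream metrics", the loop below trains a proxy in a self-supervised, penalty-based way: the loss is the problem's own objective evaluated at the proxy's prediction, plus a soft penalty on constraint violation. The toy problem (minimize ‖x − θ‖² subject to sum(x) ≤ 1), the network sizes, and the penalty weight are all illustrative assumptions; constraint-enforcing architectures such as DC3 replace the soft penalty with explicit completion and correction steps.

```julia
using Flux

# Toy parametric problem (an illustrative assumption):
#   minimize  f(x; θ) = ‖x − θ‖²   subject to  g(x) = sum(x) − 1 ≤ 0
objective(x, θ) = sum(abs2, x .- θ)
violation(x)    = max(sum(x) - 1f0, 0f0)   # amount by which the constraint is violated

proxy = Chain(Dense(5 => 64, relu), Dense(64 => 5))
opt_state = Flux.setup(Adam(1f-3), proxy)
ρ = 10f0                                   # penalty weight (assumed)

for step in 1:1_000
    θ = rand(Float32, 5)                   # sample a new problem instance
    grads = Flux.gradient(proxy) do m
        x̂ = m(θ)
        objective(x̂, θ) + ρ * violation(x̂)^2   # objective + soft constraint penalty
    end
    Flux.update!(opt_state, proxy, grads[1])
end
```

No pre-solved labels are needed in this variant; when optimal solutions are available (as in the datasets hosted here), the same loop can instead regress on those solutions or combine both terms.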
Recent advances also focus on trustworthy or certifiable proxies, where constraint satisfaction or performance bounds are guaranteed. This is crucial in domains like energy systems or manufacturing, where infeasible solutions can have large penalties or safety concerns. Overall, learning-based optimization frameworks aim to combine the advantages of ML (data-driven generalization) with the rigor of mathematical programming (constraint handling and optimality).
For a broader overview, see the SIAM News article on trustworthy optimization proxies, which highlights the growing synergy between AI and classical optimization.
3. References and Citations
- A. Rosemberg, M. Tanneau, B. Fanzeres, J. Garcia, P. Van Hentenryck (2023). Learning Optimal Power Flow Value Functions with Input-Convex Neural Networks. Accepted at PSCC 2024.
- P. Donti, B. Amos, J. Z. Kolter (2021). DC3: A Learning Method for Optimization with Hard Constraints. ICLR 2021.
- P. Van Hentenryck (2023). Fusing Artificial Intelligence and Optimization with Trustworthy Optimization Proxies. SIAM News.
- B. Amos (2022). Tutorial on Amortized Optimization. arXiv:2202.00665.
- A. Rosemberg, A. Street, D. M. Valladão, P. Van Hentenryck (2023). Efficiently Training Deep-Learning Parametric Policies using Lagrangian Duality. arXiv:2405.14973.
By sharing our work and resources here, we hope to foster collaboration among researchers and practitioners who are exploring the exciting intersections of AI and optimization. Thank you for visiting LearningToOptimize—let’s push the boundaries of what’s possible in end-to-end optimization together!
Please reach out if you want to contribute!