{
  "cells": [
    {
      "cell_type": "markdown",
      "id": "29642bb2",
      "metadata": {},
      "source": [
        "# LevenbergMarquardtOptimizer\n",
        "\n",
        "## Overview\n",
        "\n",
        "The `LevenbergMarquardtOptimizer` class in GTSAM implements the Levenberg-Marquardt algorithm, a popular choice for solving non-linear least squares problems, which arise in applications such as computer vision, robotics, and machine learning.\n",
        "\n",
        "The Levenberg-Marquardt algorithm is an iterative technique that interpolates between the Gauss-Newton algorithm and gradient descent. It is particularly effective when the initial guess is reasonably close to the solution.\n",
        "\n",
        "The Levenberg-Marquardt algorithm seeks to minimize a cost function $F(x)$ of the form:\n",
        "\n",
        "$$\n",
        "F(x) = \\frac{1}{2} \\sum_{i=1}^{m} r_i(x)^2\n",
        "$$\n",
        "\n",
        "where $r_i(x)$ are the residuals. The update rule for the algorithm is given by:\n",
        "\n",
        "$$\n",
        "x_{k+1} = x_k - (J^T J + \\lambda I)^{-1} J^T r\n",
        "$$\n",
        "\n",
        "Here, $J$ is the Jacobian matrix of the residuals, $r$ is the stacked residual vector, $\\lambda$ is the damping parameter, and $I$ is the identity matrix.\n",
        "\n",
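        "The update rule can be sketched in a few lines. The following is a minimal NumPy illustration, not GTSAM's implementation; the exponential model, data, and the multiply-or-divide-by-10 schedule for $\\lambda$ are made up for the example:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# Toy problem: fit y = a * exp(b * t) to a few data points.\n",
        "t = np.array([0.0, 1.0, 2.0, 3.0])\n",
        "y = np.array([2.0, 2.7, 3.7, 5.0])\n",
        "\n",
        "def residuals(x):\n",
        "    a, b = x\n",
        "    return a * np.exp(b * t) - y\n",
        "\n",
        "def jacobian(x):\n",
        "    a, b = x\n",
        "    e = np.exp(b * t)\n",
        "    return np.column_stack([e, a * t * e])\n",
        "\n",
        "x = np.array([1.0, 1.0])   # initial guess\n",
        "lam = 1e-3                 # damping parameter lambda\n",
        "for _ in range(20):\n",
        "    r = residuals(x)\n",
        "    J = jacobian(x)\n",
        "    # LM update: x <- x - (J^T J + lambda I)^{-1} J^T r\n",
        "    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)\n",
        "    x_new = x - step\n",
        "    if np.sum(residuals(x_new)**2) < np.sum(r**2):\n",
        "        x = x_new\n",
        "        lam /= 10.0        # step accepted: move toward Gauss-Newton\n",
        "    else:\n",
        "        lam *= 10.0        # step rejected: move toward gradient descent\n",
        "\n",
        "print(x, 0.5 * np.sum(residuals(x)**2))\n",
        "```\n",
        "\n",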
        "Key features:\n",
        "\n",
        "- **Non-linear Optimization**: The class is designed to handle non-linear optimization problems efficiently.\n",
        "- **Damping Mechanism**: It incorporates a damping parameter to control the step size, balancing between the Gauss-Newton and gradient descent methods.\n",
        "- **Iterative Improvement**: The optimizer iteratively refines the solution, reducing the error at each step.\n",
        "\n",
        "## Key Methods\n",
        "\n",
        "Please see the base class [NonlinearOptimizer.ipynb](NonlinearOptimizer.ipynb).\n",
        "\n",
        "## Parameters\n",
        "\n",
        "The `LevenbergMarquardtParams` class defines parameters specific to this optimization algorithm:\n",
        "\n",
        "| Parameter | Type | Default Value | Description |\n",
        "|-----------|------|---------------|-------------|\n",
        "| lambdaInitial | double | 1e-5 | The initial Levenberg-Marquardt damping term |\n",
        "| lambdaFactor | double | 10.0 | The factor by which lambda is multiplied or divided when it is adjusted |\n",
        "| lambdaUpperBound | double | 1e5 | The maximum lambda to try before assuming the optimization has failed |\n",
        "| lambdaLowerBound | double | 0.0 | The minimum lambda used in LM |\n",
        "| verbosityLM | VerbosityLM | SILENT | The verbosity level for Levenberg-Marquardt |\n",
        "| minModelFidelity | double | 1e-3 | Lower bound for the modelFidelity to accept the result of an LM iteration |\n",
        "| logFile | std::string | \"\" | An optional CSV log file, with [iteration, time, error, lambda] |\n",
        "| diagonalDamping | bool | false | If true, use the diagonal of the Hessian approximation $J^T J$ for damping instead of the identity matrix |\n",
        "| useFixedLambdaFactor | bool | true | If true, lambda is always multiplied or divided by lambdaFactor when adjusted |\n",
        "| minDiagonal | double | 1e-6 | When using diagonal damping, the minimum value to which diagonal entries are clamped |\n",
        "| maxDiagonal | double | 1e32 | When using diagonal damping, the maximum value to which diagonal entries are clamped |\n",
        "\n",
        "These parameters complement the standard optimization parameters inherited from `NonlinearOptimizerParams`, which include:\n",
        "\n",
        "- Maximum iterations\n",
        "- Relative and absolute error thresholds\n",
        "- Error function verbosity\n",
        "- Linear solver type\n",
        "\n",
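        "As a usage sketch, assuming the `gtsam` Python package is installed, the parameters can be set and passed to the optimizer as follows. The factor graph here is a deliberately trivial single-prior problem, chosen only to show the parameter plumbing:\n",
        "\n",
        "```python\n",
        "import gtsam\n",
        "\n",
        "# A trivial problem: a single prior factor on a 2D pose.\n",
        "graph = gtsam.NonlinearFactorGraph()\n",
        "prior_noise = gtsam.noiseModel.Isotropic.Sigma(3, 0.1)\n",
        "graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))\n",
        "\n",
        "initial = gtsam.Values()\n",
        "initial.insert(1, gtsam.Pose2(0.5, -0.3, 0.2))  # deliberately off the prior\n",
        "\n",
        "params = gtsam.LevenbergMarquardtParams()\n",
        "params.setlambdaInitial(1e-4)\n",
        "params.setlambdaFactor(10.0)\n",
        "params.setMaxIterations(100)       # inherited from NonlinearOptimizerParams\n",
        "params.setRelativeErrorTol(1e-8)   # inherited from NonlinearOptimizerParams\n",
        "\n",
        "optimizer = gtsam.LevenbergMarquardtOptimizer(graph, initial, params)\n",
        "result = optimizer.optimize()\n",
        "print(result.atPose2(1))\n",
        "```\n",
        "\n",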
        "## Usage Notes\n",
        "\n",
        "- The choice of the initial guess can significantly affect the convergence speed and the quality of the solution.\n",
        "- Proper tuning of the damping parameter $\\lambda$ is crucial for balancing the convergence rate and stability.\n",
        "- The optimizer is most effective when the residuals are approximately linear near the solution.\n",
        "\n",
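        "The interpolation behavior behind the tuning advice is easy to check numerically. With a made-up Jacobian and residual, a small $\\lambda$ reproduces the Gauss-Newton step, while a large $\\lambda$ yields a short step along the negative gradient direction:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# Made-up Jacobian and residual at some current iterate.\n",
        "J = np.array([[2.0, 0.0], [0.0, 0.5], [1.0, 1.0]])\n",
        "r = np.array([1.0, -2.0, 0.5])\n",
        "grad = J.T @ r                       # gradient of 0.5 * ||r||^2\n",
        "\n",
        "def lm_step(lam):\n",
        "    return np.linalg.solve(J.T @ J + lam * np.eye(2), grad)\n",
        "\n",
        "gn = np.linalg.solve(J.T @ J, grad)  # Gauss-Newton step (lambda -> 0)\n",
        "small = lm_step(1e-8)\n",
        "big = lm_step(1e8)\n",
        "\n",
        "# Small lambda: nearly identical to the Gauss-Newton step.\n",
        "print(np.allclose(small, gn))\n",
        "# Large lambda: tiny step that points along the gradient direction.\n",
        "print(np.linalg.norm(big), np.dot(big, grad) > 0)\n",
        "```\n",
        "\n",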
        "## Files\n",
        "\n",
        "- [LevenbergMarquardtOptimizer.h](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LevenbergMarquardtOptimizer.h)\n",
        "- [LevenbergMarquardtOptimizer.cpp](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LevenbergMarquardtOptimizer.cpp)\n",
        "- [LevenbergMarquardtParams.h](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LevenbergMarquardtParams.h)\n",
        "- [LevenbergMarquardtParams.cpp](https://github.com/borglab/gtsam/blob/develop/gtsam/nonlinear/LevenbergMarquardtParams.cpp)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "colab_button",
      "metadata": {},
      "source": [
        "<a href=\"https://colab.research.google.com/github/borglab/gtsam/blob/develop/gtsam/nonlinear/doc/LevenbergMarquardtOptimizer.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "license_cell",
      "metadata": {
        "tags": [
          "remove-cell"
        ]
      },
      "source": [
        "GTSAM Copyright 2010-2022, Georgia Tech Research Corporation,\nAtlanta, Georgia 30332-0415\nAll Rights Reserved\n\nAuthors: Frank Dellaert, et al. (see THANKS for the full author list)\n\nSee LICENSE for the license information"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "colab_import",
      "metadata": {
        "tags": [
          "remove-cell"
        ]
      },
      "outputs": [],
      "source": [
        "try:\n    import google.colab\n    %pip install --quiet gtsam-develop\nexcept ImportError:\n    pass"
      ]
    }
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}