<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1" />
<meta name="generator" content="pdoc 0.10.0" />
<title>silk.models.magicpoint API documentation</title>
<meta name="description" content="The MagicPoint model of SuperPoint, to be trained
on synthetic data. Based on the official
PyTorch implementation from the MagicLeap paper.
…" />
<link rel="preload stylesheet" as="style" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/11.0.1/sanitize.min.css" integrity="sha256-PK9q560IAAa6WVRRh76LtCaI8pjTJ2z11v0miyNNjrs=" crossorigin>
<link rel="preload stylesheet" as="style" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/11.0.1/typography.min.css" integrity="sha256-7l/o7C8jubJiy74VsKTidCy1yBkRtiUGbVkYBylBqUg=" crossorigin>
<link rel="stylesheet preload" as="style" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/10.1.1/styles/github.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:30px;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:1em 0 .50em 0}h3{font-size:1.4em;margin:25px 0 10px 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .3s ease-in-out}a:hover{color:#e82}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900}pre code{background:#f8f8f8;font-size:.8em;line-height:1.4em}code{background:#f2f2f1;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{background:#f8f8f8;border:0;border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0;padding:1ex}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-weight:bold;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target .name{background:var(--highlight-color)}.name > 
span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em .5em;margin-bottom:1em}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.item .name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul{padding-left:1.5em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/10.1.1/highlight.min.js" integrity="sha256-Uv3H6lx7dJmRfRvH8TH6kJD1TSK1aFcwgx+mdg3epi8=" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => hljs.initHighlighting())</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>silk.models.magicpoint</code></h1>
</header>
<section id="section-intro">
<p>The MagicPoint model of SuperPoint, to be trained on synthetic data. Based on the official PyTorch implementation from the MagicLeap paper.</p>
<h1 id="checked-parity">Checked Parity</h1>
<h2 id="with-paper-httpsarxivorgpdf171207629pdf">With Paper : <a href="https://arxiv.org/pdf/1712.07629.pdf">https://arxiv.org/pdf/1712.07629.pdf</a></h2>
<h3 id="optimizer-page-6">Optimizer (page 6)</h3>
<ul>
<li>[<strong>done</strong>] Type = Adam</li>
<li>[<strong>done</strong>] Learning Rate = 0.001</li>
<li>[<strong>done</strong>] β = (0.9, 0.999)</li>
</ul>
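<p>These settings correspond to a stock Adam configuration. A minimal sketch (the convolutional backbone below is a hypothetical stand-in, not the configured silk model):</p>
<pre><code class="language-python">import torch

# Hypothetical stand-in module; the real model is the configured silk backbone.
backbone = torch.nn.Conv2d(1, 65, kernel_size=3, padding=1)

# Paper settings (page 6): Adam with learning rate 0.001 and betas (0.9, 0.999).
optimizer = torch.optim.Adam(backbone.parameters(), lr=0.001, betas=(0.9, 0.999))

group = optimizer.param_groups[0]
</code></pre>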
<h3 id="training-page-6">Training (page 6)</h3>
<ul>
<li>[<strong>done</strong>] Batch Size = 32</li>
<li>[<strong>diff</strong>] Steps = 200,000 (ours : early stopping)</li>
</ul>
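<p>The paper trains for a fixed 200,000 steps, whereas this implementation stops early on validation loss. A generic patience-based criterion of that kind can be sketched as follows (hypothetical helper, not the silk implementation):</p>
<pre><code class="language-python">def should_stop(val_losses, patience=3, min_delta=0.0):
    """Stop once validation loss has failed to improve for `patience` evaluations."""
    if patience >= len(val_losses):
        return False
    best_before = min(val_losses[:-patience])
    # Stop when no recent loss improved on the earlier best by at least min_delta.
    return all(loss >= best_before - min_delta for loss in val_losses[-patience:])
</code></pre>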
<h3 id="metrics-page-4">Metrics (page 4)</h3>
<ul>
<li>[<strong>done</strong>] mAP = 0.971 (ours : 0.999)</li>
</ul>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python"># Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

&#34;&#34;&#34;
The MagicPoint model of SuperPoint, to be trained
on synthetic data. Based on the official
PyTorch implementation from the MagicLeap paper.
# Checked Parity
## With Paper : https://arxiv.org/pdf/1712.07629.pdf
### Optimizer (page 6)
* [**done**] Type = Adam
* [**done**] Learning Rate = 0.001
* [**done**] β = (0.9, 0.999)
### Training (page 6)
* [**done**] Batch Size = 32
* [**diff**] Steps = 200,000 (ours : early stopping)
### Metrics (page 4)
* [**done**] mAP = 0.971 (ours : 0.999)
&#34;&#34;&#34;

from typing import Any, Dict, Optional, Union

import pytorch_lightning as pl
import torch
from silk.backbones.superpoint.utils import (
    space_to_depth,
    prob_map_to_positions_with_prob,
    prob_map_to_points_map,
)
from silk.config.core import ensure_is_instance
from silk.config.optimizer import Spec
from silk.flow import Flow
from silk.logger import LOG
from silk.models.abstract import OptimizersHandler, StateDictRedirect
from silk.tasks.training.supervised_keypoint import SupervisedKeypoint
from silk.transforms.abstract import NamedContext, Transform
from silk.transforms.cv.homography import RandomHomographicSampler

_DEBUG_MODE_ENABLED = False


# internal debug function to dump counts
# TODO(Pierre): clean
def _debug_dump_counts(counts, device):
    if not _DEBUG_MODE_ENABLED:
        return

    def _dump(image, path):
        from os import makedirs
        
        from silk.logger import LOG
        from skimage.io import imsave

        makedirs(&#34;./debug&#34;, exist_ok=True)

        image = image.permute((1, 2, 0))
        image = image.squeeze()
        image = image.detach().cpu().numpy()
        image /= image.max()

        LOG.warning(f&#34;debug dump image to : {path}&#34;)
        imsave(path, image)

    for k in range(counts.shape[0]):
        _dump(counts[k], f&#34;./debug/{device}-counts-{k}.png&#34;)

    LOG.warning(f&#39;debug mode enabled on &#34;{__file__}&#34;&#39;)


class HomographyAdaptation:
    def __init__(
        self,
        random_homographic_adaptation_kwargs,
        score_fn,
        default_detection_threshold=None,
        default_nms_dist=None,
        default_border_dist=None,
    ) -&gt; None:
        # will be initialized / used in homographic adaptation
        self._homographic_sampler = None
        self._random_homographic_adaptation_kwargs = (
            {}
            if random_homographic_adaptation_kwargs is None
            else random_homographic_adaptation_kwargs
        )
        self._default_detection_threshold = default_detection_threshold
        self._default_nms_dist = default_nms_dist
        self._default_border_dist = default_border_dist
        self._score_fn = score_fn

    def _check_homographic_sampler(self, images, n_samples=100):
        &#34;&#34;&#34;Make sure the homographic sampler is initialized for the proper input size.&#34;&#34;&#34;
        reinit_homographic_sampler = False
        reinit_homographic_sampler |= self._homographic_sampler is None
        if self._homographic_sampler is not None:
            reinit_homographic_sampler |= (
                self._homographic_sampler.batch_size != images.shape[0] * n_samples
            )
            reinit_homographic_sampler |= (
                self._homographic_sampler._sampling_size != images.shape[-2:]
            )
            reinit_homographic_sampler |= (
                self._homographic_sampler.device != images.device
            )

        if reinit_homographic_sampler:
            self._homographic_sampler = RandomHomographicSampler(
                batch_size=images.shape[0] * n_samples,
                sampling_size=images.shape[-2:],
                auto_randomize=False,
                device=images.device,
                **self._random_homographic_adaptation_kwargs,
            )

    def homographic_adaptation_prediction(
        self,
        batch: NamedContext,
        detection_threshold=None,
        nms_dist=None,
        border_dist=None,
        n_samples: int = 100,
        add_identity: bool = False,
    ) -&gt; NamedContext:
        &#34;&#34;&#34;Prediction using the homographic adaptation technique.

        Parameters
        ----------
        batch : NamedContext
            Input batch containing an &#34;image&#34; of shape :math:`(B,C,H,W)`.
        n_samples : int, optional
            Number of homographic samples to generate per image, by default 100.
        add_identity : bool, optional
            Include the original image in the set of random homographic samples, by default False.

        Returns
        -------
        NamedContext
            New context containing &#34;points&#34;, a list of per-image tensors of shape :math:`(N,3)` (2D coordinates + probabilities).
        &#34;&#34;&#34;
        # 1. prepare input and homographic sampler
        ensure_is_instance(batch, NamedContext)
        batch.ensure_exists(&#34;image&#34;)
        images = batch[&#34;image&#34;]
        device = images.device
        detection_threshold = (
            self._default_detection_threshold
            if detection_threshold is None
            else detection_threshold
        )
        nms_dist = self._default_nms_dist if nms_dist is None else nms_dist
        border_dist = self._default_border_dist if border_dist is None else border_dist

        assert detection_threshold is not None, &#34;detection_threshold should be provided&#34;
        assert nms_dist is not None, &#34;nms_dist should be provided&#34;
        assert border_dist is not None, &#34;border_dist should be provided&#34;

        self._check_homographic_sampler(images, n_samples)

        # 2. run inference on initial image
        if add_identity:
            probs_map_identity = self._score_fn(images)
            probs_map_identity = probs_map_identity.to(device)

            counts_identity = torch.ones_like(images)

        # 3. run inference on random homographic crops
        images_samples = self._homographic_sampler.forward_sampling(
            images,
            randomize=True,
        )
        probs_map_samples = self._score_fn(images_samples)
        probs_map_samples = probs_map_samples.to(device)

        # 4. bring prob map to original image referential
        probs_map_samples = self._homographic_sampler.backward_sampling(
            probs_map_samples, randomize=False
        )

        # 5. sum reduction of all probs
        probs_map = probs_map_samples.view(images.shape[0], -1, *images.shape[1:]).sum(
            dim=1
        )
        if add_identity:
            probs_map += probs_map_identity

        # 6. update counts per pixel (to compute average per position)
        ones_crop = torch.ones(
            (1, 1, images.shape[2], images.shape[3]),
            dtype=probs_map.dtype,
            device=self._homographic_sampler.device,
        )
        counts_samples = self._homographic_sampler.backward_sampling(
            ones_crop, randomize=False
        )
        counts = counts_samples.view(images.shape[0], -1, *images.shape[1:]).sum(dim=1)
        if add_identity:
            counts += counts_identity

        _debug_dump_counts(counts, self.device)

        # 7. compute average prob per position
        zero_counts = counts == 0
        final_probs_map = probs_map / counts
        final_probs_map[zero_counts] = 0
        final_probs_map = final_probs_map.squeeze(1)

        # 8. convert to point coordinates (using NMS)
        prob_map = prob_map_to_points_map(
            final_probs_map,
            detection_threshold,
            nms_dist,
            border_dist,
        )
        points = prob_map_to_positions_with_prob(prob_map)

        return batch.add(&#34;points&#34;, list(points))


class MagicPoint(
    OptimizersHandler,
    StateDictRedirect,
    pl.LightningModule,
    HomographyAdaptation,
):
    def __init__(
        self,
        images_to_logits_fn,
        optimizer_spec: Spec = None,
        image_aug_transform: Union[Transform, None] = None,
        random_homographic_adaptation_kwargs: Union[Dict[str, Any], None] = None,
        **kwargs,
    ):
        &#34;&#34;&#34;
        Initialize the model.
        Can take an input image of any number of channels (e.g. grayscale, RGB).
        &#34;&#34;&#34;
        OptimizersHandler.__init__(self, optimizer_spec)
        pl.LightningModule.__init__(self, **kwargs)
        StateDictRedirect.__init__(self, images_to_logits_fn)
        HomographyAdaptation.__init__(
            self,
            random_homographic_adaptation_kwargs,
            self._get_scores,
            images_to_logits_fn._detection_threshold,
            images_to_logits_fn._nms_dist,
            images_to_logits_fn._border_dist,
        )

        self.flow = Flow(&#34;batch&#34;)
        self.flow.define_transition(
            &#34;checked_batch&#34;,
            self._check_batch,
            &#34;batch&#34;,
        )
        self.flow.define_transition(&#34;images&#34;, self._get_images, &#34;checked_batch&#34;)
        self.flow.define_transition(&#34;labels&#34;, self._get_labels, &#34;checked_batch&#34;)

        self._images_to_logits_fn = images_to_logits_fn
        self._batch_to_images_labels_fn = self.flow.with_outputs((&#34;images&#34;, &#34;labels&#34;))
        self._training_task = SupervisedKeypoint(
            batch_to_images_and_labels_fn=self._batch_to_images_labels_fn,
            images_to_logits_fn=images_to_logits_fn,
            image_aug_transform=image_aug_transform,
        )

        self._cell_size = 8

    @property
    def model(self):
        return self._images_to_logits_fn

    def _get_scores(self, images):
        images = images.to(self.device)
        return self._images_to_logits_fn.forward_flow(&#34;score&#34;, images)

    def _named_context_to_device(self, batch: NamedContext) -&gt; NamedContext:
        # send context tensors to model device
        def to_device(el):
            if isinstance(el, torch.Tensor):
                return el.to(self.device)
            elif isinstance(el, list):
                return [e.to(self.device) for e in el]
            elif isinstance(el, tuple):
                return tuple(e.to(self.device) for e in el)
            raise RuntimeError(f&#34;type {type(el)} not handled&#34;)

        # send data to model&#39;s device
        return batch.map(to_device)

    def _check_batch(self, batch: NamedContext):
        # check batch
        ensure_is_instance(batch, NamedContext)

        if self.training:
            batch.ensure_exists(&#34;image&#34;, &#34;label_map&#34;)
            assert batch[&#34;image&#34;].shape == batch[&#34;label_map&#34;].shape
        else:
            batch.ensure_exists(&#34;image&#34;)

        return self._named_context_to_device(batch)

    def predict_step(
        self,
        batch: NamedContext,
        batch_idx: Optional[int] = None,
        dataloader_idx: Optional[int] = None,
    ) -&gt; Any:
        images = self.flow.with_outputs(&#34;images&#34;)(batch)
        points = self._images_to_logits_fn.flow.with_outputs(&#34;positions&#34;)(images)
        return batch.add(&#34;points&#34;, points)

    def test_step(
        self, batch: NamedContext, batch_idx: int, dataloader_idx: Optional[int] = None
    ) -&gt; NamedContext:
        images, labels = self._batch_to_images_labels_fn(batch)
        (
            class_probs,
            probs_map,
            nms_pred_positions_with_prob,
        ) = self._images_to_logits_fn.flow.with_outputs(
            (
                &#34;probability&#34;,
                &#34;score&#34;,
                &#34;positions&#34;,
            )
        )(
            images
        )

        class_label = torch.argmax(labels, dim=1)

        pred_positions_with_prob = prob_map_to_positions_with_prob(
            probs_map, threshold=1e-4
        )

        batch = batch.add(&#34;one_hot_class_labels&#34;, labels)
        batch = batch.add(&#34;class_label&#34;, class_label)
        batch = batch.add(&#34;class_probs&#34;, class_probs)
        batch = batch.add(&#34;probs_map&#34;, probs_map)
        batch = batch.add(&#34;pred_positions_with_prob&#34;, pred_positions_with_prob)
        batch = batch.add(&#34;nms_pred_positions_with_prob&#34;, nms_pred_positions_with_prob)

        batch = self._named_context_to_device(batch)

        return batch

    def _get_images(self, batch: NamedContext):
        assert isinstance(batch[&#34;image&#34;], torch.Tensor)

        # check data shape
        shape = batch[&#34;image&#34;].shape
        assert len(shape) == 4
        assert shape[1] == 1
        assert shape[2] % self._cell_size == 0
        assert shape[3] % self._cell_size == 0

        return batch[&#34;image&#34;]

    def _get_labels(
        self,
        batch: NamedContext,
    ):
        assert isinstance(batch[&#34;label_map&#34;], torch.Tensor)

        # get labels by adding dustbin and converting cells to depth
        return space_to_depth(batch[&#34;label_map&#34;], self._cell_size)

    def training_step(self, batch, batch_idx):
        loss = self._training_task.batch_to_training_loss_fn(batch)
        self.log(&#34;train.loss&#34;, loss)
        return loss

    def validation_step(self, batch, batch_idx):
        loss = self._training_task.batch_to_validation_loss_fn(batch)
        self.log(&#34;val.loss&#34;, loss)
        return loss</code></pre>
</details>
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="silk.models.magicpoint.HomographyAdaptation"><code class="flex name class">
<span>class <span class="ident">HomographyAdaptation</span></span>
<span>(</span><span>random_homographic_adaptation_kwargs, score_fn, default_detection_threshold=None, default_nms_dist=None, default_border_dist=None)</span>
</code></dt>
<dd>
<div class="desc"></div>
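<p>This mixin implements homographic adaptation: detection scores are computed on many random homographic warps of each image, warped back into the original image frame, and averaged per pixel over the warps that cover it (steps 5&ndash;7 of <code>homographic_adaptation_prediction</code>). The core averaging can be sketched with precomputed stacks of back-warped score maps and validity masks (shapes and names are illustrative; the real code uses <code>RandomHomographicSampler</code>):</p>
<pre><code class="language-python">import torch

# Toy stand-ins for N back-warped score maps and their validity masks, (B, N, 1, H, W).
B, N, H, W = 2, 4, 8, 8
scores = torch.rand(B, N, 1, H, W)
valid = (torch.rand(B, N, 1, H, W) > 0.3).float()  # 1 where a warp covered the pixel

probs_sum = (scores * valid).sum(dim=1)  # step 5: sum of probabilities per pixel
counts = valid.sum(dim=1)                # step 6: number of warps covering each pixel
avg = probs_sum / counts.clamp(min=1)    # step 7: average score per pixel
avg = torch.where(counts == 0, torch.zeros_like(avg), avg)  # uncovered pixels score 0
</code></pre>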
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class HomographyAdaptation:
    def __init__(
        self,
        random_homographic_adaptation_kwargs,
        score_fn,
        default_detection_threshold=None,
        default_nms_dist=None,
        default_border_dist=None,
    ) -&gt; None:
        # will be initialized / used in homographic adaptation
        self._homographic_sampler = None
        self._random_homographic_adaptation_kwargs = (
            {}
            if random_homographic_adaptation_kwargs is None
            else random_homographic_adaptation_kwargs
        )
        self._default_detection_threshold = default_detection_threshold
        self._default_nms_dist = default_nms_dist
        self._default_border_dist = default_border_dist
        self._score_fn = score_fn

    def _check_homographic_sampler(self, images, n_samples=100):
        &#34;&#34;&#34;Make sure the homographic sampler is initialized for the proper input size.&#34;&#34;&#34;
        reinit_homographic_sampler = False
        reinit_homographic_sampler |= self._homographic_sampler is None
        if self._homographic_sampler is not None:
            reinit_homographic_sampler |= (
                self._homographic_sampler.batch_size != images.shape[0] * n_samples
            )
            reinit_homographic_sampler |= (
                self._homographic_sampler._sampling_size != images.shape[-2:]
            )
            reinit_homographic_sampler |= (
                self._homographic_sampler.device != images.device
            )

        if reinit_homographic_sampler:
            self._homographic_sampler = RandomHomographicSampler(
                batch_size=images.shape[0] * n_samples,
                sampling_size=images.shape[-2:],
                auto_randomize=False,
                device=images.device,
                **self._random_homographic_adaptation_kwargs,
            )

    def homographic_adaptation_prediction(
        self,
        batch: NamedContext,
        detection_threshold=None,
        nms_dist=None,
        border_dist=None,
        n_samples: int = 100,
        add_identity: bool = False,
    ) -&gt; NamedContext:
        &#34;&#34;&#34;Prediction using the homographic adaptation technique.

        Parameters
        ----------
        batch : NamedContext
            Input batch containing an &#34;image&#34; of shape :math:`(B,C,H,W)`.
        n_samples : int, optional
            Number of homographic samples to generate per image, by default 100.
        add_identity : bool, optional
            Include the original image in the set of random homographic samples, by default False.

        Returns
        -------
        NamedContext
            New context containing &#34;points&#34;, a list of per-image tensors of shape :math:`(N,3)` (2D coordinates + probabilities).
        &#34;&#34;&#34;
        # 1. prepare input and homographic sampler
        ensure_is_instance(batch, NamedContext)
        batch.ensure_exists(&#34;image&#34;)
        images = batch[&#34;image&#34;]
        device = images.device
        detection_threshold = (
            self._default_detection_threshold
            if detection_threshold is None
            else detection_threshold
        )
        nms_dist = self._default_nms_dist if nms_dist is None else nms_dist
        border_dist = self._default_border_dist if border_dist is None else border_dist

        assert detection_threshold is not None, &#34;detection_threshold should be provided&#34;
        assert nms_dist is not None, &#34;nms_dist should be provided&#34;
        assert border_dist is not None, &#34;border_dist should be provided&#34;

        self._check_homographic_sampler(images, n_samples)

        # 2. run inference on initial image
        if add_identity:
            probs_map_identity = self._score_fn(images)
            probs_map_identity = probs_map_identity.to(device)

            counts_identity = torch.ones_like(images)

        # 3. run inference on random homographic crops
        images_samples = self._homographic_sampler.forward_sampling(
            images,
            randomize=True,
        )
        probs_map_samples = self._score_fn(images_samples)
        probs_map_samples = probs_map_samples.to(device)

        # 4. bring prob map to original image referential
        probs_map_samples = self._homographic_sampler.backward_sampling(
            probs_map_samples, randomize=False
        )

        # 5. sum reduction of all probs
        probs_map = probs_map_samples.view(images.shape[0], -1, *images.shape[1:]).sum(
            dim=1
        )
        if add_identity:
            probs_map += probs_map_identity

        # 6. update counts per pixel (to compute average per position)
        ones_crop = torch.ones(
            (1, 1, images.shape[2], images.shape[3]),
            dtype=probs_map.dtype,
            device=self._homographic_sampler.device,
        )
        counts_samples = self._homographic_sampler.backward_sampling(
            ones_crop, randomize=False
        )
        counts = counts_samples.view(images.shape[0], -1, *images.shape[1:]).sum(dim=1)
        if add_identity:
            counts += counts_identity

        _debug_dump_counts(counts, self.device)

        # 7. compute average prob per position
        zero_counts = counts == 0
        final_probs_map = probs_map / counts
        final_probs_map[zero_counts] = 0
        final_probs_map = final_probs_map.squeeze(1)

        # 8. convert to point coordinates (using NMS)
        prob_map = prob_map_to_points_map(
            final_probs_map,
            detection_threshold,
            nms_dist,
            border_dist,
        )
        points = prob_map_to_positions_with_prob(prob_map)

        return batch.add(&#34;points&#34;, list(points))</code></pre>
</details>
<h3>Subclasses</h3>
<ul class="hlist">
<li><a title="silk.models.magicpoint.MagicPoint" href="#silk.models.magicpoint.MagicPoint">MagicPoint</a></li>
<li><a title="silk.models.superpoint.SuperPoint" href="superpoint.html#silk.models.superpoint.SuperPoint">SuperPoint</a></li>
</ul>
<h3>Methods</h3>
<dl>
<dt id="silk.models.magicpoint.HomographyAdaptation.homographic_adaptation_prediction"><code class="name flex">
<span>def <span class="ident">homographic_adaptation_prediction</span></span>(<span>self, batch: <a title="silk.transforms.abstract.NamedContext" href="../transforms/abstract.html#silk.transforms.abstract.NamedContext">NamedContext</a>, detection_threshold=None, nms_dist=None, border_dist=None, n_samples: int = 100, add_identity: bool = False) ‑> <a title="silk.transforms.abstract.NamedContext" href="../transforms/abstract.html#silk.transforms.abstract.NamedContext">NamedContext</a></span>
</code></dt>
<dd>
<div class="desc"><p>Prediction using the homographic adaptation technique.</p>
<h2 id="parameters">Parameters</h2>
<dl>
<dt><strong><code>batch</code></strong> :&ensp;<code>NamedContext</code></dt>
<dd>Input batch containing an "image" of shape :math:<code>(B,C,H,W)</code>.</dd>
<dt><strong><code>n_samples</code></strong> :&ensp;<code>int</code>, optional</dt>
<dd>Number of homographic samples to generate per image, by default 100.</dd>
<dt><strong><code>add_identity</code></strong> :&ensp;<code>bool</code>, optional</dt>
<dd>Include the original image in the set of random homographic samples, by default False.</dd>
</dl>
<h2 id="returns">Returns</h2>
<dl>
<dt><code>NamedContext</code></dt>
<dd>New context containing "points", a list of per-image tensors of shape :math:<code>(N,3)</code> (2D coordinates + probabilities).</dd>
</dl></div>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def homographic_adaptation_prediction(
    self,
    batch: NamedContext,
    detection_threshold=None,
    nms_dist=None,
    border_dist=None,
    n_samples: int = 100,
    add_identity: bool = False,
) -&gt; NamedContext:
    &#34;&#34;&#34;Prediction using the homographic adaptation technique.

    Parameters
    ----------
    batch : NamedContext
        Input batch containing an &#34;image&#34; of shape :math:`(B,C,H,W)`.
    n_samples : int, optional
        Number of homographic samples to generate per image, by default 100.
    add_identity : bool, optional
        Include the original image in the set of random homographic samples, by default False.

    Returns
    -------
    NamedContext
        New context containing &#34;points&#34;, a list of per-image tensors of shape :math:`(N,3)` (2D coordinates + probabilities).
    &#34;&#34;&#34;
    # 1. prepare input and homographic sampler
    ensure_is_instance(batch, NamedContext)
    batch.ensure_exists(&#34;image&#34;)
    images = batch[&#34;image&#34;]
    device = images.device
    detection_threshold = (
        self._default_detection_threshold
        if detection_threshold is None
        else detection_threshold
    )
    nms_dist = self._default_nms_dist if nms_dist is None else nms_dist
    border_dist = self._default_border_dist if border_dist is None else border_dist

    assert detection_threshold is not None, &#34;detection_threshold should be provided&#34;
    assert nms_dist is not None, &#34;nms_dist should be provided&#34;
    assert border_dist is not None, &#34;border_dist should be provided&#34;

    self._check_homographic_sampler(images, n_samples)

    # 2. run inference on initial image
    if add_identity:
        probs_map_identity = self._score_fn(images)
        probs_map_identity = probs_map_identity.to(device)

        counts_identity = torch.ones_like(images)

    # 3. run inference on random homographic crops
    images_samples = self._homographic_sampler.forward_sampling(
        images,
        randomize=True,
    )
    probs_map_samples = self._score_fn(images_samples)
    probs_map_samples = probs_map_samples.to(device)

    # 4. bring prob map to original image referential
    probs_map_samples = self._homographic_sampler.backward_sampling(
        probs_map_samples, randomize=False
    )

    # 5. sum reduction of all probs
    probs_map = probs_map_samples.view(images.shape[0], -1, *images.shape[1:]).sum(
        dim=1
    )
    if add_identity:
        probs_map += probs_map_identity

    # 6. update counts per pixel (to compute average per position)
    ones_crop = torch.ones(
        (1, 1, images.shape[2], images.shape[3]),
        dtype=probs_map.dtype,
        device=self._homographic_sampler.device,
    )
    counts_samples = self._homographic_sampler.backward_sampling(
        ones_crop, randomize=False
    )
    counts = counts_samples.view(images.shape[0], -1, *images.shape[1:]).sum(dim=1)
    if add_identity:
        counts += counts_identity

    _debug_dump_counts(counts, self.device)

    # 7. compute average prob per position
    zero_counts = counts == 0
    final_probs_map = probs_map / counts
    final_probs_map[zero_counts] = 0
    final_probs_map = final_probs_map.squeeze(1)

    # 8. convert to point coordinates (using NMS)
    prob_map = prob_map_to_points_map(
        final_probs_map,
        detection_threshold,
        nms_dist,
        border_dist,
    )
    points = prob_map_to_positions_with_prob(prob_map)

    return batch.add(&#34;points&#34;, list(points))</code></pre>
</details>
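<p>Steps 5&ndash;7 above boil down to summing the warped score maps and per-pixel coverage counts, then averaging only where at least one warp covered the pixel. A minimal NumPy sketch of that reduction (the helper name <code>average_score_maps</code> is hypothetical; the actual implementation performs the warping with the homographic sampler's <code>backward_sampling</code>):</p>

```python
import numpy as np

def average_score_maps(score_maps, coverage_masks):
    """Average per-pixel scores over several warped views.

    score_maps: list of (H, W) arrays, each already warped back to the
        original image frame (zero outside the warped region).
    coverage_masks: list of (H, W) binary arrays marking the pixels each
        warp actually covers.
    """
    probs = np.zeros_like(score_maps[0], dtype=np.float64)
    counts = np.zeros_like(score_maps[0], dtype=np.float64)
    for scores, mask in zip(score_maps, coverage_masks):
        probs += scores   # step 5: sum reduction of all probs
        counts += mask    # step 6: per-pixel counts
    # step 7: average, forcing uncovered pixels (count == 0) to zero
    out = np.zeros_like(probs)
    covered = counts > 0
    out[covered] = probs[covered] / counts[covered]
    return out

# two toy 2x2 "views": the second covers only the left column
maps = [np.array([[0.2, 0.4], [0.6, 0.8]]),
        np.array([[0.4, 0.0], [0.2, 0.0]])]
masks = [np.ones((2, 2)),
         np.array([[1.0, 0.0], [1.0, 0.0]])]
avg = average_score_maps(maps, masks)
```

<p>The division guard here mirrors the <code>zero_counts</code> mask in step 7; the sketch reproduces only the arithmetic, not the homographic warping itself.</p>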
</dd>
</dl>
</dd>
<dt id="silk.models.magicpoint.MagicPoint"><code class="flex name class">
<span>class <span class="ident">MagicPoint</span></span>
<span>(</span><span>images_to_logits_fn, optimizer_spec: <a title="silk.config.optimizer.Spec" href="../config/optimizer.html#silk.config.optimizer.Spec">Spec</a> = None, image_aug_transform: Optional[<a title="silk.transforms.abstract.Transform" href="../transforms/abstract.html#silk.transforms.abstract.Transform">Transform</a>] = None, random_homographic_adaptation_kwargs: Optional[Dict[str, Any]] = None, **kwargs)</span>
</code></dt>
<dd>
<div class="desc"><p>Automate the most common pattern of optimizer creation.
This pattern consists of one optimizer per model.</p>
<h2 id="examples">Examples</h2>
<pre><code class="language-python">class MyCustomModel(OptimizersHandler, pl.LightningModule):
    def __init__(self, optimizer_spec, **kwargs):
        OptimizersHandler.__init__(self, optimizer_spec)
        pl.LightningModule.__init__(self, **kwargs)
        ...
</code></pre>
<p>This will automatically equip <code>MyCustomModel</code> with the <code>configure_optimizers</code> method required by Pytorch Lightning.
Notice how <code>OptimizersHandler</code> is before <code>pl.LightningModule</code> in the list of base classes.
This is necessary since <code>pl.LightningModule</code> checks if the current class has a method called <code>configure_optimizers</code>.</p>
<pre><code class="language-python">class MyCustomModel(OptimizersHandler, pl.LightningModule):
    def __init__(self, optimizer_spec_A, optimizer_spec_B, **kwargs):
        self.submodel_A = ModelA(...)
        self.submodel_B = ModelB(...)

        OptimizersHandler.__init__(self,
            MultiSpec(optimizer_spec_A, optimizer_spec_B),
            self.submodel_A, self.submodel_B
        )
        pl.LightningModule.__init__(self, **kwargs)
        ...
</code></pre>
<p>In this case, two optimizers will be automatically created and attached to their respective submodels.</p>
<p>Initialize the model.
Can take an input image of any number of channels (e.g. grayscale, RGB).</p></div>
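<p>One way to see why the base-class ordering matters is with plain Python stand-ins (no Lightning required; <code>LightningLike</code> and <code>OptimizersMixin</code> below are hypothetical sketches, not the real classes): the mixin's <code>configure_optimizers</code> must come first in the MRO so it takes precedence over the base class's placeholder.</p>

```python
class LightningLike:
    """Stand-in for pl.LightningModule: ships a placeholder method."""
    def configure_optimizers(self):
        raise NotImplementedError("configure_optimizers must be overridden")

class OptimizersMixin:
    """Stand-in for OptimizersHandler: provides the real method."""
    def configure_optimizers(self):
        return "my optimizer"

class GoodModel(OptimizersMixin, LightningLike):  # mixin first, as documented
    pass

class BadModel(LightningLike, OptimizersMixin):   # base first: placeholder wins
    pass

GoodModel().configure_optimizers()  # returns "my optimizer"
```

<p>With the reversed ordering, <code>BadModel().configure_optimizers()</code> resolves to the placeholder and raises.</p>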
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class MagicPoint(
    OptimizersHandler,
    StateDictRedirect,
    pl.LightningModule,
    HomographyAdaptation,
):
    def __init__(
        self,
        images_to_logits_fn,
        optimizer_spec: Spec = None,
        image_aug_transform: Union[Transform, None] = None,
        random_homographic_adaptation_kwargs: Union[Dict[str, Any], None] = None,
        **kwargs,
    ):
        &#34;&#34;&#34;
        Initialize the model.
        Can take an input image of any number of channels (e.g. grayscale, RGB).
        &#34;&#34;&#34;
        OptimizersHandler.__init__(self, optimizer_spec)
        pl.LightningModule.__init__(self, **kwargs)
        StateDictRedirect.__init__(self, images_to_logits_fn)
        HomographyAdaptation.__init__(
            self,
            random_homographic_adaptation_kwargs,
            self._get_scores,
            images_to_logits_fn._detection_threshold,
            images_to_logits_fn._nms_dist,
            images_to_logits_fn._border_dist,
        )

        self.flow = Flow(&#34;batch&#34;)
        self.flow.define_transition(
            &#34;checked_batch&#34;,
            self._check_batch,
            &#34;batch&#34;,
        )
        self.flow.define_transition(&#34;images&#34;, self._get_images, &#34;checked_batch&#34;)
        self.flow.define_transition(&#34;labels&#34;, self._get_labels, &#34;checked_batch&#34;)

        self._images_to_logits_fn = images_to_logits_fn
        self._batch_to_images_labels_fn = self.flow.with_outputs((&#34;images&#34;, &#34;labels&#34;))
        self._training_task = SupervisedKeypoint(
            batch_to_images_and_labels_fn=self._batch_to_images_labels_fn,
            images_to_logits_fn=images_to_logits_fn,
            image_aug_transform=image_aug_transform,
        )

        self._cell_size = 8

    @property
    def model(self):
        return self._images_to_logits_fn

    def _get_scores(self, images):
        images = images.to(self.device)
        return self._images_to_logits_fn.forward_flow(&#34;score&#34;, images)

    def _named_context_to_device(self, batch: NamedContext) -&gt; NamedContext:
        # send context tensors to model device
        def to_device(el):
            if isinstance(el, torch.Tensor):
                return el.to(self.device)
            elif isinstance(el, list):
                return [e.to(self.device) for e in el]
            elif isinstance(el, tuple):
                return tuple(e.to(self.device) for e in el)
            raise RuntimeError(f&#34;type {type(el)} not handled&#34;)

        # send data to model&#39;s device
        return batch.map(to_device)

    def _check_batch(self, batch: NamedContext):
        # check batch
        ensure_is_instance(batch, NamedContext)

        if self.training:
            batch.ensure_exists(&#34;image&#34;, &#34;label_map&#34;)
            assert batch[&#34;image&#34;].shape == batch[&#34;label_map&#34;].shape
        else:
            batch.ensure_exists(&#34;image&#34;)

        return self._named_context_to_device(batch)

    def predict_step(
        self,
        batch: NamedContext,
        batch_idx: Optional[int] = None,
        dataloader_idx: Optional[int] = None,
    ) -&gt; Any:
        images = self.flow.with_outputs(&#34;images&#34;)(batch)
        points = self._images_to_logits_fn.flow.with_outputs(&#34;positions&#34;)(images)
        return batch.add(&#34;points&#34;, points)

    def test_step(
        self, batch: NamedContext, batch_idx: int, dataloader_idx: Optional[int] = None
    ) -&gt; NamedContext:
        images, labels = self._batch_to_images_labels_fn(batch)
        (
            class_probs,
            probs_map,
            nms_pred_positions_with_prob,
        ) = self._images_to_logits_fn.flow.with_outputs(
            (
                &#34;probability&#34;,
                &#34;score&#34;,
                &#34;positions&#34;,
            )
        )(
            images
        )

        class_label = torch.argmax(labels, dim=1)

        pred_positions_with_prob = prob_map_to_positions_with_prob(
            probs_map, threshold=1e-4
        )

        batch = batch.add(&#34;one_hot_class_labels&#34;, labels)
        batch = batch.add(&#34;class_label&#34;, class_label)
        batch = batch.add(&#34;class_probs&#34;, class_probs)
        batch = batch.add(&#34;probs_map&#34;, probs_map)
        batch = batch.add(&#34;pred_positions_with_prob&#34;, pred_positions_with_prob)
        batch = batch.add(&#34;nms_pred_positions_with_prob&#34;, nms_pred_positions_with_prob)

        batch = self._named_context_to_device(batch)

        return batch

    def _get_images(self, batch: NamedContext):
        assert isinstance(batch[&#34;image&#34;], torch.Tensor)

        # check data shape
        shape = batch[&#34;image&#34;].shape
        assert len(shape) == 4
        assert shape[1] == 1
        assert shape[2] % self._cell_size == 0
        assert shape[3] % self._cell_size == 0

        return batch[&#34;image&#34;]

    def _get_labels(
        self,
        batch: NamedContext,
    ):
        assert isinstance(batch[&#34;label_map&#34;], torch.Tensor)

    # get labels by adding dustbin and converting cells to depth
        return space_to_depth(batch[&#34;label_map&#34;], self._cell_size)

    def training_step(self, batch, batch_idx):
        loss = self._training_task.batch_to_training_loss_fn(batch)
        self.log(&#34;train.loss&#34;, loss)
        return loss

    def validation_step(self, batch, batch_idx):
        loss = self._training_task.batch_to_validation_loss_fn(batch)
        self.log(&#34;val.loss&#34;, loss)
        return loss</code></pre>
</details>
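<p><code>_get_labels</code> relies on a space-to-depth rearrangement: each non-overlapping <code>cell_size x cell_size</code> block of the label map becomes a depth channel. A minimal NumPy sketch, assuming the silk helper behaves like the standard space-to-depth op (the real <code>space_to_depth</code> may differ in details such as dustbin handling):</p>

```python
import numpy as np

def space_to_depth(label_map, cell_size):
    """Rearrange (B, 1, H, W) into (B, cell_size**2, H//cell_size, W//cell_size)."""
    b, c, h, w = label_map.shape
    assert c == 1 and h % cell_size == 0 and w % cell_size == 0
    x = label_map.reshape(b, h // cell_size, cell_size, w // cell_size, cell_size)
    x = x.transpose(0, 2, 4, 1, 3)  # (B, cell, cell, H//cell, W//cell)
    return x.reshape(b, cell_size ** 2, h // cell_size, w // cell_size)

labels = np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4)
cells = space_to_depth(labels, 2)  # shape (1, 4, 2, 2)
# channel k holds the pixel at offset (k // 2, k % 2) within each 2x2 cell
```

<p>This is the inverse of PyTorch's <code>pixel_shuffle</code>; with <code>cell_size = 8</code> each 8x8 cell of the label map becomes a 64-way classification target per output position.</p>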
<h3>Ancestors</h3>
<ul class="hlist">
<li><a title="silk.models.abstract.OptimizersHandler" href="abstract.html#silk.models.abstract.OptimizersHandler">OptimizersHandler</a></li>
<li><a title="silk.models.abstract.StateDictRedirect" href="abstract.html#silk.models.abstract.StateDictRedirect">StateDictRedirect</a></li>
<li>pytorch_lightning.core.lightning.LightningModule</li>
<li>pytorch_lightning.core.mixins.device_dtype_mixin.DeviceDtypeModuleMixin</li>
<li>pytorch_lightning.core.mixins.hparams_mixin.HyperparametersMixin</li>
<li>pytorch_lightning.core.saving.ModelIO</li>
<li>pytorch_lightning.core.hooks.ModelHooks</li>
<li>pytorch_lightning.core.hooks.DataHooks</li>
<li>pytorch_lightning.core.hooks.CheckpointHooks</li>
<li>torch.nn.modules.module.Module</li>
<li><a title="silk.models.magicpoint.HomographyAdaptation" href="#silk.models.magicpoint.HomographyAdaptation">HomographyAdaptation</a></li>
</ul>
<h3>Class variables</h3>
<dl>
<dt id="silk.models.magicpoint.MagicPoint.dump_patches"><code class="name">var <span class="ident">dump_patches</span> : bool</code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt id="silk.models.magicpoint.MagicPoint.training"><code class="name">var <span class="ident">training</span> : bool</code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
<h3>Instance variables</h3>
<dl>
<dt id="silk.models.magicpoint.MagicPoint.model"><code class="name">var <span class="ident">model</span></code></dt>
<dd>
<div class="desc"></div>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">@property
def model(self):
    return self._images_to_logits_fn</code></pre>
</details>
</dd>
</dl>
<h3>Methods</h3>
<dl>
<dt id="silk.models.magicpoint.MagicPoint.forward"><code class="name flex">
<span>def <span class="ident">forward</span></span>(<span>self, *args, **kwargs) ‑> Any</span>
</code></dt>
<dd>
<div class="desc"><p>Same as :meth:<code>torch.nn.Module.forward()</code>.</p>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>*args</code></strong></dt>
<dd>Whatever you decide to pass into the forward method.</dd>
<dt><strong><code>**kwargs</code></strong></dt>
<dd>Keyword arguments are also possible.</dd>
</dl>
<h2 id="return">Return</h2>
<p>Your model's output</p></div>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def forward(self, *args, **kwargs) -&gt; Any:
    r&#34;&#34;&#34;
    Same as :meth:`torch.nn.Module.forward()`.

    Args:
        *args: Whatever you decide to pass into the forward method.
        **kwargs: Keyword arguments are also possible.

    Return:
        Your model&#39;s output
    &#34;&#34;&#34;
    return super().forward(*args, **kwargs)</code></pre>
</details>
</dd>
<dt id="silk.models.magicpoint.MagicPoint.predict_step"><code class="name flex">
<span>def <span class="ident">predict_step</span></span>(<span>self, batch: <a title="silk.transforms.abstract.NamedContext" href="../transforms/abstract.html#silk.transforms.abstract.NamedContext">NamedContext</a>, batch_idx: Optional[int] = None, dataloader_idx: Optional[int] = None) ‑> Any</span>
</code></dt>
<dd>
<div class="desc"><p>Step function called during :meth:<code>~pytorch_lightning.trainer.trainer.Trainer.predict</code>. By default, it
calls :meth:<code>~pytorch_lightning.core.lightning.LightningModule.forward</code>. Override to add any processing
logic.</p>
<p>The :meth:<code>~pytorch_lightning.core.lightning.LightningModule.predict_step</code> is used
to scale inference on multi-devices.</p>
<p>To prevent an OOM error, it is possible to use :class:<code>~pytorch_lightning.callbacks.BasePredictionWriter</code>
callback to write the predictions to disk or database after each batch or on epoch end.</p>
<p>The :class:<code>~pytorch_lightning.callbacks.BasePredictionWriter</code> should be used while using a spawn-based
accelerator, such as <code>Trainer(strategy="ddp_spawn")</code>
or training on 8 TPU cores with <code>Trainer(tpu_cores=8)</code>, as predictions won't be returned otherwise.</p>
<p>Example:</p>
<pre><code>class MyModel(LightningModule):

    def predict_step(self, batch, batch_idx, dataloader_idx):
        return self(batch)

dm = ...
model = MyModel()
trainer = Trainer(gpus=2)
predictions = trainer.predict(model, dm)
</code></pre>
<h2 id="args">Args</h2>
<dl>
<dt><strong><code>batch</code></strong></dt>
<dd>Current batch</dd>
<dt><strong><code>batch_idx</code></strong></dt>
<dd>Index of current batch</dd>
<dt><strong><code>dataloader_idx</code></strong></dt>
<dd>Index of the current dataloader</dd>
</dl>
<h2 id="return">Return</h2>
<p>Predicted output</p></div>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def predict_step(
    self,
    batch: NamedContext,
    batch_idx: Optional[int] = None,
    dataloader_idx: Optional[int] = None,
) -&gt; Any:
    images = self.flow.with_outputs(&#34;images&#34;)(batch)
    points = self._images_to_logits_fn.flow.with_outputs(&#34;positions&#34;)(images)
    return batch.add(&#34;points&#34;, points)</code></pre>
</details>
</dd>
<dt id="silk.models.magicpoint.MagicPoint.test_step"><code class="name flex">
<span>def <span class="ident">test_step</span></span>(<span>self, batch: <a title="silk.transforms.abstract.NamedContext" href="../transforms/abstract.html#silk.transforms.abstract.NamedContext">NamedContext</a>, batch_idx: int, dataloader_idx: Optional[int] = None) ‑> <a title="silk.transforms.abstract.NamedContext" href="../transforms/abstract.html#silk.transforms.abstract.NamedContext">NamedContext</a></span>
</code></dt>
<dd>
<div class="desc"><p>Operates on a single batch of data from the test set.
In this step you'd normally generate examples or calculate anything of interest
such as accuracy.</p>
<pre><code># the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)
</code></pre>
<h2 id="args">Args</h2>
<dl>
<dt>batch (:class:<code>~torch.Tensor</code> | (:class:<code>~torch.Tensor</code>, &hellip;) | [:class:<code>~torch.Tensor</code>, &hellip;]):</dt>
<dd>The output of your :class:<code>~torch.utils.data.DataLoader</code>. A tensor, tuple or list.</dd>
<dt><strong><code>batch_idx</code></strong> :&ensp;<code>int</code></dt>
<dd>The index of this batch.</dd>
<dt><strong><code>dataloader_idx</code></strong> :&ensp;<code>int</code></dt>
<dd>The index of the dataloader that produced this batch
(only if multiple test dataloaders used).</dd>
</dl>
<h2 id="return">Return</h2>
<p>Any of:</p>
<ul>
<li>Any object or value</li>
<li><code>None</code> - Testing will skip to the next batch</li>
</ul>
<pre><code># if you have one test dataloader:
def test_step(self, batch, batch_idx):
    ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx):
    ...
</code></pre>
<p>Examples:</p>
<pre><code># CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})
</code></pre>
<p>If you pass in multiple test dataloaders, :meth:<code>test_step</code> will have an additional argument.</p>
<pre><code># CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx):
    # dataloader_idx tells you which dataset this is.
    ...
</code></pre>
<h2 id="note">Note</h2>
<p>If you don't need to test, you don't need to implement this method.</p>
<h2 id="note_1">Note</h2>
<p>When the :meth:<code>test_step</code> is called, the model has been put in eval mode and
PyTorch gradients have been disabled. At the end of the test epoch, the model goes back
to training mode and gradients are enabled.</p></div>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def test_step(
    self, batch: NamedContext, batch_idx: int, dataloader_idx: Optional[int] = None
) -&gt; NamedContext:
    images, labels = self._batch_to_images_labels_fn(batch)
    (
        class_probs,
        probs_map,
        nms_pred_positions_with_prob,
    ) = self._images_to_logits_fn.flow.with_outputs(
        (
            &#34;probability&#34;,
            &#34;score&#34;,
            &#34;positions&#34;,
        )
    )(
        images
    )

    class_label = torch.argmax(labels, dim=1)

    pred_positions_with_prob = prob_map_to_positions_with_prob(
        probs_map, threshold=1e-4
    )

    batch = batch.add(&#34;one_hot_class_labels&#34;, labels)
    batch = batch.add(&#34;class_label&#34;, class_label)
    batch = batch.add(&#34;class_probs&#34;, class_probs)
    batch = batch.add(&#34;probs_map&#34;, probs_map)
    batch = batch.add(&#34;pred_positions_with_prob&#34;, pred_positions_with_prob)
    batch = batch.add(&#34;nms_pred_positions_with_prob&#34;, nms_pred_positions_with_prob)

    batch = self._named_context_to_device(batch)

    return batch</code></pre>
</details>
</dd>
<dt id="silk.models.magicpoint.MagicPoint.training_step"><code class="name flex">
<span>def <span class="ident">training_step</span></span>(<span>self, batch, batch_idx)</span>
</code></dt>
<dd>
<div class="desc"><p>Here you compute and return the training loss and some additional metrics, e.g. for
the progress bar or logger.</p>
<h2 id="args">Args</h2>
<dl>
<dt>batch (:class:<code>~torch.Tensor</code> | (:class:<code>~torch.Tensor</code>, &hellip;) | [:class:<code>~torch.Tensor</code>, &hellip;]):</dt>
<dd>The output of your :class:<code>~torch.utils.data.DataLoader</code>. A tensor, tuple or list.</dd>
<dt><strong><code>batch_idx</code></strong> :&ensp;<code>int</code></dt>
<dd>Integer displaying index of this batch.</dd>
<dt><strong><code>optimizer_idx</code></strong> :&ensp;<code>int</code></dt>
<dd>When using multiple optimizers, this argument will also be present.</dd>
<dt><strong><code>hiddens</code></strong> :&ensp;<code>Any</code></dt>
<dd>Passed in if :paramref:<code>~pytorch_lightning.core.lightning.LightningModule.truncated_bptt_steps</code> &gt; 0.</dd>
</dl>
<h2 id="return">Return</h2>
<p>Any of:</p>
<ul>
<li>:class:<code>~torch.Tensor</code> - The loss tensor</li>
<li><code>dict</code> - A dictionary. Can include any keys, but must include the key <code>'loss'</code></li>
<li><code>None</code> - Training will skip to the next batch. This is only for automatic optimization.
This is not supported for multi-GPU, TPU, IPU, or DeepSpeed.</li>
</ul>
<p>In this step you'd normally do the forward pass and calculate the loss for a batch.
You can also do fancier things like multiple forward passes or something model specific.</p>
<p>Example:</p>
<pre><code>def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss
</code></pre>
<p>If you define multiple optimizers, this step will be called with an additional
<code>optimizer_idx</code> parameter.</p>
<pre><code># Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 0:
        # do training_step with encoder
        ...
    if optimizer_idx == 1:
        # do training_step with decoder
        ...
</code></pre>
<p>If you add truncated back propagation through time you will also get an additional
argument with the hidden states of the previous step.</p>
<pre><code># Truncated back-propagation through time
def training_step(self, batch, batch_idx, hiddens):
    # hiddens are the hidden states from the previous truncated backprop step
    out, hiddens = self.lstm(data, hiddens)
    loss = ...
    return {"loss": loss, "hiddens": hiddens}
</code></pre>
<h2 id="note">Note</h2>
<p>The loss value shown in the progress bar is smoothed (averaged) over the last values,
so it differs from the actual loss returned in train/validation step.</p></div>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def training_step(self, batch, batch_idx):
    loss = self._training_task.batch_to_training_loss_fn(batch)
    self.log(&#34;train.loss&#34;, loss)
    return loss</code></pre>
</details>
</dd>
<dt id="silk.models.magicpoint.MagicPoint.validation_step"><code class="name flex">
<span>def <span class="ident">validation_step</span></span>(<span>self, batch, batch_idx)</span>
</code></dt>
<dd>
<div class="desc"><p>Operates on a single batch of data from the validation set.
In this step you might generate examples or calculate anything of interest like accuracy.</p>
<pre><code># the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
</code></pre>
<h2 id="args">Args</h2>
<dl>
<dt>batch (:class:<code>~torch.Tensor</code> | (:class:<code>~torch.Tensor</code>, &hellip;) | [:class:<code>~torch.Tensor</code>, &hellip;]):</dt>
<dd>The output of your :class:<code>~torch.utils.data.DataLoader</code>. A tensor, tuple or list.</dd>
<dt><strong><code>batch_idx</code></strong> :&ensp;<code>int</code></dt>
<dd>The index of this batch</dd>
<dt><strong><code>dataloader_idx</code></strong> :&ensp;<code>int</code></dt>
<dd>The index of the dataloader that produced this batch
(only if multiple val dataloaders used)</dd>
</dl>
<h2 id="return">Return</h2>
<ul>
<li>Any object or value</li>
<li><code>None</code> - Validation will skip to the next batch</li>
</ul>
<pre><code># pseudocode of order
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    if defined("validation_step_end"):
        out = validation_step_end(out)
    val_outs.append(out)
val_outs = validation_epoch_end(val_outs)
</code></pre>
<pre><code># if you have one val dataloader:
def validation_step(self, batch, batch_idx):
    ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx):
    ...
</code></pre>
<p>Examples:</p>
<pre><code># CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})
</code></pre>
<p>If you pass in multiple val dataloaders, :meth:<code>validation_step</code> will have an additional argument.</p>
<pre><code># CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx):
    # dataloader_idx tells you which dataset this is.
    ...
</code></pre>
<h2 id="note">Note</h2>
<p>If you don't need to validate, you don't need to implement this method.</p>
<h2 id="note_1">Note</h2>
<p>When the :meth:<code>validation_step</code> is called, the model has been put in eval mode
and PyTorch gradients have been disabled. At the end of validation,
the model goes back to training mode and gradients are enabled.</p></div>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def validation_step(self, batch, batch_idx):
    loss = self._training_task.batch_to_validation_loss_fn(batch)
    self.log(&#34;val.loss&#34;, loss)
    return loss</code></pre>
</details>
</dd>
</dl>
<h3>Inherited members</h3>
<ul class="hlist">
<li><code><b><a title="silk.models.magicpoint.HomographyAdaptation" href="#silk.models.magicpoint.HomographyAdaptation">HomographyAdaptation</a></b></code>:
<ul class="hlist">
<li><code><a title="silk.models.magicpoint.HomographyAdaptation.homographic_adaptation_prediction" href="#silk.models.magicpoint.HomographyAdaptation.homographic_adaptation_prediction">homographic_adaptation_prediction</a></code></li>
</ul>
</li>
</ul>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<h1>Index</h1>
<div class="toc">
<ul>
<li><a href="#checked-parity">Checked Parity</a><ul>
<li><a href="#with-paper-httpsarxivorgpdf171207629pdf">With Paper : https://arxiv.org/pdf/1712.07629.pdf</a><ul>
<li><a href="#optimizer-page-6">Optimizer (page 6)</a></li>
<li><a href="#training-page-6">Training (page 6)</a></li>
<li><a href="#metrics-page-4">Metrics (page 4)</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="silk.models" href="index.html">silk.models</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="silk.models.magicpoint.HomographyAdaptation" href="#silk.models.magicpoint.HomographyAdaptation">HomographyAdaptation</a></code></h4>
<ul class="">
<li><code><a title="silk.models.magicpoint.HomographyAdaptation.homographic_adaptation_prediction" href="#silk.models.magicpoint.HomographyAdaptation.homographic_adaptation_prediction">homographic_adaptation_prediction</a></code></li>
</ul>
</li>
<li>
<h4><code><a title="silk.models.magicpoint.MagicPoint" href="#silk.models.magicpoint.MagicPoint">MagicPoint</a></code></h4>
<ul class="two-column">
<li><code><a title="silk.models.magicpoint.MagicPoint.dump_patches" href="#silk.models.magicpoint.MagicPoint.dump_patches">dump_patches</a></code></li>
<li><code><a title="silk.models.magicpoint.MagicPoint.forward" href="#silk.models.magicpoint.MagicPoint.forward">forward</a></code></li>
<li><code><a title="silk.models.magicpoint.MagicPoint.model" href="#silk.models.magicpoint.MagicPoint.model">model</a></code></li>
<li><code><a title="silk.models.magicpoint.MagicPoint.predict_step" href="#silk.models.magicpoint.MagicPoint.predict_step">predict_step</a></code></li>
<li><code><a title="silk.models.magicpoint.MagicPoint.test_step" href="#silk.models.magicpoint.MagicPoint.test_step">test_step</a></code></li>
<li><code><a title="silk.models.magicpoint.MagicPoint.training" href="#silk.models.magicpoint.MagicPoint.training">training</a></code></li>
<li><code><a title="silk.models.magicpoint.MagicPoint.training_step" href="#silk.models.magicpoint.MagicPoint.training_step">training_step</a></code></li>
<li><code><a title="silk.models.magicpoint.MagicPoint.validation_step" href="#silk.models.magicpoint.MagicPoint.validation_step">validation_step</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.10.0</a>.</p>
</footer>
</body>
</html>