Memory leak in MiDaS depthmap computation?


I'm using MiDaS to compute the depthmap of a (large) list of images, much like in the demo.

My issue is that the RAM usage increases as I make more and more inferences.

Here is the full code.

#!venv/bin/python3

from pathlib import Path
import psutil

import numpy as np
import torch
import cv2


def make_model():
    model_type = "DPT_BEiT_L_512"       # MiDaS v3.1 - Large
    midas = torch.hub.load("intel-isl/MiDaS", model_type)

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    midas.to(device)
    midas.eval()

    midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

    # dpt_transform resizes keeping the aspect ratio (to multiples of 32),
    # so the network input shape varies from image to image
    transform = midas_transforms.dpt_transform

    return {"transform": transform,
            "device": device,
            "midas": midas
            }


def get_depthmap(cv_image, model):
    """Make the inference."""
    transform = model['transform']
    device = model["device"]
    midas = model["midas"]

    input_batch = transform(cv_image).to(device)

    with torch.no_grad():
        prediction = midas(input_batch)

        # upsample the prediction back to the original image resolution
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=cv_image.shape[:2],
            mode="bilinear",
            align_corners=False,
        ).squeeze()

    output = prediction.cpu().numpy()
    # scale to the 0-255 range for an 8-bit grayscale depthmap
    formatted = (output * 255 / np.max(output)).astype('uint8')

    return formatted


# Create the MiDaS "DPT_BEiT_L_512" model (MiDaS v3.1 - Large)
model = make_model()

image_dir = Path('.') / "all_images"
for image_file in image_dir.iterdir():
    ram_usage = psutil.virtual_memory()[2]  # field 2 is the percentage of RAM in use
    print("RAM used (%):", ram_usage)
    cv_image = cv2.cvtColor(cv2.imread(str(image_file)), cv2.COLOR_BGR2RGB)  # MiDaS expects RGB
    _ = get_depthmap(cv_image, model)

In short:

  • Create the model "DPT_BEiT_L_512" (MiDaS v3.1 - Large)
  • Define the inference function get_depthmap
  • Loop over the images in the directory all_images
  • For each image: cv2.imread (converted to RGB)
  • Compute the depthmap (the result is not kept in memory)

I see that the RAM usage keeps rising, iteration after iteration.

Variation 1

Instead of looping over a directory of images, I always run inference on the same image.
Result: no leak.
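
Roughly, the loop for this variation looked like this (a sketch; "fixed.png" is a placeholder for any one image from the directory):

# Variation 1 (sketch): infer the same image on every iteration.
cv_image = cv2.imread("fixed.png")  # "fixed.png" is a placeholder filename
for _ in range(1000):
    ram_usage = psutil.virtual_memory()[2]  # percentage of RAM in use
    print("RAM used (%):", ram_usage)
    _ = get_depthmap(cv_image, model)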

Variation 2

Instead of looping over a directory of images, I always run inference on the same image, but with fresh random noise added each time.
Result: no leak.
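
The noisy variant, roughly (a sketch; the noise amplitude is arbitrary):

# Variation 2 (sketch): same image plus fresh random noise on every iteration.
cv_image = cv2.imread("fixed.png")  # "fixed.png" is a placeholder filename
for _ in range(1000):
    noise = np.random.randint(0, 10, size=cv_image.shape, dtype=np.uint8)
    noisy = cv2.add(cv_image, noise)  # saturating add keeps the result in uint8
    _ = get_depthmap(noisy, model)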

Variation 3

I tried different combinations of calls such as

from torch.optim import SGD

optimizer = SGD(model["midas"].parameters(), lr=0.01)
optimizer.zero_grad()
torch.cuda.empty_cache()

with no effect.
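
For context, here is roughly where those calls were placed (a sketch of one of the combinations; the optimizer exists only so that zero_grad() can be called):

# One combination tried (sketch): cleanup after every inference.
for image_file in image_dir.iterdir():
    cv_image = cv2.cvtColor(cv2.imread(str(image_file)), cv2.COLOR_BGR2RGB)
    _ = get_depthmap(cv_image, model)
    optimizer.zero_grad()     # no gradients should accumulate under torch.no_grad() anyway
    torch.cuda.empty_cache()  # releases cached GPU memory, but does not touch host RAM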

Observation

Using a memory profiler, it seems that the most RAM-intensive line is the with torch.no_grad(): block.
But I have no idea how to exploit that information.
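
One way to dig deeper might be to diff allocation snapshots between iterations (a sketch using the standard-library tracemalloc; note it only tracks Python-level allocations, so memory allocated inside PyTorch's C++ core will not show up):

import tracemalloc

# Sketch: report the allocation sites that grew the most since the last iteration.
tracemalloc.start()
snapshot = tracemalloc.take_snapshot()
for image_file in image_dir.iterdir():
    cv_image = cv2.cvtColor(cv2.imread(str(image_file)), cv2.COLOR_BGR2RGB)
    _ = get_depthmap(cv_image, model)
    new_snapshot = tracemalloc.take_snapshot()
    for stat in new_snapshot.compare_to(snapshot, "lineno")[:5]:
        print(stat)
    snapshot = new_snapshot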

QUESTIONS:

  • Where does the memory leak come from?
  • Do I have to "reset" something between two inferences?
