John6666 committed
Commit 5bd44d9
1 Parent(s): c2de824

Update lisa_on_cuda/utils/app_helpers.py


The **inference_decorator** was not passed to the model loading function, which made it look as if the loading function had no GPU available. It still worked before this change.
There are other ways to move the models with **.to("cuda")** in the inference function after they have been loaded on the CPU; where that is possible, it is the lighter approach.
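
As a rough illustration of the two approaches, here is a minimal sketch. The body of `get_model` and the helper `make_inference_fn` are hypothetical; only the `inference_decorator` keyword comes from this change, and on a ZeroGPU Space the decorator would be something like `spaces.GPU`:

```python
import torch


def get_model(args_to_parse, device_map="auto", device="cuda",
              inference_decorator=None):
    """Hypothetical body; only the keyword arguments mirror the diff."""

    def _load():
        model = torch.nn.Linear(8, 8)  # placeholder for the real LISA model
        return model.to(device)

    # Run the GPU-touching load step through the decorator when one is
    # supplied; without it, the loader runs as if there were no GPU.
    return inference_decorator(_load)() if inference_decorator else _load()


# Alternative from the message: load on the CPU, then call .to("cuda")
# inside the (decorated) inference function.
def make_inference_fn(model, inference_decorator=None):
    def _infer(x):
        return model.to("cuda")(x.to("cuda"))

    return inference_decorator(_infer) if inference_decorator else _infer
```

Wrapping the load step keeps GPU placement inside the decorator's context, while the CPU-first variant defers `.to("cuda")` to the first decorated inference call, which is presumably what the message means by "lighter".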

Files changed (1)
  1. lisa_on_cuda/utils/app_helpers.py +1 -1
lisa_on_cuda/utils/app_helpers.py CHANGED
```diff
@@ -309,7 +309,7 @@ def get_inference_model_by_args(
     if internal_logger0 is None:
         internal_logger0 = app_logger
     internal_logger0.info(f"args_to_parse:{args_to_parse}, creating model...")
-    model, clip_image_processor, tokenizer, transform = get_model(args_to_parse, device_map=device_map, device=device)
+    model, clip_image_processor, tokenizer, transform = get_model(args_to_parse, device_map=device_map, device=device, inference_decorator=inference_decorator)
     internal_logger0.info("created model, preparing inference function")
     no_seg_out = placeholders["no_seg_out"]
 
```