In_silico_perturbation: out of memory error + GPU size

#338
by melanieblr - opened

Hello,
I am trying to run the in_silico_perturbation code on the human_dcm_hcm_nf.dataset dataset with these parameters (using the 6-layer model):

from geneformer import InSilicoPerturber

isp = InSilicoPerturber(perturb_type="delete",
                        perturb_rank_shift=None,
                        genes_to_perturb="all",
                        combos=0,
                        anchor_gene=None,
                        model_type="CellClassifier",
                        num_classes=3,
                        emb_mode="cell",
                        cell_emb_style="mean_pool",
                        filter_data=filter_data_dict,
                        cell_states_to_model=cell_states_to_model,
                        state_embs_dict=state_embs_dict,
                        max_ncells=2000,
                        emb_layer=0,
                        forward_batch_size=1,
                        nproc=3)

I only have a single 16 GB GPU, and even after reducing the batch size to 1, this error still occurs:

OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacity of 15.57 GiB of which 19.44 MiB is free. Process 831440 has 160.00 MiB memory in use. Process 3121141 has 13.58 GiB memory in use. Process 3141803 has 1.81 GiB memory in use. Of the allocated memory 1.68 GiB is allocated by PyTorch, and 4.65 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.

Do you know if there is another way to fix this error?
Additionally, I have the option of moving to a GPU with more memory. Would upgrading to a 32 GB GPU be enough to resolve this problem?
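
For reference, the fragmentation workaround mentioned at the end of the error message can be applied by setting the allocator config before torch initializes CUDA; a minimal sketch of how that could look (not specific to Geneformer, and not guaranteed to resolve this particular case):

import os

# Must be set before the first CUDA allocation, i.e. in practice before
# importing torch in the script that runs the perturbation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch
from geneformer import InSilicoPerturber  # then configure isp as above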

Thank you for your help.

Thank you for your question! If the out-of-memory issue occurs at some point after the first cell, you could try setting the memory reset threshold to fewer than 1000 cells, which is the value currently set at line 856 of the in_silico_perturber.py code. Otherwise, you could consider increasing your memory capacity. The smallest GPU memory we have tested with is 32 GB, which is sufficient.
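
For illustration only (this is not the actual code at line 856; the names and the threshold value below are placeholders), lowering the memory reset threshold amounts to freeing cached CUDA memory more often than every 1000 cells, roughly along these lines:

import torch

# Hypothetical sketch of a lower memory-reset threshold; smaller value
# means cached CUDA blocks are released more frequently.
MEMORY_RESET_THRESHOLD = 100

def maybe_reset_memory(cells_processed: int) -> None:
    if cells_processed % MEMORY_RESET_THRESHOLD == 0:
        torch.cuda.empty_cache()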

ctheodoris changed discussion status to closed
