question | context | answer | id | url
---|---|---|---|---
Can training step return dictionary? | My training_step returns {"loss": loss, "num": len(batch)}, so I could calculate the mean loss at the end of the epoch. But now I get a warning that my training_step returns None, and the loss doesn't display in the progress bar. How can I return a dict from training_step and display the loss in the progress bar at the same time? | Can training step return dictionary?
Yes, check the doc of LightningModule.training_step.
How can I return dict from training_step and display loss in the progress bar at the same time?
Just return the dict like you do now and use self.log("loss", loss, prog_bar=True) to display the loss in the progress bar.
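A minimal sketch of that combination (assuming a standard LightningModule; compute_loss is a hypothetical helper):

def training_step(self, batch, batch_idx):
    loss = self.compute_loss(batch)           # hypothetical helper computing the loss
    self.log("loss", loss, prog_bar=True)     # shows the value in the progress bar
    return {"loss": loss, "num": len(batch)}  # extra keys can be consumed later, e.g. in training_epoch_end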
so I could calculate mean loss at the end of epoch
BTW, if you just want the mean loss, I think pytorch_lightning has already done that for you and you don't need to modify training_step_end (not sure). | MDEwOkRpc2N1c3Npb24zNDMzMTY2 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8153#discussioncomment-928490 |
Better understanding how data is loaded in the datamodule setup method for a multi-GPU setting in NLP | I want to better understand the setup and prepare_data methods in a multi-GPU scenario in the context of NLP and text processing.
I have prepared a DataModule which processes a JSON-lines file with pairs of sentences for a translation task. The file contains 10M lines.
In prepare_data() I open the data file, read it into memory, do some basic filtering (remove too-long sentences and do some sorting by length in order to group similar-length sentences together), then I write it to another file (filtered_data.json).
Next, in the setup() method I read filtered_data.json and split it into train and valid.
I can perform the split deterministically so the train and valid splits always have the same elements, or I can split randomly, in which case each GPU process will have different train and valid sets.
When using it in multi-GPU mode (2 GPUs), each process will have its own copy of the train and valid set (am I right?). Which approach is better in terms of data utilization, random or deterministic?
I do not fully understand how the distributed DataLoader handles these two approaches. Could someone explain it in detail?
If data are loaded deterministically, then all GPU processes, in particular the forward and backward passes, will return the same values (for GPU 1 and 2); is that efficient? How are gradients merged and how are the network weight updates performed?
Or maybe the second (random split) approach is better, because gradients computed on different samples and merged from 2 GPUs will result in a better estimation of the true gradient. | I have prepared a DataModule which processes a JSON-lines file with pairs of sentences for a translation task. The file contains 10M lines.
In prepare_data I open the data file, read it into memory, do some basic filtering (remove too-long sentences and do some sorting by length in order to group similar-length sentences together), then I write it to another file (filtered_data.json).
Do all of that either offline in a different script, or do it in the prepare_data hook.
Next in the setup method I read filtered_data.json and split it to train and valid.
Sounds good. Each GPU/node will run the same code, so you will have the same train and val split in all of them (initially). Don't split the data differently for each GPU; that part will be done by the DistributedSampler [1].
I do not fully understand how the distributed DataLoader handles these two approaches. Could someone explain it in detail?
Lightning takes your DataLoader and adds a DistributedSampler. The DistributedSampler knows which GPU it is on and will sample one portion of your data on one GPU and another portion on the other GPU. Each GPU sees a different portion of train and a different portion of val.
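Roughly what Lightning injects under the hood (a simplified sketch, not the exact internals; train_dataset stands for whatever dataset you build in setup()):

from torch.utils.data import DataLoader, DistributedSampler

# every process builds the same dataset, but the sampler selects a disjoint shard of it
sampler = DistributedSampler(train_dataset, shuffle=True)  # rank/world size are read from the process group
train_loader = DataLoader(train_dataset, batch_size=64, sampler=sampler)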
If data are loaded deterministically, then all GPU processes, in particular the forward and backward passes, will return the same values (for GPU 1 and 2); is that efficient? How are gradients merged and how are the network weight updates performed?
As explained above, the dataloader will automatically return different samples on each GPU. Each GPU has the same network weights and uses different data to compute gradients; the gradients are then averaged, so each GPU applies the same update and starts with the same weights for the next forward/backward [2].
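Conceptually, the gradient synchronization looks roughly like this (a simplified illustration of what DDP already does for you, not literal PyTorch/Lightning internals):

import torch.distributed as dist

for p in model.parameters():
    if p.grad is not None:
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)  # sum the gradients from all GPUs
        p.grad /= dist.get_world_size()                # ... and average them
optimizer.step()  # every rank now applies the identical update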
Or maybe the second (random split) approach is better, because gradients computed on different samples and merged from 2 GPUs will result in a better estimation of the true gradient.
Yes, again this is automatically done for you.
References:
[1] DistributedSampler
[2] Distributed Training in PyTorch
[3] Multi-GPU training in PyTorch | MDEwOkRpc2N1c3Npb24zMzM0MTk4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7186#discussioncomment-654431 |
return_result role in training_batch_loop.py | Hello,
I have been trying to debug OOM after a few iterations on my model.
After investigation I saw that in loops/batch/training_batch_loop.py there is this piece of code (L235):
result = self.training_step_and_backward(split_batch, batch_idx, opt_idx, optimizer, hiddens)
if result is not None:
    return_result.update(result)
return return_result.loss
What I see is that return_result will keep the computation graph, as it contains the loss. So I wonder what the role of this variable is? Also, where is the graph released? I could not go any further than "_training_step_and_backward_closure". I don't understand why my model runs fine for a few iterations and then there is some increase in memory. | Hello Justus!
Indeed I have seen during my investigations that master changed quite a lot for that part of the code.
For the specific concern that I described see:
#9343 and
#9336
pull request 9336 seems to address exactly the issue that I was referring to.
Many thanks to the team for the great responsiveness and great work!
Julien | MDEwOkRpc2N1c3Npb24zNTU2MTc0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9332#discussioncomment-1289737 |
Using callbacks for datamodule setup preprocessing logic? | I have some preprocessing logic to run during the datamodule setup process, but only in certain situations (mostly to try out different preprocessing steps while experimenting, but at other times it's due to the model I am using with the datamodule).
Is there a way to specify a set of data preprocessing steps to perform using callbacks? Reading the documentation, I could not find the correct hook to use. | Dear @brijow,
I wonder if something like this could fit your use case:
class MyDataModule(LightningDataModule):
    def __init__(self):
        super().__init__()
        self._processed_train_dataset = None

    def setup(self, stage=None):
        self.train_dataset = ...

    @property
    def processed_train_dataset(self):
        return self._processed_train_dataset or self.train_dataset

    @processed_train_dataset.setter
    def processed_train_dataset(self, processed_train_dataset):
        self._processed_train_dataset = processed_train_dataset

    def train_dataloader(self):
        return DataLoader(self.processed_train_dataset)

class Preprocessing1(Callback):
    def preprocess_function(self, dataset):
        # your preprocessing logic
        ...

    def on_train_start(self, trainer, pl_module):
        # apply processing
        trainer.datamodule.processed_train_dataset = self.preprocess_function(trainer.datamodule.train_dataset)
        # force dataloader reload
        # trainer.reset_train_dataloader(pl_module)

trainer = Trainer(callbacks=Preprocessing1())
trainer.fit(model, dm) | MDEwOkRpc2N1c3Npb24zNDkwOTk3 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8650#discussioncomment-1124276 |
changing val_check_interval from 0.01 to 0.005 changes number of steps in an epoch | Changing val_check_interval from 0.01 to 0.005 changed the number of steps in an epoch from 2k to 3k for one of my experiments. Wanted to know if that is expected behavior. | The number of steps in an epoch that is shown in the progress bar is both the number of training steps AND the number of validation steps. Since val_check_interval changes the number of validation steps, it makes sense that you are seeing the number change :] | MDEwOkRpc2N1c3Npb24zMjU2NDQ1 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6440#discussioncomment-455752 |
To delete | to be continued ... | MDEwOkRpc2N1c3Npb24zMzQ3MzM1 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7312#discussioncomment-686109 |
How to load a released PL model in my directory? | Hello! I'm new to PyTorch Lightning. Recently, I have been developing a PyTorch project. I want to run an open-source DeepSpeech codebase from GitHub and load its checkpoint. I cloned their repository and ran it successfully. Then, I wanted to add their model to my project.
In PyTorch, this is easy. I only need to copy their model definition files and checkpoint files to my project directory. However, when I did this, I received an error when I ran load_from_checkpoint.
ModuleNotFoundError: No module named deepspeech_pytorch
Then, I try to directly call torch.load('librispeech_pretrained_v3.ckpt'). The error is still raised.
Thus, I was wondering if I can load their model in my own directory without copying their whole repository? | I assume you are talking about this one, right?
https://github.com/SeanNaren/deepspeech.pytorch
Maybe you are getting
ModuleNotFoundError: No module named deepspeech_pytorch
because you didn't install that module into your environment. Follow the instructions on the README page of the repository to install the package, i.e., pip install -e . in the cloned repo folder.
Then in your other projects you can import the deepspeech_pytorch module, given that you activate the same virtual environment. And so loading a model checkpoint should be able to import it too under the same conditions.
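For example, something along these lines should then work from your own project (a sketch; the exact import path of the model class depends on the deepspeech.pytorch version you installed):

from deepspeech_pytorch.model import DeepSpeech  # importable once the package is pip-installed

model = DeepSpeech.load_from_checkpoint("librispeech_pretrained_v3.ckpt")
model.eval()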
Hope that helps | D_kwDOCqWgoM4ANv6a | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9795#discussioncomment-1421573 |
Validation crashes when setting seed or with val_num_workers > 0 with CUDA initialization error | I get the below error immediately when I set a CUDA seed or after a few epochs without seed and with val_workers>0. I also find that I get the error when I try reading from a pickled file containing PyTorch tensors (on cpu).
I have a small dataset, so I load all the data in the __init__ of my Dataset class. I then save it on my disk using pickle so I can save on dataloading time when I run my code again. Now, since I have 2 GPUs, DDP in pytorch-lightning starts 2 processes and each of these processes start reading from the pickle file. Both the training data and validation data are being read from pickle files.
My error is quite similar to the comment mentioned here: pytorch/pytorch#28950 (comment). Note that both the person who commented and I have pin_memory=False, although the title says otherwise.
Any ideas as to why this is happening will surely help me! Thank you.
PL version = 1.4.2 , torch version = '1.9.0+cu102', CUDA version = 10.2
Validation sanity check: 0it [00:00, ?it/s]/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py:105: UserWarning: The dataloader, val
dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 24 which is the number of cpus on this m
achine) in the `DataLoader` init to improve performance.
rank_zero_warn(
Validation sanity check: 0%| | 0/1 [00:00<?, ?it/s]
/home/usr/pytorch/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subjec
t to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/home/usr/pytorch/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subjec
t to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Global seed set to 42
Global seed set to 42
Epoch 0: 80%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 4/5 [00:14<00:02, 2.80s/it, loss=4.33, v_num=d09et
erminate called after throwing an instance of 'c10::CUDAError' | 0/1 [00:00<?, ?it/s]
what(): CUDA error: initialization error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:1089 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x2b5f7135ca22 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x10d7e (0x2b5f710ecd7e in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x1a7 (0x2b5f710ee027 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10::TensorImpl::release_resources() + 0x54 (0x2b5f713465a4 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #4: <unknown function> + 0xa27e1a (0x2b5f1a569e1a in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: initialization error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:1089 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x2b4b41756a22 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x10d7e (0x2b4b414e6d7e in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x1a7 (0x2b4b414e8027 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10::TensorImpl::release_resources() + 0x54 (0x2b4b417405a4 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #4: <unknown function> + 0xa27e1a (0x2b4aea963e1a in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
Traceback (most recent call last):
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 990, in _try_get_data
Traceback (most recent call last):
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 990, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/queues.py", line 107, in get
data = self._data_queue.get(timeout=timeout)
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/queues.py", line 107, in get
if not self._poll(timeout):
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 257, in poll
if not self._poll(timeout):
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
return self._poll(timeout)
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 424, in _poll
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 424, in _poll
r = wait([self], timeout)
r = wait([self], timeout)
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 931, in wait
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
ready = selector.select(timeout)
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/selectors.py", line 415, in select
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
fd_event_list = self._selector.poll(timeout)
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 3404) is killed by signal: Aborted.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/usr/mymodel/run.py", line 22, in <module>
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 3407) is killed by signal: Aborted.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/usr/mymodel/run.py", line 22, in <module>
main()
File "/home/usr/mymodel/run.py", line 18, in main
main()
File "/home/usr/mymodel/run.py", line 18, in main
return train(CFG)
File "/scratch/usr/mymodel/src/train.py", line 110, in train
return train(CFG)
File "/scratch/usr/mymodel/src/train.py", line 110, in train
trainer.fit(model,dm)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
trainer.fit(model,dm)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
self._run(model)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
self._run(model)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
self._dispatch()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
self._dispatch()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
self.accelerator.start_training(self)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
self.accelerator.start_training(self)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
self._results = trainer.run_stage()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
return self._run_train()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
self.training_type_plugin.start_training(trainer)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
self.fit_loop.run()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self._results = trainer.run_stage()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
return self._run_train()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
epoch_output = self.epoch_loop.run(train_dataloader)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 112, in run
self.on_advance_end()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 177, in on_advance_end
self.fit_loop.run()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self._run_validation()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 256, in _run_validation
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
epoch_output = self.epoch_loop.run(train_dataloader)
self.val_loop.run()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 112, in run
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
self.on_advance_end()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 177, in on_advance_end
dl_outputs = self.epoch_loop.run(
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self._run_validation()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 256, in _run_validation
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 93, in advance
self.val_loop.run()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
batch_idx, batch = next(dataloader_iter)
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
dl_outputs = self.epoch_loop.run(
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
data = self._next_data()
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1186, in _next_data
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 93, in advance
batch_idx, batch = next(dataloader_iter)
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
idx, data = self._get_data()
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1152, in _get_data
data = self._next_data()
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1186, in _next_data
success, data = self._try_get_data()
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1003, in _try_get_data
idx, data = self._get_data()
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1152, in _get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 3404) exited unexpectedly
success, data = self._try_get_data()
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1003, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 3407) exited unexpectedly | Same as #8821. | MDEwOkRpc2N1c3Npb24zNTM1Mzkx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9060#discussioncomment-1234478 |
How to define an interval validation callback in Lightning | I want to train my model for 20000 steps and run evaluation every 1000 steps.
I try to define the callback like this:
class IntervalStepValidate(Callback):
    def __init__(self, config):
        self.config = config
        self.total_steps = 20000
        self.validation_interval = 1000

    def on_batch_end(self, trainer, pl_module):
        if self.total_steps % self.validation_interval == 0:
            trainer.run_evaluation()
But I found out that there is no run_evaluation() in the latest version of PyTorch Lightning :(
How can I update this code to get the function I want? | there is a val_check_interval argument for it inside Trainer. You can set Trainer(val_check_interval=1000). | D_kwDOCqWgoM4ANogY | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9534#discussioncomment-1331283 |
How to save/load only part of the weights in the model? | For example, part of my model's parameters are frozen, no need to train, no need to save | This might work:
def on_save_checkpoint(self, checkpoint):
    # pop the backbone here using custom logic
    del checkpoint['state_dict'][backbone_keys]
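A slightly fuller sketch of that idea, assuming the frozen parameters live under a hypothetical "backbone." prefix in the state dict:

class LitModel(LightningModule):
    def on_save_checkpoint(self, checkpoint):
        # drop every frozen parameter from the checkpoint before it is written to disk
        for key in list(checkpoint["state_dict"]):
            if key.startswith("backbone."):  # hypothetical prefix of the frozen submodule
                del checkpoint["state_dict"][key]

Then load it back with strict=False so the missing keys are ignored: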
LitModel.load_from_checkpoint(ckpt_path, strict=False) | D_kwDOCqWgoM4AN16B | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9961#discussioncomment-1487934 |
CacheDataset with DDP and Multi-GPUs | We use CacheDataset (MONAI CacheDataset) to speed up data loading. However, when combining the Lightning module's standard training code with the DDP strategy and a multi-GPU environment, the cached dataset is not working as expected:
If provided with a full length of data in the CacheDataset, the initial epoch takes forever to load because each GPU will try to read in and cache ALL data, which is unnecessary because in DDP each GPU will only use a portion of the data.
A workaround is mentioned here (MONAI issue), which suggests partitioning the data before feeding it into the CacheDataset:
MONAI Tutorial
However, if I make the partitioning in the setup() function, the trainer will train for total_data_length // num_gpus samples each epoch instead of total_data_length.
And if I put the CacheDataset with full data length in the prepare_data function, the subprocess's object can't access the dataset instance (saved in self.x, which is not recommended).
So what's the best practical way to handle this? My gut feeling is that I should use the partitioned dataset on each GPU, and let the loader use the full length of dataset instead of part of it. Any suggestions? | hey @bill-yc-chen
since DDP executes scripts independently across devices, maybe try DDP_Spawn instead?
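A minimal sketch of that switch (assuming a PL 1.5-style Trainer and an existing model/datamodule); see the linked docs below on sharing datasets across process boundaries:

trainer = Trainer(gpus=2, strategy="ddp_spawn")  # processes are spawned from the parent instead of re-launching the script per GPU
trainer.fit(model, datamodule=datamodule)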
https://pytorch-lightning.readthedocs.io/en/latest/advanced/training_tricks.html#sharing-datasets-across-process-boundaries | D_kwDOCqWgoM4AOtq7 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11763#discussioncomment-2118063 |
How to ignore certain "parameters" with model checkpointing? | I have some data that I store on my LightningModule during validation. I want to prevent this from being saved by the model checkpoint. They are not actually parameters and do not affect the state at all. I want to maintain other parts of the state, I don't want to use weights only.
Is it possible to do this? | Hey @jmerkow,
The checkpoint is generated from the dump_checkpoint function of the CheckpointConnector. One of the latest hooks called is lightning_module.on_save_checkpoint(checkpoint) here: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/connectors/checkpoint_connector.py#L386
This is actually done in-place.
Therefore, you could do the following.
class MyModel(LightningModule):
    def on_save_checkpoint(self, checkpoint):
        # pop the keys you are not interested in, for example (hypothetical key name):
        checkpoint.pop("my_validation_cache", None) | D_kwDOCqWgoM4ANrY- | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9627#discussioncomment-1368843 |
load_from_checkpoint giving different validation results | I'm creating a classifier that first trains a VAE then passes it into a convolutional network. The pseudocode below kind of describes it:
class VAE(pl.LightningModule):
    # ...

class ConvNetwork(pl.LightningModule):
    def __init__(self, vae):
        # Trying both ways: pass in entire model vs loading checkpoint
        # self.vae = vae
        # self.vae = VAE.load_from_checkpoint(vae)
        freeze_training(self.vae)  # sets all params to requires_grad=False
        self.sub_network = nn.Sequential(
            # Mix of convolutional layers, ReLU activations, and Batch Normalization
        )

    def forward(self, data):
        vae_decoded_results = self.vae(data)
        results_that_differ_wildly = self.sub_network(vae_decoded_results)
If I train the VAE and pass in the entire model before training the convolutional network, I get good training/validation results. What I would prefer, however, is to train the VAE in a separate script, save off checkpoints, then pass the path of the checkpoint into the convolutional network. Then in the convolutional network's init I load the vae network, freeze training on it, and proceed to train the convolutional network. When I do this, my training results seem okay, but my validation results are all over the place. Some things I've checked:
After loading the VAE from a checkpoint, I verified the model parameters perfectly match the VAE that produced the checkpoint.
In my forward function for the convolutional network I call the VAE's forward function. The results at this step differ by less than 1% between loading a checkpoint and passing in an entire model.
After passing the VAE forward() results into the first stage of my convolutional network (which consists of some convolution layers, ReLU activations, and batch normalization), I get very different results.
I can't for the life of me figure out why using the results from a loaded model would so wildly differ from the results of a model I train and pass in all in one script, especially when the parameters and vae output appear to match. I'm sure I'm just missing something stupid. | Just a wild guess, but maybe the model is in train-mode after loading from a checkpoint. Have you tried model.eval() in addition to setting the requires_grad? I'm thinking about BN layers and so on, where this is important (see here). | MDEwOkRpc2N1c3Npb24zMjkxODc0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6678#discussioncomment-537736 |
how to plot confusion_matrix with lightning tensorboard? | I need to plot a confusion_matrix in TensorBoard; how do I do it? | https://stackoverflow.com/questions/65498782/how-to-dump-confusion-matrix-using-tensorboard-logger-in-pytorch-lightning | MDEwOkRpc2N1c3Npb24zNDQxODI1 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8254#discussioncomment-954383 |
The call of training_step and validation_step, etc. | Can anyone help me find where these funcs are called in the core parts? I expected them to be called in the trainer's loop instance, but they are not. Quite confused about this. | It depends upon the configuration, but in the general case here:
pytorch-lightning/pytorch_lightning/plugins/training_type/training_type_plugin.py, lines 217 to 230 at commit 454e93b:
def training_step(self, *args, **kwargs):
    return self.model.training_step(*args, **kwargs)

def post_training_step(self):
    pass

def validation_step(self, *args, **kwargs):
    return self.model.validation_step(*args, **kwargs)

def test_step(self, *args, **kwargs):
    return self.model.test_step(*args, **kwargs)

def predict_step(self, *args, **kwargs):
    return self.model.predict_step(*args, **kwargs)
call sequence is loop -> accelerators -> training_type_plugin -> actual call | D_kwDOCqWgoM4AN5Lv | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10070#discussioncomment-1515980 |
NCCL WARN Failed to open libibverbs.so[.1] | Just received qty 2 of A6000 and these are not compatible
with my existing docker file (lack of sm_86 support)
FROM pytorch/pytorch:1.6.0-cuda10.1-cudnn7-runtime
RUN pip install pytorch-lightning==1.0.7
So upgraded my docker to
FROM pytorch/pytorch:1.10.0-cuda11.3-cudnn8-runtime
RUN pip install pytorch-lightning==1.5.10
I also made changes to my code for the Lightning breaking change, from
trainer = pl.Trainer( gpus=[0,1],
distributed_backend='ddp',
....
to
trainer = pl.Trainer( gpus=[0,1],
strategy='ddp',
....
When I try to train, it just stops. So I set env NCCL_DEBUG=WARN as per suggestion to get the following output:
initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2
initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 2 processes
----------------------------------------------------------------------------------------------------
60b476048acc:22:22 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
NCCL version 2.10.3+cuda11.1
60b476048acc:120:120 [1] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
Same happens when I try
FROM pytorch/pytorch:1.8.1-cuda11.1-cudnn8-runtime
FROM pytorch/pytorch:1.9.1-cuda11.1-cudnn8-runtime
FROM pytorch/pytorch:1.10.0-cuda11.3-cudnn8-runtime
My old setup was 2xRTX Titan with nvlink while the new setup is 2xA6000 without a nvlink. nvidia doc says that PCI is used but unclear if I need to do something to use this.
Distributed communication docs say "NCCL backends are built and included in PyTorch distributed (NCCL only when building with CUDA)" .
I suspect I am missing something about the breaking changes from pl 1.0 to 1.5. Would appreciate hints as to what to look for.
Is NCCL something used in pl 1.0 or is this new to pl 1.5?
Does NCCL need to be installed? | Duplicate of #12235. | D_kwDOCqWgoM4AO7sj | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12219#discussioncomment-2304295 |
PyTorch Lightning Optimizer_Step() prevents training_step() from running | Hello,
I had an error where training_step() was not run properly. I just found out the cause was my optimizer_step(). My training step runs immediately after I comment out optimizer_step().
Some other users also experienced the same error as described here: https://stackoverflow.com/questions/66756245/training-step-not-executing-in-pytorch-lightning
My question is: now that training_step() is running but my train_loss is exploding because of the lack of a learning rate scheduler, what can I implement to re-enable my learning rate scheduler?
Here's my chunk of code:
def configure_optimizers(self):
    """
    AdamW Optimizer lr=0.0006
    """
    optimizer = optim.AdamW(self.parameters(),
                            lr=self.lr,
                            weight_decay=0.01  # Default
                            )
    self.scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer,
        mode='min',
        factor=0.1,
        patience=2,
        min_lr=1e-6,
        verbose=True
    )
    return optimizer

# def optimizer_step(self, epoch=None,
#                    batch_idx=None,
#                    optimizer=None,
#                    optimizer_idx=None,
#                    optimizer_closure=None,
#                    on_tpu=None,
#                    using_native_amp=None,
#                    using_lbfgs=None,
#                    second_order_closure=None):
#
#     if batch_idx == 0:  # to call the scheduler after each validation
#
#         self.scheduler.step(self.epoch_val_loss)
#
#         print(f'metric: {self.epoch_val_loss}, \
#               best: {self.scheduler.best}, \
#               num_bad_epochs: {self.scheduler.num_bad_epochs}')  # for debugging
#
#         optimizer.step()
#
#         optimizer.zero_grad()
Thank you! | hey @tsuijenk
optimizer_closure must be passed to optimizer.step(), since it runs the training_step and backward call. You can check the docstrings and examples here: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#optimizer-step | D_kwDOCqWgoM4AOb_Z | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11358#discussioncomment-1923977 |
Processing in predict_step() requires access to a DataModule attribute | Hi!
In my LightningDataModule, I apply preprocessing transforms to my input data before feeding it to my dataloader. In the same datamodule, I also defined the postprocessing transforms to apply after the inference.
self.post_transforms = Compose([
    AsDiscreted(keys="pred", argmax=True, to_onehot=False, n_classes=3),
    Invertd(
        keys="pred",  # invert the `pred` data field, also support multiple fields
        transform=val_transforms,
        loader=val_dataloader,
        orig_keys="img",  # get the previously applied pre_transforms information on the `img` data field,
                          # then invert `pred` based on this information. we can use same info
                          # for multiple fields, also support different orig_keys for different fields
        meta_keys="pred_meta_dict",  # key field to save inverted meta data, every item maps to `keys`
        orig_meta_keys="img_meta_dict",  # get the meta data from `img_meta_dict` field when inverting,
                                         # for example, may need the `affine` to invert `Spacingd` transform,
                                         # multiple fields can use the same meta data to invert
        meta_key_postfix="meta_dict",  # if `meta_keys=None`, use "{keys}_{meta_key_postfix}" as the meta key,
                                       # if `orig_meta_keys=None`, use "{orig_keys}_{meta_key_postfix}",
                                       # otherwise, no need this arg during inverting
        nearest_interp=True,  # change to use "nearest" mode in interpolation when inverting
        to_tensor=True,  # convert to PyTorch Tensor after inverting
    ),
    SaveImaged(keys="pred", meta_keys="pred_meta_dict", output_dir="./out", output_postfix="seg", resample=False),
])
I want to apply these post_transforms to my inference outputs in predict_step(). What would be the best "Lightning way" to give access to my datamodule.post_transforms attribute to predict_step?
def predict_step(self, batch: Any, batch_idx: int):
    batch["pred"] = self.forward(batch)
    post_transforms(batch)
Thanks in advance for your suggestions :) | Hi, you should be able to access the datamodule inside the LightningModule.
Try
def predict_step(self, batch: Any, batch_idx: int):
    batch["pred"] = self(batch)
    self.datamodule.post_transforms(batch)
Also, another tip: better to use self() instead of self.forward() (generally in PyTorch). | MDEwOkRpc2N1c3Npb24zNDAxNjI5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7884#discussioncomment-846190 |
how to save the last epoch only? | "monitor (Optional[str]) – quantity to monitor. By default it is None which saves a checkpoint only for the last epoch."
When I train a model and set 'monitor' to None, it should save the last epoch as the doc says, but it still saves depending on val_loss; it always saves the model with the lowest val_loss.
I also tried another way, setting 'save_last' to True, but this needs a monitor to be set. And if I set save_top_k to 0, it will save nothing; if I set it to 1, it will save 2 models, the best one and the last one. But I just want to save the last one.
Is this a bug or did I do something wrong? Is there a way to save the model at epochs I choose myself, such as the last 3 epochs? | Hey! Have a look at this example:
import os
import torch
from torch.utils.data import Dataset
from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("valid_loss", loss, logger=False)
        return {"x": loss}

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)

def run():
    train_data = torch.utils.data.DataLoader(RandomDataset(32, 64), batch_size=2, num_workers=0)
    val_data = torch.utils.data.DataLoader(RandomDataset(32, 64), batch_size=2, num_workers=0)
    model = BoringModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        limit_train_batches=1,
        limit_val_batches=1,
        num_sanity_val_steps=0,
        max_epochs=5,  # this will save a checkpoint at epoch index 4 (last epoch)
        weights_summary=None,
        logger=False,
        callbacks=[ModelCheckpoint(dirpath="./checkpoints", monitor=None)]
    )
    trainer.fit(model, train_dataloader=train_data, val_dataloaders=val_data)

if __name__ == '__main__':
    run()
I'm choosing:
ModelCheckpoint(dirpath="./checkpoints", monitor=None)
That's all; it saves only one checkpoint, named epoch=4-step=4.ckpt, which corresponds to the last epoch being run.
Note: for backward compatibility, when monitor=None, we choose "val_loss" as the monitor when it is available. You should be able to avoid that by just renaming your validation loss to "valid_loss" or something else :) | MDEwOkRpc2N1c3Npb24zMzMyMTA0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7153#discussioncomment-642644 |
Cross Entropy Loss and loss of PyTorch Lightning does not matches | Hi, I am working on building a question-answering model using T5 (HuggingFace) with a PyTorch Lightning module, and I am finding that my loss and the PyTorch Lightning loss do not match.
class UQAFineTuneModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = T5ForConditionalGeneration.from_pretrained(
            "allenai/unifiedqa-t5-small", return_dict=True
        )
        self.model.train()

    def forward(
        self,
        source_text_input_ids,
        source_text_attention_mask,
        target_text_input_ids=None,
    ):
        output = self.model(
            input_ids=source_text_input_ids,
            attention_mask=source_text_attention_mask,
            labels=target_text_input_ids,
        )
        return output.loss, output.logits

    def training_step(self, batch, batch_idx):
        source_text_input_ids = batch["source_text_input_ids"]
        source_text_attention_mask = batch["source_text_attention_mask"]
        target_text_input_ids = batch["target_text_input_ids"]
        # labels_attention_mask = batch["target_text_attention_mask"]
        loss, outputs = self(
            source_text_input_ids, source_text_attention_mask, target_text_input_ids
        )
        loss_mine = None
        output = self.model(
            input_ids=source_text_input_ids,
            attention_mask=source_text_attention_mask,
            labels=target_text_input_ids,
        )
        labels = batch["target_text_input_ids"].clone()
        labels[labels == 0] = -100
        if target_text_input_ids is not None:
            loss_fct = CrossEntropyLoss(ignore_index=-100)
            loss_mine = loss_fct(output.logits.view(-1, outputs.size(-1)), labels.view(-1))
        print(f"loss_Hugginface: {loss.item()}, loss_mine : {loss_mine.item()}")
        self.log("train_loss", loss, prog_bar=True, logger=True)
        return {"loss": loss, "predictions": outputs}
As you can see in the image above, the losses are not the same; help is very much needed. I asked the same question on HuggingFace, but they told me to ask here; you can view that discussion here. | Hey @ayush714,
I believe this is related to HF and you might get your answers by opening an issue on their repo directly.
Best,
T.C | MDEwOkRpc2N1c3Npb24zNTQxOTY5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9159#discussioncomment-1243087 |
what will be the closest replacement for @auto_move_data? | I really liked @auto_move_data as it gave me a nice and easy way to pass (a single) input to a model, without caring about moving data structures to devices, but now it will be removed 😢
I realize that you want people to use trainer.predict(), but this requires a dataloader, right? At least for me that is oftentimes overly complicated while developing.
As far as I see the dataloader/collate/transforms also have to be different from the normal ones, since these typically provide batches with inputs and targets, whereas the forward function only takes inputs.
Likely I'm missing something, can you maybe give some hints about this? | After writing this I realized that I can use something like this:
model = model.eval()
i, _ = next(iter(data.test_dataloader()))
with th.no_grad():
    outputs = model.forward(model.transfer_batch_to_device(i))
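For a quick manual check, another minimal option (assuming a CUDA device and a single plain tensor input x) is simply moving both model and input yourself:

import torch

model = model.to("cuda").eval()
x = x.to(model.device)  # x is a hypothetical single input tensor
with torch.no_grad():
    outputs = model(x)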
But I'm still curious about the intended future ways of doing this.... | MDEwOkRpc2N1c3Npb24zMzcwOTA4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7588#discussioncomment-752556 |
AttributeError: 'Trainer' object has no attribute 'running_sanity_check" | Hi,
I am trying to optimise the hyperparameters of my network using raytune. My implementation is pretty much based on this:
https://docs.ray.io/en/master/tune/tutorials/tune-pytorch-lightning.html#selecting-a-scheduler
When I train my network using pre-set hyperparameters, it works smoothly. The problems come from the callback, so when I add the following line:
TuneReportCallback({"loss":"val_loss"}, on="validation_end")
I get the following error:
Does anyone know how to solve this?
I don't think the problem is with my code as I haven't done anything different compared to the tutorial!
My code can be found here:
https://github.com/annalauralerede/anomaly-detection/blob/main/lstmae_pl_opt.py | I think it dues to this : ray-project/ray#20741
As of ray[tune]==1.10.0, either downgrade your pytorch-lightning to 1.4.
Or upgrade your raytune to be compatible with pytorch-lightning 1.5+ ( the fix has been merged in this commit ray-project/ray#20562).
$ sudo pip install ray==1.11.0rc0 | D_kwDOCqWgoM4AOy9z | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11926#discussioncomment-2196679 |
What is hparams exactly? | Hi, thanks for the nice product again.
From #525 and #599, I could guess that hparams is required to load a saved model (which I think should be mentioned somewhere in the doc btw). And from the examples, seems like hparams may be argparse.Namespace. Unfortunately though, it was not so easy to understand the concept.
What is hparams exactly? What kind of information it should/can/should not include to work properly? Is it recommended to use hyperparameter argument parser? Say, if I'm not into hyperparameter search at the moment and just want to be able to load the checkpoint model, what is the requirement on the hparams? | It should be an argparse.Namespace. You can get this from argparse, testtube's HyperOptArgumentParser, or create it manually from a dict like so: argparse.Namespace(**my_dict). | MDEwOkRpc2N1c3Npb24yNzkyNTQ2 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5821#discussioncomment-339811 |
Iterating over tasks for Continual Learning. | Hi everyone, I am new to PyTorch Lightning and I am currently trying to implement a continual learning model in PyTorch Lightning.
I have multiple data loaders for different tasks and I want to train on all of these data loaders. After training on task1 with dataloader1 I want to update the parameters of the model which are going to be trained for task two. To do this, I have an attribute named current_task in my dataloader which decides the dataset from which the samples are generated for the current task. My datamodule looks something like this.
class RandSplitCIFAR100DataModule(LightningDataModule):
    def __init__(self):
        .....

    def setup(self, stage: Optional[str] = None):
        # load datasets only if they're not loaded already
        if not self.data_train and not self.data_val and not self.data_test:
            self.data_train = datasets.CIFAR100(self.hparams.data_dir, train=True, transform=self.train_transforms)
            self.data_val = datasets.CIFAR100(self.hparams.data_dir, train=False, transform=self.val_transforms)
            np.random.seed(self.hparams.seed)
            perm = np.random.permutation(self.num_classes)
            print(perm)
            splits = [
                (self.partition_datasetv4(self.data_train, perm[5 * i:5 * (i+1)]),
                 self.partition_datasetv4(self.data_val, perm[5 * i:5 * (i+1)]),)
                for i in range(self.hparams.num_tasks)
            ]
            kwargs = {"num_workers": self.hparams.workers, "pin_memory": self.hparams.pin_memory}
            self.loaders = [
                (DataLoader(x[0], batch_size=self.hparams.batch_size, shuffle=True, **kwargs),
                 DataLoader(x[1], batch_size=self.hparams.test_batch_size, shuffle=False, **kwargs),)
                for x in splits
            ]

    def update_task(self, i):
        self.current_task = i

    def train_dataloader(self):
        return self.loaders[self.current_task][0]

    def val_dataloader(self):
        return self.loaders[self.current_task][1]
Now I want to have a training loop that does something like this.
for task in range(num_tasks):
    self.dataloader.update_task(task)
    for n, p in model.named_parameters():
        # change parameters to update
        ...
    for epoch in range(max_epochs):
        for batch in dataloader:
            ....
I am currently not able to figure out how to go about this. I feel confident that Lightning should be able to handle such cases, but I am just not sure how.
Any help is greatly appreciated!
Prateek | well, there are multiple ways:
1) if your max_epochs is consistent across all the tasks:
class LitModel(LightningModule):
    def on_train_epoch_start(self):
        if self.current_epoch == 0 or (self.current_epoch + 1) % self.trainer.reload_dataloaders_every_n_epochs == 0:
            # update model parameters
            ...

max_epochs_n_tasks = max_epochs * n_tasks
trainer = Trainer(max_epochs=max_epochs_n_tasks, reload_dataloaders_every_n_epochs=max_epochs)
model = LitModel()
# inject the update task counter logic inside datamodule
dm = RandSplitCIFAR100DataModule(...)
trainer.fit(model, datamodule=dm)
2) create an explicit loop:

def init_trainer(...):
    trainer = Trainer(max_epochs=max_epochs, ...)
    return trainer

datamodule = ...
model = ...
for task in range(num_tasks):
    # update params
    datamodule.update_task(task)
    trainer = init_trainer(...)
    trainer.fit(model, datamodule=datamodule)
Although I'd suggest (1), even if your max_epochs differs for each task, it can easily be extended to support that too. | D_kwDOCqWgoM4AOsmI | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11724#discussioncomment-2105709 |
Training slows down significantly for small dataset sizes | 🐛 Bug
When the dataset size is small (i.e. comparable to the minibatch size), it slows training down significantly.
No GPU, batch size 64, dataset size 1024: 185 iterations/second
No GPU, batch size 64, dataset size 100: 47 iterations/second
1 GPU, batch size 64, dataset size 1024: 110 iterations/second
1 GPU, batch size 64, dataset size 100: 23 iterations/second
1 GPU, batch size 800, dataset size 1024: 19 iterations/second
1 GPU, batch size 800, dataset size 10000: 90 iterations/second
1 GPU, batch size 64, dataset size 10000: 235 iterations/second
Please reproduce using the BoringModel
import os, sys
from argparse import ArgumentParser
import torch
from torch.utils.data import Dataset, DistributedSampler, DataLoader
from pl_examples import cli_lightning_logo
from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.utilities.seed import seed_everything
from pytorch_lightning.callbacks.progress import ProgressBar, ProgressBarBase, tqdm, reset, convert_inf

class CustomProgressBar(ProgressBar):
    def init_train_tqdm(self) -> tqdm:
        """ Override this to customize the tqdm bar for training. """
        bar = tqdm(
            desc='Training',
            initial=self.trainer.global_step,
            position=(2 * self.process_position),
            disable=self.is_disabled,
            leave=True,
            dynamic_ncols=True,
            file=sys.stdout,
            smoothing=0,
        )
        return bar

    def on_train_start(self, trainer, pl_module):
        super(ProgressBar, self).on_train_start(trainer, pl_module)
        self.main_progress_bar = self.init_train_tqdm()
        self.prev_train_gs = -1
        reset(self.main_progress_bar, self.trainer.max_steps)

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
        super(ProgressBar, self).on_train_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)
        if self.prev_train_gs != self.trainer.global_step and self._should_update(self.trainer.global_step, self.trainer.max_steps):
            self._update_bar(self.main_progress_bar)
            self.main_progress_bar.set_postfix(trainer.progress_bar_dict)
        self.prev_train_gs = self.trainer.global_step

    def on_train_epoch_start(self, trainer, pl_module):
        super(ProgressBar, self).on_train_epoch_start(trainer, pl_module)

    def on_train_end(self, trainer, pl_module):
        super(ProgressBar, self).on_train_end(trainer, pl_module)

class RandomDataset(Dataset):
    """
    >>> RandomDataset(size=10, length=20)  # doctest: +ELLIPSIS
    <...bug_report_model.RandomDataset object at ...>
    """

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

class BoringModel(LightningModule):
    """
    >>> BoringModel()  # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
    BoringModel(
      (layer): Linear(...)
    )
    """

    def __init__(self, train_data, test_data, bs):
        """
        Testing PL Module
        Use as follows:
        - subclass
        - modify the behavior for what you want
        class TestModel(BaseTestModel):
            def training_step(...):
                # do your own thing
        or:
        model = BaseTestModel()
        model.training_epoch_end = None
        """
        super().__init__()
        self.layer1 = torch.nn.Linear(32, 32)
        self.layer2 = torch.nn.Linear(32, 32)
        self.layer3 = torch.nn.Linear(32, 2)
        self.train_data = train_data
        self.test_data = test_data
        self.bs = bs

    def forward(self, x):
        return self.layer3(torch.relu(self.layer2(torch.relu(self.layer1(x)))))

    def loss(self, batch, prediction):
        # An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
        return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))

    def step(self, x):
        x = self.forward(x)
        out = torch.nn.functional.mse_loss(x, torch.ones_like(x))
        return out

    def training_step(self, batch, batch_idx):
        output = self.forward(batch)
        loss = self.loss(batch, output)
        return {"loss": loss}

    def training_step_end(self, training_step_outputs):
        return training_step_outputs

    def training_epoch_end(self, outputs) -> None:
        torch.stack([x["loss"] for x in outputs]).mean()

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(list(self.layer1.parameters()) + list(self.layer2.parameters()) + list(self.layer3.parameters()), lr=0.001)
        lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
        return [optimizer], [lr_scheduler]

    def train_dataloader(self):
        train_loader = DataLoader(self.train_data, shuffle=True, num_workers=1, batch_size=self.bs)
        return train_loader

parser = ArgumentParser()
parser.add_argument("--gpus", type=int, default=0)
parser.add_argument("--num_processes", type=int, default=1)
parser.add_argument("--dataset_size", type=int, default=1024)
parser.add_argument("--mb_size", type=int, default=64)
args = parser.parse_args()

def test_run():
    # data
    train_data = torch.randn(args.dataset_size, 32)
    test_data = torch.randn(256, 32)
    # model
    model = BoringModel(train_data, test_data, bs=args.mb_size)
    trainer = Trainer(
        gpus=args.gpus,
        logger=False,
        max_steps=5000,
        limit_val_batches=0,
        num_processes=args.num_processes,
        weights_summary=None,
        reload_dataloaders_every_epoch=False,
        callbacks=[CustomProgressBar()]
    )
    # fit
    trainer.fit(model)
    print(f"{trainer.accelerator_backend=}")
    print(f"{trainer.gpus=}")
    print(f"{trainer.num_processes=}")
    print(f"{trainer.global_step=}")

if __name__ == "__main__":
    test_run()
To Reproduce
Run the following command: python bug_report.py --gpus 1 --dataset_size 10000 --mb_size 64
for varying values of gpus, dataset_size, and mb_size.
Expected behavior
Iterations/second is unaffected by dataset size.
Environment
PyTorch Version (e.g., 1.0): 1.8.1
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): conda
Python version: 3.9
Additional context
My guess is that this is caused by inter-epoch reloading of the dataset. The code should be restructured to pre-load a fixed number of minibatches ahead, rather than caring about the location of epoch boundaries. | I would say that the case here is that you always need to assume some overhead, and for a small dataset this initial phase is dominant compared to the full run. You can see a parallel with a car riding 100 or 1000 meters: in both cases you need to start from zero, and the longer you go, the more you benefit from not needing to start again... | MDEwOkRpc2N1c3Npb24zNDI3OTk4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8113#discussioncomment-913818 |
Is it possible to disable CheckpointConnector? | Even with checkpoint_callback=False, Trainer appears to be using CheckpointConnector for some reason. Since very occasionally (once every ~30 runs) the checkpoint is either deleted too early or is never created in the first place (no idea which one), the whole experiment fails, as shown in the log below. Since CheckpointConnector does not appear to be doing anything important when running locally, is it possible to eliminate it without breaking the training process?
Traceback (most recent call last):
File ".\my_code\run_automated.py", line 95, in <module>
main(experiment, config, dataset)
File "D:\GIT\my_code\processing.py", line 117, in main
trainer.fit(model, dataloader_train, dataloader_val)
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 513, in fit
self.dispatch()
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 553, in dispatch
self.accelerator.start_training(self)
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\pytorch_lightning\accelerators\accelerator.py", line 74, in start_training
self.training_type_plugin.start_training(trainer)
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 111, in start_training
self._results = trainer.run_train()
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 609, in run_train
self._pre_training_routine()
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 600, in _pre_training_routine
self.checkpoint_connector.restore_weights()
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\pytorch_lightning\trainer\connectors\checkpoint_connector.py", line 65, in restore_weights
max_suffix = self.max_ckpt_in_folder(dir_path_hpc, "hpc_ckpt_")
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\pytorch_lightning\trainer\connectors\checkpoint_connector.py", line 372, in max_ckpt_in_folder
files = [os.path.basename(f["name"]) for f in fs.listdir(dir_path)]
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\fsspec\spec.py", line 1122, in listdir
return self.ls(path, detail=detail, **kwargs)
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\fsspec\implementations\local.py", line 51, in ls
return [self.info(f) for f in paths]
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\fsspec\implementations\local.py", line 51, in <listcomp>
return [self.info(f) for f in paths]
File "c:\users\pluczak\.conda\envs\pytorch\lib\site-packages\fsspec\implementations\local.py", line 61, in info
out = os.stat(path, follow_symlinks=False)
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'D:/GIT/my_code/04d2a63662b34828946b5545646c063f.pt' | So are you saying the file returned by fs.listdir(dir_path) gets removed?
If you can reproduce this, please open an Issue about it. Seems like a bug.
In the meantime, you can patch the problematic function like this:
trainer.checkpoint_connector.max_ckpt_in_folder = lambda *args, **kwargs: None | MDEwOkRpc2N1c3Npb24zMjgxNTg5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6596#discussioncomment-637673 |
Override .ckpt in ModelCheckpoint Callback | Hi,
I'm using the ModelCheckpoint callback to save my model checkpoint. PyTorch Lightning automatically appends -v0, -v1 to the filename I specified if it finds existing checkpoints in dirpath. Instead of saving all the models from different runs, is there a way to make the ModelCheckpoint callback save only one model in the checkpoint folder and just overwrite the model from previous runs?
For example, my ModelCheckpoint is as follow:
ModelCheckpoint(monitor='valid_score',
dirpath="./checkpoint/",
filename="model",
mode='max', save_top_k=1))
If I run the code three times, my checkpoints folder will have the following:
- checkpoint:
- model.ckpt
- model-v0.ckpt
- model-v1.ckpt
Would it be possible to just have model.ckpt in my checkpoint folder no matter how many times I run the code? | We don't support this, but you could always remove the file manually between runs with os.remove() | MDEwOkRpc2N1c3Npb24yMTc5Njk3 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5684#discussioncomment-637694 |
Manually call model.eval() and model.train() inside the training loop | How to manually call model.eval() or model.train() inside the lightning module? I happen to have several models and not all of them need to be updated during each forward pass. Thanks! | You can use self.freeze() and self.unfreeze(). | MDEwOkRpc2N1c3Npb24zMjk3MTA2 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6718#discussioncomment-543258 |
Progress Bar Variables from Validation Step | Greetings,
I can only show metrics of variables calculated in the training step but can't show validation step metrics on the progress bar. How can I show a metric in the validation step? self.log(...., prog_bar=True) does not work. | Hi
It works fine for me. Have a look at this running example (latest PL version 1.2.6):
import os
import torch
from torch.utils.data import Dataset
from pytorch_lightning import LightningModule, Trainer
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class BoringModel(LightningModule):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(32, 2)
def forward(self, x):
return self.layer(x)
def loss(self, batch, prediction):
# An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))
def step(self, x):
x = self.layer(x)
out = torch.nn.functional.mse_loss(x, torch.ones_like(x))
return out
def training_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"loss": loss}
def training_step_end(self, training_step_outputs):
return training_step_outputs
def training_epoch_end(self, outputs) -> None:
torch.stack([x["loss"] for x in outputs]).mean()
def validation_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
self.log("VALIDATION_STEP", loss, prog_bar=True)
return {"x": loss}
def validation_epoch_end(self, outputs) -> None:
torch.stack([x['x'] for x in outputs]).mean()
def test_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"y": loss}
def test_epoch_end(self, outputs) -> None:
torch.stack([x["y"] for x in outputs]).mean()
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
return [optimizer], [lr_scheduler]
def test_run():
train_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
val_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
# model
model = BoringModel()
trainer = Trainer(
default_root_dir=os.getcwd(),
limit_train_batches=1,
limit_val_batches=1,
max_epochs=3,
weights_summary=None,
)
trainer.fit(model, train_data, val_data)
if __name__ == '__main__':
test_run() | MDEwOkRpc2N1c3Npb24zMjkzNzMx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6688#discussioncomment-568927 |
self.local_rank in LightningDataModule | Hi, I was recently reading this example from NVIDIA DALI:
https://github.com/NVIDIA/DALI/blob/629c57592b9b4e91b8213e6c77c1af179f7dd079/docs/examples/frameworks/pytorch/pytorch-lightning.ipynb
I wanted to split the model and datamodule apart. In that case, how can I get local_rank, global_rank and world_size for datamodule's setup? | hey @austinmw !
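For example, a minimal sketch of reading these inside a hook (names are illustrative; this assumes the datamodule is already attached to a Trainer, e.g. during trainer.fit, since that is when self.trainer is populated):
from pytorch_lightning import LightningDataModule

class ShardedDataModule(LightningDataModule):
    def setup(self, stage=None):
        rank = self.trainer.global_rank
        local_rank = self.trainer.local_rank
        world_size = self.trainer.world_size
        print(f"setting up shard {rank}/{world_size} (local rank {local_rank})")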
you can access them using self.trainer.local_rank, self.trainer.global_rank & self.trainer.world_size. | D_kwDOCqWgoM4AO3Et | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12056#discussioncomment-2234560 |
Trainer: loss stagnates, whereas custom train implementation continues converging ?? | Hi,
it would be great if you could help me unravel what is a mystery to me.
Background
I have adapted a pretrained model for image regression.
Issue :
If I finetune the model using the Lightning Trainer, the training loss stagnates at a value of ~10. However, in my PyTorch training implementation the training and validation losses decrease much further.
Can you help me understand where my mistake is? Did I implement .training_step and .forward correctly?
PLModule:
class RGBYieldRegressor(LightningModule):
def __init__(self, optimizer:str = 'sgd', k:int = 0, lr:float = 0.001, momentum:float = 0.8, wd:float = 0.01, batch_size:int = 16, pretrained:bool = True):
super().__init__()
self.lr = lr
self.momentum = momentum
self.wd = wd
self.batch_size = batch_size
self.k = k
optimizers = {'adam': Adam, 'sgd': SGD}
self.optimizer = optimizers[optimizer]
self.criterion = nn.MSELoss(reduction='mean')
self.model_arch = model
num_target_classes = 1
self.model = models.resnet50(pretrained=pretrained)
num_filters = self.model.fc.in_features
self.model.fc = nn.Sequential(
nn.ReLU(),
nn.Linear(num_filters, num_target_classes))
def forward(self, x):
return torch.flatten(self.model(x))
def training_step(self, batch, batch_idx): # torch.autograd?
x, y = batch
y_hat = torch.flatten(self.model(x))
loss = self.criterion(y, y_hat)
self.log("train_loss", loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = torch.flatten(self.model(x))
loss = self.criterion(y, y_hat)
self.log("val_loss", loss, on_epoch=True, prog_bar=True, logger=True)
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = torch.flatten(self.model(x))
loss = self.criterion(y, y_hat)
self.log("test_loss", loss, prog_bar=True, logger=True)
def predicts_step(self, batch, batch_idx, dataloader_idx=0):
return self.model(batch).squeeze()
def configure_optimizers(self):
return self.optimizer(self.parameters(), lr=self.lr, momentum=self.momentum, weight_decay=self.wd)
Trainer:
trainer = Trainer(
max_epochs=50, # general
num_sanity_val_steps=0,
devices=1,
accelerator="auto",
callbacks=callbacks,
default_root_dir=this_output_dir,
weights_save_path=this_output_dir,
logger=logger,
num_processes=1,
)
trainer.fit(lightningmodule, train_dataloaders=datamodule.train_dataloader(), val_dataloaders=datamodule.val_dataloader())
vs. pytorch training:
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
# Iterate over data.
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
loss = criterion(torch.flatten(outputs), labels.data)
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
epoch_loss = running_loss / len(dataloaders[phase].dataset)
print('{} Loss: {:.4f}'.format(phase, epoch_loss)) | update:
loss = self.criterion(y, y_hat)
to
loss = self.criterion(y_hat, y)
everywhere.
also:
trainer.fit(lightningmodule, train_dataloaders=datamodule.train_dataloader(), val_dataloaders=datamodule.val_dataloader())
can be just
trainer.fit(lightningmodule, datamodule=datamodule) | D_kwDOCqWgoM4APPdA | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12667#discussioncomment-2529516 |
How to accumulate metrics for multiple validation dataloaders | β Questions and Help
What is your question?
How to accumulate metrics for multiple validation dataloaders separately? Currently the metrics are accumulated for all dataloaders simultaneously.
Code
The validation step accepts dataset_idx parameter when running validation with multiple dataloaders.
def validation_step(self, batch, batch_idx, dataset_idx: Optional[int] = None):
However, I'm not sure how to update the metrics separately for each dataloader. Would I have to create separate metrics, one for dataset A and a second for B? Or maybe my metric could accept the dataset_idx parameter to know for which dataset it should log a given output.
This however wouldn't work with pl factory metrics like average precision, since they are dataset agnostic?
def update(self, preds: torch.Tensor, target: torch.Tensor):
Not sure how to approach this. | You would have to create separate metrics per validation dataloader (similar to how you need separate metrics for train/val/test). Something like this could maybe work for you:
def __init__(self, ...):
...
self.val_metrics = nn.ModuleList([pl.metrics.Accuracy() for _ in range(n_val_dataloaders)])
def validation_step(self, batch, batch_idx, dataset_idx):
...
self.val_metrics[dataset_idx].update(preds, target) | MDEwOkRpc2N1c3Npb24yNzkxODc3 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5793#discussioncomment-339703 |
Error while using custom DistributedSampler | I wanted to set shuffle to False. So, I tried
sampler = torch.utils.data.distributed.DistributedSampler(dataset, shuffle=False)
dataloader = DataLoader(dataset, batch_size=32, sampler=sampler)
I am getting an error
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
Could anyone please tell me how to use a custom sampler? | Adding on to the existing answer:
DataLoader(shuffle, sampler) are mutually exclusive, i.e., if you set shuffle=True you will get a RandomSampler and if it is set to False you get a SequentialSampler.
When using DDP, Lightning takes your dataloader and replaces it with the following
DataLoader(sampler=DistributedSampler(shuffle=True), ...), however ONLY if the sampler is not already a distributed sampler. If it is, no changes are done.
So OP can do the following:
def train_dataloader(self):
return DataLoader(sampler=DistributedSampler(shuffle=False), ...)
returning their own sampler. Note this needs to be done in a place where the distributed group is already initialized, so basically in any hook after (including) setup(). OP got this error
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
Because they probably did the following:
dataloader = DataLoader(sampler=DistributedSampler(shuffle=False), ...) # fails here, because distributed not init yet
trainer = Trainer()
trainer.fit(dataloader) | MDEwOkRpc2N1c3Npb24zMzY4OTgy | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7573#discussioncomment-883581 |
Can not log metric which is a tensor | π Bug
I get an error The metric val_acc_0_step/epoch_0 does not contain a single element thus it cannot be converted to float. when running training loop.
Full stack trace
File "main.py", line 76, in <module>
run_training()
File "main.py", line 69, in run_training
trainer.fit(model, dm)
File "/opt/conda/lib/python3.6/site-packages/mlflow/utils/autologging_utils/safety.py", line 484, in safe_patch_function
patch_function(call_original, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/mlflow/utils/autologging_utils/safety.py", line 241, in patch_with_managed_run
result = patch_function(original, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/mlflow/pytorch/_pytorch_autolog.py", line 296, in fit
return _run_and_log_function(self, original, args, kwargs)
File "/opt/conda/lib/python3.6/site-packages/mlflow/pytorch/_pytorch_autolog.py", line 288, in _run_and_log_function
result = original(self, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/mlflow/utils/autologging_utils/safety.py", line 440, in call_original
original_result = original(*og_args, **og_kwargs)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
self.dispatch()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
self.accelerator.start_training(self)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
self.training_type_plugin.start_training(trainer)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 114, in start_training
self._results = trainer.run_train()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 637, in run_train
self.train_loop.run_training_epoch()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 577, in run_training_epoch
self.trainer.run_evaluation(on_epoch=True)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 732, in run_evaluation
self.evaluation_loop.log_evaluation_step_metrics(output, batch_idx)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 336, in log_evaluation_step_metrics
self.__log_result_step_metrics(step_log_metrics, step_pbar_metrics, batch_idx)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 351, in __log_result_step_metrics
self.trainer.logger_connector.log_metrics(metrics_by_epoch, {}, step=batch_idx)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py", line 222, in log_metrics
scalar_metrics = self.trainer.metrics_to_scalars(metrics)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/logging.py", line 51, in metrics_to_scalars
f"The metric `{k}` does not contain a single element"
pytorch_lightning.utilities.exceptions.MisconfigurationException: The metric `val_acc_0_step/epoch_0` does not contain a single element thus it cannot be converted to float. Found `tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0.])`
To Reproduce
I train the model below, which is a multi-class / multi-dimensional classifier (it has multiple class categories trained in one hierarchical model). The error seems to come from the following piece (it runs in a loop so that I can specify the number of classes, which is different for different dimensions):
for i in range(len(y_true)):
self.valid_acc[i](preds[i], y_true[i])
self.log(f'val_acc_{i}', self.valid_acc[i], on_step=True, on_epoch=True)
I was getting the same error when trying to save confusion matrix.
Code sample
class OntologyTaggerModel(pl.LightningModule):
def __init__(self,
num_classes,
model_name='bert-base-cased',
learning_rate=3e-6,
**kwargs):
super().__init__()
self.save_hyperparameters()
self.learning_rate = learning_rate
self.model = BertForMulticlassSequenceClassification.from_pretrained(model_name, num_classes=num_classes)
self.valid_acc = nn.ModuleList([torchmetrics.Accuracy(average=None, num_classes=num_class) for num_class in torch.tensor(num_classes)])
self.valid_f1 = torchmetrics.F1(multiclass=True, mdmc_average='samplewise')
self.cm = nn.ModuleList([torchmetrics.ConfusionMatrix(num_classes=num_class) for num_class in torch.tensor(num_classes)])
def forward(self, *input, **kwargs):
return self.model(*input, **kwargs)
def training_step(self, batch, batch_idx):
x, y_true = batch
loss, _ = self(x, labels=y_true)
return loss
def validation_step(self, batch, batch_idx):
x, y_true = batch
_, y_preds = self(x, labels=y_true)
preds = [torch.argmax(y_pred, axis=1) for y_pred in y_preds]
for i in range(len(y_true)):
self.valid_acc[i](preds[i], y_true[i])
self.log(f'val_acc_{i}', self.valid_acc[i], on_step=True, on_epoch=True)
self.valid_f1(torch.stack(preds), torch.stack(y_true))
self.log('f1', self.valid_f1, on_step=True, on_epoch=True)
def configure_optimizers(self):
'Prepare optimizer and schedule (linear warmup and decay)'
return torch.optim.Adam(params=self.parameters(), lr=self.learning_rate)
def training_epoch_end(self, training_step_outputs):
avg_loss = torch.tensor([x['loss']
for x in training_step_outputs]).mean()
self.log('train_loss', avg_loss)
print(f'###score: train_loss### {avg_loss}')
def validation_epoch_end(self, val_step_outputs):
for i in range(len(self.valid_acc)):
acc = self.valid_acc[i].compute()
self.log(f'val_score_{i}', acc)
f1 = self.valid_f1.compute()
self.log('f1', f1)
print(f'###score: val_score### {acc}')
Expected behavior
Metrics should be rendered irrespective of dimension
Environment
pytorch-lightning==1.2.7
datasets==1.4.1
mlflow==1.16.0
torchmetrics=0.3.1
torch=1.7.1
python=3.6 | since you set average=None when you initialize the Accuracy metric, the output will be the accuracy score calculated per class. As this is a non-scalar tensor and self.log is only intended to be used with scalar tensors, you get this error. | MDEwOkRpc2N1c3Npb24zMzUxNDc0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7373#discussioncomment-697594 |
When does loss.backward() get called when accumulating gradients? | Just wondering when backward gets called when you are accumulating over (say 8) batches. I had put a breakpoint in on_after_backward() and that seemed to be only getting called on the 8th iteration of training. According to this answer, in order to save on GPU memory, it seems it's best to call loss.backward() on each iteration. | Dear @sachinruk,
This behaviour was a bug and should have been resolved on master and on_after_backward should be called after each backward call. In the case of accumulating over (say 8) batches, you should see on_after_backward called 8 times on master, and only 1 time previously.
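If you want to verify this yourself, here is a minimal, self-contained sketch (the module and dataset are illustrative, not taken from the discussion) that counts the hook calls:
import torch
from torch.utils.data import DataLoader, Dataset
from pytorch_lightning import LightningModule, Trainer

class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.data = torch.randn(length, size)
    def __len__(self):
        return len(self.data)
    def __getitem__(self, index):
        return self.data[index]

class CountingModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        self.backward_calls = 0
    def training_step(self, batch, batch_idx):
        out = self.layer(batch)
        return torch.nn.functional.mse_loss(out, torch.ones_like(out))
    def on_after_backward(self):
        # counts how often backward ran; on master this increments once per batch,
        # i.e. 8 times per optimizer step when accumulate_grad_batches=8
        self.backward_calls += 1
    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)

model = CountingModel()
trainer = Trainer(max_epochs=1, accumulate_grad_batches=8, logger=False)
trainer.fit(model, DataLoader(RandomDataset(32, 64), batch_size=8))
print(model.backward_calls)  # 8 batches in this run -> expect 8 on master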
Best,
T.C | MDEwOkRpc2N1c3Npb24zNDY2OTI3 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8460#discussioncomment-1020696 |
how to set find_unused_parameters=True? | π Bug
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your
module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the
keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward`
function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel
module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss
function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
When I use pt 1.0.8, my model is OK, but when I switch to 1.1.4, it throws this error. It seems 1.0.8 enables unused parameter detection by default, but 1.1.4 does not. How can I solve this problem?
I think switching the default of find_unused_parameters from True to False is a breaking change, but the docs don't mention it, and there are no clear instructions on how to set it to True. | No need to subclass DDPPlugin. This is enough:
trainer = pl.Trainer(plugins=[DDPPlugin(find_unused_parameters=True)])
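For completeness, a version with the imports spelled out (a sketch assuming a PL 1.2+ release, where DDPPlugin is importable from pytorch_lightning.plugins; the gpus/accelerator values are placeholders):
from pytorch_lightning import Trainer
from pytorch_lightning.plugins import DDPPlugin

trainer = Trainer(
    gpus=2,
    accelerator="ddp",
    plugins=[DDPPlugin(find_unused_parameters=True)],
)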
Which is used here:
pytorch-lightning/pytorch_lightning/plugins/ddp_plugin.py
Lines 66 to 68
in
0a50bb4
self._ddp_kwargs["find_unused_parameters"] = self._ddp_kwargs.get(
"find_unused_parameters", False
)
Sorry for the inconvenience! | MDEwOkRpc2N1c3Npb24yNzkxOTU0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5799#discussioncomment-355256 |
Multi-GPU Tensor Initialization Question | The documentation advises the usage of 'type_as' when initializing new tensors in multi-gpu settings:
https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#init-tensors-using-type-as-and-register-buffer
When you need to create a new tensor, use type_as. This will make your code scale to any arbitrary number of GPUs or TPUs with Lightning.
The example shows a case where a new tensor is initialized inside a LightningModule forward function:
def forward(self, x):
z = torch.Tensor(2, 3)
z = z.type_as(x)
Presumably x is a tensor that has already been initialized on the target gpu.
My question is what to do in the case where we want to initialize a new tensor on the target gpu, and we do not have access to a tensor that has already been initialized on the target gpu?
For example, how does one properly initialize a new tensor when it is created inside a Dataset constructor that is instantiated during LightningDataModule setup()?
class SomeDataModule(LightningDataModule):
...
def setup(self, stage: Optional[str] = None):
if stage in (None, "fit"):
dataset = SomeDataset()
...
where:
class SomeDataset(Dataset):
def __init__(self):
self.some_tensor = torch.Tensor(2,3)
Will using type_as on the new tensor initialize the data on the target gpu?
self.some_tensor = self.some_tensor.type_as(self.some_tensor)
Or is a different approach necessary? (e.g. register_buffer()) | if it's part of the dataset, it's already moved to the target device when a batch is created while iterating over the dataset. | D_kwDOCqWgoM4AOuBR | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11774#discussioncomment-2121722 |
Multiple dataloaders in training | Hi, I'm trying to integrate pytorch lightning into my current pipeline. But I'm having some difficulties in using multiple dataloaders. In my current use case, let's say I have 10 dataloaders in my pipeline, but for each training step, I'm only sampling data from 5 of them. Is it doable in pytorch lightning? I can sample from 10 dataloaders each step but that would be a waste of system IO and GPU memory. Thanks for any help! | Hey @YuShen1116,
Here is the pseudo code to get it working
from typing import List
import itertools
from pytorch_lightning.trainer.supporters import CombinedLoader
from pytorch_lightning.utilities.apply_func import apply_to_collection
class CyclingLoader(object):
def __init__(self, combined_loaders: List[CombinedLoader]):
self.combined_loaders = combined_loaders
self._dataloader_idx_cycle = itertools.cycle(range(len(combined_loaders)))
def __iter__(self):
self._iterators = apply_to_collection(self.combined_loaders, CombinedLoader, iter)
self._dataloader_idx_cycle_iter = iter(self._dataloader_idx_cycle)
return self
def __next__(self):
iterator_idx = next(self._dataloader_idx_cycle_iter)
return next(self._iterators[iterator_idx])
class MyDataModule(DataModule):
def train_dataloader(self):
ds_1, .... ds_10 = create_dataloaders()
ds_1_5 = CombinedLoader([ds_1, ... ds_5])
ds_6_10 = CombinedLoader([ds_6, ... ds_10])
return CyclingLoader([ds_1_5, ds_6_10])
class Model(LightningModule):
def training_step(self, batch, batch_idx):
if batch_idx % 2 == 0:
# batches from dataloaders from 1 - 5
elif batch_idx % 2 == 1:
# batches from dataloaders from 6 - 10 | MDEwOkRpc2N1c3Npb24zNDU5Mzc4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8410#discussioncomment-1020971 |
WGANGP not working as expected | Hi, can you please tell me if I am doing something wrong here, especially the manual update for the generator and the critic:
class Generator(nn.Module):
def __init__(self, latent_dim=64, img_shape=None):
super().__init__()
self.img_shape = img_shape
self.init_size = 8 #self.img_shape[1] // 4
self.l1 = nn.Sequential(
nn.Linear(latent_dim, 64*self.init_size**2), nn.LeakyReLU(0.2, inplace=True))
self.conv_blocks = nn.Sequential(
nn.BatchNorm2d(64),
nn.Upsample(scale_factor=2),
nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=0),
nn.BatchNorm2d(64),
nn.LeakyReLU(0.2, inplace=True),
nn.Upsample(scale_factor=2),
nn.Conv2d(in_channels=64, out_channels=32, kernel_size=3, padding=0),
nn.BatchNorm2d(32),
nn.LeakyReLU(0.2, inplace=True),
nn.Upsample(scale_factor=2),
nn.Conv2d(in_channels=32, out_channels=16, kernel_size=3, padding=0),
nn.BatchNorm2d(16),
nn.LeakyReLU(0.2, inplace=True),
nn.Upsample(scale_factor=2),
nn.Conv2d(in_channels=16, out_channels=8, kernel_size=3, padding=1),
nn.BatchNorm2d(8),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(in_channels=8, out_channels=img_shape[0], kernel_size=3, padding=1),
nn.Tanh()
)
def forward(self, z):
out = self.l1(z)
out = out.view(out.shape[0], 64, self.init_size, self.init_size)
img = self.conv_blocks(out)
return img
class Critic(nn.Module):
def __init__(self, img_shape):
super().__init__()
self.disc = nn.Sequential(
nn.Conv2d(in_channels=img_shape[0], out_channels=16, kernel_size=4, stride=2),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(16, 32, kernel_size=4, stride=2),
nn.BatchNorm2d(32),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(32, 64, kernel_size=4, stride=2),
nn.BatchNorm2d(64),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(64, 128, kernel_size=4, stride=2),
nn.BatchNorm2d(128),
nn.LeakyReLU(0.2, inplace=True),
)
# The height and width of downsampled image
#
ds_size = 2 ** 4
self.adv_layer = nn.Sequential(nn.Linear(128 * ds_size, 1))
def forward(self, img):
out = self.disc(img)
# import pdb; pdb.set_trace()
out = out.view(out.shape[0], -1)
validity = self.adv_layer(out)
return validity
class WGANGP(pl.LightningModule):
def __init__(self, latent_dim=128, lr=0.0002, lambda_pen=10, crit_repeats=5):
super().__init__()
self.save_hyperparameters()
self.latent_dim = latent_dim
self.lr = lr
self.lambda_pen = lambda_pen
self.crit_repeats = crit_repeats
self.b1 = 0.0
self.b2 = 0.9
### initializing networks
img_shape = (1, 100, 100)
self.generator = Generator(self.latent_dim, img_shape)
self.critic = Critic(img_shape)
# application of weight
self.generator.apply(self.weights_init)
self.critic.apply(self.weights_init)
#
self.validation_z = torch.randn(10, self.latent_dim)
self.example_input_array = torch.zeros(10, self.latent_dim)
# Important: This property activates manual optimization.
self.automatic_optimization = False # True - Auto // # False - Manual update
def forward(self, z):
return self.generator(z)
### weight initialization
def weights_init(self, m):
if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
torch.nn.init.normal_(m.weight, 0.0, 0.02)
if isinstance(m, nn.BatchNorm2d):
torch.nn.init.normal_(m.weight, 0.0, 0.02)
torch.nn.init.constant_(m.bias, 0)
if isinstance(m, nn.Linear):
torch.nn.init.normal_(m.weight, 0.0, 0.02)
torch.nn.init.constant_(m.bias, 0)
def training_step(self, batch, batch_idx):
imgs = batch
# # sample noise
# z = torch.randn(imgs.shape[0], self.latent_dim)
# z = z.type_as(imgs)
# optimizers, manual access
g_opt, c_opt = self.optimizers()
# update critic
mean_iteration_critic_loss = 0
for _ in range(self.crit_repeats):
c_opt.zero_grad()
# sample noise
z = torch.randn(imgs.shape[0], self.latent_dim).type_as(imgs)
# fake image
fake = self(z)
crit_fake_pred = self.critic(fake.detach())
crit_real_pred = self.critic(imgs)
# eps
epsilon = torch.rand(len(imgs), 1, 1, 1, device=self.device, requires_grad=True)
# gradient penalty
gp = self.gradient_penalty(self.critic, imgs, fake, epsilon)
# critic loss
critic_loss = torch.mean(crit_fake_pred) - torch.mean(crit_real_pred) + self.lambda_pen * gp
# Keep track of the average critic loss in this batch
mean_iteration_critic_loss += critic_loss.item() / crit_repeats
# Update gradients
self.manual_backward(critic_loss)
# Update optimizer
c_opt.step()
# log critic average loss
self.log('c_loss_mean', mean_iteration_critic_loss, prog_bar=True)
# update generator
g_opt.zero_grad()
# sample new noise
z_new = torch.randn(imgs.shape[0], self.latent_dim).type_as(imgs)
# new fake image
fake_new = self(z_new)
crit_fake_pred = self.critic(fake_new)
# generator loss
gen_loss = -torch.mean(crit_fake_pred)
# Update gradients
self.manual_backward(gen_loss)
# Update optimizer
g_opt.step()
# log generator average loss
self.log('g_loss', gen_loss, prog_bar=True)
def gradient_penalty(self, crit, real, fake, epsilon):
# mix/interpolate images
mixed_images = real * epsilon + fake * (1 - epsilon)
# Calculate the critic's scores on the mixed images
mixed_scores = crit(mixed_images)
# Take the gradient of the scores with respect to the images
gradient = torch.autograd.grad(
inputs=mixed_images,
outputs=mixed_scores,
grad_outputs=torch.ones_like(mixed_scores),
create_graph=True,
retain_graph=True,
)[0]
# Flatten the gradients so that each row captures one image
gradient = gradient.view(len(gradient), -1)
# Calculate the magnitude of every row
gradient_norm = gradient.norm(2, dim=1)
# Penalize the mean squared distance of the gradient norms from 1
gradient_penalty = torch.mean((gradient_norm - 1) ** 2)
return gradient_penalty
def configure_optimizers(self):
opt_g = torch.optim.Adam(self.generator.parameters(), lr=self.lr, betas=(self.b1, self.b2))
opt_c = torch.optim.Adam(self.critic.parameters(), lr=self.lr, betas=(self.b1, self.b2))
return opt_g, opt_c
def on_epoch_end(self):
z = self.validation_z.to(self.device)
# log sampled images
sample_imgs = self(z)
grid = torchvision.utils.make_grid(sample_imgs)
self.logger.experiment.add_image('generated_images', grid, self.current_epoch)
# defining the hyperparameters
n_epochs = 1000
z_dim = 50
batch_size = 64
lr = 0.0002
c_lambda = 10
crit_repeats = 5
desired output
current output
SOLVED | Solved. | MDEwOkRpc2N1c3Npb24zMzg3NTAx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7754#discussioncomment-802604 |
Emulating multiple devices with a single GPU | Hello,
I have a single GPU, but I would like to spawn multiple replicas on that single GPU and train a model with DDP. Of course, each replica would have to use a smaller batch size in order to fit in memory. (For my use case, I am not interested in having a single replica with a large batch size).
I tried to pass --gpus "0,0" to the Lightning Trainer, and it managed to spawn two processes on the same GPU:
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/2
But in the end it crashed with RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, invalid usage.
Please, is there any way to split a single GPU into multiple replicas with Lightning?
Thanks!
P.S.: Ray has a really nice support for fractional GPUs: https://docs.ray.io/en/master/using-ray-with-gpus.html#fractional-gpus. I've never used them with Lightning, but maybe it could be a workaround? | For reference: it seems to be possible when the backend is gloo instead of nccl. See discussion here: #8630 (reply in thread). | MDEwOkRpc2N1c3Npb24zNDg5NzQ0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8630#discussioncomment-1127299 |
DDP specifics: Single node function execution, test loss sync, num_workers, sync_batchnorm, precision | β Questions and Help
What is your question?
This is about DDP specifics and about the handling of functions in a script that we only want executed once (not for every GPU).
I think they are missing from the doc and I couldn't find answers elsewhere. I apologize if they have been covered already.
Questions:
Does the regular torch.save(model.state_dict(), path) call work normally or does DDP complicate things for multiple GPUs?
Does DDP run all the functions of a script for every GPU? For example, in the following script will the delete_and_remake_folder function be executed multiple times (that would result in conflicts)? Is there a way to specify functions to be run only once?
Am I correct that sync_batchnorm=True and precision=16 work in DDP?
Does the trainer.test() function automatically aggregate results accross devices or is it required to set self.log(loss, sync_dist=True) in the model?
Am I correct in assuming that, if we set num_workers=X in a DataLoader, the actual CPU core usage will be X*N for N GPUs?
Questions 1-4 are summarized in the following script and whether it works/can work.
def main():
delete_and_remake_folder() # I only want to run once
model = Model()
trainer = Trainer(gpus = 8, backend='ddp', sync_batchnorm=True, precision=16)
trainer.fit()
trainer.test()
torch.save(model.pt_model.state_dict(), save_dir) # I probably only want to run once (?) | yes, but you might run into the issue in your 2nd question
yes, yes if self.global_rank == 0: do stuff
yes
no, yes (think of it like another val loop)
yes
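To illustrate points 2 and 4 above, a hedged sketch (the folder handling stands in for delete_and_remake_folder from the question; the layer and loss are illustrative):
import os
import shutil
import torch
from pytorch_lightning import LightningModule

class Model(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
    def setup(self, stage=None):
        # point 2: only the global-rank-0 process runs the one-time folder work
        if self.global_rank == 0:
            shutil.rmtree("outputs", ignore_errors=True)
            os.makedirs("outputs")
    def training_step(self, batch, batch_idx):
        out = self.layer(batch)
        return torch.nn.functional.mse_loss(out, torch.ones_like(out))
    def test_step(self, batch, batch_idx):
        out = self.layer(batch)
        loss = torch.nn.functional.mse_loss(out, torch.ones_like(out))
        # point 4: sync_dist=True reduces the logged value across all processes
        self.log("test_loss", loss, sync_dist=True)
    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)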
I'll close this for now, if you have more questions please use our forums! https://forums.pytorchlightning.ai/ | MDEwOkRpc2N1c3Npb244MjI5Nw== | https://github.com/PyTorchLightning/pytorch-lightning/discussions/4387#discussioncomment-238371 |
What if I load data in __init__ function of LightningDataModule | Hi,
I see the doc describing the functions of LightningDataModule.
https://pytorch-lightning.readthedocs.io/en/latest/notebooks/lightning_examples/datamodules.html#Defining-The-MNISTDataModule
Here is my thinking. A variable such as a transform can be defined in the init function and later shared across different GPUs. Theoretically, if we load data in init, the data should also be transferable to different GPUs in the same way. In the case of a single machine with multiple GPUs, the data will be copied multiple times in memory. In the case of multiple machines, the data will be broadcast through the network from the main node to the other nodes. Broadcasting large data through the network may have efficiency issues, which is why we had better load data in the setup function.
Please let me know whether my analysis is correct or not. Basically, I am not clear about how the variables (i.e. self.something) defined in init are shared across multiple GPUs/machines. Thanks! | @zhiqiangdon
lightning just runs prepare_data on the main process before the distributed processes actually start, so there is no blocking happening behind the scenes.
To tackle this issue we have prepare_data_per_node. A node is just a machine. If they share the disk then prepare_data_per_node should be set to False.
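For reference, the usual split of responsibilities looks roughly like this (a minimal sketch with illustrative file names, assuming a shared disk):
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split
from pytorch_lightning import LightningDataModule

class MyDataModule(LightningDataModule):
    def __init__(self, path="data.pt"):
        super().__init__()
        self.path = path  # keep only cheap, picklable state here
    def prepare_data(self):
        # runs on a single process: download/filter/write to disk, do not assign state to self
        torch.save(torch.randn(1000, 32), self.path)
    def setup(self, stage=None):
        # runs on every process: read from disk and split
        data = TensorDataset(torch.load(self.path))
        self.train_set, self.val_set = random_split(data, [800, 200])
    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=32, shuffle=True)
    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=32)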
The user runs the __init__ function when they initialize the DataModule; lightning just sends it across devices. | D_kwDOCqWgoM4AOI5p | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10772#discussioncomment-1707638
Intel deep learning boost compatibility | Hi,
I was wondering if algorithms implemented with pytorch lightning can take advantage of deep learning boost hardware:
https://www.intel.com/content/www/us/en/artificial-intelligence/deep-learning-boost.html
https://github.com/oneapi-src/oneDNN
As far as I know it should be compatible with vanilla pytorch | If there is a speed-up on vanilla PyTorch ops, there should be one when using Lightning | MDEwOkRpc2N1c3Npb24zMzcwNzMw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7586#discussioncomment-752665 |
What is the relationship between accumulate_grad_batches and lr_scheduler? | I wrote the following code:
def configure_optimizers(self):
......
return [
{
'optimizer': optimizer,
'lr_scheduler': {
'scheduler': scheduler,
'interval': 'step',
'frequency': 1
}
}
]
I chose step as the interval. Actually, I don't understand what step means!!!
In my opinion, step may mean a batch? But when I set the Trainer parameter accumulate_grad_batches=5, will the lr_scheduler still execute after every batch, or will it only execute after every accumulate_grad_batches batches? If the answer is the latter, does step mean a call to optimizer.step()?
(I know accumulate_grad_batches can affect the optimizer, but I don't know whether it can affect the lr_scheduler.) | Yes, step means an optimization step, and accumulate_grad_batches is taken into consideration when calling the lr_scheduler.
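If you want to see this concretely, here is a minimal, self-contained sketch (module, dataset and numbers are illustrative) that counts how often the scheduler is stepped:
import torch
from torch.utils.data import DataLoader, Dataset
from pytorch_lightning import LightningModule, Trainer

class RandomDataset(Dataset):
    def __len__(self):
        return 20
    def __getitem__(self, index):
        return torch.randn(32)

class TinyModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        self.scheduler_steps = 0
    def training_step(self, batch, batch_idx):
        out = self.layer(batch)
        return torch.nn.functional.mse_loss(out, torch.ones_like(out))
    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
        def record(step):
            self.scheduler_steps = step  # remember the latest scheduler step index
            return 1.0  # keep the lr constant, we only want to count calls
        scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=record)
        return [optimizer], [{"scheduler": scheduler, "interval": "step", "frequency": 1}]

model = TinyModel()
# 20 batches with accumulate_grad_batches=5 -> 4 optimizer steps -> scheduler stepped 4 times
trainer = Trainer(max_epochs=1, accumulate_grad_batches=5, logger=False)
trainer.fit(model, DataLoader(RandomDataset(), batch_size=1))
print(model.scheduler_steps)  # 4, not 20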
Ref code:
pytorch-lightning/pytorch_lightning/loops/epoch/training_epoch_loop.py
Lines 434 to 437
in
8ea39d2
def update_lr_schedulers(self, interval: str, update_plateau_schedulers: bool) -> None:
"""updates the lr schedulers based on the given interval."""
if interval == "step" and self._should_accumulate():
return | D_kwDOCqWgoM4AOGUx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10651#discussioncomment-1678059 |
Code not printing values in trained_epoch_end | My code is not printing print('train acc loss',acc,loss) in trained_epoch_end but it's printing print('val acc loss',acc,loss) in validation_epoch_end
class Model(LightningModule):
def __init__(self):
super(Model,self).__init__()
self.model=ResneT(21)
self.lr=1e-3
self.bs=128
self.worker=6
self.acc=torchmetrics.Accuracy()
self.creterion=nn.BCEWithLogitsLoss()
self.scheduler='lambda'
def forward(self,x):
x=self.model(x)
return x
def configure_optimizers(self):
opt=torch.optim.AdamW(params=self.parameters(),lr=self.lr )
return opt
def train_dataloader(self):
dataset=DataReader(train_df)
dataloader=DataLoader(dataset,batch_size=self.bs,num_workers=self.worker,shuffle=True,
pin_memory=True,collate_fn=collate_fn)
return dataloader
def training_step(self,batch,batch_idx):
signal,label=batch
out=self(signal.float())
loss=self.creterion(out.flatten(),label.float().flatten())
acc=self.acc(out.flatten(),label.long().flatten())
return {'loss':loss,'acc':acc}
def trained_epoch_end(self,outputs):
acc=torch.stack([x['acc'] for x in outputs]).mean().detach().cpu().numpy().round(2)
loss=torch.stack([x['loss'] for x in outputs]).mean().detach().cpu().numpy().round(2)
self.log('train acc',acc)
self.log('train loss',loss)
print('train acc loss',acc,loss)
def val_dataloader(self):
dataset=DataReader(val_df)
dataloader=DataLoader(dataset,batch_size=self.bs,num_workers=self.worker,shuffle=False,
pin_memory=True,
collate_fn=collate_fn)
return dataloader
def validation_step(self,batch,batch_idx):
signal,label=batch
out=self(signal.float())
loss=self.creterion(out.flatten(),label.float().flatten())
acc=self.acc(out.flatten(),label.long().flatten())
return {'loss':loss,'acc':acc}
def validation_epoch_end(self,outputs):
acc=torch.stack([x['acc'] for x in outputs]).mean().detach().cpu().numpy().round(2)
loss=torch.stack([x['loss'] for x in outputs]).mean().detach().cpu().numpy().round(2)
print('val acc loss',self.current_epoch,acc,loss)
self.log('val acc',acc)
self.log('val loss',loss) | You need to implement training_epoch_end, not trained_epoch_end:
pytorch-lightning/pytorch_lightning/core/lightning.py
Line 689
in
1515ef9
def training_epoch_end(self, outputs: EPOCH_OUTPUT) -> None: | D_kwDOCqWgoM4AOwkL | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11851#discussioncomment-2152757 |
Keep only the best and the latest artifacts in wandb logger | Hey,
I am not sure if I can currently keep only the best and latest models as a wandb artifact using the WandbLogger? That is, I am looking for a behavior similar to log_model='all', but which keeps only the latest and best models and deletes previous checkpoints from the wandb artifacts of the experiment.
My checkpoints weigh about 1GB and I don't want to keep the entire history of checkpoints with the log_model='all' flag, but rather only the best and the latest models. I thought about inheriting from the WandbLogger and following the guideline here:
https://gitbook-docs.wandb.ai/guides/artifacts/api#cleaning-up-unused-versions
Any thoughts?
Maybe this should be a feature request? | Hey @ohayonguy
Given the docs of WandbLogger I think you can just set log_model=True. This will then upload the checkpoints at the end of training. And if you have set save_top_k=n it will only upload the best n ones. | MDEwOkRpc2N1c3Npb24zNTU3MTEw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9342#discussioncomment-1286613 |
How to disable logging temporarily? | My LightningModule.training_step includes calls to self.log and finally returns the loss value. What is the best way to run training_step outside of a Trainer context, for debugging purposes (such as manual gradient inspection, etc)? Without the instrumentation by Trainer, logger is not defined and self.log calls cause an exception. I was trying to mock the logger to turn them to no-ops, but I wasn't successful. | it was updated to warning recently: #9733
might work with recent release or master | D_kwDOCqWgoM4AN4Z4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10049#discussioncomment-1508216 |
Access a registered buffer is very slow | Hello,
I implemented MoCo in Pytorch lightning. I was surprised to see that my lightning version was slower than Pytorch's and I ran the profiler to check which function is slow. I can't share all my code but here are the relevant parts:
class MoCoModel(LightningModule):
def __init__(
...
) -> None:
...
self.register_buffer('queue', torch.randn(queue.feature_dim, queue.size))
self.queue = nn.functional.normalize(self.queue, dim=0)
self.register_buffer('queue_ptr', torch.zeros(1, dtype=torch.long))
@torch.no_grad()
def _update_queue(self, x: Tensor) -> None:
x = self.concat_all_gather_without_backprop(x)
#batch_size = x.shape[0]
batch_size = self._get_batch_size(x)
# for simplicity
ptr = self._get_ptr()
#ptr = int(self.queue_ptr)
self._assert(batch_size)
#assert self.queue_size % batch_size == 0
# replace the keys at ptr (dequeue and enqueue)
x = self._transpose(x)
self._assign_in_queue(x, ptr, batch_size)
#self.queue[:, ptr: ptr + batch_size] = x.T
# move pointer
ptr = self._compute_ptr(ptr, batch_size)
self._assign_ptr(ptr)
#ptr = (ptr + batch_size) % self.queue_size
def _get_batch_size(self, x):
return x.shape[0]
def _get_ptr(self):
return int(self.queue_ptr)
def _assert(self, batch_size):
assert self.queue_size % batch_size == 0
def _assign_ptr(self, ptr):
self.queue_ptr[0] = ptr
def _compute_ptr(self, batch_size, ptr):
return (ptr + batch_size) % self.queue_size
def _transpose(self, x):
return x.T
def _assign_in_queue(self, x, ptr, batch_size):
self.queue[:, ptr: ptr + batch_size] = x
def training_step(self, batch):
...
self._update_queue(k)
Here is the output of running simple profiler:
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
--------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 53.595 | 100 % |
--------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 45.224 |1 | 45.224 | 84.381 |
run_training_batch | 0.21673 |195 | 42.262 | 78.854 |
optimizer_step_with_closure_0 | 0.20378 |195 | 39.738 | 74.145 |
training_step_and_backward | 0.19978 |195 | 38.957 | 72.688 |
model_forward | 0.1909 |195 | 37.225 | 69.457 |
training_step | 0.19077 |195 | 37.201 | 69.411 |
backward | 0.0083673 |195 | 1.6316 | 3.0443 |
on_train_batch_end | 0.0077772 |195 | 1.5166 | 2.8296 |
get_train_batch | 0.0034326 |196 | 0.6728 | 1.2553 |
fetch_next_train_batch | 0.0034203 |196 | 0.67037 | 1.2508 |
zero_grad | 0.00049274 |195 | 0.096084 | 0.17928 |
configure_optimizers | 0.093719 |1 | 0.093719 | 0.17486 |
training_batch_to_device | 0.00028381 |195 | 0.055342 | 0.10326 |
on_train_batch_start | 0.00018134 |195 | 0.03536 | 0.065977 |
on_train_start | 0.033906 |1 | 0.033906 | 0.063264 |
on_pretrain_routine_start | 0.006531 |1 | 0.006531 | 0.012186 |
on_batch_start | 3.062e-05 |195 | 0.0059708 | 0.011141 |
on_after_backward | 3.0163e-05 |195 | 0.0058817 | 0.010974 |
on_before_optimizer_step | 2.989e-05 |195 | 0.0058285 | 0.010875 |
on_batch_end | 2.9087e-05 |195 | 0.005672 | 0.010583 |
on_before_zero_grad | 2.8804e-05 |195 | 0.0056167 | 0.01048 |
on_before_backward | 2.6982e-05 |195 | 0.0052616 | 0.0098172 |
on_train_epoch_end | 0.0014064 |1 | 0.0014064 | 0.0026241 |
training_step_end | 4.9198e-06 |195 | 0.00095937 | 0.00179 |
on_train_epoch_start | 0.00025167 |1 | 0.00025167 | 0.00046957 |
on_train_end | 0.00017067 |1 | 0.00017067 | 0.00031844 |
on_before_accelerator_backend_setup | 6.968e-05 |1 | 6.968e-05 | 0.00013001 |
setup | 5.0209e-05 |1 | 5.0209e-05 | 9.3682e-05 |
prepare_data | 4.4779e-05 |1 | 4.4779e-05 | 8.355e-05 |
on_fit_end | 3.892e-05 |1 | 3.892e-05 | 7.2618e-05 |
on_epoch_start | 3.332e-05 |1 | 3.332e-05 | 6.2169e-05 |
on_pretrain_routine_end | 3.009e-05 |1 | 3.009e-05 | 5.6143e-05 |
on_epoch_end | 2.741e-05 |1 | 2.741e-05 | 5.1142e-05 |
on_configure_sharded_model | 2.556e-05 |1 | 2.556e-05 | 4.7691e-05 |
on_fit_start | 2.0869e-05 |1 | 2.0869e-05 | 3.8938e-05 |
teardown | 1.9379e-05 |1 | 1.9379e-05 | 3.6158e-05 |
configure_sharded_model | 6.5197e-06 |1 | 6.5197e-06 | 1.2165e-05 |
configure_callbacks | 5.16e-06 |1 | 5.16e-06 | 9.6277e-06 |
on_train_dataloader | 4.2003e-06 |1 | 4.2003e-06 | 7.837e-06 |
As we can see, a large amount of time is spent in training_step; here is the output of the advanced profiler for this function:
Profile stats for: training_step rank: 0
1065072 function calls (862519 primitive calls) in 37.086 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
195 0.001 0.000 37.082 0.190 accelerator.py:210(training_step)
195 0.000 0.000 37.079 0.190 ddp.py:438(training_step)
31980/195 0.053 0.000 37.079 0.190 module.py:1096(_call_impl)
195 0.015 0.000 37.078 0.190 distributed.py:852(forward)
195 0.002 0.000 36.629 0.188 base.py:76(forward)
195 0.009 0.000 36.624 0.188 moco.py:201(training_step)
1170/390 0.006 0.000 34.429 0.088 grad_mode.py:25(decorate_context)
195 0.002 0.000 32.216 0.165 moco.py:83(_update_queue)
195 32.171 0.165 32.171 0.165 moco.py:109(_get_ptr)
390 0.000 0.000 3.942 0.010 resnet.py:268(forward)
...
195 0.008 0.000 0.008 0.000 moco.py:124(_assign_in_queue)
195 0.005 0.000 0.006 0.000 moco.py:115(_assign_ptr)
195 0.002 0.000 0.002 0.000 moco.py:121(_transpose)
195 0.001 0.000 0.001 0.000 gather.py:44(concat_all_gather_without_backprop)
195 0.000 0.000 0.000 0.000 moco.py:106(_get_batch_size)
...
The function _update_queue takes very long, and the function taking the most time is _get_ptr, which should be really fast in comparison with the forward passes or the computation of the MoCo loss. I looked at the Lightning Bolts implementation, which uses the same kind of operations, so I don't really understand why it is this slow.
I tested with the DDP and SingleDevice strategies, which resulted in the same kind of slowdown on a SLURM cluster environment. | Fixed it: lightning is now as fast as my previous implementation. The problem was elsewhere, but I didn't detect it using the profiler because GPU computation is asynchronous and the GPUs were not synchronized during profiling. | D_kwDOCqWgoM4AOgPG | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11493#discussioncomment-2004190
Can a pl.LightningModule be used from native pytorch? | Can a pl.LightningModule be used from native pytorch?
We are writing a library, and the use of pl.LightningModule for our modules is convenient, particularly because each module knows its device.
However, our clients might be using native pytorch, and want to include our LightningModule as an nn.Module in their code.
FWIW, our LightningModule currently is used purely for forward inference and currently passes no gradients, nor is it trainable.
Are there any interoperability pitfalls in having a LightningModule be an nn.Module in a pure pytorch codebase?
Are the benefits gained by using pl.LightningModules in our codebase no longer relevant when called from a pure pytorch codebase, particularly given that we pass back no gradients? | It should work as a native nn.Module; it actually subclasses it. If you find any pitfalls, you can assume it's a bug, so feel free to open issues about them.
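For example, a minimal sketch of calling a LightningModule from a plain PyTorch codebase (names are illustrative):
import torch
from pytorch_lightning import LightningModule

class Backbone(LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(32, 2)
    def forward(self, x):
        return self.net(x)

backbone = Backbone()  # it is an nn.Module, so everything below is plain PyTorch
backbone.eval()
with torch.no_grad():
    out = backbone(torch.randn(4, 32))
print(out.shape)  # torch.Size([4, 2])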
Depends on which features of the LightningModule you value the most. The benefits left would be mainly organization and such. | MDEwOkRpc2N1c3Npb24zMzE1NTUz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6953#discussioncomment-618010 |
"resume from checkpoint" lead to CUDA out of memory | When I use βresume from checkpointβ,
there is a "CUDA out of memory" problem,
when using torch.load(), setting "map_location" to "cpu" can solve this problem,
in "resume from checkpoint" scenario, what should I do? | I solved the problem after setting the strategy to 'ddp'. | D_kwDOCqWgoM4AOkGz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11563#discussioncomment-2031552 |
Loading Lightning model in PyTorch | How to load a model saved in PyTorch Lightning in Vanilla PyTorch? | check this out: https://pytorch-lightning.readthedocs.io/en/latest/common/production_inference.html#id1 | D_kwDOCqWgoM4AO2uC | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12041#discussioncomment-2228442 |
String "best" at argument "ckpt_path" for test method of Trainer class | Hi,
The test method of the Trainer class has the input argument ckpt_path. According to the docs:
ckpt_path (Optional[str]) – Either best or path to the checkpoint you wish to test. If None and the model instance was passed, use the current weights. Otherwise, the best model from the previous trainer.fit call will be loaded.
Also, in the documentation of PyTorch Lightning for the test set, using Trainer, there is the following:
# run full training
trainer.fit(model)
# (1) load the best checkpoint automatically (lightning tracks this for you)
trainer.test(ckpt_path="best")
My question is, according to what the "best" checkpoint is decided? That is, is the "best" decided on maximising or minimising some value? What would be that value? Can someone configure the policy (i.e. minimising or maximising) and the value? How one should use this "best" string?
Links for reference:
Test set documentation
Test method of Trainer class documentation
P.S. Please note that I'm not referring to using the ModelCheckpoint callback, but explicitly to the above, where it seems that the ModelCheckpoint callback is not used. | it actually relies on the ModelCheckpoint callback only
pytorch-lightning/pytorch_lightning/trainer/trainer.py
Lines 1268 to 1280
in
b3e9dff
if ckpt_path == "best":
# if user requests the best checkpoint but we don't have it, error
if not self.checkpoint_callback.best_model_path:
if self.fast_dev_run:
raise MisconfigurationException(
f"You cannot execute `.{fn}()` with `fast_dev_run=True` unless you do"
f" `.{fn}(ckpt_path=PATH)` as no checkpoint path was generated during fitting."
)
raise MisconfigurationException(
f'`.{fn}(ckpt_path="best")` is set but `ModelCheckpoint` is not configured to save the best model.'
)
# load best weights
ckpt_path = self.checkpoint_callback.best_model_path
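In other words, "best" resolves to the best_model_path tracked by the ModelCheckpoint you configured, so the min/max policy is exactly the monitor/mode you passed to it. A minimal hedged sketch (model stands for the LightningModule from the docs snippet above and is assumed to log "val_loss"):
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# "best" = the checkpoint with the lowest monitored val_loss, because mode="min"
checkpoint_cb = ModelCheckpoint(monitor="val_loss", mode="min", save_top_k=1)
trainer = Trainer(max_epochs=10, callbacks=[checkpoint_cb])
trainer.fit(model)
trainer.test(ckpt_path="best")  # loads checkpoint_cb.best_model_path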
so if there is no ModelCheckpoint callback it will raise an error | D_kwDOCqWgoM4ANxgv | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9836#discussioncomment-1436028 |
Loading from checkpoints re-downloads pre-trained BERT model | I am defining a simple multi-class BERT classification model and then training it using pytorch-lightning. The code is in https://colab.research.google.com/drive/1os9mz7w7gmLBL_ZDvZ9K1saz9UA3rmD7?usp=sharing under class BertForMulticlassSequenceClassification(BertPreTrainedModel). The issue is that after training when I am loading the classifier model model = ClassTaggerModel.load_from_checkpoint(checkpoint_file) I get
Some weights of BertForMulticlassSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifiers.0.weight', 'classifiers.1.bias', 'classifiers.0.bias', 'classifiers.1.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
The reason is probably that the pl.LightningModule module has the transformers from_pretrained function, which would normally download weights from huggingface. This is undesirable behaviour when loading from the trained checkpoint. Is there any feature in PyTorch Lightning that can help having different logic for these two cases (training vs loading)? Thanks! | It's because lightning instantiates the LightningModule and then loads the weights using load_from_checkpoint, and since you have HFModel.from_pretrained in the init it will load the pretrained weights every time. There is a way around this.
class HFLightningModule(LightningModule):
def __init__(self, ..., model_name=None)
if model_name is not None:
self.model = HFModel.from_pretrained(model_name, ...)
else:
self.model = HFModel(config, num_classes)
model = HFLightningModule(..., model_name='bert-base-cased')
trainer.fit(model, ...)
model = HFLightningModule.load_from_checkpoint(...)
Although there might be a better solution. | MDEwOkRpc2N1c3Npb24zNTQ4NzY1 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9236#discussioncomment-1262041 |
DDP with shared file system | Is it possible to use a shared filesystem for DDP init_group in pytorch lightning? If so, what should I do to the Trainer?
Thanks! | I'm not quite sure about what you want to do, but if it's about customizing DDP, you can do the following:
from pytorch_lightning.plugins import DDPPlugin
class MyCustomDDP(DDPPlugin):
...
trainer = Trainer(plugins=[MyCustomDDP()]) | MDEwOkRpc2N1c3Npb24zNDY5NjEy | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8496#discussioncomment-1035391 |
Can I set the epoch/step initial value to 1? | Can I set the epoch/step initial value to 1? Right now the initial default is 0, which feels awkward when viewing TensorBoard.
In addition, can I bring the training and validation plots into one TensorBoard plot? | Dear @yuys0602,
We are starting at 0 as we are counting the number of epoch and before starting, you actually haven't completed any epoch.
If this is really an issue for you, I believe there might be a hacky solution.
I believe this should be working if you are using master. @carmocca Could you confirm ?
def training_step(self, batch, batch_idx):
self.log("epoch", self.trainer.current_epoch + 1)
Concerning "can I bring the training and validation plots into one TensorBoard plot?":
Unfortunately, I believe this is a feature that is either provided or not by the logger itself, and Lightning can't do much about it.
Best,
T.C | MDEwOkRpc2N1c3Npb24zNDIxNDU1 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8054#discussioncomment-896526 |
Lightning CLI, PyTorch Profiler, Improved Early Stopping | [1.3.0] - 2021-05-06
Added
Added support for the EarlyStopping callback to run at the end of the training epoch (#6944)
Added synchronization points before and after setup hooks are run (#7202)
Added a teardown hook to ClusterEnvironment (#6942)
Added utils for metrics to scalar conversions (#7180)
Added utils for NaN/Inf detection for gradients and parameters (#6834)
Added more explicit exception message when trying to execute trainer.test() or trainer.validate() with fast_dev_run=True (#6667)
Added LightningCLI class to provide simple reproducibility with minimum boilerplate training CLI (#4492, #6862, #7156, #7299)
Added gradient_clip_algorithm argument to Trainer for gradient clipping by value (#6123).
Added a way to print to terminal without breaking up the progress bar (#5470)
Added support to checkpoint after training steps in ModelCheckpoint callback (#6146)
Added TrainerStatus.{INITIALIZING,RUNNING,FINISHED,INTERRUPTED} (#7173)
Added Trainer.validate() method to perform one evaluation epoch over the validation set (#4948)
Added LightningEnvironment for Lightning-specific DDP (#5915)
Added teardown() hook to LightningDataModule (#4673)
Added auto_insert_metric_name parameter to ModelCheckpoint (#6277)
Added arg to self.log that enables users to give custom names when dealing with multiple dataloaders (#6274)
Added teardown method to BaseProfiler to enable subclasses defining post-profiling steps outside of __del__ (#6370)
Added setup method to BaseProfiler to enable subclasses defining pre-profiling steps for every process (#6633)
Added no return warning to predict (#6139)
Added Trainer.predict config validation (#6543)
Added AbstractProfiler interface (#6621)
Added support for including module names for forward in the autograd trace of PyTorchProfiler (#6349)
Added support for the PyTorch 1.8.1 autograd profiler (#6618)
Added outputs parameter to callback's on_validation_epoch_end & on_test_epoch_end hooks (#6120)
Added configure_sharded_model hook (#6679)
Added support for precision=64, enabling training with double precision (#6595)
Added support for DDP communication hooks (#6736)
Added artifact_location argument to MLFlowLogger which will be passed to the MlflowClient.create_experiment call (#6677)
Added model parameter to precision plugins' clip_gradients signature (#6764, #7231)
Added is_last_batch attribute to Trainer (#6825)
Added LightningModule.lr_schedulers() for manual optimization (#6567)
Added MpModelWrapper in TPU Spawn (#7045)
Added max_time Trainer argument to limit training time (#6823)
Added on_predict_{batch,epoch}_{start,end} hooks (#7141)
Added new EarlyStopping parameters stopping_threshold and divergence_threshold (#6868)
Added debug flag to TPU Training Plugins (PT_XLA_DEBUG) (#7219)
Added new UnrepeatedDistributedSampler and IndexBatchSamplerWrapper for tracking distributed predictions (#7215)
Added trainer.predict(return_predictions=None|False|True) (#7215)
Added BasePredictionWriter callback to implement prediction saving (#7127)
Added trainer.tune(scale_batch_size_kwargs, lr_find_kwargs) arguments to configure the tuning algorithms (#7258)
Added tpu_distributed check for TPU Spawn barrier (#7241)
Added device updates to TPU Spawn for Pod training (#7243)
Added warning when missing Callback and using resume_from_checkpoint (#7254)
DeepSpeed single file saving (#6900)
Added Training type Plugins Registry (#6982, #7063, #7214, #7224)
Add ignore param to save_hyperparameters (#6056)
Changed
Changed LightningModule.truncated_bptt_steps to be property (#7323)
Changed EarlyStopping callback from by default running EarlyStopping.on_validation_end if only training is run. Set check_on_train_epoch_end to run the callback at the end of the train epoch instead of at the end of the validation epoch (#7069)
Renamed pytorch_lightning.callbacks.swa to pytorch_lightning.callbacks.stochastic_weight_avg (#6259)
Refactor RunningStage and TrainerState usage (#4945, #7173)
Added RunningStage.SANITY_CHECKING
Added TrainerFn.{FITTING,VALIDATING,TESTING,PREDICTING,TUNING}
Changed trainer.evaluating to return True if validating or testing
Changed setup() and teardown() stage argument to take any of {fit,validate,test,predict} (#6386)
Changed profilers to save separate report files per state and rank (#6621)
The trainer no longer tries to save a checkpoint on exception or run callback's on_train_end functions (#6864)
Changed PyTorchProfiler to use torch.autograd.profiler.record_function to record functions (#6349)
Disabled lr_scheduler.step() in manual optimization (#6825)
Changed warnings and recommendations for dataloaders in ddp_spawn (#6762)
pl.seed_everything will now also set the seed on the DistributedSampler (#7024)
Changed default setting for communication of multi-node training using DDPShardedPlugin (#6937)
trainer.tune() now returns the tuning result (#7258)
LightningModule.from_datasets() now accepts IterableDataset instances as training datasets. (#7503)
Changed resume_from_checkpoint warning to an error when the checkpoint file does not exist (#7075)
Automatically set sync_batchnorm for training_type_plugin (#6536)
Allowed training type plugin to delay optimizer creation (#6331)
Removed ModelSummary validation from train loop on_trainer_init (#6610)
Moved save_function to accelerator (#6689)
Updated DeepSpeed ZeRO (#6546, #6752, #6142, #6321)
Improved verbose logging for EarlyStopping callback (#6811)
Run ddp_spawn dataloader checks on Windows (#6930)
Updated mlflow with using resolve_tags (#6746)
Moved save_hyperparameters to its own function (#7119)
Replaced _DataModuleWrapper with __new__ (#7289)
Reset current_fx properties on lightning module in teardown (#7247)
Auto-set DataLoader.worker_init_fn with seed_everything (#6960)
Remove model.trainer call inside of dataloading mixin (#7317)
Split profilers module (#6261)
Ensure accelerator is valid if running interactively (#5970)
Disabled batch transfer in DP mode (#6098)
Deprecated
Deprecated outputs in both LightningModule.on_train_epoch_end and Callback.on_train_epoch_end hooks (#7339)
Deprecated Trainer.truncated_bptt_steps in favor of LightningModule.truncated_bptt_steps (#7323)
Deprecated LightningModule.grad_norm in favor of pytorch_lightning.utilities.grads.grad_norm (#7292)
Deprecated the save_function property from the ModelCheckpoint callback (#7201)
Deprecated LightningModule.write_predictions and LightningModule.write_predictions_dict (#7066)
Deprecated TrainerLoggingMixin in favor of a separate utilities module for metric handling (#7180)
Deprecated TrainerTrainingTricksMixin in favor of a separate utilities module for NaN/Inf detection for gradients and parameters (#6834)
period has been deprecated in favor of every_n_val_epochs in the ModelCheckpoint callback (#6146)
Deprecated trainer.running_sanity_check in favor of trainer.sanity_checking (#4945)
Deprecated Profiler(output_filename) in favor of dirpath and filename (#6621)
Deprecated PytorchProfiler(profiled_functions) in favor of record_functions (#6349)
Deprecated @auto_move_data in favor of trainer.predict (#6993)
Deprecated Callback.on_load_checkpoint(checkpoint) in favor of Callback.on_load_checkpoint(trainer, pl_module, checkpoint) (#7253)
Deprecated metrics in favor of torchmetrics (#6505, #6530, #6540, #6547, #6515, #6572, #6573, #6584, #6636, #6637, #6649, #6659, #7131)
Deprecated the LightningModule.datamodule getter and setter methods; access them through Trainer.datamodule instead (#7168)
Deprecated the use of Trainer(gpus="i") (string) for selecting the i-th GPU; from v1.5 this will set the number of GPUs instead of the index (#6388)
Removed
Removed the exp_save_path property from the LightningModule (#7266)
Removed training loop explicitly calling EarlyStopping.on_validation_end if no validation is run (#7069)
Removed automatic_optimization as a property from the training loop in favor of LightningModule.automatic_optimization (#7130)
Removed evaluation loop legacy returns for *_epoch_end hooks (#6973)
Removed support for passing a bool value to profiler argument of Trainer (#6164)
Removed no return warning from val/test step (#6139)
Removed passing a ModelCheckpoint instance to Trainer(checkpoint_callback) (#6166)
Removed deprecated Trainer argument enable_pl_optimizer and automatic_optimization (#6163)
Removed deprecated metrics (#6161)
from pytorch_lightning.metrics.functional.classification removed to_onehot, to_categorical, get_num_classes, roc, multiclass_roc, average_precision, precision_recall_curve, multiclass_precision_recall_curve
from pytorch_lightning.metrics.functional.reduction removed reduce, class_reduce
Removed deprecated ModelCheckpoint arguments prefix, mode="auto" (#6162)
Removed mode='auto' from EarlyStopping (#6167)
Removed epoch and step arguments from ModelCheckpoint.format_checkpoint_name(), these are now included in the metrics argument (#7344)
Removed legacy references for magic keys in the Result object (#6016)
Removed deprecated LightningModule hparams setter (#6207)
Removed legacy code to log or include metrics in the progress bar by returning them in a dict with the "log"/"progress_bar" magic keys. Use self.log instead (#6734)
Removed trainer.fit() return value of 1. It has no return now (#7237)
Removed logger_connector legacy code (#6733)
Removed unused mixin attributes (#6487)
Fixed
Fixed NaN errors in progress bars when training with iterable datasets with no length defined (#7306)
Fixed attaching train and validation dataloaders when reload_dataloaders_every_epoch=True and num_sanity_val_steps=0 (#7207)
Added a barrier in the accelerator teardown to synchronize processes before execution finishes (#6814)
Fixed multi-node DDP sub-process launch by using local_rank instead of global_rank for main process assertion (#7061)
Fixed incorrect removal of WORLD_SIZE environment variable in DDP training when launching with torch distributed/torchelastic (#6942)
Made the Plugin.reduce method more consistent across all Plugins to reflect a mean-reduction by default (#6011)
Move lightning module to correct device type when using LightningDistributedWrapper (#6070)
Do not print top-k verbose log with ModelCheckpoint(monitor=None) (#6109)
Fixed ModelCheckpoint(save_top_k=0, save_last=True) not saving the last checkpoint (#6136)
Fixed .teardown(stage='fit') and .on_fit_{start,end}() getting called during trainer.test (#6386)
Fixed LightningModule all_gather on cpu tensors (#6416)
Fixed torch distributed not available in setup hook for DDP (#6506)
Fixed trainer.tuner.{lr_find,scale_batch_size} not setting the Trainer state properly (#7258)
Fixed bug where the learning rate schedulers did not follow the optimizer frequencies (#4868)
Fixed pickle error checker to now check for pickle.PickleError to catch all pickle errors (#6917)
Fixed a bug where the outputs object passed to LightningModule.training_epoch_end was different from the object passed to the on_train_end_epoch hook (#6969)
Fixed a bug where the outputs passed to train_batch_end would be listed even when using a single optimizer and no truncated backprop through time steps (#6969)
Fixed bug for trainer error handling which would cause hang for distributed training (#6864)
Fixed self.device not returning the correct device in replicas of data-parallel (#6414)
Fixed lr_find trying beyond num_training steps and suggesting a too high learning rate (#7076)
Fixed logger creating incorrect version folder in DDP with repeated Trainer.fit calls (#7077)
Fixed metric objects passed directly to self.log not being reset correctly (#7055)
Fixed CombinedLoader in distributed settings for validation / testing (#7102)
Fixed the save_dir in WandbLogger when the run was initiated externally (#7106)
Fixed num_sanity_val_steps affecting reproducibility of training data shuffling (#7014)
Fixed resetting device after fitting/evaluating/predicting (#7188)
Fixed bug where trainer.tuner.scale_batch_size(max_trials=0) would not return the correct batch size result (#7262)
Fixed metrics not being properly logged with precision=16 and manual_optimization (#7228)
Fixed BaseFinetuning properly reloading optimizer_states when using resume_from_checkpoint (#6891)
Fixed parameters_to_ignore not properly set to DDPWrapper (#7239)
Fixed parsing of fast_dev_run=True with the built-in ArgumentParser (#7240)
Fixed handling an IterableDataset that fails to produce a batch at the beginning of an epoch (#7294)
Fixed LightningModule.save_hyperparameters() when attempting to save an empty container (#7268)
Fixed apex not properly instantiated when running with ddp (#7274)
Fixed optimizer state not moved to GPU (#7277)
Fixed custom init args for WandbLogger (#6989)
Fixed a bug where an error would be raised if the train dataloader sometimes produced None for a batch (#7342)
Fixed examples (#6600, #6638, #7096, #7246, #6357, #6476, #6294, #6373, #6088, #7398)
Resolved schedule step bug for PyTorch Profiler (#6674, #6681)
Updated logic for checking TPUs availability (#6767)
Resolve TPU miss rendezvous (#6781)
Fixed auto-scaling mode when calling tune method on trainer (#7321)
Fixed finetuning complex models correctly unfreezes (#6880)
Ensure we set the eval/train flag correctly on accelerator model (#6877)
Set better defaults for rank_zero_only.rank when training is launched with SLURM and torchelastic (#6802)
Fixed matching the number of outputs of backward with forward for AllGatherGrad (#6625)
Fixed the gradient_clip_algorithm has no effect (#6928)
Fixed CUDA OOM detection and handling (#6934)
Fixed unfreeze_and_add_param_group expects modules rather than module (#6822)
Fixed DPP + SyncBN when move on device (#6838)
Fixed missing arguments in lr_find call (#6784)
Fixed set_default_tensor_type to torch.DoubleTensor with precision=64 (#7108)
Fixed NeptuneLogger.log_text(step=None) (#7194)
Fixed importing torchtext batch (#6365, #6323, #6211)
Contributors
@akihironitta, @alessiobonfiglio, @amisev, @amogkam, @ananthsub, @ArvinZhuang, @ashleve, @asnorkin, @awaelchli, @BloodAxe, @bmahlbrand, @Borda, @borisdayma, @camruta, @carmocca, @ceshine, @dbonner, @dhkim0225, @EdwardJB, @EliaCereda, @EricCousineau-TRI, @ethanwharris, @FlorianMF, @hemildesai, @ifsheldon, @kaushikb11, @mauvilsa, @maxfrei750, @mesejo, @ramonemiliani93, @rohitgr7, @s-rog, @sadiqj, @scart97, @SeanNaren, @shuyingsunshine21, @SkafteNicki, @SpontaneousDuck, @stllfe, @tchaton, @THasthika, @vballoli
If we forgot someone due to not matching commit email with GitHub account, let us know :]
This discussion was created from the release Lightning CLI, PyTorch Profiler, Improved Early Stopping. | Epic! | MDEwOkRpc2N1c3Npb24zMzU0NTUy | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7412#discussioncomment-709207 |
extremely slow training with multiple GPUs | I am training a model with lightning where I am attempting to use all the GPUs on my system (4 in total).
My trainer is run as:
model = MyModel(hparams)
if torch.cuda.is_available():
trainer = Trainer(gpus=-1)
else:
trainer = Trainer()
trainer.fit(model)
My model is defined as follows:
class SiameseNet(pl.LightningModule):
"""
Implement a siamese network as a feature extractor withh Lightning module
"""
def __init__(self,
hparams):
"""
Build the network
"""
super(SiameseNet, self).__init__()
self.net = self._build_net()
self.hparams = hparams
self.train_data_path = hparams.get('train_data_path', None)
self.test_data_path = hparams.get('test_data_path', None)
self.val_data_path = hparams.get('val_data_path', None)
self.train_dataset = None
self.val_dataset = None
self.test_dataset = None
self.lossfn = TripletLoss(margin=1.0)
def forward_once(self, x):
output = self.net(x)
output = torch.squeeze(output)
return output
def forward(self, input1, input2, input3=None):
output1 = self.forward_once(input1)
output2 = self.forward_once(input2)
if input3 is not None:
output3 = self.forward_once(input3)
return output1, output2, output3
return output1, output2
@staticmethod
def _build_net():
net = nn.Sequential(
nn.Conv2d(3, 32,kernel_size=3,stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(32),
nn.Conv2d(32, 64, kernel_size=3, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(64),
nn.Conv2d(64, 128, kernel_size=3, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(128),
nn.Conv2d(128, 256, kernel_size=1, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(256),
nn.Conv2d(256, 256, kernel_size=1, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(256),
nn.Conv2d(256, 512, kernel_size=3, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(512),
nn.Conv2d(512, 1024, kernel_size=1, stride=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(1024))
return net
def prepare_data(self):
transform = torchvision.transforms.Compose([
torchvision.transforms.Resize((128, 128)),
torchvision.transforms.ColorJitter(hue=.05, saturation=.05),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.RandomRotation(20, resample=PIL.Image.BILINEAR),
torchvision.transforms.ToTensor()
])
if self.train_data_path:
train_folder_dataset = dset.ImageFolder(root=self.train_data_path)
self.train_dataset = SiameseTriplet(image_folder_dataset=train_folder_dataset,
transform=transform)
if self.val_data_path:
val_folder_dataset = dset.ImageFolder(root=self.val_data_path)
self.val_dataset = SiameseTriplet(image_folder_dataset=val_folder_dataset)
if self.test_data_path:
test_folder_dataset = dset.ImageFolder(root=self.test_data_path)
self.test_dataset = SiameseTriplet(image_folder_dataset=test_folder_dataset)
def training_step(self, batch, batch_idx):
anchor, positive, negative = batch
anchor_out, positive_out, negative_out = self.forward(anchor, positive, negative)
loss_val = self.lossfn(anchor_out, positive_out, negative_out)
return {'loss': loss_val}
def configure_optimizers(self):
optimizer = optim.Adam(self.parameters(), lr=self.hparams.get('learning_rate', 0.001))
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
return [optimizer], [scheduler]
@pl.data_loader
def train_dataloader(self):
if self.train_dataset:
return DataLoader(self.train_dataset,
self.hparams.get('batch_size', 64),
num_workers=12)
return None
When I try and run it, it seems the beginning of the epoch hangs for like 10 minutes to get data into the model and after that the progress is very sluggish.
I also get these messages in the beginning. Not sure if it is of concern
GPU available: True, used: True
No environment variable for node rank defined. Set as 0.
CUDA_VISIBLE_DEVICES: [0,1,2,3]
MASTER_ADDR environment variable is not defined. Set as localhost
initializing proc_rank 0 world 4
MASTER_ADDR environment variable is not defined. Set as localhost
initializing proc_rank 1 world 4
MASTER_ADDR environment variable is not defined. Set as localhost
initializing proc_rank 2 world 4
MASTER_ADDR environment variable is not defined. Set as localhost
initializing proc_rank 3 world 4
It basically hangs with this:
Epoch 1: 0%| | 0/172 [00:00<?, ?it/s]
During this time, looking at GPU utilisation it seems:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.00 Driver Version: 418.87.00 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:05:00.0 Off | N/A |
| 48% 79C P2 90W / 250W | 4527MiB / 11176MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... Off | 00000000:06:00.0 Off | N/A |
| 45% 76C P2 85W / 250W | 1636MiB / 11178MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX 108... Off | 00000000:09:00.0 Off | N/A |
| 45% 76C P2 79W / 250W | 1626MiB / 11178MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 3 GeForce GTX 108... Off | 00000000:0A:00.0 Off | N/A |
| 32% 65C P2 79W / 250W | 2689MiB / 11178MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1584 C /home/pd/.conda/envs/alchera37/bin/python 801MiB |
| 0 10714 C /home/pd/.conda/envs/alchera37/bin/python 543MiB |
| 0 28957 C /home/pd/.conda/envs/alchera37/bin/python 1047MiB |
| 0 30880 C /home/pd/.conda/envs/alchera37/bin/python 1063MiB |
| 0 32266 C /home/pd/.conda/envs/alchera37/bin/python 1063MiB |
| 1 10733 C /home/pd/.conda/envs/alchera37/bin/python 543MiB |
| 1 28972 C /home/pd/.conda/envs/alchera37/bin/python 1063MiB |
| 2 10789 C /home/pd/.conda/envs/alchera37/bin/python 543MiB |
| 2 32297 C /home/pd/.conda/envs/alchera37/bin/python 1063MiB |
| 3 10807 C /home/pd/.conda/envs/alchera37/bin/python 543MiB |
| 3 29006 C /home/pd/.conda/envs/alchera37/bin/python 1063MiB |
| 3 30967 C /home/pd/.conda/envs/alchera37/bin/python 1063MiB |
+-----------------------------------------------------------------------------+
So, it seems that getting the data into the GPU is quite slow even though everything looks maxed out.
And when it does eventually start the epoch after ~30 minutes, it seems to give similar performance to my CPU on a MacBook Pro. I am really not sure if I am doing something very wrong here in how I am using PL. | And then it gets stuck after this line (at least no console output)
This looks very familiar, and I am sure I fixed this problem in #2997, please try again with the latest version. Regarding the relative import error, you probably just launched the script in the wrong directory, but anyway I recommend to use absolute imports. Please let me know if the upgrade fixes your problem, thanks. | MDEwOkRpc2N1c3Npb244MjI0MA== | https://github.com/PyTorchLightning/pytorch-lightning/discussions/2065#discussioncomment-238154 |
Trainer cannot handle 1d tensor when returning results from test_epoch_end | Bug
When the trainer's run_test() is called, the results returned from the test loop cannot be handled properly if they contain a 1D tensor in the results dictionary.
Such error will happen:
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py in run_test(self)
708 for k, v in result.items():
709 if isinstance(v, torch.Tensor):
--> 710 result[k] = v.cpu().item()
711
712 return eval_loop_results
ValueError: only one element tensors can be converted to Python scalars
Please reproduce using the BoringModel
To Reproduce
To reproduce with BoringModel, only need to replace the test_epoch_end.
def test_epoch_end(self, outputs) -> None:
torch.stack([x["y"] for x in outputs]).mean()
f1_score = torch.tensor([1,1,1,1])
return {'f1_score': f1_score}
Expected behavior
def run_test(self):
# remove the tensors from the eval results
for i, result in enumerate(eval_loop_results):
if isinstance(result, dict):
for k, v in result.items():
if isinstance(v, torch.Tensor):
# should check if you can call .item()
result[k] = v.cpu().item()
Environment
PyTorch Version (e.g., 1.0): 1.1.8
Additional context | Because, for example in multi-GPU mode, if we allowed the user to return arbitrary values, we would be missing the information about what to do with the data: how it should be collected, synced, or reduced.
The logging api offers reduction and sync, by specifying the custom arguments how to do so.
On the other hand, self.write offers a way to collect all results.
There will also be a prediction api in 1.2. #5752
cc @tchaton | MDEwOkRpc2N1c3Npb24zMjIzNDA4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5979#discussioncomment-369831 |
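A minimal sketch of the logging-based alternative described above — reduce the tensor to a scalar and log it instead of returning it (the metric name is illustrative):
def test_epoch_end(self, outputs):
    f1_score = torch.tensor([1.0, 1.0, 1.0, 1.0])
    # log a reduced scalar; the logging API handles collection/sync across processes
    self.log("f1_score", f1_score.mean())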
`val_dataloader` has `shuffle=True` though it's false | I am working with pytorchvideo together with pytorch lightning, but it says: UserWarning: Your `val_dataloader` has `shuffle=True`, it is strongly recommended that you turn this off for val/test/predict dataloaders.
from pytorchvideo.models.resnet import create_resnet
class OurModel(LightningModule):
def __init__(self):
super(OurModel,self).__init__()
self.model = create_resnet(
input_channel=3, # RGB input from Kinetics
model_depth=50, # For the tutorial let's just use a 50 layer network
model_num_class=1, # Kinetics has 400 classes so we need out final head to align
norm=nn.BatchNorm3d,
activation=nn.ReLU)
def forward(self,x):
return self.model(x)
def val_dataloader(self):
val_dataset=LabeledVideoDataset(labeled_video_paths=\
list(zip(val_df.vidpath,val_df.pulse)),
clip_sampler=make_clip_sampler("uniform", 2),\
transform=train_transform, decode_audio=False)
val_loader=DataLoader(val_dataset,shuffle=False,
batch_size=self.batch_size,
num_workers=self.numworker,
pin_memory=True)
return val_loader
def validation_step(self,batch,batch_idx):
out = self(batch["video"]).flatten()
loss=self.criterion(out,batch["label"].float())
mae=self.metric(out,batch["label"].float())
return {'loss':loss,'mae':mae.detach()}
As you can see, shuffle is False, but it keeps giving me the warning:
UserWarning: Your `val_dataloader` has `shuffle=True`,it is strongly recommended that you turn this off for val/test/predict dataloaders.
Sorry, I am not sure whether I should ask this at pytorchvideo or here. | Hi @talhaanwarch, you're asking it in the right place!
It's a warning from Lightning, and as I looked at the definition of make_clip_sampler of pytorchvideo, I believe it's the same reason as #10771. You can simply ignore it with some filter like below if you need.
import warnings
warnings.filterwarnings("ignore", message="Your `val_dataloader` has `shuffle=True`.*", category=UserWarning) | D_kwDOCqWgoM4AOdZ5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11392#discussioncomment-1940652
Custom gather method | How do I correctly override gather behaviour?
Ideally, I would have a single place where I specify the new method and it would work with dp, ddp and ddp_spawn. | Managed to get this working for dp by subclassing DataParallelPlugin and passing it to the trainer. Although this approach requires individual plugins for each accelerator, it would be nice to have a way to set this for all accelerators at the same time. | MDEwOkRpc2N1c3Npb24zMzkwNzAx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7782#discussioncomment-814754
Example in domain_templates: computer_vision_fine_tuning | I found that in the code below, a sigmoid activation is applied before the binary_cross_entropy_with_logits loss:
pytorch-lightning/pl_examples/domain_templates/computer_vision_fine_tuning.py, lines 196 to 231 at 71b4611:
def __build_model(self):
"""Define model layers & loss."""
# 1. Load pre-trained network:
model_func = getattr(models, self.backbone)
backbone = model_func(pretrained=True)
_layers = list(backbone.children())[:-1]
self.feature_extractor = nn.Sequential(*_layers)
# 2. Classifier:
_fc_layers = [
nn.Linear(2048, 256),
nn.ReLU(),
nn.Linear(256, 32),
nn.Linear(32, 1),
]
self.fc = nn.Sequential(*_fc_layers)
# 3. Loss:
self.loss_func = F.binary_cross_entropy_with_logits
def forward(self, x):
"""Forward pass. Returns logits."""
# 1. Feature extraction:
x = self.feature_extractor(x)
x = x.squeeze(-1).squeeze(-1)
# 2. Classifier (returns logits):
x = self.fc(x)
return torch.sigmoid(x)
def loss(self, logits, labels):
return self.loss_func(input=logits, target=labels)
From the documentation https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html#torch.nn.BCEWithLogitsLoss:
This loss combines a Sigmoid layer and the BCELoss in one single class.
Is that done intentionally, or is it just a bug? | Yes, the sigmoid should be removed from the forward, because BCEWithLogits already contains the sigmoid.
Do you want to send a PR to update the example? | MDEwOkRpc2N1c3Npb24zMzI2NTIw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7081#discussioncomment-627079 |
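For reference, a corrected forward based on the snippet quoted above would simply return the logits:
def forward(self, x):
    """Forward pass. Returns logits."""
    x = self.feature_extractor(x)
    x = x.squeeze(-1).squeeze(-1)
    x = self.fc(x)
    return x  # no sigmoid here; binary_cross_entropy_with_logits applies it internally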
Add AWS DataParallelism | Hi!
I currently use AWS SageMaker to train my PL models. I recently found out this link :
https://aws.amazon.com/fr/blogs/aws/managed-data-parallelism-in-amazon-sagemaker-simplifies-training-on-large-datasets/
SageMaker provides its own implementation of DDP and I think that would be nice to be able to use it with PL :)
I looked into the PL code and I think I could add this feature by extending pytorch_lightning.accelerators.accelerator.Accelerator. Does that seem like a good way to implement it? Is there any general advice/guidance you could give me about this?
If you are interested in this feature, I can make a PR once I'm done with it.
Thank you!
Rémi | In progress in #6271 | MDEwOkRpc2N1c3Npb24zMjc3NDg5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6560#discussioncomment-637685
How to gather results on multiple GPUs while testing? ddp | Questions and Help
What is your question?
I want to test the summarization model from the huggingface summarization example on multiple GPUs. My problem is how I could collect the test results from different GPUs, since test_epoch_end only processes the epoch for a single GPU.
For more information, the model is trained with ddp backend.
Code
def test_epoch_end(self, outputs):
output_test_predictions_file = os.path.join(self.hparams.output_dir, "test_predictions.txt")
output_test_targets_file = os.path.join(self.hparams.output_dir, "test_targets.txt")
# write predictions and targets for later rouge evaluation.
with open(output_test_predictions_file, "w+") as p_writer, open(output_test_targets_file, "w+") as t_writer:
for output_batch in outputs:
p_writer.writelines(s + "\n" for s in output_batch["preds"])
t_writer.writelines(s + "\n" for s in output_batch["target"])
p_writer.close()
t_writer.close()
return self.test_end(outputs)
What have you tried?
For now, I can only use single GPU to get result of whole dataset.
What's your environment?
OS: Ubuntu 18.04
Packaging pip
Version: 0.7.6 | Use torch.distributed.all_gather to gather and merge the outputs from all GPUs.
And you should remove the redundant examples, because the DDP sampler adds extra examples so the dataset divides evenly across GPUs. (https://pytorch.org/docs/stable/_modules/torch/utils/data/distributed.html#DistributedSampler)
Here is the workaround snippet used in my own project.
import torch
import torch.distributed as dist


def gather_distributed(*tensors):
output_tensors = []
for tensor in tensors:
tensor_list = [torch.ones_like(tensor) for _ in range(dist.get_world_size())]
dist.all_gather(tensor_list, tensor)
output_tensors.append(torch.cat(tensor_list))
return output_tensors
def deduplicate_and_sort(index, *tensors):
reverse_index = torch.zeros_like(index)
for ri, i in enumerate(index):
reverse_index[i] = ri
reverse_index = reverse_index[:index.max() + 1]
output_tensors = [tensor.index_select(0, reverse_index) for tensor in tensors]
return output_tensors
In the above code, you need the index of each example to remove redundant examples and sort outputs in order.
Notice that the index should consist of consecutive integers (e.g., 0,1,2,...N). | MDEwOkRpc2N1c3Npb244MjI0MQ== | https://github.com/PyTorchLightning/pytorch-lightning/discussions/1974#discussioncomment-238157 |
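A hypothetical usage of the two helpers above inside test_epoch_end (the keys and tensor fields are illustrative — this assumes each step returns index/prediction/target tensors rather than strings):
def test_epoch_end(self, outputs):
    preds = torch.cat([x["preds"] for x in outputs])
    targets = torch.cat([x["target"] for x in outputs])
    index = torch.cat([x["index"] for x in outputs])
    # gather from every GPU, then drop the sampler's padded duplicates and restore order
    preds, targets, index = gather_distributed(preds, targets, index)
    preds, targets = deduplicate_and_sort(index, preds, targets)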
To find r2score of my model | I have a UNet model. I'm trying for a regression model since, in my output, I have different floating values for each pixel. In order to check the r2score, I tried to put the below code in the 'model class', training_step, validation_step, and test_step.
from pytorch_lightning.metrics.functional import r2score
r2 = r2score(logits, y)
self.log('r2:',r2)
But it's giving the following error
ValueError: Expected both prediction and target to be 1D or 2D tensors, but recevied tensors with dimension torch.Size([50, 1, 32, 32])
How can I check my model fit? | I assume that you just want to calculate the total score, and in that case you should simply flatten your input before calculating the score:
from pytorch_lightning.metrics.functional import r2score
r2 = r2score(logits.flatten(), y.flatten())
self.log('r2:',r2) | MDEwOkRpc2N1c3Npb24zMjMyNTUw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6125#discussioncomment-400670 |
find_unused_parameters in the lightning trainer (1.3.2) | Bug
If there are unused parameters in the model, should we still explicitly mention it in the trainer as follows?
plugins=[DDPPlugin(find_unused_parameters=True)]
trainer = pl.Trainer.from_argparse_args( args, weights_summary=None, callbacks=[logging_callback], logger=logger, plugins=[DDPPlugin(find_unused_parameters=True)], **train_params,) | No, it is set to True by default and you want to turn it off unless you need it :)
Reference:
https://pytorch-lightning.readthedocs.io/en/1.3.3/benchmarking/performance.html#when-using-ddp-set-find-unused-parameters-false | MDEwOkRpc2N1c3Npb24zMzkyMjc2 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7796#discussioncomment-812910 |
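A minimal sketch of turning it off explicitly, mirroring the trainer construction from the question:
from pytorch_lightning.plugins import DDPPlugin

trainer = pl.Trainer.from_argparse_args(
    args,
    weights_summary=None,
    callbacks=[logging_callback],
    logger=logger,
    plugins=[DDPPlugin(find_unused_parameters=False)],
    **train_params,
)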
Problem with dictionary of DataLoaders in training_step. | I followed the steps from the documentation when it comes to returning a dictionary of DataLoaders. The problem occurs in the training_step method. I expect the batch parameter to be a dictionary, but in reality it is a string: the first key of the expected dictionary.
def train_dataloader(self):
dl1 = DataLoader(IconDataset(self.train_icons_path),batch_size=128,
collate_fn=self.triplet_mining.batch_hard_negatives,num_workers=5)
dl2 = DataLoader(IconDataset(self.train_icons_path),batch_size=128,
collate_fn=self.triplet_mining.batch_semi_hard_negative,num_workers=5)
return {
"hard": dl1,
"semi-hard": dl2 }
In training_step
def training_step(self,batch,batch_idx):
print(type(batch)) # str
print(batch) # hard
I don't know if there is a problem with my collate_fn method. The batching was working all right when one single DataLoader was used. | It's so funny how fast one can find the answer to his own question. The problem was related to the validation DataLoaders. I was trying to return a dictionary in a similar manner to the training_dataloader, but the validation loop works with a sequence of DataLoaders. There is a similar question answered here: #8623 | MDEwOkRpc2N1c3Npb24zNTI3MDA1 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8978#discussioncomment-1200410 |
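A minimal sketch of the fix — return a sequence (list) of DataLoaders from val_dataloader instead of a dict; the attribute name self.val_icons_path is assumed by analogy with the training code:
def val_dataloader(self):
    dl1 = DataLoader(IconDataset(self.val_icons_path), batch_size=128,
                     collate_fn=self.triplet_mining.batch_hard_negatives, num_workers=5)
    dl2 = DataLoader(IconDataset(self.val_icons_path), batch_size=128,
                     collate_fn=self.triplet_mining.batch_semi_hard_negative, num_workers=5)
    # the validation loop iterates the loaders in turn and passes a dataloader_idx to validation_step
    return [dl1, dl2]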
Deterministic DataLoader on DDP reads same data on all subprocesses | I've used seed_everything(7) to initially set the seed then passed deterministic=True, accelerator='ddp' to Trainer to have it run on 4 GPUs.
Then I load my map-style dataset using a plain DataLoader with shuffle=True, num_workers=10 .
Now what happens is that each of the forked DDP processes spin up N (here 10) worker processes to read the data. So total 4 x 10 DataLoader processes. I have tried setting up a worker_init_fn to see the seeds they each receive, and indeed the worker processes for each GPU get different seeds, but they are the same across worker processes of different GPUs. This causes each data item to be read 4 times (the number of GPU / DDP processes) which I checked in the dataset's __getitem__. So the indexes for example would look like [3,3,3,3,7,7,7,7,2,2,2,2,...].
What is the way to fix this? Shouldn't the DistributedSampler for DDP automatically get a seed based on the subprocess that it is forked on? (Similar to DataLoader's worker_init_fn) | Update: This seemed to be the issue when only setting seeds and configuring PyTorch and CUDA for deterministic execution, but adding deterministic=True to PL's Trainer object seems to have resolved the issue. | MDEwOkRpc2N1c3Npb24zMjQ5ODE4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6350#discussioncomment-439009
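For reference, the combination that resolved it looks roughly like this (a sketch):
from pytorch_lightning import Trainer, seed_everything

seed_everything(7)
trainer = Trainer(gpus=4, accelerator="ddp", deterministic=True)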
Save checkpoint with specific monitor criteria | Hello everyone, I'm currently implementing a Wasserstein type of GAN using gradient penalty. I want to save the checkpoints monitoring the negative critic loss, which starts from low values, increases to higher values in the first epochs, and then decreases, reaching almost 0. A plot of this loss can be seen in the paper: https://arxiv.org/pdf/1704.00028.pdf
The problem is that if I use ModelCheckpoint and set the monitor parameter to the negative critic loss and mode = 'min', it basically saves the first epoch only. However, I don't want to consider the early training epochs, when the negative loss increases, but only the epochs when the loss decreases.
I'm currently using multi-gpu training
How can I implement this? Should I override the function on_train_epoch_end and save the checkpoints there, after checking the above criteria? Or should I use a Lightning Callback? If so, how can I access the monitored values?
Thanks in advance | Thanks to @tchaton on the slack community I solved the issue overriding the ModelCheckpoint class.
In on_train_epoch_end I've added a new check that follows the above conditions, as such:
from datetime import timedelta
from pathlib import Path
from typing import Optional, Union

import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint


class WGANModelCheckpoint(ModelCheckpoint):
def __init__(self,
dirpath: Optional[Union[str, Path]] = None,
filename: Optional[str] = None,
monitor: Optional[str] = None,
verbose: bool = False,
save_last: Optional[bool] = None,
save_top_k: int = 1,
save_weights_only: bool = False,
mode: str = "min",
auto_insert_metric_name: bool = True,
every_n_train_steps: Optional[int] = None,
train_time_interval: Optional[timedelta] = None,
every_n_epochs: Optional[int] = None,
save_on_train_epoch_end: Optional[bool] = None,
period: Optional[int] = None,
every_n_val_epochs: Optional[int] = None):
super().__init__(
dirpath=dirpath,
filename=filename,
monitor=monitor,
verbose=verbose,
save_last=save_last,
save_top_k=save_top_k,
save_weights_only=save_weights_only,
mode=mode,
auto_insert_metric_name=auto_insert_metric_name,
every_n_train_steps=every_n_train_steps,
train_time_interval=train_time_interval,
every_n_epochs=every_n_epochs,
save_on_train_epoch_end=save_on_train_epoch_end,
period=period,
every_n_val_epochs=every_n_val_epochs)
self.is_monitoring_on = False
def on_train_epoch_end(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule", unused: Optional = None) -> None:
"""Save a checkpoint at the end of the training epoch."""
# as we advance one step at end of training, we use `global_step - 1` to avoid saving duplicates
trainer.fit_loop.global_step -= 1
if (
not self._should_skip_saving_checkpoint(trainer)
and self._save_on_train_epoch_end
and self._every_n_epochs > 0
and (trainer.current_epoch + 1) % self._every_n_epochs == 0
and (self.is_monitoring_on or self.monitor_can_start(trainer, pl_module))
):
self.save_checkpoint(trainer)
trainer.fit_loop.global_step += 1
def monitor_can_start(self, trainer: pl.Trainer, pl_module: pl.LightningModule) -> bool:
"""Let start monitoring only after the loss curve start increasing"""
monitor_candidates = self._monitor_candidates(trainer, trainer.current_epoch, trainer.global_step - 1)
current = monitor_candidates.get(self.monitor)
# Check if the critic loss is increasing (the network is starting to
# train)
if trainer.current_epoch > 0 and pl_module.previous_metric < current:
self.is_monitoring_on = True
pl_module.previous_metric = current.detach().clone()
return self.is_monitoring_on
The function monitor_can_start() does the trick. | D_kwDOCqWgoM4AOT-T | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11129#discussioncomment-1836537 |
UserWarning: cleaning up ddp environment... | When I execute a program in PyTorch-Lightning, it runs very fast. But at the end, I get a message like the one below.
/home/mydirectory/anaconda3/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:69: UserWarning: cleaning up ddp environment...
warnings.warn(*args, **kwargs)
After this message, the whole system stops, and I can't see the result I want because of it.
Can anyone tell me how to solve it? I'm in a hurry.
(I am using it with wandb & HuggingFace.) | @data-weirdo mind sharing some sample code to reproduce? I have been using DDP in some of our examples and all is fine. | MDEwOkRpc2N1c3Npb24zMzk1MjAw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7820#discussioncomment-976864
The process blocks in the validation step | Problem:
I have a problem: when I train the model, the process blocks in the validation step (or validation sanity check), but the training step works. I debugged inside pytorch-lightning and found that it blocks when loading data from the validation dataloader. I am not sure what the problem is.
Environment:
Docker; the latest pytorch-lightning; GPU: A100
log:
INFO Using validation DataLoader3DOffset with {}
INFO Building Sampling Cache for Dataloder
Sampling Cache: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 1445.07it/s]
INFO Using 5 num_processes and 2 num_cached_per_queue for augmentation. | 0/2 [00:00<?, ?it/s]
INFO VALIDATION KEYS:
odict_keys(['case_0', 'case_7'])
using pin_memory on device 0
I tested the validation step and it would jam.
Can you help me? Thank you!!!!! | Sorry, it was my fault. The problem is solved now. | D_kwDOCqWgoM4APBbi | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12329#discussioncomment-2378383
RuntimeError: unable to open shared memory object </torch_91130_1372465664> in read-write mode | I'm getting the following error after setting up an EC2 instance p3.8xlarge (so 4 GPUs) and setting gpus=4:
/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:524: UserWarning: You requested multiple GPUs but did not specify a backend, e.g. `Trainer(accelerator="dp"|"ddp"|"ddp2")`. Setting `accelerator="ddp_spawn"` for you.
'You requested multiple GPUs but did not specify a backend, e.g.'
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
Traceback (most recent call last):
File "train.py", line 79, in <module>
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/tuner/tuning.py", line 197, in lr_find
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 688, in tune
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/tuner/tuning.py", line 54, in _tune
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/tuner/lr_finder.py", line 250, in lr_find
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/tuner/tuning.py", line 64, in _run
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 758, in _run
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 799, in dispatch
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 122, in start_training
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 179, in start_processes
File "/home/ubuntu/anaconda3/lib/python3.7/multiprocessing/process.py", line 112, in start
File "/home/ubuntu/anaconda3/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
File "/home/ubuntu/anaconda3/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
File "/home/ubuntu/anaconda3/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
File "/home/ubuntu/anaconda3/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47, in _launch
File "/home/ubuntu/anaconda3/lib/python3.7/multiprocessing/reduction.py", line 60, in dump
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/reductions.py", line 328, in reduce_storage
RuntimeError: unable to open shared memory object </torch_91130_1372465664> in read-write mode
My code runs fine on a single-GPU instance. Any idea what I need to look at here? | Some quick googling turns up:
facebookresearch/maskrcnn-benchmark#103
This issue is not Lightning related, so if the fixes mentioned there do not help, then you should try asking on PyTorch discussions. | MDEwOkRpc2N1c3Npb24zNDczMDQy | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8524#discussioncomment-1040209 |
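One workaround commonly suggested for this family of shared-memory DataLoader errors — an assumption here, not something confirmed by the answer above for this exact setup — is to switch PyTorch's multiprocessing sharing strategy:
import torch.multiprocessing

# share tensors through the file system instead of file descriptors
torch.multiprocessing.set_sharing_strategy("file_system")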
Can we get the value of `self.log()`? | self.log(on_epoch=True) computes the metrics across GPUs and epoch batches automatically. How can we get its value when training in distributed mode? | It should be available in the trainer.callback_metrics dictionary | MDEwOkRpc2N1c3Npb24zNTQzNzYy | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9174#discussioncomment-1257033
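For example, if you logged a metric named "train_loss" (an illustrative name), you could read the aggregated value back like this:
train_loss = trainer.callback_metrics["train_loss"]  # a tensor holding the reduced value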
Questions about Fault-tolerant Training | Hi! I'm working on a SLURM cluster with preemption, so I'm really excited to see the support for Fault-tolerant Training in 1.5.0. However, when I upgrade the package and try PL_FAULT_TOLERANT_TRAINING=1 python train.py xxx on the cluster, it doesn't seem to work.
I look into the code of Trainer, it seems that the code responsible for fault-tolerant is here. I assume preemption is a BaseException so the code will go to here and finally here so that we save a checkpoint?
However, when I set some prints in the code, I see that using Ctrl+C to interrupt the code indeed goes to this KeyBoardInterrupt. But if I use scontrol requeue to simulate a preemption, the code doesn't go to BaseException, and that's why it doesn't save a checkpoint for Fault-tolerant Training.
Is there anything wrong with my code? I assume interruptions like scancel requeue are considered in this case. Can anyone help me? Thank you in advance!
EDIT: I've looked in the code a little bit more, it seems that when I do scancel or scontrol requeue, the code directly exit, without throwing an exception, and that's why it didn't go to the except _on_exception section. Is this expected behavior? Or is there anyway to solve it?
I think that's related to the signal that SLURM sends to my program, and I already see a SignalConnector dealing with SLURM in pytorch-lightning here. I also see this answer about the signal of SLURM. Maybe I should set it in the sbatch script? Any suggestions? | Solved. It's indeed because in my SLURM cluster there is no time interval between the signal being sent and the program being killed, so PyTorch Lightning just doesn't have time to do the checkpointing. | D_kwDOCqWgoM4AN_7D | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10380#discussioncomment-1598270
Call or Forward? | Hi, thanks for the nice library. In the readme, the example uses model.forward(x), not model(x). But wouldn't it usually be recommended to use model(x) so that other things (hooks etc.) can be, well, hooked as well? What's the best practice? | forward should implement what you want to use when calling model(x).
You may need to call that in training_step (you usually do), which means you do self.forward(...) because you are inside the model when you make that call. | MDEwOkRpc2N1c3Npb24yNzkyMzU2 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5811#discussioncomment-339779
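A minimal sketch of that pattern inside a LightningModule (assuming from torch.nn import functional as F):
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)  # routes through __call__, so forward() and any registered hooks still run
    loss = F.cross_entropy(y_hat, y)
    return loss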
How to put ```trainer.fit()``` in a for loop? | I am trying to create multiple models using a loop, as below.
for client in clients:
t.manual_seed(10)
client['model'] = LinearNN(learning_rate = args.lr, i_s = args.input_size, h1_s = args.hidden1, h2_s = args.hidden2, n_c = args.output, client=client)
client['optim'] = optim.Adam(client['model'].parameters(), lr= args.lr)
However, trainer.fit() is an async method. To train multiple models, I need to put trainer.fit() in a loop as follows
for client in clients:
trainer = pl.Trainer(
max_epochs=args.epochs+1,
progress_bar_refresh_rate=20,
)
trainer.fit(client['model'])
As this is an async method, it gives an error
AttributeError: can't set attribute
because it doesn't wait for trainer.fit() to finish.
Is there any way to do that?
Thanks in advance. | Hi!
I'm not entirely sure, but I don't see why the code snippets you shared would not work.
is an async method
What do you mean exactly by "async" method?
it gives an error
Can you share the full error stacktrace and your Lightning version? | MDEwOkRpc2N1c3Npb24zNDU2MjU5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8380#discussioncomment-1040227 |
where to add preprocessing initialization | I would like to have a step that is called before the first training step but that still requires the dataloader
e.g. (mock code)
class Scaler(nn.Module):
'''center target data'''
    def __init__(self, dims):
        super().__init__()
        self.mean = nn.Parameter(torch.zeros(dims))
        self.n = nn.Parameter(torch.zeros(1))
def forward(self, batch):
input, target = batch
if self.training:
self.mean += target.mean(0)
self.n += 1
else:
            return input, target - self.mean / self.n  # subtract the running mean (sum of batch means / count)
class MySystem(pl.LightningModule):
    def __init__(self, scaler_dims, model_dims):
        super().__init__()
        self.model = nn.Linear(**model_dims)
        self.scaler = Scaler(scaler_dims).train()
def on_first_epoch(self, dataloader): # <---- not sure where this should live
# learn to scale the dataset
for batch in dataloader:
self.scaler(batch)
def training_step(self, batch, batch_idx):
self.scaler.eval()
input, target = self.scaler(batch)
pred = self.model(input)
loss = F.l1_loss(pred, target)
return loss
dm = MyDataModule()
system = MySystem()
trainer = pl.Trainer()
trainer.fit(system, dm)
I'm not clear on how to do this with PL's API: pl.LightningModule.setup() does not have access to the dataloader.
Any advice?
Thanks! | thanks @ananthsub , I think the point of lightning is to try to keep everything in the same system.
going through the doc, I think the best is either
move the pl.LightningDataModule logic into the pl.LightningModule and set up such a scaler in prepare_data (which is called only once in distributed mode, as opposed to setup)
using a callback's `on_init_start` hook: https://pytorch-lightning.readthedocs.io/en/latest/extensions/callbacks.html | MDEwOkRpc2N1c3Npb24zMzQ2ODUx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7307#discussioncomment-683664
Can the learning rate found by dp on the same GPUs be used in ddp? | When using pytorch_lightning.tuner.lr_finder.lr_find, ddp gives some errors, so I changed to dp using 4 GPUs. Can the learning rate found by dp be used by ddp? They use the same number of GPUs. | I think the answer is no.
DP doesn't change your effective batch size while DDP does (in your case with one node and 4GPUs, the effective batch size is 4 times bigger with DDP). You can find more info about the effective batch size in the "multi-GPU" section of Lightning's documentation here.
As a consequence of this, you should probably increase your learning rate. The rule of thumb is to increase it linearly (so by 4), but there is more to it than just doing that. Have a look at this paper: https://arxiv.org/pdf/1706.02677.pdf | MDEwOkRpc2N1c3Npb244MjMwMw== | https://github.com/PyTorchLightning/pytorch-lightning/discussions/4878#discussioncomment-238397
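As a rough sketch of that linear-scaling heuristic (a heuristic, not a guarantee — the numbers are illustrative):
base_lr = 1e-3                         # the value lr_find suggested under dp (illustrative)
num_gpus = 4
effective_batch_size = 32 * num_gpus   # DDP multiplies the per-step effective batch size
scaled_lr = base_lr * num_gpus         # linear scaling rule from the paper linked above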
Is there a way to save only part of the Lightning sub-modules to the checkpoint file? | I'll explain: let's say that I have two nn.Modules inside my main LightningModule, but one of them is frozen, i.e. it doesn't learn during training and is only used for inference during training (requires_grad is False in this module), and I would like to avoid saving the state_dict of this static (frozen) module to the checkpoint file.
In plain PyTorch I'd probably filter the state_dict fields of the frozen module manually before saving.
Is there a simple way to do that with pytorch-lightning? Or to raise some flag inside the modules which say to the LightningModule not to save all the parameters inside this frozen module?
A simple toy example for clarification.
In this example, I'd like to avoid saving the parameters of self.frozen_nn_module.
All parameters in self.frozen_nn_module don't require grads.
class LightMod(LightningModule):
def __init__(self):
super().__init__()
#some non frozen module
self.non_frozen_nn_module = non_frozen_nn_module
#some frozen(static) nn.Module
self.frozen_nn_module= frozen_nn_module
def forward(self, x):
Some code.... | you have to do that here too.
Within Lightning you can override the on_save_checkpoint hook of LightningModule.
def on_save_checkpoint(self, checkpoint):
    # remove/pop the frozen module's keys from checkpoint['state_dict'] here, e.g.:
    checkpoint['state_dict'] = {k: v for k, v in checkpoint['state_dict'].items() if not k.startswith('frozen_nn_module.')} | D_kwDOCqWgoM4AOJ8D | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10808#discussioncomment-1717736
trainer.fit(strategy='ddp') executes code repeatedly | Hi everyone.
I am trying to use 4 GPUs in a single node to train my model with the DDP strategy. But every time I run trainer.fit, the whole script is executed 4 times, and it requires 4 times the CPU memory compared to the single-GPU case.
I am not sure whether it is intended behavior or not. I ran the following sample code. It trains MNIST data on 4 gpus.
import warnings
warnings.filterwarnings("ignore")
import os
import torch
from pytorch_lightning import LightningModule, Trainer
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader#, random_split
from torchvision import transforms
from torchvision.datasets import MNIST
PATH_DATASETS = os.environ.get("PATH_DATASETS", ".")
class MNISTModel(LightningModule):
def __init__(self):
super().__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
return loss
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
if __name__ == '__main__':
print('Hello world!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
mnist_model = MNISTModel()
train_ds = MNIST(PATH_DATASETS, train=True, download=True, transform=transforms.ToTensor())
train_loader = DataLoader(train_ds, batch_size=256)
trainer = Trainer(gpus=4, strategy='ddp', max_epochs=1, replace_sampler_ddp=True, num_nodes=1)
trainer.fit(mnist_model, train_loader)
And I got the following output:
Hello world!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
Hello world!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/4
Hello world!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Hello world!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
initializing distributed: GLOBAL_RANK: 2, MEMBER: 3/4
initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/4
initializing distributed: GLOBAL_RANK: 3, MEMBER: 4/4
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 4 processes
----------------------------------------------------------------------------------------------------
The training runs fine, but the thing is that 'Hello world!' is printed four times. My problem is that the training data is also loaded four times and takes four times the CPU memory. I am not sure whether this is the intended behavior or whether I am doing something wrong.
How do you deal with DDP if the training data is too large to be copied once per GPU? | hey @earendil25!
This is exactly how DDP works. To distribute data across devices, a DistributedSampler is added so each device sees a different shard (avoiding duplicated batches), and the model is wrapped in DistributedDataParallel to sync gradients. With ddp, the whole script is launched once per device. Alternatively, you can try ddp_spawn, which creates spawned subprocesses and won't re-execute the whole script on each device. | D_kwDOCqWgoM4AOzRX | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11938#discussioncomment-2197714
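If you want certain side effects (prints, downloads, heavy preprocessing) to run only once rather than once per process, a common pattern is to guard them with rank_zero_only (a sketch):
from pytorch_lightning.utilities import rank_zero_only

@rank_zero_only
def say_hello():
    print('Hello world!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')  # executes only on global rank 0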
Stop training if high enough accuracy isn't reached | Hi, I know that there is EarlyStopping if validation metrics are deteriorating. But I was wondering if it was possible to stop training if after say epoch 10, the accuracy hasn't reached say 20%. If such a callback doesn't exist, any thoughts on how I can get started on the implementation of it?
For context, I am running a distributed hyper-parameter optimizer and I know that the "good" hyper-parameter set will get me to 50% accuracy by epoch 5. | You could write a callback similar to early stopping which checks your metric for the target value by whatever epoch. If the metric isn't good enough, you can signal to the trainer to stop, like this:
pytorch-lightning/pytorch_lightning/callbacks/early_stopping.py, lines 194 to 196 at 490cc57:
# stop every ddp process if any world process decides to stop
should_stop = trainer.training_type_plugin.reduce_boolean_decision(should_stop)
trainer.should_stop = trainer.should_stop or should_stop | MDEwOkRpc2N1c3Npb24zMzQ2Mzk5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7305#discussioncomment-682487 |
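A minimal sketch of such a callback (the class name, its arguments, and the monitored key "val_acc" are made-up names for illustration; the key must match whatever you pass to self.log):
from pytorch_lightning.callbacks import Callback

class MinimumAccuracy(Callback):
    """Stop training if the monitored metric has not reached a target value by a given epoch."""

    def __init__(self, monitor: str = "val_acc", target: float = 0.2, by_epoch: int = 10):
        self.monitor = monitor
        self.target = target
        self.by_epoch = by_epoch

    def on_validation_end(self, trainer, pl_module):
        value = trainer.callback_metrics.get(self.monitor)
        if value is None or trainer.current_epoch < self.by_epoch:
            return
        should_stop = bool(value < self.target)
        # same pattern as EarlyStopping: stop every ddp process if any world process decides to stop
        should_stop = trainer.training_type_plugin.reduce_boolean_decision(should_stop)
        trainer.should_stop = trainer.should_stop or should_stop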
wrong global rank when trying multi-nodes | Hello,
I'm trying to train my model on multiple nodes (2 nodes, 8 GPUs each, using the ddp accelerator, and trying without SLURM).
But I have a problem with GLOBAL_RANK.
in node 1,
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/16
...
initializing ddp: GLOBAL_RANK: 7, MEMBER: 8/16
same as in node 2,
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/16
...
initializing ddp: GLOBAL_RANK: 7, MEMBER: 8/16
And got stuck with repeated message like below
Waiting in store based barrier to initialize process group for rank: 3, key: store_based_barrier_key:1 (world_size=16, worker_count=13, timeout=0:30:00)
Waiting in store based barrier to initialize process group for rank: 1, key: store_based_barrier_key:1 (world_size=16, worker_count=13, timeout=0:30:00)
I'm trying to set things up as in this document, but I also got a problem, like below:
os.environ["MASTER_ADDR"] = master_addr
os.environ["MASTER_PORT"] = master_port
os.environ["WORLD_SIZE"] = "16"
os.environ["NODE_RANK"] = rank
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 474, in fit
self.accelerator.setup(self, model)
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu.py", line 19, in setup
return super().setup(trainer, model)
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 69, in setup
self.connect_training_type_plugin(self.training_type_plugin, model)
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 328, in connect_training_type_plugin
plugin.connect(model)
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/parallel.py", line 68, in connect
self.setup(model)
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 95, in setup
self.task_idx = self.cluster_environment.local_rank()
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch_lightning/plugins/environments/torchelastic_environment.py", line 48, in local_rank
return int(os.environ['LOCAL_RANK'])
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/os.py", line 681, in __getitem__
raise KeyError(key) from None
I'd appreciate any help. Thanks in advance. | What have you set for MASTER_ADDR and MASTER_PORT? These have to refer to one of the two machines you are using. For example, if I have two nodes like this:
IP: 512.124.134.4
IP: 512.124.136.8
And I want 512.124.134.4 to be my master node.
For both my machines I'd need to run something like MASTER_ADDR=512.124.134.4 MASTER_PORT=4500 python train.py.
Let me know if this helps! On top of this, we should update the doc if this does work :) | MDEwOkRpc2N1c3Npb24zMjYwMjk0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6454#discussioncomment-479416 |
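If you would rather set this from inside the script than on the command line, a sketch of the equivalent environment setup on each node (the address and port are the placeholder values from the answer above; run the script once per node, changing only NODE_RANK):
import os

os.environ["MASTER_ADDR"] = "512.124.134.4"  # placeholder: IP of the node chosen as master
os.environ["MASTER_PORT"] = "4500"           # placeholder: any free port on the master node
os.environ["NODE_RANK"] = "0"                # "0" on the master node, "1" on the second node
os.environ["WORLD_SIZE"] = "16"              # 2 nodes x 8 GPUs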
I cannot assign the GPU index by ddp or dp backends.. | It's weired. I have a single machine with 8 GPUSs and now 0~4 are full load.
I would like to use parallel training on 5 6 7 GPUs but cannot assign the task to them.
Some of my codes:
parser = argparse.ArgumentParser(description='Solver')
parser.add_argument('--config', required=True, type=str)
fargs = parser.parse_args()
args = parse_config(fargs.config)
data = DDPMData(args.Data)
data.train_dataloader()
data.test_dataloader()
num_cls = data.n_classes
shape = data.data_shapes
args.Model.Unet.kwargs.num_classes = num_cls
args.Model.DiscreteDiffusion.kwargs.num_class = num_cls
args.Model.DiscreteDiffusion.kwargs.shape = shape
model = DDDPM(args)
wandb_logger = WandbLogger()
callbacks = []
callbacks.append(ModelCheckpoint(save_last=True,every_n_train_steps=600))
callbacks.append(LearningRateMonitor(logging_interval='step'))
trainer = pl.Trainer(callbacks=callbacks,max_steps=args.max_steps,
accelerator='dp', gpus=[5,6,7],
logger = wandb_logger, check_val_every_n_epoch=120,
num_sanity_val_steps=0)
trainer.fit(model, data)
And now nvidia-smi shows:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.42.01 Driver Version: 470.42.01 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:04:00.0 Off | N/A |
| 63% 72C P2 151W / 200W | 7997MiB / 8119MiB | 85% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce ... On | 00000000:06:00.0 Off | N/A |
| 71% 77C P2 134W / 200W | 8109MiB / 8119MiB | 98% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA GeForce ... On | 00000000:07:00.0 Off | N/A |
| 82% 83C P2 143W / 200W | 7405MiB / 8119MiB | 89% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA GeForce ... On | 00000000:08:00.0 Off | N/A |
| 80% 83C P2 158W / 200W | 7263MiB / 8119MiB | 59% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 4 NVIDIA GeForce ... On | 00000000:0C:00.0 Off | N/A |
| 78% 82C P2 148W / 200W | 5719MiB / 8119MiB | 100% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 5 NVIDIA GeForce ... On | 00000000:0D:00.0 Off | N/A |
| 0% 24C P8 7W / 200W | 4MiB / 8119MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 6 NVIDIA GeForce ... On | 00000000:0E:00.0 Off | N/A |
| 0% 34C P8 8W / 200W | 4MiB / 8119MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 7 NVIDIA GeForce ... On | 00000000:0F:00.0 Off | N/A |
| 0% 25C P8 7W / 200W | 4MiB / 8119MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 4082 C python 7993MiB |
| 1 N/A N/A 13619 C python 8105MiB |
| 2 N/A N/A 4082 C python 7401MiB |
| 3 N/A N/A 4082 C python 7259MiB |
| 4 N/A N/A 24206 C python 5715MiB |
+-----------------------------------------------------------------------------+
And when I run the code, it assigns the task to GPU 2; I don't know why.
RuntimeError: CUDA out of memory. Tried to allocate 60.00 MiB (GPU 2; 7.93 GiB total capacity; 117.19 MiB already allocated; 45.50 MiB free; 120.00 MiB reserved in total by PyTorch)
Any idea..?
##############################
torch = 1.7.1
pl = 1.3.8 | Problemed solved.
I forgot to map device when I load a ckpt file as data... | MDEwOkRpc2N1c3Npb24zNDU0NTI5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8368#discussioncomment-1002781 |
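For anyone hitting the same symptom: the fix amounts to passing map_location when loading the checkpoint file, for example (the path is a placeholder):
import torch

# remap the checkpoint tensors to the CPU (or a currently visible GPU) instead of the
# GPU index that was recorded when the checkpoint was written
ckpt = torch.load("path/to/checkpoint.ckpt", map_location="cpu")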
TPU Training: No TPU devices were found. | Thanks for great framework.
I tried to train with tpu (Google Cloud Platform Environment). I encounter error like this:
kaki_ai@kaki-ins:~/kopite-bot$ python3 train_blender.py
16:14:31 | Overriding opt["no_cuda"] to True (previously: False)
16:14:31 | Loading model with `--beam-block-full-context false`
16:14:31 | loading dictionary from /home/kaki_ai/ParlAI/data/models/blender/blender_90M/model.dict
16:14:31 | num words = 54944
16:14:32 | DEPRECATED: XLM should only be used for backwards compatibility, as it involves a less-stable layernorm operation.
16:14:33 | Total parameters: 87,508,992 (87,508,992 trainable)
16:14:33 | Loading existing model params from /home/kaki_ai/ParlAI/data/models/blender/blender_90M/model
Traceback (most recent call last):
File "train_blender.py", line 47, in <module>
val_dataloader=test_loader,
File "/home/kaki_ai/kopite-bot/training/lightning_base.py", line 135, in fit
accumulate_grad_batches=self.accumulate_grad_batches,
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/env_vars_connector.py", line 39, in insert_env_defaults
return fn(self, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 321, in __init__
replace_sampler_ddp, deterministic, precision, amp_backend, amp_level, plugins
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 91, in __init__
self.tpu_cores = device_parser.parse_tpu_cores(tpu_cores)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/utilities/device_parser.py", line 113, in parse_tpu_cores
raise MisconfigurationException('No TPU devices were found.')
pytorch_lightning.utilities.exceptions.MisconfigurationException: No TPU devices were found.
If you have any doubts, please help me. Thank you! | See #6778. (just for the record) | MDEwOkRpc2N1c3Npb24zMzAxNjgw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6774#discussioncomment-556182 |
NN output within a numba jitted function | Hello,
I have a jitted function within which I need to use the output of a neural network (trained using PyTorch Lightning). The pseudo code will make this clearer:
while True:
    x = sample_from_model()  # <- numpy type, hence compatible with numba
    out = NN(torch.Tensor(x))  # <- incompatible with numba
Is there a way to circumvent this problem? First thing that comes to mind is to manually extract the weights and compute the forward pass.
Thanks in advance,
Petar | Hi Petar,
I'm not that familiar with numba, but if it runs with numpy types, you should be able to do this via ONNX export.
You can get the export simply with my_lightning_model.to_onnx().
Note: This dumps it to disk and you can use the onnx runtime for prediction then. | MDEwOkRpc2N1c3Npb24zNDI2NjI5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8099#discussioncomment-910104 |
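A rough sketch of that route, assuming the onnxruntime package is available and model is the trained LightningModule (the file name and input size are arbitrary):
import numpy as np
import onnxruntime as ort
import torch

# export once; input_sample tells Lightning the expected input shape
model.to_onnx("model.onnx", input_sample=torch.randn(1, 8))

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

def nn_forward(x: np.ndarray) -> np.ndarray:
    # numpy in, numpy out -- callable e.g. from a numba.objmode block inside the jitted function
    return session.run(None, {input_name: x.astype(np.float32)})[0]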
How to pass gradients to `backward()` | In my experiment, I have a loss function, which is not defined by an expression. But I have a formulation of the gradient (formally a subgradient) so I have to pass the gradient manually. In pytorch, I implement it in the following way and it is working fine.
self.optimizer.zero_grad()
y_hat = self.model(x_train)
grad = compute_grad(y_hat, y)
y_hat.backward(gradient=grad)
self.optimizer.step()
Would the following be a correct implementation in lightning?
def training_step(self, batch, batch_idx):
opt = self.optimizers()
x,y = batch
y_hat = self(x)
grad = compute_grad(y_hat, y)
opt.zero_grad()
y_hat.backward(gradient= grad)
opt.step() | @JayMan91 Haven't tried, but you could probably try manual optimization.
def __init__(...):
...
self.automatic_optimization = False # use manual optimization
...
def training_step(...):
...
self.manual_backward(y_hat, gradient=grad)
...
manual optimization: https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#manual-optimization
LightningModule.manual_backward: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#manual-backward | D_kwDOCqWgoM4AN88O | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10257#discussioncomment-1600188 |
on_post_move_to_device is leaky | The decorator here is leaky:
pytorch-lightning/pytorch_lightning/core/decorators.py, lines 73-108 (commit 963c267):
def parameter_validation(fn: Callable) -> Callable:
"""
Validates that the module parameter lengths match after moving to the device. It is useful
when tying weights on TPU's.
Args:
fn: ``model_to_device`` method
Note:
TPU's require weights to be tied/shared after moving the module to the device.
Failure to do this results in the initialization of new weights which are not tied.
To overcome this issue, weights should be tied using the ``on_post_move_to_device`` model hook
which is called after the module has been moved to the device.
See Also:
- `XLA Documentation <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#xla-tensor-quirks>`_
"""
@wraps(fn)
def inner_fn(self, *args, **kwargs):
pre_layer_count = len(list(self.model.parameters()))
module = fn(self, *args, **kwargs)
self.model.on_post_move_to_device()
post_layer_count = len(list(self.model.parameters()))
if not pre_layer_count == post_layer_count:
rank_zero_warn(
f"The model layers do not match after moving to the target device."
" If your model employs weight sharing on TPU,"
" please tie your weights using the `on_post_move_to_device` model hook.\n"
f"Layer count: [Before: {pre_layer_count} After: {post_layer_count}]"
)
return module
return inner_fn
It assumes it is called with the model_to_device method, and that self has access to model which implements on_post_move_to_device and has parameters defined.
The hook here:
pytorch-lightning/pytorch_lightning/core/hooks.py, lines 341-355 (commit 963c267):
def on_post_move_to_device(self) -> None:
"""
Called in the ``parameter_validation`` decorator after :meth:`~pytorch_lightning.core.LightningModule.to`
is called. This is a good place to tie weights between modules after moving them to a device. Can be
used when training models with weight sharing properties on TPU.
Addresses the handling of shared weights on TPU:
https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#xla-tensor-quirks
Example::
def on_post_move_to_device(self):
self.decoder.weight = self.encoder.weight
"""
is only called by the TPU backend. Is it intended to be called by other plugins?
Since this lives under core/decorators.py, one might assume that it is more general. Should this be an implementation detail of the TPU plugins instead? @kaushikb11 | This hook is intended only for TPUs, as TPUs don't support tying parameters on CPU.
However, I believe we could actually deprecate it once #8555 is implemented. | MDEwOkRpc2N1c3Npb24zNTAzNTI2 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8736#discussioncomment-1213241
Doing the sampling of batch indexes inside lightning | I am trying to port my very old pytorch code to lightning and in my training loop, I have something as follows:
batch_order = np.arange(data.x_train.shape[0])
batch_index = np.random.choice(batch_order, batch_size, p=seq_sample_probs).tolist()
batch = torch.tensor(data.x_train[batch_index], dtype=torch_dtype, device=torch_device, requires_grad=False)
# then call the forward on this batch
model.encoder.forward(batch)
I was wondering how I can incorporate this batch index selection in the lightning code. In my code, I have the usual:
def train_dataloader(self):
return DataLoader(self.train_dataset, batch_size=self.batch_size)
But I do not know where I can inject my sampling code inside all this. | I think that's the job of the PyTorch sampler. | D_kwDOCqWgoM4AOIRx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10741#discussioncomment-1699932 |
Using Multiple Optimisers gives Index Error? | Hello,
I am trying to build a model which uses multiple optimisers. When I try to train the model I get the error validation_step() missing 1 required positional argument: 'optimizer_idx'. I have reproduced this error on the BoringModel used for bug reports:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import torch
from torch.utils.data import DataLoader, Dataset
from pytorch_lightning import LightningModule, Trainer
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class BoringModel(LightningModule):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(32, 2)
def forward(self, x):
return self.layer(x)
def training_step(self, batch, batch_idx, optimizer_idx):
if optimizer_idx == 0:
loss = self(batch).sum()
self.log("train_loss", loss)
if optimizer_idx == 1:
loss = self(batch).sum()
self.log("train_loss", loss)
return {"loss": loss}
def validation_step(self, batch, batch_idx, optimizer_idx):
if optimizer_idx == 0:
loss = self(batch).sum()
self.log("valid_loss", loss)
if optimizer_idx == 1:
loss = self(batch).sum()
self.log("valid_loss", loss)
return {"loss": loss}
def test_step(self, batch, batch_idx, optimizer_idx):
if optimizer_idx == 0:
loss = self(batch).sum()
self.log("test_loss", loss)
if optimizer_idx == 1:
loss = self(batch).sum()
self.log("test_loss", loss)
return {"loss": loss}
def configure_optimizers(self):
opt_a = torch.optim.SGD(self.layer.parameters(), lr=0.1)
opt_b = torch.optim.SGD(self.layer.parameters(), lr=0.1)
return [opt_a, opt_b], []
def run():
train_data = DataLoader(RandomDataset(32, 64), batch_size=2)
val_data = DataLoader(RandomDataset(32, 64), batch_size=2)
test_data = DataLoader(RandomDataset(32, 64), batch_size=2)
model = BoringModel()
trainer = Trainer(
default_root_dir=os.getcwd(),
limit_train_batches=1,
limit_val_batches=1,
limit_test_batches=1,
num_sanity_val_steps=0,
max_epochs=1,
enable_model_summary=False,
)
trainer.fit(model, train_dataloaders=train_data, val_dataloaders=val_data)
trainer.test(model, dataloaders=test_data)
if __name__ == "__main__":
run()
What am I missing?
Thanks for the help! | Validation doesn't require optimizers. Try removing the
optimizer_idx argument from your validation_step method definition. | D_kwDOCqWgoM4AOwcv | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11846#discussioncomment-2150650
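Concretely, the eval hooks from the snippet above would drop the argument (only training_step keeps optimizer_idx, since only training runs the optimizers):
def validation_step(self, batch, batch_idx):
    loss = self(batch).sum()
    self.log("valid_loss", loss)
    return {"loss": loss}

def test_step(self, batch, batch_idx):
    loss = self(batch).sum()
    self.log("test_loss", loss)
    return {"loss": loss}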
How to customize the version or name of log file | v_num in progress bar means the version number of this running, and the log file's save directory is version_{v_num}.
How can i customize the directory's name, such as version_GAT, version_GCN. | you change it by passing your logger:
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger
logger = TensorBoardLogger("tb_logs", name="my_model", version="GAT")
trainer = Trainer(logger=logger) | MDEwOkRpc2N1c3Npb24zNDAyOTgx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7897#discussioncomment-844256 |
Where to transform and inverse-transform | Hi!
I'm working on an LSTM to predict price changes. The data has to be transformed (standardized) during training/validation and later inverse-transformed when predicting in production.
I'm using the LightningModule as well as the LightningDataModule, but I'm not sure where to apply the StandardScaler's transform and, more specifically, where to save the scaler parameters and where to apply the inverse transform on the predictions. Any ideas?
// R | Assuming that you are using a PyTorch TensorDataset, you can apply the transform inside setup and the inverse transform inside predict_step itself. | D_kwDOCqWgoM4AOaI0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11297#discussioncomment-1901657
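A rough sketch of where those pieces could live (the class names, attributes, and the single shared scaler are illustrative assumptions, not the only possible layout):
import numpy as np
import pytorch_lightning as pl
from sklearn.preprocessing import StandardScaler

class PriceDataModule(pl.LightningDataModule):
    def __init__(self, raw_train: np.ndarray, raw_val: np.ndarray):
        super().__init__()
        self.raw_train, self.raw_val = raw_train, raw_val

    def setup(self, stage=None):
        # fit the scaler on the training split only, reuse the same fitted scaler everywhere
        self.scaler = StandardScaler()
        self.x_train = self.scaler.fit_transform(self.raw_train)
        self.x_val = self.scaler.transform(self.raw_val)

class PriceModel(pl.LightningModule):
    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        y_scaled = self(batch)
        scaler = self.trainer.datamodule.scaler  # the scaler fitted in setup()
        return scaler.inverse_transform(y_scaled.detach().cpu().numpy())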
lowest val/loss ckpt != highest val/Accuracy | I am using the following callback:
checkpoint_callback = ModelCheckpoint(monitor='val/loss',
mode='min',
save_last=True,
filename=cfg.CALLBACKS.FILENAME,
auto_insert_metric_name=cfg.CALLBACKS.AUTO_INSERT_METRIC_NAME,
dirpath=LOGGER_DIR)
I am not sure what is going wrong. I am using F.cross_entropy for the loss. | The lowest loss (say CE) will not always give you the highest accuracy. (https://kharshit.github.io/blog/2018/12/07/loss-vs-accuracy)
So, always use the accuracy (or desired metric at hand) to select the best model instead of loss.
(This is probably common knowledge but this was news for me. Putting it out there so someone else doesn't bang their head for two days!) | D_kwDOCqWgoM4AOxVN | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11896#discussioncomment-2167416 |
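In terms of the callback above, that just means monitoring the logged accuracy instead of the loss (assuming a val/Accuracy key is logged during validation; LOGGER_DIR is the same variable used in the question):
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_callback = ModelCheckpoint(
    monitor="val/Accuracy",  # must match the key passed to self.log during validation
    mode="max",              # higher accuracy is better
    save_last=True,
    dirpath=LOGGER_DIR,
)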
How to use pytorch-lightning distributed training without SLURM? | βHow to use pytorch-lightning distributed training without SLURM?
Couldn't find anywhere a single note or tutorial on this.
For example, I have just 2 nodes with 4 GPUs each.
On each node environment variables required for Pytorch distributed communication are configured (see pytorch documentation).
Is it possible to train a pytorch-lightning script in this setup, and if so, how? | You can configure your own environment variables and do your own setup.
Just override LightningModule.init_ddp_connection
https://pytorch-lightning.readthedocs.io/en/latest/lightning-module.html#lightningmodule-class
(corrected) | MDEwOkRpc2N1c3Npb244MjI1Mg== | https://github.com/PyTorchLightning/pytorch-lightning/discussions/1334#discussioncomment-238187 |
Inheritance and `save_hyperparameters` | Hello Lightning folks!
Suppose I have a base model class that I'd like to inherit from as follows:
import pytorch_lightning as pl
class ParentModel(pl.LightningModule):
def __init__(
self,
lr: float = 0.001,
):
super(ParentModel, self).__init__()
self.lr = lr
class ChildModel(ParentModel):
def __init__(
self,
lr: float = 0.005,
loss: str = "mse",
):
super(ParentModel, self).__init__()
self.lr = lr
self.loss = loss
I would like to be able to access the hyperparameters of ChildModel and one way to do that is by including save_hyperparameters() in the __init__ as follows:
class ChildModel(ParentModel):
def __init__(
self,
lr: float = 0.005,
loss: str = "mse",
):
super(ParentModel, self).__init__()
self.lr = lr
self.loss = loss
self.save_hyperparameters()
However, I would like to avoid the need to call save_hyperparameters() in every class that inherits from ParentModel and I was wondering whether it is possible to do this in PyTorch Lightning somehow?
One idea I have in mind is something like a __post_init__ that calls save_hyperparameters() after the __init__ is called, but this doesn't seem to be supported.
Thanks! | You could do something like this:
import pytorch_lightning as pl
class ParentModel(pl.LightningModule):
def __init__(
self,
lr: float = 0.001,
**kwargs
):
super(ParentModel, self).__init__()
self.save_hyperparameters()
self.lr = lr
class ChildModel(ParentModel):
def __init__(
self,
lr: float = 0.005,
loss: str = "mse",
):
        super().__init__(lr=lr, loss=loss)  # calls ParentModel.__init__, so save_hyperparameters() runs
self.loss = loss
That would save all hparams passed to the parent model (including the ones passed through the kwargs). If you want to go one step further, you could also include the following there:
for k, v in kwargs.items():
setattr(self, k, v)
which sets all attributes that are passed through kwargs automatically as model attributes.
That means you could also spare the self.loss=loss line in the child model :) | D_kwDOCqWgoM4ANn3D | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9509#discussioncomment-1347143 |
Error when training: "closure_loss" is NoneType | Hi all,
I am trying to train my LightningModule but I seem to keep getting the error TypeError: unsupported operand type(s) for /: 'NoneType' and 'int' on the line closure_loss = closure_loss / self.accumulate_grad_batches, in the function training_step() in the file training_loop.py.
I think it might have something to do with how I set up my LightningModule, so here is what it looks like:
class HPAModelV1(pl.LightningModule):
def __init__(self):
super().__init__()
#self.lossfunc = F.cross_entropy
self.lossfunc = F.nll_loss
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=3, padding=7)
self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1)
self.conv3 = nn.Conv2d(16, 16, kernel_size=5, stride=1, padding=1)
self.dense = nn.Linear(16, 19)
def forward(self, x): #input size is (256, 3, 256, 256)
x = x.float()
out = self.conv1(x)
out = F.relu(out)
out = F.max_pool2d(out, 3) # output is (bs, 16, 30, 30)
out = self.conv2(out)
out = F.relu(out)
out = F.max_pool2d(out, 3) # output is (bs, 16, 10, 10)
out = self.conv3(out)
out = F.relu(out)
out = F.max_pool2d(out, 8) # output is (bs, 16, 1, 1)
# dense layer
out = out.reshape(out.size()[0], 16)
out = self.dense(out)
return out
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=0.001)
return optimizer
def training_step(self, batch, batchidx):
# set labels and data
x = batch[0]
y = batch[1]
# compute loop
preds = self(x)
probs = F.softmax(preds, dim=1)
# compute the loss function
J = self.lossfunc(probs, y)
# compute accuracy
acc = accuracy(probs, y)
#log for weights and biases
self.log('training loss (step)', J)
self.log('training accuracy (step)', acc)
self.log('mean training loss (epoch)', J, on_step=False, on_epoch=True)
self.log('mean training accuracy (epoch)', acc, on_step=False, on_epoch=True)
# add information to the progress bar
pbar = {'train_acc': acc, 'train_loss' : J}
return J, acc
def validation_step(self, valbatch, valbatchidx):
# use the same training step on the val set
valJ, valAcc = self.training_step(valbatch, valbatchidx)
# log for wb
self.log('validation loss (step)', valJ)
self.log('validation accuracy (step)', valAcc)
self.log('mean validation loss (epoch)', valJ, on_step=False, on_epoch=True)
self.log('mean validation accuracy (epoch)', valAcc, on_step=False, on_epoch=True)
return valJ, valAcc
def validation_epoch_end(self, valStepOutputs):
pass
And in case it helps with diagnosing the cause of the issue, here is the stack trace and output of the Trainer:
GPU available: False, used: False
TPU available: True, using: 1 TPU cores
Global seed set to 0
| Name | Type | Params
---------------------------------
0 | conv1 | Conv2d | 448
1 | conv2 | Conv2d | 2.3 K
2 | conv3 | Conv2d | 6.4 K
3 | dense | Linear | 323
---------------------------------
9.5 K Trainable params
0 Non-trainable params
9.5 K Total params
0.038 Total estimated model params size (MB)
Epoch 0: 0%
0/7759 [00:02<?, ?it/s]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-29-caf15077ca9b> in <module>()
2 os.environ['WANDB_CONSOLE'] = 'on'
3 trainer = Trainer(logger=wbLogger, tpu_cores=1, deterministic=True, max_epochs=epochNum, replace_sampler_ddp=False, num_sanity_val_steps=0)
----> 4 trainer.fit(HPAModelV1(), trainDL, valDL)
5
6 print(time.time() - t0)
23 frames
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule)
497
498 # dispath `start_training` or `start_testing` or `start_predicting`
--> 499 self.dispatch()
500
501 # plugin will finalized fitting (e.g. ddp_spawn will load trained model)
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in dispatch(self)
544
545 else:
--> 546 self.accelerator.start_training(self)
547
548 def train_or_test_or_predict(self):
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/accelerator.py in start_training(self, trainer)
71
72 def start_training(self, trainer):
---> 73 self.training_type_plugin.start_training(trainer)
74
75 def start_testing(self, trainer):
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/plugins/training_type/tpu_spawn.py in start_training(self, trainer)
264 del os.environ["XLA_USE_BF16"]
265 self._close_logger(trainer)
--> 266 xmp.spawn(self.new_process, **self.xmp_spawn_kwargs)
267
268 def start_testing(self, trainer) -> None:
/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py in spawn(fn, args, nprocs, join, daemon, start_method)
384 pf_cfg = _pre_fork_setup(nprocs)
385 if pf_cfg.num_devices == 1:
--> 386 _start_fn(0, pf_cfg, fn, args)
387 else:
388 return torch.multiprocessing.start_processes(
/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py in _start_fn(index, pf_cfg, fn, args)
321 # environment must be fully setup before doing so.
322 _setup_replication()
--> 323 fn(gindex, *args)
324
325
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/plugins/training_type/tpu_spawn.py in new_process(self, process_idx, trainer, mp_queue)
98 self.barrier("pre-run-stage")
99
--> 100 results = trainer.train_or_test_or_predict()
101
102 self.__save_end_of_training_weights(self.lightning_module)
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in train_or_test_or_predict(self)
554
555 else:
--> 556 results = self.run_train()
557
558 return results
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in run_train(self)
635 with self.profiler.profile("run_training_epoch"):
636 # run train epoch
--> 637 self.train_loop.run_training_epoch()
638
639 if self.max_steps and self.max_steps <= self.global_step:
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/training_loop.py in run_training_epoch(self)
495 # ------------------------------------
496 with self.trainer.profiler.profile("run_training_batch"):
--> 497 batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
498
499 # when returning -1 from train_step, we end epoch early
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/training_loop.py in run_training_batch(self, batch, batch_idx, dataloader_idx)
657
658 # optimizer step
--> 659 self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
660
661 else:
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/training_loop.py in optimizer_step(self, optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
436 on_tpu=self.trainer._device_type == DeviceType.TPU and _TPU_AVAILABLE,
437 using_native_amp=using_native_amp,
--> 438 using_lbfgs=is_lbfgs,
439 )
440
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/lightning.py in optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure, on_tpu, using_native_amp, using_lbfgs)
1388 # wraps into LightingOptimizer only for running step
1389 optimizer = LightningOptimizer._to_lightning_optimizer(optimizer, self.trainer, optimizer_idx)
-> 1390 optimizer.step(closure=optimizer_closure)
1391
1392 def optimizer_zero_grad(self, epoch: int, batch_idx: int, optimizer: Optimizer, optimizer_idx: int):
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/optimizer.py in step(self, closure, *args, **kwargs)
212 profiler_name = f"optimizer_step_and_closure_{self._optimizer_idx}"
213
--> 214 self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
215 self._total_optimizer_step_calls += 1
216
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/optimizer.py in __optimizer_step(self, closure, profiler_name, **kwargs)
132
133 with trainer.profiler.profile(profiler_name):
--> 134 trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)
135
136 def step(self, *args, closure: Optional[Callable] = None, **kwargs):
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/accelerator.py in optimizer_step(self, optimizer, opt_idx, lambda_closure, **kwargs)
275 )
276 if make_optimizer_step:
--> 277 self.run_optimizer_step(optimizer, opt_idx, lambda_closure, **kwargs)
278 self.precision_plugin.post_optimizer_step(optimizer, opt_idx)
279 self.training_type_plugin.post_optimizer_step(optimizer, opt_idx, **kwargs)
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/tpu.py in run_optimizer_step(self, optimizer, optimizer_idx, lambda_closure, **kwargs)
32
33 def run_optimizer_step(self, optimizer: Optimizer, optimizer_idx: int, lambda_closure: Callable, **kwargs):
---> 34 xm.optimizer_step(optimizer, barrier=False, optimizer_args={'closure': lambda_closure, **kwargs})
35
36 def all_gather(self, tensor: Union[torch.Tensor], group: Optional[Any] = None, sync_grads: bool = False):
/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py in optimizer_step(optimizer, barrier, optimizer_args, groups)
779 """
780 reduce_gradients(optimizer, groups=groups)
--> 781 loss = optimizer.step(**optimizer_args)
782 if barrier:
783 mark_step()
/usr/local/lib/python3.7/dist-packages/torch/optim/optimizer.py in wrapper(*args, **kwargs)
86 profile_name = "Optimizer.step#{}.step".format(obj.__class__.__name__)
87 with torch.autograd.profiler.record_function(profile_name):
---> 88 return func(*args, **kwargs)
89 return wrapper
90
/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.__class__():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29
/usr/local/lib/python3.7/dist-packages/torch/optim/adam.py in step(self, closure)
64 if closure is not None:
65 with torch.enable_grad():
---> 66 loss = closure()
67
68 for group in self.param_groups:
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/training_loop.py in train_step_and_backward_closure()
652 def train_step_and_backward_closure():
653 result = self.training_step_and_backward(
--> 654 split_batch, batch_idx, opt_idx, optimizer, self.trainer.hiddens
655 )
656 return None if result is None else result.loss
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/training_loop.py in training_step_and_backward(self, split_batch, batch_idx, opt_idx, optimizer, hiddens)
745 with self.trainer.profiler.profile("training_step_and_backward"):
746 # lightning module hook
--> 747 result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
748 self._curr_step_result = result
749
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/training_loop.py in training_step(self, split_batch, batch_idx, opt_idx, hiddens)
325
326
--> 327 closure_loss = closure_loss / self.trainer.accumulate_grad_batches
328
329 # the loss will get scaled for amp. avoid any modifications to it
TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
Thank you, and sorry for all the text
A | Hi @adamDhalla, training_step needs to return one of:
Tensor - The loss tensor
dict - A dictionary. Can include any keys, but must include the key 'loss'
None - Training will skip to the next batch
https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#training-step | MDEwOkRpc2N1c3Npb24zMzA0NjU2 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6806#discussioncomment-564696 |
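Applied to the snippet in the question, the step could return a dict instead of the (J, acc) tuple, e.g. (a sketch; accuracy and self.lossfunc are the same objects defined in the question, and the log_softmax note is an aside rather than the cause of this particular error):
def training_step(self, batch, batch_idx):
    x, y = batch
    preds = self(x)
    # note: F.nll_loss expects log-probabilities, hence log_softmax rather than softmax
    J = self.lossfunc(F.log_softmax(preds, dim=1), y)
    acc = accuracy(F.softmax(preds, dim=1), y)
    self.log("training loss (step)", J)
    self.log("training accuracy (step)", acc)
    return {"loss": J, "acc": acc}  # the 'loss' key is required; extra keys are allowed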
How to ensure objects saved as model attributes are saved in the checkpoint file? | Say I have a lightning model model = MyLightningModel() that contains an object that gets created and updated throughout training model.my_object. Upon loading the model from the checkpoint MyLightningModel.load_from_checkpoint(ckpt_path) I'm noticing the model.my_object attribute is not being saved and restored upon loading from checkpoint.
Is it possible to somehow ensure that these model attributes get saved in the checkpoint file and properly restored when loading the model from checkpoint?
Thanks in advance for your help! | Dear @KirillShmilovich.
You could use the LightningModule on_save_checkpoint and on_load_checkpoint hooks.
class MyLightningModel(LightningModule):
    def on_save_checkpoint(self, checkpoint):
        checkpoint["my_object"] = self.my_object
    def on_load_checkpoint(self, checkpoint):
        self.my_object = checkpoint["my_object"]
However, pickling objects isn't always the best approach. A slightly better approach is:
class MyLightningModel(LightningModule):
    def on_save_checkpoint(self, checkpoint):
        checkpoint["my_object_state_dict"] = self.my_object.state_dict()
    def on_load_checkpoint(self, checkpoint):
        self.my_object = my_object_cls.from_state_dict(checkpoint["my_object_state_dict"]) | MDEwOkRpc2N1c3Npb24zNTExMzgz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8841#discussioncomment-1205322
How does a GPU cluster system like SLURM use ddp training? | I need to use srun to run python, so how do I set up the PL Trainer correctly? | Not sure what you are asking. Is the question how to use PL with SLURM?
I can point you to the SLURM tutorial in our docs:
https://pytorch-lightning.readthedocs.io/en/latest/clouds/slurm.html | MDEwOkRpc2N1c3Npb24zMjk2ODM0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6715#discussioncomment-568878 |
How to save and load LightningModule whose input containing the pretrained moduel? | Hi,
I'm applying PyTorch Lightning modules to a VAE and our model. We first train the VAE and use the best checkpoint of the pretrained VAE as the initial weights of our model.
# STEP 1. Train VAE
vae = VAE(...)
trainer = Trainer(...)
trainer.fit(vae)
# STEP 2.
vae = VAE.load_from_checkpoint(...)
class Model(LightningModule):
def __init__(self, encoder, decoder, learning_rate):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.save_hyperparameters("learning_rate")
...
encoder = copy.deepcopy(vae.encoder)
decoder = copy.deepcopy(vae.decoder)
model = Model(
encoder=encoder,
decoder=decoder,
...
)
trainer.fit(model)
The problem is when I load the model after training ends. Since torch modules are contained in the input arguments of Model, the common approach
model = Model.load_from_checkpoint(...)
yields the following error message: TypeError: __init__() missing 2 required positional arguments: 'encoder' and 'decoder'
So, what is the best practice for saving and loading a model which uses a pre-trained model? | Since part of your model is passed in as arguments, you can randomly initialize your VAE and let the new checkpoint configure its weights:
vae = VAE()
encoder = copy.deepcopy(vae.encoder)
decoder = copy.deepcopy(vae.decoder)
model = Model.load_from_checkpoint(..., encoder=encoder, decoder=decoder) | D_kwDOCqWgoM4AN4Of | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10037#discussioncomment-1507939 |
online hard-example mining/examining under Multi-GPU ='dp' | Background
Hi, I am trying to track the prediction of each individual sample during the training/validation step. The main purpose is to do online hard-example mining/examining.
I found that one way of doing this is to make the input variable of the training/validation_step functions carry the sample-id information, for example the file name. So I made the input a dictionary.
Example Code
class LightningModule():
def validation_step(self, batch, batch_idx):
y = batch['target'].float()
y_hat = self.forward(batch)
loss = self.get_loss(y_hat, y)
# append the individual result
for i in range(len(batch['sample_id'])):
self.validation_result['prediction_result'].append(y_hat[i])
self.validation_result['sample_id'].append(batch['sample_id'][i])
self.validation_result['target'].append(batch['target'][i])
return {'val_loss': loss}
def forward(self, batch):
x = batch['x']
y_hat = self.model( x)
return y_hat
Input dict that works on a single GPU but fails under multi-GPU 'dp'
input_batch = {
'x' : Tensor (1st dimension as batch),
'target': Tensor (1st dimension as batch),
'sample-id': [a, b, c] (list-object)
}
And it took me some time to realize that all value objects inside the input dictionary should be torch.Tensor, not lists containing strings; otherwise, while training under multi-GPU 'dp' mode, the list object won't be split properly.
Input dict that works on both single GPU and multi-GPU 'dp'
input_batch = {
'x' : Tensor (1st dimension as batch),
'target': Tensor (1st dimension as batch),
'sample-id': 1D-Tensor for sample-id ex Tensor([1 , 3, 5])
}
Currently, I still have some doubts on this approach...
Has anyone tried to implement similar functionality, i.e. online hard-example mining, with a different approach?
Thanks :) | Have you considered using a library such as pytorch-metric-learning?
in general, it would look something like
class MinerNetwork(pl.LightningModule):
def __init__(...):
self.network = # define network here
self.miner_function = miners.DistanceWeightedMiner()
self.objective = losses.TripletMarginLoss()
def forward(self, data, labels):
embeddings = self.network(data)
return embeddings
def training_step(self, batch, batch_idx):
data, labels = batch
embeddings = self(data)
pairs = self.miner_function(embeddings, labels)
loss = self.objective(embeddings, labels, pairs)
return loss
This does mining within each batch that you pass in. I'm not sure where you're doing the mining currently, but it seems suspicious to be appending data to a class attribute (self.validation_result). This will likely break if you try running on DDP because a copy of the model is sent to each worker. | MDEwOkRpc2N1c3Npb244MjI1NQ== | https://github.com/PyTorchLightning/pytorch-lightning/discussions/1170#discussioncomment-238207
Precision and Recall over validation step | When Precision and Recall are directly computed, I get the following result:
import torch
from pytorch_lightning.metrics import Precision
from pytorch_lightning.metrics import Recall
y = torch.tensor([0, 0, 2, 2, 1, 1, 1, 2, 0, 0])
y_hat = torch.tensor([1, 1, 2, 1, 1, 1, 1, 1, 2, 1])
precision = Precision(num_classes=3)
recall = Recall(num_classes=3)
precision(y_hat, y)
#>>>tensor(0.2917)
recall(y_hat, y)
#>>>tensor(0.4444)
However, when the same metrics are computed in validation_step, I get the following strange result:
def validation_step(self, batch, batch_idx):
x, y = batch["x"], batch["y"] # y = tensor([0, 0, 2, 2, 1, 1, 1, 2, 0, 0], device='cuda:0')
y_hat = self(x) # y_hat = tensor([1, 1, 2, 1, 1, 1, 1, 1, 2, 1], device='cuda:0')
precision = self.precision_score(y_hat, y) # precision = tensor(0.4000, device='cuda:0')
recall = self.recall_score(y_hat, y) # recall = tensor(0.4000, device='cuda:0')
what am I missing? | @ceceu after trying myself, I assume you have set the average argument in the first case to macro and in the second case to micro (default):
y = torch.tensor([0, 0, 2, 2, 1, 1, 1, 2, 0, 0])
y_hat = torch.tensor([1, 1, 2, 1, 1, 1, 1, 1, 2, 1])
precision = Precision(num_classes=3, average='macro')
recall = Recall(num_classes=3, average='macro')
print(precision(y_hat, y), recall(y_hat, y)) # tensor(0.2917), tensor(0.4444)
precision = Precision(num_classes=3, average='micro')
recall = Recall(num_classes=3, average='micro')
print(precision(y_hat, y), recall(y_hat, y)) # tensor(0.4000), tensor(0.4000) | MDEwOkRpc2N1c3Npb24yNzkyMjYx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5809#discussioncomment-339770 |
How to access `LightningDataModule` in `LightningModule` | In TorchGeo, we use PyTorch Lightning to organize reproducible benchmarks for geospatial datasets. Currently, we have a set of LightningDataModules for each dataset and a much smaller number of LightningModules for each task (semantic segmentation, classification, regression, etc.). Each Dataset defines its own plot() method that describes how to plot images and masks.
During training/validation steps, we would like to plot a few examples to see how training is progressing. However, the LightningModule doesn't seem to know anything about the LightningDataModule/DataLoader/Dataset. Because of this, if we want to perform dataset-specific plotting during training or validation steps, we're forced to create a separate LightningModule for each dataset, increasing code duplication and defeating the whole purpose of PyTorch Lightning (example).
Is there an easy way for a LightningModule to tell which DataModule/DataLoader/Dataset is being used and call its dataset.plot() method?
@calebrob6 @isaaccorley
@tchaton this is slightly related to #10469 but different enough that I wanted to start a separate discussion about it. | @adamjstewart There is a reference to datamodule via trainer from LightningModule, but would that solve your issue?
self.trainer.datamodule | D_kwDOCqWgoM4AOCkt | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10492#discussioncomment-1628589 |
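For example, a single task LightningModule could then stay dataset-agnostic (a sketch; shared_step, val_dataset, and the TensorBoard-style logger are assumptions about the surrounding code, not part of the answer):
def validation_step(self, batch, batch_idx):
    loss = self.shared_step(batch)                # hypothetical task-specific loss computation
    if batch_idx == 0:
        datamodule = self.trainer.datamodule      # whichever LightningDataModule was passed to trainer.fit
        fig = datamodule.val_dataset.plot(batch)  # assumes the datamodule exposes a dataset with plot()
        self.logger.experiment.add_figure("val_examples", fig, global_step=self.global_step)
    return loss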