test transfer to discussion thread
[ "question" ]
a quick test
Saving meta tags hparams file in Tensorboard Logger can occur multiple times
[ "bug", "help wanted", "logger" ]
πŸ› Bug The bug exists exactly here https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/loggers/tensorboard.py#L220. The if statement is checking using os.path rather than self._fs. As written, when using a non-local filesystem (such as S3, etc.), the os.path.isfile will always return false, meaning the content in the if-statement is always executed. This is counter to expectations where normally the file is not overwritten if it already exists. Please reproduce using the BoringModel and post here No reproduction. To Reproduce The bug is not immediately obvious, but we found it when using Lightning with FB's internal distributed storage. In a workflow where multiple trainers are launched with the same hypers, the distributed storage system will throw when the same file is being written to at the same time. This is how we discovered this bug. Expected behavior From the code and from behavior on the local filesystem, it seems clear that the expected behavior is to check if the file exists and write it only if it does not. Environment N/A Additional context I'll be sending a pull-request to fix this minor bug shortly.
Make EarlyStopping throw an error on unknown modes
[ "feature", "help wanted" ]
πŸš€ Feature Right now, passing in a nonsense mode to pl.callbacks.EarlyStopping passes silently >>> import pytorch_lightning as pl >>> pl.callbacks.EarlyStopping("val_f1", mode="werwer") In the code, it looks this is deliberate: in this case, we only raise a warning when the verbosity is >0 pytorch-lightning/pytorch_lightning/callbacks/early_stopping.py Lines 104 to 110 in 6adc1b3 if self.mode not in self.mode_dict and self.mode != 'auto': if self.verbose > 0: rank_zero_warn( f'EarlyStopping mode={self.mode} is unknown, fallback to auto mode.', RuntimeWarning, ) self.mode = 'auto' I think this is undesirable behavior -- if you make a typo when passing in a parameter to mode, EarlyStopping should fail with an exception, not just produce a warning about potentially incorrect behavior. Worse, EarlyStopping only outputs a warning when the verbosity > 0 -- that means by default, EarlyStopping will just silently just produce incorrect behavior.
Fix Test Set doc to include datamodule
[ "docs" ]
The test set documentation still uses the old test_dataloader format rather than the new datamodule argument: https://pytorch-lightning.readthedocs.io/en/stable/test_set.html
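For reference, a hedged sketch of what the updated snippet could show (MyModel and MyDataModule are placeholders, not names from the docs):

import pytorch_lightning as pl

dm = MyDataModule()                      # any LightningDataModule with a test_dataloader()
model = MyModel.load_from_checkpoint("path/to/best.ckpt")
trainer = pl.Trainer(gpus=1)
trainer.test(model, datamodule=dm)       # instead of passing test_dataloaders=...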
LightningModule .load_from_checkpoint requires specific model signature
[]
I have a LightningModule similar to this: class MyLightningModule(pl.LightningModule): def __init__(self, model: nn.Module, hparams: argparse.Namespace): init code here @staticmethod def add_task_specific_args(parent_parser: Optional[Union[argparse.ArgumentParser,list]] = None): arguments added here Unfortunately this type of signature does not seem to be supported by .load_from_checkpoint(checkpoint_path=my/ckpt/path); see here: pytorch-lightning/pytorch_lightning/core/saving.py Line 192 in cdbddbe model = cls(**_cls_kwargs) At least I get the following error: File "/opt/conda/envs/MyEnv/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 192, in _load_model_state model = cls(**_cls_kwargs) TypeError: __init__() missing 2 required positional arguments: 'model' and 'hparams' So basically what I am wondering is: I would like to keep the "model" separate from the "training/experiment" code and still use Lightning for saving/loading weights. How do I achieve this?
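A hedged sketch of one way to keep that split: load_from_checkpoint forwards extra keyword arguments to __init__, so objects that were never serialized (such as the wrapped nn.Module) can be supplied at load time. build_backbone and the hparams construction below are placeholders:

backbone = build_backbone()                       # recreate the plain nn.Module yourself
hparams = argparse.Namespace(**my_hparam_dict)    # however the original namespace was built
module = MyLightningModule.load_from_checkpoint(
    "my/ckpt/path",
    model=backbone,     # extra kwargs are merged into the constructor call
    hparams=hparams,
)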
What sync_dist_op can be set in self.log()?
[ "question" ]
Hi, the document shows sync_dist_op= "mean" by default. But in my case, I do training in ddp with 2 GPUs and I want to record the number of true positive samples in validation dataset. So sync_dist_op should be set like "sum" which I didn't see in the source code and the document. Another thing is that I want to record validation results on epoch, so I aggregated the results in val_epoch_end() and call self.log("step", self.current_epoch) to overwrite steps. Then if I set syn_dist=True in self.log(), the following error occurred. self.log('Val loss', loss, sync_dist=True) File "/shared_home/r08922129/anaconda3/envs/tabfact_pepper/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 281, in log return sync_ddp_if_available(tensor, group, reduce_op) File "/shared_home/r08922129/anaconda3/envs/tabfact_pepper/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 124, in sync_ddp_if_available return sync_ddp(result, group=group, reduce_op=reduce_op) File "/shared_home/r08922129/anaconda3/envs/tabfact_pepper/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 156, in sync_ddp self._current_dataloader_idx, File "/shared_home/r08922129/anaconda3/envs/tabfact_pepper/lib/python3.7/site-packages/pytorch_lightning/core/step_result.py", line 142, in log torch.distributed.all_reduce(result, op=reduce_op, group=group, async_op=False) File "/shared_home/r08922129/anaconda3/envs/tabfact_pepper/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 954, in all_reduce value = sync_fn(value, group=sync_dist_group, reduce_op=sync_dist_op) File "/shared_home/r08922129/anaconda3/envs/tabfact_pepper/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 344, in sync_tensor work = _default_pg.allreduce([tensor], opts) RuntimeError: Tensors must be CUDA and dense return sync_ddp_if_available(tensor, group, reduce_op) File "/shared_home/r08922129/anaconda3/envs/tabfact_pepper/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 124, in sync_ddp_if_available return sync_ddp(result, group=group, reduce_op=reduce_op) File "/shared_home/r08922129/anaconda3/envs/tabfact_pepper/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 156, in sync_ddp torch.distributed.all_reduce(result, op=reduce_op, group=group, async_op=False) File "/shared_home/r08922129/anaconda3/envs/tabfact_pepper/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 954, in all_reduce work = _default_pg.allreduce([tensor], opts) RuntimeError: Tensors must be CUDA and dense The loss in the first line self.log('Val loss', loss, sync_dist=True) is the result of BCEWithLogitsLoss. What is weird is that I could set sync_dist=True without that error if I call self.log() in validation_step() rather than in validation_epoch_end().
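A hedged sketch of the two patterns discussed above, for methods inside a LightningModule on a 1.1.x release (torch imported at module level): sync_dist_op is passed through to the DDP all-reduce, so "sum" should be accepted, and a tensor logged with sync_dist=True has to live on the GPU, which is why a CPU tensor built in validation_epoch_end triggers "Tensors must be CUDA and dense". Helper names and dictionary keys are illustrative:

def validation_step(self, batch, batch_idx):
    loss, tp = self._shared_eval(batch)                       # illustrative helper
    # reduce with a sum instead of the default mean across processes
    self.log("val_true_positives", tp, sync_dist=True, sync_dist_op="sum")
    return {"loss": loss}

def validation_epoch_end(self, outputs):
    # tensors created/aggregated here may be on the CPU; move them to
    # self.device before logging with sync_dist=True so NCCL can reduce them
    loss = torch.stack([o["loss"] for o in outputs]).mean().to(self.device)
    self.log("Val loss", loss, sync_dist=True)
    self.log("step", self.current_epoch)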
Replace custom samplers with distributed versions when replace_sampler_ddp=True
[ "feature", "help wanted", "won't fix", "docs" ]
πŸš€ Feature Replace custom samplers with distributed versions when replace_sampler_ddp=True. Motivation One of the issues I've been having is using custom samplers with distributed training. Prior to Lightning, I was using the DistributedSamplerWrapper class from Catalyst (https://catalyst-team.github.io/catalyst/_modules/catalyst/data/sampler.html) as part of my own pipeline to use custom samplers during distributed training. When I try to define my own custom DistributedSampler when using Lightning, I always get the following error: AssertionError: Default process group is not initialized It makes sense because the data loader gets called before distributed is initialized. (See: #3238 (comment)) However unlike in the comment, I'm unable to create the DistributedSampler by just leaving the arguments as their default since I always get the above AssertionError. It looks like the original poster was using TPUs, and it seems that on TPUs you can pass rank and num_replicas to the sampler without having to run torch.distributed.init_process_group(). Pitch Basically, what I do now is I manually edit the TrainerDataLoadingMixin class in trainer/data_loading.py. I change the auto_add_sampler() method: if self.replace_sampler_ddp and need_dist_sampler: if not isinstance(dataloader.sampler, (SequentialSampler, RandomSampler)): rank_zero_warn('Replacing custom sampler with distributed version ...') sampler = self._wrap_distributed_sampler(dataloader, shuffle) else: sampler = self._get_distributed_sampler(dataloader, shuffle) dataloader = self.replace_sampler(dataloader, sampler) New _wrap_distributed_sampler() method: def _wrap_distributed_sampler(self, dataloader, shuffle): kwargs = self.distributed_sampler_kwargs kwargs['shuffle'] = shuffle and not self.overfit_batches sampler = DistributedSamplerWrapper(dataloader.sampler, **kwargs) return sampler Alternatives There may be a simple workaround that I'm missing for why I'm not able to pass in my own custom distributed samplers, in which case this wouldn't be needed.
Secondary Pytorch LightningModule-declared Tensors not migrating to correct device when called from other LightningModule.
[ "help wanted", "question", "working as intended" ]
πŸ› Bug Background After working on a Seq2Seq model with attention using only LightingModules, I was still getting on_device errors. This also occurred with PackedSequence objects but I think that is due to a PyTorch cpu-specific implementation, not Lightning. This also occurred with functions returning tensors called from LightingModules, but less of an issue than LightningModules themselves exhibiting the error. ExampeL The following class for Attention required the .cuda() flag ( .to(DEVICE) ): class Attention(pl.LightningModule): ''' Attention is calculated using key, value and query from Encoder and decoder. Below are the set of operations you need to perform for computing attention: energy = bmm(key, query) attention = softmax(energy) context = bmm(attention, value) ''' def __init__(self): super(Attention, self).__init__() def forward(self, query, key, value, lens): ''' :param query :(N, context_size) Query is the output of LSTMCell from Decoder :param key: (N, T_max, key_size) Key Projection from Encoder per time step :param value: (N, T_max, value_size) Value Projection from Encoder per time step :param lens: (N, T) Length of key and value, used for binary masking :return output: Attended Context :return attention: Attention mask that can be plotted ''' energy = torch.bmm(key, query.unsqueeze(2)).squeeze(2) # (N, T_max, key_size) * (N, context_size, 1) = (N, T_max, 1) -> (N, T_max) # binary masking for padded positions mask = torch.arange(key.size(1)).unsqueeze(0) >= lens.unsqueeze(1) # (1, T) >= (B, 1) -> (N, T_max) mask = mask.to(DEVICE) energy.masked_fill_(mask, -1e9) # (N, T_max) attention = nn.functional.softmax(energy, dim=1) # (N, T_max) output = torch.bmm(attention.unsqueeze(1), value).squeeze(1) # (N, T_max) return output, attention The class is called in another LightningModule during sequence decoding, which is in turn called by the training /testing loop. Without the .to(DEVICE) flag, we get: File "/content/11785-hw4-lightning/models.py", line 46, in forward energy.masked_fill_(mask, -1e9) # (N, T_max) RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mask' in call to _th_masked_fill_bool_ I will reproduce the error using the minimal example provided: Please reproduce using the BoringModel COLAB To Reproduce Use following BoringModel and post here: Same link To reproduce, try having torch return a CPU tensor with a separate LightningModule loaded as a submodule. The CPU tensor will not migrate to the correct device causing the above problem. In this case, I usetorch.arange()on an unsqueezed scalar (the loss of the Boring Model. This is a common use for masked Seq2Seq loss, but it is common to see CPU tensors returned in LightningModules in other use cases. I've noted how this was apparent during an Attention-based implementation in the notebook as well as here. Expected behavior Torch-returned CPU tensor in a secondary LightningModule should be migrated to the correct device (cuda) but does not, such that a .cuda() call is required to fix the issue. Environment Note: Bugs with code are solved faster ! Colab Notebook should be made public ! IDE: Please, use our python bug_report_model.py template. Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually). 
You can get the script and run it with: wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py # For security purposes, please check the contents of collect_env_details.py before running it. python collect_env_details.py CUDA: GPU: Tesla V100-SXM2-16GB available: True version: 10.1 Packages: numpy: 1.19.4 pyTorch_debug: True pyTorch_version: 1.7.0+cu101 pytorch-lightning: 1.1.2 tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.6.9 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 Additional context I feel this is very important to address as projects scale, since many projects rely both on on more than a single LightningModule, and rely on torch-returned (as opposed to declared (?)) tensors. Thanks!
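The usual device-agnostic fix, shown as a hedged sketch of the forward above: build the mask on the same device as the tensors it masks, instead of a hard-coded .to(DEVICE). Lightning moves parameters and batch tensors for you, but tensors created inside forward with torch.arange default to the CPU:

def forward(self, query, key, value, lens):
    energy = torch.bmm(key, query.unsqueeze(2)).squeeze(2)                   # (N, T_max)
    # create the mask directly on key's device so it follows the module to cpu/cuda
    mask = torch.arange(key.size(1), device=key.device).unsqueeze(0) >= lens.unsqueeze(1)
    energy = energy.masked_fill(mask, -1e9)
    attention = torch.softmax(energy, dim=1)                                 # (N, T_max)
    output = torch.bmm(attention.unsqueeze(1), value).squeeze(1)
    return output, attention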
Loading checkpoint creates new logging directory and tries to load checkpoint from that directory
[ "bug", "help wanted" ]
πŸ› Bug I'm doing training and then loading the checkpoint later to do some experiments. I'm using hydra. What's happening is the checkpoints are being created as expected. However, after I load the checkpoint with load_from_checkpoint a new output directory is created, and the checkpoint attempting to be read from that new output directory (and so of course it can't find it). if cfg.train is True : trainer = Trainer(max_epochs=cfg.max_epochs, gpus=cfg.gpus) model = Net(cfg) trainer.fit(model) trainer.test(model) else : # plot some data model = Net.load_from_checkpoint(cfg.checkpoint) model.eval() ...
Clean printing to terminal alongside progress bar
[ "feature", "help wanted", "good first issue" ]
πŸš€ Feature A way to print to terminal without breaking up the progress bar. Motivation A lot of people print stuff to terminal while training/validating/testing, and currently a simple call to print() will break the progress bar. A way to get around this is to set up a custom progress bar, with methods for calling tqdm.write, and passing that as a callback to the trainer. However, this feels like a lot of effort for just getting clean terminal output alongside the progress bar. Pitch Ability to get the ProgressBar - or the current active tqdm instance (main_progress_bar, val_progres_bar, etc) - through the Trainer or the LightningModule. Something like pbar = trainer.pbar would feel intuitive. Then, the user can call pbar.write(), and get clean printing like tqdm.write().
the speed of each iteration halves with two GPUs
[ "bug", "help wanted", "waiting on author", "priority: 1" ]
πŸ› Bug I am running this github repository https://github.com/mindslab-ai/faceshifter. In the first experiment, I used one GPU and the speed was about 1.34it/s (there was 994470 iterations, using DDP). In the next experiment, I used two GPUs and this time the speed almost halved to 1.38s/it (this time there was 497235 iterations). Consequently, the speed of the whole algorithm does not change considerably. Expected behavior I expect that the speed of each iteration does not change with 2 GPUs, as I am working with DDP and the whole batch is given to one GPU and since I have not changed batch size in both experiments (with one and two GPUs). Environment PyTorch Version: 1.8 OS: ubuntu 18 Python version: Python 3.6.10 CUDA/cuDNN version: V11.1.74 GPUs: GeForce RTX 3090, GeForce RTX 3090
How to use sparse.mm in float16 training pipeline
[ "question" ]
What is your question? How can we run a certain operation (e.g. torch.sparse.mm) in float32 within a float16 training setting? Details and what I have tried I am trying to train a model using pl.Trainer(distributed_backend='ddp', precision=16, amp_level='01', gpus=2) and I need to use sparse tensor multiplication in the forward loop. I got RuntimeError: "addmm_sparse_cuda" not implemented for 'Half' as reported in Pytorch issue #41069. However, this error remains even after I changed the variable type into float32. I guess apex or pytorch-lightning is still calling sparse.mm in the float16 setting. Is it possible to mark a certain operation in the float16 training pipeline as a float32 operation? Or is there any alternative way to use torch.sparse.mm within a float16 training process? Reproduce Initialize any model (e.g. the official MNIST demo), set trainer = pl.Trainer(distributed_backend='ddp', precision=16, amp_level='01') add the following code in the forward function a = torch.randn(3,2).float().cuda() i = torch.LongTensor([[0, 1, 1], [2, 0, 2]]) v = torch.FloatTensor([3, 4, 5]) b = torch.sparse.FloatTensor(i, v, torch.Size([2,3])).float().cuda() c = torch.sparse.mm(b, a) I cannot afford to do c = b.to_dense() @ a in practice, because of the limited GPU memory. What's your environment? OS: Ubuntu 16.04 Packaging: conda Pytorch: v1.6.0 Pytorch_lightning: v0.9.0 CUDA: 10.2
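A hedged sketch of one escape hatch, which applies when using PyTorch's native AMP (precision=16 on torch >= 1.6 without apex) rather than the apex O1 path in the report: temporarily disable autocast, run the unsupported op in float32, then cast back:

import torch

def sparse_dense_mm_fp32(sparse: torch.Tensor, dense: torch.Tensor) -> torch.Tensor:
    # addmm_sparse_cuda has no half kernel, so do this product in float32
    # outside the autocast region and cast the result back afterwards
    with torch.cuda.amp.autocast(enabled=False):
        out = torch.sparse.mm(sparse.float(), dense.float())
    return out.to(dense.dtype)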
Error: forward() missing positional argument
[ "question" ]
I'm trying to implement a model with multiple input images, something similar to this: https://rosenfelder.ai/multi-input-neural-network-pytorch/ Here is my model: class FaceModel(pl.LightningModule): def __init__(self): super().__init__() self.example_input_array = torch.rand(1, 3, 64, 64) self.face_c1 = nn.Conv2d(3, 16, 3) self.face_p1 = nn.MaxPool2d(2, 2) self.face_c2 = nn.Conv2d(16, 32, 3) self.face_p2 = nn.MaxPool2d(2, 2) self.l_eye_c1 = nn.Conv2d(3, 16, 3) self.l_eye_p1 = nn.MaxPool2d(2, 2) self.l_eye_c2 = nn.Conv2d(16, 32, 3) self.l_eye_p2 = nn.MaxPool2d(2, 2) self.fc1 = nn.Linear(12544, 32) self.fc2 = nn.Linear(32, 2) def forward(self, face, l_eye): face = self.face_p1(F.relu(self.face_c1(face))) face = self.face_p2(F.relu(self.face_c2(face))) face = face.view(-1, 32 * 14 * 14) l_eye = self.l_eye_p1(F.relu(self.l_eye_c1(l_eye))) l_eye = self.l_eye_p2(F.relu(self.l_eye_c2(l_eye))) l_eye = l_eye.view(-1, 32 * 14 * 14) out = torch.cat((face, l_eye), dim=1) out = F.relu(self.fc1(out)) out = self.fc2(out) return out def configure_optimizers(self): optimizer = optim.Adam(self.parameters(), lr=1e-5) return optimizer def training_step(self, batch, batch_idx): face, l_eye = batch['face'], batch['l_eye'] y_hat = self(face, l_eye) loss = F.mse_loss(y_hat, batch['targets']) self.log('train_loss', loss) return loss def validation_step(self, batch, batch_idx): y_hat = self(batch['face'], batch['l_eye']) val_loss = F.mse_loss(y_hat, batch['targets']) self.log('val_loss', val_loss) return val_loss def test_step(self, batch, batch_idx): y_hat = self(batch['face'], batch['l_eye']) loss = F.mse_loss(y_hat, batch['targets']) self.log('test_loss', loss) return loss img_types = ['face', 'l_eye'] d_train, d_val, d_test = create_datasets(*img_types, batch_size=128) logger = TensorBoardLogger(save_dir='logs', name='multi', log_graph=True) model = FaceModel() trainer = pl.Trainer(logger=logger, gpus=1, max_epochs=20, benchmark=True) trainer.fit(model, train_dataloader=d_train, val_dataloaders=d_val) I keep getting the following error and I'm not sure why: TypeError: forward() missing 1 required positional argument: 'l_eye' So my forward method takes both a face and a l_eye, but I feel like both are being supplied during the training step here: face, l_eye = batch['face'], batch['l_eye'] y_hat = self(face, l_eye) I cant see where else forward is being called (somewhere under the hood?), or where else I'm meant to be supplying my second set of images
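A hedged guess at the hidden forward call: with log_graph=True the TensorBoard logger traces the model using self.example_input_array, which here is a single tensor, so forward() gets called without l_eye. Making the example input a tuple that matches the forward signature (or dropping log_graph) should remove the error:

# in FaceModel.__init__, match the (face, l_eye) signature of forward()
self.example_input_array = (
    torch.rand(1, 3, 64, 64),   # face
    torch.rand(1, 3, 64, 64),   # l_eye
)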
Update PipeRPCPlugin compatibility to pytorch 1.7.0+
[ "feature", "help wanted" ]
πŸš€ Feature Latest pytorch-lightning supports Sequential Model Parallelism, but compatible only with torch==1.6.0. Supporting 1.7.0+ allows us to exploit latest GPUs such as RTX 3000 series. Motivation Fine-tuning models too large for a single GPU. Pitch PipeRPCPlugin to support torch==1.7.0 or higher. Alternatives pl.Trainer supports accelerator='sharded' option without data-parallelism. (i.e. batch_size=1) Additional context N/A
Named formatting options in filename argument is broken when it contains "/"
[ "feature", "help wanted", "good first issue", "won't fix", "logger", "checkpointing", "priority: 1" ]
πŸ› Bug / Limitation I think this might be a limitation instead of a bug. Named formatting options in filename argument is broken when it contains "/". "/" is used because I want to group different metrics. From pytorch documentation Lots of information can be logged for one experiment. To avoid cluttering the UI and have better result clustering, we can group plots by naming them hierarchically. For example, β€œLoss/train” and β€œLoss/test” will be grouped together, while β€œAccuracy/train” and β€œAccuracy/test” will be grouped separately in the TensorBoard interface. for example, if I want to save my model using xx/iou metric, I wil do something like the following trainer_config = {} checkpoint_callback = ModelCheckpoint( monitor='xx/iou', dirpath="/path/to/dir", filename='my_model_{epoch:02d}_{xx/iou:.2f}', save_top_k=5, mode='max', ) trainer_config["callbacks"] = [checkpoint_callback] trainer = pl.Trainer(**trainer_config) I would expect models are saved with names like /path/to/dir/my_model_epoch=123_iou=80.ckpt but the slash in "xx/iou" causes some trouble. The checkpoint file is actually saved to "/path/to/dir/my_model_epoch=123_xx" directory with the name iou=80.ckpt. Since "/" is not allowed in filenames in linux, I think maybe xx/iou should be updated to xx_iou to work around the issue ? Environment pytorch-lightning 1.1.2 python 3.6.9
TestTubeLogger fails to log tensorboard hparams
[ "bug", "help wanted" ]
πŸ› Bug I recall TestTubeLogger used to give me an HParams tab in tensorboard. Possibly since the changes described #2974 #3610 this is no longer the case, and the "fix" seems to entail discarding TestTubeLogger and using TensorboardLogger instead, but I am unable to do such a thing due to heavy code dependence on TestTubeLogger. There is never an HParams tab in TB regardless of calls to tt_logger.log_hyperparams(config) and self.save_hyperparameters(config) in my LightningModule constructor. Switching to TensorboardLogger does cause the HParams tab to appear as expected, as suggested in the aforementioned issues. This issue is limited to PL's TestTubeLogger. Does TestTubeLogger have the capability of writing HParams data for TB? Perhaps I am mistaken, and this was never possible. Thanks! Environment pytorch-lightning==1.1.2
Logging with gradient accumulation & DDP
[ "question" ]
I have got two questions about logging when using gradient accumulation and DDP: Is there a way to average logged values across the accumulated batches? Logging in the training step simply as follows: def validation_step(self, batch, batch_idx): ... self.log('val_loss', outputs.loss, sync_dist=True) return loss with accumulate_grad_batches=N does not work and logs N values every 50*N steps instead of a single averaged value every 50 steps. It is recommended to use sync_dist=True while using DDP. However, when logging text, images, etc., I would like to use something like: self.logger.experiment.log({'loss' : ..., 'text' :...}, step=self.global_step) but log it only on one device. Is it possible? Note that I am using the W&B logger.
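On the second question, a hedged sketch: guard the direct experiment call so only the rank-0 process talks to W&B (trainer.is_global_zero is the usual check; the payload keys are illustrative):

# inside the LightningModule, e.g. in validation_epoch_end
if self.trainer.is_global_zero:
    self.logger.experiment.log(
        {"text": sample_text, "images": sample_images},   # illustrative payload
        step=self.global_step,
    )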
Sequential model parallel with multi nodes regarding --num_nodes
[ "bug", "help wanted", "priority: 1" ]
Sequential model parallel with multiple nodes regarding --num_nodes. Please reproduce using the BoringModel: 8 MP with 2 DP as follows. # node 0 MASTER_ADDR=XXX.XXX.XXX.1 MASTER_PORT=7874 NODE_RANK=0 python train.py --gpus 8 --accelerator ddp .... --use_ddp_sequential --num_nodes 2 # node 1 MASTER_ADDR=XXX.XXX.XXX.1 MASTER_PORT=7874 NODE_RANK=1 python train.py --gpus 8 --accelerator ddp .... --use_ddp_sequential --num_nodes 2 It produces the following error. File "/home/hey/.local/lib/python3.7/site-packages/pytorch_lightning/plugins/ddp_sequential_plugin.py", line 233, in _infer_check_num_gpus "Pipe currently only supports splitting the module onto all available GPUs" So I edited _infer_check_num_gpus as follows: haven-jeon@c96f144 After that, learning proceeds normally. Environment PyTorch Version (e.g., 1.0): 1.6.0 OS (e.g., Linux): CentOS 7 How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): pip install -U . Python version: 3.7 CUDA/cuDNN version: 10.1 GPU models and configuration: P40 Any other relevant information:
Plotting metrics with Tensorboard plots two graphs instead on one. What is the second one?
[ "question", "logger", "3rd party" ]
Before asking: I tried looking for an answer and asking on StackOverflow, but no luck. When plotting to Tensorboard with i.e self.log('validation_accuracy_epoch', self.val_acc.compute()), the output is two lines - a dark and a light one. Only the dark one allows any interaction. What is the other line? Code full code below, but the main line is above in the quesiton, which is really the only line logging the relevant metric import torch.nn as nn import torch.nn.functional as F import torchvision from pytorch_lightning.core.lightning import LightningModule from Testing.Research.config.ConfigProvider import ConfigProvider from pytorch_lightning import Trainer, seed_everything from torch import optim from Testing.Research.data_modules.MNISTDataModule import MNISTDataModule from Testing.Research.data_modules.MyDataModule import MyDataModule from pytorch_lightning.loggers import TensorBoardLogger from Testing.Research.config.paths import tb_logs_folder from Testing.Research.config.paths import classifier_checkpoints_path from Testing.Research.config.paths import vae_checkpoints_path from Testing.Research.autoencoding.VAEFC_Lightning import VAEFC from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint import os import glob from typing import List, Optional, Any import torch import pytorch_lightning as pl import seaborn as sn import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas from matplotlib.figure import Figure import io from PIL import Image class LatentSpaceClassifierLightning(LightningModule): def __init__(self, config, trained_vae, latent_dim): super(LatentSpaceClassifierLightning, self).__init__() self._config = config self._trained_vae = trained_vae self._trained_vae.eval() self.fc1 = nn.Linear(latent_dim, 20) self.fc2 = nn.Linear(20, 10) self.fc3 = nn.Linear(10, self._config.n_clusters) self._criterion = nn.CrossEntropyLoss() self.train_acc = pl.metrics.Accuracy() self.val_acc = pl.metrics.Accuracy() self.test_acc = pl.metrics.Accuracy() self.val_confusion = pl.metrics.classification.ConfusionMatrix(num_classes=self._config.n_clusters) self.logger: Optional[TensorBoardLogger] = None def forward(self, x): decoded, mu_batch, logvar = self._trained_vae(x) # TODO cancel decoder part evaluation x = F.relu(self.fc1(mu_batch)) x = F.relu(self.fc2(x)) x = self.fc3(x) # prediction = F.softmax(x) # return prediction log_probs = x return log_probs def training_step(self, batch, batch_index): if self._config.dataset == "toy": (orig_batch, noisy_batch), label_batch = batch # TODO put in the noise here and not in the dataset? 
elif self._config.dataset == "mnist": orig_batch, label_batch = batch orig_batch = orig_batch.reshape(-1, 28 * 28) noisy_batch = orig_batch else: raise ValueError("wrong config.dataset") # recon_batch, mu_batch, logvar_batch = self._trained_vae(orig_batch) log_probs = self.forward(orig_batch) loss = self._criterion(log_probs, label_batch) self.train_acc(log_probs, label_batch) self.log('Classifier_train_accuracy_step', self.train_acc, on_step=True, on_epoch=False) return loss def training_step_end(self, outputs): # update and log # loss = self._criterion(outputs['preds'], outputs['target']) # self.log('train_metric', loss) return outputs def training_epoch_end(self, outs): self.log('Classifier_train_accuracy_epoch', self.train_acc.compute()) def validation_step(self, batch, batch_index): if self._config.dataset == "toy": (orig_batch, noisy_batch), label_batch = batch # TODO put in the noise here and not in the dataset? elif self._config.dataset == "mnist": orig_batch, label_batch = batch orig_batch = orig_batch.reshape(-1, 28 * 28) noisy_batch = orig_batch else: raise ValueError("wrong config.dataset") # recon_batch, mu_batch, logvar_batch = self._trained_vae(orig_batch) log_probs = self.forward(orig_batch) loss = self._criterion(log_probs, label_batch) self.val_acc.update(log_probs, label_batch) self.val_confusion.update(log_probs, label_batch) # self.log('validation_accuracy_step', self.train_acc, on_step=True, on_epoch=False) # self.log('validation_confusion_step', self.val_confusion, on_step=True, on_epoch=False) return {"loss": loss, "labels": label_batch} def validation_step_end(self, outputs): # update and log # loss = self._criterion(outputs['preds'], outputs['target']) # self.log('train_metric', loss) return outputs def validation_epoch_end(self, outs): tb = self.logger.experiment # accuracy self.log('validation_accuracy_epoch', self.val_acc.compute()) # confusion matrix conf_mat = self.val_confusion.compute().detach().cpu().numpy().astype(np.int) df_cm = pd.DataFrame( conf_mat, index=np.arange(self._config.n_clusters), columns=np.arange(self._config.n_clusters)) plt.figure() sn.set(font_scale=1.2) sn.heatmap(df_cm, annot=True, annot_kws={"size": 16}, fmt='d') buf = io.BytesIO() # plt.show() plt.savefig(buf, format='jpeg') buf.seek(0) im = Image.open(buf) im = torchvision.transforms.ToTensor()(im) tb.add_image("val_confusion_matrix", im, global_step=self.current_epoch) def configure_optimizers(self): optimizer = optim.Adam(self.parameters(), lr=self._config.learning_rate) return optimizer def test_step(self, batch, batch_index): if self._config.dataset == "toy": (orig_batch, noisy_batch), label_batch = batch # TODO put in the noise here and not in the dataset? 
elif self._config.dataset == "mnist": orig_batch, label_batch = batch orig_batch = orig_batch.reshape(-1, 28 * 28) noisy_batch = orig_batch else: raise ValueError("wrong config.dataset") # recon_batch, mu_batch, logvar_batch = self._trained_vae(orig_batch) log_probs = self.forward(orig_batch) loss = self._criterion(log_probs, label_batch) self.test_acc(log_probs, label_batch) self.log('Classifier_test_accuracy_step', self.test_acc, on_step=True, on_epoch=False) return loss def test_step_end(self, outputs): # update and log # loss = self._criterion(outputs['preds'], outputs['target']) # self.log('train_metric', loss) return outputs def test_epoch_end(self, outs): self.log('Classifier_test_accuracy_epoch', self.test_acc.compute()) def train_latent_classifier(): config = ConfigProvider.get_config() seed_everything(config.random_seed) if config.dataset == "toy": datamodule = MyDataModule(config) latent_dim = config.latent_dim_toy enc_layer_sizes = config.enc_layer_sizes_toy + [latent_dim] dec_layer_sizes = [latent_dim] + config.dec_layer_sizes_toy elif config.dataset == "mnist": datamodule = MNISTDataModule(config) latent_dim = config.latent_dim_mnist enc_layer_sizes = config.enc_layer_sizes_mnist + [latent_dim] dec_layer_sizes = [latent_dim] + config.dec_layer_sizes_mnist else: raise ValueError("undefined config.dataset. Allowed are either 'toy' or 'mnist'") # model = VAEFC(config=config, encoder_layer_sizes=enc_layer_sizes, decoder_layer_sizes=dec_layer_sizes) last_vae = max(glob.glob(os.path.join(os.path.abspath(vae_checkpoints_path), r"**/*.ckpt"), recursive=True), key=os.path.getctime) trained_vae = VAEFC.load_from_checkpoint(last_vae, config=config, encoder_layer_sizes=enc_layer_sizes, decoder_layer_sizes=dec_layer_sizes) logger = TensorBoardLogger(save_dir=tb_logs_folder, name='Classifier') logger.hparams = config # TODO only put here relevant stuff checkpoint_callback = ModelCheckpoint(dirpath=classifier_checkpoints_path) trainer = Trainer(deterministic=config.is_deterministic, # auto_lr_find=config.auto_lr_find, # log_gpu_memory='all', # min_epochs=99999, max_epochs=config.num_epochs, default_root_dir=classifier_checkpoints_path, logger=logger, callbacks=[checkpoint_callback], gpus=1 ) # trainer.tune(model) classifier = LatentSpaceClassifierLightning(config, trained_vae, latent_dim=latent_dim) trainer.fit(classifier, datamodule=datamodule) best_model_path = checkpoint_callback.best_model_path print("done training classifier with lightning") print(f"best model path = {best_model_path}") return trainer def run_trained_classifier(trainer): trainer.test() if __name__ == "__main__": trainer = train_latent_classifier() run_trained_classifier(trainer) What have you tried? Well, it works, and probably the other line does mean something, and I just don't understand. What's your environment? OS: Windows Packaging pip Version 1.1.2
Hanging process on DDP and ModelCheckpoint Callback if one of the dirpath is None
[ "bug", "help wanted" ]
πŸ› Bug When running on DDP and using the ModelCheckpoint Callback, if the callback is given dirpath=None in one of the processes, the trainer hangs before sanity checks start and becomes unresponsive. Please reproduce using the BoringModel Cannot reproduce on Colab due to needing 2 GPUs to reproduce To Reproduce # Copyright The PyTorch Lightning team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # -------------------------------------------- # -------------------------------------------- # -------------------------------------------- # USE THIS MODEL TO REPRODUCE A BUG YOU REPORT # -------------------------------------------- # -------------------------------------------- # -------------------------------------------- import os import torch from torch.utils.data import Dataset from pl_examples import cli_lightning_logo from pytorch_lightning import LightningModule, Trainer from pytorch_lightning.callbacks import ModelCheckpoint class RandomDataset(Dataset): """ >>> RandomDataset(size=10, length=20) # doctest: +ELLIPSIS <...bug_report_model.RandomDataset object at ...> """ def __init__(self, size, length): self.len = length self.data = torch.randn(length, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len class BoringModel(LightningModule): """ >>> BoringModel() # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE BoringModel( (layer): Linear(...) 
) """ def __init__(self): """ Testing PL Module Use as follows: - subclass - modify the behavior for what you want class TestModel(BaseTestModel): def training_step(...): # do your own thing or: model = BaseTestModel() model.training_epoch_end = None """ super().__init__() self.layer = torch.nn.Linear(32, 2) def forward(self, x): return self.layer(x) def loss(self, batch, prediction): # An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction)) def step(self, x): x = self.layer(x) out = torch.nn.functional.mse_loss(x, torch.ones_like(x)) return out def training_step(self, batch, batch_idx): output = self.layer(batch) loss = self.loss(batch, output) self.log("loss", loss) return {"loss": loss} def training_step_end(self, training_step_outputs): return training_step_outputs def training_epoch_end(self, outputs) -> None: torch.stack([x["loss"] for x in outputs]).mean() def validation_step(self, batch, batch_idx): output = self.layer(batch) loss = self.loss(batch, output) return {"x": loss} def validation_epoch_end(self, outputs) -> None: torch.stack([x["x"] for x in outputs]).mean() def test_step(self, batch, batch_idx): output = self.layer(batch) loss = self.loss(batch, output) return {"y": loss} def test_epoch_end(self, outputs) -> None: torch.stack([x["y"] for x in outputs]).mean() def configure_optimizers(self): optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1) lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1) return [optimizer], [lr_scheduler] # NOTE: If you are using a cmd line to run your script, # provide the cmd line as below. # opt = "--max_epochs 1 --limit_train_batches 1".split(" ") # parser = ArgumentParser() # args = parser.parse_args(opt) def get_local_rank(): local_rank = int(os.environ.get("LOCAL_RANK", 0)) return local_rank def create_log_dir(): """Each run's log dir will have same name as wandb runid""" if get_local_rank() != 0: return return "./logs" def test_run(): class TestModel(BoringModel): def on_train_epoch_start(self) -> None: print("override any method to prove your bug") # fake data train_data = torch.utils.data.DataLoader(RandomDataset(32, 64)) val_data = torch.utils.data.DataLoader(RandomDataset(32, 64)) test_data = torch.utils.data.DataLoader(RandomDataset(32, 64)) ##### Error occurs here ##### log_dir = create_log_dir() # Rank0 process gets str, Rank1 gets None print(f"Rank-{get_local_rank()}: log_dir: {log_dir}") callbacks = [ModelCheckpoint(dirpath=log_dir, monitor="loss", mode="min", save_last=True, period=1, save_top_k=1)] # model model = TestModel() trainer = Trainer( default_root_dir=os.getcwd(), limit_train_batches=1, limit_val_batches=1, max_epochs=1, weights_summary=None, gpus=2, accelerator="ddp", callbacks=callbacks, ) trainer.fit(model, train_data, val_data) trainer.test(test_dataloaders=test_data) if __name__ == "__main__": cli_lightning_logo() test_run() Expected behavior The trainer starts and performs as normal. The default value for dirpath is None, so one of the processes passing None should not affect it. 
Environment * CUDA: - GPU: - GeForce GTX 1080 Ti - GeForce GTX 1080 Ti - available: True - version: 10.2 * Packages: - numpy: 1.19.4 - pyTorch_debug: False - pyTorch_version: 1.7.1 - pytorch-lightning: 1.1.2 - tqdm: 4.54.1 * System: - OS: Linux - architecture: - 64bit - ELF - processor: x86_64 - python: 3.8.0 - version: #109-Ubuntu SMP Fri Jun 19 11:33:10 UTC 2020 How you installed PyTorch (conda, pip, source): pip Additional context Bug does not occur when running on a single GPU. Bug does not occur if both processes give valid dirpath (str)
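A hedged note on the reproduction script above: every DDP process must construct ModelCheckpoint with the same dirpath, since the callback's setup is collective across ranks; Lightning already restricts the actual file writes to global rank 0, so the rank guard inside create_log_dir is what causes the hang. A minimal change that avoids it:

def create_log_dir():
    # return the same path on every rank; Lightning makes sure only rank 0
    # writes checkpoint files, so no rank-specific handling is needed here
    return "./logs"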
TypeError: optimizer_step() got an unexpected keyword argument 'on_tpu'
[ "question", "won't fix", "waiting on author" ]
❓ Questions and Help I have got this error /usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule) 468 self.call_hook('on_fit_start') 469 --> 470 results = self.accelerator_backend.train() 471 self.accelerator_backend.teardown() 472 /usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/gpu_accelerator.py in train(self) 66 67 # train or test ---> 68 results = self.train_or_test() 69 return results 70 /usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/accelerator.py in train_or_test(self) 67 results = self.trainer.run_test() 68 else: ---> 69 results = self.trainer.train() 70 return results 71 /usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py in train(self) 519 with self.profiler.profile("run_training_epoch"): 520 # run train epoch --> 521 self.train_loop.run_training_epoch() 522 523 if self.max_steps and self.max_steps <= self.global_step: /usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py in run_training_epoch(self) 558 # ------------------------------------ 559 with self.trainer.profiler.profile("run_training_batch"): --> 560 batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx) 561 562 # when returning -1 from train_step, we end epoch early /usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py in run_training_batch(self, batch, batch_idx, dataloader_idx) 716 717 # optimizer step --> 718 self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure) 719 720 else: /usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py in optimizer_step(self, optimizer, opt_idx, batch_idx, train_step_and_backward_closure) 499 on_tpu=self.trainer.use_tpu and TPU_AVAILABLE, 500 using_native_amp=using_native_amp, --> 501 using_lbfgs=is_lbfgs, 502 ) 503 TypeError: optimizer_step() got an unexpected keyword argument 'on_tpu' How can I solve this issue? while trying to understand how to use T5 this is the code : class T5FineTuner(pl.LightningModule): def __init__(self, hparams): super(T5FineTuner, self).__init__() self.hparams = hparams self.model = T5ForConditionalGeneration.from_pretrained(hparams.model_name_or_path) self.tokenizer = T5Tokenizer.from_pretrained(hparams.tokenizer_name_or_path) def is_logger(self): #AttributeError: 'Trainer' object has no attribute 'proc_rank' #return self.trainer.proc_rank <= 0 # What is this? print("self.trainer.global_rank : ", self.trainer.global_rank ) #https://github.com/PyTorchLightning/pytorch-lightning/issues/2267 fixes the issue return self.trainer.global_rank <= 0 def forward( self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, lm_labels=None ): return self.model( input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, labels=lm_labels,# lm_labels=lm_labels, ) def _step(self, batch): lm_labels = batch["target_ids"] lm_labels[lm_labels[:, :] == self.tokenizer.pad_token_id] = -100 # why is this ? 
outputs = self( input_ids=batch["source_ids"], attention_mask=batch["source_mask"], decoder_attention_mask=batch['target_mask'], lm_labels=lm_labels ) loss = outputs[0] return loss def training_step(self, batch, batch_idx): loss = self._step(batch) tensorboard_logs = {"train_loss": loss} return {"loss": loss, "log": tensorboard_logs} def training_epoch_end(self, outputs): avg_train_loss = torch.stack([x["loss"] for x in outputs]).mean() tensorboard_logs = {"avg_train_loss": avg_train_loss} return {"avg_train_loss": avg_train_loss, "log": tensorboard_logs, 'progress_bar': tensorboard_logs} def validation_step(self, batch, batch_idx): loss = self._step(batch) return {"val_loss": loss} def validation_epoch_end(self, outputs): avg_loss = torch.stack([x["val_loss"] for x in outputs]).mean() tensorboard_logs = {"val_loss": avg_loss} return {"avg_val_loss": avg_loss, "log": tensorboard_logs, 'progress_bar': tensorboard_logs} def configure_optimizers(self): "Prepare optimizer and schedule (linear warmup and decay)" model = self.model no_decay = ["bias", "LayerNorm.weight"] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": self.hparams.weight_decay, }, { "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, ] optimizer = AdamW(optimizer_grouped_parameters, lr=self.hparams.learning_rate, eps=self.hparams.adam_epsilon) self.opt = optimizer return [optimizer] def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None): if self.trainer.use_tpu: print("optimizer is using TPU...") xm.optimizer_step(optimizer) else: optimizer.step() optimizer.zero_grad() self.lr_scheduler.step() def get_tqdm_dict(self): tqdm_dict = {"loss": "{:.3f}".format(self.trainer.avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]} return tqdm_dict def train_dataloader(self): train_dataset = get_dataset(tokenizer=self.tokenizer, type_path="train", args=self.hparams) dataloader = DataLoader(train_dataset, batch_size=self.hparams.train_batch_size, drop_last=True, shuffle=True, num_workers=4) t_total = ( (len(dataloader.dataset) // (self.hparams.train_batch_size * max(1, self.hparams.n_gpu))) // self.hparams.gradient_accumulation_steps * float(self.hparams.num_train_epochs) ) scheduler = get_linear_schedule_with_warmup( self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=t_total ) self.lr_scheduler = scheduler return dataloader def val_dataloader(self): val_dataset = get_dataset(tokenizer=self.tokenizer, type_path="val", args=self.hparams) return DataLoader(val_dataset, batch_size=self.hparams.eval_batch_size, `num_workers=4)` train_params = dict( accumulate_grad_batches=args.gradient_accumulation_steps, gpus=args.n_gpu, max_epochs=args.num_train_epochs, #callbacks=[early_stop_callback], precision= 16 if args.fp_16 else 32, amp_level=args.opt_level, gradient_clip_val=args.max_grad_norm, checkpoint_callback=checkpoint_callback, callbacks=[LoggingCallback(), early_stop_callback], ) trainer = pl.Trainer(**train_params) I am working on colab
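A hedged sketch of the optimizer_step override for recent 1.x releases, which is the likely cause of the error: the hook is now called with extra keyword arguments (on_tpu, using_native_amp, using_lbfgs) and a closure that must be forwarded to step(). Exact signatures vary between versions, so check the docs for the installed release:

def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu=False, using_native_amp=False,
                   using_lbfgs=False):
    # accept the new keyword arguments and forward the closure to step();
    # branch on on_tpu (xm.optimizer_step) if TPU support is still needed
    optimizer.step(closure=optimizer_closure)
    optimizer.zero_grad()
    self.lr_scheduler.step()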
Logging momentum in LearningRateMonitor
[ "bug", "help wanted" ]
πŸ› Bug Trying to log momentum in LearningRateMonitor crashes training. When log_momentum = False, everything runs smoothly. Please reproduce using the BoringModel https://colab.research.google.com/drive/1JiSrB4JYexxMAjPPdnbv4Q0ShqRH2YJQ?usp=sharing Environment CUDA: GPU: Tesla T4 available: True version: 10.1 Packages: numpy: 1.19.4 pyTorch_debug: True pyTorch_version: 1.7.0+cu101 pytorch-lightning: 1.1.2 tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.6.9 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Unexpected global_steps/ accelerator behaviour
[ "bug", "help wanted" ]
πŸ› Bug I observed some, imo, unexpected behaviour when experimenting with different combinations of gpus/ num_processes and accelerator trainer args. To Reproduce I use the following bug_report_model.py. I adapted the one from pl to take accelerator, gpus, and num_processes args, only use fake train_data, and train for ten epochs. When running it with the default args (accelerator=None, gpus=0, num_processes=1), trainer.global_step=640 as expected: .venv ❯ python -W ignore bug_report_model.py GPU available: True, used: False TPU available: None, using: 0 TPU cores Epoch 9: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 64/64 [00:00<00:00, 969.92it/s, loss=718] trainer.accelerator_backend=<pytorch_lightning.accelerators.cpu_accelerator.CPUAccelerator object at 0x7f8e39ea7dc0> trainer.gpus=0 trainer.num_processes=1 trainer.global_step=640 When running it with --num_processes 2 --accelerator ddp_cpu, trainer.global_step=0 which is not expected: .venv ❯ python -W ignore bug_report_model.py --num_processes 2 --accelerator ddp_cpu GPU available: True, used: False TPU available: None, using: 0 TPU cores initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/2 initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2 Epoch 9: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 32/32 [00:00<00:00, 305.89it/s, loss=45.2] trainer.accelerator_backend=<pytorch_lightning.accelerators.ddp_cpu_spawn_accelerator.DDPCPUSpawnAccelerator object at 0x7f325f6e6df0> trainer.gpus=0 trainer.num_processes=2 trainer.global_step=0 When running it with --gpus 2, trainer.accelerator_backend is DDPSpawnAccelerator and trainer.global_step=0 which is not expected: .venv ❯ python -W ignore bug_report_model.py --gpus 2 GPU available: True, used: True TPU available: None, using: 0 TPU cores LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3] initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2 initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/2 Epoch 9: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 32/32 [00:00<00:00, 284.00it/s, loss=0.159] trainer.accelerator_backend=<pytorch_lightning.accelerators.ddp_spawn_accelerator.DDPSpawnAccelerator object at 0x7fa00deafe50> trainer.gpus=2 trainer.num_processes=2 trainer.global_step=0 When running it with --gpus 2 --accelerator ddp, trainer.global_step=320 which makes sense due to ddp, but changes logging behaviour and max_steps calculations: .venv ❯ python -W ignore bug_report_model.py --gpus 2 --accelerator ddp GPU available: True, used: True TPU available: None, using: 0 TPU cores LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3] LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2,3] initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/2 initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2 Epoch 9: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 32/32 [00:00<00:00, 233.42it/s, loss=0.19] trainer.accelerator_backend=<pytorch_lightning.accelerators.ddp_accelerator.DDPAccelerator object at 0x7f1b88743370> trainer.gpus='0,1' trainer.num_processes=2 trainer.global_step=320 trainer.accelerator_backend=<pytorch_lightning.accelerators.ddp_accelerator.DDPAccelerator object at 0x7fb45ae2ee80> trainer.gpus=2 trainer.num_processes=2 trainer.global_step=320 Expected behavior Regarding 2.: trainer.global_step 
should be 320 or 640? Regarding 3.: trainer.accelerator_backend should be DDPAccelerator, as this is the recommended one for multi-GPU training? Regarding 4.: Would it be possible and/or make sense to have a total_steps that multiplies global_steps and gpus, to be used for max_steps and logging? Environment * CUDA: - GPU: - GeForce RTX 2080 Ti - GeForce RTX 2080 Ti - GeForce RTX 2080 Ti - GeForce RTX 2080 Ti - available: True - version: 10.2 * Packages: - numpy: 1.19.4 - pyTorch_debug: False - pyTorch_version: 1.7.1 - pytorch-lightning: 1.1.2 - tqdm: 4.55.0 * System: - OS: Linux - architecture: - 64bit - ELF - processor: x86_64 - python: 3.8.6 - version: #51~18.04.1-Ubuntu SMP Sat Sep 5 14:35:50 UTC 2020
Loading PL model from checkpoint with only weights
[ "question" ]
Hello, How to properly load the PL model when ModelCheckpoint has save_weights_only=True ?
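A hedged sketch: a checkpoint written with save_weights_only=True still contains the state_dict (it only drops optimizer/trainer state), so the weights can be restored into a freshly constructed module; MyLightningModule and its constructor arguments are placeholders:

import torch

model = MyLightningModule(**init_kwargs)              # rebuild with the original arguments
ckpt = torch.load("path/to/weights.ckpt", map_location="cpu")
model.load_state_dict(ckpt["state_dict"])             # weights-only checkpoints keep this key
model.eval()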
Error with `load_from_checkpoint`
[ "bug", "working as intended", "checkpointing", "priority: 1" ]
I am trying to fine-tune a language model and facing some issues with loading the model from the saved checkpoint. I have defined my own model which takes in an argparse.Namespace as the input, as is suggested in the documentation here. class FineTuner(pl.LightningModule): def __init__(self, hparams): # pass all the arguments super(FineTuner, self).__init__() self.hparams = hparams After I am done with the fine-tuning, I try to load the model from the saved checkpoint in a separate file as follows - from model_file import FineTuner model = FineTuner.load_from_checkpoint(ckpt_path) but it gives me the following error - AttributeError: 'dict' object has no attribute 'tokenizer_name' I tried to look into the source code and found that the error is happening at this line. I believe the error is occurring because the aforementioned line in the saving.py file expects a **dict, which is different from my model definition. It would be great if you could solve this issue as I have seen many others are also having similar issues with model loading. For others who might still be looking for the answer, here is the workaround that I am using right now (but would still like to use the cleaner official version) - ckpt_path = abs_path('./path/to/ckpt/file') checkpoint = torch.load(ckpt_path, map_location=lambda storage, loc: storage) model = my_model(checkpoint['hyper_parameters']) #(better solution than explicitly providing all the hyperparameters) model.load_state_dict(checkpoint['state_dict']) If one were to look at the source code, you would notice that they are kind of doing the same steps but with some extra stuff ;) But hey, who does not like one-liners. I am using version 1.0.4.
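A hedged variant that keeps the official loader: read the stored hyper_parameters once, rebuild the Namespace the constructor expects, and pass it as an override (load_from_checkpoint merges extra keyword arguments into the constructor call; whether the override takes precedence may depend on the exact version, so treat this as a sketch):

import argparse
import torch

from model_file import FineTuner

stored = torch.load(ckpt_path, map_location="cpu")["hyper_parameters"]
model = FineTuner.load_from_checkpoint(ckpt_path, hparams=argparse.Namespace(**stored))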
Add docs to all Trainer attributes
[ "docs" ]
Find all attributes under: dir(self.trainer)
Use scheduler.get_last_lr() instead of manually searching for optimizers.param_groups
[ "feature", "help wanted" ]
πŸ› Bug / Feature Request I am not sure whether to classify this issue as a feature request or bug, but in your codebase, you get the learning rate from schedulers as scheduler.optimizer.param_groups[0]['lr']. However, PyTorch provides the get_last_lr() method for this. I created my own scheduler, which allows me to combine multiple schedulers together (eg. burn-in LR for the first few steps, and then step down LR later during training). As this scheduler holds multiple different schedulers itself, there is no scheduler.optimizer argument and thus I cannot use this with PyTorch Lightning. I did however implement the get_last_lr() method, which gets the LR from the underlying schedulers, so if you could modify the PyTorch Lightning code to use that method, it should all work! Problems Obviously, nothing is ever as straight forward with the PyTorch scheduler API, which means that there might be problems with this solution. One small issue I found is that ReduceLROnPlateau is behaving differently and does not implement the get_last_lr() method. (The PyTorch team should really look into unifying their API, but that should be discussed there...) If you look at GH-5364, which I made on top of the release/1.2-dev branch, you will see that for the ReduceLROnPlateau scheduler, I still use the old method of finding the LR through the scheduler.optimizer object. However, for any other scheduler, we use the get_last_lr() method. I could adapt the code to be even more defensive and check whether there is a get_last_lr() method and only use it if it is available, but I don't know whether that is necessary, as all other official schedulers have the same base Scheduler class which implements this method. I send in a Draft PR with the necessary changes. Kind Regards, 0phoff
Refactor: hpc_load and entangled logics in CheckpointConnector
[ "feature", "help wanted", "refactor", "checkpointing" ]
πŸš€ Feature Refactor CheckpointConnector by reducing duplicated parts and separating different functionality. Motivation CheckpointConnector can be refactored. hpc_load Almost all hpc_load code is duplicated with restore (normal restore). Entangled logics top-level method restore_weights have several functionality/responsibility. prevent OOM (CUDA cache clear) apply checkpoint to state control parallel environment (barrier) Different responsibility should be in different function. Pitch hpc_load: delete by using restore restore_weights the name do not reflect responsibility -> change to attempt_to_restore multiple responsibility -> separate "applying checkpoint" parts as attempt_to_apply_checkpoint cc @Borda @justusschock @awaelchli @akihironitta @rohitgr7 @ananthsub @ninginthecloud
LSTM training with hiddens in training_step()
[ "question", "waiting on author" ]
I am trying an LSTM network where I require the hidden value in each training step to be passed over to the next training step until the end of the epoch. At the start of a new epoch we reset by initializing the hidden state to zeros. Following is how I do this: def training_step(self, batch, batch_nb, hiddens): ... if batch_nb == 0: #init only in new epochs hiddens = self.model.init_hidden(self.hparams.batch_size) out, hiddens = self.forward(posts, hiddens) ... loss = self.criterion(out, targets) return {'loss': loss, 'hiddens': hiddens} As the docs suggest, I've added the truncated_bptt_steps argument to Trainer, but it does not work as expected. It gives the following exception: AssertionError: Batch time dimension length is ambiguous I've tried setting truncated_bptt_steps=2 and other values, but whatever value I set, it results in the same exception. How does one train LSTMs like this with PTL? What am I doing wrong here? PTL version 1.1.2
trainer.tune() fails when Trainer.__init__(auto_lr_find=True, auto_scale_batch_size=True)
[ "bug", "help wanted", "trainer: tune", "priority: 1" ]
πŸ› Bug trainer.tune() works just fine when either Trainer.__init__(auto_lr_find=False, auto_scale_batch_size=True) or Trainer.__init__(auto_lr_find=True, auto_scale_batch_size=False) However, trainer.tune() fails when Trainer.__init__(auto_lr_find=True, auto_scale_batch_size=True) LR finder stopped early due to diverging loss. INFO lr_finder.py:186:lr_find(): LR finder stopped early due to diverging loss. /home/vin/.local/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:49: UserWarning: You're resuming from a checkpoint that ended mid-epoch. This can cause unreliable results if further training is done, consider using an end of epoch checkpoint. warnings.warn(*args, **kwargs) Failed to compute suggesting for `lr`. There might not be enough points. Traceback (most recent call last): File "/home/vin/.local/lib/python3.8/site-packages/pytorch_lightning/tuner/lr_finder.py", line 353, in suggestion min_grad = np.gradient(loss).argmin() File "<__array_function__ internals>", line 5, in gradient File "/home/vin/.local/lib/python3.8/site-packages/numpy/lib/function_base.py", line 1052, in gradient raise ValueError( ValueError: Shape of array too small to calculate a numerical gradient, at least (edge_order + 1) elements are required. ERROR lr_finder.py:357:suggestion(): Failed to compute suggesting for `lr`. There might not be enough points. Please reproduce using the BoringModel To Reproduce Use following BoringModel and post here In your own environment trainer.tune() should fail when Trainer.init(auto_lr_find=True, auto_scale_batch_size=True). However, if you want to reproduce the bug from my code then go to Github, and fork from https://github.com/vineetk1/conversational-transaction-bot Then run the following on commandline: python3 ctbMain.py input_param_files/distilgpt2_params Expected behavior trainer.tune() should find the Batch-Size and the initial Learning-Rate Environment Note: Bugs with code are solved faster ! Colab Notebook should be made public ! IDE: Please, use our python bug_report_model.py template. Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually). You can get the script and run it with: wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py # For security purposes, please check the contents of collect_env_details.py before running it. python collect_env_details.py PyTorch Version (e.g., 1.0): 1.6.0 OS (e.g., Linux): Ubuntu-Linux How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): Python version: 3.8.5 CUDA/cuDNN version: 11.0 GPU models and configuration: Nvidia GeForce GTX 1080 Any other relevant information: Additional context
How to implement a custom Metric based on sklearn's functions
[ "question" ]
❓ Questions and Help What is your question? How to implement a custom Metric based on sklearn's functions? Code I tried to implement a ROCAUC Metric with help of sklearn's roc_auc_score, because it supports multilabel classification. from pytorch_lightning.metrics import Metric from sklearn.metrics import roc_auc_score from typing import Optional, Any import torch class ROCAUC(Metric): def __init__( self, average='macro', compute_on_step: bool = True, dist_sync_on_step: bool = False, process_group: Optional[Any] = None, ): super().__init__( compute_on_step=compute_on_step, dist_sync_on_step=dist_sync_on_step, process_group=process_group, ) self.average = average self.add_state("preds", default=[], dist_reduce_fx=None) self.add_state("target", default=[], dist_reduce_fx=None) def update(self, preds: torch.Tensor, target: torch.Tensor): """ Update state with predictions and targets. Args: preds: Predictions from model target: Ground truth values """ self.preds.append(preds) self.target.append(target) def compute(self): preds = torch.cat(self.preds, dim=0).cpu().numpy() target = torch.cat(self.target, dim=0).cpu().numpy().astype(int) score = roc_auc_score(target, preds, average=self.average) return torch.tensor([score], dtype=torch.float32) In the training procedure, I log the ROCAUC score along with other metrics at the end of validation epoch. # this is the code in LightningModule def validation_epoch_end(self, outputs): for name, metric in self.metrics.items(): value = metric.compute() self.log( name, value, prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True) It worked fine with single GPU training (without ddp). However, when using ddp accelerator and setting move_metrics_to_cpu = True . It threw the following error: Traceback (most recent call last): File "train.py", line 75, in <module> main() File "train.py", line 71, in main train_classifier(cfg, train_loader, val_loader, work_dir, seed, **args) File "/home/sandylaker/PycharmProjects/xray-classification/xray_cls/apis/train.py", line 40, in train_classifier trainer.fit(lit_model, train_loader, val_loader) File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 470, in fit results = self.accelerator_backend.train() File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 152, in train results = self.ddp_train(process_idx=self.task_idx, model=model) File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 307, in ddp_train results = self.train_or_test() File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 69, in train_or_test results = self.trainer.train() File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 492, in train self.run_sanity_check(self.get_model()) File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 690, in run_sanity_check _, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches) File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 622, in run_evaluation deprecated_eval_results = self.evaluation_loop.evaluation_epoch_end() File 
"/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 208, in evaluation_epoch_end deprecated_results = self.__run_eval_epoch_end(self.num_dataloaders, using_eval_result) File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 246, in __run_eval_epoch_end eval_results = model.validation_epoch_end(eval_results) File "/home/sandylaker/PycharmProjects/xray-classification/xray_cls/apis/lit_model.py", line 97, in validation_epoch_end self.log( File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 266, in log self._results.log( File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 146, in log value = sync_fn(value, group=sync_dist_group, reduce_op=sync_dist_op) File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 344, in sync_tensor return sync_ddp_if_available(tensor, group, reduce_op) File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py", line 124, in sync_ddp_if_available return sync_ddp(result, group=group, reduce_op=reduce_op) File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py", line 155, in sync_ddp torch.distributed.all_reduce(result, op=op, group=group, async_op=False) File "/home/sandylaker/anaconda3/envs/ml/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 954, in all_reduce work = _default_pg.allreduce([tensor], opts) RuntimeError: Tensors must be CUDA and dense It seemed to be caused by sync_dist=True in the log function. When I modified the last line in ROCAUC's compute function: return torch.tensor([score], dtype=torch.float32, device=self.preds[0].device) the error is solved. When we wrap the sklearn's metric function into pytorch-lightning's Metric, we have to first convert Tensor to numpy.ndarray, then we have to convert numpy.ndarray to Tensor after computing the metric value. So my question is: Is there a better way than manually moving the Tensor to GPU like this? What's your environment? OS: Ubuntu 20.04 Packaging: pip Version: 1.1.2
pl_examples reinforce_learn_qnet wrong argument name and description in argparse
[ "bug", "help wanted" ]
πŸ› Bug In refinforce_learn_qnet.py, the argument parser adds two arguments that the model doesn't take input to: max_episode_reward and warm_start_size. It adds both warm_start_size and warm_start_steps arguments instead of just warm_start_steps. Can be seen here. warm_start_steps has a wrong description as well (seems to be copied from max_episode_reward). Its description is supposed to be the description of warm_start_size. Can be verified with its implementation in pl bolts
Value interpolation with hydra composition
[ "bug", "help wanted", "priority: 1" ]
I am using hydra composition with the following structure: β”œβ”€β”€ configs β”‚Β Β  β”œβ”€β”€ config.yaml β”‚Β Β  β”œβ”€β”€ data β”‚Β Β  β”‚Β Β  β”œβ”€β”€ dataset_01.yaml β”‚Β Β  β”‚Β Β  └── dataset_02.yaml β”‚Β Β  └── model β”‚Β Β  β”œβ”€β”€ bert.yaml β”‚Β Β  └── gpt.yaml config.yaml defaults: - model: bert - data: dataset_01 ... data/dataset_01.yaml # @package _group_ name: "dataset_01" train: path: "../resources/datasets/dataset_01/train.jsonl" num_samples: 1257391 test: path: "../resources/datasets/dataset_01/test.jsonl" num_samples: 71892 val: path: "../resources/datasets/dataset_01/val.jsonl" num_samples: 73805 model/bert.yaml # @package _group_ name: "bert" encoder: "source.encoder.BertEncoder.BertEncoder" encoder_hparams: architecture: "bert-base-uncased" lr: 1e-7 tokenizer: architecture: "bert-base-uncased" predictions: path: "../resources/predictions/bert_${data.name}_predictions.pt" -entry point: @hydra.main(config_path="configs/", config_name="config.yaml") def perform_tasks(hparams): model = MyModel(hparams.model) if __name__ == '__main__': perform_tasks() However, I am having the following error when interpolating the dataset.name: Traceback (most recent call last): File "xCoFormer.py", line 153, in perform_tasks fit(hparams) File "xCoFormer.py", line 88, in fit trainer.fit(model, datamodule=dm) File "/home/celso/projects/venvs/xCoFormer/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 470, in fit results = self.accelerator_backend.train() File "/home/celso/projects/venvs/xCoFormer/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 65, in train self.trainer.train_loop.setup_training(model) File "/home/celso/projects/venvs/xCoFormer/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 147, in setup_training self.trainer.logger.save() File "/home/celso/projects/venvs/xCoFormer/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 39, in wrapped_fn return fn(*args, **kwargs) File "/home/celso/projects/venvs/xCoFormer/lib/python3.7/site-packages/pytorch_lightning/loggers/tensorboard.py", line 221, in save save_hparams_to_yaml(hparams_file, self.hparams) File "/home/celso/projects/venvs/xCoFormer/lib/python3.7/site-packages/pytorch_lightning/core/saving.py", line 366, in save_hparams_to_yaml OmegaConf.save(hparams, fp, resolve=True) omegaconf.errors.ConfigKeyError: str interpolation key 'data.name' not found This happening only in my PL project. In another project, interpolations like that resolves normally. Is there any way around this error?
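One possible workaround on the user side (a sketch, not a fix in Lightning itself): resolve every interpolation while the full config, including data, is still reachable, and only then hand the model sub-config to the module. Once hparams.model is detached from its parent, ${data.name} has nothing to resolve against when the logger saves the hparams.

```python
import hydra
from omegaconf import OmegaConf

@hydra.main(config_path="configs/", config_name="config.yaml")
def perform_tasks(hparams):
    # materialise every ${...} interpolation while `data` is still in scope
    resolved = OmegaConf.create(OmegaConf.to_container(hparams, resolve=True))
    model = MyModel(resolved.model)  # the sub-config now contains plain values only

if __name__ == '__main__':
    perform_tasks()
```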
adding proximal policy optimization template to pl_examples
[ "feature", "help wanted" ]
πŸš€ Feature Add an implementation of proximal policy optimization (PPO) to pl_examples. Motivation pl_examples currently features only one reinforcement learning algorithm: DQN. It would be good to have a template for a policy gradient method as well. I have implemented PPO in Lightning, and it was mentioned on Slack that this could be a useful addition to pl_examples.
pre-commit isort hook failure.
[ "duplicate", "feature" ]
When running pre-commit run isort --all-files, I get the following: isort....................................................................Failed files were modified by this hook Fixing /home/arnaud/dev/pytorch-lightning/tests/base/models.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/callbacks/model_checkpoint.py Fixing /home/arnaud/dev/pytorch-lightning/tests/utilities/test_all_gather_grad.py Fixing /home/arnaud/dev/pytorch-lightning/tests/loggers/test_tensorboard.py Fixing /home/arnaud/dev/pytorch-lightning/tests/loggers/test_wandb.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/classification/test_precision_recall.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/data_flow/test_eval_loop_flow_1_0.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/basic_examples/simple_image_classifier.py Fixing /home/arnaud/dev/pytorch-lightning/tests/plugins/test_sharded_plugin.py Fixing /home/arnaud/dev/pytorch-lightning/tests/checkpointing/test_checkpoint_callback_frequency.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/model_train_dataloaders.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/profiler/init.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/regression/mean_squared_error.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/plugins/precision_plugin.py Fixing /home/arnaud/dev/pytorch-lightning/tests/plugins/test_rpc_plugin.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/precision_recall_curve.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/functional/test_reduction.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/training_loop.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/utils.py Fixing /home/arnaud/dev/pytorch-lightning/tests/callbacks/test_early_stopping.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/tuner/lr_finder.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/test_dataloaders.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/utilities/argparse_utils.py Fixing /home/arnaud/dev/pytorch-lightning/tests/callbacks/test_callback_hook_outputs.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/connectors/checkpoint_connector.py Fixing /home/arnaud/dev/pytorch-lightning/tests/backends/test_tpu_backend.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/loggers/init.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/utilities/device_parser.py Fixing /home/arnaud/dev/pytorch-lightning/tests/utilities/test_apply_func.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/test_metric.py Fixing /home/arnaud/dev/pytorch-lightning/setup.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/model_utilities.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/test_config_validator.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/cpu_accelerator.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/regression/test_ssim.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/core/optimizer.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/test_trainer_tricks.py Fixing /home/arnaud/dev/pytorch-lightning/tests/loggers/test_neptune.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/ssim.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/domain_templates/semantic_segmentation.py Fixing /home/arnaud/dev/pytorch-lightning/tests/loggers/test_csv.py Fixing 
/home/arnaud/dev/pytorch-lightning/tests/trainer/logging/test_logger_connector.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/classification/confusion_matrix.py Fixing /home/arnaud/dev/pytorch-lightning/tests/loggers/test_base.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/classification/roc.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/init.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/dynamic_args/test_multiple_eval_dataloaders.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/basic_examples/autoencoder.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/init.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/basic_examples/mnist_datamodule.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/profiler/profilers.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/regression/test_mean_error.py Fixing /home/arnaud/dev/pytorch-lightning/benchmarks/test_basic_parity.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/regression/mean_absolute_error.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/optimization/test_multiple_optimizers.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/connectors/training_trick_connector.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/cluster_environments/slurm_environment.py Fixing /home/arnaud/dev/pytorch-lightning/tests/utilities/test_apply_func_torchtext.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/utilities/model_utils.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/classification/precision_recall.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/simple_model.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/regression/explained_variance.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/test_metric_lightning.py Fixing /home/arnaud/dev/pytorch-lightning/tests/backends/test_dp.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/connectors/env_vars_connector.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/model_template.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/data_loading.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/tuner/batch_size_scaling.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/properties/log_dir.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/plugins/ddp_plugin.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/logging_tests/test_eval_loop_logging_1_0.py Fixing /home/arnaud/dev/pytorch-lightning/tests/deprecated_api/test_remove_1-3.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/optimizers.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/overrides/data_parallel.py Fixing /home/arnaud/dev/pytorch-lightning/tests/deprecated_api/test_remove_1-2.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/ddp_spawn_accelerator.py Fixing /home/arnaud/dev/pytorch-lightning/tests/loggers/test_comet.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/regression/init.py Fixing /home/arnaud/dev/pytorch-lightning/tests/callbacks/test_lr_monitor.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/functional/test_nlp.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/test_trainer_cli.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/confusion_matrix.py Fixing 
/home/arnaud/dev/pytorch-lightning/tests/core/test_memory.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/test_examples.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/init.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/classification/accuracy.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/model_test_dataloaders.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/utilities/upgrade_checkpoint.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/mean_squared_error.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/test_horovod.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/domain_templates/computer_vision_fine_tuning.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/trainer.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/classification/test_precision_recall_curve.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/classification/test_accuracy.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/test_tpu.py Fixing /home/arnaud/dev/pytorch-lightning/benchmarks/generate_comparison.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/test_states.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/ddp_cpu_spawn_accelerator.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/basic_examples/dali_image_classifier.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/domain_templates/generative_adversarial_net.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/plugins/ddp_sequential_plugin.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/training_tricks.py Fixing /home/arnaud/dev/pytorch-lightning/tests/callbacks/test_progress_bar.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/model_hooks/test_model_hooks.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/core/step_result.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/callbacks/progress.py Fixing /home/arnaud/dev/pytorch-lightning/tests/checkpointing/test_trainer_checkpoint.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/callbacks/early_stopping.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/model_valid_steps.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/classification/test_inputs.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/test_torchscript.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/utilities/init.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/core/lightning.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/accelerator_connector.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/loggers/neptune.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/classification.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/connectors/data_connector.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/classification/average_precision.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/flags/test_fast_dev_run.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/average_precision.py Fixing /home/arnaud/dev/pytorch-lightning/tests/backends/ddp_model.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/model_test_steps.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/explained_variance.py Fixing 
/home/arnaud/dev/pytorch-lightning/tests/trainer/dynamic_args/test_multiple_optimizers.py Fixing /home/arnaud/dev/pytorch-lightning/tests/utilities/test_xla_device_utils.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/datasets.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/mean_squared_log_error.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/test_restore.py Fixing /home/arnaud/dev/pytorch-lightning/tests/core/test_lightning_module.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/test_cpu.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/logging_tests/test_train_loop_logging_1_0.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/loggers/base.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/classification/helpers.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/functional/test_classification.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/plugins/rpc_plugin.py Fixing /home/arnaud/dev/pytorch-lightning/tests/core/test_lightning_optimizer.py Fixing /home/arnaud/dev/pytorch-lightning/tests/callbacks/test_callbacks.py Fixing /home/arnaud/dev/pytorch-lightning/benchmarks/test_sharded_parity.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/utilities/parsing.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/develop_utils.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/logging.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/init.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/test_gpu.py Fixing /home/arnaud/dev/pytorch-lightning/docs/source/conf.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/flags/test_val_check_interval.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/deprecated_api.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/deterministic_model.py Fixing /home/arnaud/dev/pytorch-lightning/tests/callbacks/test_gpu_stats_monitor.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/init.py Fixing /home/arnaud/dev/pytorch-lightning/tests/core/test_decorators.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/test_sync_batchnorm.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/connectors/precision_connector.py Fixing /home/arnaud/dev/pytorch-lightning/tests/utilities/test_dtype_device_mixin.py Fixing /home/arnaud/dev/pytorch-lightning/tests/plugins/test_amp_plugin.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/test_optimizers.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/tuner/auto_gpu_select.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/test_amp.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/reduction.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/classification/test_confusion_matrix.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/test_hooks.py Fixing /home/arnaud/dev/pytorch-lightning/tests/loggers/test_all.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/utilities/debugging.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/classification/f_beta.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/evaluation_loop.py Fixing /home/arnaud/dev/pytorch-lightning/tests/utilities/test_upgrade_checkpoint.py Fixing 
/home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/metric.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/test_trainer.py Fixing /home/arnaud/dev/pytorch-lightning/tests/conftest.py Fixing /home/arnaud/dev/pytorch-lightning/tests/checkpointing/test_model_checkpoint.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/test_ddp.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/ddp2_accelerator.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/connectors/profiler_connector.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/utils.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/develop_pipelines.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/core/saving.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/properties/test_get_model.py Fixing /home/arnaud/dev/pytorch-lightning/tests/loggers/test_mlflow.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/distributed/dist.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/tpu_accelerator.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/optimization/test_parity_manual_optimization.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/classification/init.py Fixing /home/arnaud/dev/pytorch-lightning/tests/core/test_results.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/loggers/comet.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/loggers/csv_logs.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/callbacks/init.py Fixing /home/arnaud/dev/pytorch-lightning/tests/core/test_metric_result_integration.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/regression/test_psnr.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/mean_absolute_error.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/cluster_environments/torchelastic_environment.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/basic_examples/conv_sequential_example.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/classification/inputs.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py Fixing /home/arnaud/dev/pytorch-lightning/tests/checkpointing/test_torch_saving.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/optimization/test_manual_optimization.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/loggers/mlflow.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/data_flow/test_train_loop_flow_dict_1_0.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/loggers/wandb.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/functional/test_self_supervised.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/psnr.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/plugins/sharded_native_amp_plugin.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/data/horovod/train_default_model.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/utilities/data.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/test_supporters.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/gpu_accelerator.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/core/memory.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/test_lr_finder.py Fixing /home/arnaud/dev/pytorch-lightning/tests/tuner/test_auto_gpu_select.py 
Fixing /home/arnaud/dev/pytorch-lightning/tests/core/test_datamodules.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/classification/precision_recall_curve.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/legacy_deprecate_flow_log_tests/test_trainer_steps_scalar_return.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/plugins/native_amp.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/roc.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/legacy_deprecate_flow_log_tests/test_eval_loop_dict_return.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/domain_templates/imagenet.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/model_valid_dataloaders.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/bug_report_model.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/flags/test_overfit_batches.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/data_flow/test_train_loop_flow_scalar_1_0.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/test_hparams.py Fixing /home/arnaud/dev/pytorch-lightning/tests/plugins/test_apex_plugin.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/ddp_hpc_accelerator.py Fixing /home/arnaud/dev/pytorch-lightning/tests/backends/test_ddp_spawn.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/horovod_accelerator.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/functional/f_beta.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/classification/test_roc.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/regression/mean_squared_log_error.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/regression/test_explained_variance.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/plugins/apex.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/ddp_accelerator.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/basic_examples/backbone_image_classifier.py Fixing /home/arnaud/dev/pytorch-lightning/tests/plugins/test_ddp_sequential_plugin.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/trainer/connectors/slurm_connector.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/accelerator.py Fixing /home/arnaud/dev/pytorch-lightning/tests/backends/test_accelerator_connector.py Fixing /home/arnaud/dev/pytorch-lightning/pl_examples/domain_templates/reinforce_learn_Qnet.py Fixing /home/arnaud/dev/pytorch-lightning/tests/backends/test_ddp.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/classification/test_f_beta.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/model_train_steps.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/utilities/device_dtype_mixin.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/warnings_tests/test_flow_warnings.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/boring_model.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/accelerators/dp_accelerator.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/utilities/seed.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/regression/psnr.py Fixing /home/arnaud/dev/pytorch-lightning/tests/collect_env_details.py Fixing /home/arnaud/dev/pytorch-lightning/tests/base/test_datasets.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/optimization/test_parity_automatic_optimization.py Fixing 
/home/arnaud/dev/pytorch-lightning/pytorch_lightning/loggers/tensorboard.py Fixing /home/arnaud/dev/pytorch-lightning/tests/models/test_onnx.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/plugins/sharded_plugin.py Fixing /home/arnaud/dev/pytorch-lightning/tests/metrics/classification/test_average_precision.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/test_trainer_test_loop.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/utilities/distributed.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/tuner/tuning.py Fixing /home/arnaud/dev/pytorch-lightning/pytorch_lightning/metrics/regression/ssim.py Fixing /home/arnaud/dev/pytorch-lightning/tests/trainer/logging_tests/test_distributed_logging.py
W&B logger not working as expected with accumulate_grad_batches>1
[ "bug", "help wanted", "priority: 0", "logger" ]
πŸ› Bug When logging inside training step to wandb logger and using accumulate_grad_batches > 1 the behavior is not as expected. Similar issue as in #4304 for Tensorboard (which was closed and the fix was merged in #4738). First half with accumulate_grad_batches == 1, second with accumulate_grad_batches == 8: Moreover, the logging steps are accumulate_grad_batches * number_of_backward_passes and so when using LearningRateMonitor, the logger refuses to log anything (similar to #4811) saying: wandb: WARNING Step must only increase in log calls. Step 499 < 2497; dropping {'lr-AdamW': 4.9900000000000005e-06}. To Reproduce Sorry for not using the BoringModel, updated the example from #4304: import os import torch from torch.nn import functional as F from torch.utils.data import DataLoader, random_split import pytorch_lightning as pl from pytorch_lightning.loggers import WandbLogger from torchvision.datasets.mnist import MNIST from torchvision import transforms class LitClassifier(pl.LightningModule): def __init__(self, hidden_dim=128, learning_rate=1e-3): super().__init__() self.save_hyperparameters() self.l1 = torch.nn.Linear(28 * 28, self.hparams.hidden_dim) self.l2 = torch.nn.Linear(self.hparams.hidden_dim, 10) def forward(self, x): x = x.view(x.size(0), -1) x = torch.relu(self.l1(x)) x = torch.relu(self.l2(x)) return x def training_step(self, batch, batch_idx): x, y = batch y_hat = self(x) loss = F.cross_entropy(y_hat, y) self.log("train_loss",loss) return loss def validation_step(self, batch, batch_idx): x, y = batch y_hat = self(x) loss = F.cross_entropy(y_hat, y) def configure_optimizers(self): return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate) def run_test(accumulate_grad_batches, batch_size, num_workers=4): dataset = MNIST('', train=True, download=True, transform=transforms.ToTensor()) mnist_train, mnist_val = random_split(dataset, [55000, 5000]) train_loader = DataLoader(mnist_train,batch_size) val_loader = DataLoader(mnist_val,batch_size) model = LitClassifier() trainer = pl.Trainer( logger=WandbLogger(name="bug", project='.....', save_dir=".", log_model=False), accumulate_grad_batches=accumulate_grad_batches, max_epochs=2 ) trainer.fit(model, train_loader, val_loader) run_test(1,32) run_test(8,32) Expected behavior Take a mean (or whatever) of the values logged at the same step and not at every forward pass. Environment pytorch-lightning: 1.1.2 PyTorch: 1.7.1 OS: Linux How you installed PyTorch: pip Python version: 3.7.0 wandb: 0.10.12
LightningModule models using `setup` don't load checkpoints properly.
[ "bug", "discussion", "design", "checkpointing" ]
πŸ› Bug Using setup methods within a LightningModule does not allow proper checkpoint loading using the load_from_checkpoint method. Furthermore, even if setup and torch.load are used to manually load a checkpoint, the trainer seems to always invoke setup with fit is called, thereby overwriting the loaded parameters. There are ways to get around these issues, but they don't fit nicely into the recommended PTL style standards, as I interpreted them. This issue has been filed as a bug on the recommendation in this forum post. Please reproduce using the BoringModel I have reproduced this bug with a simple model in this Colab Notebook. https://colab.research.google.com/drive/1weAyerRPpe5jQOD0s-yzymmF2czFx18Q?usp=sharing To Reproduce Use the above Colab notebook to reproduce, noting that cell [5] is expected to fail. Otherwise Notebook runs top-to-bottom. Expected behavior Ideally, code in cell [5] should not error. Additionally, trainer should not reset model Parameters with fit is called (e.g., in cell [7]). Environment This Colab Notebook includes a pip install of PTL in the top cell. Otherwise, it uses the default Colab environment settings. Additional context Please refer to this forum post for more information.
allow val_check_interval to be larger than training dataset size
[ "feature", "help wanted", "priority: 1", "priority: 2" ]
πŸš€ Feature Allow val_check_interval to be larger than the number of training batches in one epoch. Motivation I am using a small dataset, so instead of specifying max_epochs in Trainer, I want to use max_steps and evaluate every val_check_interval steps. But when val_check_interval is larger than the number of batches in the training set, there is an error like this: `val_check_interval` (100) must be less than or equal to the number of the training batches (24). If you want to disable validation set `limit_val_batches` to 0.0 instead. Pitch val_check_interval shouldn't be limited by the number of training batches in one epoch; this should be a warning, not an error. Alternatives I am currently using a wrapper to make the dataset an iterable dataset, which allows me to do this. Additional context
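For reference, this is roughly the configuration that currently raises the error quoted above (the numbers are made up): a step-based run whose validation interval is longer than one pass over the training data.

```python
import pytorch_lightning as pl

# e.g. 24 training batches per epoch, but validation wanted every 100 optimisation steps
trainer = pl.Trainer(
    max_steps=10000,          # train by step count rather than epoch count
    val_check_interval=100,   # > number of training batches -> raises the error quoted above
)
```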
Remove mypy from pre-commit hooks
[ "feature", "won't fix" ]
As of now, pre-commit hooks are not used by developers. I see 2 reasons for dropping mypy hooks at this point: mypy command line use setup.cfg as configuration file. It is fine when used alone to get configuration and list of files. But when running with pre-commit, pre-commit add at files in the command line, which overrides the list of files with provided configuration. So when running, pre-commit run --all-files mypy runs on all python files (not the one given in setup.cfg). Since in general, you may want to add pre-commit checks in your CI to enforce pre-commit usage, it will dramatically fail with the following errors: mypy.....................................................................Failed hook id: mypy exit code: 1 docs/source/conf.py:28: error: Module has no attribute "LIGHTNING_SETUP" [attr-defined] docs/source/conf.py:277: error: Function is missing a type annotation [no-untyped-def] docs/source/conf.py:295: error: Function is missing a type annotation [no-untyped-def] setup.py:24: error: Name 'builtins' already defined (by an import) [no-redef] setup.py:29: error: Module has no attribute "LIGHTNING_SETUP" [attr-defined] tests/models/data/horovod/train_default_model.py:25: error: Item "None" of "Optional[str]" has no attribute "split" [union-attr] tests/models/data/horovod/train_default_model.py:46: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:36: error: Function is missing a type annotation for one or more arguments [no-untyped-def] tests/trainer/logging/test_logger_connector.py:52: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:64: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:74: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:110: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:123: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:126: error: Function is missing a return type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:133: error: Function is missing a return type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:133: note: Use "-> None" if function does not return a value tests/trainer/logging/test_logger_connector.py:139: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:160: error: Function is missing a return type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:168: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:205: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:216: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:226: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:230: error: Function is missing a return type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:230: note: Use "-> None" if function does not return a value tests/trainer/logging/test_logger_connector.py:234: error: Function is missing a return type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:271: 
error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:373: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:379: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:384: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:388: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:392: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:397: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:401: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:406: error: Function is missing a type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:410: error: Function is missing a return type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:413: error: Function is missing a return type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:416: error: Function is missing a return type annotation [no-untyped-def] tests/trainer/logging/test_logger_connector.py:433: error: Function is missing a type annotation [no-untyped-def] Found 36 errors in 4 files (checked 372 source files) In my humble opinion, you want to keep git pre-commit hooks light. Formatting, removing trailing spaces, sorting import are definitively OK, but running static analysis, unit tests can last long and can discourage people from contributing to the projects. It is rather a blocking experience rather than helping one. So I would recommend removing mypy hooks from pre-commit but makes sure it is part of the script to be used before a submission or executed by the CI. What do you think?
Document exceptions
[ "good first issue", "docs" ]
πŸ“š Documentation This is only a suggestion, but I find it useful to have what exceptions functions/classes can raise in the docs, so I'd like to suggest adding the Raises: section to public functions/classes. Progress dirname $(grep -rl "raise " pytorch_lightning)|sort|uniq|awk '{print "- [ ] "$1}' pytorch_lightning/accelerators (#9558) pytorch_lightning/callbacks pytorch_lightning/callbacks/progress pytorch_lightning/core pytorch_lightning/core/mixins pytorch_lightning/loggers pytorch_lightning/loops pytorch_lightning/loops/dataloader pytorch_lightning/loops/epoch pytorch_lightning/loops/optimization pytorch_lightning/overrides pytorch_lightning/plugins pytorch_lightning/plugins/environments pytorch_lightning/plugins/io pytorch_lightning/plugins/precision pytorch_lightning/plugins/training_type pytorch_lightning/profiler pytorch_lightning/trainer pytorch_lightning/trainer/connectors pytorch_lightning/trainer/connectors/logger_connector pytorch_lightning/tuner pytorch_lightning/utilities
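A sketch of the suggested convention with a Google-style docstring (the function and its exceptions below are made up for illustration):

```python
def load_checkpoint(path: str) -> dict:
    """Load a checkpoint from ``path``.

    Args:
        path: Filesystem path of the checkpoint file.

    Returns:
        The deserialized checkpoint dictionary.

    Raises:
        FileNotFoundError: If ``path`` does not exist.
        ValueError: If the file is not a valid Lightning checkpoint.
    """
    ...
```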
`on_test_end` is not called in test
[ "bug", "help wanted", "design", "priority: 1" ]
πŸ› Bug I'm running a model with fit and test called. However I noticed that on_test_end is not called at the end of test. Please reproduce using the BoringModel https://colab.research.google.com/drive/1j5J8TXAIqoFCqc-3WnAPMox8amNsRDjE?usp=sharing To Reproduce Use following BoringModel and post here See the code addition at the bottom of this cell, which prints when the method is hit. Then check the output here. Notice how ##### hit on_test_end ##### is missing, whereas ##### hit on_train_end ##### is present and just fine. Expected behavior When running the trainer.test step, on_test_end should be called at the end of test. Environment See the shared Colab notebook. Below are pasted form the notebook: * CUDA: - GPU: - Tesla T4 - available: True - version: 10.1 * Packages: - numpy: 1.19.4 - pyTorch_debug: True - pyTorch_version: 1.7.0+cu101 - pytorch-lightning: 1.1.3 - tqdm: 4.41.1 * System: - OS: Linux - architecture: - 64bit - - processor: x86_64 - python: 3.6.9 - version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Remove unused import in `accelerators`
[ "feature", "help wanted", "refactor" ]
πŸš€ Feature pytorch_lightning/accelerators/*.py There are many unused imports in pytorch_lightning/accelerators. These should be removed. Motivation Pitch Alternatives Additional context
Validation step is ignored when using DataModule
[ "question" ]
What is your question? Hi, guys! I created my own DataModule and loading it to the trainer. However, it appears that the "fit" is skipping the validation step. How can I ensure that the code runs through the validation step too? Code class DataModule(pl.LightningDataModule): def __init__(self, batch_size=25, seed=0): # def __init__(self, dataset, batch_size=25, seed=0): super().__init__() self.dataset = dataset self.batch_size = batch_size self.seed = seed self.split = [801, 100, 100] # self.transform = torchvision.transforms.ToTensor() def setup(self, stage=None): # train/valid/test split # and assign to use in dataloaders via self train_set, valid_set, test_set = torch.utils.data.random_split(self.dataset, self.split, generator=torch.Generator().manual_seed(self.seed)) if stage == 'fit' or stage is None: self.train_set = train_set self.valid_set = valid_set if stage == 'test' or stage is None: self.test_set = test_set def train_dataloader(self): return torch.utils.data.DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True) def valid_dataloader(self): return torch.utils.data.DataLoader(self.valid_set, batch_size=self.batch_size, shuffle=False) def test_dataloader(self): return torch.utils.data.DataLoader(self.test_set, batch_size=self.batch_size, shuffle=False) class LitReg(pl.LightningModule): def __init__(self, in_dims, out_dims, lr=2e-4, max_dict={}): super().__init__() self.in_size = in_dims self.out_size = out_dims self.lr = lr self.max_dict = max_dict # model self.model = nn.Sequential( nn.Linear(self.in_size, self.in_size), nn.LeakyReLU(0.02), nn.Linear(self.in_size, self.out_size) ) self.model.apply(self.weights_init) def forward(self, data): out = self.model(data) return out def training_step(self, batch, batch_idx): x, y, l_rate = batch pred_y = self.model(x) train_loss = F.mse_loss(pred_y, y) self.log('train_loss', train_loss, prog_bar=True) return train_loss def validation_step(self, batch, batch_idx): self._shared_eval(batch, batch_idx, 'val') def test_step(self, batch, batch_idx): self._shared_eval(batch, batch_idx, 'test') def _shared_eval(self, batch, batch_idx, prefix): x, y, l_rate = batch pred_y = self.model(x) loss = F.mse_loss(pred_y, y) self.log(f'{prefix}_loss', loss, prog_bar=True) return loss def configure_optimizers(self): optimizer = torch.optim.Adam(self.parameters(), self.lr) return optimizer def weights_init(self, m): if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d): torch.nn.init.normal_(m.weight, 0.0, 0.02) if isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d): torch.nn.init.normal_(m.weight, 0.0, 0.02) torch.nn.init.constant_(m.bias, 0) if isinstance(m, nn.Linear): torch.nn.init.normal_(m.weight, 0.0, 0.02) torch.nn.init.constant_(m.bias, 0) What have you tried? Placing breakpoints to debug in VSCode, but all in vain. Also accessed both valid and test datasets and loaders. All looks set. What is working? If I load the data the following way. **train_loader = DataLoader(X_train, batch_size=args.batch_size) val_loader = DataLoader(X_val, batch_size=args.batch_size) test_loader = DataLoader(X_test, batch_size=args.batch_size) trainer.fit(model, train_loader, val_loader)** What's your environment? OS: Win Packaging pip Version 1.1.3 Thank you for your attention!
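For comparison, the dataloader hook Lightning looks up on a DataModule is named val_dataloader, not valid_dataloader; a sketch of the loader section it expects (reusing the attributes defined in the code above):

```python
class DataModule(pl.LightningDataModule):
    # ... __init__ and setup as in the question above ...

    def train_dataloader(self):
        return torch.utils.data.DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):  # must be named `val_dataloader` for fit() to run validation
        return torch.utils.data.DataLoader(self.valid_set, batch_size=self.batch_size, shuffle=False)

    def test_dataloader(self):
        return torch.utils.data.DataLoader(self.test_set, batch_size=self.batch_size, shuffle=False)
```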
[BUG] Logging in a callback does not work with multiple optimizers
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug #5063 is not solved when logging in callbacks (@tchaton ). Specifically, self.log returns the following File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 240, in auto_reduce_results_on_epoch_end opt_outputs = epoch_metrics[opt_idx] KeyError: 0 when: there are multiple optimizers a callback calls pl_module.log Please reproduce using the BoringModel https://colab.research.google.com/drive/1JoHFDQx1vad6th2IhNSxL0m2n_1HzQro?usp=sharing Environment the bug appears in stable or master (see notebook) Additional context I'm using 2 optimizers and pl_bolts.callbacks.ssl_online.SSLOnlineEvaluator which does not work as it logs the loss in the callback. (Note that there are other issues with this callback and multiple optimizers #4955 )
Mismatching stats logged while training and while evaluating with the saved model
[ "question" ]
❓ Questions and Help Before asking: Try to find answers to your questions in the Lightning Forum! Search for similar issues. Search the docs. What is your question? I manually print the training accuracy after every epoch. But get different accuracy when evaluate the training set using the saved model at that epoch. Code ` def training_step(self, batch, batch_idx): logits, softmax_logits = self.forward(**batch) loss, prediction_label_count = self.loss_function(logits, batch["labels"]) accuracy = self.compute_accuracy(logits, batch["labels"]) return {"loss": loss, "prediction_label_count": prediction_label_count, "accuracy": accuracy, "log": {"train_loss": loss, "training_accuracy": accuracy}} ` ` def training_epoch_end(self, outputs_of_training_steps): avg_loss = torch.cat([x['loss'].view(-1) for x in outputs_of_training_steps]).mean() training_accuracy = torch.cat([x['accuracy'].view(-1) for x in outputs_of_training_steps]).mean() print("\nTraining epoch stats below:") for label in range(parameters["num_labels"]): pred_freq = torch.cat( [x['prediction_label_count'][label].view(-1) for x in outputs_of_training_steps]).sum() print("Frequency of prediction of label {label} is : {freq}".format(label=label, freq=pred_freq)) if (parameters["use_weighted_CE"] == True): print("weights in this epoch were: ", self.weights) print("training loss: ", avg_loss) print("training accuracy: ", training_accuracy) print("***Training epoch stats finish***\n") log = {"train_accuracy": training_accuracy} return {"train_accuracy": training_accuracy, 'log': log} ` What have you tried? 3 class classification problem, I'm using a "batch_size" of 32, 8 gpus, and training and validation sets are of size 5k each. This is what I get Epoch 0: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 314/314 [01:50<00:00, 2.85it/s, loss=0.751, v_num=7143817] Validation epoch stats below:157/157 [00:33<00:00, 5.06it/s] Frequency of prediction of label 0 is : 2208.0 Frequency of prediction of label 1 is : 1457.0 Frequency of prediction of label 2 is : 1335.0 val accuracy: tensor(0.6483, device='cuda:0') val loss: tensor(0.8154, device='cuda:0') Validation epoch stats finish. Epoch 0: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 314/314 [01:53<00:00, 2.76it/s, loss=0.751, v_num=7143817] Training epoch stats below: Frequency of prediction of label 0 is : 346.25 Frequency of prediction of label 1 is : 437.0 Frequency of prediction of label 2 is : 466.75 training loss: tensor(0.9375, device='cuda:0') training accuracy: tensor(0.5388, device='cuda:0') Training epoch stats finish But when I evaluate the training set using the saved model, I get Frequency of prediction of label 0 is : 2148.0 Frequency of prediction of label 1 is : 1537.0 Frequency of prediction of label 2 is : 1315.0 accuracy: tensor(0.7282, device='cuda:0') Note: Accuracy on Val set comes the same as in the logs i.e 0.6483. Why there is a mismatch in the accuracy on the training set What's your environment? OS: [e.g. iOS, Linux, Win] Packaging [e.g. pip, conda] Version [e.g. 0.5.2.1]
TPU and multi-GPU for RL
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Support RL training on TPUs and multiple GPUs. Motivation Run "SEED RL" and other asynchronous, distributed learning setups for RL.
model_to_device() missing 1 required positional argument 'process_idx'
[ "bug", "help wanted", "environment: slurm" ]
πŸ› Bug When running the code for ddp_cpu on SLURM based cluster, I get this error: Traceback (most recent call last): File "image_classifier.py", line 99, in <module> cli_main() File "image_classifier.py", line 87, in cli_main trainer.fit(model, datamodule=dm) File "/pylon5/cis200022p/balu/softwares/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 472, in fit results = self.accelerator_backend.train() File "/pylon5/cis200022p/balu/softwares/pytorch/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_hpc_accelerator.py", line 64, in train self.ddp_train(process_idx=self.task_idx, model=model) File "/pylon5/cis200022p/balu/softwares/pytorch/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_hpc_accelerator.py", line 172, in ddp_train self.model_to_device(model) TypeError: model_to_device() missing 1 required positional argument: 'process_idx' When I look here the model_to_device function needs process_idx as an input, but is not sent here Please reproduce using the BoringModel I used this code : https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/basic_examples/simple_image_classifier.py Along with this slurm job script: > #!/bin/bash > #SBATCH --job-name='pl_dist' > #SBATCH --nodes=2 > #SBATCH -p RM > #SBATCH --ntasks-per-node=1 > #SBATCH -t 1:00:00 > > module load anaconda3 > source activate /pylon5/softwares/pytorch > > export NCCL_DEBUG=INFO > export PYTHONFAULTHANDLER=1 > > srun -n 2 --ntasks-per-node 1 python image_classifier.py --accelerator 'ddp_cpu' --num_nodes 2 --num_processes 1 --max_epochs 50 Environment CUDA: - GPU: - available: False - version: 10.2 Packages: - numpy: 1.19.2 - pyTorch_debug: False - pyTorch_version: 1.7.1 - pytorch-lightning: 1.1.3 - tqdm: 4.56.0 System: - OS: Linux - architecture: - 64bit - ELF - processor: x86_64 - python: 3.8.5 - version: #1 SMP Mon Jul 29 17:46:05 UTC 2019
How to gather results during inference in ddp
[ "question" ]
❓ Questions and Help Hi, I am using multiple GPUs and ddp mode for model inference. I am wondering how to gather the results from all distributed processes and save them into one file in test_epoch_end. My code looks like this: Code class PLModel(pl.LightningModule): def on_test_epoch_start(self): self.text_predictions = [] def test_step(self, batch, batch_idx): pred_ids = self(batch) pred_texts = ids_to_texts(pred_ids) # some function that converts predicted ids to texts self.text_predictions.extend(pred_texts) def test_epoch_end(self, outputs): # here I want to know how to gather the text_predictions from all processes # after gathering df_text_predictions = pandas.DataFrame({'predictions': self.text_predictions}) df_text_predictions.to_csv('predictions.csv', index=False) What have you tried? I am able to save the results into separate files in each distributed process, but then I have to do post-processing to put all the files together. I want to know how to gather the results automatically so that I don't need to run another script for post-processing.
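One possible approach (a sketch, not an established Lightning API; it assumes PyTorch >= 1.8, where torch.distributed.all_gather_object is available, since the tensor-only all_gather cannot collect lists of strings):

```python
import pandas
import torch.distributed as dist

def test_epoch_end(self, outputs):
    if dist.is_available() and dist.is_initialized():
        gathered = [None] * dist.get_world_size()
        # every rank contributes its local list of strings and receives all of them
        dist.all_gather_object(gathered, self.text_predictions)
        all_predictions = [p for rank_preds in gathered for p in rank_preds]
    else:
        all_predictions = self.text_predictions

    if self.trainer.is_global_zero:  # write the merged file only once
        pandas.DataFrame({'predictions': all_predictions}).to_csv('predictions.csv', index=False)
```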
Automatic calculation of layer sizes
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Calculate the number of layer inputs (especially for linear layers) automatically, without having to resort to manual calculation or print. Motivation I have a model that receives multiple images as inputs. Each input goes through some number of convolution/pool layers, each with potentially different kernel sizes/strides/etc. All of these are hyperparameters that are being tuned. The output from those layers gets flattened, merged, and passed into a linear layer. The problem here is that during init, the linear layer needs to know how many input connections it needs. I can't hard-code this value because hyperparameter tuning means I won't know ahead of time what the size of each image will be post-convolution/pool. I can't resort to using print to check the size, because the size will change between each tuning run. print is fine if you're dealing with a fixed architecture, but it doesn't cut it when tuning parameters that end up changing the architecture. I was working on a personal project recently where I had to calculate and combine all the output sizes, and it ended up being about 100 lines in total: https://github.com/sho-87/webcam-eye-tracker/blob/master/Models.py#L299-L397 Am I just doing this slowly/inefficiently? Is there a saner way to do this without all the hand calculation? This is a relatively simple case without tuning things like stride; I expect it to get more complicated in those situations. If this is the best we have currently, then consider this a feature request for automatic calculation of those sizes, because this feels quite excessive in terms of boilerplate. I would essentially like to just replace line 391 in my code with something like self.fc1 = nn.Linear(None, config["dense_nodes"]) and have the None filled in for me. As a side note, we can set self.example_input_array to log the graph to TensorBoard, so aren't these layer sizes already stored somewhere in a LightningModule? Maybe we can use that, or a similar idea, to avoid hand-calculating everything?
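A common workaround that avoids the hand calculation (a sketch, assuming the convolutional trunk is an nn.Sequential and the input image size is known up front): run a dummy tensor through the trunk once in __init__ and read the flattened size off the output.

```python
import torch
import torch.nn as nn

class ConvNet(nn.Module):  # illustrative model, not the linked project
    def __init__(self, in_channels=1, img_size=(96, 96), dense_nodes=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(),
        )
        # infer the flattened feature size instead of computing it by hand
        with torch.no_grad():
            dummy = torch.zeros(1, in_channels, *img_size)
            n_flat = self.features(dummy).flatten(1).shape[1]
        self.fc1 = nn.Linear(n_flat, dense_nodes)

    def forward(self, x):
        return self.fc1(self.features(x).flatten(1))
```

PyTorch 1.8+ also ships nn.LazyLinear, which defers the in_features calculation to the first forward pass, though the dummy-forward trick above works on older versions as well.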
Add a `Trainer.predict()`, similar to `Trainer.test()`, except returns the predictions
[ "feature", "priority: 0", "design" ]
πŸš€ Feature Add a Trainer.predict(), similar to Trainer.test(), that returns the predictions Motivation I want to get prediction, without writing for batch in ..., to(cuda), output.append() ... Pitch Making prediction should be part of a LightningModule "system". And it should be easy to add this functionality. Alternatives Currently the doc suggests to make prediction like normal PyTorch: https://pytorch-lightning.readthedocs.io/en/latest/trainer.html#deployment-prediction Additional context Related forum post: https://forums.pytorchlightning.ai/t/save-test-predictions/549/3
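For reference, this is roughly the boilerplate the proposed `Trainer.predict()` would remove; `LitModel`, the checkpoint path, the dataloader and the device are placeholders:

```python
import torch

model = LitModel.load_from_checkpoint("path/to/checkpoint.ckpt")
model.eval()
model.to("cuda")

predictions = []
with torch.no_grad():
    for batch in dataloader:
        batch = batch.to("cuda")
        predictions.append(model(batch).cpu())
predictions = torch.cat(predictions)
```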
Add plugin trainer flag- Trainer(plugin="ddp_sharded")
[ "feature", "help wanted" ]
@edenlightning commented on Wed Nov 18 2020 As a user, I would like to set the plugin using a string flag. Currently, you need to pass in the plugin explicitly. It would be nicer if we could make the plugin configurable, so that the user only has to pass a string to enable a plugin. @edenlightning commented on Thu Nov 19 2020 Trainer(plugin='sharded')
Model- and data-specific unit tests
[ "feature", "ci", "discussion" ]
πŸš€ Feature Trainer option to run callbacks for model- and data-specific unit testing Motivation The docs mention a fast_dev_run option for initializing a trainer. Correct me if I'm wrong, but my understanding is this simply hits every line of code in the train/val loops to ensure there won't be any crashes on a full-scale training. This seems to fall a bit short of proper unit testing that would validate several assertions in addition to confirming there are no crashes. Unit testing for machine learning models appears to be an ongoing and relatively immature development in the community. However, there are a few useful articles discussing its use and importance PyTorch Lightning maintainer Jeremy Jordan wrote a helpful article that suggests several tests This Medium article which was originally written for TensorFlow, but already has a PyTorch port here Additional unit tests ideas are brainstormed in this article Pitch Define a set of model- and data-specific unit tests using the above articles as a starting point Separate the information needed to run the tests into two categories Generic: consistent across all models/dataset, can be deduced within the Lightning framework automatically Specific: depends on the model and/or dataset - practitioner is expected to provide this Determine an appropriate set of pl.LightningModule callbacks to allow users to provide the specific inputs discussed above Use these callbacks in pl.Trainer(do_unit_tests=True) to execute the tests I would suggest tests are on by default If the callbacks have not been implemented, skip the tests and provide a logging message to the user to encourage them to define the callbacks Alternatives Use the existing fast_dev_run - I explain above why this is different from proper unit tests Allow users to integrate with a 3rd party test library (like the link above for torchtest) - I think it would be preferable to have this functionality built-into the Lightning API rather than relying on other imports. Doing so would make it easier to automate the generic inputs discussed in the pitch Expect users to write their own test suites - PyTorch Lightning strives to minimize boilerplate code and this seems like an excellent candidate for that!
Continuing training when using learning rate schedulers
[ "question" ]
❓ Questions and Help When restarting training on a model using learning rate scheduler, it seems like the original learning rate is used rather than the scheduler-update learning rate. Code For example, a model with the following configure_optimizers: def configure_optimizers(self): optimizer = optim.Adam(self.parameters(), lr=self.hparams.learning_rate) scheduler = optim.lr_scheduler.ExponentialLR( optimizer, gamma=self.hparams.learning_gamma ) return [optimizer], [scheduler] With learning_gamma != 1.0, when restarting training, e.g.: model = myModel.load_from_checkpoint(ckpt_fname) lr_monitor = LearningRateMonitor(logging_interval="epoch") trainer = Trainer(resume_from_checkpoint=ckpt_fname, callbacks=[lr_monitor]) trainer.fit(model) The logged learning rate is equal to the original initial learning rate, rather than the schedule-updated learning rate. What's your environment? OS: Linux Packaging: conda Version: 1.1.0
Wrong exception message
[ "bug", "good first issue", "refactor" ]
pytorch-lightning/pytorch_lightning/accelerators/accelerator_connector.py Lines 344 to 347 in 1f6236a raise MisconfigurationException( 'DataParallel does not support num_nodes > 1. Switching to DistributedDataParallel for you. ' 'To silence this warning set `accelerator="ddp"` or `accelerator="ddp2"`' ) The message is written as if it was a warning, not an exception
LightningModule docs is wrong when defining "Training with DataParallel"
[ "won't fix", "docs" ]
The docs say that when training with DataParallel the training_step_end function should look like this: def training_step_end(self, batch_parts): gpu_0_prediction = batch_parts[0]['pred'] gpu_1_prediction = batch_parts[1]['pred'] # do something with both outputs return (batch_parts[0]['loss'] + batch_parts[1]['loss']) / 2 but actually, when I try it myself, it should be like this: def training_step_end(self, batch_parts): gpu_0_prediction = batch_parts['pred'][0] gpu_1_prediction = batch_parts['pred'][1] # do something with both outputs return (batch_parts['loss'][0] + batch_parts['loss'][1]) / 2 The order of the axes is reversed.
Empty "outputs" argument in the on_train_batch_end() method of Callback
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug The "outputs" argument of the 'on_train_batch_end' method of a lightning Callback seems to be empty, unless training_epoch_end() is implemented in the lightning model. I'm looking for a way to process the outputs of training_step() in a callback. If I'm not mistaken, the "outputs" argument of the on_train_batch_end() of a lightning callback is meant for use cases like this. If I don't implement the training_epoch_end() method in my lightning model, the "outputs" argument is consistently an empty list. Implementing training_epoch_end() does fill the "outputs" argument with the output of training_step(), but I'd like to avoid this, as keeping track of all the training_step outputs for an entire epoch might be memory intensive. Reproducing the issue To Reproduce The following link BoringModel contains the behaviour I'm referring to. Expected behavior The "outputs" argument of the on_train_batch_end() method of a lightning callback is an empty list if one comments out train_epoch_end() in the lightning model. Environment CUDA: GPU: Tesla T4 available: True version: 10.1 Packages: numpy: 1.19.5 pyTorch_debug: True pyTorch_version: 1.7.0+cu101 pytorch-lightning: 1.1.4 tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.6.9 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 Additional context If this is a feature rather than a bug, how do you recommend we use the outputs of training_step in a callback without having to track all training_step outputs for an entire epoch?
on_after_backward docs for logging histograms
[ "question" ]
What is your question? I was following the example on_after_backward code for logging parameter variables for tensorboard histograms. https://pytorch-lightning.readthedocs.io/en/latest/_modules/pytorch_lightning/core/hooks.html#ModelHooks.on_after_backward Code # example to inspect gradient information in tensorboard if self.trainer.global_step % 25 == 0: # don't make the tf file huge params = self.state_dict() for k, v in params.items(): self.logger.experiment.add_histogram( tag=k, values=v.grad, global_step=self.trainer.global_step ) and I was getting errors with v.grad: for every tensor in my state_dict, v.grad is None rather than a valid tensor. Has the PyTorch API changed, perhaps? Or is this method mainly suitable for logging the values of the variables rather than their gradients? What's your environment? OS: Ubuntu 20.04 Packaging pip Version 1.1.4 pytorch 1.7.0
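A likely explanation and fix: `state_dict()` returns detached copies of the parameters, which never carry `.grad`; iterating `named_parameters()` instead yields the live `Parameter` objects. A sketch of the hook rewritten that way, to be placed inside the LightningModule:

```python
def on_after_backward(self):
    if self.trainer.global_step % 25 == 0:  # don't make the tf file huge
        for name, param in self.named_parameters():
            if param.grad is not None:
                self.logger.experiment.add_histogram(
                    tag=name, values=param.grad, global_step=self.trainer.global_step
                )
```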
Illegal instruction (core dumped) when running MNIST Hello World
[ "bug", "help wanted", "won't fix", "priority: 2" ]
πŸ› Bug This may be similar to issue #5488, with the following differences: I'm not using a GPU the error message is Illegal instruction (core dumped) rather than Segmentation fault (core dumped). I copy pasted the code here, modifying the trainer declaration so as not to use a GPU: import os import torch from torch import nn from torch.nn import functional as F from torch.utils.data import DataLoader, random_split from torchvision.datasets import MNIST from torchvision import transforms import pytorch_lightning as pl from pytorch_lightning.metrics.functional import accuracy class MNISTModel(pl.LightningModule): def __init__(self): super(MNISTModel, self).__init__() self.l1 = torch.nn.Linear(28 * 28, 10) def forward(self, x): return torch.relu(self.l1(x.view(x.size(0), -1))) def training_step(self, batch, batch_nb): x, y = batch loss = F.cross_entropy(self(x), y) return loss def configure_optimizers(self): return torch.optim.Adam(self.parameters(), lr=0.02) # Init our model mnist_model = MNISTModel() # Init DataLoader from MNIST Dataset train_ds = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()) train_loader = DataLoader(train_ds, batch_size=32) # Initialize a trainer trainer = pl.Trainer(max_epochs=3, progress_bar_refresh_rate=20) # no more GPU # Train the model ⚑ trainer.fit(mnist_model, train_loader) To Reproduce The script above fails when I execute it remotely, with the following specifications: PyTorch Version: 1.7.1 PyTorch Lightning Version: 1.1.4 OS: Ubuntu 20.04.1 LTS How you installed PyTorch: pip Python version: 3.8 I get the error message: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.) return torch._C._cuda_getDeviceCount() > 0 GPU available: False, used: False TPU available: None, using: 0 TPU cores Illegal instruction (core dumped) My understanding is that I can ignore the UserWarning, as I don't have - and my code does not require - a GPU. When I comment the line trainer.fit(mnist_model, train_loader), the rest of the code runs fine. When I execute the script above locally, with the following specifications, the script also runs just fine: PyTorch Version: 1.7.1 PyTorch Lightning Version: 1.1.4 OS: macOS Big Sur (version 11.1) How you installed PyTorch: conda Python version: 3.9
Remove assert in production code
[]
asserts are removed when compiling to optimized byte code (python -O, producing *.pyo files). This causes various protections to be removed. assert isinstance -> raise TypeError #5536 assert *.rank == 0 -> assert len(...) == len(...) ->
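An illustrative before/after of the kind of change proposed here; the `metric` variable is a placeholder:

```python
from pytorch_lightning.metrics import Metric

# before: silently skipped when Python runs with -O / -OO
assert isinstance(metric, Metric)

# after: enforced regardless of optimized byte code
if not isinstance(metric, Metric):
    raise TypeError(f"Expected a Metric, got {type(metric).__name__}")
```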
fast_dev_run breaks with val_check_interval
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug It looks like val_check_interval must be <= num_training_batches, but the latter is set to 1 in fast_dev_run, so things break. Please reproduce using the BoringModel https://colab.research.google.com/drive/1cMWbhk5pmkGe_y6znQFHebRUrqa_ssu9?usp=sharing To Reproduce Use following BoringModel and post here Expected behavior Probably validation checks should be disabled in fast_dev_run Or val_check_interval should be set to 1 as well. Environment CUDA: GPU: Tesla T4 available: True version: 10.1 Packages: numpy: 1.19.5 pyTorch_debug: True pyTorch_version: 1.7.0+cu101 pytorch-lightning: 1.1.4 tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.6.9 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 Additional context
Why are losses different when logging from '_step' (with on_epoch=True) compared to logging from '_epoch_end'?
[ "bug", "help wanted", "3rd party", "priority: 1", "logging" ]
πŸ› Bug When logging losses from {prefix}_step with self.log("{prefix}_loss", loss, on_step=False, on_epoch=True), they are different from losses logged in {prefix}_epoch_end, using avg_loss = torch.stack([x["loss"] for x in outputs]).mean() self.log( name="loss/" + prefix, value=avg_loss, prog_bar=False, logger=True, on_step=False, on_epoch=True ) Please reproduce using the BoringModel https://colab.research.google.com/drive/1Sz9kgGuMWxcAPOZ7SPcm4XYhsoWgwOFt?usp=sharing To Reproduce Run the code from the link (I copied it into a .py-file and ran it from commandline) and see csv-file with logged losses (under ./default/version_{int}/metrics.csv). Expected behavior {prefix}_loss (logged from {prefix}_step) and loss/{prpefix} (logged from {prefix}_epoch_end) should be equal. Environment CUDA: - GPU: - available: False - version: 10.2 Packages: - numpy: 1.19.5 - pyTorch_debug: False - pyTorch_version: 1.7.1 - pytorch-lightning: 1.1.4 - tqdm: 4.56.0 System: - OS: Linux - architecture: - 64bit - ELF - processor: x86_64 - python: 3.8.5 - version: #1 SMP Tue Jun 23 12:58:10 UTC 2020 Additional context The differences in losses seem marginally using this boring model, however, to me it is unclear, why this happens. From my understanding, there should be no differences at all when logging losses from steps with on_epoch=True or from epoch_ends.
Load callback states while testing.
[ "feature", "help wanted", "checkpointing", "priority: 1", "trainer: validate", "trainer: test" ]
πŸš€ Feature Load callback states while testing. Motivation #5161 (comment) Pitch Two possible API changes: with an additional argument restore_states: test(ckpt_path, restore_states=True/False) # give an option whether to load states or not test(model, ckpt_path, restore_states=True/False) # same as above but will just load checkpoint states and not the model # raise an error test(ckpt_path=None, restore_states=True) or without any additional argument: test(ckpt_path) # always load states test(ckpt_path=None) # don't load any states. test(model, ckpt_path) # reload checkpoint states only from ckpt_path Alternatives Alternatively, one can just reload checkpoints manually, call on_load_checkpoint for all the callbacks manually, and test. PS: There may be a better solution. Open to suggestions :) cc: @ananthsub cc @Borda @awaelchli @ananthsub @ninginthecloud @rohitgr7 @tchaton @akihironitta
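A rough sketch of the manual alternative mentioned above. The checkpoint layout (a "callbacks" dict keyed by callback type) and the hook signatures may differ between versions, so treat this as an assumption to verify rather than a recipe; `MyModel` and the path are placeholders:

```python
import torch

ckpt_path = "path/to/checkpoint.ckpt"  # placeholder
checkpoint = torch.load(ckpt_path, map_location="cpu")
model = MyModel.load_from_checkpoint(ckpt_path)  # MyModel is a placeholder

# hand each callback its saved state before testing
callback_states = checkpoint.get("callbacks", {})
for callback in trainer.callbacks:
    state = callback_states.get(type(callback))
    if state is not None:
        callback.on_load_checkpoint(state)

trainer.test(model)
```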
Failing to load the GPU-trained model onto CPU-machine
[ "question", "checkpointing" ]
What is your question? Hi, I can load neither the model nor a checkpoint from GPU onto a CPU machine. I followed the docs but the problem remains. How can I resolve this issue? Please see the code sections below and their corresponding error messages. Code m_ae_model = LitAE.load_from_checkpoint(r'Path\version_1_load.ckpt', in_out_size, z_size) TypeError: 'tuple' object is not callable (in_out_size is a tuple, the image dimensions for the convolutional operations, whereas z_size is the latent variable dimension) I also tried loading the ".pt" model by placing it onto the CPU using map_location. Code location = torch.device('cpu') m_ae_model = torch.load("m_ae_model.pt", map_location=location) AttributeError: 'Trainer' object has no attribute '_enable_pl_optimizer' What have you tried? Saving the GPU-trained model onto CPU (by model.to('CPU')), loading checkpoints. What's your environment? OS: Win (both machines) Packaging pip Version 1.1.3
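One likely fix for the first error: `load_from_checkpoint` takes `map_location` as its second positional argument, so the extra constructor arguments have to be passed by keyword. A sketch using the names from the question:

```python
import torch

m_ae_model = LitAE.load_from_checkpoint(
    r"Path\version_1_load.ckpt",
    map_location=torch.device("cpu"),  # force all tensors onto the CPU
    in_out_size=in_out_size,           # keyword, not positional
    z_size=z_size,
)
```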
GAN Manual optimization not working after 1.0.7
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug I have a GAN model and as of 1.0.7 it no longer trains correctly. I've tried to troubleshoot the issue to the best of my ability but I have no idea what's causing the problem. Please reproduce using the BoringModel https://colab.research.google.com/gist/import-antigravity/0730243bb11b56031110fd6aa7d58971/the-boringmodel.ipynb To Reproduce See boringmodel above^^^ Expected behavior Using the colab notebook switch between version 1.0.6 and 1.0.7 to see the bug. For 1.0.6 after training a few epochs it's clear the GAN is beginning to converge, for 1.0.7 it just makes noise. Environment PyTorch Version (e.g., 1.0): 1.0.7 OS (e.g., Linux): macOS, Linux How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): n/a Python version: 3.8 CUDA/cuDNN version: N/A GPU models and configuration: N/A Any other relevant information: N/A
How to save/load hyperparameters of system components?
[ "question", "waiting on author" ]
How should I handle hyperparameters of a submodule? For example in the snippet below, say MyGenerator takes its hparams in its constructor, how should I handle that? class LitMNIST(LightningModule): def __init__(self, loss_fx, generator_network, layer_1_dim=128, **kwargs): super().__init__() self.layer_1_dim = layer_1_dim self.loss_fx = loss_fx # call this to save (layer_1_dim=128) to the checkpoint self.save_hyperparameters('layer_1_dim') # to load specify the other args model = LitMNIST.load_from_checkpoint(PATH, loss_fx=torch.nn.SomeOtherLoss, generator_network=MyGenerator(???))
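One common pattern, shown as a sketch reusing the names from the question rather than a definitive answer: pass the submodule's hyperparameters instead of the constructed submodule, save them with `save_hyperparameters`, and rebuild `MyGenerator` inside `__init__`, so `load_from_checkpoint` can reconstruct it from the checkpoint alone:

```python
class LitMNIST(LightningModule):
    def __init__(self, loss_fx, generator_hparams: dict, layer_1_dim=128, **kwargs):
        super().__init__()
        self.loss_fx = loss_fx
        # both the scalar and the dict of generator hparams end up in the checkpoint
        self.save_hyperparameters('layer_1_dim', 'generator_hparams')
        self.generator_network = MyGenerator(**generator_hparams)

# only the arguments that were not saved need to be supplied again
model = LitMNIST.load_from_checkpoint(PATH, loss_fx=torch.nn.SomeOtherLoss)
```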
validation_epoch_end or on_train_epoch_end receive extra arguments/data
[ "question" ]
❓ Questions and Help What is your question? Basically I need to validate my model after each epoch, but the label information is a little tricky (a mixture of lists of tuples of ints and strings), so it cannot be included in the DataLoader. However, validation_epoch_end, validation_step, etc. only receive the DataLoader output. Is there any solution? I don't want to add the raw data to the module, which would make the code difficult to understand. Code
Refactoring GIF pl.TrainResult is (deprecated) Gif needs to be updated.
[ "duplicate", "docs" ]
πŸ“š Documentation The example gif showing refactoring needs to be updated as the reference to pl.TrainResult has been deprecated. Thanks!
GPU memory leak in For Loop with AMP mode
[ "bug", "priority: 2" ]
πŸ› Bug Computation in For loop with AMP ON cause GPU memory leak, and crash training. To Reproduce Notebook. Core part: def generate(self, device: str) -> None: ipt = torch.tensor([[float(j) for j in range(0, self.size_i)]], device=device) for _ in range(0, 10000): self.fc(ipt) Run .generate() in validation step with precision=16 trainer's argument. Expected behavior self.fc(ipt)'s output is simply discarded, so GPU memory will be released. GPU memory leak never occur. Actual behavior Out of Memory error only when precision=16, not precision=32. RuntimeError: CUDA out of memory. .... Environment Check in Colab Notebook.
Need check_val_every_n_steps in Trainer
[ "feature", "help wanted" ]
πŸš€ Feature Add an argument check_val_every_n_steps to the Trainer.__init__ function to check metrics on the validation set every certain number of steps. Motivation For many tasks, large models are trained in steps not complete epochs, especially pretrained models in CV and NLP. As a consequence, step-based arguments like max_steps, log_every_n_steps may be more convenient than epoch-based ones. However, the Trainer API only has a check_val_every_n_epoch argument for computing metrics of validation data. It would be very helpful to have an additional argument like check_val_every_n_steps in the Trainer constructor. Pitch Trainer.__init__(..., check_val_every_n_epoch=1, check_val_every_n_steps=100, ...)
Training using DDP and SLURM
[ "question", "distributed", "environment: slurm" ]
❓ Questions and Help What is your question? The current scenario is two nodes with different numbers of free GPUs. For instance, node1 has 5 free gpus and node2 has 3 free gpus. I can request the 8 free gpus using slurm without caring about the number of nodes. Is there any way that I can use PL to use the 8 available gpus in this context? I read the documentation and it looks like one constraint is to always have the same number of free gpus on each node.
args error when running pl_examples semantic_segmentation through command line
[ "bug", "help wanted" ]
πŸ› Bug Running python pl_examples\domain_templates\semantic_segmentation.py leads to argument options conflicting error. The issue is the same as the one with #5382 This error seems to be because add_help=True in argparse.ArgumentParser in both the parent_parser and in the parser for adding model specific args. Setting it to False in only one of either parser seems to fix this and help options work too To Reproduce https://colab.research.google.com/drive/1KgByl810j5ux7vmHoGfOz5MfYG3dIwqd?usp=sharing Error: Traceback (most recent call last): File "pytorch-lightning/pl_examples/domain_templates/semantic_segmentation.py", line 288, in <module> parser = SegModel.add_model_specific_args(parser) File "pytorch-lightning/pl_examples/domain_templates/semantic_segmentation.py", line 247, in add_model_specific_args parser = ArgumentParser(parents=[parent_parser]) File "/usr/lib/python3.6/argparse.py", line 1666, in __init__ self._add_container_actions(parent) File "/usr/lib/python3.6/argparse.py", line 1435, in _add_container_actions group_map.get(action, self)._add_action(action) File "/usr/lib/python3.6/argparse.py", line 1565, in _add_action action = super(_ArgumentGroup, self)._add_action(action) File "/usr/lib/python3.6/argparse.py", line 1375, in _add_action self._check_conflict(action) File "/usr/lib/python3.6/argparse.py", line 1514, in _check_conflict conflict_handler(action, confl_optionals) File "/usr/lib/python3.6/argparse.py", line 1523, in _handle_conflict_error raise ArgumentError(action, message % conflict_string) argparse.ArgumentError: argument -h/--help: conflicting option strings: -h, --help Environment Google Colab
"has conflicts" label removal
[ "bug", "ci" ]
The "has conflicts" label is not removed automatically once conflicts are fixed cc: @Borda
Accelerator examples cannot run
[ "help wanted", "good first issue", "docs" ]
πŸ“š Documentation In the accelerator documentation, the documentation tells you to use a custom accelerator like: trainer = Trainer(accelerator=DDPAccelerator()) Unfortunately, this is not actually possible -- all of the accelerators (AFAICT) require a trainer argument, and actually trying to execute this code raises: TypeError: __init__() missing 1 required positional argument: 'trainer' Because of this, I'm not actually sure how to use a custom accelerator because there seems to be a circular dependency: the accelerators require the trainer as input, the trainer itself needs an accelerator.
Add a warning for returning none for from training_step using multi-GPU
[ "docs" ]
Mechanism to skip certain hooks
[ "feature", "discussion", "design", "hooks" ]
πŸš€ Feature Do we need a way to prevent certain hooks to be executed? I'm not entirely sure how solid this idea is, so I'm hoping for some discussion :) Motivation A user encountered a use case in which they wanted to build the model in the setup hook. However, because the setup hook is exectued everytime regardless whether the model was already trained or not, this would then overwrite the weights of the model, making continued training impossible. #5410 Pitch Something abstract like model.skip_hooks = ["setup", ...] could be considered. Alternatives The user can handle it on their side with some conditional code in their hook, def setup(self, stage): if self.has_setup: return # do your thing ... self.has_setup = True or forcefully remove the method from the model object, delattr(model, "setup"), for example. Additional context Originates from here cc: @sjgosai cc @Borda @tchaton @justusschock @awaelchli @carmocca @ninginthecloud @daniellepintz @rohitgr7
tensorboard displays incorrect Learning-rates
[ "bug", "help wanted" ]
πŸ› Bug For example, if the learning-rate is euler's e-6 (i.e. 0.00247875217), then it is displayed as 2.4788e-3 on the vertical axis of tensorboard's graph. The correct value should be 2.4788 * (10 raised to the power of -3). Another example, if the learning-rate is euler's e-9 (i.e. 0.0001234098), then it is displayed as 1.2341e-4 on the vertical axis of tensorboard's graph. The correct value should be 1.2341 * (10 raised to the power of -4). So, instead of "e" there should be "10". I am using the LearningRateMonitor. Please reproduce using the BoringModel Using PDB, I traced the code when learning rate is 0.00247875217. Everything is fine up until the following: /home/vin/.local/lib/python3.8/site-packages/pytorch_lightning/callbacks/lr_monitor.py(156)_extract_lr() -> return {name: lr} (Pdb) name, lr ('lr-Adam', 0.00247875217) /home/vin/.local/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py(118)add_event() -> self._async_writer.write(event.SerializeToString()) (Pdb) event wall_time: 1611158418.5658185 summary { value { tag: "lr-Adam" simple_value: 0.0024787522852420807 } } However, I am unable to trace how this number is incorrectly displayed. Expected behavior For the learning-rate, the TensorBoard should disply "10" instead of "e". Environment PyTorch Version (e.g., 1.0): 1.6.0 OS (e.g., Linux): Linux-Ubuntu How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): Python version: 3.8.5 CUDA/cuDNN version: 10.2 GPU models and configuration: Nvidea GTX-1080 Any other relevant information: Pytorch Lightning version 1.1.4 Additional context
outputs of training_epoch_end for different configure_optimizers conditions
[ "question" ]
Condition One: when I write optimizer as follows: def configure_optimizers(self): return [disOptim,genOptim],[] I can simply write the training_epoch_end as follows: def training_epoch_end(self,outputs): sum_loss_D_real=torch.stack([x['D_loss_real'] for x in outputs[0]]).sum() Condition Two: However when I write the optimizer as follows: def configure_optimizers(self): return ({'optimizer':disOptim,'frequency':2},{'optimizer':genOptim,'frequency':1}) I need to write the training_epoch_end as follows: def training_epoch_end(self,outputs): mean_d_loss=torch.stack([outputs[i]['d_loss'] for i in range(len(outputs)) if ((i+1)%(self.hparams.n_critic+1))]).mean() Is there any way that I can write the optimizer the same as condition two and the training_epoch_end as condition one?
Training is interrupted without error with Multi-GPU
[ "bug", "help wanted", "priority: 0", "waiting on author", "distributed" ]
πŸ› Bug The training is interrupted randomly in the middle of an epoch without errors. The console only says: Terminated. The error does not necessarily occur, if it does then mostly between epochs 2-4. It is noticeable that processes are still running after the termination, the graphic cards are still used by python processes. We train the PyTorch version of the ImageGPT model with huggingface transformers. Could also be problem of huggingface, we are not sure. Epoch 1: 29%|β–ˆβ– | 9413/32393 [3:28:18<8:28:33, 1.33s/it, loss=3.23, v_num=9]Terminated Please reproduce using the BoringModel Cant reproduce with Boring Model. Code class ImageGPT(pl.LightningModule): def __init__(self, learning_rate=learning_rate ): super().__init__() self.gpt2 = ImageGPT2LMHeadModel(config=...) self.criterion = nn.CrossEntropyLoss(reduction='none') self.learning_rate = learning_rate def forward(self, x): return self.gpt2(x, past_key_values=None) .... logger = pl_loggers.TensorBoardLogger(save_dir="logs", name=name) checkpoint_callback = ModelCheckpoint( save_top_k=1, verbose=True, monitor='val_loss', mode='min', filepath='../models', prefix='ImageGPT' ) trainer = Trainer( accelerator='ddp', max_epochs=10, max_steps=None, precision=32, accumulate_grad_batches=1, gpus=[0, 1, 2], callbacks=[checkpoint_callback], logger=logger, gradient_clip_val=0.6 ) trainer.fit(model=model, datamodule=datamodule) Expected behavior The training is fully completed across all epochs. Environment CUDA: GPU: TITAN RTX TITAN RTX TITAN RTX available: True version: 10.2 Packages: numpy: 1.19.4 pyTorch_debug: False pyTorch_version: 1.7.1 pytorch-lightning: 1.1.2 transformers: 3.5.1 tqdm: 4.55.0 System: OS: Linux, 64bit processor: x86_64 python: 3.7.4 version: 86-Ubuntu SMP Fri Jan 17 17:24:28 UTC 2020 Additional context We have made the following points to solve the problem: set the num-workers of the dataloaders to 0 or 1 (instead of 32-64) go back to 32 bit precision different learning rates added gradient clipping used AdamW implementation from huggingface
Should the LightningModule contain a 'from_argparse_args' attribute as does the LightningDataModule?
[]
❓ Questions and Help Hello everyone, love the project so far. Searching for a workaround for the following issue. Any help would be greatly appreciated. What is your question? Should the LightningModule contain a 'from_argparse_args' attribute as does the LightningDataModule? Code Lightning module code class UNet(pl.LightningModule): @staticmethod def add_specific_args(parent_parser): parser = ArgumentParser(parents=[parent_parser], add_help=False) parser.add_argument('--loss_function', type=str, default='F.binary_cross_entropy_with_logits') parser.add_argument('--optimizer', type=str, default='torch.optim.Adam') parser.add_argument('--encoder_args', type=tuple, default=( (1, 32), # x (32, 64), # x/2 (64, 128), # x/4 (128, 256), # x/8 (256, 512), # x/16 (512, 1024) # /32 ) ) parser.add_argument('--output_channels', type=int, default=1) parser.add_argument('--learning_rate', type=float, default=1e-3) return parser def __init__(self, loss_function, optimizer, encoder_args, output_channels, learning_rate): super().__init__() self.loss_function = eval(loss_function) self.optimizer = eval(optimizer) self.encoder_args = encoder_args self.output_channels = output_channels self.learning_rate = learning_rate self.init_layers() Training and testing def main(args): # DataModule dm = DataModule.from_argparse_args(args) dm.prepare_data() dm.setup() # LightningModule model = UNet.from_argparse_args(args) # Trainer trainer = pl.Trainer.from_argparse_args(args) trainer.fit(model, dm) trainer.test(datamodule=dm) if __name__ == '__main__': # ArgumentParser parser = ArgumentParser() parser = DataModule.add_specific_args(parser) parser = UNet.add_specific_args(parser) parser = pl.Trainer.add_argparse_args(parser) args = parser.parse_args() main(args) Current behaviour Calliing # LightningModule model = UNet.from_argparse_args(args) results in the following AttributeError model = UNet.from_argparse_args(args) AttributeError: type object 'UNet' has no attribute 'from_argparse_args' Current workaround Pass the arguments explicitly model = UNet(args.loss_function, args.optimizer, args.encoder_args, args.output_channels, args.learning_rate)
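Until such a method exists on LightningModule, a small hand-rolled classmethod can provide the same behaviour. This is a sketch: filtering by the constructor signature is an implementation choice, not Lightning API:

```python
import inspect
from argparse import Namespace
import pytorch_lightning as pl

class UNet(pl.LightningModule):
    @classmethod
    def from_argparse_args(cls, args: Namespace, **kwargs):
        # keep only the entries that __init__ actually accepts
        accepted = inspect.signature(cls.__init__).parameters
        init_args = {k: v for k, v in vars(args).items() if k in accepted}
        init_args.update(kwargs)
        return cls(**init_args)
```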
Code stuck after running 1 epoch on TPU
[ "question", "accelerator: tpu" ]
❓ Questions and Help What is your question? I'm trying to run the LitAutoEncoder on TPUs, but the code runs for 1 epoch and gets stuck there. Code class LitAutoEncoder(pl.LightningModule): def __init__(self, hparams): super().__init__() self.hparams = hparams self.encoder = nn.Sequential( nn.Linear(28*28, 64), nn.ReLU(), nn.Linear(64, 3) ) self.decoder = nn.Sequential( nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28*28) ) def forward(self, x): # in lightning, forward defines the prediction/inference actions embedding = self.encoder(x) return embedding def training_step(self, batch, batch_idx): # training_step defined the train loop. # It is independent of forward x, y = batch x = x.view(x.size(0), -1) z = self.encoder(x) x_hat = self.decoder(z) loss = F.mse_loss(x_hat, x) # Logging to TensorBoard by default self.log('train_loss', loss) return loss def configure_optimizers(self): optimizer = torch.optim.Adam(self.parameters(), lr=1e-3) return optimizer dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor()) train_loader = DataLoader(dataset, drop_last=True, batch_size=32) args_dict = dict( num_train_epochs=1, seed=42, ) args = argparse.Namespace(**args_dict) autoencoder = LitAutoEncoder(args) train_params = dict( tpu_cores=8, progress_bar_refresh_rate=30, ) trainer = pl.Trainer(**train_params) trainer.fit(autoencoder, train_loader) Reproducible Colab Notebook Notebook What's your environment? Colab Packaging pip pytorch-1.7 pytorch-lightning-1.1.5
Failing to log to Neptune.ai when resuming from checkpoint
[ "bug", "duplicate", "help wanted" ]
πŸ› Bug I'm trying to resume training from a checkpoint, and I'm trying to resume logging using the Neptune.ai logger, but it throws this particular error: neptune.api_exceptions.ChannelsValuesSendBatchError: Received batch errors sending channels' values to experiment CAS-68. Cause: Error(code=400, message='X-coordinates must be strictly increasing for channel: b7ab6110-7a1b-4093-a35f-dc0904ff943f. Invalid point: InputChannelValue(timestamp=2021-01-23T11:02:05.882Z, x=98.0, numericValue=0.30072227120399475, text', type=None) (metricId: 'b7ab6110-7a1b-4093-a35f-dc0904ff943f', x: 98.0) Skipping 3 values. At the point that I'm resuming, I've already trained for 7000 steps, yet the logger tries to log at step x=98.0 as seen in the error message. Please reproduce using the BoringModel I'm not sure how I should reproduce this bug since it's more related to how Neptune.ai's logger specifically works, but I can try to come up with a repro soon. Environment PyTorch Version (e.g., 1.0): OS (e.g., Linux): Linux How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): Python version: 3.8.5 CUDA/cuDNN version: 11.1 GPU models and configuration: Tesla V100 Any other relevant information:
MetricList for moving entire list of metrics onto GPU
[ "feature", "help wanted" ]
πŸš€ MetricList MetricList serves the same function for Metrics in PL as nn.ModuleList in PyTorch for Modules. As such, they take care of flexibly moving all Metrics and their states onto GPU or CPU. Motivation As a dynamic inference researcher, I want to be able to test multiple setups for a single model (f.e. executing my model at different depths or widths). For each of these setups, I want to be able to track mostly the same metrics (f.e. Classification Accuracy) and making separate PL Metrics for each one of these metrics is tedious. Moreover, I might want to specify the amount of setups with a parameter which makes constructing them in a list format the most elegant solution. Pitch In regular PyTorch they introduced nn.ModuleList especially for this purpose (f.e. swapping out different FC layers for different setups) Alternatives Evaluating if any of the internal variables is a list that might contain metrics does not seem like good design to me, but has the advantages that the user will not need to know of the existence of MetricList. Maybe we can add a warning or error to use a MetricList when any errors as per "Additional Context" happens? Additional context Using regular lists that hold metrics and updating them while training on GPU will now keep metric states on CPU leading to errors, see a BoringModel example here, that as expected returns: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! Happy to work on a PR for this feature, but wanted to know your thoughts.
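Since Metric already subclasses nn.Module, wrapping the list in nn.ModuleList may already give the requested behaviour today; a sketch of that assumption, with `num_setups` and the choice of Accuracy purely illustrative:

```python
from torch import nn
import pytorch_lightning as pl

class DynamicDepthModel(pl.LightningModule):
    def __init__(self, num_setups: int = 4):
        super().__init__()
        # registering the metrics as submodules lets .to(device) / the Trainer's
        # device placement move their internal states along with the model
        self.train_accs = nn.ModuleList(
            [pl.metrics.Accuracy() for _ in range(num_setups)]
        )
```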
activate lr_scheduler after epoch 10
[ "question" ]
Is there any way to activate lr_scheduler after epoch 10? { 'scheduler': lr_scheduler, # The LR scheduler instance (required) 'interval': 'epoch', # The unit of the scheduler's step size 'frequency': 1, # The frequency of the scheduler 'reduce_on_plateau': False, # For ReduceLROnPlateau scheduler 'monitor': 'val_loss', # Metric for ReduceLROnPlateau to monitor 'strict': True, # Whether to crash the training if `monitor` is not found 'name': None, # Custom name for LearningRateMonitor to use }
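There is no built-in flag for this as far as I know, but a LambdaLR can emulate it: keep the multiplier at 1.0 for the first 10 epochs and only then start decaying. A sketch, with the 0.95 decay factor chosen arbitrarily:

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = LambdaLR(
        optimizer,
        # multiplier on the base lr: constant until epoch 10, then exponential decay
        lr_lambda=lambda epoch: 1.0 if epoch < 10 else 0.95 ** (epoch - 10),
    )
    return [optimizer], [{'scheduler': scheduler, 'interval': 'epoch', 'frequency': 1}]
```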
build-conda CI failed
[ "bug", "help wanted", "won't fix", "ci", "priority: 1" ]
πŸ› Bug build-conda CI failed almost every time. To Reproduce Run CI. Example1: current master HEAD Example2: current release/1.2-dev HEAD Expected behavior Build successfully. Environment Both master release/1.2-dev Additional context It is long-lasting error which prevent efficient CI usage (now we routinely ignore CI result). At least, we can collect information about this error here for all (future) contributors.
Understanding accumulate_grad_batches parameter?
[ "question" ]
I am very new to PL. As far as I understand accumulate_grad_batches works similar to 'gradient_accumulation_steps' , where the main task is to increase the effective batch size. But I do not see any change in training epoch step count when increasing the accumulate_grad_batches parameters. Let's say, I have a dataset of 1000 examples and my batch_size is one and I only use a single GPU. So in this case, if I use the value 2 for the accumulate_grad_batches, the number of steps for an epoch should be shown as 500 (logger). But I still see 1000. Is it a bug or PL doesn't divide the number of steps when showing in the log?
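My reading of the behaviour (worth double-checking): accumulation changes how often the optimizer steps, not how many batches are iterated, so the per-epoch batch count in the progress bar stays the same:

```python
dataset_size = 1000
batch_size = 1
accumulate_grad_batches = 2

batches_per_epoch = dataset_size // batch_size                             # 1000 -- what the progress bar counts
optimizer_steps_per_epoch = batches_per_epoch // accumulate_grad_batches   # 500
effective_batch_size = batch_size * accumulate_grad_batches                # 2
```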
Apex with multiple optimizers error "element 0 of tensors does not require grad and does not have grad_fn"
[ "bug", "help wanted", "3rd party" ]
πŸ› Bug File "repro apex.py", line 51, in <module> trainer.fit(model) File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 481, in fit results = self.accelerator_backend.train() File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/accelerators/gpu_accelerator.py", line 67, in train results = self.train_or_test() File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/accelerators/accelerator.py", line 68, in train_or_test results = self.trainer.train() File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 532, in train self.train_loop.run_training_epoch() File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 572, in run_training_epoch batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx) File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 729, in run_training_batch self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure) File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 505, in optimizer_step model_ref.optimizer_step( File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/core/lightning.py", line 1263, in optimizer_step optimizer.step(closure=optimizer_closure) File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/core/optimizer.py", line 278, in step self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs) File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/core/optimizer.py", line 133, in __optimizer_step trainer.precision_connector.backend.optimizer_step(trainer, optimizer, closure) File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/plugins/apex.py", line 138, in optimizer_step closure() File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 719, in train_step_and_backward_closure result = self.training_step_and_backward( File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 827, in training_step_and_backward self.backward(result, optimizer, opt_idx) File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 847, in backward result.closure_loss = self.trainer.accelerator_backend.backward( File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/accelerators/accelerator.py", line 97, in backward closure_loss = self.trainer.precision_connector.backend.backward( File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/plugins/apex.py", line 53, in backward model.backward(closure_loss, optimizer, opt_idx) File "/home/aw18f408/repositories/pytorch-lightning/pytorch_lightning/core/lightning.py", line 1155, in backward loss.backward(*args, **kwargs) File "/home/aw18f408/.conda/envs/lightning/lib/python3.8/site-packages/torch/tensor.py", line 221, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/aw18f408/.conda/envs/lightning/lib/python3.8/site-packages/torch/autograd/__init__.py", line 130, in backward Variable._execution_engine.run_backward( RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn To Reproduce import torch from torch import optim from torch.utils.data import Dataset, DataLoader from pytorch_lightning import LightningModule, Trainer class 
RandomDataset(Dataset): def __init__(self, size, length): self.len = length self.data = torch.randn(length, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len class AMPModel(LightningModule): def __init__(self): super().__init__() self.layer = torch.nn.Linear(32, 2) def forward(self, x): return self.layer(x) def training_step(self, batch, batch_idx, optimizer_idx): output = self(batch) loss = output.mean() return {"loss": loss} def train_dataloader(self): return DataLoader(RandomDataset(32, 64)) def configure_optimizers(self): optimizer1 = torch.optim.Adam(self.parameters(), lr=0.01) optimizer2 = optim.SGD(self.parameters(), lr=0.01) return [optimizer1, optimizer2] if __name__ == "__main__": model = AMPModel() trainer = Trainer( max_epochs=1, precision=16, amp_backend='apex', gpus=1, ) trainer.fit(model) Expected behavior No crash Environment CUDA: - GPU: - GeForce RTX 2080 Ti - GeForce RTX 2080 Ti - available: True - version: 11.0 Packages: - numpy: 1.19.5 - pyTorch_debug: False - pyTorch_version: 1.7.1 - pytorch-lightning: 1.2.0dev - tqdm: 4.56.0 System: - OS: Linux - architecture: - 64bit - ELF - processor: x86_64 - python: 3.8.3 - version: #1 SMP Thu Apr 9 13:49:54 UTC 2020 Additional context discovered in #5507, in the test tests/models/test_amp::test_amp_with_apex
DDPShardedPlugin consolidate_state_dict RuntimeError
[ "bug", "won't fix", "waiting on author", "distributed", "3rd party" ]
πŸ› Bug After an (seemingly arbitrary) number of steps/epochs, DDPShardedPlugin::optimizer_state crashes on its consolidate_state_dict call: Pytorch's distributed broadcast_object_list tries object_tensor = torch.ByteTensor(torch.sum(object_sizes_tensor).item()) RuntimeError: Trying to create tensor with negative dimension -5193452289200645882: [-5193452289200645882] Stacktrace: Traceback (most recent call last): File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 560, in train self.train_loop.run_training_epoch() File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 562, in run_training_epoch self.trainer.run_evaluation() File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 667, in run_evaluation self.evaluation_loop.on_evaluation_end() File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 110, in on_evaluation_end self.trainer.call_hook('on_validation_end', *args, **kwargs) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 924, in call_hook trainer_hook(*args, **kwargs) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-object_sizes_tensorpackages/pytorch_lightning/trainer/callback_hook.py", line 177, in on_validation_end callback.on_validation_end(self, self.get_model()) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 203, in on_validation_end self.save_checkpoint(trainer, pl_module) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 253, in save_checkpoint self._save_last_checkpoint(trainer, pl_module, monitor_candidates) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 567, in _save_last_checkpoint self._save_model(last_filepath, trainer, pl_module) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 361, in _save_model self.save_function(filepath, self.save_weights_only) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/trainer/properties.py", line 257, in save_checkpoint self.checkpoint_connector.save_checkpoint(filepath, weights_only) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py", line 392, in save_checkpoint checkpoint = self.dump_checkpoint(weights_only) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py", line 283, in dump_checkpoint optimizer_state = self.trainer.accelerator_backend.optimizer_state(optimizer) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 206, in optimizer_state return self.ddp_plugin.optimizer_state(optimizer) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/pytorch_lightning/plugins/sharded_plugin.py", line 42, in optimizer_state optimizer.consolidate_state_dict() File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/fairscale/optim/oss.py", line 320, in consolidate_state_dict 
self._broadcast_state_dict() File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/fairscale/optim/oss.py", line 349, in _broadcast_state_dict dist.broadcast_object_list([0], src=global_rank, group=self.group) File "/home/robertsc/.conda/envs/pytorch1.7/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1687, in broadcast_object_list object_tensor = torch.ByteTensor(torch.sum(object_sizes_tensor).item()) RuntimeError: Trying to create tensor with negative dimension -5193452289200645882: [-5193452289200645882] Environment CUDA: GPU: TITAN RTX available: True version: 11.0 Packages: numpy: 1.18.1 pyTorch_debug: False pyTorch_version: 1.8.0.dev20210122 pytorch-lightning: 1.1.4 tqdm: 4.48.2 System: OS: Linux architecture: 64bit ELF processor: python: 3.8.3 version: #1 SMP Debian 4.19.160-2 (2020-11-28) cc @awaelchli @rohitgr7 @akihironitta
Behaviour of accumulate_gradients and multi-gpu
[ "question" ]
Training setup: 2 GPUs on a single machine running in DDP mode. If I use a batch size of 16 and accumulate gradients=2, how does lightning handle this? Possibility 1: GPU1 processes one batch of size 16. GPU2 processes one batch of size 16. average gradients from GPU1 and GPU2 and apply weight update. or Possibility 2 GPU1 processes one batch of size 16. GPU1 processes another batch of size 16 GPU1 averages the gradients of the two batches. GPU2 processes one batch of size 16. GPU2 processes another batch of size 16 GPU2 averages the gradients of the two batches. average the averaged gradients from GPU1 and GPU2 and apply weight update Which of the two ways does lightning handle this under DDP? I am asking because in the first scenario the effective batch size is 32 and in the second scenario the effective batch size is 64.
Regression between Lightning 1.1.3 and 1.1.5
[ "bug", "help wanted", "distributed" ]
πŸ› Bug Posted originally by @okuchaiev: Has anyone observed a model performance degradation when switching from 1.1.3 to 1.1.4 and 1.1.5? On the plot below you can see exactly the same model/hyperparams trained using 1.1.3 (runs named enes3) and 1.1.5 (runs named enes5). You can see that 1.1.3 outperforms 1.1.5 consistently. Please reproduce using the BoringModel Currently not reproduced. cc @ericharper
Update metrics to use Enum
[ "feature", "help wanted", "good first issue" ]
πŸš€ Feature Motivation Update metrics package to use Enum where it makes sense. For example: pytorch-lightning/pytorch_lightning/metrics/classification/helpers.py Lines 79 to 87 in f782230 # Get the case if preds.ndim == 1 and preds_float: case = "binary" elif preds.ndim == 1 and not preds_float: case = "multi-class" elif preds.ndim > 1 and preds_float: case = "multi-label" else: case = "multi-dim multi-class" the case (data type) could be made an enum type class DataType(LightningEnum): BINARY = 'binary' MULTILABEL = 'multi-label' MULTICLASS = 'multi-class' MDMC = 'multi-dim multi-class' Pitch Alternatives Additional context @Borda
Pass all running stages to DataModule.setup
[ "feature", "help wanted", "refactor" ]
πŸš€ Feature Currently. DataModule.setup is only called with stages fit or test. But we have several more: Stages: pytorch-lightning/pytorch_lightning/trainer/states.py Lines 39 to 49 in 5f33728 class RunningStage(LightningEnum): """Type of train phase. >>> # you can match the Enum with string >>> RunningStage.TRAINING == 'train' True """ TRAINING = 'train' EVALUATING = 'eval' TESTING = 'test' TUNING = 'tune' Note that it's a bit tricky because fit is not a RunningStage. It indicates train or eval Motivation Allows having custom logic for each stage Pitch def setup(stage: Optional[str] = None): assert stage in list(RunningStage) ... Additional context We are passing 'test' when predicting as seen in #5579 pytorch-lightning/pytorch_lightning/trainer/trainer.py Line 909 in 9137b16 self.data_connector.attach_datamodule(model or self.get_model(), datamodule, 'test')
ModuleNotFoundError: __path__ attribute not found on 'hydra' while trying to find 'hydra.experimental'
[ "bug", "help wanted", "won't fix", "3rd party" ]
πŸ› Bug AttributeError: module 'hydra' has no attribute 'path' PyTorch Version (e.g., 1.0): 1.7.1 OS (e.g., Linux): linux How you installed PyTorch (conda, pip, source): conda Python version: 3.8 CUDA/cuDNN version: 11.0/8.0 Additional context import pytorch_lightning as pl --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ~/miniconda3/envs/enn/lib/python3.8/importlib/util.py in find_spec(name, package) 95 try: ---> 96 parent_path = parent.__path__ 97 except AttributeError as e: AttributeError: module 'hydra' has no attribute '__path__' The above exception was the direct cause of the following exception: ModuleNotFoundError Traceback (most recent call last) <ipython-input-1-efd8697dde72> in <module> ----> 1 import pytorch_lightning as pl ~/miniconda3/envs/enn/lib/python3.8/site-packages/pytorch_lightning/__init__.py in <module> 60 # We are not importing the rest of the lightning during the build process, as it may not be compiled yet 61 else: ---> 62 from pytorch_lightning import metrics 63 from pytorch_lightning.callbacks import Callback 64 from pytorch_lightning.core import LightningDataModule, LightningModule ~/miniconda3/envs/enn/lib/python3.8/site-packages/pytorch_lightning/metrics/__init__.py in <module> 12 # See the License for the specific language governing permissions and 13 # limitations under the License. ---> 14 from pytorch_lightning.metrics.metric import Metric 15 16 from pytorch_lightning.metrics.classification import ( ~/miniconda3/envs/enn/lib/python3.8/site-packages/pytorch_lightning/metrics/metric.py in <module> 21 from torch import nn 22 ---> 23 from pytorch_lightning.metrics.utils import _flatten, dim_zero_cat, dim_zero_mean, dim_zero_sum 24 from pytorch_lightning.utilities.apply_func import apply_to_collection 25 from pytorch_lightning.utilities.distributed import gather_all_tensors ~/miniconda3/envs/enn/lib/python3.8/site-packages/pytorch_lightning/metrics/utils.py in <module> 16 import torch 17 ---> 18 from pytorch_lightning.utilities import rank_zero_warn 19 20 METRIC_EPS = 1e-6 ~/miniconda3/envs/enn/lib/python3.8/site-packages/pytorch_lightning/utilities/__init__.py in <module> 33 OMEGACONF_AVAILABLE = _module_available("omegaconf") 34 HYDRA_AVAILABLE = _module_available("hydra") ---> 35 HYDRA_EXPERIMENTAL_AVAILABLE = _module_available("hydra.experimental") 36 HOROVOD_AVAILABLE = _module_available("horovod.torch") 37 BOLTS_AVAILABLE = _module_available("pl_bolts") ~/miniconda3/envs/enn/lib/python3.8/site-packages/pytorch_lightning/utilities/package_utils.py in _module_available(module_path) 30 for i in range(len(mods)): 31 module_path = '.'.join(mods[:i + 1]) ---> 32 if importlib.util.find_spec(module_path) is None: 33 return False 34 return True ~/miniconda3/envs/enn/lib/python3.8/importlib/util.py in find_spec(name, package) 96 parent_path = parent.__path__ 97 except AttributeError as e: ---> 98 raise ModuleNotFoundError( 99 f"__path__ attribute not found on {parent_name!r} " 100 f"while trying to find {fullname!r}", name=fullname) from e ModuleNotFoundError: __path__ attribute not found on 'hydra' while trying to find 'hydra.experimental'
Specify Gradient Clipping Norm in Trainer
[ "feature", "help wanted", "won't fix", "design", "priority: 1" ]
πŸš€ Feature Allow specification of the gradient clipping norm_type, which by default is euclidean and fixed. Motivation We are using pytorch lightning to increase training performance in the standalone Federated Learning context (experimental setting). In this context the trained models diverge from their underlying data and get aggregated on the server side which leads to larger gradients in general. To preserve the direction of the gradient, but limit the magnitude per single dimension, we need to apply the inf norm. Pitch Add a parameter gradient_clipping_norm_type: float=2.0 to trainer. Pass the parameter to the _clip_gradients method. Changing the call from _clip_gradients(optimizer, grad_clip_val) to somewhat like _clip_gradients(optimizer, grad_clip_val, grad_clip_norm_type) Additional context The impact is minimal and only effects the following function pytorch-lightning/pytorch_lightning/accelerators/accelerator.py Lines 119 to 128 in 861d699 def clip_gradients(self, optimizer, clip_val=None): # use the trainer's clip val if none passed grad_clip_val = self.trainer.gradient_clip_val if clip_val is not None: grad_clip_val = clip_val grad_clip_val = float(grad_clip_val) if grad_clip_val <= 0: return self._clip_gradients(optimizer, grad_clip_val) Which is calling pytorch-lightning/pytorch_lightning/accelerators/accelerator.py Line 130 in 861d699 def _clip_gradients(self, optimizer: Optimizer, grad_clip_val: Union[float, int], norm_type: float = 2.0):
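Until such an argument exists, a per-model workaround is to clip manually in on_after_backward with the inf norm and leave the Trainer's gradient_clip_val at its default of 0. This sketch assumes full precision (with AMP the gradients may still be scaled at this point), and max_norm=1.0 is arbitrary:

```python
import torch

def on_after_backward(self):
    # preserves the gradient direction while capping each component's magnitude
    torch.nn.utils.clip_grad_norm_(
        self.parameters(), max_norm=1.0, norm_type=float("inf")
    )
```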
multiple processes running all tasks after trainer.fit(accelerator="ddp")
[ "question", "distributed" ]
❓ Questions and Help What is your question? When training with ddp the script calls multiple python scripts to run the training. This causes an issue when I use the same python script to do other stuff after Im done with training. What is best practice here? My only solution so far would be to condition on os.environ["LOCAL_RANK"]=='0' when doing stuff later on. This feels a bit hacky - is there a better way to do this? Code Running the example code with 2 gpus: import os import torch from torch import nn import torch.nn.functional as F from torchvision.datasets import MNIST from torch.utils.data import DataLoader, random_split from torchvision import transforms import pytorch_lightning as pl class LitAutoEncoder(pl.LightningModule): def __init__(self): super().__init__() self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3)) self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28)) def forward(self, x): # in lightning, forward defines the prediction/inference actions embedding = self.encoder(x) return embedding def training_step(self, batch, batch_idx): # training_step defined the train loop. It is independent of forward x, y = batch x = x.view(x.size(0), -1) z = self.encoder(x) x_hat = self.decoder(z) loss = F.mse_loss(x_hat, x) self.log('train_loss', loss) return loss def configure_optimizers(self): optimizer = torch.optim.Adam(self.parameters(), lr=1e-3) return optimizer dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor()) train, val = random_split(dataset, [55000, 5000]) autoencoder = LitAutoEncoder() trainer = pl.Trainer( max_epochs=1, overfit_batches = 0.05, gpus =-1, accelerator="ddp") trainer.fit(autoencoder, DataLoader(train), DataLoader(val)) print(f"This will print twice {os.environ['LOCAL_RANK']}") if os.environ["LOCAL_RANK"] =='0': print(f"This will print once {os.environ['LOCAL_RANK']}") What's your environment? ubuntu pytorch 1.6.0 pytorch-lightning 1.1.6
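A slightly less hacky variant of the same idea: the script still runs once per process under ddp, but after fit() returns you can ask the trainer for its rank instead of reading LOCAL_RANK from the environment. `run_post_processing` is a placeholder for the "other stuff":

```python
trainer.fit(autoencoder, DataLoader(train), DataLoader(val))

# trainer.global_rank is 0 only on the main process
if trainer.global_rank == 0:
    run_post_processing()
```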
Request for additional documentation on learning rate scheduling on `step` instead of `epoch`.
[ "docs" ]
πŸ“š Documentation The current documentation states that returning {'interval': 'step'} in configure_optimizers will alter the learning rate scheduler update interval to step-wise update instead of epoch-wise update. However, as mentioned in #4929, this is not true. I would like to request an update to the documentation, including a warning that the previous recommendation does not actually work.
Loss divided by `accumulate_grad_batches` number
[ "bug", "help wanted", "priority: 0", "waiting on author", "logging" ]
πŸ› Bug After the 1.1.4 with the fix 5417, logging was fixed but my loss was divided by accumulate_grad_batches. Please reproduce using the BoringModel Sorry, there is no BoringModel. I paste my code here To Reproduce def training_step(self, batch, batch_idx, optimizer_idx): with autocast(): outs = self.model(**batch) gen_loss = outs['gen_loss'] dis_loss = outs['dis_loss'] (gen_opt, dis_opt) = self.optimizers() # Manual backward Generator loss self.manual_backward(gen_loss, gen_opt) gen_opt.step() # Manual backward Discriminator loss self.manual_backward(dis_loss, dis_opt) dis_opt.step() # Accumulate losses self.loss_accumulation(gen_loss.cpu().item(), dis_loss.cpu().item()) # Log all losses self.log_metrics() # Logging total loss to the progress bar total_loss = (gen_loss + dis_loss).cpu().item() self.log('total_loss', total_loss, prog_bar=True, logger=False, on_step=True, on_epoch=False) # THIS IS THE BUG I use DDP with manual backward. If I used my environment w pl=1.1.0 or accumulate_grad_batches=1 (version 1.1.6), the loss is around 11 at first: If I used accummulate_grad_batches=3, the loss is divided by 3: Expected behavior Loss should not be divided. I guess 1.1.3 and before, train_loop sums all loss then average. Now it divides by accumulate_grad_batches then sum. Environment PyTorch Version (e.g., 1.0): 1.7.1 OS (e.g., Linux): Ubuntu 18.04 How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): Python version: 3.8 CUDA/cuDNN version: 10.1 GPU models and configuration: Quadro RTX8000 Any other relevant information:
Add a documentation page for "manual vs auto optimization"
[ "won't fix", "docs" ]
Help for adversarial learning with PyTorch Lightning
[ "question" ]
Help for adversarial learning with PyTorch Lightning What is your question? Code The old method for adversarial learning looks like this: fgm = FGM(model) for batch_input, batch_label in data: # normal training loss = model(batch_input, batch_label) loss.backward() # adversarial training fgm.attack() loss_adv = model(batch_input, batch_label) loss_adv.backward() fgm.restore() optimizer.step() model.zero_grad() but I don't know how to do this with PyTorch Lightning. What have you tried? The following seems to work, but I'm not sure whether it is correct or whether there is a better way to achieve it. def backward(self, loss: Tensor, optimizer: Optimizer, optimizer_idx: int, *args, **kwargs) -> None: if not self.hparams.do_adv: super().backward(loss, optimizer, optimizer_idx, *args, **kwargs) def training_step(self, batch, batch_idx): outputs = self(**batch) loss = outputs[0] if self.hparams.do_adv: loss.backward() self.fgm.attack() outputs = self(**batch) adv_loss = outputs[0] self.fgm.restore() adv_loss.backward() return loss What's your environment? OS: Linux Packaging conda Version latest
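Another option that keeps the two backward passes as explicit as in the original loop is manual optimization. The sketch below is not a definitive answer: FGM is assumed to be the same helper class as above, stored as self.fgm in __init__, and the exact manual-optimization API differs a little between Lightning versions:

import pytorch_lightning as pl

class AdversarialModule(pl.LightningModule):
    @property
    def automatic_optimization(self) -> bool:
        return False  # we call backward and step ourselves

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()

        # normal training pass
        outputs = self(**batch)
        loss = outputs[0]
        self.manual_backward(loss, opt)

        # adversarial pass with the perturbed embeddings
        self.fgm.attack()  # assumes self.fgm = FGM(self) was set up in __init__
        adv_outputs = self(**batch)
        adv_loss = adv_outputs[0]
        self.manual_backward(adv_loss, opt)
        self.fgm.restore()

        opt.step()
        return loss

Compared with overriding backward, this keeps the attack/restore bracketing and both backward calls in one place, which is closer to the original training loop.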
TensorBoardLogger doesn't close SummaryWriter on finalize
[ "bug", "help wanted" ]
πŸ› Bug The file handle managed by the SummaryWriter under the attribute _experiment in the TensorBoardLogger is never closed by any cleanup routines. This leaves a dangling file handle that restricts access to the output tfevent files until the parent script exists or the Jupyter kernel is restarted. Please reproduce using the BoringModel https://colab.research.google.com/drive/1QW9KWh7OjePfLuTPx51LhAONoHzFehvF?usp=sharing To Reproduce Use the above colab. Expected behavior TensorBoardLogger leaves no open file handles once it is finalized. Environment CUDA: GPU: Tesla T4 available: True version: 10.1 Packages: numpy: 1.19.5 pyTorch_debug: True pyTorch_version: 1.7.0+cu101 pytorch-lightning: 1.1.6 tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.6.9 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 Additional context
Why do some metrics require `num_classes=1` for binary classification?
[ "question" ]
❓ Why do some metrics require num_classes=1 for binary classification? What is your question? Why do some metrics require the argument num_classes=1 for binary classification (and some don't) to give the correct results? I find it rather unintuitive to calculate Recall/Precision/F1 with the argument num_classes=1 for a binary classification, whereas e.g. ConfusionMatrix requires num_classes=2 in the same situation. Furthermore, using Recall/Precision/F1 with num_classes=2 for a binary classification gives wrong results - so this might also be considered a bug report. It took me quite some time to figure out why the calculated metrics differ from what I calculated by hand from the confusion matrix. Code import torch from pytorch_lightning import metrics # example data preds = [0] * 200 + [1] * 30 + [0] * 10 + [1] * 20 targets = [0] * 200 + [1] * 30 + [1] * 10 + [0] * 20 preds = torch.tensor(preds) targets = torch.tensor(targets) # define method for printing metrics def _print_some_metrics(preds, targets, num_classes): precision = metrics.classification.Precision(num_classes=num_classes) recall = metrics.classification.Recall(num_classes=num_classes) f1 = metrics.classification.F1(num_classes=num_classes) accuracy = metrics.classification.Accuracy() avg_precision = metrics.classification.AveragePrecision( num_classes=1) confusion_matrix = metrics.ConfusionMatrix(num_classes=2) # print results print("Precision:\n{}\n".format(precision(preds, targets))) print("Recall:\n{}\n".format(recall(preds, targets))) print("F1:\n{}\n".format(f1(preds, targets))) print("AVG Precision:\n{}\n".format(avg_precision(preds, targets))) print("Accuracy:\n{}\n".format(accuracy(preds, targets))) print("ConfMat:\n{}\n".format(confusion_matrix(preds, targets))) _print_some_metrics(preds, targets, num_classes=1) _print_some_metrics(preds, targets, num_classes=2) Results in $ _print_some_metrics(preds, targets, num_classes=1) Precision: 0.6000000238418579 Recall: 0.75 F1: 0.6666666865348816 AVG Precision: 0.48846155405044556 Accuracy: 0.8846153616905212 ConfMat: tensor([[200., 20.], [ 10., 30.]]) $ _print_some_metrics(preds, targets, num_classes=2) Precision: 0.8846153616905212 Recall: 0.8846153616905212 F1: 0.8846153616905212 AVG Precision: 0.48846155405044556 Accuracy: 0.8846153616905212 ConfMat: tensor([[200., 20.], [ 10., 30.]]) As one can see, Precision/Recall/F1 give different (wrong) results when setting num_classes=2 in a binary classification. AveragePrecision doesn't even work with the binary use case when setting num_classes=2, whereas ConfusionMatrix doesn't work when setting num_classes=1. I wonder if there is a specific reason why one would set num_classes=1 in a binary classification (where actually 2 classes exist). Wouldn't it be more straightforward to set num_classes=2 for binary classification for all metrics?