TPUs: crash using torch-xla nightly
[ "bug", "help wanted", "won't fix", "accelerator: tpu" ]
πŸ› Bug If I try to use torch-xla nightly with PyTorch Lightning, I see this crash: (torch-xla-nightly) zcain@zcain-pl-verify:~/pytorch-lightning/pl_examples/domain_templates$ python computer_vision_fine_tuning.py Traceback (most recent call last): File "computer_vision_fine_tuning.py", line 55, in <module> import pytorch_lightning as pl File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/pytorch_lightning/__init__.py", line 65, in <module> from pytorch_lightning.trainer import Trainer File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/__init__.py", line 18, in <module> from pytorch_lightning.trainer.trainer import Trainer File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 30, in <module> from pytorch_lightning.loggers import LightningLoggerBase File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/pytorch_lightning/loggers/__init__.py", line 18, in <module> from pytorch_lightning.loggers.tensorboard import TensorBoardLogger File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/pytorch_lightning/loggers/tensorboard.py", line 24, in <module> from torch.utils.tensorboard import SummaryWriter File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/utils/tensorboard/__init__.py", line 8, in <module> from .writer import FileWriter, SummaryWriter # noqa F401 File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py", line 9, in <module> from tensorboard.compat.proto.event_pb2 import SessionLog File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/tensorboard/compat/proto/event_pb2.py", line 17, in <module> from tensorboard.compat.proto import summary_pb2 as tensorboard_dot_compat_dot_proto_dot_summary__pb2 File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/tensorboard/compat/proto/summary_pb2.py", line 17, in <module> from tensorboard.compat.proto import tensor_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__pb2 File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/tensorboard/compat/proto/tensor_pb2.py", line 16, in <module> from tensorboard.compat.proto import resource_handle_pb2 as tensorboard_dot_compat_dot_proto_dot_resource__handle__pb2 File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/tensorboard/compat/proto/resource_handle_pb2.py", line 150, in <module> '__module__' : 'tensorboard.compat.proto.resource_handle_pb2' SystemError: google/protobuf/pyext/descriptor.cc:354: bad argument to internal function Repro create VM: gcloud compute instances create zcain-vm --zone=us-central1-a --machine-type=e2-highmem-16 --image-family=torch-xla --image-project=ml-images --boot-disk-size=300GB --scopes=https://www.googleapis.com/auth/cloud-platform create TPU: gcloud compute tpus create zcain-tpu --zone=us-central1-a --network=default --version=pytorch-nightly --accelerator-type=v3-8 SSH into VM: (VM) git clone https://github.com/PyTorchLightning/pytorch-lightning.git (VM) conda activate torch-xla-nightly (VM) export TPU_IP_ADDRESS=<TPU's IP address> (VM) export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470" (VM) cd pytorch-lightning (VM) pip install . 
(VM) pip install -r requirements/test.txt (VM) cd pl_examples/domain_templates/ (VM) vim computer_vision_fine_tuning.py (VM) <replace GPUs arg with tpu_cores=8,> (VM) python computer_vision_fine_tuning.py Note: I get the same crash when using pip install pytorch-lightning instead of installing from source If I use torch-xla-1.7, training works fine. Environment * CUDA: - GPU: - available: False - version: None * Packages: - numpy: 1.19.2 - pyTorch_debug: False - pyTorch_version: 1.9.0a0+8c185e6 - pytorch-lightning: 1.1.8 - tqdm: 4.56.0 * System: - OS: Linux - architecture: - 64bit - - processor: - python: 3.6.10 - version: #1 SMP Debian 4.9.246-2 (2020-12-17) Additional context The crash seems related to Tensorboard. I wonder if the issue is related to Tensorboard versioning, however pip install pytorch-lightning doesn't seem to install any new tensorboard version compared to what is already installed in this public torch-xla conda env: Installing collected packages: PyYAML, fsspec, pytorch-lightning, yarl Attempting uninstall: PyYAML Found existing installation: PyYAML 5.4.1 Uninstalling PyYAML-5.4.1: Successfully uninstalled PyYAML-5.4.1 Attempting uninstall: fsspec Found existing installation: fsspec 0.7.4 Uninstalling fsspec-0.7.4: Successfully uninstalled fsspec-0.7.4 Attempting uninstall: yarl Found existing installation: yarl 1.6.3 Uninstalling yarl-1.6.3: Successfully uninstalled yarl-1.6.3 Successfully installed PyYAML-5.3.1 fsspec-0.8.5 pytorch-lightning-1.1.8 yarl-1.5.1 Let me know if you can think of anything else I should try
Fix docs typo in starter files
[ "docs" ]
πŸ“š Documentation For typos and doc fixes, please go ahead and: Create an issue. Fix the typo. Submit a PR. Thanks!
Fix typo in starter files
[ "docs" ]
πŸ“š Documentation For typos and doc fixes, please go ahead and: Create an issue. Fix the typo. Submit a PR. Thanks!
on_*_batch_transfer hooks should include a dataloader_index parameter
[ "feature", "help wanted" ]
πŸš€ Feature See title Motivation Users might want to do different logic based on which dataloader produced the batch in particular, as different dataloaders might contain different batch structures Pitch def on_*_batch_transfer(batch, dataloader_idx) Additional context docs: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#on-after-batch-transfer cc: @SeanNaren @rohitgr7 @williamFalcon
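For illustration, a minimal sketch of what the pitched hook could look like when overridden on a LightningModule (the dataloader_idx argument and the branching logic are hypothetical; this is the proposed API, not the current one):

import pytorch_lightning as pl

class MyModule(pl.LightningModule):
    def on_after_batch_transfer(self, batch, dataloader_idx):
        # Different dataloaders can yield different batch structures,
        # so branch on the index of the dataloader that produced this batch.
        if dataloader_idx == 0:
            x, y = batch              # e.g. a supervised loader: (inputs, targets)
            return x.float(), y
        return batch.float()          # e.g. an unlabeled loader: a raw tensor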
Allow arbitrary val check intervals when using max_steps
[ "feature", "help wanted" ]
🚀 Feature Currently, when using the max_epochs setting we can set val_check_interval to any arbitrary value we like. We cannot do this if max_steps is set: it must be less than one epoch. Motivation Constraining this value to be specified as a number of steps that must also be less than an epoch makes it difficult to configure ahead of runtime, because an epoch's length depends on many different factors. I usually use max_steps to easily configure schedulers and such that are stepped on a per-batch basis; it allows values to be set before runtime in a portable way. Pitch Allow this value to be any step count, independent of epochs. Alternatives Allow it to be defined as a multiplier, as when max_epochs is used.
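To make the request concrete, a sketch of the two configurations being contrasted above (step counts are arbitrary examples; the second call is what this issue asks to support):

import pytorch_lightning as pl

# described above as working today: epoch-based training with an integer val_check_interval
trainer = pl.Trainer(max_epochs=10, val_check_interval=2000)

# requested: the same arbitrary step interval when max_steps drives training,
# even if 2000 steps is longer than one epoch
trainer = pl.Trainer(max_steps=50000, val_check_interval=2000)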
Does auto_find_lr support ddp mode?
[ "bug", "help wanted" ]
πŸ› Bug Please reproduce using the BoringModel To Reproduce Use following BoringModel and post here Expected behavior Environment Note: Bugs with code are solved faster ! Colab Notebook should be made public ! IDE: Please, use our python bug_report_model.py template. Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually). You can get the script and run it with: wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py # For security purposes, please check the contents of collect_env_details.py before running it. python collect_env_details.py PyTorch Version (e.g., 1.0): OS (e.g., Linux): How you installed PyTorch (conda, pip, source): Build command you used (if compiling from source): Python version: CUDA/cuDNN version: GPU models and configuration: Any other relevant information: Additional context
Mixed precision not working with v 1.2
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug After updating to 1.2 from 1.1.1, automatic mixed precision stopped working. Everything's float32 and getting CUDA OOM when I shouldn't get it (with float16 tensors). Worked fine on 1.1.1. Here's my Trainer args (maybe there's a conflicting combo of args or something): Trainer(logger=logger, callbacks=[checkpoint_callback, lr_monitor], default_root_dir=None, gradient_clip_val=args.gradient_clip_val, gpus=args.gpus, auto_select_gpus=False, log_gpu_memory=None, progress_bar_refresh_rate=1, check_val_every_n_epoch=args.check_val_every_n_epoch, overfit_batches=0., fast_dev_run=False, accumulate_grad_batches=1, max_epochs=args.max_epochs, limit_train_batches=vars(args).get('limit_train_batches', 1.), val_check_interval=args.val_check_interval, limit_val_batches=args.limit_val_batches, accelerator='ddp', sync_batchnorm=True, precision=args.precision, weights_summary='top', weights_save_path=None, num_sanity_val_steps=args.num_sanity_val_steps, resume_from_checkpoint=None, benchmark=False, deterministic=False, reload_dataloaders_every_epoch=True, terminate_on_nan=False, prepare_data_per_node=True, amp_backend='native', profiler=args.profiler) Environment GCP VM with V100 GPU(s) NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 Don't really have the time to go deeper than this, but just rolled back to 1.1.1 and everything's fine.
Encapsulate logic in DistributedType
[ "help wanted", "good first issue", "refactor" ]
πŸš€ Feature Any logic to compare different DistributedTypes should be encapsulated by the enum itself. Motivation #5743 (comment) #5743 (comment) #5970 (comment) Additional context pytorch-lightning/pytorch_lightning/utilities/enums.py Lines 51 to 69 in 0b27147 class DistributedType(LightningEnum): """ Define type of ditributed computing. >>> # you can math the type with string >>> DistributedType.DDP == 'ddp' True >>> # which is case invariant >>> DistributedType.DDP2 in ('ddp2', ) True """ DP = 'dp' DDP = 'ddp' DDP2 = 'ddp2' DDP_SPAWN = 'ddp_spawn' DEEPSPEED = 'deepspeed' HOROVOD = 'horovod' DDP_SHARDED = 'ddp_sharded' DDP_SHARDED_SPAWN = 'ddp_sharded_spawn' RPC_SEQUENTIAL_PLUGIN = 'rpc_sequential' cc: @awaelchli
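As a sketch of the kind of encapsulation being proposed (the helper name is hypothetical, not existing code), grouping checks would live on the enum itself rather than being re-implemented at each call site:

from enum import Enum

class DistributedType(str, Enum):
    DP = 'dp'
    DDP = 'ddp'
    DDP2 = 'ddp2'
    DDP_SPAWN = 'ddp_spawn'

    @property
    def is_ddp_based(self) -> bool:
        # call sites then read `if distributed_type.is_ddp_based:` instead of
        # comparing against hand-written tuples of values
        return self in (DistributedType.DDP, DistributedType.DDP2, DistributedType.DDP_SPAWN)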
Parameterized freeze/unfreeze functions for finetuning
[ "feature", "help wanted", "won't fix" ]
🚀 Feature Add an optional parameter to the freeze and unfreeze methods to define which layers should be frozen/unfrozen. Motivation Reduce the amount of code needed to freeze/unfreeze a number of layers. Pitch When finetuning networks we often want to freeze/unfreeze a certain number of layers for training. While it's nice that Lightning comes with quick access to freeze/unfreeze them all, wouldn't it be cool if we could do that for only a fraction of the parameters? By default the functions don't take additional parameters. We could add an optional one, preferably accepting different data types. Something like: float (0.0 - 1.0) - freezes/unfreezes a fraction of parameters int (0 - len(params)) - freezes/unfreezes the first n (last n when negative) parameters list - freezes/unfreezes a list of parameters What do you think?
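A rough sketch of the pitched behaviour as a standalone helper (hypothetical; Lightning's current freeze()/unfreeze() take no arguments):

from typing import List, Optional, Union
import torch.nn as nn

def freeze(module: nn.Module, which: Optional[Union[float, int, List[str]]] = None) -> None:
    params = list(module.named_parameters())
    if which is None:                      # current behaviour: freeze everything
        selected = params
    elif isinstance(which, float):         # fraction of parameters, from the front
        selected = params[: int(len(params) * which)]
    elif isinstance(which, int):           # first n (or last n when negative)
        selected = params[:which] if which >= 0 else params[which:]
    else:                                  # explicit list of parameter names
        selected = [(n, p) for n, p in params if n in which]
    for _, p in selected:
        p.requires_grad = False

An unfreeze counterpart would set requires_grad = True symmetrically.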
DDP does not work well with `torch.no_grad()` in 1.2
[ "bug", "help wanted", "distributed", "priority: 1" ]
πŸ› Bug My code that uses torch.no_grad stopped working after updating to 1.2 (I'm doing knowledge distillation and using it to protect the teacher model computation). Now it throws RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one To Reproduce Here is my modification of Boring Model. Basically, I've changed model definition and train_step to provide guard part of the computational graph with torch.no_grad. Also, I've change the final loss, it's now a sum of the original one and one computed as a similarity between mutable and guarded layers. As you can see, it throws RuntimeError at the first training iteration. Expected behavior Training runs without any RuntimeException Environment CUDA: GPU: Tesla K80 available: True version: 10.1 Packages: numpy: 1.19.5 pyTorch_debug: True pyTorch_version: 1.7.0+cu101 pytorch-lightning: 1.2.0 tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.6.9 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
API consistency: "val" vs "validation"
[ "feature", "good first issue", "discussion", "design", "priority: 1" ]
Motivation As a new user learning PyTorch Lightning, I was surprised at the naming inconsistency between the validation_step() and val_dataloader() hooks. Searching through the codebase, it seems that most methods or getters/setters favor "validation"... validation_step_end validation_step validation_epoch_end on_validation_start on_validation_model_train on_validation_model_eval on_validation_epoch_start on_validation_epoch_end on_validation_end on_validation_batch_start on_validation_batch_end init_validation_tqdm enable_validation disable_validation _on_validation_start_log _on_validation_epoch_start_log _on_validation_epoch_end_log _on_validation_end_log _on_validation_batch_start_log _on_validation_batch_end_log ...while a handful use "val" instead: val_transforms val_step_context val_dataloader val_batch_idx total_val_batches should_check_val_fx reset_val_dataloader reset_train_val_dataloaders num_seen_val_check_batches I wasn't able to see any obvious pattern in which methods got which naming convention, and think having a consistent convention throughout would make for a (slightly) cleaner and easier-to-learn library. Related: #634 Pitch Standardize all validation-related methods (and properties, if any exist) to use the same naming convention; either "val" (for consistency with e.g. torchvision) or "validation" (for consistency with the bulk of the current Lightning API). Renamed methods would be marked as deprecated and removed in the next major version bump.
apex amp not working in 1.2.0
[ "bug", "help wanted" ]
πŸ› Bug Got RuntimeError: Invoked 'with amp.scale_loss, but internal Amp state has not been initialized. model, optimizer = amp.initialize(model, optimizer, opt_level=...) must be called before with amp.scale_loss.` When training with apex amp No problem in 1.1.8. Also got amp.load_state_dict when resuming from checkpoints trained with earlier version. Please reproduce using the BoringModel https://colab.research.google.com/drive/1OKQgh1gDUp78HMXX-mmrkxktVC2qV5oE?usp=sharing To Reproduce Expected behavior Environment Note: Bugs with code are solved faster ! Colab Notebook should be made public ! IDE: Please, use our python bug_report_model.py template. Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually). You can get the script and run it with: wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py # For security purposes, please check the contents of collect_env_details.py before running it. python collect_env_details.py PyTorch Version (e.g., 1.0): OS (e.g., Linux): How you installed PyTorch (conda, pip, source): Build command you used (if compiling from source): Python version: CUDA/cuDNN version: GPU models and configuration: Any other relevant information: Additional context
on_{validation,test}_epoch_end functions should have an outputs parameter
[ "duplicate", "feature", "help wanted", "design" ]
πŸš€ Feature pytorch-lightning/pytorch_lightning/core/hooks.py Line 255 in 3b0e4e0 def on_validation_epoch_end(self) -> None: pytorch-lightning/pytorch_lightning/core/hooks.py Line 267 in 3b0e4e0 def on_test_epoch_end(self) -> None: Should have an outputs parameter as pytorch-lightning/pytorch_lightning/core/hooks.py Line 243 in 3b0e4e0 def on_train_epoch_end(self, outputs) -> None:
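Spelled out, the requested change would make the validation/test hooks mirror the training hook (sketch of the signatures only):

class LightningModule:  # illustrative excerpt, not the real class body
    def on_train_epoch_end(self, outputs) -> None:       # already receives outputs
        ...

    def on_validation_epoch_end(self, outputs) -> None:  # proposed: add outputs
        ...

    def on_test_epoch_end(self, outputs) -> None:        # proposed: add outputs
        ...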
pl+wandb: Hanging during "cleaning up ddp environment" when using DDPSpawnPlugin + WandbLogger
[ "bug", "help wanted", "won't fix", "distributed", "logger", "priority: 1" ]
πŸ› Bug When using an accelerator that bascially uses a "spawn" start method for multiprocessing (rather than Linux default "fork"), any program that actually spawns a new worker (num_processes>1) seems to hang upon cleanup. Concretely, I've only seen this when: Accelerator is either ddp_cpu or ddp_spawn; AND WandbLogger is instantiated (and I guess used for training) Please reproduce using the BoringModel My model (ConstantMultiply) is more boring ;) To Reproduce Clone this repo (it's small), and then run the example: https://github.com/EricCousineau-TRI/repro/tree/cae9aa31f07f90c4cfb3b908fe84107e102ab06f/python/wandb_pytorch_lightning_combo git clone https://github.com/EricCousineau-TRI/repro cd repro git checkout cae9aa31f07f90c4cfb3b908fe84107e102ab06f cd python/wandb_pytorch_lightning_combo ./isolate.sh ./setup.sh ./train_wandb_pl_main.py Ignore the stuff about sweeps for now (I can make less noisy dir if you want). Expected behavior It doesn't freeze? Environment PyTorch Version (e.g., 1.0): 1.7.1 OS (e.g., Linux): Linux, Ubuntu, 18.04 How you installed PyTorch: pip Build command you used (if compiling from source): N/A Python version: 3.6.9, 3.8.0 CUDA/cuDNN version: N/A GPU models and configuration: N/A Any other relevant information: 😒 Additional context It would be nice to have a "fork" version of DDP for CPU, so that way we can test things more easily for the suggested mode of DDP for GPU (per the PL docs, at least as of a couple of days ago). If I use ddp, that means that colleagues who have only 1 GPU cannot test it, which hurts development, b/c the intended abstractions of pl breakdown 😿 (When trying with num_processes=2, gpus=[0], it just reduces the number of workers, so then we don't test those branches...) The interactions between wandb and pl are a bit non-trivial, esp. if we want to try things like sweeps. We can hack around it, but jeepers it feels like flailing when doing it on the full setup.
ModelCheckpoint is not saving top k models
[ "help wanted", "question", "docs" ]
πŸ› Bug ModelCheckpoint is not correctly monitoring metric values. To Reproduce https://colab.research.google.com/drive/1onBmED7dngP_VwFxcFBMsnQi82KbizSk?usp=sharing Expected behavior ModelCheckpoint should save top k models based on x metric, but it currently displays Epoch XXX, step XXX: x was not in top 2 for every epoch. Environment CUDA: GPU: Tesla T4 available: True version: 10.1 Packages: numpy: 1.19.5 pyTorch_debug: True pyTorch_version: 1.7.0+cu101 pytorch-lightning: 1.2.0 tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.6.9 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 Additional context The documentation doesn't mention how one should set the metric to be used in ModelCheckpoint. Tried to use both x or loss value, but ModelCheckpoint shows the same message for both cases. Also, the message should be more clear, saying that ModelCheckpoint couldn't find chosen value to monitor instead of saying that it was not in top k, since it displays the same message if I choose to monitor some value that doesn't exist.
DDPCPU not working with DDPPlugin(find_unused_parameters=True) in v1.2.0
[ "bug", "help wanted", "priority: 0", "distributed" ]
πŸ› Bug when using accelerator="ddp_cpu" together with plugins=[DDPPlugin(find_unused_parameters=True)] to create a trainer, the trainer will cause the program tries to re-run its self (and recreate the trainer) and finally then failed at checking gpu devices. Please reproduce using the BoringModel trainer = Trainer( max_epochs=1, gpus=0, accelerator="ddp_cpu", num_processes=4, plugins=[DDPPlugin(find_unused_parameters=True)], ) To Reproduce Expected behavior Environment PyTorch Version (e.g., 1.0): 1.7.1 OS (e.g., Linux): osX How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): Python version: 3.8.7 CUDA/cuDNN version: No GPU models and configuration: gpus=None Any other relevant information: Additional context
@auto_move_data unexpectedly uses transfer_batch_to_device from DataModule
[ "bug", "help wanted", "docs", "discussion", "design" ]
πŸ› Bug I have a LightningDataModule that produces a custom batch object, and I have implemented transfer_batch_to_device to move this data to the GPU. This works. I have a separate infer method (on my LightningModule) which is invoked with a Tensor, and I wanted to use @auto_move_data to move this Tensor to the GPU. However, this unexpectedly uses my data module's transfer_batch_to_device method, and assumed that the input to infer will be of my custom batch type. To Reproduce https://colab.research.google.com/drive/18f8886hG9AVSkYdVi96CIcZjrl9zX6Np#scrollTo=AAtq1hwSmjKe Expected behavior I think this API is a bit confusing/surprising. I was expecting @auto_move_data to behave in the "default" mode since I passed in a Tensor, and I was not expecting it to use my data module's transfer_batch_to_device method - especially since the infer method and the @auto_move_data decorator were on the LightningModule, not the data module. IMHO, a less confusing API would be to expect a to() method on the custom batch object, and make transfer_batch_to_device just call that to() method. Then it would work uniformly for tensors and custom batch objects. Another thing that makes this problematic: the signature of transfer_batch_to_device is supposed to be def transfer_batch_to_device(self, batch, device): https://pytorch-lightning.readthedocs.io/en/stable/extensions/datamodules.html?highlight=transfer_batch_to_device#transfer-batch-to-device But in @auto_move_data, it is getting called with a tuple of (args, kwargs), and no device AFAICT. Environment Note: Bugs with code are solved faster ! Colab Notebook should be made public ! CUDA: GPU: Tesla T4 available: True version: 10.1 Packages: numpy: 1.19.5 pyTorch_debug: True pyTorch_version: 1.7.0+cu101 pytorch-lightning: 1.2.0 tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.6.9 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 Additional context
PyTorchProfiler crashes when emit_nvtx=True
[ "bug", "help wanted", "priority: 0", "callback" ]
πŸ› Bug When training with PyTorchProfiler(emit_nvtx=True), the training stops with the following error : AttributeError: 'emit_nvtx' object has no attribute 'function_events' Please reproduce using the BoringModel -> https://colab.research.google.com/drive/1cqMxMgDVgltluaYZAkn9d2srSDmOu-1p?usp=sharing To Reproduce Use following BoringModel and post here Expected behavior Environment Additional context Additional docs issue : It says here (https://pytorch-lightning.readthedocs.io/en/1.2.0/advanced/profiler.html) that you have to use nvprof to collect the profiler traces, but it is no longer supported for devices with compute capability 8.0 and higher. I think that an exemple using nsight-compute would be great ( I can't test it right now because of the nvtx issue )
Load models give different results from original
[ "question" ]
❓ Questions and Help What is your question? What is the right way to retrieve a trained model and use it after load class NER_Model(pl.LightningModule): def __init__(self, hyperparams, model_parameters, dataset_infos, extra_infos): super(NER_Model, self).__init__() # ---------- hyperparams self.learning_rate = hyperparams["learning_rate"] self.train_batch_size = hyperparams["train_batch_size"] self.eval_batch_size = hyperparams["eval_batch_size"] self.eps = hyperparams["eps"] self.seed = hyperparams["seed"] self.max_epochs = hyperparams["max_epochs"] self.MAX_LEN = hyperparams["MAX_LEN"] # ---------- model_parameters self.tokenizer_class = model_parameters["tokenizer_class"] self.tokenizer = self.tokenizer_class.tokenizer self.model_name = model_parameters["model_name"] self.output_attentions = model_parameters["output_attentions"] self.output_hidden_states = model_parameters["output_hidden_states"] # ---------- dataset_infos self.all_data = dataset_infos["all_data"] self.tags_infos_dict = dataset_infos["tags_infos_dict"] self.CustomDataset = dataset_infos["CustomDataset"] # ---------- extra_infos self.overfit = extra_infos["overfit"] self.sampler = extra_infos["sampler"] # ---------- other_infos self.predict_proba = torch.nn.Softmax(dim=1) self.step = "Experiment" # ---------- fixing seeds self.seed_everything() # # ---------- Model self.model = AutoModelForTokenClassification.from_pretrained( self.model_name, num_labels=len(self.tags_infos_dict["tag2idx"]), output_attentions = self.output_attentions, output_hidden_states = self.output_hidden_states ) #Getting Dataset self.train_dataset = self.CustomDataset( self.all_data['X_train'], self.all_data['y_train'], self.tokenizer_class,self.tags_infos_dict,self.MAX_LEN,self.step ) self.valid_dataset = self.CustomDataset( self.all_data['X_valid'], self.all_data['y_valid'], self.tokenizer_class,self.tags_infos_dict,self.MAX_LEN,self.step ) self.test_dataset = self.CustomDataset( self.all_data['X_test'], self.all_data['y_test'], self.tokenizer_class,self.tags_infos_dict,self.MAX_LEN,self.step ) # ---------- Creating Dataframe for saving accuracy and loss self.df_performance_train_batch = pd.DataFrame( columns=["train_batch_loss", "train_batch_acc"] ) self.df_performance_train_epoch = pd.DataFrame( columns=["train_epoch_loss", "train_epoch_acc"] ) self.df_performance_valid_batch = pd.DataFrame( columns=["valid_batch_loss", "valid_batch_acc"] ) self.df_performance_valid_epoch = pd.DataFrame( columns=["valid_epoch_loss", "valid_epoch_acc"] ) # ---------- Creating comparision dataframe with expected and predicted results # self.response_columns = [] # self.df_valid = pd.DataFrame(columns=self.response_columns) # self.df_test = pd.DataFrame(columns=self.response_columns) # self.valid_true_labels = [] # self.valid_pred_labels = [] self.test_true_tags = [] self.test_pred_tags = [] def predict(self, X: str): self.step = "Deployment" self.test_pred_tags = [] batch = self.tokenizer.encode_plus(X, return_tensors="pt") batch["attention_masks"] = torch.ones_like(batch["input_ids"]) batch = dict((key, input.to( self.device)) for key, input in batch.items()) return self.test_step(batch, None) def retrieve_tags_on_prediction(self,predictions,inputs): predictions = predictions.detach().cpu().numpy() inputs =inputs.to('cpu').numpy() lables_list = [] for input,prediction in zip(inputs,predictions): labels = [] tokens = self.tokenizer.convert_ids_to_tokens(input) for token, pred_idx in zip(tokens, prediction): if not token.startswith("##"): 
labels.append(self.tags_infos_dict['tag_values'][pred_idx]) lables_list.append(labels) return lables_list def forward(self,input_ids,attention_mask=None,labels=None): if self.step == "Experiment": # loss, logits outputs = self.model(input_ids, token_type_ids=None, attention_mask=attention_mask, labels=labels) if self.step == "Deployment": # loss, logits outputs = self.model(input_ids) return outputs def training_step(self, batch, batch_nb): # batch inputs = batch['input_ids'] mask = batch['attention_masks'] targets = batch['tags'] # fwd outputs = self.forward(inputs, mask, targets) loss = outputs[0] logits = outputs[1] # acc predictions = torch.argmax(logits, dim=2) pred_tags, true_tags = self.retrieve_tags(predictions,targets) acc = torch.tensor(accuracy_score(true_tags, pred_tags)) # What to log tensorboard_logs = {"loss": loss, "acc": acc} self.df_performance_train_batch = self.df_performance_train_batch.append( pd.Series( [loss.item(), acc.item()], index=self.df_performance_train_batch.columns ), ignore_index=True, ) return { "loss": loss, "train_acc_batch": acc, "train_loss_batch": loss, "log": tensorboard_logs, } def training_epoch_end(self, outputs): if not outputs: return {} temp_avg_loss_batch = [x["train_loss_batch"] for x in outputs] temp_avg_acc_batch = [x["train_acc_batch"] for x in outputs] avg_train_loss = torch.stack(temp_avg_loss_batch).mean() avg_train_acc = torch.stack(temp_avg_acc_batch).mean() self.df_performance_train_epoch = self.df_performance_train_epoch.append( pd.Series( [avg_train_loss.item(), avg_train_acc.item()], index=self.df_performance_train_epoch.columns, ), ignore_index=True, ) tensorboard_logs = { "avg_train_acc": avg_train_acc, "avg_train_loss": avg_train_loss, } return {"avg_train_acc": avg_train_acc, "log": tensorboard_logs} def validation_step(self, batch, batch_nb): # batch inputs = batch['input_ids'] mask = batch['attention_masks'] targets = batch['tags'] # fwd outputs = self.forward(inputs, mask, targets) loss = outputs[0] logits = outputs[1] # acc predictions = torch.argmax(logits, dim=2) pred_tags, true_tags = self.retrieve_tags(predictions,targets) # self.valid_true_labels.extend(true_tags) # self.valid_pred_labels.extend(pred_tags) acc = torch.tensor(accuracy_score(true_tags, pred_tags)) self.df_performance_valid_batch = self.df_performance_valid_batch.append( pd.Series( [loss.item(), acc.item()], index=self.df_performance_valid_batch.columns ), ignore_index=True, ) return {"valid_acc_batch": acc, "valid_loss_batch": loss} def validation_epoch_end(self, outputs): if not outputs: return {} temp_avg_loss_batch = [x["valid_loss_batch"] for x in outputs] temp_avg_acc_batch = [x["valid_acc_batch"] for x in outputs] avg_valid_loss = torch.stack(temp_avg_loss_batch).mean() avg_valid_acc = torch.stack(temp_avg_acc_batch).mean() self.df_performance_valid_epoch = self.df_performance_valid_epoch.append( pd.Series( [avg_valid_loss.item(), avg_valid_acc.item()], index=self.df_performance_valid_epoch.columns, ), ignore_index=True, ) tensorboard_logs = { "avg_valid_acc": avg_valid_acc, "avg_valid_loss": avg_valid_loss, } return {"avg_valid_acc": avg_valid_acc, "log": tensorboard_logs} def test_step(self, batch, batch_nb): if self.step =="Experiment": # batch inputs = batch['input_ids'] mask = batch['attention_masks'] targets = batch['tags'] # fwd outputs = self.forward(inputs, mask, targets) loss = outputs[0] logits = outputs[1] # acc predictions = torch.argmax(logits, dim=2) pred_tags, true_tags = self.retrieve_tags(predictions,targets) 
self.test_true_tags.extend(true_tags) self.test_pred_tags.extend(pred_tags) acc = torch.tensor(accuracy_score(true_tags, pred_tags)) final_return = {"test_acc_batch": acc} if self.step =="Deployment": # batch inputs = batch['input_ids'] mask = batch['attention_masks'] # fwd outputs = self.forward(inputs) logits = outputs[0] predictions = torch.argmax(logits, dim=2) pred_tags = self.retrieve_tags_on_prediction(predictions,inputs) final_return = pred_tags return final_return def test_epoch_end(self, outputs): if not outputs: return {} if self.step == "Experiment": avg_test_acc = torch.stack([x["test_acc_batch"] for x in outputs]).mean() tensorboard_logs = {"avg_test_acc": avg_test_acc} retorno = {"avg_test_acc": avg_test_acc, "log": tensorboard_logs} if self.step == "Deployment": retorno = outputs return retorno def configure_optimizers(self): optimizer = AdamW( [p for p in self.parameters() if p.requires_grad], lr = self.learning_rate, eps = self.eps ) return optimizer def decode_tags(self,codes:np.ndarray): targets = [] for elem in codes: targets.append(self.tags_infos_dict["idx2tag"][elem]) return np.array(targets) def retrieve_tags(self,predictions,targets): targets = targets.detach().cpu().numpy() predictions = predictions.detach().cpu().numpy() pred_tags = [self.tags_infos_dict['tag_values'][p_i] for p, l in zip(predictions, targets) for p_i, l_i in zip(p, l) if self.tags_infos_dict['tag_values'][l_i] != self.tokenizer.pad_token] true_tags = [self.tags_infos_dict['tag_values'][l_i] for l in targets for l_i in l if self.tags_infos_dict['tag_values'][l_i] != self.tokenizer.pad_token] return pred_tags, true_tags def seed_everything(self): random.seed(self.seed) os.environ['PYTHONHASHSEED'] = str(self.seed) np.random.seed(self.seed) torch.manual_seed(self.seed) torch.cuda.manual_seed(self.seed) torch.cuda.manual_seed_all(self.seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False def gpu_mem_restore(func): @functools.wraps(func) def wrapper(*args, **kwargs): try: return func(*args, **kwargs) except: type, val, tb = sys.exc_info() traceback.clear_frames(tb) raise type(val).with_traceback(tb) from None return wrapper @gpu_mem_restore def train_dataloader(self): train_sampler = RandomSampler(self.train_dataset) train_dataloader = DataLoader( self.train_dataset, sampler=train_sampler, batch_size=self.train_batch_size, num_workers=cpu_count(), ) return train_dataloader @gpu_mem_restore def val_dataloader(self): valid_sampler = SequentialSampler(self.valid_dataset) val_dataloader = DataLoader( self.valid_dataset, sampler=valid_sampler, batch_size=self.eval_batch_size, num_workers=cpu_count(), ) return val_dataloader @gpu_mem_restore def test_dataloader(self): test_sampler = SequentialSampler(self.test_dataset) test_dataloader = DataLoader( self.test_dataset, sampler=test_sampler, batch_size=self.eval_batch_size, num_workers=cpu_count(), ) return test_dataloader #### Training and testing a prediction (Wright Result!) 
trainer = pl.Trainer( gpus=num_gpus, max_epochs=max_epochs, check_val_every_n_epoch=check_val_every_n_epoch, profiler=profiler, checkpoint_callback=checkpoint_callback, progress_bar_refresh_rate=progress_bar_refresh_rate, resume_from_checkpoint=resume_from_checkpoint, gradient_clip_val=gradient_clip_val, ) model = NER_Model( hyperparams=hyperparams, model_parameters=model_parameters, dataset_infos=dataset_infos, extra_infos=extra_infos, ) trainer.fit(model) trainer.test(model) #### Performing inference after training model.predict("I love Brazilian beaches") Result: [['O', 'O', 'O', 'O', 'O', 'Tag1', 'Tag2, 'O', 'O']] #### Saving **State Dict** and **Checkpoint** model_path_dict = "model.pt" model_path_checkpoint = "model.ckpt" torch.save(model.state_dict(), model_path_dict) trainer.save_checkpoint(model_path_checkpoint) #### Testing lightning documentation sugestion (Wrong Result!) device_cpu = torch.device('cpu') model = NER_Model.load_from_checkpoint( checkpoint_path = model_path_checkpoint, map_location=device, hyperparams=hyperparams, model_parameters=model_parameters, dataset_infos=dataset_infos, extra_infos=extra_infos, ) model.predict("I love Brazilian beaches") Result: [['Tag5', 'Tag2', 'O', 'Tag4', 'Tag4', 'Tag3', 'Tag1, 'Tag5', 'Tag4',]] #### Testing loading the checkpoint (Wrong Result!) model = NER_Model( hyperparams=hyperparams, model_parameters=model_parameters, dataset_infos=dataset_infos, extra_infos=extra_infos, ) storage = 'cpu' checkpoint = torch.load(model_path_checkpoint, map_location=lambda storage, loc: storage) model.load_state_dict(checkpoint['state_dict']) model.predict("I love Brazilian beaches") Result: [['Tag5', 'Tag2', 'O', 'Tag4', 'Tag4', 'Tag3', 'Tag1, 'Tag5', 'Tag4',]] #### Testing loading directly the state dict (Wrong Result!) model = NER_Model( hyperparams=hyperparams, model_parameters=model_parameters, dataset_infos=dataset_infos, extra_infos=extra_infos, ) device_cpu = torch.device('cpu') model.load_state_dict(torch.load(model_path_dict,map_location=device_cpu)) model.eval() model.predict("I love Brazilian beaches") Result: [['Tag5', 'Tag2', 'O', 'Tag4', 'Tag4', 'Tag3', 'Tag1, 'Tag5', 'Tag4,]] Conclusion Before saving and loading weights somewhere it works well (as expected). After loading, in all of the tried strategies it does not works as expected. The answer is wrong. What's your environment? Google Colab
Inference AUROC on valid set using ModelCheckpoint saved ckpt does not equal valid AUROC from training
[ "bug", "help wanted", "checkpointing" ]
πŸ› Bug During training, I have a ModelCheckpoint callback that saves the top 5 models based on valid_AUC computed at the end of validation phase using multiclass_auroc() function. The callback saves the .ckpt file as epoch=X_valid_AUC=0.XXXX.ckpt. When I load the ckpt and run trainer.test() on the same validation set used in training, it returns a different value (see screenshot below). I decided to test if it would work using 1 GPU in ddp, and it works in that case. When using 4 GPUs it does not work. My validation set size is 543 examples, but in my validation_epoch_end function I use self.log("valid_preds_size", preds.shape[0]) and it outputs 136 which is approx 543/4. Shouldn't it output 543 because validation_epoch_end has gathered all the batch dicts from every GPU? I create my preds tensor like this: preds = torch.cat( [step[head.name]["preds"] for step in valid_outputs if head.name in step] ) CUDA: GPU: Tesla V100-SXM2-16GB Tesla V100-SXM2-16GB Tesla V100-SXM2-16GB Tesla V100-SXM2-16GB available: True version: 10.1 Packages: numpy: 1.18.1 pyTorch_debug: False pyTorch_version: 1.7.1+cu101 pytorch-lightning: 1.1.1 tqdm: 4.46.1 System: OS: Linux architecture: 64bit processor: python: 3.7.6 version: #1 SMP Debian 4.9.210-1+deb9u1 (2020-06-07) Additional context
Prevent deprecated documentation from showing up in search engine top results
[ "good first issue", "docs", "let's do it!" ]
πŸ“š Documentation Currently when we search keywords like "pytorch lightning trainer" we get results that point to very very very outdated docs! It should instead point to the latest stable documentation pages. Investigate these options here: https://docs.readthedocs.io/en/stable/faq.html#how-can-i-avoid-search-results-having-a-deprecated-version-of-my-docs https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-html_extra_path adding a robots.txt is probably enough cc @alemkhenter
Slurm GPUs Not Properly Detected
[ "bug", "help wanted" ]
πŸ› Bug I'm using slurm and the GPUs don't seem to be detected properly, the doc at https://pytorch-lightning.readthedocs.io/en/stable/clouds/slurm.html implies it's as simple as passing the number of GPUs and nodes to the trainer with a suitable sbatch script. But when I do this for a single node, 2 GPUs, I get the following error: pytorch_lightning.utilities.exceptions.MisconfigurationException: You requested GPUs: [0, 1] But your machine only has: [0] There's no output printed about slurm, I have created a minimal reproduction of the error To Reproduce slurm_test.py: import pytorch_lightning as pl pl.Trainer(gpus=2, num_nodes=1, accelerator="ddp") slurm_test.sh #!/bin/bash # Parameters #SBATCH --cpus-per-task=4 #SBATCH --gres=gpu:p6000:1 #SBATCH --mem=32GB #SBATCH --nodes=1 #SBATCH --ntasks-per-node=2 #SBATCH --open-mode=append #SBATCH --qos=high #SBATCH --signal=USR1@90 #SBATCH --time=2160 # command srun --output /cfarhomes/maxehr/mm/%j_%t_log.out --error /cfarhomes/maxehr/mm/%j_%t_log.err --unbuffered /cfarhomes/maxehr/.cache/pypoetry/virtualenvs/mm-XIIWUkOg-py3.8/bin/python /cfarhomes/maxehr/mm/slurm_test.py /cfarhomes/maxehr/mm Parameters (gres) may need to be adjusted, this is how my cluster calls them. Run this with sbatch slurm_test.sh As you can see there are 2 tasks per node and 1 node and the syntax matches with that used in the docs. This fails with the following exception: Traceback (most recent call last): File "/cfarhomes/maxehr/mm/slurm_test.py", line 3, in <module> pl.Trainer(gpus=2, num_nodes=1, accelerator="ddp") File "/cfarhomes/maxehr/.cache/pypoetry/virtualenvs/mm-XIIWUkOg-py3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/env_vars_connector.py", line 41, in overwrite_by_env_vars return fn(self, **kwargs) File "/cfarhomes/maxehr/.cache/pypoetry/virtualenvs/mm-XIIWUkOg-py3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 348, in __init__ self.accelerator_connector.on_trainer_init( File "/cfarhomes/maxehr/.cache/pypoetry/virtualenvs/mm-XIIWUkOg-py3.8/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator_connector.py", line 104, in on_trainer_init self.trainer.data_parallel_device_ids = device_parser.parse_gpu_ids(self.trainer.gpus) File "/cfarhomes/maxehr/.cache/pypoetry/virtualenvs/mm-XIIWUkOg-py3.8/lib/python3.8/site-packages/pytorch_lightning/utilities/device_parser.py", line 78, in parse_gpu_ids gpus = _sanitize_gpu_ids(gpus) File "/cfarhomes/maxehr/.cache/pypoetry/virtualenvs/mm-XIIWUkOg-py3.8/lib/python3.8/site-packages/pytorch_lightning/utilities/device_parser.py", line 139, in _sanitize_gpu_ids raise MisconfigurationException(f""" pytorch_lightning.utilities.exceptions.MisconfigurationException: You requested GPUs: [0, 1] But your machine only has: [0] Both jobs produce the same exception and neither job has anything written to stdout. Expected behavior It works. Environment CUDA: - GPU: - available: False - version: 10.2 Packages: - numpy: 1.20.1 - pyTorch_debug: False - pyTorch_version: 1.7.1 - pytorch-lightning: 1.1.6 - tqdm: 4.57.0 System: - OS: Linux - architecture: - 64bit - ELF - processor: x86_64 - python: 3.8.1 - version: #1 SMP Thu Jan 21 16:15:07 EST 2021 Additional context I would also like to know if there's a way to disable automatic requeueing that I've read about, supposing this eventually works.
logging error with 1.2.0 version
[ "help wanted", "docs", "working as intended", "logging" ]
πŸ› Bug if I use logger named "lightning", logger print log twice. It happens with pytorch-lightning 1.2.0 version and doesn't happend with 1.1.8 version. This is reproduced easily like below. Error Example Type "help", "copyright", "credits" or "license" for more information. >>> import pytorch_lightning as pl >>> import logging >>> logger = logging.getLogger("lightning") >>> logger.info("test") test >>> logging.basicConfig(level=logging.INFO) >>> logger.info("test") test INFO:lightning:test >>> pl.__version__ '1.2.0' >>> import pytorch_lightning as pl >>> import logging >>> logger = logging.getLogger("lightning") >>> logger.info("test") >>> logging.basicConfig(level=logging.INFO) >>> logger.info("test") INFO:lightning:test >>> pl.__version__ '1.1.8' Please reproduce using the BoringModel Don't need model. To Reproduce Refer above codes 1.2.0 version Expected behavior Refer above codes 1.1.8 version Environment PyTorch Version (e.g., 1.0): 1.2.0 OS (e.g., Linux): Ubuntu 20.04.1 LTS How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): Python version: 3.7,9 CUDA/cuDNN version: GPU models and configuration: Any other relevant information: Additional context
Pickle error and OOM when upgrading to 1.2.0
[ "question", "won't fix" ]
When upgrading from 1.1.6 to 1.2.0, I noticed 2 changes: a significant increase in GPU memory, and a pickle error in my module class (the object metric_fn is certainly not picklable, but the same code worked fine in 1.1.6). Do you have an idea what changes in 1.2.0 may cause these issues? Any suggestion for the memory problem? My pseudo code: class ClfModule(pl.LightningModule): def __init__(self, model, tokenizer): self.model = model self.tokenizer = tokenizer self.metric_fn = not_pickable_object() I am using Huggingface transformers and datasets
val_check_interval equivalent for training loss logging
[ "won't fix" ]
I very much like the feature of val_check_interval. I use it for logging the validation loss when the epochs are too large. However, I would like to log the training loss at the same steps as well so that for every validation loss entry in my logs, I also have a corresponding training loss entry. Is there an easy way to do it, or would I have to write my own logger for that? Thank you.
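One sketch of a do-it-yourself approach (not a built-in feature): accumulate the training loss in the module and log its running mean from the validation hook, so both entries land on the same global step (compute_loss is a placeholder for your own loss):

import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self._train_losses = []

    def training_step(self, batch, batch_idx):
        loss = self.compute_loss(batch)            # placeholder for your own loss
        self._train_losses.append(loss.detach())
        return loss

    def validation_epoch_end(self, outputs):
        if self._train_losses:
            # logged at the same step as the validation metrics
            self.log("train_loss_at_val_check", torch.stack(self._train_losses).mean())
            self._train_losses = []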
T5ForConditionalGeneration to Trainer.fit()
[ "bug", "help wanted", "priority: 0" ]
Model I am using (MT5ForConditionalGeneration ('google/mt5-base')): The problem arises when passing pl.LightningDataModule with T5ForConditionalGeneration to Trainer.fit() /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/model_connector.py in copy_trainer_model_properties(self, model) AttributeError: 'QAModel' object has no attribute 'automatic_optimization' class QAModel(pl.LightningDataModule): [def __init__(self): super().__init__() self.model = MT5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True) def forward(self, input_ids, attention_mask, labels=None): output = self.model( input_ids=input_ids, attention_mask=attention_mask, labels=labels ) return output.loss, output.logits def training_step(self, batch, batch_idx): input_ids = batch['input_ids'] attention_mask = batch['attention_mask'] labels = batch['labels'] loss, outputs = self(input_ids, attention_mask, labels) self.log('train_loss', loss, prog_bar=True, logger=True) return loss def validation_step(self, batch, batch_idx): input_ids = batch['input_ids'] attention_mask = batch['attention_mask'] labels = batch['labels'] loss, outputs = self(input_ids, attention_mask, labels) self.log('val_loss', loss, prog_bar=True, logger=True) return loss def test_step(self, batch, batch_idx): input_ids = batch['input_ids'] attention_mask = batch['attention_mask'] labels = batch['labels'] loss, outputs = self(input_ids, attention_mask, labels) self.log('train_loss', loss, prog_bar=True, logger=True) return loss def configure_optimizers(self): print('done') return AdamW(self.parameters(), lr=0.0001)] model = QAModel() trainer.fit(model, data_module) To Reproduce Use following - https://colab.research.google.com/drive/1wRYnuQhkO8Uv.. Expected behavior Environment transformers version: 4.3.2 Platform: google colab pytorch-lightning version (GPU?): 1.2.0 Additional context trainer.fit Trainer.fit
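Note that the snippet above declares class QAModel(pl.LightningDataModule) while implementing LightningModule hooks (forward, training_step, configure_optimizers), and automatic_optimization is an attribute defined on LightningModule. Assuming the intent was a model passed to Trainer.fit, a sketch of the class declaration would be:

import pytorch_lightning as pl
from transformers import MT5ForConditionalGeneration

class QAModel(pl.LightningModule):            # LightningModule, not LightningDataModule
    def __init__(self):
        super().__init__()
        self.model = MT5ForConditionalGeneration.from_pretrained(
            MODEL_NAME, return_dict=True)     # MODEL_NAME as defined in the notebook
    # forward / training_step / validation_step / test_step / configure_optimizers
    # remain exactly as in the snippet above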
Add verbose option to prog_bar to print summary of every epoch
[ "feature", "help wanted", "good first issue", "won't fix" ]
Similar to ModelCheckpoint(verbose=True), we could add a verbose_progress_bar trainer flag to print the logs to the screen after every epoch.
Latest Lightning does not support multiple callbacks that stop
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug In the latest version of lightning, you do not seem to be able to have multiple callbacks which can stop. Please reproduce using the BoringModel If you have mulitple callbacks which can do early stopping, only the last one can be active. Create a callback with early stopping, MyStoppingCallback(). Add it, then EarlyStoppingCallback() to the callbacks argument of the trainer, e.g. callbacks = [MyStoppingCallback(), EarlyStoppingCallback('val_loss')] The callback is triggered and calculates that it needs to stop, but it ontinues training On the other hand, if you change the order (e.g. callbacks = [EarlyStoppingCallback('val_loss'),MyStoppingCallback()] it will be stop with MyStoppingCallback but probably doesn't triggle the EarlyStoppingCallback. # Copyright The PyTorch Lightning team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # -------------------------------------------- # -------------------------------------------- # -------------------------------------------- # USE THIS MODEL TO REPRODUCE A BUG YOU REPORT # -------------------------------------------- # -------------------------------------------- # -------------------------------------------- import os import torch from torch.utils.data import Dataset from pl_examples import cli_lightning_logo from pytorch_lightning import LightningModule, Trainer from pytorch_lightning import Trainer from pytorch_lightning.callbacks import EarlyStopping from pytorch_lightning.callbacks import Callback class RandomDataset(Dataset): """ >>> RandomDataset(size=10, length=20) # doctest: +ELLIPSIS <...bug_report_model.RandomDataset object at ...> """ def __init__(self, size, length): self.len = length self.data = torch.randn(length, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len class BoringModel(LightningModule): """ >>> BoringModel() # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE BoringModel( (layer): Linear(...) 
) """ def __init__(self): """ Testing PL Module Use as follows: - subclass - modify the behavior for what you want class TestModel(BaseTestModel): def training_step(...): # do your own thing or: model = BaseTestModel() model.training_epoch_end = None """ super().__init__() self.layer = torch.nn.Linear(32, 2) def forward(self, x): return self.layer(x) def loss(self, batch, prediction): # An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction)) def step(self, x): x = self.layer(x) out = torch.nn.functional.mse_loss(x, torch.ones_like(x)) return out def training_step(self, batch, batch_idx): output = self.layer(batch) loss = self.loss(batch, output) return {"loss": loss} def training_step_end(self, training_step_outputs): return training_step_outputs def training_epoch_end(self, outputs) -> None: torch.stack([x["loss"] for x in outputs]).mean() def validation_step(self, batch, batch_idx): output = self.layer(batch) loss = self.loss(batch, output) self.log('val_loss', loss) return {"x": loss} def validation_epoch_end(self, outputs) -> None: torch.stack([x['x'] for x in outputs]).mean() def test_step(self, batch, batch_idx): output = self.layer(batch) loss = self.loss(batch, output) return {"y": loss} def test_epoch_end(self, outputs) -> None: torch.stack([x["y"] for x in outputs]).mean() def configure_optimizers(self): optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1) lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1) return [optimizer], [lr_scheduler] # NOTE: If you are using a cmd line to run your script, # provide the cmd line as below. # opt = "--max_epochs 1 --limit_train_batches 1".split(" ") # parser = ArgumentParser() # args = parser.parse_args(opt) class EarlyStoppingExample(Callback): def on_validation_end(self, trainer, pl_module): if trainer.current_epoch > 5: should_stop = True else: should_stop = False if bool(should_stop): print("\nSTOPPING!!!!!!!!!!!!!!!!!!!!\n") self.stopped_epoch = trainer.current_epoch trainer.should_stop = True # stop every ddp process if any world process decides to stop should_stop = trainer.training_type_plugin.reduce_early_stopping_decision(should_stop) trainer.should_stop = should_stop def test_run(): class TestModel(BoringModel): def on_train_epoch_start(self) -> None: pass # fake data train_data = torch.utils.data.DataLoader(RandomDataset(32, 64)) val_data = torch.utils.data.DataLoader(RandomDataset(32, 64)) test_data = torch.utils.data.DataLoader(RandomDataset(32, 64)) # model early_stopping = EarlyStopping('val_loss', patience=50) model = TestModel() trainer = Trainer( default_root_dir=os.getcwd(), limit_train_batches=1, limit_val_batches=1, max_epochs=100, weights_summary=None, callbacks=[ EarlyStoppingExample(), early_stopping, ] ) trainer.fit(model, train_data, val_data) trainer.test(test_dataloaders=test_data) if __name__ == '__main__': #cli_lightning_logo() test_run() To Reproduce Use following BoringModel and post here Expected behavior PyTorch Version (e.g., 1.0): 1.7 OS (e.g., Linux): Windows How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): Python version: 3.7.4 CUDA/cuDNN version: GPU models and configuration: Any other relevant information:
Pure PyTorch is faster than Lightning in a small CPU toy example
[ "won't fix", "docs", "working as intended" ]
πŸ› Bug Recently I found that Lightning runs much slower than simple PyTorch code. Code using Lightning: import os import math import torch from torch import nn from torch.nn import functional as F from torch.utils.data import DataLoader, random_split from torchvision.datasets import MNIST from torchvision import transforms import pytorch_lightning as pl from pytorch_lightning.metrics.functional import accuracy from pytorch_lightning.callbacks import LearningRateMonitor from pytorch_lightning.loggers import TensorBoardLogger, CSVLogger from pl_bolts.datasets import DummyDataset from torch.optim.lr_scheduler import CosineAnnealingLR, ExponentialLR, LambdaLR train = DummyDataset((1, 28, 28), (1,), num_samples=100000) train = DataLoader(train, batch_size=32) val = DummyDataset((1, 28, 28), (1,)) val = DataLoader(val, batch_size=32) test = DummyDataset((1, 28, 28), (1,)) test = DataLoader(test, batch_size=32) class LitAutoEncoder(pl.LightningModule): def __init__(self): super().__init__() self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3)) self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28)) def training_step(self, batch, batch_idx): # -------------------------- # REPLACE WITH YOUR OWN x, y = batch x = x.view(x.size(0), -1) z = self.encoder(x) x_hat = self.decoder(z) loss = F.mse_loss(x_hat, x) return loss # -------------------------- def validation_step(self, batch, batch_idx): # -------------------------- # REPLACE WITH YOUR OWN x, y = batch x = x.view(x.size(0), -1) z = self.encoder(x) x_hat = self.decoder(z) loss = F.mse_loss(x_hat, x) self.log('val_loss', loss) # -------------------------- def test_step(self, batch, batch_idx): # -------------------------- # REPLACE WITH YOUR OWN x, y = batch x = x.view(x.size(0), -1) z = self.encoder(x) x_hat = self.decoder(z) loss = F.mse_loss(x_hat, x) self.log('test_loss', loss) # -------------------------- def configure_optimizers(self): learning_rate = 1e-3 optimizer = torch.optim.SGD(self.parameters(), lr=learning_rate) return optimizer if __name__ == '__main__': ae = LitAutoEncoder() # Initialize a trainer trainer = pl.Trainer(gpus=None, max_epochs=5, progress_bar_refresh_rate=1000, log_every_n_steps=1000, # profiler='simple' ) # Train the model ⚑ # trainer.fit(ae, train, val) trainer.fit(ae, train) Code using just PyTorch: import os import math import time import torch from torch import nn from torch.nn import functional as F from torch.utils.data import DataLoader, random_split from torchvision.datasets import MNIST from torchvision import transforms import pytorch_lightning as pl from pytorch_lightning.metrics.functional import accuracy from pytorch_lightning.callbacks import LearningRateMonitor from pytorch_lightning.loggers import TensorBoardLogger, CSVLogger from pl_bolts.datasets import DummyDataset from torch.optim.lr_scheduler import CosineAnnealingLR, ExponentialLR, LambdaLR train = DummyDataset((1, 28, 28), (1,), num_samples=100000) train = DataLoader(train, batch_size=32) val = DummyDataset((1, 28, 28), (1,)) val = DataLoader(val, batch_size=32) test = DummyDataset((1, 28, 28), (1,)) test = DataLoader(test, batch_size=32) class PlainModel(nn.Module): def __init__(self): super().__init__() self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3)) self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28)) def forward(self, batch): # -------------------------- # REPLACE WITH YOUR OWN x, y = batch x = x.view(x.size(0), -1) z = 
self.encoder(x) x_hat = self.decoder(z) loss = F.mse_loss(x_hat, x) return loss if __name__ == '__main__': model = PlainModel() optimizer = torch.optim.SGD(model.parameters(), lr=0.001) for epoch_num in range(30): batch_count = 0 start_time = time.time() for batch in train: optimizer.zero_grad() loss = model(batch) loss.backward() optimizer.step() batch_count += 1 end_time = time.time() print('Epoch {0} speed: {1} in {2} time'.format(epoch_num, batch_count / (end_time - start_time), end_time - start_time)) The Lightning code runs about 450 it/s on my Mac using CPU vs vanilla PyTorch's 650 it/s. Vanilla PyTorch code runs about 1.44 times faster than Lightning. To Reproduce Use the above code. Expected behavior Lightning runs at almost same speed for vanilla PyTorch code. Environment PyTorch Version (e.g., 1.0): 1.5.0 OS (e.g., Linux): Mac How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): n/a Python version: 3.8.3 CUDA/cuDNN version: n/a GPU models and configuration: n/a Any other relevant information: n/a -PL version: tried both 1.1.8 and 1.2.1 Additional context n/a
validation_epoch_end does not contain all `validation_step` outputs when using DDP
[ "help wanted", "question", "won't fix", "distributed", "priority: 1" ]
πŸ› Bug When using DDP, valdiation_epoch_end does not receive all of the outputs from validation_step. For instance, running the following script: import pytorch_lightning as pl import torch from torch import nn class Module(pl.LightningModule): def __init__(self): super().__init__() self.linear = nn.Linear(5, 1) def configure_optimizers(self): return torch.optim.Adam(self.linear.parameters()) def training_step(self, batch, batch_idx): return self.linear(batch).sum() def validation_step(self, batch, batch_idx): return batch_idx def validation_epoch_end(self, outputs): print("VALIDATING", len(outputs)) if __name__ == "__main__": m = Module() datasets = [torch.rand([5]) for __ in range(100)] train_loader = torch.utils.data.DataLoader(datasets, batch_size=8) val_loader = torch.utils.data.DataLoader(datasets, batch_size=1) trainer = pl.Trainer( num_sanity_val_steps=0, max_epochs=1, accelerator="ddp_cpu", num_processes=4, ) trainer.fit(m, train_loader, val_loader) Will output: VALIDATING 25 VALIDATING 25 VALIDATING 25 VALIDATING 25 Expected behavior IMO, PyTorch Lightning would handle the reduction across GPUs for me This way, any metrics computations I do in validation_epoch_end would be exactly the same, whether or not I use DDP or not. I can do this manually by inserting: if torch.distributed.is_initialized(): torch.distributed.barrier() gather = [None] * torch.distributed.get_world_size() torch.distributed.all_gather_object(gather, outputs) outputs = [x for xs in gather for x in xs] into the validation_epoch_end, but ideally Lightning would handle this for me. We could also consider just running the validation_epoch_end on a single process (which would mean there aren't any more implicit reductions), but I'm less convinced that's necessary/useful. if torch.distributed.get_rank() == 0: return Environment PyTorch Version (e.g., 1.0): 1.7 OS (e.g., Linux): Linux How you installed PyTorch (conda, pip, source): conda PyTorch Lightning Version: 1.2.1
MLFlow Logger Makes a New Run When Resuming from hpc Checkpoint
[ "bug", "help wanted", "checkpointing", "environment: slurm", "priority: 2", "logger: mlflow" ]
πŸ› Bug Currently the MLFlowLogger creates a new run when resuming from an hpc checkpoint, e.g., after preemption by slurm and requeuing. Runs are an MLFlow concept that groups things in their UI, so when resuming after requeue, it should really be reusing the run ID. I think this can be patched into the hpc checkpoint using the logger which I believe exposes the run ID. This can also be seen on the v_num on the progress bar which changes after preemption (in general that v_num probably shouldnt be changing in this case). I'm happy to attempt to PR this if the owners agree that it's a bug. To Reproduce Use MLFlowLogger on a slurm cluster and watch the mlflow UI when preemption happens, there will be a new run created. Expected behavior Runs are grouped neatly on the MLFlow UI Environment CUDA: - GPU: - available: False - version: 10.2 Packages: - numpy: 1.20.1 - pyTorch_debug: False - pyTorch_version: 1.7.1 - pytorch-lightning: 1.2.0 - tqdm: 4.57.0 System: - OS: Linux - architecture: - 64bit - ELF - processor: x86_64 - python: 3.8.1 - version: #1 SMP Thu Jan 21 16:15:07 EST 2021 cc @awaelchli @ananthsub @ninginthecloud @rohitgr7 @tchaton @akihironitta
Training stuck running on the SLURM cluster with multiple gpus per node
[ "bug", "won't fix", "waiting on author", "distributed", "environment: slurm" ]
πŸ› Bug I try to train a model across multiple nodes on a slurm cluster, where each node has two gpus. Therefore, I use the following flags in the trainer: trainer = pl.Trainer( gpus=2, num_nodes=2, accelerator='ddp', max_epochs=2 ) and submit the job with sbatch run_training.sh . However, I end up with the following output and nothing happens further: GPU available: True, used: True TPU available: None, using: 0 TPU cores GPU available: True, used: True TPU available: None, using: 0 TPU cores initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/4 initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/4 initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/4 initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/4 Are there any other flags I miss? Thanks for any help. Below you find the content of the files used above. run_training.sh #!/bin/bash #SBATCH -o slurm_outfiles/autoencoder-%j-%A-%a.out #SBATCH -N 2 #SBATCH -c 40 #SBATCH --gres=gpu:2 #SBATCH -t 24:00:00 #SBATCH --mail-type=ALL #SBATCH --mem 60G srun python torch_ddp_toy.py torch_ddp_toy.py import pytorch_lightning as pl import torch from torch import nn class Module(pl.LightningModule): def __init__(self): super().__init__() self.linear = nn.Linear(5, 1) def configure_optimizers(self): return torch.optim.Adam(self.linear.parameters()) def training_step(self, batch, batch_idx): return self.linear(batch).sum() def validation_step(self, batch, batch_idx): return batch_idx def validation_epoch_end(self, outputs): print("VALIDATING", len(outputs)) if __name__ == "__main__": m = Module() datasets = [torch.rand([5]) for __ in range(100)] train_loader = torch.utils.data.DataLoader(datasets, batch_size=8) val_loader = torch.utils.data.DataLoader(datasets, batch_size=1) trainer = pl.Trainer( gpus=2, num_nodes=2, accelerator='ddp', max_epochs=2 ) trainer.fit(m, train_loader, val_loader) PyTorch version 1.7.1 PyTorch Lightning version 1.2.0 CentOS Linux release 8.1.1911 PyTorch installed via conda PyTorch Lightning via pip slurm 20.02.3 UPDATE: added version of PyTorch Lightning
incorrect usage of detach/cpu/to
[ "bug", "help wanted" ]
πŸ› Bug Incorrect use of detach() and cpu() during fixing #4592. Please reproduce using the BoringModel You cannot really. To Reproduce Use following BoringModel and post here The fix for #4592 has good intentions but an obvious bug slipped through. It is easy to understand but hard to test it, so let's rely on common sense: That is the original code: if isinstance(output, Result): output.detach() if self.move_metrics_to_cpu: output.cpu() Expected behavior And that supposed to be the correct code, obviously: if isinstance(output, Result): output = output.detach() if self.move_metrics_to_cpu: output = output.cpu() Environment Any. Additional context I will make a trivial PR for this, I just wanted to create a ticket to actually track the issue. In theory, the bug is general pattern, but in practice these two lines were the only ones affected in PL.
cli: Confused on (str, int, List[int]) variants for argparse for --gpus flag?
[ "bug", "help wanted", "question", "docs", "priority: 1" ]
πŸ› Bug A colleague (@siyuanfeng-tri) and I sometimes get confused on how the --gpus flag is to be interpreted by argparse. I see the following docs: https://pytorch-lightning.readthedocs.io/en/1.2.1/advanced/multi_gpu.html#select-gpu-devices But we're sometimes confused about when argparse interpretation will either assume it's the count of the gpu (int/str) or the device index (List[int]). Are there docs for this? If not, can that be clarified somehow? Please reproduce using the BoringModel see notebook The main complaint is that gpus=3 implies gpus=[3], while gpus="3" implies gpus=[0,1,2]. Mix that with implicit conversion from argparse from str to int, and you get a kinda weird public interface. To Reproduce example notebook: https://colab.research.google.com/drive/1pe9_F2S73-gQ3hOeh_MMiGhmbXmGURDQ?usp=sharing Expected behavior Less confusing / more explicit options? (maybe my complaint is with weird implicit behavior of Trainer(gpus=...)?) Environment PyTorch Version (e.g., 1.7.1): OS: Ubuntu 18.04 How you installed PyTorch: pip Python version: 3.6.9 CUDA/cuDNN version: N/A GPU models and configuration: N/A Any other relevant information: N/A Additional context N/A
TPU: Crashes using trainer.test()
[ "bug", "help wanted", "priority: 0", "accelerator: tpu" ]
πŸ› Bug trainer.test() does not work with TPUs. There are a few different ways we've seen it crash. 1. Looks like a call to barrier() coming from __test_using_best_weights RuntimeError Traceback (most recent call last) <ipython-input-17-587e2a9e3858> in <module> ----> 1 trainer.test(datamodule=dm) /opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in test(self, model, test_dataloaders, ckpt_path, verbose, datamodule) 922 results = self.__test_given_model(model, test_dataloaders) 923 else: --> 924 results = self.__test_using_best_weights(ckpt_path, test_dataloaders) 925 926 self.teardown('test') /opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in __test_using_best_weights(self, ckpt_path, test_dataloaders) 950 return {} 951 if not self._device_type == DeviceType.TPU: --> 952 self.accelerator.barrier() 953 954 ckpt = pl_load(ckpt_path, map_location=lambda storage, loc: storage) /opt/conda/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py in barrier(self, name) 375 376 def barrier(self, name: Optional[str] = None) -> None: --> 377 self.training_type_plugin.barrier(name=name) 378 379 def broadcast(self, obj: object, src: int = 0) -> object: /opt/conda/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/tpu_spawn.py in barrier(self, name) 113 114 def barrier(self, name: Optional[str] = None) -> None: --> 115 rendezvous(f"pl.Trainer.{name}") 116 117 def transfer_distrib_spawn_state_on_fit_end(self, results): /opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py in rendezvous(tag, payload, replicas) 859 ordinal `i` at position `i` in the returned tuple. 860 """ --> 861 return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas) 862 863 RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:316 : Check failed: impl_->channel->WaitForConnected( std::chrono::system_clock::now() + std::chrono::seconds(connect_wait_seconds)) So the barrier is coming from here. This is strange that barrier is being called - I think this means that if not self._device_type == DeviceType.TPU is mistakenly evaluating to True? I think pytorch lightning spins up 8 processes for 8 TPU cores, is it possible only some of them are evaluating to True? Basically it seems like at least 1 process is not making it to this point, which means the other processes are waiting in the barrier and the meetup never happens so we get the RuntimeError shown. 2. 
Looks like a call to xm.save() is being misused: Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn fn(gindex, *args) File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/tpu_spawn.py", line 103, in new_process self.transfer_distrib_spawn_state_on_fit_end(results) File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/tpu_spawn.py", line 129, in transfer_distrib_spawn_state_on_fit_end xm.save(self.lightning_module.state_dict(), last_path) File "/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py", line 817, in save rendezvous('torch_xla.core.xla_model.save') File "/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py", line 861, in rendezvous return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas) File "/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn fn(gindex, *args) RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'torch_xla.core.xla_model.save': Socket closed (14) File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/tpu_spawn.py", line 103, in new_process self.transfer_distrib_spawn_state_on_fit_end(results) File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/tpu_spawn.py", line 129, in transfer_distrib_spawn_state_on_fit_end xm.save(self.lightning_module.state_dict(), last_path) File "/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py", line 817, in save rendezvous('torch_xla.core.xla_model.save') File "/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py", line 861, in rendezvous return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas) Exception in device=TPU:6: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'torch_xla.core.xla_model.save': Socket closed (14)Exception in device=TPU:3: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'torch_xla.core.xla_model.save': Socket closed (14) I think the problem is here with the usage of xm.save(). xm.save() already handles the multiprocess case by checking the ordinal and only writing to disk if the process is on the master ordinal. In general, if you surround xm.save() with if statements, it means some TPU cores enter the if statement and some will not, so the cores that entered the if statement will be waiting for those that didn't enter and eventually it will time out and crash. Repro methods 1. (Colab) Make 3 modifications to the BoringModel Switch runtime version to TPU Add this cell as the first cell: VERSION = "1.7" #@param ["1.7" , "20200516", "nightly"] !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !python pytorch-xla-env-setup.py --version $VERSION Add tpu_cores=8 to the trainer cell 2. 
(Google Cloud) Use the attached repro.py file in the following way: create a TPU create a Google Cloud VM (I used e2-standard-32 but size shouldn't matter too much) SSH into VM (VM) conda activate torch-xla-1.7 (VM) pip install pytorch-lightning==1.2.1 (VM) export TPU_IP_ADDRESS=my.tpu.ip.addr (VM) export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470" (VM) python3 repro.py 3. (Your CI setup) Modify TPU unit tests as follows: Add a trainer.test(test_dataloaders=DataLoader(RandomDataset(32, 2000), batch_size=32)) after some call to trainer.fit For example, I changed test_model_tpu_early_stop test to look like this: @pytest.mark.skipif(not _TPU_AVAILABLE, reason="test requires TPU machine") @pl_multi_process_test def test_model_tpu_early_stop(tmpdir): """Test if single TPU core training works""" # todo: Test on 8 cores - hanging. class CustomBoringModel(BoringModel): def validation_step(self, *args, **kwargs): out = super().validation_step(*args, **kwargs) self.log('val_loss', out['x']) return out tutils.reset_seed() model = CustomBoringModel() trainer = Trainer( callbacks=[EarlyStopping(monitor='val_loss')], default_root_dir=tmpdir, progress_bar_refresh_rate=0, max_epochs=2, limit_train_batches=2, limit_val_batches=2, tpu_cores=[1], ) trainer.fit(model) + trainer.test(test_dataloaders=DataLoader(RandomDataset(32, 2000), batch_size=32)) Ran tests with coverage run --source=pytorch_lightning -m pytest tests/models/test_tpu.py -v. This should allow testing on the CI framework Environment PyTorch Version (e.g., 1.0): 1.7 OS (e.g., Linux): Linux Build command you used (if compiling from source): pip install pytorch-lightning==1.2.1 (note that earlier versions hang due to #5841 ) Python version: 3.6 Any other relevant information:
Improve verbosity and progress bar display for early stopping
[ "feature", "help wanted", "won't fix", "priority: 1" ]
πŸš€ Feature When using EarlyStopping, it would be great if the progress bar added two values, like "espatience" (the number of epochs of patience left before it might stop early) and "estarget" (the objective, including min_delta, that must be achieved to avoid early stopping). Motivation EarlyStopping verbose=True has almost no effect. Pitch When doing a run, it would be great to have a progress indication of the status of early stopping, i.e., how long until the next early stopping check is done, and what objective value must be achieved for early stopping not to trigger. Alternatives Watching the progress bar blindly and feeding pigeons bread crumbs when they fly onto my balcony. Occasionally hitting carriage return at the end of an epoch so I can see what the early stopping objective was at that epoch. Additional context n/a
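Until something like this lands in the progress bar, here is a rough sketch of a helper callback that prints the remaining patience after every validation run; it assumes the EarlyStopping instance exposes the patience, wait_count and best_score attributes it uses internally in this release.
from pytorch_lightning.callbacks import Callback, EarlyStopping

class EarlyStoppingStatus(Callback):
    # Print how much early-stopping patience is left after each validation run.
    def on_validation_end(self, trainer, pl_module):
        for cb in trainer.callbacks:
            if isinstance(cb, EarlyStopping):
                remaining = cb.patience - cb.wait_count
                print(f"early stopping: best={cb.best_score}, checks left before stopping={remaining}")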
Improve documentation for EarlyStopping patience parameter
[ "docs" ]
πŸ“š Documentation EarlyStopping API docs currently reads: patience (int) – number of validation epochs with no improvement after which training will be stopped. Default: 3. However, this is quite confusing because 'validation epochs' is not really a term used much in the documentation, and this leads the user to believe this is based upon training epochs. I would propose something like the following: patience (int) – number of validation checks with no improvement after which training will be stopped. Default: 3. For example, if `check_val_every_n_epoch=4`, early stopping will take at least 12 training epochs to occur. Related: #6253
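A small configuration example along the lines of the proposed wording, assuming the default behaviour of one early-stopping check per validation run:
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

# Validation (and therefore the early-stopping check) runs every 4 training epochs,
# so with patience=3 at least 12 training epochs pass before training can stop early.
trainer = Trainer(
    check_val_every_n_epoch=4,
    callbacks=[EarlyStopping(monitor='val_loss', patience=3)],
)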
Calling the callback on_before_accelerator_backend_setup gives an error.
[ "bug", "help wanted" ]
I am trying to use this callback with the following code. from pytorch_lightning.callbacks import Callback class InitCallback(Callback): def on_before_accelerator_backend_setup(trainer, pl_module): print(trainer.model) print(trainer.global_rank) exit() It gives the following error: TypeError: on_before_accelerator_backend_setup() takes 2 positional arguments but 3 were given
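For reference, the traceback above is consistent with the hook missing its self parameter: Callback hooks are instance methods, so Lightning calls them with (trainer, pl_module) on the instance, which becomes three positional arguments. A corrected sketch of the same callback:
from pytorch_lightning.callbacks import Callback

class InitCallback(Callback):
    def on_before_accelerator_backend_setup(self, trainer, pl_module):
        # `self` must be the first parameter of every Callback hook
        print(trainer.model)
        print(trainer.global_rank)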
Error in PL 1.2 when loading models that calls save_hyperparameters and is trained using PL <1.2
[ "bug", "help wanted", "priority: 0", "checkpointing" ]
πŸ› Bug After updating to PL 1.2 LightningModule.load_from_checkpoint(checkpoint), using a checkpoint from a model trained using PL 1.1.6, fails with the following AttributeError: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/larshbj/Library/Caches/pypoetry/virtualenvs/vake-TBRjjU-l-py3.8/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 134, in load_from_checkpoint checkpoint = pl_load(checkpoint_path, map_location=lambda storage, loc: storage) File "/Users/larshbj/Library/Caches/pypoetry/virtualenvs/vake-TBRjjU-l-py3.8/lib/python3.8/site-packages/pytorch_lightning/utilities/cloud_io.py", line 32, in load return torch.load(f, map_location=map_location) File "/Users/larshbj/Library/Caches/pypoetry/virtualenvs/vake-TBRjjU-l-py3.8/lib/python3.8/site-packages/torch/serialization.py", line 594, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File "/Users/larshbj/Library/Caches/pypoetry/virtualenvs/vake-TBRjjU-l-py3.8/lib/python3.8/site-packages/torch/serialization.py", line 853, in _load result = unpickler.load() AttributeError: Can't get attribute '_gpus_arg_default' on <module 'pytorch_lightning.utilities.argparse_utils' from '/Users/larshbj/Library/Caches/pypoetry/virtualenvs/vake-TBRjjU-l-py3.8/lib/python3.8/site-packages/pytorch_lightning/utilities/argparse_utils.py'> The model used is torchvision.models.detection.fasterrcnn_resnet50_fpn and can be found here. However, the problem does not seem to be related to type of model (see BoringModel colab). To Reproduce See Colab: https://colab.research.google.com/drive/1JbDHiipjx7zBYQYTPUzWtEatUB1AIfq4?usp=sharing Reproducing requires training the model and saving a checkpoint using PL version 1.1.6, then loading model using PL version 1.2. To do this in the Colab: Run all cells down to (and including) cell that installs/updates PL 1.2 Reset colab runtime and re-run "Deps" and "Model" steps Run test steps Expected behavior LightningModule.load_from_checkpoint(checkpoint) successfully loads model. Environment * CUDA: - GPU: - Tesla K80 - available: True - version: 10.1 * Packages: - numpy: 1.19.5 - pyTorch_debug: False - pyTorch_version: 1.7.1+cu101 - pytorch-lightning: 1.2.1 - tqdm: 4.41.1 * System: - OS: Linux - architecture: - 64bit - - processor: x86_64 - python: 3.7.10 - version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 Additional context When reproducing the error I noticed that it does not fail if one omit self.save_hyperparameters() in the init method of the Lightning Module that is trained. I guess this saves the hyper parameters to the lightning module, and thus to the checkpoint. 
Printing the saved hparams from the checkpoint generated in the Colab: {'accelerator': 'ddp', 'accumulate_grad_batches': 1, 'amp_backend': 'native', 'amp_level': 'O2', 'auto_lr_find': False, 'auto_scale_batch_size': False, 'auto_select_gpus': False, 'automatic_optimization': None, 'benchmark': False, 'check_val_every_n_epoch': 1, 'checkpoint_callback': True, 'default_root_dir': None, 'deterministic': False, 'distributed_backend': None, 'enable_pl_optimizer': None, 'fast_dev_run': False, 'flush_logs_every_n_steps': 100, 'gpus': 1, 'gradient_clip_val': 0, 'limit_test_batches': 1.0, 'limit_train_batches': 1.0, 'limit_val_batches': 1.0, 'log_every_n_steps': 50, 'log_gpu_memory': None, 'logger': True, 'max_epochs': 1, 'max_steps': None, 'min_epochs': 1, 'min_steps': None, 'move_metrics_to_cpu': False, 'num_nodes': 1, 'num_processes': 1, 'num_sanity_val_steps': 2, 'overfit_batches': 0.0, 'plugins': None, 'precision': 32, 'prepare_data_per_node': True, 'process_position': 0, 'profiler': None, 'progress_bar_refresh_rate': 1, 'reload_dataloaders_every_epoch': False, 'replace_sampler_ddp': True, 'resume_from_checkpoint': None, 'sync_batchnorm': False, 'terminate_on_nan': False, 'tpu_cores': <function pytorch_lightning.utilities.argparse_utils._gpus_arg_default>, 'track_grad_norm': -1, 'truncated_bptt_steps': None, 'val_check_interval': 1.0, 'weights_save_path': None, 'weights_summary': 'top'} My guess is that the problem occurs due to the 'tpu_cores': <function pytorch_lightning.utilities.argparse_utils._gpus_arg_default>, and this path changed to pytorch_lightning.utilities.argparse in PL 1.2.
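One possible stop-gap, sketched here under the assumption that the unpickler only needs to resolve the missing attribute by name, is to recreate it on the legacy module path before loading; this is not an official fix, and MyModel plus the checkpoint path are placeholders.
import pytorch_lightning.utilities.argparse_utils as argparse_utils

# Recreate the attribute the old checkpoint pickled a reference to.
def _gpus_arg_default(x):
    return x

argparse_utils._gpus_arg_default = _gpus_arg_default

model = MyModel.load_from_checkpoint("path/to/old_checkpoint.ckpt")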
all_gather for TPU doesn't support backward gradients.
[ "feature", "help wanted", "good first issue", "won't fix", "accelerator: tpu" ]
Currently, we rely on AllGatherGrad to compute the gather for GPUs. TODO:
- [ ] Extend this class to support TPU
- [ ] Add tests
Introduce a SecurePlugin
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Motivation Pitch PR #6212 will add a check for whether PySyft is present, which we would like to avoid. The idea is to introduce a secure plugin, which will handle those checks instead. Alternatives Additional context
Efficientnet_pytorch breaks on 'ddp'
[ "bug", "help wanted", "won't fix", "distributed" ]
πŸ› Bug I'm trying to train an Efficientnet-B0 model using the implementation from https://github.com/lukemelas/EfficientNet-PyTorch repository. Now even though it works fine in 'dp' mode it breaks on 'ddp' or 'ddp2' and i keep getting the following error : RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel; (2) making sure all forward function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). I am just using your vanilla mnist script in pl_examples/basic_examples/simple_image_classifier and modified the__init__ and forward methods to use Efficientnet-B0 : from efficientnet_pytorch import EfficientNet def __init__(self, hidden_dim=128, learning_rate=1e-3): super().__init__() self.save_hyperparameters() self.cnn1 = torch.nn.Conv2d(1, 3, kernel_size=3) self.efficientnet = EfficientNet.from_name( "efficientnet-b0", include_top=False, drop_connect_rate=0.1 ) self.l1 = torch.nn.Linear(1280, 10) def forward(self, x): x = self.cnn1(x) x = self.efficientnet(x) x = x.squeeze(3).squeeze(2) x = torch.relu(self.l1(x)) return x I also tried setting find_unused_parameters=True in the following way using DDPPlugin: from pytorch_lightning.plugins import DDPPlugin trainer = pl.Trainer.from_argparse_args(args, plugins=[DDPPlugin(find_unused_parameters=True)] ) This time i didnt get the above error but it behaves super bizarre as in that runs two epochs, make a gigantic (~10 minute pause), runs another two epochs and then even a larger pause, and on and on .. Expected behavior I would expect it to work just as the original mnist script in the distributed ddp mode.
Default process group is not initialized in setup() function
[ "bug", "help wanted", "priority: 0", "distributed" ]
πŸ› Bug Default process group is not initialized in Datamodule setup() function. This is a BC breaking with PL >= 1.2.0 With, PL == 1.1.8 this code works. Reproduce notebook: https://colab.research.google.com/drive/1AHadRi0Bly9OnzrJFv8XmS2T9Y5zklvg?usp=sharing Expected behavior fit() should be work. Environment * CUDA: - GPU: - Tesla T4 - available: True - version: 10.1 * Packages: - numpy: 1.19.5 - pyTorch_debug: False - pyTorch_version: 1.7.1+cu101 - pytorch-lightning: 1.2.1 - tqdm: 4.41.1 * System: - OS: Linux - architecture: - 64bit - - processor: x86_64 - python: 3.7.10 - version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
DDP + mixed precision + sharded not working on PL 1.2.1
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug After upgrading to pytorch-lightning 1.2.1, training with DDP + 16 bit precision + sharded is broken, as the training loss doesn't go down (stays around 2.31). Without the sharded option it seems to work. To Reproduce from argparse import ArgumentParser import torch from torch.nn import functional as F import pytorch_lightning as pl from pl_examples.basic_examples.mnist_datamodule import MNISTDataModule class LitClassifier(pl.LightningModule): def __init__(self, hidden_dim=128, learning_rate=1e-3): super().__init__() self.save_hyperparameters() self.l1 = torch.nn.Linear(28 * 28, self.hparams.hidden_dim) self.l2 = torch.nn.Linear(self.hparams.hidden_dim, 10) def forward(self, x): x = x.view(x.size(0), -1) x = torch.relu(self.l1(x)) x = torch.relu(self.l2(x)) return x def training_step(self, batch, batch_idx): x, y = batch y_hat = self(x) loss = F.cross_entropy(y_hat, y) return loss def configure_optimizers(self): return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate) @staticmethod def add_model_specific_args(parent_parser): parser = ArgumentParser(parents=[parent_parser], add_help=False) parser.add_argument('--hidden_dim', type=int, default=128) parser.add_argument('--batch_size', type=int, default=32) parser.add_argument('--num_workers', type=int, default=4) parser.add_argument('--learning_rate', type=float, default=0.0001) return parser def cli_main(): pl.seed_everything(1234) parser = ArgumentParser() parser = pl.Trainer.add_argparse_args(parser) parser = LitClassifier.add_model_specific_args(parser) parser = MNISTDataModule.add_argparse_args(parser) args = parser.parse_args() dm = MNISTDataModule.from_argparse_args(args) model = LitClassifier(args.hidden_dim, args.learning_rate) trainer = pl.Trainer.from_argparse_args(args, precision=16, gpus=[0, 1], accelerator="ddp", plugins='ddp_sharded') trainer.fit(model, datamodule=dm) if __name__ == '__main__': cli_main() Expected behavior Training loss starts to decrease. Environment CUDA: - GPU: 4x TITAN X (Pascal) - available: True - version: 10.2 Packages: - numpy: 1.20.1 - pyTorch_debug: True - pyTorch_version: 1.7.0 - pytorch-lightning: 1.2.1 - tqdm: 4.56.0
auto_scale_batch_size fails with datamodule in pl==1.2.*, succeeds in pl==1.1.8
[ "bug", "help wanted", "trainer: tune", "priority: 1" ]
πŸ› Bug Running trainer = pl.Trainer(auto_scale_batch_size=True); trainer.tune(model, datamodule=dm) succeeds in pl==1.1.8, but fails in pl==1.2.* (tested both 1.2.0 and 1.2.1) with error: Traceback (most recent call last): File "train.py", line 42, in <module> trainer.tune(model, datamodule=dm) File "/home/ubuntu/.local/share/virtualenvs/__project__/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1062, in tune self.tuner.tune(model, train_dataloader, val_dataloaders, datamodule) File "/home/ubuntu/.local/share/virtualenvs/__project__/lib/python3.8/site-packages/pytorch_lightning/tuner/tuning.py", line 46, in tune self.scale_batch_size( File "/home/ubuntu/.local/share/virtualenvs/__project__/lib/python3.8/site-packages/pytorch_lightning/tuner/tuning.py", line 104, in scale_batch_size return scale_batch_size( File "/home/ubuntu/.local/share/virtualenvs/__project__/lib/python3.8/site-packages/pytorch_lightning/tuner/batch_size_scaling.py", line 79, in scale_batch_size raise MisconfigurationException(f'Field {batch_arg_name} not found in both `model` and `model.hparams`') pytorch_lightning.utilities.exceptions.MisconfigurationException: Field batch_size not found in both `model` and `model.hparams` Please reproduce using the BoringModel https://colab.research.google.com/drive/1vgPLCwLg7uACtb3fxVp-t-__NtZ3onsD?usp=sharing Expected behavior Prior to pl==1.2.0, it successfully detected and tuned the dm.batch_size property. Environment CUDA: GPU: Tesla T4 available: True version: 10.1 Packages: numpy: 1.19.5 pyTorch_debug: False pyTorch_version: 1.7.1+cu101 pytorch-lightning: 1.2.1 tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.7.10 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Support save_hyperparameters() to checkpoints without sending to logger
[ "feature", "help wanted" ]
πŸš€ Feature self.save_hyperparameters() is an awesome simple way to save input arguments with checkpoints such that when loading a checkpoint the module will be constructed with the same parameters. The method also seems to log the hyperparameters to the experiment logger, which is not always desired. It would be great to add a flag to turn this on or off. Pitch Add a log=True flag, which is true by default, to save_hyperparameters(). This way hyperparameters could be saved but not logged by calling self.save_hyperparameters(log=False).
trainer.training_type_plugin.broadcast doesn't seem to work properly
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug Please reproduce using the BoringModel To Reproduce Use following BoringModel and post here Expected behavior Environment Note: Bugs with code are solved faster ! Colab Notebook should be made public ! IDE: Please, use our python bug_report_model.py template. Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually). You can get the script and run it with: wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py # For security purposes, please check the contents of collect_env_details.py before running it. python collect_env_details.py PyTorch Version (e.g., 1.0): OS (e.g., Linux): How you installed PyTorch (conda, pip, source): Build command you used (if compiling from source): Python version: CUDA/cuDNN version: GPU models and configuration: Any other relevant information: Additional context
[RFC] Gradient clipping hooks in the LightningModule
[ "feature", "help wanted", "refactor", "design" ]
πŸš€ Feature Add clipping hooks to the LightningModule. Motivation It's currently very difficult to change the clipping logic. Pitch
class LightningModule:
    def clip_gradients(self, optimizer, optimizer_idx):
        ...
The default implementation would be the same as we currently provide, where the trainer's clipping flags are used. Maybe those would be deprecated in favor of LightningModule properties.
class LightningOptimizer:
    def step(self, closure=None):
        if closure is None:
            closure = do_nothing_closure

        def wrapper_closure():
            closure()
            self._trainer.call_hook("clip_gradients", self.optimizer)

        self.optimizer.step(closure=wrapper_closure)
We need to evaluate the limitations, since clipping is currently tied to plugins. Additional context This would fix #5096, #6123 (comment), #5671, #5982, and allow easily implementing new clipping techniques without having to merge them into Lightning. cc: @rohitgr7 who has been pushing for this for a while
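To make the proposal concrete, this is roughly what a user override of the proposed hook could look like; the hook itself does not exist yet, only torch.nn.utils.clip_grad_norm_ is existing API.
import torch
import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    def clip_gradients(self, optimizer, optimizer_idx):
        # custom clipping instead of the trainer's gradient_clip_val flag
        torch.nn.utils.clip_grad_norm_(self.parameters(), max_norm=1.0)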
dp + manual_optimization is not working on PL 1.2.1
[ "bug", "help wanted", "strategy: dp", "priority: 2" ]
πŸ› Bug dp + manual optimization is not working on PL 1.2.1 I am setting automatic optimization = False in my model and giving Trainer() accelerator='dp' Expected behavior Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 644, in run_train self.train_loop.run_training_epoch() File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 492, in run_training_epoch batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx) File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 653, in run_training_batch self._curr_step_result = self.training_step( File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 293, in training_step training_step_output = self.trainer.accelerator.training_step(args) File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 157, in training_step return self.training_type_plugin.training_step(*args) File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/dp.py", line 61, in training_step return self.model(*args, **kwargs) File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/usr/local/lib/python3.8/site-packages/torch/_utils.py", line 428, in reraise raise self.exc_type(msg) KeyError: Caught KeyError in replica 1 on device 1. 
Original Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/overrides/data_parallel.py", line 74, in forward output = super().forward(*inputs, **kwargs) File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 48, in forward output = self.module.training_step(*inputs, **kwargs) File "/workdir/flow-based-test/StableGlowModel.py", line 131, in training_step opt.step() File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 219, in step self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs) File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 135, in __optimizer_step trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs) File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 278, in optimizer_step self.run_optimizer_step(optimizer, opt_idx, lambda_closure, **kwargs) File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 283, in run_optimizer_step self.training_type_plugin.optimizer_step(optimizer, lambda_closure=lambda_closure, **kwargs) File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 160, in optimizer_step optimizer.step(closure=lambda_closure, **kwargs) File "/usr/local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/usr/local/lib/python3.8/site-packages/torch/optim/adamax.py", line 67, in step exp_avg, exp_inf = state['exp_avg'], state['exp_inf'] KeyError: 'exp_avg' Environment PyTorch Version (e.g., 1.0): 1.7.1 OS (e.g., Linux): Linux How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): Python version: 3.8.7 CUDA/cuDNN version: 10.2 GPU models and configuration: Any other relevant information:
trainer.global_step prints repetitive steps when there is more than one gradient accumulation step
[ "bug", "help wanted" ]
I am using the latest pytorch_lightning version from conda. It seems that in this version trainer.global_step changes with the gradient accumulation steps, and so does the total number of training steps; in the previous version this wasn't the case. What is the correct approach if I want to run some task every 5000 training steps? Should I use self.trainer.global_step or batch_idx? When I use self.trainer.global_step, it actually repeats a number of times equal to the gradient accumulation steps.
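Pending clarification, one defensive pattern is to remember the last global_step at which the periodic task ran, so it fires at most once per optimizer step even if global_step repeats across accumulation batches; compute_loss and run_periodic_task are placeholders.
import pytorch_lightning as pl

class MyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self._last_task_step = -1

    def training_step(self, batch, batch_idx):
        loss = self.compute_loss(batch)  # placeholder
        step = self.trainer.global_step
        if step > 0 and step % 5000 == 0 and step != self._last_task_step:
            self._last_task_step = step
            self.run_periodic_task()  # placeholder
        return loss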
DDP: Multiple processes try to create the logger directory tree
[ "bug", "help wanted", "distributed", "priority: 1" ]
πŸ› Bug An user from our supercomputing center run into an issue which I think turned out to be a bug in PyTorch-Lightning. When using the DDP accelerator together with a logger, multiple processes will try creating the logger directory tree, causing some errors about already existing directories or files. Troubleshooting PyTorch-Lightning uses extensively the rank_zero_only function to ensure that some actions are only performed by the process with rank 0: pytorch-lightning/pytorch_lightning/utilities/distributed.py Lines 35 to 42 in b3b8f95 def rank_zero_only(fn): @wraps(fn) def wrapped_fn(*args, **kwargs): if rank_zero_only.rank == 0: return fn(*args, **kwargs) return wrapped_fn rank_zero_only.rank default value is set there: pytorch-lightning/pytorch_lightning/utilities/distributed.py Lines 45 to 46 in b3b8f95 # add the attribute to the function but don't overwrite in case Trainer has already set it rank_zero_only.rank = getattr(rank_zero_only, 'rank', int(os.environ.get('LOCAL_RANK', 0))) but can be set in other modules, for example in our case DDP: pytorch-lightning/pytorch_lightning/plugins/training_type/ddp.py Lines 227 to 228 in b3b8f95 # set warning rank rank_zero_only.rank = self.global_rank Unfortunately it seems that the initialization by the DDP module happens too late, I think because of commit da6dbc8: self.setup_trainer(model) gets called on line 467 effectively initializing the logger and creating the logger directory tree DDP initialization and thus rank_zero_only.rank getting the correct value only happens at line 477 when calling self.training_type_plugin.pre_training(). To Reproduce I have attached the code the user provided together the Slurm script: only_rank_zero.tar.gz. I understand that you would prefer a BoringModel and Collab based reproducer but I am from the HPC world and I am not used to those. Let me know if I can help in any other way. I hope that my own digging into the code will hep. Environment (probably not relevant in this case) PyTorch Version: 1.7.1 OS: Linux (Red Hat 8.1) How you installed PyTorch: conda, I tried the latest version of PyTorch-Lightning available on conda but also tested installing the current master branch from source and the behavior is still the same. Python version: 3.7.10 CUDA/cuDNN version: 11.0.221/8.0.5 GPU models and configuration: NVIDIA V100
trainer.fit must be called before trainer.predict else predict fails with Misconfiguration Exception
[ "bug", "help wanted" ]
πŸ› Bug I am trying to use the new predict API of the trainer by loading a checkpoint. But it seems that trainer.fit must be called before trainer.predict else the config validator fails: GPU available: False, used: False TPU available: None, using: 0 TPU cores Traceback (most recent call last): File "pytorch-lightning_bug.py", line 49, in <module> run_bug() File "pytorch-lightning_bug.py", line 45, in run_bug trainer.predict(model, train_data) File "/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1035, in predict results = self.fit(model) File "/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 462, in fit self.train_loop.setup_fit(model, train_dataloader, val_dataloaders, datamodule) File "/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 120, in setup_fit self.trainer.config_validator.verify_loop_configurations(model) File "/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/configuration_validator.py", line 34, in verify_loop_configurations self.__verify_train_loop_configuration(model) File "/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/configuration_validator.py", line 46, in __verify_train_loop_configuration raise MisconfigurationException( pytorch_lightning.utilities.exceptions.MisconfigurationException: No `training_step()` method defined. Lightning `Trainer` expects as minimum a `training_step()`, `train_dataloader()` and `configure_optimizers()` to be defined. To Reproduce import os import torch from torch.utils.data import Dataset from pytorch_lightning import LightningModule, Trainer class RandomDataset(Dataset): def __init__(self, size, length): self.len = length self.data = torch.randn(length, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len class SimpleModel(LightningModule): def __init__(self): super().__init__() self.layer = torch.nn.Linear(32, 2) def forward(self, x): return self.layer(x) def run_bug(): # fake data train_data = torch.utils.data.DataLoader(RandomDataset(32, 64)) # model model = SimpleModel() # load the checkpoint … trainer = Trainer( default_root_dir=os.getcwd(), limit_train_batches=1, limit_val_batches=1, max_epochs=1, weights_summary=None, ) # use the train data for prediction trainer.predict(model, train_data) if __name__ == '__main__': run_bug() Expected behavior Get the predictions of the data source. Environment PyTorch Version (e.g., 1.0): 1.2.2 OS (e.g., Linux): macOS, Linux How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): Python version: 3.9.1
Don't `lr_scheduler.step()` in manual optimization
[ "feature", "help wanted" ]
πŸš€ Feature Currently, lr_scheduler.step() is called in both manual and automatic optimization. We should let users call lr_scheduler.step() manually in manual optimization for ultimate flexibility. Requested by @carmocca.
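To illustrate the requested behaviour, a rough manual-optimization sketch where the user also steps the scheduler explicitly; accessing the scheduler through trainer.lr_schedulers is an assumption about this release, and compute_loss is a placeholder.
import pytorch_lightning as pl

class ManualOptModule(pl.LightningModule):
    @property
    def automatic_optimization(self) -> bool:
        return False

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        loss = self.compute_loss(batch)  # placeholder
        self.manual_backward(loss)
        opt.step()
        opt.zero_grad()
        # under this proposal the scheduler is stepped by the user, not the Trainer
        sch = self.trainer.lr_schedulers[0]['scheduler']
        sch.step()
        return loss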
ssim functional metric not working with 16 bit precision
[ "bug", "duplicate", "help wanted" ]
πŸ› Bug ssim functional metric doesn't work with native 16 bit precision Please reproduce using the BoringModel https://colab.research.google.com/drive/1Wzxjq-oQavT-kg9go9Ti-AcCEhuir7H9?usp=sharing if you remove 16 bit precision from trainer it works but with 16 bit precision it gives the following error: TypeError: Expected `preds` and `target` to have the same data type. Got pred: torch.float32 and target: torch.float16.
Fitting hangs at "cleaning up ddp environment..." when tpu_cores=8
[ "bug", "help wanted", "priority: 0", "accelerator: tpu" ]
πŸ› Bug When setting tpu_cores of Trainer to 8, fitting hangs at "cleaning up ddp environment...". Please reproduce using the BoringModel https://colab.research.google.com/drive/1tJswNaT0I-GrGsi6ngwwRDUmeFY1pFr3?usp=sharing To Reproduce Run above URL notebook. Expected behavior Trainer.fit ends normally . Environment PyTorch Version (e.g., 1.0): 1.7.0a0+7e71a98 OS (e.g., Linux): Linux How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): N/A Python version: 3.7.10 CUDA/cuDNN version: N/A GPU models and configuration: N/A Any other relevant information: I tried the notebook on Google Colab with TPU.
fast_dev_run fail on log_hyperparams
[ "bug", "help wanted", "logger" ]
πŸ› Bug Issue when running: fast_dev_run=True "TypeError: log_hyperparams() takes 2 positional arguments but 3 were given" To Reproduce When using the following: Where self.hp_metrics is a list of strings where each string is an available metric that is being logged, example "accuracy/val". def on_train_start(self): if self.logger: self.logger.log_hyperparams(self.hparams, {metric:0 for metric in self.hp_metrics}) Expected behavior Assume the unit test is wrong since the documentation say that self.logger.log_hyperparams takes one positional argument and one dictionary. The code run fine without fast_dev_run=True and everything is logged correctly to tensorboard. Environment pytorch_lightning 1.2.2
Make verbose=False prevent showing "Saving latest checkpoint..."
[ "feature", "help wanted", "good first issue", "let's do it!" ]
πŸš€ Feature Currently Lightning prints: [lightning][INFO] - Saving latest checkpoint... when running training with ModelCheckpoint(verbose=False). Not sure if it's a bug or intended... Can we make it not log that message when passing verbose=False?
Find_unused_parameter=false is causing multi GPU to hang
[ "bug", "help wanted", "priority: 0", "distributed", "design" ]
We need to decide if we should: a. set the default to True, or b. add more clarification in the docs/warning. Should it be a property of the LightningModule, not the Trainer? See more context here: #5604
self.device does not return the correct device in DataParallel
[ "bug", "help wanted", "strategy: dp" ]
πŸ› Bug The self.device property does not get updated in the replicas of DataParallel. Please reproduce using the BoringModel ### To Reproduce import os import torch from torch.utils.data import Dataset from pytorch_lightning import LightningModule, Trainer class RandomDataset(Dataset): def __init__(self, size, length): self.len = length self.data = torch.randn(length, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len class BoringModel(LightningModule): def __init__(self): super().__init__() self.layer = torch.nn.Linear(32, 2) def forward(self, x): return self.layer(x) def training_step(self, batch, batch_idx): print(f"on device {batch.device.index} the value of self.device is {self.device}") output = self.layer(batch) return output.sum() def configure_optimizers(self): optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1) lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1) return [optimizer], [lr_scheduler] if __name__ == '__main__': train_data = torch.utils.data.DataLoader(RandomDataset(32, 64), batch_size=2) model = BoringModel() trainer = Trainer( default_root_dir=os.getcwd(), limit_train_batches=1, limit_val_batches=1, max_epochs=1, weights_summary=None, accelerator="dp", gpus=2, ) trainer.fit(model, train_data) Output of this script is: on device 0 the value of self.device is cuda:0 on device 1 the value of self.device is cuda:0
Cannot Import LearningRateLogger
[ "question" ]
πŸ› Bug I have been working with the same code in Colab for some time with no issues. Since today, PL could not be imported (#6415). As per linked thread, this is resolved with installing from master, however the following problem still persists: import pytorch_lightning as pl from pytorch_lightning.callbacks import LearningRateLogger ImportError: cannot import name 'LearningRateLogger' from 'pytorch_lightning.callbacks' (/usr/local/lib/python3.7/dist-packages/pytorch_lightning/callbacks/init.py)
[RFC] Create explicit setup and teardown hooks for each stage on the Lightning and DataModules
[ "feature", "won't fix", "design" ]
πŸš€ Feature LightningModules and DataModules currently support a setup API which takes an optional stage argument. #6386 addresses some issues in the setup/teardown lifecycle, so I was wondering if we should take this further (#6401) Motivation Pros of making the separate hooks for each stage: Clarity in the API that helps forwards compatibility: In the current scheme, the Lightning trainer can pass an arbitrary value for stage that user code might not handle. With the explicit hooks, new stages becomes opt-in for users, as users must implement the corresponding hook in their lightning/data module Consistency in the API: this matches the pattern already established for Lightning/data modules which have train/validation/test/predict defined as separate hooks On the Lightning internals, we can remove the base datamodule wrapper class, and remove the has_setup_{stage} attributes since it'll be obvious when the hooks are called Cons: This requires a deprecation process and can cause thrash for users Users now have to implement more hooks. However, a mitigation is that the refactoring should be straightforward as users can easily share code with a helper function in the lightning/data module. Pitch We add the following hooks to the DataHooks base: on_{stage}_prepare_data on_{stage}_setup on_{stage}_teardown for the existing values of stage: fit, test, validate, predict Similarly, we add corresponding hooks to the Callback base: on_{stage}_setup on_{stage}_teardown During the migration, in the trainer, if the Lightning(Data)Module has this hook implemented, then we call it. Otherwise, we fallback to calling the existing setup/teardown hooks. We do the same for the callback hooks. We could set a longer deprecation timeline for this given how prevalent these hooks are. For example, we don't deprecate prepare_data, setup, or teardown until version 1.7+. Additionally, we should move the trainer argument prepare_data_per_node to the DataHooks base, similar to how automatic_optimization is a property of the LightningModule. This point is separate from the overall hooks discussion and could happen faster to slightly simplify the trainer API. Alternatives Keep the existing hooks Additional context
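For illustration only, the proposal above would let a DataModule opt into the new per-stage hooks roughly like this; none of these hook names exist yet, they are the names proposed in this RFC.
from pytorch_lightning import LightningDataModule

class MyDataModule(LightningDataModule):
    # proposed replacements for setup(stage) / teardown(stage)
    def on_fit_setup(self):
        ...

    def on_fit_teardown(self):
        ...

    def on_test_setup(self):
        ...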
trainer.test is breaking when a model is not passed
[ "bug", "help wanted", "priority: 0" ]
From the docs: # (1) load the best checkpoint automatically (lightning tracks this for you) trainer.test() Trainer.test should use the best checkpoint when a model isn't provided, and currently, that doesn't work.
ImportError: cannot import name 'Batch' from 'torchtext.data' (/usr/local/lib/python3.7/dist-packages/torchtext/data/__init__.py)
[ "bug", "help wanted" ]
Not able to import PyTorch Lightning in Google Colab
Formalize progress tracking inside of the trainer internals
[ "feature", "help wanted", "discussion", "refactor", "design" ]
πŸš€ Feature We should better enforce progress tracking across these dimensions: Stage: training, evaluation (validation and test), and prediction loops Granularity: batches, steps, epochs batches vs steps: steps = optimizer steps (parameter updates) and applies to training loop only. this will differ from batches when gradient accumulation is used, for instance Motivation Provide consistency across trainer stages for users Better debugging Create mid-epoch resumable state for training Pitch See the full example here Create a dataclass that each Loop in Lightning is responsible for maintaining. All of this state is local to a particular trainer rank. These dataclasses live as attributes on the corresponding loop. @dataclass class LoopProgress: # this also serves as the index of the current epoch total_epochs_processed: int = 0 # this monotonically increases and is summed across epochs total_batches_processed: int = 0 # this resets at the end of the epoch back to 0 batches_processed_this_epoch: int = 0 # convenience utils for accessing state def bump_batch(self, increment: int = 1): self.total_batches_processed += increment self.batches_processed_this_epoch += increment def bump_epoch(self, increment: int = 1): self.total_epochs_processed += increment def reset_batch_in_epoch(self): self.batches_processed_this_epoch = 0 The train loop extends this to track optimizer steps @dataclass class TrainLoopProgress(LoopProgress): total_optimizer_steps_processed: int = 0 optimizer_steps_processed_this_epoch: int = 0 def bump_step(self, increment: int = 1): self.total_optimizer_steps_processed += increment self.optimizer_steps_processed_this_epoch += increment def reset_step_in_epoch(self): self.optimizer_steps_processed_this_epoch = 0 The Trainer maintains its own tracker that has references to the individual loop progress trackers @dataclass class Progress: train_progress: TrainLoopProgress val_progress: LoopProgress test_progress: LoopProgress predict_progress: LoopProgress For convenience, we could offer synchronization utilities to sum the progress state across ranks to get totals We can also offer convenience utilities to get the totals across different stages We update the loops to populate this state, and make backwards compatible changes to reference this state We save the progress state into checkpoints as part of the trainer state We handle loading this state when resuming from checkpoints
Continuing training resets logger step
[ "bug", "help wanted", "won't fix", "priority: 1" ]
πŸ› Bug I am running Pytorch Lightning in a federated learning setting. Therefore I have several models and I need to instantiate a Trainer object for one model multiple times. Every time I do that the associated logger resets the epoch and logs the metrics on top of each other in the plots. Since instantiating a new Trainer object to continue training a model is allowed as far as I know: Do you know if that is expected behaviour and whether there is a workaround? In this picture is an example of the logging output of three consecutively called trainers with a common logger. Please reproduce using the BoringModel The Colab currently does not work with lightning and lightning_bolts. To Reproduce Create a TestTubeLogger and instantiate multiple Trainers with a common logger for the same model and fit the trainers consecutively. Expected behaviour The x-axis of the logged metrics should not reset between runs. Environment CUDA: GPU: available: False version: None Packages: numpy: 1.20.1 pyTorch_debug: False pyTorch_version: 1.8.0+cpu pytorch-lightning: 1.2.1 tqdm: 4.58.0 System: OS: Linux architecture: 64bit ELF processor: x86_64 python: 3.8.5 version: #74-Ubuntu SMP Wed Jan 27 22:54:38 UTC 2021
Early Stopping Min Epochs
[ "feature", "help wanted", "won't fix", "design", "callback" ]
πŸš€ Feature The EarlyStopping callback should allow the user to specify a minimum number of epochs to run before early stopping is triggered Motivation In many modern training loops, the learning rate is varied in some kind of cycle. For example, in the Transformer paper, they warm up the learning rate by increasing it gradually until a certain number of training steps, then they cool it by decreasing it as 1/sqrt(steps). In cases like these, the user may want to run a certain minimum number of epochs, for example to ensure that the learning rate has achieved a given value before stopping. Pitch An additional parameter, min_epochs should be added to EarlyStopping (default=0 for backwards compatibility). The stopping will then trigger if both patience epochs have passed since the last improvement and at least min_epochs epochs have passed in total.
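A rough subclass sketch of the requested behaviour, assuming the stopping check in this release is triggered from on_validation_end:
from pytorch_lightning.callbacks import EarlyStopping

class MinEpochEarlyStopping(EarlyStopping):
    def __init__(self, min_epochs: int = 0, **kwargs):
        super().__init__(**kwargs)
        self.min_epochs = min_epochs

    def on_validation_end(self, trainer, pl_module):
        # skip the early-stopping check until `min_epochs` full epochs have run
        if trainer.current_epoch < self.min_epochs:
            return
        super().on_validation_end(trainer, pl_module)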
[DeepSpeed] `MPI world size 1 does not match torch world size 2` when launched with >1 GPU.
[ "bug", "priority: 1" ]
πŸ› Bug When trying to train a model with PyTorch Lightning and Deepspeed, the process fails with an assertion error: AssertionError: MPI world size 1 does not match torch world size 2. The number 2 in this case is replaced with the number of GPUs specified by the PyTorch Lightning Trainer. Please reproduce using the BoringModel Because Colab does not expose multiple GPUs, I cannot post a Colab link that reproduces this issue. The script to reproduce is simple: from pytorch_lightning import Trainer from pytorch_lightning.plugins import DeepSpeedPlugin, DeepSpeedPrecisionPlugin from tests.helpers.boring_model import BoringModel model = BoringModel() trainer = Trainer( plugins=[DeepSpeedPlugin()], default_root_dir="/tmp", gpus=2, fast_dev_run=True, precision=16, ) trainer.fit(model) trainer.test(model) Expected behavior I expect the world size to be set to the number of GPUs specified so that DeepSpeed will run with my model on multiple GPUs. Environment PyTorch Version (e.g., 1.0): 1.9.0.dev20210305+cu111 PyTorch Lightning Version: 1.2.3 OS (e.g., Linux): Ubuntu 20.04 How you installed PyTorch (conda, pip, source): pip Python version: 3.8 CUDA/cuDNN version: 11.1 GPU models and configuration: 4x A100-SXM4-40GB Any other relevant information: Deepspeed version: 0.3.10 Additional context I searched Google and this repository extensively to find some way to adjust the MPI world size, but this was to no avail.
[RFC] Support checkpointing multiple callbacks of the same type
[ "feature", "help wanted", "design", "callback" ]
πŸš€ Feature Currently when dumping the checkpoint dict, we overwrite callback states if there are multiple callbacks of the same type: pytorch-lightning/pytorch_lightning/trainer/callback_hook.py Lines 209 to 224 in 1c013b4 def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> Dict[Type, dict]: """Called when saving a model checkpoint.""" callback_states = {} for callback in self.callbacks: if self.__is_old_signature(callback.on_save_checkpoint): rank_zero_warn( "`Callback.on_save_checkpoint` signature has changed in v1.3." " A `checkpoint` parameter has been added." " Support for the old signature will be removed in v1.5", DeprecationWarning ) state = callback.on_save_checkpoint(self, self.lightning_module) # noqa: parameter-unfilled else: state = callback.on_save_checkpoint(self, self.lightning_module, checkpoint) if state: callback_states[type(callback)] = state return callback_states Motivation This blocks #2908 and is an existing bug or at least unexpected behavior with checkpointing callback states today. Pitch [WIP] We add a state_identifier property to the base Callback class @property def state_identifier(self) -> str: return "" TODO: should this raise NotImplementedError? Reasoning: as it stands, we'd call this only when saving/loading callback states. Could we be even more selective and call this only when we have multiple callback instances of the same type? Is that complexity warranted? Callback implementations define this property based on the business logic of the class in order to determine uniqueness, while also preserving flexibility for development. For instance class ModelCheckpoint(Callback): ... @property def state_identifier(self): return f"monitor={self.monitor}" # handle special case where this is not available, include other params from here TODO: provide better default implementation based on args here At save time, we still partition the callback state dicts by the type, but include the state_identifier to further disambiguate. # we include the checkpoint dict in order to look at the lightning version or other metadata that can be used to better preserve backwards compatibility def _compute_callback_state_key(self, checkpoint: Dict[str, Any], callback: Callback) -> str: return type(callback) + callback.state_identifier() def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> Dict[Type, dict]: """Called when saving a model checkpoint.""" callback_states = {} for callback in self.callbacks: if self.__is_old_signature(callback.on_save_checkpoint): rank_zero_warn( "`Callback.on_save_checkpoint` signature has changed in v1.3." " A `checkpoint` parameter has been added." 
" Support for the old signature will be removed in v1.5", DeprecationWarning ) state = callback.on_save_checkpoint(self, self.lightning_module) # noqa: parameter-unfilled else: state = callback.on_save_checkpoint(self, self.lightning_module, checkpoint) if state: key = self._compute_callback_state_key(checkpoint, callback) callback_states[key] = state return callback_states At load time, we do something similar: def on_load_checkpoint(self, checkpoint): """Called when loading a model checkpoint.""" callback_states = checkpoint.get('callbacks') # Todo: the `callback_states` are dropped with TPUSpawn as they # can't be saved using `xm.save` # https://github.com/pytorch/xla/issues/2773 if callback_states is not None: for callback in self.callbacks: key = self._compute_callback_state_key(checkpoint, callback) state = callback_states.get(key) if state: state = deepcopy(state) callback.on_load_checkpoint(state) Alternatives Why don't Callback implementations simply hash the value of the constructor args? from @awaelchli : The problematic part is around Trainer(resume_from_checkpoint=..., callbacks=[...]), which would restore the trainer state plus callback state. If the source code changes, what will happen? Well in the normal case you wouldn't modify the code to perfectly resume training, right? But I think it is still worth discussing what kind of flexibility we could allow, if any? For instance, if we hashed all the constructor args, a callback that changes a verbose flag from False to True would no longer be able to access its previously checkpointed state. Additional context
Global step always zero after loading checkpoint
[ "bug", "help wanted", "priority: 0", "checkpointing" ]
I am saving checkpoints inside my module using self.trainer.save_checkpoint(path). I am able to load these checkpoints into the model using MyModel.load_from_checkpoint(path) and into the trainer using Trainer(resume_from_checkpoint=path). However, both the resulting model and trainer have global_step=0 regardless of the global step at the time of saving. From the documentation I was under the impression that checkpoints saved the global step, which is important for my use case. How can I obtain the global step from a checkpoint?
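As a hedged workaround sketch (not an official API), the raw checkpoint dict written by save_checkpoint can be inspected directly; checkpoints produced by the Trainer carry a global_step entry alongside the state dict:

```python
import torch

ckpt = torch.load("path/to/checkpoint.ckpt", map_location="cpu")
print(ckpt.keys())              # inspect what was actually saved
print(ckpt.get("global_step"))  # key assumed present in Trainer-saved checkpoints
```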
LayerSummary does not work with ScriptModules
[ "bug", "help wanted" ]
πŸ› Bug I am trying to do finetuning on a pre-trained model which is saved as TorchScript. Unfortunately, it looks like Lightning's LayerSummary does not support scripted modules: To Reproduce Run import torch from torch import nn import pytorch_lightning as pl class Module(pl.LightningModule): def __init__(self): super().__init__() self.linear = torch.jit.script(nn.Linear(5, 1)) # Notice the scripting! def configure_optimizers(self): return torch.optim.Adam(self.linear.parameters()) def training_step(self, batch, batch_idx): return self.linear(batch).sum() if __name__ == "__main__": m = Module() datasets = [torch.rand([5]) for __ in range(100)] train_loader = torch.utils.data.DataLoader(datasets, batch_size=8) trainer = pl.Trainer( num_sanity_val_steps=0, max_epochs=1, ) trainer.fit(m, train_loader) fails with Traceback (most recent call last): File "scratch.py", line 29, in <module> trainer.fit(m, train_loader) File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 513, in fit self.dispatch() File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in dispatch self.accelerator.start_training(self) File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in start_training self.training_type_plugin.start_training(trainer) File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 111, in start_training self._results = trainer.run_train() File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 609, in run_train self._pre_training_routine() File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 595, in _pre_training_routine ref_model.summarize(mode=self.weights_summary) File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1456, in summarize model_summary = ModelSummary(self, mode=mode) File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/core/memory.py", line 184, in __init__ self._layer_summary = self.summarize() File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/core/memory.py", line 236, in summarize summary = OrderedDict((name, LayerSummary(module)) for name, module in self.named_modules) File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/core/memory.py", line 236, in <genexpr> summary = OrderedDict((name, LayerSummary(module)) for name, module in self.named_modules) File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/core/memory.py", line 67, in __init__ self._hook_handle = self._register_hook() File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/core/memory.py", line 91, in _register_hook return self._module.register_forward_hook(hook) File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/torch/jit/_script.py", line 723, in fail raise RuntimeError(name + " is not supported on ScriptModules") RuntimeError: register_forward_hook is not supported on ScriptModules Exception ignored in: <bound method LayerSummary.__del__ of <pytorch_lightning.core.memory.LayerSummary object at 0x7f5815a7a9e8>> Traceback (most recent call last): File 
"/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/core/memory.py", line 72, in __del__ self.detach_hook() File "/home/alandu/miniconda3/envs/ctrldev/lib/python3.6/site-packages/pytorch_lightning/core/memory.py", line 98, in detach_hook if self._hook_handle is not None: AttributeError: 'LayerSummary' object has no attribute '_hook_handle' Expected behavior This should work as if I had just done nn.Linear. For now, I can work around this by setting weight_summary=None. Environment * CUDA: - GPU: - available: False - version: None * Packages: - numpy: 1.19.5 - pyTorch_debug: False - pyTorch_version: 1.8.0 - pytorch-lightning: 1.2.2 - tqdm: 4.59.0 * System: - OS: Linux - architecture: - 64bit - ELF - processor: x86_64 - python: 3.6.10 - version: #1 SMP Fri Feb 26 16:21:30 UTC 2021
Multiple Trainloaders in a Sequential Mode.
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Add new mode to multiple_trainloader_mode of the Trainer in order to support training on multiple dataloaders in a sequential way. Motivation Say we want to train on two dataloaders A, and B. Currently, the modes support max/min size of all trainloaders and provide a list/dict containing batches from loader A and B at each step. But it is not possible to define loaders such that it returns batches from loader A until it is exhausted, and then returns batches from loader B until exhausted. Pitch Adding a new mode, called sequential loading to the Trainer. At each time, the batch list/dict can have only one batch from one of the loaders. Example application: Finetuning Say we want at each epoch, train a model on one dataset and then finetune it on another. loaders = {'train': loader_train, 'finetune': loader_finetune} for epoch in epochs: train(model, loaders['train']) train(model, loaders['finetune']) Alternatives Chaining dataloaders together as a single dataloader seems possible but then additional book-keeping is required to figure out which dataloader a specific batch belongs to. If we can call the train_loop of the LightningModule, we can add a callback that is called at the end of each training epoch and then finetune the model on the finetune loader. Additional Context To the best of my knowledge, there is a CombinedLoader that takes care of parallel sampling. I suppose a new mode can be added to this class to support sequential loading. But I could be wrong and there is a better alternative. In that case, please let me know. Thanks.
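A hedged user-side sketch of the "chaining" alternative mentioned above, tagging each batch with the loader it came from so the extra book-keeping stays explicit (the names here are illustrative, not an existing Lightning API):

```python
from itertools import chain

def tagged(loader, name):
    for batch in loader:
        yield name, batch  # remember which loader produced the batch

def sequential_batches(loaders: dict):
    # Exhaust each loader fully before moving on to the next one.
    return chain.from_iterable(tagged(dl, name) for name, dl in loaders.items())

# usage in a plain loop:
# for name, batch in sequential_batches({'train': loader_train, 'finetune': loader_finetune}):
#     ...
```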
AttributeError: 'BoringModel' object has no attribute 'require_backward_grad_sync' when using manual optimization with TPU
[ "bug", "help wanted", "priority: 0", "accelerator: tpu" ]
πŸ› Bug Hello! When using manual optimization with TPU, I am getting an AttributeError: 'BoringModel' object has no attribute 'require_backward_grad_sync'. When I replace self.manual_backward(loss) with loss.backward() things seem to work, but I am not sure if this is a safe or sustainable workaround. It seems the error happens at the self.manual_backward(loss) step in training_step. Any help would be much appreciated. Please reproduce using the BoringModel Here is the notebook reproducing the error: https://colab.research.google.com/drive/1LPYgtUAiHd1OXuTK6I1WkRaCUQScxEPg?usp=sharing Environment WARNING:root:TPU has started up successfully with version pytorch-1.8 CUDA: GPU: available: False version: 10.1 Packages: numpy: 1.19.5 pyTorch_debug: False pyTorch_version: 1.8.0+cu101 pytorch-lightning: 1.3.0dev tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.7.10 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 Installed torch-xla using: !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl to match colab defaults
Training stuck at 0% at first epoch
[ "bug", "help wanted", "priority: 1" ]
Hello, Training gets stuck at 0% at the very first epoch, whether using fast_dev_run or not. No error is reported. How can I debug this? Specs: cudatoolkit 10.2.89 hfd86e86_1 python 3.8.8 hdb3f193_4 pytorch 1.6.0 py3.8_cuda10.2.89_cudnn7.6.5_0 pytorch pytorch-lightning 1.2.3 pyhd8ed1ab_0 conda-forge torch-encoding 1.2.2b20210313 dev_0 <develop> torchvision 0.7.0 py38_cu102 pytorch Using poly LR scheduler with warm-up epochs of 0! | Name | Type | Params ------------------------------------------------- 0 | model | EncNet | 36.0 M 1 | criterion | SegmentationLosses | 0 ------------------------------------------------- 36.0 M Trainable params 0 Non-trainable params 36.0 M Total params 144.163 Total estimated model params size (MB) Epoch 0: 0%| | 0/1 [00:00<?, ?it/s]
relative refresh rate in progress bar
[ "feature", "help wanted", "won't fix" ]
🚀 Feature Dynamic refresh rate: allow defining the progress bar refresh rate as a float percentage, with a default of 0.01. Motivation As a user, a refresh every percent of progress is usually enough. Pitch Validation in a notebook never updates the progress bar, because the default refresh rate is 20 steps and the validation run has only 2 steps. Alternatives Additional context
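A hedged sketch of how a relative refresh rate could be resolved into an absolute one (illustrative only, not the current API):

```python
def resolve_refresh_rate(refresh_rate, total_batches):
    # Floats mean "refresh every X fraction of total progress";
    # integers keep the current absolute-step meaning.
    if isinstance(refresh_rate, float):
        return max(1, int(total_batches * refresh_rate))
    return refresh_rate

print(resolve_refresh_rate(0.01, 2))    # -> 1, so a 2-step validation still updates
print(resolve_refresh_rate(20, 2000))   # -> 20, unchanged behaviour for integers
```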
"Advanced" profiler not working
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug Hello Guys, I am having issues with the advanced profiling option in lightning. Here is the colab file documenting the issue on a simple model. The issue happens, whenever I set stochastic_weight_avg=True. Also, I have my complicated setup where I faced issue regarding Advanced profiler but it was a different one. When I tried to reproduce it, I faced the above mentioned issue. Thus, I am just mentioning my actual error over here. Here is how I call my trainer - trainer = pl.Trainer.from_argparse_args(args, callbacks=[LoggingCallback(), checkpoint_callback], profiler="advanced", ) And here is the error stack - Traceback (most recent call last): File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/contextlib.py", line 131, in __exit__ self.gen.throw(type, value, traceback) File "/home/nzb0040/.pyenv/versions/intersection_train/lib/python3.8/site-packages/pytorch_lightning/profiler/profilers.py", line 71, in profile yield action_name File "/home/nzb0040/.pyenv/versions/intersection_train/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1102, in call_hook trainer_hook(*args, **kwargs) File "/home/nzb0040/.pyenv/versions/intersection_train/lib/python3.8/site-packages/pytorch_lightning/trainer/callback_hook.py", line 35, in on_before_accelerator_backend_setup callback.on_before_accelerator_backend_setup(self, model) File "/home/nzb0040/.pyenv/versions/intersection_train/lib/python3.8/site-packages/pytorch_lightning/callbacks/swa.py", line 142, in on_before_accelerator_backend_setup self._average_model = deepcopy(pl_module) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 146, in deepcopy y = copier(x, memo) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 146, in deepcopy y = copier(x, memo) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 146, in deepcopy y = copier(x, memo) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 146, in deepcopy y = copier(x, memo) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/home/nzb0040/.pyenv/versions/3.8.6/lib/python3.8/copy.py", line 161, in deepcopy rv = reductor(4) TypeError: cannot pickle 'Profile' object Exception ignored in: <function AdvancedProfiler.__del__ at 0x7f045ad834c0> Traceback (most recent call 
last): File "/home/nzb0040/.pyenv/versions/intersection_train/lib/python3.8/site-packages/pytorch_lightning/profiler/profilers.py", line 280, in __del__ if self.output_file: AttributeError: 'AdvancedProfiler' object has no attribute 'output_file'
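Until this is resolved, a hedged workaround sketch is to avoid combining the cProfile-based advanced profiler with stochastic weight averaging, for example by switching to the simple profiler for those runs (the simple profiler keeps only timing dicts, so the deepcopy performed by SWA should succeed):

```python
trainer = pl.Trainer.from_argparse_args(
    args,
    callbacks=[LoggingCallback(), checkpoint_callback],
    profiler="simple",  # no cProfile.Profile object held on the trainer
)
```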
Unable to use any scheduler ('scheduler' object has no attribute 'param_groups') [BUG]
[ "bug", "help wanted", "priority: 0" ]
I am trying to convert a number of Pytorch repos into Lightning, but I am unable to use any scheduler. I have tried both the origin custom made schedulers and official Pytorch schedulers, and always get the same 'scheduler' object has no attribute 'param_groups' Traceback (most recent call last): File "tools/train_net_lightning.py", line 255, in <module> trainer.fit(trainer_sys) File "/home/anaconda3/envs/panoptic_cuda11/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 514, in fit self.dispatch() File "/home/anaconda3/envs/panoptic_cuda11/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 554, in dispatch self.accelerator.start_training(self) File "/home/anaconda3/envs/panoptic_cuda11/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in start_training self.training_type_plugin.start_training(trainer) File "/home/anaconda3/envs/panoptic_cuda11/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 111, in start_training self._results = trainer.run_train() File "/home/anaconda3/envs/panoptic_cuda11/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 645, in run_train self.train_loop.run_training_epoch() File "/home/anaconda3/envs/panoptic_cuda11/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 493, in run_training_epoch batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx) File "/home/anaconda3/envs/panoptic_cuda11/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 619, in run_training_batch self.run_train_split_start(split_idx, split_batch, opt_idx, optimizer) File "/home/anaconda3/envs/panoptic_cuda11/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 910, in run_train_split_start model.toggle_optimizer(optimizer, opt_idx) File "/home/anaconda3/envs/panoptic_cuda11/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1267, in toggle_optimizer for group in opt.param_groups: AttributeError: 'StepLR' object has no attribute 'param_groups' My specs: OS: Ubuntu 18.04 Conda environment: cudatoolkit 11.0.3 h15472ef_8 conda-forge python 3.8.8 hffdb5ce_0_cpython conda-forge pytorch 1.7.1 py3.8_cuda11.0.221_cudnn8.0.5_0 pytorch pytorch-lightning 1.2.3 pyhd8ed1ab_0 conda-forge torchvision 0.8.2 py38_cu110 pytorch (also does't work in slighlty different environment like cudatoolkit 11.0.221 h6bb024c_0 python 3.8.8 hdb3f193_4 pytorch 1.7.1 py3.8_cuda11.0.221_cudnn8.0.5_0 pytorch pytorch-lightning 1.2.3 pyhd8ed1ab_0 conda-forge torchvision 0.8.2 py38_cu110 pytorch ) GPU: 2018 Ti
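The traceback shows toggle_optimizer receiving a StepLR where an optimizer is expected, which usually means the scheduler was returned in the optimizer list. A hedged sketch of the return format configure_optimizers expects:

```python
def configure_optimizers(self):
    optimizer = torch.optim.SGD(self.parameters(), lr=0.01)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
    # Optimizers and schedulers go in separate lists;
    # the scheduler must not take the optimizer's place.
    return [optimizer], [scheduler]
```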
Option of doing optimizer.zero_grad(set_to_none=True).
[ "feature", "help wanted" ]
πŸš€ Feature An option to set gradients to None instead of zero. This was introduced in PyTorch 1.7. Documentation. Motivation Training Speed improvements, as discussed by PyTorch here. Pitch I changed Line 1400 of https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/core/lightning.py. From def optimizer_zero_grad(self, epoch: int, batch_idx: int, optimizer: Optimizer, optimizer_idx: int): optimizer.zero_grad() To def optimizer_zero_grad(self, epoch: int, batch_idx: int, optimizer: Optimizer, optimizer_idx: int): optimizer.zero_grad(set_to_none=True) I achieved a modest speedup in my training. In my case of 4096 image epoch, batchsize 32. My times before the change were 41.42s, and after were 41.05s. Measured as the min over three runs.
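Until or unless this lands upstream, the same effect can already be obtained per model by overriding the hook, as in this short sketch:

```python
class MyModel(pl.LightningModule):
    def optimizer_zero_grad(self, epoch, batch_idx, optimizer, optimizer_idx):
        # Setting grads to None avoids a memset plus a read-modify-write
        # in the next backward pass.
        optimizer.zero_grad(set_to_none=True)
```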
Should truncated_bptt_steps take effect during validation phase?
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug It appears that the Trainer flag truncated_bptt_steps doesn’t affect the validation phase. Should it? The problem I’m running into is that I need truncated_bptt_steps to virtually increase the length of the sequence I can fit into my GPU memory, but this purpose is defeated when the validation step doesn’t also make use of truncated_bptt_steps β€”Β in this case my memory is limited by my validation step, which attempts to process the full sequence in-memory. Is there a reason why validation step shouldn’t also make use of the truncated_bptt_steps flag? Or is this just not finished yet? I could probably manage to adapt the evaluation_loop code to copy the training_loop code regarding truncated_bptt_steps. Would this be an acceptable addition? See also: #6483 Please reproduce using the BoringModel https://colab.research.google.com/drive/1JYAVdMy-UPg0Rzwpo3T9A-FmwrOdi_aR?usp=sharing To Reproduce Expected behavior I expected the validation loop to also split the batch along the time dimension according to truncated_bptt_steps.
Loading a model from PL 1.2 that was saved in PL 1.1 breaks
[ "bug", "help wanted", "waiting on author", "checkpointing", "priority: 1" ]
πŸ› Bug I saved a model trained with PL 1.1 from an environment with PL 1.2 and it breaks. There are some PL specific objects that get pickled into the checkpoint. This shouldn't happen. See error below: Traceback (most recent call last): File "scripts/train_bart_seq2seq_augmented_kilt.py", line 45, in <module> model = BartSeq2SeqAugmented(**vars(args)) File "/home/ndecao/modify-transformers-memory/src/models/bart_seq2seq_augmented_kilt.py", line 67, in __init__ self.model = BartSeq2Seq.load_from_checkpoint(self.hparams.model_checkpoint) File "/home/ndecao/.anaconda3/envs/kilt37/lib/python3.7/site-packages/pytorch_lightning/core/saving.py", line 134, in load_from_checkpoint checkpoint = pl_load(checkpoint_path, map_location=lambda storage, loc: storage) File "/home/ndecao/.anaconda3/envs/kilt37/lib/python3.7/site-packages/pytorch_lightning/utilities/cloud_io.py", line 32, in load return torch.load(f, map_location=map_location) File "/home/ndecao/.anaconda3/envs/kilt37/lib/python3.7/site-packages/torch/serialization.py", line 594, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File "/home/ndecao/.anaconda3/envs/kilt37/lib/python3.7/site-packages/torch/serialization.py", line 853, in _load result = unpickler.load() AttributeError: Can't get attribute '_gpus_arg_default' on <module 'pytorch_lightning.utilities.argparse_utils' Expected behavior The model should load without any error. Environment The model was trained and saved using PL 1.1.6 and loaded from PL 1.2.1 PyTorch Version (e.g., 1.0): 1.7.1 OS (e.g., Linux): Linux Python version: 3.9
[BUG] `auto_move_data` does not work with `DataParallel`
[ "bug", "help wanted", "strategy: dp", "priority: 1" ]
πŸ› Bug In case your forward function is wrapped with auto_move_data it will not work with DataParallel because it will try to send the data to self.device which in dataparallel is always the main device. i.e. the following won't work with accelerator="dp" (and probably also with "ddp"): class Module(pl.LightningModule): ... @auto_move_data def forward(x): ... def training_step(self, batch, batch_idx): x = self.forward(batch[0]) ... The error comes from this line: pytorch-lightning/pytorch_lightning/core/hooks.py Line 646 in b190403 device = device or self.device self.device should probably be replaced by torch.distributed.get_rank() when torch.distributed.is_available() and torch.distributed.is_initialized()
Training stalls with DDP multi-GPU setup
[ "bug", "help wanted", "priority: 0", "waiting on author" ]
πŸ› Bug My training / validation step gets hung when using ddp on 4-GPU AWS instance. Usually it happens at the end of the first epoch, but sometimes in the middle of it. Code runs fine on 1 GPU. My model checkpoint is a very basic set up checkpoint_callback = pl.callbacks.ModelCheckpoint( args.checkpointdir, save_last=True) as is the trainer trainer = pl.Trainer( progress_bar_refresh_rate=1000, log_every_n_steps=1000, max_epochs=model_config['epochs'], gradient_clip_val=0.5, gpus=-1, accelerator='ddp', plugins=[pl.plugins.DDPPlugin(find_unused_parameters=True)], callbacks=[checkpoint_callback]) I know there is a related issue #4612, but in my case the hanging happens non-deterministically. Funnily if I use subset of data using --limit_train_batches the trains runs fine. However, I monitor GPU mem usage and it never goes above 91/92%. Any suggestions would be most appreciated. Is there a way to at least induce an error message and failure. For example, on AWS SageMaker, stalled model does not fail the job and it continues accumulating costs. I do not want to use other parallel backends as they are much slower making 4-GPU parallelism cost-ineffective. Expected behavior Model runs in multi-gpu DDP model without stalling. Environment Using AWS p3* instances CUDA: GPU: Tesla V100-SXM2-16GB available: True version: 10.1 Packages: numpy: 1.20.1 pyTorch_debug: False pyTorch_version: 1.4.0 (also tried 1.6.0) pytorch-lightning: 1.2.3 tqdm: 4.57.0 System: OS: Linux architecture: 64bit ELF processor: x86_64 python: 3.8.5 version: #40-Ubuntu SMP Fri Feb 5 23:50:40 UTC 2021
CUDA memory leak after batch size finder
[ "bug", "help wanted" ]
πŸ› Bug Using transformers + AdamW optimizer + batch size finder results in ~2 - 3 GB GPU memory not being freed after trainer.tune (for xlm-roberta-base). This causes OOM issues on a subsequent call of trainer.fit. I suspect that the state of the AdamW optimizer causes this issue. Please reproduce using the BoringModel https://colab.research.google.com/drive/1cugaUmLzNvk-38OyV8zyT9M9xQY4LkfH#scrollTo=j4w0wizx5XxJ Expected behavior GPU memory should be freed after the batch size finder (up to the model which may stay on GPU). Environment CUDA: GPU: Tesla T4 available: True version: 10.1 Packages: numpy: 1.19.5 pyTorch_debug: False pyTorch_version: 1.8.0+cu101 pytorch-lightning: 1.2.4 tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.7.10 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Trainer.predict(), LightningModule.predict(), and LightningDataModule.predict_dataloader() are not in documentation
[ "docs" ]
📚 Documentation Currently the Trainer.predict() method, along with the LightningModule.predict() and LightningDataModule.predict_dataloader() hooks, is not mentioned anywhere in the documentation, so users don't know about their existence unless they've read through the source files. This led to confusion on my part since I saw the write_prediction and write_prediction_dict methods get added to LightningModule, as well as the limit_predict_batches Trainer flag, but it wasn't clear how you were supposed to actually use them. PS: While we're at it, the training, testing, predicting, tuning, and evaluating properties on Trainer aren't in the documentation either. I'm not sure if that is intentional or not; I can understand not wanting to expose the setters as public APIs, but the getters are definitely useful if the core training_step, validation_step, etc. methods on your LightningModule call helper methods that in turn need to know what stage they're in.
trainer.test() fails when using both auto_lr_find and ModelPruning
[ "bug", "help wanted", "won't fix", "priority: 1" ]
πŸ› Bug Hi, wasn't able to reproduce properly with BoringModel, but did with small CIFAR10 example. Description trainer.test() errors out when I use both ModelPruning callback and auto_lr_find. Disabling either of these makes trainer.test() work again. I'm using mainline of pytorch-lightning. Example import os import torch from torch import nn from torch.nn import functional as F from torch.utils.data import DataLoader, random_split from torchvision.datasets import CIFAR10 from torchvision import transforms import pytorch_lightning as pl from pytorch_lightning.metrics.functional import accuracy from pytorch_lightning.callbacks import ModelPruning class LitCIFAR10(pl.LightningModule): def __init__(self, data_dir='./', hidden_size=64, learning_rate=2e-4, batch_size=128): super().__init__() # Set our init args as class attributes self.data_dir = data_dir self.hidden_size = hidden_size self.learning_rate = learning_rate self.bs = batch_size # Hardcode some dataset specific attributes self.num_classes = 10 self.dims = (3, 32, 32) channels, width, height = self.dims mean = [0.4913997551666284, 0.48215855929893703, 0.4465309133731618] std = [0.24703225141799082, 0.24348516474564, 0.26158783926049628] self.transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean, std) ]) # Define PyTorch model self.model = nn.Sequential( nn.Flatten(), nn.Linear(channels * width * height, hidden_size), nn.ReLU(), nn.Dropout(0.1), nn.Linear(hidden_size, hidden_size), nn.ReLU(), nn.Dropout(0.1), nn.Linear(hidden_size, self.num_classes) ) def forward(self, x): x = self.model(x) return F.log_softmax(x, dim=1) def training_step(self, batch, batch_idx): x, y = batch logits = self(x) loss = F.nll_loss(logits, y) return loss def validation_step(self, batch, batch_idx): x, y = batch logits = self(x) loss = F.nll_loss(logits, y) preds = torch.argmax(logits, dim=1) acc = accuracy(preds, y) # Calling self.log will surface up scalars for you in TensorBoard self.log('val_loss', loss, prog_bar=True) self.log('val_acc', acc, prog_bar=True) return loss def test_step(self, batch, batch_idx): # Here we just reuse the validation_step for testing return self.validation_step(batch, batch_idx) def configure_optimizers(self): optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate) return optimizer #################### # DATA RELATED HOOKS #################### def prepare_data(self): # download return CIFAR10(self.data_dir, train=True, download=True) CIFAR10(self.data_dir, train=False, download=True) def setup(self, stage=None): # Assign train/val datasets for use in dataloaders if stage == 'fit' or stage is None: cifar10_full = CIFAR10(self.data_dir, train=True, transform=self.transform) self.cifar10_train, self.cifar10_val = random_split(cifar10_full, [45000, 5000]) # Assign test dataset for use in dataloader(s) if stage == 'test' or stage is None: self.cifar10_test = CIFAR10(self.data_dir, train=False, transform=self.transform) def train_dataloader(self): return DataLoader(self.cifar10_train, batch_size=self.bs, num_workers=os.cpu_count()) def val_dataloader(self): return DataLoader(self.cifar10_val, batch_size=self.bs, num_workers=os.cpu_count()) def test_dataloader(self): return DataLoader(self.cifar10_test, batch_size=self.bs, num_workers=os.cpu_count()) model = LitCIFAR10() prune = ModelPruning( pruning_fn='l1_unstructured', parameter_names=['weight', 'bias'], amount=0.02, use_global_unstructured=True, ) trainer = pl.Trainer(gpus=1, max_epochs=2, auto_lr_find=True, # either comment this line 
out, callbacks=[prune] # or comment this line out ) trainer.tune(model) trainer.fit(model) trainer.test() # and this will then work
during training, model not able to save checkpoint at the end of every epoch
[ "bug", "help wanted" ]
πŸ› Bug when I try to train the model, the model not saving checkpoint at the end of every epoch PyTorch Version : 1.1.4 OS: Linux 18.04 How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): Python version: 3.6 CUDA/cuDNN version: cuda 10.0 GPU models and configuration: cuda 10.0
CodeCarbon monitor callback
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Carbon emissions monitoring. Motivation https://mlco2.github.io/impact/#about Pitch Add a callback like lr_monitor, but to monitor estimated emissions. Simplest way would be to use the codecarbon library, although this wouldn't support distributed training (maybe possible, could at least log emissions for each node?). Alternatives This does currently exist in Comet.ML, but it would be good to not be tied to it. Could also be done just using the raw data here: https://github.com/mlco2/impact, but then need to also monitor approximate power consumption. Additional context
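A hedged sketch of what such a callback could look like on top of the codecarbon package; the EmissionsTracker start/stop usage is an assumption based on its documented interface, and distributed training is ignored here:

```python
from codecarbon import EmissionsTracker
from pytorch_lightning.callbacks import Callback

class CarbonMonitor(Callback):
    def on_train_start(self, trainer, pl_module):
        self.tracker = EmissionsTracker()
        self.tracker.start()

    def on_train_end(self, trainer, pl_module):
        emissions = self.tracker.stop()  # estimated kg CO2eq for the run
        if trainer.logger is not None:
            trainer.logger.log_metrics({"carbon/emissions_kg": emissions})
```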
performance drop from v1.1.8 to >= v1.2.0 when using metrics
[ "bug", "help wanted", "working as intended", "priority: 2" ]
πŸ› Bug Hi, I'm observing longer epoch times for pytorch lightning versions >= 1.2.0 when using metrics in the training or validation steps. Depending on the model, dataset etc I observed an increase of 10 seconds per epoch. I tested it using Accuracy() and ConfusionMatrix() and the impact is larger for the former. Removing the metrics results in the same epoch times regardless of the lightning version. My colleagues and I verified this on different machines using a simple mnist model. This behavior also occurs with the soon-to-be deprecated internal metrics and the torchmetrics variant of Accuracy() Currently, the mnist host website is down, so I replicated this using cifar10 in the boring model notebook, to which a link can be found below. The impact in this example is not as severe as observed with our mnist example, but still in the range of 3-5 seconds per epoch on my machine, which some might find neglectable, but this adds up. The problem using colab is, that I was not able to install pytorch lightning 1.1.8, since it is incompatible with the current torchtext library, which is imported by lightning if available, and is installed in the colab environment by default. Uninstalling it using pip did not work. To verify this one unfortunately has to run this somewhere else The printed epoch/fit times in the notebook where only measured using 1.2.4. The following times for 3 epochs and the complete fit were measured on my machine using a V100 for 1.1.8 and 1.2.4: pl 1.2.4: epoch: 64 s epoch: 64 s epoch: 64 s fit: 198 s pl 1.1.8 epoch: 59 s epoch: 58 s epoch: 59 s fit: 182 s I verified this behavior for all versions since 1.2.0 including the 1.2.0rcs. Please reproduce using the BoringModel https://colab.research.google.com/drive/1ILDOsZqlOKplrdKBkuGUiNiWnJwYYYoW?usp=sharing To Reproduce Use following https://colab.research.google.com/drive/1ILDOsZqlOKplrdKBkuGUiNiWnJwYYYoW?usp=sharing Expected behavior Same epoch times regardless of lightning version Environment this is the machine used for the times above CUDA: GPU: Tesla V100-SXM2-32GB Tesla V100-SXM2-32GB available: True version: 10.2 Packages: numpy: 1.20.1 pyTorch_debug: False pyTorch_version: 1.8.0 pytorch-lightning: 1.2.4 tqdm: 4.59.0 System: OS: Linux architecture: 64bit ELF processor: x86_64 python: 3.8.8 version: #141-Ubuntu SMP Fri Feb 19 13:46:27 UTC 2021 Additional context
ModelCheckpoint not accepting argument 'filepath'
[ "bug", "help wanted" ]
πŸ› Bug When filepath is used as an argument in ModelCheckpoint it says TypeError: __init__() got an unexpected keyword argument 'filepath' Link for the colab notebook: https://colab.research.google.com/drive/1-ECmP0JTPYXFSDt__K93Cm4wzW7OdyXS?usp=sharing Expected behavior Environment CUDA: GPU: available: False version: 10.1 Packages: numpy: 1.19.5 pyTorch_debug: False pyTorch_version: 1.8.0+cu101 pytorch-lightning: 1.3.0dev tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.7.10 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Calling trainer.test() when using fast_dev_run throws confusing error
[ "bug", "help wanted" ]
πŸ› Bug Calling trainer.test() when using fast_dev_run throws confusing error: Traceback (most recent call last): File "main.py", line 89, in <module> trainer.test(test_dataloaders=test) File "/home/ash/miniconda3/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 916, in test results = self.__test_using_best_weights(ckpt_path, test_dataloaders) File "/home/ash/miniconda3/envs/tmp/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 927, in __test_using_best_weights raise MisconfigurationException( pytorch_lightning.utilities.exceptions.MisconfigurationException: ckpt_path is "best", but ModelCheckpoint is not configured to save the best model. Please reproduce using the BoringModel from pytorch_lightning import LightningModule import torch from torch.utils.data import DataLoader, Dataset import pytorch_lightning as pl class RandomDataset(Dataset): def __init__(self, size, num_samples): self.len = num_samples self.data = torch.randn(num_samples, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len class BoringModel(LightningModule): def __init__(self): super().__init__() self.layer = torch.nn.Linear(32, 2) def forward(self, x): return self.layer(x) def loss(self, batch, prediction): return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction)) def training_step(self, batch, batch_idx): output = self.layer(batch) loss = self.loss(batch, output) return {"loss": loss} def training_step_end(self, training_step_outputs): return training_step_outputs def training_epoch_end(self, outputs) -> None: torch.stack([x["loss"] for x in outputs]).mean() def validation_step(self, batch, batch_idx): output = self.layer(batch) loss = self.loss(batch, output) return {"x": loss} def validation_epoch_end(self, outputs) -> None: torch.stack([x['x'] for x in outputs]).mean() def test_step(self, batch, batch_idx): output = self.layer(batch) loss = self.loss(batch, output) self.log('fake_test_acc', loss) return {"y": loss} def test_epoch_end(self, outputs) -> None: torch.stack([x["y"] for x in outputs]).mean() def configure_optimizers(self): optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1) lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1) return [optimizer], [lr_scheduler] num_samples = 10000 train = RandomDataset(32, num_samples) train = DataLoader(train, batch_size=32) val = RandomDataset(32, num_samples) val = DataLoader(val, batch_size=32) test = RandomDataset(32, num_samples) test = DataLoader(test, batch_size=32) model = BoringModel() trainer = pl.Trainer( fast_dev_run=True ) trainer.fit(model, train, val) trainer.test(test_dataloaders=test)
Revisit CONTRIBUTING.md
[ "docs", "priority: 1" ]
πŸ“š Documentation Update CONTRIBUTING.md to include detailed steps and missing guidelines Pytorch-Lightning developer machine setup. The setup instructions are currently under "Testing" section, and they don't have information about all the necessary dependencies. We should rename the section to make it easy for developers to find it, add missing steps, as well as add debugging/testing/troubleshooting guidelines. A good example to follow is Pytorch CONTRIBUTING.md. Some of the missing steps I and my colleagues learned while setting up Pytorch-Lightning for development on local machine: Install dependencies from requirements.txt (I am new to OSS development, and didn't run "pip install -r requirements.txt", directly went to "Testing" instructions and ran commands from there, resulting into later having to install bunch of dependencies manually after running "make test" and getting bunch of test failures). brew install cmake brew install libuv brew install pkg-config After the setup, I still ran into errors, and had to manually install torch.
Wall time auto-resubmit not working
[ "bug", "help wanted", "won't fix", "environment: slurm", "priority: 1" ]
Hi :) I fear the wall time auto-resubmit is not working for me. I'm using this submit script: #SBATCH --job-name=NORA_test_MNIST #SBATCH --ntasks=1 #SBATCH --cpus-per-task=1 #SBATCH --mem=12000 #SBATCH --gres=gpu:1 ### #SBATCH --mem-per-gpu=11000 #SBATCH -o /home/%u/%x-%j-on-%N.out #SBATCH -e /home/%u/%x-%j-on-%N.err #SBATCH --mail-type=ALL #Timelimit format: "hours:minutes:seconds" -- max is 24h #SBATCH --time=00:15:00 #Resubmit (provided by pytorch lightning) #SBATCH --signal=SIGUSR1@90 echo "Your job is running on" $(hostname) python3 mnist_model.py mnist_model.py is the model from https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09 Pytorch Lightning 1.2.4 is installed on the cluster. In the error file I get: TPU available: None, using: 0 TPU cores 2021-03-22 11:19:52.797360: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 /home/gourmelon/.local/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:51: UserWarning: GPU available but not used. Set the --gpus flag when calling the script. warnings.warn(*args, **kwargs) Set SLURM handle signals. | Name | Type | Params ----------------------------------- 0 | layer_1 | Linear | 100 K 1 | layer_2 | Linear | 33.0 K 2 | layer_3 | Linear | 2.6 K ----------------------------------- 136 K Trainable params 0 Non-trainable params 136 K Total params 0.544 Total estimated model params size (MB) slurmstepd-lme171: error: *** JOB 656086 ON lme171 CANCELLED AT 2021-03-22T11:34:49 DUE TO TIME LIMIT *** In the out file I simply see the progress of training. I also assured that the SIGUSR1 is actually sent using this script: #!/bin/bash # #SBATCH --job-name=pers_job_debug #SBATCH --dependency=singleton #SBATCH --signal=B:SIGUSR1@90 #SBATCH --time=00:05:00 #SBATCH -o /cluster/%u/%x-%j-on-%N.out #SBATCH -e /cluster/%u/%x-%j-on-%N.err # catch the SIGUSR1 signal _resubmit() { ## Resubmit the job for the next execution echo "$(date): job $SLURM_JOBID received SIGUSR1 at $(date), re-submitting" sbatch $0 } trap _resubmit SIGUSR1 echo "$(date): job $SLURM_JOBID starting on $SLURM_NODELIST" while true; do echo "$(date): normal execution" sleep 60 done This script gives me: Mon 22 Mar 2021 11:35:52 AM CET: normal execution Mon 22 Mar 2021 11:36:52 AM CET: normal execution Mon 22 Mar 2021 11:37:52 AM CET: normal execution Mon 22 Mar 2021 11:38:52 AM CET: normal execution Mon 22 Mar 2021 11:39:52 AM CET: job 656090 received SIGUSR1 at Mon 22 Mar 2021 11:39:52 AM CET, re-submitting Submitted batch job 656092 Mon 22 Mar 2021 11:39:52 AM CET: normal execution Any help is highly appreciated! Please also tell me if the example is not minimal enough (this is my first minimal example :D) or if I should try something else first.
ddp: Consider warning / failing fast if single proc relies on `len(gpus) >= 8` for CUDA, due to peer-to-peer resource limitations?
[ "feature", "help wanted", "won't fix", "distributed" ]
πŸš€ Feature If using accelerator="ddp" (or any other backend that may trigger this error), ideally there's a warning and/or error about possibly encountering this issue, e.g.: WARNING: You are requesting > 8 GPU devices be shared via CUDA's peer-to-peer protocol, but it generally supports <= 8 devices. Please consider distributing this across multiple nodes (e.g. instead of `--gpus 8`, use `--num_nodes 2 --gpus 4`). Motivation I just ran into this failure mode on the same instance type (p2.16xlarge) today. https://forums.developer.nvidia.com/t/cuda-peer-resources-error-when-running-on-more-than-8-k80s-aws-p2-16xlarge/45351#4994583 (FWIW, I uh, locked the EC2 instance, tried to reboot which didn't work, then tried to stop-then-start and was unable to relaunch due to zone availability πŸ˜…) We are using pytorch-lightning == 1.2.0. Scrubbed stacktrace: Traceback (most recent call last): File "{source}/train_pl.py", line 248, in <module> main() File "{source}/train_pl.py", line 235, in main trainer.fit(model, dataloaders["train"], dataloaders["val"]) File "{venv}/pytorch_lightning/trainer/trainer.py", line 510, in fit self.pre_dispatch() File "{venv}/pytorch_lightning/trainer/trainer.py", line 539, in pre_dispatch self.accelerator.pre_dispatch() File "{venv}/pytorch_lightning/accelerators/accelerator.py", line 84, in pre_dispatch self.training_type_plugin.pre_dispatch() File "{venv}/pytorch_lightning/plugins/training_type/ddp.py", line 254, in pre_dispatch self.model_to_device() File "{venv}/pytorch_lightning/plugins/training_type/ddp.py", line 279, in model_to_device self.model.to(self.root_device) File "{venv}/pytorch_lightning/core/decorators.py", line 89, in inner_fn module = fn(self, *args, **kwargs) File "{venv}/pytorch_lightning/utilities/device_dtype_mixin.py", line 120, in to return super().to(*args, **kwargs) File "{venv}/pytorch/torch/nn/modules/module.py", line 612, in to return self._apply(convert) File "{venv}/pytorch/torch/nn/modules/module.py", line 359, in _apply module._apply(fn) File "{venv}/pytorch/torch/nn/modules/module.py", line 359, in _apply module._apply(fn) File "{venv}/pytorch/torch/nn/modules/module.py", line 359, in _apply module._apply(fn) File "{venv}/pytorch/torch/nn/modules/module.py", line 381, in _apply param_applied = fn(param) File "{venv}/pytorch/torch/nn/modules/module.py", line 610, in convert return t.to(device, dtype if t.is_floating_point() else None, non_blocking) RuntimeError: cuda runtime error (711) : peer mapping resources exhausted at /pytorch/aten/src/THC/THCGeneral.cpp:139 Pitch See above. Alternatives Can't think of a quick one. Additional context None necessary.
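A hedged sketch of where such a guard could live; the function name and the hard-coded limit are illustrative, not existing Lightning code:

```python
from pytorch_lightning.utilities import rank_zero_warn

MAX_P2P_DEVICES = 8  # assumed CUDA peer-to-peer limit per node

def check_peer_to_peer_limit(num_nodes: int, gpus_per_node: int) -> None:
    if gpus_per_node > MAX_P2P_DEVICES:
        rank_zero_warn(
            f"You are requesting {gpus_per_node} GPUs per node, but CUDA peer-to-peer "
            f"generally supports <= {MAX_P2P_DEVICES} devices. Consider spreading the job "
            f"across more nodes with fewer GPUs each."
        )
```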
External MLFlow logging failures cause training job to fail
[ "bug", "help wanted", "won't fix", "logger", "3rd party" ]
πŸ› Bug I am using a pytorch_lightning.loggers.mlflow.MLFlowLogger during training, with the MLFlow tracking URI hosted in Databricks. When Databricks updates, we sometimes lose access to MLFlow for a brief period. When this happens, logging to MLFlow fails with the following error: urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host=XXX.cloud.databricks.com, port=443): Max retries exceeded with url: /api/2.0/mlflow/runs/get?XXX (Caused by NewConnectionError(<urllib3.connection.HTTPSConnection object at 0x7fbbd6096f50>: Failed to establish a new connection: [Errno 111] Connection refused)) Not only does logging fail, but with PyTorch Lightning, an error logging means the entire training pipeline will also fail, losing progress on a potentially long-running job with limited error handling options currently available. Ideally, there would be flexibility in PyTorch Lightning to allow users to handle logging errors such that it will not always kill the training job. Please reproduce using the BoringModel https://colab.research.google.com/drive/17TqdKZ8SjcdpiCWc76N5uQc5IKIgNp7g?usp=sharing To Reproduce Attempt to use a logger that fails to log. The training job will fail, losing all progress. Expected behavior There is an option to handle exceptions from the logger such that the job does not automatically die if logging a parameter fails. Environment CUDA: GPU: Tesla T4 available: True version: 10.1 Packages: numpy: 1.19.5 pyTorch_debug: False pyTorch_version: 1.8.0+cu101 pytorch-lightning: 1.2.4 tqdm: 4.41.1 System: OS: Linux architecture: 64bit processor: x86_64 python: 3.7.10 version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 Additional context
Add illustration of hooks in the LightningModule
[ "feature", "docs", "priority: 0" ]
Add a code snippet showing when each of the LightningModule hooks is being called.
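A rough, hedged pseudocode sketch of the kind of illustration being requested; the ordering is simplified, hook signatures are abbreviated, and the list is not exhaustive:

```python
# Pseudocode only: a simplified view of when common hooks fire during fit().
def fit(model, train_dataloader, val_dataloader, max_epochs):
    model.setup("fit")
    model.on_fit_start()
    for epoch in range(max_epochs):
        model.on_train_epoch_start()
        for batch_idx, batch in enumerate(train_dataloader):
            loss = model.training_step(batch, batch_idx)
            # backward(), optimizer_step() and optimizer_zero_grad() happen here
        model.on_validation_epoch_start()
        for batch_idx, batch in enumerate(val_dataloader):
            model.validation_step(batch, batch_idx)
        model.on_validation_epoch_end()
        model.on_train_epoch_end()
    model.on_fit_end()
    model.teardown("fit")
```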
pl.Trainer.add_argparse_args() does not work with argparse subparsers
[ "bug", "help wanted" ]
πŸ› Bug pytorch-lightning/pytorch_lightning/utilities/argparse.py Lines 157 to 160 in 7114c2d parser = ArgumentParser( parents=[parent_parser], add_help=False, ) will override the parent_parser so the sub parser will not work as expected. Please see the code in reproduce section. I think the current version from master with use_argument_group=True will fix this. pytorch-lightning/pytorch_lightning/utilities/argparse.py Lines 190 to 197 in efce2b7 if use_argument_group: group_name = get_abbrev_qualified_cls_name(cls) parser = parent_parser.add_argument_group(group_name) else: parser = ArgumentParser( parents=[parent_parser], add_help=False, ) Please reproduce using the BoringModel To Reproduce in a cli.py file import argparse import pytorch_lightning as pl parser = argparse.ArgumentParser("") sub_parsers = parser.add_subparsers() train_parser = sub_parsers.add_parser("train") train_parser.add_argument("--seed") train_parser = pl.Trainer.add_argparse_args(train_parser) args = parser.parse_args() Run python cli.py train --help will not show the training flags for Trainer Expected behavior Run python cli.py train --help show the training flags for Trainer Environment PyTorch Version (e.g., 1.0): 1.7.0 OS (e.g., Linux): MacOS Big Sur 11.2.3 How you installed PyTorch (conda, pip, source): conda Python version: 3.7.10 Additional context
`Trainer.predict` stops gradients globally until `torch.set_grad_enabled(True)` is called
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug After Trainer.predict is called, gradients are never turned back on. This can cryptically impact tests (see #6595). See error in notebook: https://colab.research.google.com/drive/1vbKcVwApZEcWX_ryyx-2BJXdLD_nkFBG?usp=sharing
Remove requirement of PyYAML!=5.4.x
[ "feature", "help wanted", "let's do it!" ]
πŸš€ Feature Remove dependency requirement of PyYAML!=5.4.x Motivation According to safety PyYAML versions below 5.4 have a security vulnerability which means that our pre-commit hooks don't allow upgrading to a newer versions of lightning. -> pyyaml, installed 5.3.1, affected <5.4, id 39611 A vulnerability was discovered in the PyYAML library in versions before 5.4, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. This flaw allows an attacker to execute arbitrary code on the system by abusing the python/object/new constructor. This flaw is due to an incomplete fix for CVE-2020-1747. See CVE-2020-14343. Pitch Figure out why 5.4.x doesn't work for lightning and remove this requirement if possible.
Add ability to specify `artifact_location` when using `MLFlowLogger`
[ "feature", "help wanted" ]
πŸš€ Feature Expose argument artifact_location of MlflowClient.create_experiment when it is called inside MLFlowLogger.experiment, this can be set with an argument when MLFlowLogger is instantiated along with tracking_uri or save_dir, to specify a custom location to save artifacts. Motivation When I use the save_dir argument of MLFlowLogger to specify a location, different from my current working directory, to save my mlruns/, the artifacts logged using the same MLFlowLogger still get saved under my current working directory, which is kind of messy.
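A hedged sketch of the proposed usage; the artifact_location argument does not exist yet and is exactly what this request asks for:

```python
from pytorch_lightning.loggers import MLFlowLogger

logger = MLFlowLogger(
    experiment_name="my_experiment",
    tracking_uri="file:/data/mlflow/mlruns",
    artifact_location="/data/mlflow/artifacts",  # proposed: forwarded to MlflowClient.create_experiment
)
```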
ImportError: cannot import name 'PY3' from 'torch._six'
[ "bug", "help wanted" ]
I tried running the example from the section 'RAPID PROTOTYPING TEMPLATES' of the pytorch lightning documentation (https://pytorch-lightning.readthedocs.io/en/latest/starter/rapid_prototyping_templates.html), but it gives the following error: "ImportError: cannot import name 'PY3' from 'torch._six' (/usr/local/lib/python3.7/dist-packages/torch/_six.py)" when trying to make this import from pl_bolts.datasets import DummyDataset. Is it a problem of versions?
Use a pickle-alternative for serialization
[ "feature", "help wanted", "3rd party" ]
πŸš€ Feature Right now, if you use ddp_spawn or ddp-cpu, you are forced to make everything in your script pickleable. This is unfortunate, because there are many things that pickle cannot serialize correctly (e.g. lambda functions). One particular point of conflict is developing in a notebook -- if you write your pl.LightningModule in the notebook itself, it is attached to __main__ and cannot be pickled (and thus you cannot use ddp_spawn). There are alternative serialization libraries, most notably cloudpickle and dill. I'm most familiar with cloudpickle, which is what Dask, a super popular parallel computing library, uses. I'm not super familiar with how communication is handled, but would it be possible to use one of these libraries instead of pickle for serialization? I think that would make DDP much easier to use (especially for the interactive Jupyter notebook development usecase).
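To illustrate the gap, a small hedged comparison sketch (requires the cloudpickle package to be installed):

```python
import pickle
import cloudpickle

square = lambda x: x ** 2  # lambdas and __main__-defined objects trip up plain pickle

try:
    pickle.dumps(square)
except (pickle.PicklingError, AttributeError, TypeError) as err:
    print("pickle failed:", err)

payload = cloudpickle.dumps(square)   # cloudpickle serializes it by value
print(cloudpickle.loads(payload)(4))  # -> 16
```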