title | labels | bodyText
---|---|---
NCCL 2.7.8 error, ncclSystemError: System call (socket, malloc, munmap, etc) failed | [
"bug",
"help wanted"
] | 🐛 Bug
I use the official PyTorch image pytorch/pytorch:1.8.0-cuda11.1-cudnn8-runtime and installed pytorch-lightning on top of it to use multi-GPU training (4x RTX 3090), and ran into this problem. It seems to be a PyTorch problem; how can I tackle this?
Full stack:
`/opt/conda/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:52: UserWarning: MASTER_ADDR environment variable is not defined. Set as localhost
warnings.warn(*args, **kwargs)
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/4
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/4
initializing ddp: GLOBAL_RANK: 2, MEMBER: 3/4
initializing ddp: GLOBAL_RANK: 3, MEMBER: 4/4
Traceback (most recent call last):
File "/ghome/luoxin/projects/liif-lightning-hydra/run.py", line 34, in main
return train(config)
File "/ghome/luoxin/projects/liif-lightning-hydra/src/train.py", line 78, in train
trainer.fit(model=model, datamodule=datamodule)
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
self.dispatch()
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
self.accelerator.start_training(self)
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
self.training_type_plugin.start_training(trainer)
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 108, in start_training
mp.spawn(self.new_process, **self.mp_spawn_kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 2 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 157, in new_process
self.configure_ddp()
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 195, in configure_ddp
self._model = DistributedDataParallel(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 446, in __init__
self._sync_params_and_buffers(authoritative_rank=0)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 457, in _sync_params_and_buffers
self._distributed_broadcast_coalesced(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1155, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1614378083779/work/torch/lib/c10d/ProcessGroupNCCL.cpp:825, unhandled system error, NCCL version 2.7.8
ncclSystemError: System call (socket, malloc, munmap, etc) failed.`
To Reproduce
Steps to reproduce the behavior:
git clone -b selfuse https://github.com/LuoXin-s/liif-lightning-hydra.git
Update results_dir to your directory in configs/config.yaml
Run python run.py trainer.max_epochs=5 to quickly test it
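Not part of the original report, but a hedged first diagnostic step is to turn on NCCL's own logging before the Trainer is created, so the failing system call becomes visible (the interface name in the commented line is only an example):
import os

# Assumption: these NCCL environment variables are set before any DDP process group is created.
os.environ.setdefault("NCCL_DEBUG", "INFO")
os.environ.setdefault("NCCL_DEBUG_SUBSYS", "ALL")
# If the machine has several network interfaces, pinning one is a common workaround:
# os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")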
Expected behavior
Successful use of multi-GPU
Environment
PyTorch version: 1.8.0
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 3090
GPU 1: GeForce RTX 3090
GPU 2: GeForce RTX 3090
GPU 3: GeForce RTX 3090
Nvidia driver version: 460.67
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] pytorch-lightning==1.2.5
[pip3] torch==1.8.0
[pip3] torchelastic==0.2.2
[pip3] torchmetrics==0.2.0
[pip3] torchtext==0.9.0
[pip3] torchvision==0.9.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.3.0 py38h54f3939_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.2 py38h54aff64_0
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] pytorch 1.8.0 py3.8_cuda11.1_cudnn8.0.5_0 pytorch
[conda] pytorch-lightning 1.2.5 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchmetrics 0.2.0 pypi_0 pypi
[conda] torchtext 0.9.0 py38 pytorch
[conda] torchvision 0.9.0 py38_cu111 pytorch
Additional context |
Inconsistent accuracy with pl.metrics.Accuracy() across PL 1.1.8 and PL 1.2.x | [
"bug",
"help wanted"
] | 🐛 Bug
I have a simple binary segmentation model and train it to segment objects in an image. I measure the accuracy with pl.metrics.Accuracy(). After I switched from PL 1.1.8 to PL 1.2.x without any code changes, the accuracy values were different (see also my discussion topic).
I tried to reproduce the problem, and even if I seed everything and use synthetic data the problem persists. I have a BoringModel Colab which uses the same input image over and over again to overfit the network on purpose.
If I run this with PL 1.1.8 for 3 epochs, I get
loss=0.738, val_binary_segmentation_accuracy=0.375
DATALOADER:0 TEST RESULTS
{'fake_test_acc': 0.7458580136299133}
and with PL 1.2.5 I get
loss=0.738, val_binary_segmentation_accuracy=0.444
DATALOADER:0 TEST RESULTS
{'fake_test_acc': 0.7458580136299133}
As the loss and the test accuracy (which is also just the loss) are the same, I suspect the inconsistency is in the metric.
Please reproduce using the BoringModel
https://colab.research.google.com/drive/1eRgcdQvNWzcEed2eTj8paDnnQ0qplXAh?usp=sharing
To Reproduce
Run the Colab once with:
! pip install torchtext==0.8.0 torchvision==0.8.0 torch==1.7.1
! pip install pytorch-lightning==1.1.8
and once with
! pip install pytorch-lightning==1.2.5
Expected behavior
In this case, where everything is seeded and I show the network just the same image, I would expect the accuracy to be the same. |
Different training loss behavior between pytorch-lightning and pure pytorch | [
"bug",
"help wanted"
] | 🐛 Bug
You can see three train-loss curves here; actually there are four curves (in the legend below):
orange: pl on k80+cu110
red: pure pytorch on k80+cu110, which is fully overlapped under the orange one, so you can't see it.
grey: pl on p100+cu101
blue: pure pytorch on p100+cu101
I seeded everything in these two models and set deterministic=True. I checked that the random initial weights and the training data order are all the same. That the K80 and P100 curves don't overlap is not surprising, but what confuses me is that:
the two curves on the K80 (orange and red) overlap, yet the two curves on the P100 (grey and blue) don't;
on the P100 (grey and blue), during the first several training steps (about 10+), the PL and pure-PyTorch loss values are the same and the two curves overlap, but then the loss values become slightly different (about 0.00x), then more and more different, and the two curves no longer overlap;
the curves on the K80 (orange and red) eventually converge, yet on the P100 (grey and blue) they plateau at 0.5 and do not decrease further.
Since the strange behavior on the P100 can't be reproduced on the K80, and my full project is huge, I don't know where the bug could be or how to create a minimal demo to report it. Any ideas?
Environment
K80:
CUDA:
- GPU:
- Tesla K80
- Tesla K80
- Tesla K80
- Tesla K80
- available: True
- version: 11.0
Packages:
- numpy: 1.19.2
- pyTorch_debug: False
- pyTorch_version: 1.7.1
- pytorch-lightning: 1.2.4
- tqdm: 4.58.0
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.2
- version: #1 SMP Wed Feb 3 15:06:38 UTC 2021
P100:
CUDA:
- GPU:
- Tesla P40
- Tesla P40
- Tesla P40
- Tesla P40
- available: True
- version: 10.1
Packages:
- numpy: 1.19.5
- pyTorch_debug: False
- pyTorch_version: 1.4.0
- pytorch-lightning: 1.2.4
- tqdm: 4.57.0
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.6.5
- version: #1 SMP Wed Apr 1 17:49:22 CST 2020
Additional context |
In multi-gpus, trainer.test Error when ModelCheckpoint specifies the filename. | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
In DDP mode (Trainer(accelerator='ddp')) with multiple GPUs, the trainer cannot find the saved custom-named checkpoints when calling .test(), which leads to an error.
In addition, the program can hang and cannot be closed (even with CTRL-C), and the main GPU's volatile utilization stays at 100%.
The following ModelCheckpoint configuration leads to the problem:
ckpt = ModelCheckpoint(dirpath='./checkpoints/', monitor='valid_loss', mode='min', filename="{epoch}-{valid_loss:.4f}")
If the filename is removed, the program works well:
ckpt = ModelCheckpoint(dirpath='./checkpoints/', monitor='valid_loss', mode='min')
Please reproduce using the BoringModel
The code is based on the pl_example in https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/basic_examples/autoencoder.py.
The whole code could be found in https://paste.ubuntu.com/p/2Qp4GRns7f/
Environment
CUDA:
- GPU:
- GeForce RTX 3090
- GeForce RTX 3090
- available: True
- version: 11.1
Packages:
- numpy: 1.19.4
- pyTorch_debug: False
- pyTorch_version: 1.8.0+cu111
- pytorch-lightning: 1.2.5
- tqdm: 4.53.0
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.6.9
- version: #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021
Additional context |
Standardize API for logging images, histograms, etc. | [
"feature",
"help wanted",
"refactor"
] | 🚀 Feature
Standardized way to log non-scalar data
Motivation
Different logging backends supported by PL use different ways to log such data, so it would be very convenient to have a standardized way to do it. I mentioned it in Slack and got approval to make a PR, so I want to discuss the details here.
Pitch
Add methods to the PL logger class that allow logging images, histograms, or other data (audio, video, ...).
Additional context
What is better?
If the function is called, but current logger backend doesn't support this type of content:
silently ignore: we mislead the user by letting them call a method that silently does nothing, without any notification
throw an error: it can be undesired behavior if LoggerCollection is used
show a warning: IMO best solution
What signature should methods have?
log_images(images: Dict[str, image_types]): consistent with the existing log_metrics, but a new method would need to be added for each type of data (log_images, log_histograms, ...); IMO the best solution (see the sketch after this list)
log_image(tag, image): consistent with Tensorboard, but can be inconvenient when logging multiple images
log_metrics where image wrapper is passed as value: consistent with WandB (uses its own wandb.Image wrapper), but requires user to create wrappers and increases complexity
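A minimal sketch of what the first option could look like (hypothetical names; this is not an existing PL API, only an illustration of the proposal):
import warnings
from typing import Dict, Optional, Union

import numpy as np
import torch

ImageType = Union[np.ndarray, torch.Tensor]

class ImageLoggingMixin:
    # Hypothetical mixin for logger classes; backends that support images would override this.
    def log_images(self, images: Dict[str, ImageType], step: Optional[int] = None) -> None:
        # Default behavior follows the "show a warning" option above: warn and ignore.
        warnings.warn(f"{type(self).__name__} does not support image logging; call ignored.")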
I am ready to make PR when details are discussed and approved. |
Support Spawn for DeepSpeed | [
"feature",
"help wanted",
"won't fix",
"distributed",
"3rd party"
] | 🚀 Feature
Create a deepspeed_spawn for Notebooks. This will allow users to train within notebooks using DeepSpeed! There will probably be quite a lot of duplication if we go the current route with no mixins, so it's something to consider.
Motivation
User would like to use DeepSpeed in a notebook minimaxir/aitextgen#103 (comment) |
Support DDP communication hook for speeding up training | [
"feature",
"help wanted",
"distributed"
] | 🚀 Feature
Motivation
DDP communication hooks (https://pytorch.org/docs/1.8.0/ddp_comm_hooks.html) control how gradients are communicated across workers for the all_reduce in DistributedDataParallel. For example, fp16_compress_hook converts gradients to FP16 before the all-reduce, which is an effective way to improve training speed when using multiple nodes.
Pitch
In DDPPlugin, provide an option for specifying a DDP communication hook, and register it in configure_ddp: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/plugins/training_type/ddp.py#L227
def configure_ddp(self):
self.pre_configure_ddp()
self._model = DistributedDataParallel(
LightningDistributedModule(self.model),
device_ids=self.determine_ddp_device_ids(),
**self._ddp_kwargs,
)
register_ddp_comm_hook(
model=self._model,
ddp_comm_state=self._ddp_comm_state,
ddp_comm_hook=self._ddp_comm_hook,
ddp_comm_wrapper=self.ddp_comm_wrapper,
)
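For illustration, usage of the proposed option might look like the following sketch (the ddp_comm_hook argument is the hypothetical new option, not an existing DDPPlugin parameter; fp16_compress_hook itself ships with PyTorch 1.8):
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks as default
from pytorch_lightning import Trainer
from pytorch_lightning.plugins import DDPPlugin

trainer = Trainer(
    gpus=2,
    accelerator="ddp",
    plugins=[DDPPlugin(ddp_comm_hook=default.fp16_compress_hook)],  # proposed option, not yet in PL
)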
Alternatives
Additional context |
Number of steps per epoch doesn't match the number of batches in the train loader | [
"bug",
"help wanted",
"working as intended"
] | 🐛 Bug
When using the Trainer, the number of steps doesn't match the number of batches in the train loader.
For example, when using the BoringModel, we have 313 batches in the train dataloader; however, when training we get 626 train steps according to the progress bar.
Furthermore, when using the val_check_interval parameter in the trainer we get totally different results.
For val_check_interval=10 we get 10016 steps, for val_check_interval=50 we get 2191 steps, and for val_check_interval=100 we get 1252 steps.
To Reproduce
Use following BoringModel
Expected behavior
We expect number of steps to match dataloader length
Environment
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.19.5
pyTorch_debug: False
pyTorch_version: 1.8.0+cu101
pytorch-lightning: 1.2.5
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.10
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 |
Progress bar does not present training loss step when Ranger optimizer is used. | [
"bug",
"help wanted",
"won't fix",
"waiting on author",
"priority: 1"
] | 🐛 Bug
AdamW:
Ranger:
Please reproduce using the BoringModel
To Reproduce
Use pytorch_ranger or Less Wright's package https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer
The code is in my repo:
https://github.com/psandev/PyTorch_Lightning_YOLOv3
Expected behavior
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
PyTorch Version (1.71):
OS (Linux):
How you installed PyTorch (pip):
Build command you used (NA):
Python version: 3.7
CUDA/cuDNN version: 10.2
GPU models and configuration:
Any other relevant information:
Additional context |
NeptuneLogger ignores NEPTUNE_PROJECT environment variable | [
"bug",
"help wanted",
"won't fix",
"3rd party"
] | 🐛 Bug
If project_name is not given, NeptuneLogger does not fetch the project name from the NEPTUNE_PROJECT environment variable and instead raises a neptune.api_exceptions.ProjectNotFound with the message "Project None not found".
To Reproduce
https://colab.research.google.com/gist/cifkao/70eb23d9021d8470b3208c7eb3607abd/the-boringmodel-neptune.ipynb
Expected behavior
The project name should be fetched from the NEPTUNE_PROJECT environment variable.
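A hedged sketch of the expected fallback (the helper name is hypothetical and this is not NeptuneLogger's actual code):
import os
from typing import Optional

def resolve_neptune_project(project_name: Optional[str] = None) -> str:
    # Assumption: mirror the behavior the report expects -- fall back to the
    # NEPTUNE_PROJECT environment variable when no explicit project_name is given.
    resolved = project_name or os.environ.get("NEPTUNE_PROJECT")
    if not resolved:
        raise ValueError("No Neptune project given and NEPTUNE_PROJECT is not set.")
    return resolved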
Environment
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.19.5
pyTorch_debug: False
pyTorch_version: 1.8.1+cu101
pytorch-lightning: 1.2.6
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.10
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 |
`LightningModule.log(on_epoch, on_step)`: Hard to get same behavior for train and val? | [
"bug",
"help wanted",
"priority: 0",
"logging"
] | 🐛 Bug
I see a table of different default behaviors for log(), which leads me to believe that if I want train/eval to have the same behavior (e.g. reduction, frequency, etc.), I could just set the flags explicitly.
However, that doesn't seem to be the case.
I was seeing noisy validation values logged, whereas training values were smoother.
I wanted to see if I could make validation be smoothed (reduced) as is done for training, but couldn't figure it out easily from the available docs (the explicit flags are shown in the sketch after the links below).
Reviewed:
https://pytorch-lightning.readthedocs.io/en/1.2.6/api/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.log
https://pytorch-lightning.readthedocs.io/en/1.2.6/starter/new-project.html#logging
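For reference, a hedged example of setting the flags explicitly so that train and validation metrics are reduced the same way (the model and metric names are placeholders, not from the report):
import torch
import pytorch_lightning as pl

class ExplicitLoggingModel(pl.LightningModule):
    # Illustrative only: both steps pass identical on_step/on_epoch flags.
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        loss = self.layer(batch).mean()
        self.log("train_loss", loss, on_step=False, on_epoch=True, prog_bar=True)
        return loss

    def validation_step(self, batch, batch_idx):
        loss = self.layer(batch).mean()
        self.log("val_loss", loss, on_step=False, on_epoch=True, prog_bar=True)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)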
Please reproduce using the BoringModel
To Reproduce
Colab Notebook: https://colab.research.google.com/drive/1F0t9KXDbjfE9w9Jmbk0G0m9IaFwMmD8T
Expected behavior
When I explicitly set the values for .log(), I'd expect that training and validation metrics get logged the same as they do for custom logger.
Environment
See Notebook.
Additional context
N/A |
[RFC] Introduce a dataloader factory class to better manage data modules | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Create a new class to manage dataloaders
Proposal: https://docs.google.com/document/d/1c0dBmASUfQy0kIpliGD7sGmdzC0sgbuOQStkM02UySM/edit
Motivation
DataModules bundle together training, validation, and testing dataloaders
Often, we want to configure different dataloader settings for each of these phases:
Example: configure larger batch sizes for validation (no gradients so we can use more memory for batches/activations)
Example: handle uneven end of data differently across training vs validation (drop tail in training, wraparound for validation)
This combination of bundling different phases together with a variety of knobs creates complex DataModule initialization logic.
Furthermore, a generic DataModule is difficult to implement because it can be used for training (where the train dataloader must be defined, and optionally the val dataloader), for validation (where only the val dataloader must be defined), for testing (where only the test dataloader must be defined), or for prediction (where only the predict dataloader must be defined).
Pitch
from typing import Any, Dict, List, Mapping, Optional, Union
from abc import ABC, abstractmethod
import torch
from pytorch_lightning.core.hooks import CheckpointHooks
import pytorch_lightning as pl
from torch.utils.data import DataLoader
class DataLoaderFactory(ABC, CheckpointHooks):
def __init__(self) -> None:
# Pointer to the trainer object
# Placeholder until we define a proper TrainerContext class (e.g. frozen dataclass)
# to pass things like progress tracking, rank, or world size to the factory
self.trainer = None
# Private attrs to keep track of whether or not data hooks have been called yet
self._prepare_data_called: bool = False
self._setup_called: bool = False
self._teardown_called: bool = False
# This should not be a trainer setting
# https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#prepare-data-per-node
self.prepare_data_per_node: bool = True
def prepare_data(self) -> None:
pass
def setup(self) -> None:
pass
@abstractmethod
def get_dataloader(self) -> Union[DataLoader, List[DataLoader]]:
...
def teardown(self) -> None:
pass
@property
def prepare_data_called(self) -> bool:
return self._prepare_data_called
@prepare_data_called.setter
def prepare_data_called(self, val: bool) -> None:
self._prepare_data_called = val
@property
def setup_called(self) -> bool:
return self._setup_called
@setup_called.setter
def setup_called(self, val: bool) -> None:
self._setup_called = val
@property
def teardown_called(self) -> bool:
return self._teardown_called
@teardown_called.setter
def teardown_called(self, val: bool) -> None:
self._teardown_called = val
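To make the intent concrete, a minimal hypothetical factory built on the interface above might look like this (names and defaults are illustrative, not part of the proposal):
class RandomTrainLoaderFactory(DataLoaderFactory):
    # Illustrative example only: produces a shuffled loader over random data.
    def __init__(self, num_samples: int = 1000, batch_size: int = 32) -> None:
        super().__init__()
        self.num_samples = num_samples
        self.batch_size = batch_size
        self.dataset = None

    def setup(self) -> None:
        # Build the dataset lazily, once trainer context would be available.
        self.dataset = torch.utils.data.TensorDataset(torch.randn(self.num_samples, 32))

    def get_dataloader(self) -> DataLoader:
        return DataLoader(self.dataset, batch_size=self.batch_size, shuffle=True)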
We can add optional attributes inside of the datamodule for these classes, one for each of train/val/test/predict as a convenience, along with a convenience method to instantiate a datamodule from these factories similar to this:
pytorch-lightning/pytorch_lightning/core/datamodule.py, lines 343 to 398 at a72a799:
@classmethod
def from_datasets(
cls,
train_dataset: Optional[Union[Dataset, Sequence[Dataset], Mapping[str, Dataset]]] = None,
val_dataset: Optional[Union[Dataset, Sequence[Dataset]]] = None,
test_dataset: Optional[Union[Dataset, Sequence[Dataset]]] = None,
batch_size: int = 1,
num_workers: int = 0,
):
r"""
Create an instance from torch.utils.data.Dataset.
Args:
train_dataset: (optional) Dataset to be used for train_dataloader()
val_dataset: (optional) Dataset or list of Dataset to be used for val_dataloader()
test_dataset: (optional) Dataset or list of Dataset to be used for test_dataloader()
batch_size: Batch size to use for each dataloader. Default is 1.
num_workers: Number of subprocesses to use for data loading. 0 means that the
data will be loaded in the main process. Number of CPUs available.
"""
def dataloader(ds, shuffle=False):
return DataLoader(
ds,
batch_size=batch_size,
shuffle=shuffle,
num_workers=num_workers,
pin_memory=True,
)
def train_dataloader():
if isinstance(train_dataset, Mapping):
return {key: dataloader(ds, shuffle=True) for key, ds in train_dataset.items()}
if isinstance(train_dataset, Sequence):
return [dataloader(ds, shuffle=True) for ds in train_dataset]
return dataloader(train_dataset, shuffle=True)
def val_dataloader():
if isinstance(val_dataset, Sequence):
return [dataloader(ds) for ds in val_dataset]
return dataloader(val_dataset)
def test_dataloader():
if isinstance(test_dataset, Sequence):
return [dataloader(ds) for ds in test_dataset]
return dataloader(test_dataset)
datamodule = cls()
if train_dataset is not None:
datamodule.train_dataloader = train_dataloader
if val_dataset is not None:
datamodule.val_dataloader = val_dataloader
if test_dataset is not None:
datamodule.test_dataloader = test_dataloader
return datamodule
class LightningDataModule(...)
train_dataloader_factory: Optional[DataLoaderFactory] = None
val_dataloader_factory: Optional[DataLoaderFactory] = None
test_dataloader_factory: Optional[DataLoaderFactory] = None
predict_dataloader_factory: Optional[DataLoaderFactory] = None
@classmethod
def from_dataloader_factories(
cls,
train_dataloader_factory: Optional[DataLoaderFactory] = None,
val_dataloader_factory: Optional[DataLoaderFactory] = None,
test_dataloader_factory: Optional[DataLoaderFactory] = None,
predict_dataloader_factory: Optional[DataLoaderFactory] = None
):
datamodule = cls()
if train_dataloader_factory is not None:
datamodule.train_dataloader = train_dataloader_factory.get_dataloader
if val_dataloader_factory is not None:
datamodule.val_dataloader = val_dataloader_factory.get_dataloader
if test_dataloader_factory is not None:
datamodule.test_dataloader = test_dataloader_factory.get_dataloader
if predict_dataloader_factory is not None:
datamodule.predict_dataloader = predict_dataloader_factory.get_dataloader
return datamodule
This can also replace the raw dataloaders that are currently accepted on the Trainer.fit()/validate()/test()/predict() APIs - the classes here are an improvement as they can have access to trainer context which the existing dataloaders do not have.
cc @justusschock @awaelchli @carmocca
Alternatives
Additional context |
Validation metrics assumed to be logged within the first training epoch | [
"bug",
"help wanted",
"won't fix",
"priority: 1"
] | 🐛 Bug
In TrainLoop.on_train_end a call to check_checkpoint_callback is made. Within that method a call to on_validation_end is performed. As per the docs (and the fact that the ModelCheckpoint fires on on_validation_end), the expectation is to monitor validation metrics. However, if in the Trainer we set num_sanity_val_steps to 0 then validation metrics are never logged, resulting in a misconfiguration exception in _validate_monitor_key.
Note that this is only an issue on the first epoch -- after this the val keys appear in the callback metrics and this issue is moot.
Please reproduce using the BoringModel
To Reproduce
Use following BoringModel and post here
I cannot reproduce this with the BoringModel since it uses deprecated x_step methods (e.g. validation_step returns the loss rather than logs it). It should be updated to 1.2.6 in a different issue.
Expected behavior
If the model checkpoint only implements on_validation_end then it should only fire on that callback, not secretly in on_train_end. If it should fire in on_train_end it should either have a second monitor specific to the callback_metrics logged during training, or its logic should be moved out from under on_validation_end to a more general (less misleading) hook.
Note that the callbacks have access to the Trainer.state, so it is possible to move the ModelCheckpoint.on_validation_end logic into a higher level hook and leverage this state info. An elegant (imo) attribute to add to ModelCheckpoint could be monitor_state, so that for instance a user can say "monitor metric 'loss' but only while the trainer is in state 'train'".
class ModelCheckpoint(Callback):
def __init__(
self,
...
monitor: Optional[str] = None,
monitor_state: Optional[Union[str, List[str]]] = None,  # must be a subset of fit/validate/test/predict/etc.
...
):
...
Environment
On PL master (1.2.6)
PyTorch Version (e.g., 1.0): 1.7.1
OS (e.g., Linux): linux
How you installed PyTorch (conda, pip, source): conda
Build command you used (if compiling from source): N/A
Python version: 3.7
CUDA/cuDNN version: 10.2 |
BoringModel uses deprecated code | [
"bug",
"help wanted",
"waiting on author",
"priority: 2"
] | 🐛 Bug
The BoringModel is out of date. This model is used to help generate bug reports, but it uses syntax that was deprecated as of 1.2.6. This should be a simple fix to update, but I also propose making legacy BoringModels for pre-1.2.6 versions, in case a user has an issue with old code. |
LOCAL_RANK not being set in slurm | [
"bug",
"help wanted",
"priority: 0",
"environment: slurm"
] | 🐛 Bug
A lot of the PTL multiprocessing tooling depends on a specific environment variable, LOCAL_RANK, being set correctly. It seems that when running in SLURM this isn't set, causing it to return the default of 0 for all processes, which makes every process do things that should only be done on rank 0, like logging.
Also, I'm a little unclear about the name of that variable: if I have multiple nodes, only global rank 0, not the local rank, should be logging and saving checkpoints, etc.
To Reproduce
Run in SLURM (can't really do it with Colab). A good way to see it is to use the WandB logger: each process makes a new run in the WandB UI, which means that @rank_zero_experiment didn't work properly. You can confirm this by printing LOCAL_RANK, which defaults to 0 if unset; it will always give back 0.
Expected behavior
LOCAL_RANK is set correctly, or the rest of the tooling is aware of the global rank of the process (see the sketch below).
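A hedged illustration of the mapping being asked for, assuming the standard SLURM environment variables (SLURM_LOCALID is the rank within a node, SLURM_PROCID the global rank):
import os

# Assumption: fall back to SLURM's own variables when LOCAL_RANK is not exported.
local_rank = int(os.environ.get("LOCAL_RANK", os.environ.get("SLURM_LOCALID", 0)))
global_rank = int(os.environ.get("SLURM_PROCID", local_rank))
print(f"local_rank={local_rank} global_rank={global_rank}")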
Environment
Will update if it's really necessary |
Training stalls with DDP and iterable training dataset at validation step for any val_check_interval>1 | [
"feature",
"help wanted",
"waiting on author",
"priority: 1"
] | 🐛 Bug
Training stalls with DDP and iterable training dataset at validation step for any val_check_interval>1. It works fine for val_check_interval=1.
To Reproduce
Here is a toy example to reproduce:
import numpy as np
import pytorch_lightning as pl
import torch
from torch import nn
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import TensorBoardLogger
from torch.utils.data import IterableDataset, DataLoader, ChainDataset, get_worker_info
import torch.distributed as dist
class MyIterableDataset(IterableDataset):
def __init__(self, ds_list):
self.seq_len = 3
self.batch_size = 2
self.ds_list = ds_list
def process_data(self, worker_ds):
l = []
for d in worker_ds:
l.append(d)
pdata = np.concatenate(l)
batch_size_total = self.batch_size * self.seq_len
n_batches = len(pdata)//batch_size_total
pdata = pdata[:n_batches * batch_size_total]
pdata = pdata.reshape((self.batch_size, -1)).astype(np.float32)
for n in range(0, pdata.shape[1], self.seq_len):
itr = pdata[:, n:n+self.seq_len]
yield itr
def __iter__(self):
rank = dist.get_rank()
world_size = dist.get_world_size()
worker_info = get_worker_info()
total_chunks = world_size * worker_info.num_workers
chunk_id = (rank * worker_info.num_workers) + worker_info.id
ds_list = np.array(self.ds_list,dtype = 'object')
split = np.array_split(ds_list, total_chunks)
return (self.process_data(split[chunk_id]))
class TestDataset():
def __init__(self):
self.maxlen = 40
def process_data(self,ds):
out = np.zeros([self.maxlen,len(ds)])
n = []
for ii,pdata in enumerate(ds):
n.append(len(pdata))
out[:n[-1],ii] = pdata
return out[:max(n),:].astype(np.float32)
a = np.arange(20)*1.
b = np.arange(23)*-1.
c = np.arange(31)*0.01
d = np.arange(17)*-.01
data = [a,b,c,d]
class PnRDataModule(pl.LightningDataModule):
def __init__(self):
super().__init__()
self.batch_size = None
self.val_batch_size = 3
self.num_workers = 2
def setup(self, stage=None):
if stage == 'fit' or stage is None:
self.training_data = MyIterableDataset(data)
self.val_data = TestDataset().process_data(data)
def train_dataloader(self):
return DataLoader(
self.training_data,
batch_size=self.batch_size,
num_workers=self.num_workers,
pin_memory=True,
)
def val_dataloader(self):
return DataLoader(
self.val_data,
batch_size=self.val_batch_size,
num_workers=2,
pin_memory=True,
drop_last = True,
)
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.ff1 = nn.Linear(1,100)
self.ff2 = nn.Linear(100,1)
def forward(self, x):
x1 = self.ff1(x)
y = self.ff2(x1)
return y
class LM(pl.LightningModule):
def __init__(self):
super().__init__()
self.model = Model()
self.criterion = torch.nn.MSELoss(reduction='mean')
def forward(self,x):
logits= self.model.forward(x)
return logits
def training_step(self, batch, batch_idx):
batch = batch.unsqueeze(2)
batch = batch.permute([1,0,2])
logits = self.model.forward(batch)
loss = self.criterion(logits, batch)
self.log('train_loss', loss, prog_bar=True)
return loss
def validation_step(self, batch, batch_idx):
logits = self.model.forward(batch.unsqueeze(2))
loss = self.criterion(logits.squeeze(2), batch)
self.log('val_loss', loss, prog_bar=True)
return loss
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=0.001)
return optimizer
def main():
dm = PnRDataModule()
model = LM()
checkpoint_callback = ModelCheckpoint(dirpath='Models/MMD',monitor="val_loss", mode="min")
trainer = pl.Trainer(max_epochs=10, gpus=2,callbacks=[checkpoint_callback], accelerator='ddp',val_check_interval=2)
trainer.fit(model, dm)
if __name__ == "__main__":
main()
Expected behavior
Training to work with any val_check_interval.
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
PyTorch Version (e.g., 1.0): 1.8.0
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): conda
Build command you used (if compiling from source):
Python version: 3.6.10
CUDA/cuDNN version: 11.1
GPU models and configuration: 4x V100 GPU
Any other relevant information:
Additional context
cc @tchaton |
Some properties of LightningModule were removed from the code, but left in the doc. | [
"docs"
Properties use_dp, use_ddp, use_ddp2, use_tpu were removed in #5300 but left in the documentation.
(Now we need to set the use_ddp parameter in our modules manually if we want to check whether the trainer uses DDP, am I right?) |
When I use manual optimization lightning still check optimizer_idx argument. | [
"bug",
"help wanted"
] | 🐛 Bug
I set self.automatic_optimization = False in __init__ as shown in the official docs, but still got an error:
ValueError: Your LightningModule defines 2 optimizers but training_step is missing the "optimizer_idx" argument.
This is confusing, since in the example there is no optimizer_idx in training_step; also, I think there is no need for it.
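For reference, a minimal sketch of the manual-optimization pattern the docs describe, with two optimizers and no optimizer_idx argument (the modules and losses are placeholders, not the reporter's code):
import torch
import pytorch_lightning as pl

class ManualOptModel(pl.LightningModule):
    # Hedged sketch only: optimization is manual, so training_step takes no optimizer_idx.
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False
        self.gen = torch.nn.Linear(32, 32)
        self.disc = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        opt_g, opt_d = self.optimizers()

        g_loss = self.gen(batch).pow(2).mean()
        opt_g.zero_grad()
        self.manual_backward(g_loss)
        opt_g.step()

        d_loss = self.disc(batch).pow(2).mean()
        opt_d.zero_grad()
        self.manual_backward(d_loss)
        opt_d.step()

    def configure_optimizers(self):
        return torch.optim.Adam(self.gen.parameters()), torch.optim.Adam(self.disc.parameters())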
Please reproduce using the BoringModel
To Reproduce
Use following BoringModel and post here
Expected behavior
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
PyTorch Version (e.g., 1.0):
OS (e.g., Linux):
How you installed PyTorch (conda, pip, source):
Build command you used (if compiling from source):
Python version:
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information:
Additional context |
Handle cases where an IterableDataset doesn't produce a batch for an epoch. | [
"bug",
"help wanted",
"data handling",
"priority: 2"
] | 🐛 Bug
If the IterableDataset for training doesn't generate any batch for an epoch, the Trainer raises a rather cryptic exception:
Traceback (most recent call last):
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 47, in _with_is_last
last = next(it)
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/supporters.py", line 470, in __next__
return self.request_next_batch(self.loader_iters)
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/supporters.py", line 484, in request_next_batch
return apply_to_collection(loader_iters, Iterator, next)
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/utilities/apply_func.py", line 84, in apply_to_collection
return function(data, *args, **kwargs)
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 34, in fetch
data = next(self.dataset_iter)
StopIteration
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
...
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
self.dispatch()
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
self.accelerator.start_training(self)
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
self.training_type_plugin.start_training(trainer)
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 114, in start_training
self._results = trainer.run_train()
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 637, in run_train
self.train_loop.run_training_epoch()
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 484, in run_training_epoch
for batch_idx, (batch, is_last_batch) in train_dataloader:
File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/profiler/profilers.py", line 82, in profile_iterable
value = next(iterator)
RuntimeError: generator raised StopIteration
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
This is caused by _with_is_last as it assumes the given iterator produces at least one batch. If it doesn't, the first call to next() raises a StopIteration, which gets transformed to a RuntimeError per the new behavior in Python 3.7 (https://www.python.org/dev/peps/pep-0479/).
Please reproduce using the BoringModel
Colab link
To Reproduce
Run the colab.
Expected behavior
We should do something like the following:
def _with_is_last(self, iterable):
"""Pass through values from the given iterable with an added boolean indicating if this is the last item.
See `https://stackoverflow.com/a/1630350 <https://stackoverflow.com/a/1630350>`_"""
it = iter(iterable)
try:
last = next(it)
except StopIteration:
warn("your iterable dataset didn't produce any batch")
return
for val in it:
# yield last and has next
yield last, False
last = val
# yield last, no longer has next
yield last, True
If we just return like the snippet above, it doesn't quite work, because the rest of the training loop assumes that there is at least one batch. We should probably re-raise the exception with a better message.
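A hedged sketch of the re-raise variant, as a drop-in replacement for the method above (the message text is illustrative):
def _with_is_last(self, iterable):
    it = iter(iterable)
    try:
        last = next(it)
    except StopIteration:
        # Fail loudly with a clearer message instead of silently returning.
        raise RuntimeError(
            "The training IterableDataset did not produce any batch for this epoch."
        ) from None
    for val in it:
        yield last, False
        last = val
    yield last, True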
Environment
See the Colab.
Additional context
N/A |
Multi-GPU training fails when using GCP Deep Learning image | [
"bug",
"help wanted"
] | 🐛 Bug
Multi-GPU training fails when using GCP Deep Learning image. Occurs when using terminal. Occurs with dp and ddp_spawn accelerators; does not occur with a ddp accelerator. Does not occur when using the same system for single-GPU training.
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:52: UserWarning: You requested multiple GPUs but did not specify a backend, e.g. `Trainer(accelerator="dp"|"ddp"|"ddp2")`. Setting `accelerator="ddp_spawn"` for you.
warnings.warn(*args, **kwargs)
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:52: UserWarning: You requested multiple GPUs but did not specify a backend, e.g. `Trainer(accelerator="dp"|"ddp"|"ddp2")`. Setting `accelerator="ddp_spawn"` for you.
warnings.warn(*args, **kwargs)
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "/opt/conda/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/opt/conda/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/max/boring_model_multigpu_(2).py", line 88, in <module>
trainer.fit(model, train, val)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in f
it
self.dispatch()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in d
ispatch
self.accelerator.start_training(self)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line
73, in start_training
self.training_type_plugin.start_training(trainer)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py"
, line 108, in start_training
mp.spawn(self.new_process, **self.mp_spawn_kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 179, in start_p
rocesses
process.start()
File "/opt/conda/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/opt/conda/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/opt/conda/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/opt/conda/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/opt/conda/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
To Reproduce
Create a GCP VM w/ the following properties:
n1-highmem-2, 4 T4 GPUs
Deep Learning on Linux OS, PyTorch 1.8 m110 m66 version
Allow HTTP/HTTPS Traffic, Preemptible On
After SSHing into the system and installing CUDA drivers (may need to run sudo /opt/deeplearning/install-driver.sh), install pytorch-lightning via pip3 install pytorch-lightning.
Then run:
import os
import torch
from torch.utils.data import DataLoader, Dataset
import pytorch_lightning as pl
from pytorch_lightning import LightningModule
tmpdir = os.getcwd()
class RandomDataset(Dataset):
def __init__(self, size, num_samples):
self.len = num_samples
self.data = torch.randn(num_samples, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class BoringModel(LightningModule):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(32, 2)
def forward(self, x):
return self.layer(x)
def loss(self, batch, prediction):
# An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))
def training_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"loss": loss}
def training_step_end(self, training_step_outputs):
return training_step_outputs
def training_epoch_end(self, outputs) -> None:
torch.stack([x["loss"] for x in outputs]).mean()
def validation_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"x": loss}
def validation_epoch_end(self, outputs) -> None:
torch.stack([x["x"] for x in outputs]).mean()
def test_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
self.log("fake_test_acc", loss)
return {"y": loss}
def test_epoch_end(self, outputs) -> None:
torch.stack([x["y"] for x in outputs]).mean()
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
return [optimizer], [lr_scheduler]
num_samples = 10000
train = RandomDataset(32, num_samples)
train = DataLoader(train, batch_size=32)
val = RandomDataset(32, num_samples)
val = DataLoader(val, batch_size=32)
test = RandomDataset(32, num_samples)
test = DataLoader(test, batch_size=32)
model = BoringModel()
# Initialize a trainer
trainer = pl.Trainer(max_epochs=1, progress_bar_refresh_rate=20, gpus=4)
# Train the model ⚡
trainer.fit(model, train, val)
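As the RuntimeError above suggests, spawn-based start methods (dp/ddp_spawn) require the script's entry point to be guarded so child processes can re-import the module safely; a hedged sketch of how the script above could be wrapped (not a claim that this is the only cause):
def main():
    model = BoringModel()
    trainer = pl.Trainer(max_epochs=1, progress_bar_refresh_rate=20, gpus=4)
    trainer.fit(model, train, val)

if __name__ == "__main__":
    main()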
Expected behavior
Environment
CUDA:
- GPU:
- Tesla T4
- Tesla T4
- Tesla T4
- Tesla T4
- available: True
- version: 11.1
Packages:
- numpy: 1.19.5
- pyTorch_debug: False
- pyTorch_version: 1.8.0
- pytorch-lightning: 1.2.6
- tqdm: 4.59.0
System:
- OS: Linux
- architecture:
- 64bit
-
- processor:
- python: 3.7.10
- version: #1 SMP Debian 4.19.181-1 (2021-03-19)
Additional context
This was the issue I hit when debugging minimaxir/aitextgen#103; since it occurs with the BoringModel it may not be aitextgen's fault (maybe) cc @SeanNaren |
[trainer] Simplify Trainer dependencies by making TrainerTrainingTricksMixin a utils class | [
"feature",
"help wanted"
] | 🚀 Feature
The training tricks don't need to be inherited from the core Trainer class. These methods could be utility functions that sit completely outside the Trainer class hierarchy.
pytorch-lightning/pytorch_lightning/trainer/training_tricks.py, line 28 at bb9ace4:
class TrainerTrainingTricksMixin(ABC):
Motivation
Simplify the Trainer inheritance hierarchy and create more reusable utilities
Pitch
Deprecate the mixin and move the functions out to a utilities file for NaN detection. All we have to do is accept an nn.Module as input and pass the LightningModule to these functions from within the training loop.
def print_nan_gradients(model: nn.Module) -> None:
def detect_nan_tensors(model: nn.Module) -> None:
We can separately check for the loss being NaN inside the training loop directly, since that's a specific call here (a sketch of the standalone utilities follows the excerpt below):
pytorch-lightning/pytorch_lightning/trainer/training_tricks.py, lines 43 to 45 at bb9ace4:
# check if loss is nan
if not torch.isfinite(loss).all():
raise ValueError('The loss returned in `training_step` is nan or inf.')
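A minimal sketch of what the standalone utilities could look like, assuming they accept an nn.Module as proposed (illustrative, not the final implementation):
import torch
from torch import nn

def print_nan_gradients(model: nn.Module) -> None:
    # Illustrative: report every parameter whose gradient contains NaN/Inf values.
    for name, param in model.named_parameters():
        if param.grad is not None and not torch.isfinite(param.grad).all():
            print(f"NaN/Inf gradient in parameter: {name}")

def detect_nan_tensors(model: nn.Module) -> None:
    # Illustrative: raise if any parameter or its gradient is NaN/Inf.
    for name, param in model.named_parameters():
        grad_ok = param.grad is None or torch.isfinite(param.grad).all()
        if not torch.isfinite(param).all() or not grad_ok:
            raise ValueError(f"Detected NaN/Inf values in parameter or gradient: {name}")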
Deprecation path:
Add the new utilities in a standalone file
Make the mixin call the new utilities
Update the training loop to directly call the new utilities
Add a deprecation warning to the methods inside the mixin that mark them as deprecated
Alternatives
Keep as is
Additional context |
Adagrad not working with GPU and DDP | [
"bug",
"help wanted",
"priority: 0",
"distributed"
] | 🐛 Bug
Adagrad doesn't work with GPUs and DDP as the optimizer is created before the model is moved to CUDA. I believe this issue has been addressed in an earlier version: #554
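A hedged illustration of why construction order matters here, relying on the fact that torch.optim.Adagrad allocates its per-parameter state_sum buffers eagerly in __init__ (unlike, e.g., Adam):
import torch

model = torch.nn.Linear(32, 2)
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.1)  # state allocated on CPU here
model.cuda()                                                 # parameters moved to GPU afterwards

out = model(torch.randn(4, 32, device="cuda"))
out.sum().backward()
optimizer.step()  # expected to raise the same device-mismatch RuntimeError as above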
How to reproduce using the BoringModel
https://colab.research.google.com/drive/1HfyL5htoOkPETggTLwYNfh94HrNc6TOS?usp=sharing
The error emerged when I tried using Adagrad with both one and multiple GPUs.
Stack trace
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/2
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2
| Name | Type | Params
---------------------------------
0 | layer | Linear | 66
---------------------------------
66 Trainable params
0 Non-trainable params
66 Total params
0.000 Total estimated model params size (MB)
/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:52: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 20 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:52: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 20 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
Epoch 0: 0%| | 0/314 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/rajmund/test.py", line 118, in <module>
test_x(tmpdir)
File "/home/rajmund/test.py", line 110, in test_x
trainer.fit(model, train, val)
Traceback (most recent call last):
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
File "test.py", line 118, in <module>
test_x(tmpdir)
File "test.py", line 110, in test_x
trainer.fit(model, train, val)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
self.dispatch()
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
self.dispatch()
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
self.accelerator.start_training(self)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
self.accelerator.start_training(self)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 114, in start_training
self._results = trainer.run_train()
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 637, in run_train
self.training_type_plugin.start_training(trainer)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 114, in start_training
self._results = trainer.run_train()
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 637, in run_train
self.train_loop.run_training_epoch()
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 492, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 654, in run_training_batch
self.train_loop.run_training_epoch()
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 492, in run_training_epoch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 433, in optimizer_step
using_lbfgs=is_lbfgs,
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1390, in optimizer_step
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 654, in run_training_batch
optimizer.step(closure=optimizer_closure)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/core/optimizer.py", line 214, in step
self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/core/optimizer.py", line 134, in __optimizer_step
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 433, in optimizer_step
trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 277, in optimizer_step
self.run_optimizer_step(optimizer, opt_idx, lambda_closure, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 282, in run_optimizer_step
using_lbfgs=is_lbfgs,
self.training_type_plugin.optimizer_step(optimizer, lambda_closure=lambda_closure, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1390, in optimizer_step
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 163, in optimizer_step
optimizer.step(closure=lambda_closure, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper
optimizer.step(closure=optimizer_closure)
return wrapped(*args, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/core/optimizer.py", line 214, in step
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/core/optimizer.py", line 134, in __optimizer_step
trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 277, in optimizer_step
return func(*args, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/torch/optim/adagrad.py", line 90, in step
self.run_optimizer_step(optimizer, opt_idx, lambda_closure, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 282, in run_optimizer_step
self.training_type_plugin.optimizer_step(optimizer, lambda_closure=lambda_closure, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 163, in optimizer_step
group['eps'])
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/torch/optim/functional.py", line 48, in adagrad
optimizer.step(closure=lambda_closure, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper
return wrapped(*args, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/torch/optim/adagrad.py", line 90, in step
state_sum.addcmul_(grad, grad, value=1)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
group['eps'])
File "/home/rajmund/miniconda3/envs/test/lib/python3.6/site-packages/torch/optim/functional.py", line 48, in adagrad
state_sum.addcmul_(grad, grad, value=1)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
Environment
PyTorch Version: 1.7.1
PyTorch Lightning: 1.2.6
OS: Linux
How you installed PyTorch: pip
Python version: 3.6
CUDA/cuDNN version: 10.1
GPU models and configuration: Titan xp
Any other relevant information: - |
`None` parameters not sanitized during pruning | [
"bug",
"help wanted"
] | 🐛 Bug
ModelPruning callback fails when a module parameter is None. This can happen, for instance, in a Linear() when bias=False.
Please reproduce using the BoringModel
https://colab.research.google.com/drive/1UApprg-5htIQbosiSyyLLXm1B8wE8EbN?usp=sharing
Expected behavior
Unavailable parameters are already taken care of in the sanitize_parameters_to_prune method of the ModelPruning callback. A trivial check to ensure that the attribute, when present, is not None should solve this. Happy to send a PR.
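A hedged sketch of the kind of check described, using the (module, attribute-name) pairs that torch.nn.utils.prune works with (the helper name is hypothetical, not the actual PL patch):
from torch import nn

def drop_missing_parameters(parameters_to_prune):
    # Keep only (module, name) pairs whose attribute exists and is not None,
    # so e.g. Linear(..., bias=False) entries are skipped instead of crashing.
    return [(module, name) for module, name in parameters_to_prune
            if getattr(module, name, None) is not None]

# Example: the bias entry of a bias-free Linear is filtered out.
layer = nn.Linear(8, 4, bias=False)
print(drop_missing_parameters([(layer, "weight"), (layer, "bias")]))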
Environment
CUDA:
GPU:
Tesla K80
available: True
version: 10.1
Packages:
numpy: 1.19.5
pyTorch_debug: False
pyTorch_version: 1.8.1+cu101
pytorch-lightning: 1.2.6
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.10
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 |
Code in Colab notebook is broken due to broken download links | [
"bug",
"good first issue",
"docs"
] | The "Lightning in 2 steps" page on the docs (docs/source/starter/new-project.rst) points to a notebook (https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31) with examples of systems.
On said notebook, the example code for BERT (#scrollTo=yr7eaxkF-djf) is broken due to broken download links.
The links to files on a firebase account on this cell are broken.
import pandas as pd
import os
import sys
import shutil
import argparse
import tempfile
import urllib.request
import zipfile
TASKS = ["CoLA", "SST", "MRPC", "QQP", "STS", "MNLI", "SNLI", "QNLI", "RTE", "WNLI", "diagnostic"]
TASK2PATH = {
"CoLA": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FCoLA.zip?alt=media&token=46d5e637-3411-4188-bc44-5809b5bfb5f4", # noqa
"SST": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8", # noqa
"MRPC": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc", # noqa
"QQP": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FQQP-clean.zip?alt=media&token=11a647cb-ecd3-49c9-9d31-79f8ca8fe277", # noqa
"STS": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSTS-B.zip?alt=media&token=bddb94a7-8706-4e0d-a694-1109e12273b5", # noqa
"MNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FMNLI.zip?alt=media&token=50329ea1-e339-40e2-809c-10c40afff3ce", # noqa
"SNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSNLI.zip?alt=media&token=4afcfbb2-ff0c-4b2d-a09a-dbf07926f4df", # noqa
"QNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FQNLIv2.zip?alt=media&token=6fdcf570-0fc5-4631-8456-9505272d1601", # noqa
"RTE": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb", # noqa
"WNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FWNLI.zip?alt=media&token=068ad0a0-ded7-4bd7-99a5-5e00222e0faf", # noqa
"diagnostic": [
"https://storage.googleapis.com/mtl-sentence-representations.appspot.com/tsvsWithoutLabels%2FAX.tsv?GoogleAccessId=firebase-adminsdk-0khhl@mtl-sentence-representations.iam.gserviceaccount.com&Expires=2498860800&Signature=DuQ2CSPt2Yfre0C%2BiISrVYrIFaZH1Lc7hBVZDD4ZyR7fZYOMNOUGpi8QxBmTNOrNPjR3z1cggo7WXFfrgECP6FBJSsURv8Ybrue8Ypt%2FTPxbuJ0Xc2FhDi%2BarnecCBFO77RSbfuz%2Bs95hRrYhTnByqu3U%2FYZPaj3tZt5QdfpH2IUROY8LiBXoXS46LE%2FgOQc%2FKN%2BA9SoscRDYsnxHfG0IjXGwHN%2Bf88q6hOmAxeNPx6moDulUF6XMUAaXCSFU%2BnRO2RDL9CapWxj%2BDl7syNyHhB7987hZ80B%2FwFkQ3MEs8auvt5XW1%2Bd4aCU7ytgM69r8JDCwibfhZxpaa4gd50QXQ%3D%3D", # noqa
"https://www.dropbox.com/s/ju7d95ifb072q9f/diagnostic-full.tsv?dl=1",
],
}
MRPC_TRAIN = "https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt"
MRPC_TEST = "https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt"
def download_and_extract(task, data_dir):
print("Downloading and extracting %s..." % task)
data_file = "%s.zip" % task
urllib.request.urlretrieve(TASK2PATH[task], data_file)
with zipfile.ZipFile(data_file) as zip_ref:
zip_ref.extractall(data_dir)
os.remove(data_file)
print("\tCompleted!")
Since the download (urllib.request.urlretrieve(TASK2PATH[task], data_file)) fails, all the following cells are broken (they can't run). |
'No TPU devices were found' continues to exist for v2-32. | [
"bug",
"help wanted",
"accelerator: tpu",
"3rd party",
"priority: 1"
] | 🐛 Bug
The error is still similar to the one previously described in #6778. I am running the check code with the pytorch-lightning master branch.
All 3 slave nodes show the same exception.
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/pytorch_lightning/utilities/xla_device.py", line 31, in inner_f
queue.put(func(*args, **kwargs))
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/pytorch_lightning/utilities/xla_device.py", line 70, in _is_device_tpu
return len(xm.get_xla_supported_devices("TPU")) > 0
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 136, in get_xla_supported_devices
xla_devices = _DEVICES.value
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/utils/utils.py", line 32, in value
self._value = self._gen_fn()
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 18, in <lambda>
_DEVICES = xu.LazyProperty(lambda: torch_xla._XLAC._xla_get_devices())
RuntimeError: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:258 : Check failed: default_device_target != options_.global_device_map.end()
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
xla::XrtComputationClient::XrtComputationClient(xla::XrtComputationClient::Options, std::unique_ptr<tensorflow::tpu::TopologyProto, std::default_delete<tensorflow::tpu::TopologyProto> >, xla::XrtLocalService*)
xla::ComputationClient::Create()
xla::ComputationClient::Get()
_PyCFunction_FastCallDict
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyObject_Call
_PyObject_GenericGetAttrWithDict
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyObject_Call
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallDict
_PyObject_Call_Prepend
PyObject_Call
_PyObject_FastCallDict
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyEval_EvalCode
PyCFunction_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallDict
_PyObject_CallMethodIdObjArgs
PyImport_ImportModuleLevelObject
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyEval_EvalCode
PyCFunction_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallDict
_PyObject_CallMethodIdObjArgs
PyImport_ImportModuleLevelObject
_PyEval_EvalFrameDefault
PyEval_EvalCodeEx
PyEval_EvalCode
PyCFunction_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
*** End stack trace ***
/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:52: UserWarning: ModelCheckpoint(save_last=True, save_top_k=None, monitor=None) is a redundant configuration. You can save the last checkpoint with ModelCheckpoint(save_top_k=None, monitor=None).
warnings.warn(*args, **kwargs)
Traceback (most recent call last):
File "play.py", line 119, in <module>
main()
File "play.py", line 102, in main
checkpoint_callback=checkpointer,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/env_vars_connector.py", line 40, in insert_env_defaults
return fn(self, **kwargs)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 307, in __init__
replace_sampler_ddp, deterministic, precision, amp_backend, amp_level, plugins
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 97, in __init__
self.tpu_cores = device_parser.parse_tpu_cores(tpu_cores)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/pytorch_lightning/utilities/device_parser.py", line 113, in parse_tpu_cores
raise MisconfigurationException('No TPU devices were found.')
pytorch_lightning.utilities.exceptions.MisconfigurationException: No TPU devices were found.
The master node looks ok:
GPU available: False, used: False
TPU available: True, using: 8 TPU cores
To Reproduce
the same as #6778 |
global process count incorrect with elastic, fault tolerant training | [
"bug",
"help wanted",
"priority: 0",
"waiting on author"
] | 🐛 Bug
Problem
Count of the total number of processes incorrectly set.
Context
I am trying to run elastic training with torchelastic. I have tried with both gloo and nccl backends.
Error message
Error coming from gloo backend:
Traceback (most recent call last):
File "train_hydra.py", line 20, in hydra_main
train(cfg)
File "/bdata/bdata1/sribkain/learnseis/learnseis/training.py", line 39, in train
t.fit(module, data_module)
File "/ldata/Code/salt-identification/SRIBKAIN_ENVS/pl_env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 496, in fit
self.pre_dispatch()
File "/ldata/Code/salt-identification/SRIBKAIN_ENVS/pl_env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 525, in pre_dispatch
self.accelerator.pre_dispatch()
File "/ldata/Code/salt-identification/SRIBKAIN_ENVS/pl_env/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 83, in pre_dispatch
self.training_type_plugin.pre_dispatch()
File "/ldata/Code/salt-identification/SRIBKAIN_ENVS/pl_env/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 243, in pre_dispatch
self.init_ddp_connection(self.global_rank, self.world_size)
File "/ldata/Code/salt-identification/SRIBKAIN_ENVS/pl_env/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 226, in init_ddp_connection
torch_distrib.init_process_group(self.torch_distributed_backend, rank=global_rank, world_size=world_size)
File "/ldata/Code/salt-identification/SRIBKAIN_ENVS/pl_env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 432, in init_process_group
timeout=timeout)
File "/ldata/Code/salt-identification/SRIBKAIN_ENVS/pl_env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 503, in _new_process_group_helper
timeout=timeout)
RuntimeError: [enforce fail at /pytorch/third_party/gloo/gloo/context.cc:27] rank < size. 13 vs 8
NCCL backend gives this error: pytorch/pytorch#20313
Please reproduce using the BoringModel
I am running the ImageNet example from PL using torchvision.models.resnet34. Happy to reproduce with the BoringModel if needed.
Before launching, I have exported the variable GLOO_SOCKET_IFNAME and set it to the appropriate interface name.
On node 0:
PL_TORCH_DISTRIBUTED_BACKEND=gloo python -m torchelastic.distributed.launch --nnodes=1:5 --rdzv_id='nodockertestelasticlaunch7' --rdzv_backend=etcd --rdzv_endpoint=10.18.0.15:2379 train_hydra.py +experiment=elastic_config.yaml
On node 1:
PL_TORCH_DISTRIBUTED_BACKEND=gloo python -m torchelastic.distributed.launch --nnodes=1:5 --rdzv_id='nodockertestelasticlaunch7' --rdzv_backend=etcd --rdzv_endpoint=10.18.0.15:2379 train_hydra.py +experiment=elastic_config.yaml
To Reproduce
Use following BoringModel and post here
Expected behavior
To be able to run distributed fault tolerant training :)
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
Output of collect_env_details.py:
* CUDA:
- GPU:
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- available: True
- version: 10.2
* Packages:
- numpy: 1.19.2
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 1.2.6
- tqdm: 4.48.2
- torchelastic: 0.2.0
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.7
- version: #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020
Additional context |
Deepspeed + Auto Select GPUs = CUDA Out of Memory Error | [
"bug",
"help wanted",
"won't fix",
"3rd party",
"priority: 1"
] | 🐛 Bug
Please reproduce using the BoringModel
https://colab.research.google.com/drive/17Bt2m570f4o16iwbEV1fpUhgO04cuCqg?usp=sharing
To Reproduce
You can see the code on the BoringModel above, but I don't think it'll run on Colab because it's a multi-GPU issue.
Basically, when I have a large-ish model (2M parameters), I find that deepspeed is incompatible with auto_select_gpus.
So,
trainer = pl.Trainer(
gpus=8,
accelerator='ddp',
plugins='deepspeed',
precision=16,
)
seems to work
But,
trainer = pl.Trainer(
gpus=-1,
auto_select_gpus=True,
accelerator='ddp',
plugins='deepspeed',
precision=16,
)
causes a CUDA out of memory error.
Expected behavior
I'd expect it to select the GPUs and run it.
Environment
CUDA:
- GPU:
- Tesla V100-SXM2-16GB-N
- Tesla V100-SXM2-16GB-N
- Tesla V100-SXM2-16GB-N
- Tesla V100-SXM2-16GB-N
- Tesla V100-SXM2-16GB-N
- Tesla V100-SXM2-16GB-N
- Tesla V100-SXM2-16GB-N
- Tesla V100-SXM2-16GB-N
- available: True
- version: 11.1
Packages:
- numpy: 1.20.2
- pyTorch_debug: False
- pyTorch_version: 1.8.1+cu111
- pytorch-lightning: 1.2.6
- tqdm: 4.59.0
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.9.2
- version: #16~16.04.1-Ubuntu SMP Thu Apr 5 12:19:23 UTC 2018
Additional context
Semi-related, am I supposed to specify an accelerator when using deepspeed? In the docs, none is specified, but when I run without an accelerator, it complains and says I should be setting one. |
"TypeError: can't pickle _thread.lock objects" when logging tables to WandB | [
"bug",
"help wanted",
"won't fix",
"waiting on author",
"3rd party",
"priority: 2"
] | 🐛 Bug
To Reproduce
Try to log tables using WandbLogger, e.g.:
import pandas as pd
import wandb
from typing import Any, List

def validation_epoch_end(self, outputs: List[Any]) -> None:
df = pd.DataFrame(
{
'my_stats': [1,2,3]
}
)
table = wandb.Table(dataframe=df)
self.log("examples", table)
After the first epoch (i.e. when the model checkpoint is saved), the error occurs:
TypeError: can't pickle _thread.lock objects
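A possible (unverified) workaround sketch: send the table straight to the underlying wandb run instead of going through self.log, so the Table object never ends up in the metrics that get pickled with the checkpoint. This assumes a WandbLogger is attached to the trainer:
def validation_epoch_end(self, outputs):
    df = pd.DataFrame({'my_stats': [1, 2, 3]})
    table = wandb.Table(dataframe=df)
    # self.log expects scalar metrics; log rich objects through the wandb run directly.
    self.logger.experiment.log({"examples": table})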
Expected behavior
Pickling models should succeed.
Environment
CUDA:
GPU:
NVIDIA GeForce RTX 3090
NVIDIA GeForce RTX 2060 SUPER
available: True
version: 11.1
Packages:
numpy: 1.20.1
pyTorch_debug: False
pyTorch_version: 1.8.0+cu111
pytorch-lightning: 1.2.4
tqdm: 4.59.0
System:
OS: Linux
architecture:
64bit
ELF
processor:
python: 3.7.9
version: #1 SMP Tue Jun 23 12:58:10 UTC 2020 |
CI Testing ROCm | [
"feature",
"help wanted",
"ci"
Since PyTorch now ships ROCm binaries for AMD GPUs, we should test against it.
cc @Borda |
Latest FairScale + Sharded Training crashes using default trainer parameters | [
"bug",
"help wanted",
"3rd party"
] | 🐛 Bug
When validation + training is used (as is the default with the BoringModel), sharded training crashes. This is because SDP internally relies on knowing the training state of the model, and when we run the validation sanity check we do not set eval mode correctly on the SDP model itself, so it waits for grads to be reduced since the module is still in train mode.
import os
import torch
from torch.utils.data import Dataset
from pytorch_lightning import LightningModule, Trainer
class RandomDataset(Dataset):
"""
>>> RandomDataset(size=10, length=20) # doctest: +ELLIPSIS
<...bug_report_model.RandomDataset object at ...>
"""
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class BoringModel(LightningModule):
"""
>>> BoringModel() # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
BoringModel(
(layer): Linear(...)
)
"""
def __init__(self):
"""
Testing PL Module
Use as follows:
- subclass
- modify the behavior for what you want
class TestModel(BaseTestModel):
def training_step(...):
# do your own thing
or:
model = BaseTestModel()
model.training_epoch_end = None
"""
super().__init__()
self.layer = torch.nn.Linear(32, 2)
def forward(self, x):
return self.layer(x)
def loss(self, batch, prediction):
# An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))
def step(self, x):
x = self.layer(x)
out = torch.nn.functional.mse_loss(x, torch.ones_like(x))
return out
def training_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"loss": loss}
def validation_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"x": loss}
def test_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"y": loss}
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
return [optimizer], [lr_scheduler]
def test_run():
# fake data
train_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
val_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
test_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
# model
model = BoringModel()
trainer = Trainer(
default_root_dir=os.getcwd(),
limit_train_batches=1,
limit_val_batches=1,
max_epochs=1,
plugins='ddp_sharded',
gpus=1,
weights_summary=None,
)
trainer.fit(model, train_data, val_data)
trainer.test(test_dataloaders=test_data)
if __name__ == '__main__':
test_run() |
PL computes wrong accuracy with drop_last=False in PyTorch Geometric | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
PyTorch Lightning computes wrong accuracy when using a DataLoader with drop_last=False in PyTorch Geometric.
There seems to be an issue in which PL cannot determine the correct batch_size of mini-batches.
from typing import Optional
import torch
import torch.nn.functional as F
from torch.nn import Linear
from pytorch_lightning.metrics import Accuracy
from pytorch_lightning import (LightningDataModule, LightningModule, Trainer,
seed_everything)
from torch_geometric.data import DataLoader
from torch_geometric.datasets import TUDataset
from torch_geometric.nn import GCNConv, global_mean_pool
class Dataset(LightningDataModule):
def __init__(self):
super().__init__()
def prepare_data(self):
TUDataset('./data', name='MUTAG')
def setup(self, stage: Optional[str] = None):
dataset = TUDataset('./data', name='MUTAG')
self.train_dataset = dataset[:3]
self.val_dataset = dataset[3:6]
self.test_dataset = dataset[6:9]
def train_dataloader(self):
return DataLoader(self.train_dataset, batch_size=2)
def val_dataloader(self):
return DataLoader(self.val_dataset, batch_size=2)
def test_dataloader(self):
return DataLoader(self.val_dataset, batch_size=2)
class GNN(LightningModule):
def __init__(self):
super().__init__()
self.conv = GCNConv(7, 64)
self.lin = Linear(64, 2)
self.acc = Accuracy()
def forward(self, x, edge_index, batch):
x = self.conv(x, edge_index).relu()
x = global_mean_pool(x, batch)
return self.lin(x)
def training_step(self, data, batch_idx):
data = data.to(self.device)
y_hat = self(data.x, data.edge_index, data.batch)
train_loss = F.cross_entropy(y_hat, data.y)
return train_loss
def validation_step(self, data, batch_idx):
data = data.to(self.device)
y_hat = self(data.x, data.edge_index, data.batch)
acc = self.acc(y_hat.softmax(dim=-1), data.y)
self.log('val_acc', acc, on_step=False, on_epoch=True)
return acc
def test_step(self, data, batch_idx):
data = data.to(self.device)
y_hat = self(data.x, data.edge_index, data.batch)
acc = self.acc(y_hat.softmax(dim=-1), data.y)
print('batch_size', data.num_graphs, 'accuracy', acc, 'shape', y_hat.shape)
self.log('test_acc', acc, on_step=False, on_epoch=True)
return acc
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.01)
def main():
seed_everything(42)
datamodule = Dataset()
model = GNN()
trainer = Trainer(max_epochs=1, progress_bar_refresh_rate=0)
trainer.fit(model, datamodule=datamodule)
trainer.test()
if __name__ == "__main__":
main()
Here, I am using a dataset with 3 examples and a batch_size of 2. In test_step, the accuracy of each individual mini-batch is:
batch_size 2 accuracy 0.5 shape [2, 2]
batch_size 1 accuracy 0.0 shape [1, 2]
while PyTorch Lightning reports an overall accuracy of 0.25.
Expected behavior
Report accuracy of 0.33.
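Until the batch-size inference is fixed, one hedged workaround is to weight each mini-batch's accuracy by its number of graphs and reduce manually (names follow the script above):
def test_step(self, data, batch_idx):
    y_hat = self(data.x, data.edge_index, data.batch)
    acc = self.acc(y_hat.softmax(dim=-1), data.y)
    return {'acc': acc, 'num_graphs': data.num_graphs}

def test_epoch_end(self, outputs):
    # Weighted mean: (0.5 * 2 + 0.0 * 1) / 3 = 0.33 for the example above.
    total = sum(out['num_graphs'] for out in outputs)
    acc = sum(out['acc'] * out['num_graphs'] for out in outputs) / total
    self.log('test_acc', acc)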
Environment
torch-geometric==master
Additional context
It seems like PL has problems determining the correct batch_size of batches when data doesn't follow the conventional [batch_size, ...] format. However, it shouldn't have a problem in doing so since the batch_size can be easily inferred from the self.acc(y_hat, y_pred) call. |
[BUG] `BaseFinetuning` not working with `resume_from_checkpoint` | [
"bug",
"help wanted",
"priority: 0",
"callback"
] | 🐛 Bug
Using BaseFinetuning will add parameter groups during training; as a result, when trying to load from a checkpoint you will get the following ValueError: loaded state dict has a different number of parameter groups, because when loading, these groups won't exist yet!
To Reproduce
from pl_bolts.models.regression import LinearRegression
import pytorch_lightning as pl
from pl_bolts.datamodules import SklearnDataModule
from sklearn.datasets import load_boston
from torch import nn
from pytorch_lightning.callbacks.finetuning import BaseFinetuning
from pytorch_lightning.callbacks import ModelCheckpoint
X, y = load_boston(return_X_y=True)
dm = SklearnDataModule(X, y)
class TmpLinearRegression(LinearRegression):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.to_freeze = nn.Linear(1,1)
def configure_optimizers(self):
return self.optimizer(self.linear.parameters(), lr=self.hparams.learning_rate)
class LRFinetuner(BaseFinetuning):
def freeze_before_training(self, pl_module):
self.freeze(modules=pl_module.to_freeze)
def finetune_function(self, pl_module, current_epoch, optimizer, optimizer_idx):
if current_epoch == 1:
self.unfreeze_and_add_param_group(
modules=pl_module.to_freeze,
optimizer=optimizer,
initial_denom_lr=10,
)
model = TmpLinearRegression(input_dim=13)
trainer = pl.Trainer(max_epochs=2, callbacks=[ModelCheckpoint(dirpath="./tmp", save_last=True), LRFinetuner()])
trainer.fit(model, train_dataloader=dm.train_dataloader(), val_dataloaders=dm.val_dataloader())
model = TmpLinearRegression(input_dim=13)
trainer = pl.Trainer(max_epochs=3, resume_from_checkpoint="tmp/last.ckpt")
trainer.fit(model, train_dataloader=dm.train_dataloader(), val_dataloaders=dm.val_dataloader())
Bug:
/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py in restore_training_state(self, checkpoint)
181 optimizer_states = checkpoint['optimizer_states']
182 for optimizer, opt_state in zip(self.trainer.optimizers, optimizer_states):
--> 183 optimizer.load_state_dict(opt_state)
184
185 # move optimizer to GPU 1 weight at a time
/usr/local/lib/python3.8/dist-packages/torch/optim/optimizer.py in load_state_dict(self, state_dict)
117
118 if len(groups) != len(saved_groups):
--> 119 raise ValueError("loaded state dict has a different number of "
120 "parameter groups")
121 param_lens = (len(g['params']) for g in groups)
ValueError: loaded state dict has a different number of parameter groups
Environment
pl version : 1.2.6 |
DDP_SPAWN + Double Precision gives pickle error | [
"bug",
"help wanted",
"priority: 1"
] | 🐛 Bug
Pickle error with double precision + DDP-spawn
Can't pickle <function BoringModel.training_step at 0x7f8d684573b0>: it's not the same object as __main__.BoringModel.training_step
Fixing the general pickle error may also resolve discussion #6851
Please reproduce using the BoringModel
To Reproduce
Use bug_report_model.py, add accelerator='ddp_spawn' and precision=64 |
IndexError: dimension specified as 0 but tensor has no dimensions | [
"bug",
"help wanted",
"priority: 0",
"accelerator: tpu"
] | 🐛 Bug
TPU training throws the following error during validation.
https://colab.research.google.com/drive/1rHBxrtopwtF8iLpmC_e7yl3TeDGrseJL?usp=sharing
Expected behavior
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
PyTorch Version (e.g., 1.0):
OS (e.g., Linux):
How you installed PyTorch (conda, pip, source):
Build command you used (if compiling from source):
Python version:
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information:
Additional context |
Optional alternate logging method for LR monitor callback | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Allow user to pass any callable to be used in addition to trainer.logger.log_metrics()
Motivation
The LR monitor callback handles many best practices, but it doesn't offer integration with a plain Python logger. As mentioned on this discussion thread, I prefer to use a plain Python logger to output progress messages during training. I demonstrated how model callbacks can be used for batch loss/accuracy, but in order to log the learning rate the same way I currently need a custom callback.
Rather than duplicate much of the official LR monitor code, it would be better to support an optional second "data destination" that the official callback can send statistics to. This is easier to understand with the example code below
Pitch
Make the following additions to pl.callbacks.lr_monitor
class LearningRateMonitor(Callback):
def __init__(
self,
logging_interval: Optional[str] = None,
log_momentum: bool = False,
alt_logging_method: Optional[Callable] = None
):
if logging_interval not in (None, 'step', 'epoch'):
raise MisconfigurationException('logging_interval should be `step` or `epoch` or `None`.')
if alt_logging_method is not None and not callable(alt_logging_method):
raise MisconfigurationException('alt_logging_method must be callable')
self.logging_interval = logging_interval
self.log_momentum = log_momentum
self.alt_logging_method = alt_logging_method # The key addition
self.lrs = None
self.lr_sch_names = []
def on_train_batch_start(self, trainer, *args, **kwargs):
...
if latest_stat:
trainer.logger.log_metrics(latest_stat, step=trainer.global_step)
if self.alt_logging_method is not None:
self.alt_logging_method(latest_stat) # Use a second data destination if provided
def on_train_epoch_start(self, trainer, *args, **kwargs):
...
if latest_stat:
trainer.logger.log_metrics(latest_stat, step=trainer.global_step)
if self.alt_logging_method is not None:
self.alt_logging_method(latest_stat) # Use a second data destination if provided
Then our model can opt to do the following
import logging
import pytorch_lightning as pl
logger = logging.getLogger(__name__)
# Other configuration of logger handlers/formatters here
class MyModel(pl.LightningModule):
def __init__(self):
super(MyModel, self).__init__()
def configure_callbacks(self):
return [
pl.callbacks.LearningRateMonitor(alt_logging_method=logger.info) # Could use other logging level
]
Or we could wrap the destination method in another callable to do any arbitrary task (like apply formatting vs. printing the raw dict). With this modification to the existing callback, the user can accomplish whatever they want with the extracted LR statistics. It's essentially a callback for the callback
def log_formatted_lrs(latest_stat):
stat_str = ' | '.join([f'{"-".join(k.split("-")[1:])} {v:.0e}' for k, v in latest_stat.items()])
return logger.info(f'Optimizer LRs: {stat_str}')
Alternatives
Rather than passing logger to the LR monitor callback, we could create a custom Lightning logger that uses plain Python logging and then trainer.logger.log_metrics() would automatically connect to it. However as mentioned in the discussion thread linked above and this issue, it is rather difficult to get a plain Python logger to provide an interface for accepting dict metrics since it was designed for text messages only. That difficulty is why I ended up creating a module-level logger instance and using it in model callbacks |
`Trainer(gradient_clip_algorithm='value')` has no effect (from #6123) | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
I couldn't find anywhere in the code where the gradient_clip_algorithm argument (implemented in #6123) gets passed to the Accelerator.clip_gradients method, and suspected that the default algorithm (GradClipAlgorithmType.NORM) is always used no matter what.
After a brief investigation, I believe I've confirmed that this is the case and that the original test case couldn't correctly detect it.
I'm not sure how to properly fix this bug yet, but would like to warn other users that only clipping by norm works at the moment.
To Reproduce
This commit first disables the suppression of AssertionError in Trainer.run_train, and then tests whether the maximum gradient value is almost the same as the set 1e-5 threshold.
I ran the command pytest tests/trainer/test_trainer.py -k "test_gradient_clipping_by_value and not test_gradient_clipping_by_value_fp16" and got this:
FAILED tests/trainer/test_trainer.py::test_gradient_clipping_by_value - AssertionError: Gradient max value 3.6332883155409945e-06 != grad_clip_val 1e-05 .
If we change the default algorithm in PrecisionPlugin.clip_gradients to GradClipAlgorithmType.VALUE, we will pass this test case.
Alternatively, we can directly assert if the clip algorithm is by value in PrecisionPlugin.clip_gradients. We'll get the following error:
FAILED tests/trainer/test_trainer.py::test_gradient_clipping_by_value - AssertionError: GradClipAlgorithmType.NORM
By now we can clearly see that:
Setting gradient_clip_algorithm changes nothing in the training procedure
The original test case cannot distinguish between the two clipping algorithms
The AssertionError in the original test case will be ignored anyway because of the design of Trainer.run_train. (I'm not entirely sure of this one because I'm not familiar with the test environment setup. It appears so in my local environment for sure.)
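For reference, the two algorithms boil down to different torch.nn.utils calls, which is why a correct test can tell them apart by the maximum gradient value after clipping. A plain-PyTorch sketch, independent of the Lightning plumbing:
import torch

params = [torch.randn(3, requires_grad=True)]

params[0].grad = torch.tensor([0.5, -2.0, 3.0])
torch.nn.utils.clip_grad_value_(params, clip_value=1e-5)
# Clipping by value clamps every entry into [-1e-5, 1e-5], so the max abs value equals the threshold.

params[0].grad = torch.tensor([0.5, -2.0, 3.0])
torch.nn.utils.clip_grad_norm_(params, max_norm=1e-5)
# Clipping by norm rescales the whole gradient to norm 1e-5, so the largest entry generally stays below the threshold.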
Environment
CUDA:
- GPU:
- GeForce RTX 2070
- available: True
- version: 11.0
Packages:
- numpy: 1.19.2
- pyTorch_debug: False
- pyTorch_version: 1.7.1
- pytorch-lightning: 1.3.0rc0
- tqdm: 4.49.0
System:
- OS: Linux
- architecture:
- 64bit
- processor: x86_64
- python: 3.7.9
- version: #78-Ubuntu SMP Fri Mar 19 13:29:52 UTC 2021 |
Inconsistent `outputs` format between `training_epoch_end` and `on_train_epoch_end` | [
"bug",
"help wanted",
"priority: 0",
"logging"
] | 🐛 Bug
The outputs object for on_train_epoch_end should not include the extra field
To Reproduce
def test_bug(tmpdir):
class TestModel(BoringModel):
def training_step(self, batch, batch_idx):
output = self(batch)
loss = self.loss(batch, output)
return {"loss": loss, "foo": 123}
def training_epoch_end(self, outputs):
print("training_epoch_end:", outputs)
def on_train_epoch_end(self, outputs):
print("on_train_epoch_end:", outputs)
class TestCallback(Callback):
def on_train_epoch_end(self, trainer, pl_module, outputs):
print("callback on_train_epoch_end:", outputs)
trainer = Trainer(default_root_dir=tmpdir, fast_dev_run=True, callbacks=[TestCallback()], progress_bar_refresh_rate=0)
trainer.fit(TestModel())
callback on_train_epoch_end: [[[{'extra': {'foo': 123}, 'minimize': tensor(1.1792)}]]]
on_train_epoch_end: [[[{'extra': {'foo': 123}, 'minimize': tensor(1.1792)}]]]
training_epoch_end: [{'foo': 123, 'loss': tensor(1.1792)}]
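Until the formats are aligned, this is roughly what a user has to do to recover the per-step dicts inside on_train_epoch_end (assuming the nesting shown in the printout above):
def on_train_epoch_end(self, outputs):
    # outputs is nested as [optimizer][time][batch]; each leaf wraps the extras under 'extra'.
    flat = [step for opt_outs in outputs for time_outs in opt_outs for step in time_outs]
    foos = [step['extra']['foo'] for step in flat]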
Expected behavior
on_train_epoch_end acts as training_epoch_end
Environment
master
Additional context
Reported by Marek O in Slack |
Missing LightningModule datamodule reference | [
"bug",
"help wanted",
"good first issue",
"docs",
"data handling",
"priority: 1"
] | 🐛 Bug
This docs snippet does not work:
https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#datamodule
To Reproduce
def test_bug(tmpdir):
class TestModel(BoringModel):
def configure_optimizers(self):
# works
len(self.trainer.datamodule.train_dataloader())
# does not
len(self.datamodule.train_dataloader())
return super().configure_optimizers()
dm = BoringDataModule()
trainer = Trainer(default_root_dir=tmpdir, fast_dev_run=True)
trainer.fit(TestModel(), datamodule=dm)
line 128, in configure_optimizers
num_training_samples = len(self.datamodule.train_dataloader())
AttributeError: 'NoneType' object has no attribute 'train_dataloader'
Expected behavior
self.datamodule works
Alternatives
Update docs
Environment
master
Additional context
Reported by Brian Staber in Slack |
Fairscale integration not working for me | [
"bug",
"help wanted",
"priority: 0",
"waiting on author",
"distributed"
] | 🐛 Bug
When I try to train a model with Fairscale (ddp_sharded), I get this error always:
File "/home/ubuntu/anaconda3/envs/torch/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 500, in _setup_backward_hooks
assert p_tmp.grad_fn is not None
AssertionError
Could anyone please point me at an example I could run?
PyTorch Version (e.g., 1.0): 1.8
OS (e.g., Linux): Ubuntu 18.04
How you installed PyTorch (conda, pip, source): conda install -c pytorch pytorch
Build command you used (if compiling from source):
Python version: 3.7
CUDA/cuDNN version: 10.2
GPU models and configuration: 8xV100
Any other relevant information: |
Learning rate interval update not working properly | [
"bug",
"help wanted"
] | 🐛 Bug
When I use the OneCycle scheduler, it only works properly if I set steps_per_epoch = 1, even though I have set 'interval': 'step'.
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer,
max_lr,
epochs,
steps_per_epoch = len(self.loaders_dict['train']))
return {"optimizer": optimizer, "lr_scheduler": scheduler, "monitor": "train_loss", 'interval': 'step'}
Here is the learning rate at the end of training if I set steps_per_epoch > 1
I went to check the code:
Here is the training loop
There seem to be two main functions that update the learning rate:
here
self.update_train_loop_lr_schedulers(monitor_metrics=monitor_metrics)
that later calls:
self.trainer.optimizer_connector.update_learning_rates(interval="step", monitor_metrics=monitor_metrics)
and
here
self.trainer.optimizer_connector.update_learning_rates(interval='epoch')
I can't figure out what exactly is wrong; any help will be appreciated. Thanks!
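For comparison, this is the nested form in which Lightning usually expects the scheduler configuration so that 'interval': 'step' is picked up (a sketch only; loaders_dict follows the snippet above, and whether this changes the reported behaviour is unverified):
def configure_optimizers(self):
    optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer,
        max_lr=0.1,
        epochs=self.trainer.max_epochs,
        steps_per_epoch=len(self.loaders_dict['train']),
    )
    # 'interval' and 'frequency' go inside the lr_scheduler dict, not at the top level of the return value.
    return {
        "optimizer": optimizer,
        "lr_scheduler": {"scheduler": scheduler, "interval": "step", "frequency": 1},
    }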
PyTorch Version 1.7
Linux |
XLA + IterableDataset support should now work on XLA master | [
"bug",
"help wanted",
"won't fix",
"priority: 1"
] | 🐛 Bug
Following pytorch/xla#2866 XLA should now support IterableDatasets (or at least not fail on trying to get the length). This means we can edit the check from #6875 to only fail with older XLA versions.
Note: the fix from pytorch/xla#2866 has not yet been released
Please reproduce using the BoringModel
This message from PL slack, currently gives an error: https://pytorch-lightning.slack.com/archives/CQXV8BRH9/p1618028942117600?thread_ts=1617801747.099100&cid=CQXV8BRH9 |
Turning SWA on makes scheduler lr change to epoch, instead of batch [with colab ex] | [
"bug",
"help wanted",
"callback"
] | 🐛 Bug
Below is my optimizer/scheduler code. If my trainer has stochastic_weight_avg=True, then my learning rate is shown below, in green, and I get the warning:
/home/felipe/anaconda3/envs/ML_38_new/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:68: UserWarning: Swapping lr_scheduler <torch.optim.lr_scheduler.OneCycleLR object at 0x7fa445c76ee0> for <torch.optim.swa_utils.SWALR object at 0x7fa445e2d190>
warnings.warn(*args, **kwargs)
If stochastic_weight_avg=False, then I get the appropriate learning rate scheduler (pink).
It seems to me that when stochastic_weight_avg=True, there is some conflict related to updating per batch vs. per epoch.
def configure_optimizers(self):
params = list(self.named_parameters())
def is_backbone(n): return 'encoder' in n
grouped_parameters = [
{'params': [p for n, p in params if is_backbone(n)], 'lr': cfg.max_lr/cfg.encoder_lr_frac},
{'params': [p for n, p in params if not is_backbone(n)], 'lr': cfg.max_lr},
]
optimizer = MADGRAD(grouped_parameters, lr=cfg.max_lr, weight_decay=cfg.wd)
if cfg.scheduler_type == 'onecycle':
scheduler_fn = torch.optim.lr_scheduler.OneCycleLR(optimizer,
max_lr=[cfg.max_lr/cfg.encoder_lr_frac, cfg.max_lr],
epochs=self.epochs,
steps_per_epoch = len(self.loaders_dict['train']))
scheduler = {'scheduler': scheduler_fn,
'name': cfg.scheduler_type,
'frequency': 1,
'interval': 'step',
"monitor": 'train_loss'}#len(self.loaders_dict['train']))
return [optimizer], [scheduler]
To Reproduce
https://gist.github.com/fmellomascarenhas/7e53efbacaafd8769088d58574e73cd5 |
Avoid wrapping LightningModule in *DataParallel overrides when not fitting | [
"feature",
"help wanted",
"let's do it!"
] | 🚀 Feature
For distributed testing or prediction, we don't need to wrap the LightningModule inside DistributedDataParallel or DataParallel, as there are no gradients we need to synchronize. We only need this wrapping during the fit stage, when model training occurs.
Motivation
This can reduce overhead with distributed inference in Lightning. We can also use torchscript modules or models without any trainable parameters purely for inference.
Pitch
We'd need the training type plugins to be aware of the Trainer state somehow. Then we could apply the wrapper here only when the trainer is set to fit (see the sketch after the snippet below):
pytorch-lightning/pytorch_lightning/plugins/training_type/ddp.py
Lines 249 to 256
in
80c5293
def configure_ddp(self):
self.pre_configure_ddp()
self._model = DistributedDataParallel(
LightningDistributedModule(self.model),
device_ids=self.determine_ddp_device_ids(),
**self._ddp_kwargs,
)
self._register_ddp_hooks()
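A rough sketch of that idea; the trainer.training check is a hypothetical placeholder for however the plugin ends up learning the current stage:
def configure_ddp(self):
    # Hypothetical stage check; the real flag would come from the Trainer state.
    if not self.lightning_module.trainer.training:
        # No gradient synchronization is needed for test/predict, so keep the unwrapped module.
        self._model = LightningDistributedModule(self.model)
        return
    self.pre_configure_ddp()
    self._model = DistributedDataParallel(
        LightningDistributedModule(self.model),
        device_ids=self.determine_ddp_device_ids(),
        **self._ddp_kwargs,
    )
    self._register_ddp_hooks()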
Alternatives
Additional context |
Why is my gpu-util low? | [
"bug",
"help wanted",
"distributed"
I use one node with 4 GPUs for training, with a DALI dataloader, and I don't know why my GPU utilization is low and training is slow: about 1:30 per epoch, so 200 epochs will cost 5 hours. It's slower than the mmclassification project, which only costs 3.5 hours even though it only supports torch.utils.data.DataLoader; I expected the DALI dataloader to accelerate my training, but as you can see, it's the opposite. I don't know why. Could anyone give me some advice? I use the CIFAR-10 dataset and train on Slurm.
Here is my code.
main.py
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from net import ResNet18
if __name__ == '__main__':
model = ResNet18()
trainer = pl.Trainer( max_epochs=200,log_every_n_steps=1,
log_gpu_memory='min_max',gpus=4,num_nodes=1,accelerator='ddp',
fast_dev_run=False,callbacks=[ModelCheckpoint(monitor='val_accuracy',mode='max')],
progress_bar_refresh_rate=1,replace_sampler_ddp=False)
trainer.fit(model)
net.py
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl
from dataloader import dali_DataLoader,HybridPipe,dali_CIFAR10
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1):
super(BasicBlock, self).__init__()
self.conv1 = nn.Conv2d(
in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, in_planes, planes, stride=1):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
stride=stride, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, self.expansion *
planes, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(self.expansion*planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
out = self.bn3(self.conv3(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class ResNet(pl.LightningModule):
def __init__(self, block, num_blocks, num_classes=10):
super(ResNet, self).__init__()
self.in_planes = 64
self.conv1 = nn.Conv2d(3, 64, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
self.linear = nn.Linear(512*block.expansion, num_classes)
self.correct = 0
self.total_size = 0
def _make_layer(self, block, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = F.avg_pool2d(out, 4)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
def training_step(self, batch, batch_idx):
x, y = batch
x = self(x)
loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(x,y)
predicted = torch.argmax(x, dim=1, keepdim=False)
self.correct += (predicted == y).sum().item()
self.total_size += y.size(0)
self.log('train_loss', loss,prog_bar=True, logger=True)
self.log('train_accuracy', self.correct/self.total_size,prog_bar=True, logger=True)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
x = self(x)
loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(x,y)
predicted = torch.argmax(x, dim=1, keepdim=False)
self.correct += (predicted == y).sum().item()
self.total_size += y.size(0)
self.log('val_loss', loss,on_step=False, on_epoch=True,prog_bar=True, logger=True)
self.log('val_accuracy', self.correct/self.total_size,prog_bar=True, logger=True)
return loss
def validation_epoch_end(self,out):
self.log('val_accuracy', self.correct/self.total_size,prog_bar=True, logger=True)
self.correct=0
self.total_size=0
def train_epoch_end(self,out):
self.log('train_accuracy', self.correct/self.total_size,prog_bar=True, logger=True)
self.correct=0
self.total_size=0
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [100,150], gamma=0.1, last_epoch=-1, verbose=False)
return [optimizer],[scheduler]
def train_dataloader(self):
loader = dali_DataLoader(pipelines=HybridPipe(dali_CIFAR10(root='./data'), batch_size=32, pad_ratio=1.25,num_threads=4,
is_distribute=True, crop_size=32,ramdom_flip=True,
normalize=dict(mean=[125.307, 122.961, 113.8575],std=[51.5865, 50.847, 51.255])))
return loader
def val_dataloader(self):
loader = dali_DataLoader(pipelines=HybridPipe(dali_CIFAR10(root='./data',test_mode=True), batch_size=100,
normalize=dict(mean=[125.307, 122.961, 113.8575],std=[51.5865, 50.847, 51.255])))
return loader
def ResNet18():
return ResNet(BasicBlock, [2, 2, 2, 2])
dataloader.py
import os,sys,math,random,pickle
import torch
import numpy as np
import torch.distributed as dist
try:
from nvidia import dali
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.types as types
import nvidia.dali.fn as fn
import nvidia.dali.ops as ops
from nvidia.dali.plugin.pytorch import DALIClassificationIterator
except:
print('Could not import DALI')
class dali_DataLoader():
def __init__(self, pipelines, **kwargs):
pipelines.build()
try:
self._dali_iterator = DALIClassificationIterator(pipelines=pipelines, size=len(pipelines.iterator.indices))
self.sampler = pipelines.iterator
except:
self._dali_iterator = DALIClassificationIterator(pipelines=pipelines, reader_name='Reader')
self.sampler = self
def set_epoch(self,epoch):
pass
def __iter__(self):
return self
def __len__(self):
return int(math.ceil(self._dali_iterator._size / self._dali_iterator.batch_size))
def __next__(self):
try:
data = next(self._dali_iterator)
except StopIteration:
self._dali_iterator.reset()
raise StopIteration
# Decode the data output
input = data[0]['data']
target = data[0]['label'].squeeze().long()
return input,target
class identity():
def __call__(self,x,*tmp,**kargs):
return x
class HybridPipe(Pipeline):
def __init__(self,dataset, batch_size, file_root=None,filelist_path=None,num_threads=1, pad_ratio=1,is_distribute=True, resize=None,crop_size=[0,0],ramdom_flip=False,normalize=None,random_rotate_degree=None):
device_id = torch.cuda.current_device()
print("device_id",device_id)
super(HybridPipe, self).__init__(batch_size, num_threads, device_id, seed=12 + device_id)
if is_distribute:
if filelist_path is not None:
if file_root is None:
raise Exception("if provide filelist_path, then must provide file_root")
else:
self.input = ops.readers.File(file_root=file_root,file_list=filelist_path,num_shards=dist.get_world_size(),prefetch_queue_depth=num_threads,read_ahead=True,shard_id=dist.get_rank())
self.decode = ops.decoders.Image(device="mixed", output_type=types.RGB)
self.use_file=True
else:
self.iterator = iter(Distribute_Input_Iter(dataset, batch_size))
#self.input = ops.ExternalSource(source=self.iterator, num_outputs=2)
self.input = ops.ExternalSource()
self.input_label = ops.ExternalSource()
self.use_file=False
else:
if filelist_path is not None:
if file_root is None:
raise Exception("if provide filelist_path, then must provide file_root")
else:
self.input = ops.readers.File(file_root=file_root,file_list=filelist_path,num_shards=dist.get_world_size(),prefetch_queue_depth=num_threads,read_ahead=True,shard_id=dist.get_rank())
self.decode = ops.decoders.Image(device="mixed", output_type=types.RGB)
self.use_file=True
else:
self.iterator = iter(Normal_Input_Iter(dataset, batch_size))
self.input = ops.ExternalSource()
self.input_label = ops.ExternalSource()
self.use_file=False
dali_device = "gpu"
if isinstance(resize,(tuple,list)) and len(resize)==2:
self.resize = ops.Resize(size=tuple(resize))
elif isinstance(resize,(int, float)):
self.resize = ops.Resize(size=(resize, resize))
else:
self.resize = identity()
if normalize is not None and isinstance(normalize,dict):
self.mean = normalize.get('mean',0)
self.std = normalize.get('std',1)
else:
self.mean = 0
self.std = 1
if isinstance(crop_size, (int, float)):
crop_size = [crop_size,crop_size]
if (len(crop_size)==2 and (crop_size[0]==0 or crop_size[1]==0)):
self.crop = identity()
else:
self.crop = ops.Crop(device=dali_device, crop_h=crop_size[0], crop_w=crop_size[1])
if pad_ratio>1:
self.pad = ops.Paste(device=dali_device, ratio=pad_ratio, fill_value=0)
else:
self.pad = identity()
self.cmnp = ops.CropMirrorNormalize(device="gpu",
dtype=types.FLOAT,
output_layout=types.NCHW,
mean=self.mean,
std=self.std
)
if ramdom_flip:
self.coin = ops.random.CoinFlip(probability=0.5)
else:
self.coin = lambda :0
if random_rotate_degree is not None:
try:
tmp = math.abs(int(random_rotate_degree))
self.degree = ops.random.Uniform(range=(-tmp, tmp))
self.rotate = ops.Rotate()
except:
self.degree = lambda :0
self.rotate = identity()
else:
self.degree = lambda :0
self.rotate = identity()
def iter_setup(self):
if not self.use_file:
(images, labels) = self.iterator.__next__()
self.feed_input(self.jpegs, images, layout="HWC")
self.feed_input(self.labels, labels)
def define_graph(self):
rng = self.coin()
print()
if self.use_file:
self.jpegs,self.labels = self.input(name="Reader")
self.jpegs = self.decode(self.jpegs)
else:
self.jpegs= self.input()
self.labels = self.input_label()
output = self.jpegs
output = self.resize(output)
output = self.rotate(output, angle=self.degree())
output = self.pad(output.gpu())
output = self.crop(output)
output = self.cmnp(output, mirror=rng)
return [output, self.labels]
class Distribute_Input_Iter():
def __init__(self,dataset, batch_size, num_replicas=None,rank=None,shuffle=True,seed=0,drop_last=False):
if num_replicas is None:
if not dist.is_available():
raise RuntimeError("Requires distributed package to be available")
num_replicas = dist.get_world_size()
#num_replicas = 1
if rank is None:
if not dist.is_available():
raise RuntimeError("Requires distributed package to be available")
rank = dist.get_rank()
#rank = 0
if rank >= num_replicas or rank < 0:
raise ValueError(
"Invalid rank {}, rank should be in the interval"
" [0, {}]".format(rank, num_replicas - 1))
self.dataset = dataset
self.batch_size = batch_size
self.num_replicas = num_replicas
self.rank = rank
self.epoch = 0
self.drop_last = drop_last
# If the dataset length is evenly divisible by # of replicas, then there
# is no need to drop any data, since the dataset will be split equally.
if self.drop_last and len(self.dataset) % self.num_replicas != 0: # type: ignore
# Split to nearest available length that is evenly divisible.
# This is to ensure each rank receives the same amount of data when
# using this Sampler.
self.num_samples = math.ceil(
# `type:ignore` is required because Dataset cannot provide a default __len__
# see NOTE in pytorch/torch/utils/data/sampler.py
(len(self.dataset) - self.num_replicas) / self.num_replicas # type: ignore
)
else:
self.num_samples = math.ceil(len(self.dataset) / self.num_replicas) # type: ignore
self.total_size = self.num_samples * self.num_replicas
self.shuffle = shuffle
self.seed = seed
self.epoch=0
indices = list(range(len(self.dataset))) # type: ignore
if not self.drop_last:
# add extra samples to make it evenly divisible
padding_size = self.total_size - len(indices)
if padding_size <= len(indices):
indices += indices[:padding_size]
else:
indices += (indices * math.ceil(padding_size / len(indices)))[:padding_size]
else:
# remove tail of data to make it evenly divisible.
indices = indices[:self.total_size]
assert len(indices) == self.total_size,'len(indices) != self.total_size'
# subsample
indices = indices[self.rank:self.total_size:self.num_replicas]
assert len(indices) == self.num_samples,'len(indices) != self.num_samples'
self.indices = indices
def set_epoch(self,epoch):
self.epoch = epoch
def __iter__(self):
self.i = 0
self.n = len(self.indices)
return self
def __next__(self):
batch = []
labels = []
should_shuffle = False
for _ in range(self.batch_size):
if self.i % self.n == self.n-1:
should_shuffle = True
img, label = self.dataset.__getitem__(self.indices[self.i])
batch.append(img)
labels.append(label)
self.i = (self.i + 1) % self.n
if should_shuffle:
random.shuffle(self.indices)
return (batch, labels)
class Normal_Input_Iter():
def __init__(self,dataset, batch_size):
self.dataset = dataset
self.batch_size = batch_size
self.indices = list(range(len(self.dataset)))
def __iter__(self):
self.i = 0
self.n = len(self.dataset)
return self
def __next__(self):
batch = []
labels = []
should_shuffle = False
for _ in range(self.batch_size):
if self.i % self.n == self.n-1:
should_shuffle = True
img, label = self.dataset.__getitem__(self.indices[self.i])
batch.append(img)
labels.append(label)
self.i = (self.i + 1) % self.n
if should_shuffle:
random.shuffle(self.indices)
return (batch, labels) |
Performance Optimization for DDP sharded | [
"feature",
"help wanted"
] | 🚀 Feature
Motivation
Experiments show that enabling buckets and compressing to FP16 before broadcasting improves multi-node performance:
20% for a 1B-parameter model when enabling buckets
5% for a 0.6B-parameter model on 2 nodes when compressing to FP16 before broadcasting
Pitch
Set smarter defaults for DDP sharded:
If single-node: reduce_buffer_size = 0 (disable buckets)
If multi-node: reduce_buffer_size = 8M; if training uses precision 16, broadcast_fp16 = True
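A hedged sketch of how the default selection could look, using only the two knobs named above; whether the underlying sharded wrapper accepts exactly these keyword arguments is an assumption:
def _sharded_defaults(num_nodes: int, precision: int) -> dict:
    # Single node: disable buckets entirely.
    if num_nodes == 1:
        return {"reduce_buffer_size": 0}
    # Multi-node: 8M reduce buffer, plus FP16 broadcast when training in 16-bit precision.
    kwargs = {"reduce_buffer_size": 8 * 2 ** 20}
    if precision == 16:
        kwargs["broadcast_fp16"] = True
    return kwargs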
Alternatives
Additional context |
Docs wrong callback reference | [
"docs"
] | 📚 Documentation
Fixed wrong on_fit_start callback reference in documentation.
PR: #7001
Currently:
on_fit_start
^^^^^^^^^^^^
.. automethod:: pytorch_lightning.callbacks.Callback.on_save_checkpoint
:noindex:
Fix:
on_fit_start
^^^^^^^^^^^^
.. automethod:: pytorch_lightning.callbacks.Callback.on_fit_start
:noindex: |
PyTorch Lightning ignores traditional WORLD_SIZE/RANK specifications in environment and doesn't document replacement | [
"bug",
"help wanted",
"priority: 0",
"distributed"
] | 🐛 Bug
Standard torch.distributed environment variables seem to be handled differently.
To Reproduce
$ MASTER_ADDR=192.168.1.3 MASTER_PORT=1234 WORLD_SIZE=2 RANK=0 python3 boring_model.py
Expected behavior
It should wait for the second job to connect to MASTER_PORT. Instead, it just starts training. No combination of arguments and environment variables seems to change this behavior.
It appears that PL handles startup for distributed processing differently somehow and internally overrides environment variables. This works pretty well for ddp on single nodes and (presumably) automates deployment under Slurm.
But I can't figure out from the documentation or from the samples how to arrange for the traditional PyTorch behavior in which all jobs are started up manually.
For comparison, this code behaves as expected:
$ cat > simple.py
import torch
print("init")
torch.distributed.init_process_group("gloo")
print("done", torch.distributed.get_rank(), torch.distributed.get_world_size())
$ MASTER_ADDR=192.168.1.3 MASTER_PORT=1234 WORLD_SIZE=2 RANK=0 python3 simple.py & sleep 3; MASTER_ADDR=192.168.1.3 MASTER_PORT=1234 WORLD_SIZE=2 RANK=1 python3 simple.py
That is, the two jobs rendezvous as expected and then exit.
Suggested Behavior
document how the traditional PyTorch behavior can be reproduced (i.e., no calculations of nodes/size/... in DDP)
perhaps provide some kind of command line flag that restores the traditional behavior (e.g., "--accelerator ddp_plain")
maybe something else
Environment
PyTorch Version (e.g., 1.0): 1.8.1
OS (e.g., Linux): Ubuntu 20.04
How you installed PyTorch (conda, pip, source): pip3
Build command you used (if compiling from source):
Python version: 3.8
CUDA/cuDNN version: 11.1
GPU models and configuration: 1080
Any other relevant information:
Additional context |
Training Type Plugin environment related setting from Trainer | [
"feature",
"help wanted"
] | 🚀 Feature
Motivation
When a user provides a specific training type plugin, they have to pass in num_nodes and sync_batchnorm explicitly, for example with DDPPlugin. These parameters are already set on the Trainer, so it is probably better to reuse the Trainer's settings instead of specifying them again.
Now we have to specify it like this:
trainer = Trainer(
num_nodes = 2,
gpus = 8,
sync_batchnorm = True,
plugins = [
DDPPlugin(num_nodes=2, sync_batchnorm=True, .... # other parameters)
]
Ideally, we could
trainer = Trainer(
num_nodes = 2,
gpus = 8,
sync_batchnorm = True,
plugins = [
DDPPlugin(.... # other parameters)
]
relying on accelerator_connector to connect num_nodes and sync_batchnorm to training type plugin instance.
Pitch
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/connectors/accelerator_connector.py#L439-L445
if hasattr(training_type, 'num_nodes') and getattr(training_type, 'num_nodes') is None:
training_type.num_nodes = self.num_nodes
# Automatically set sync_batchnorm if None.
# Useful for custom plugins.
if hasattr(training_type, 'sync_batchnorm') and getattr(training_type, 'sync_batchnorm') is None:
training_type.sync_batchnorm = self.sync_batchnorm
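Concretely, the pitch amounts to dropping the None check from the snippet above, roughly:
if hasattr(training_type, 'num_nodes'):
    # Always propagate the Trainer's setting instead of only filling in a missing value.
    training_type.num_nodes = self.num_nodes

if hasattr(training_type, 'sync_batchnorm'):
    training_type.sync_batchnorm = self.sync_batchnorm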
Here, instead of setting the value only when getattr(training_type, 'num_nodes') is None, we override it as long as training_type has this attribute. |
seed for DistributedSampler | [
"feature",
"help wanted",
"distributed"
] | 🐛 Bug
torch.utils.data.distributed.DistributedSampler defaults to random seed 0 to shuffle the sampler if shuffle=True.
As Lightning does not specify a seed, multi-GPU training with DistributedSampler in Lightning will use a deterministic shuffle order that is independent of the user-specified seed (seed_everything).
pytorch-lightning/pytorch_lightning/trainer/data_loading.py
Lines 198 to 202
in
b85cfbe
def _get_distributed_sampler(self, dataloader, shuffle):
kwargs = self.distributed_sampler_kwargs
kwargs['shuffle'] = shuffle and not self.overfit_batches
sampler = DistributedSampler(dataloader.dataset, **kwargs)
return sampler
pytorch-lightning/pytorch_lightning/plugins/training_type/ddp.py
Lines 91 to 94
in
33cc9fe
@property
def distributed_sampler_kwargs(self):
distributed_sampler_kwargs = dict(num_replicas=(self.num_nodes * self.num_processes), rank=self.global_rank)
return distributed_sampler_kwargs
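A sketch of the workaround mentioned below: build the DistributedSampler yourself with an explicit seed and disable automatic replacement (train_dataset and the seed value are placeholders):
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def train_dataloader(self):
    # Each DDP process already knows its rank/world size here, so the sampler defaults are filled in.
    sampler = DistributedSampler(self.train_dataset, shuffle=True, seed=1234)
    return DataLoader(self.train_dataset, batch_size=32, sampler=sampler)

trainer = Trainer(gpus=2, accelerator="ddp", replace_sampler_ddp=False)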
As a workaround, users can specify replace_sampler_ddp when creating the Trainer, but I think this is something that Lightning should handle well out of the box. |
RuntimeError: All input tensors must be on the same device. Received cpu and cuda:0 | [
"bug",
"help wanted",
"priority: 2"
] | 🐛 Bug
At the end of the epoch, I get the error mentioned in the title. Here is a full stack-trace:
File "main.py", line 255, in <module>
trainer.fit(trainVQG, data_loader, val_data_loader)
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 473, in fit
results = self.accelerator_backend.train()
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/accelerators/cpu_accelerator.py", line 60, in train
results = self.train_or_test()
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 69, in train_or_test
results = self.trainer.train()
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 524, in train
self.train_loop.run_training_epoch()
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 637, in run_training_epoch
self.run_on_epoch_end_hook(epoch_output)
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 868, in run_on_epoch_end_hook
self.trainer.logger_connector.on_train_epoch_end()
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py", line 370, in on_train_epoch_end
self.cached_results.has_batch_loop_finished = True
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 445, in has_batch_loop_finished
self.auto_reduce_results_on_epoch_end()
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 435, in auto_reduce_results_on_epoch_end
hook_result.auto_reduce_results_on_epoch_end()
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 229, in auto_reduce_results_on_epoch_end
opt_outputs = time_reduced_outputs[0].__class__.reduce_on_epoch_end(time_reduced_outputs)
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 519, in reduce_on_epoch_end
recursive_stack(result)
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 660, in recursive_stack
result[k] = collate_tensors(v)
File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 682, in collate_tensors
return torch.stack(items)
RuntimeError: All input tensors must be on the same device. Received cpu and cuda:0
The error gets thrown at the end of an epoch. The model I've built is a VAE/Latent Variable model. Note that this error does NOT get thrown (and the code works perfectly fine) if I do not use the VAE part of the model. The BoringModel doesn't reproduce the error, and I've got some research code in here which I'd rather not make public at this point in time. However, these are the parts of my code which may be contributing to the error:
# training_step
def training_step(self, batch, batch_idx):
if self.args.num_warmup_steps < self.iter:
self.latent_transformer = True
self.model.latent_transformer = True
self.configure_optimizers() # reset the momentum
loss, kld = self(batch)
total_loss, loss_rec, loss_kl = self.calculate_losses(loss, kld)
self.log('total train loss', total_loss)
self.log('rec train loss', loss_rec)
self.log('kl train loss', loss_kl)
self.iter += 1
return total_loss
# calculating losses
def calculate_losses(self, loss, kld, r=0.5):
if kld is None:
loss_rec = loss
total_loss = loss
loss_kl = torch.tensor(0)
else:
cycle_num = (self.args.total_training_steps/4)
mod = self.iter % cycle_num
temp = mod/cycle_num
beta = 1
if temp <= r:
beta = 1/(1 + np.exp(-temp))
loss_kl = kld
loss_rec = loss
total_loss = loss + beta * kld
return total_loss.to(self.args.device), loss_rec, loss_kl
# inside self.latent
class LatentNorm(nn.Module):
def __init__(self, args):
super(LatentNorm, self).__init__()
self.args = args
self.latent_encoder = nn.Sequential(
nn.Linear(args.hidden_dim, args.latent_dim*2),
nn.LeakyReLU(0.2),
nn.Linear(args.latent_dim*2, args.latent_dim*2)
)
def forward(self, hidden_states):
latents = self.latent_encoder(hidden_states)
mu, logvar = latents[:, :, :self.args.latent_dim], latents[:, :, self.args.latent_dim:]
std = torch.exp(0.5*logvar)
eps = torch.randn_like(std)
z = eps.mul(std).add_(mu)
kld = gaussian_kld_norm(mu, logvar)
return z, kld
# inside model
kld = None
if self.latent_transformer:
encoder_hidden_states, kld = self.latent(encoder_hidden_states)
Interestingly, OP of this #5053 issue also seemed to be using a VAE and had the error thrown (although the solution to his problem isn't the solution to my problem)
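One detail in the snippet above that could plausibly produce mixed devices (an observation, not a confirmed diagnosis): torch.tensor(0) in calculate_losses is created on the CPU, while the other logged losses live on the GPU. A device-safe variant of that branch would be:
if kld is None:
    loss_rec = loss
    total_loss = loss
    # keep the placeholder KL term on the same device/dtype as the other logged losses
    loss_kl = torch.zeros_like(loss)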
Expected behavior
For the model to continue training as normal
Environment
CUDA:
- GPU:
- Tesla V100-PCIE-32GB
- Tesla V100-PCIE-32GB
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- available: True
- version: 10.2
Packages:
- numpy: 1.19.2
- pyTorch_debug: False
- pyTorch_version: 1.7.1
- pytorch-lightning: 1.1.4
- tqdm: 4.60.0
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.8
- version: #59~18.04.1-Ubuntu SMP Wed Oct 21 12:14:56 UTC 2020
I'm running this code on a V100 |
Can't get attribute '_gpus_arg_default' | [
"bug",
"help wanted"
] | 🐛 Bug
when I use the code
model = model.load_from_checkpoint(ckpt_path)
It raises the following error: Can't get attribute '_gpus_arg_default' on <module 'pytorch_lightning.utilities.argparse' from '/opt/conda/envs/lasaft/lib/python3.8/site-packages/pytorch_lightning/utilities/argparse.py'>
To Reproduce
It is very hard to describe how to reproduce this, because I could not load the model.
Expected behavior
Succeed in loading the model
Environment
PyTorch Version (e.g., 1.0):1.7.1
OS (e.g., Linux): Ubuntu 18.04
How you installed PyTorch (conda, pip, source): conda
Build command you used (if compiling from source):
Python version: 3.8.5
CUDA/cuDNN version: 10.2
GPU models and configuration: V100
Any other relevant information: PL version is 1.3.0rc1 |
Add version suffix to last.ckpt | [
"feature",
"callback: model checkpoint"
] | 🚀 Feature
If you use ModelCheckpoint(save_last=True) and you run an experiment twice in the same directory, then this set of checkpoints is generated:
file-v0.ckpt
file-v1.ckpt
...
last.ckpt (the last of the second run)
the idea is to add a version also to last.ckpt if it would get overwritten:
file-v0.ckpt
file-v1.ckpt
...
last-v0.ckpt (last of the first run)
last-v1.ckpt (last of the second run)
Motivation
Avoid overwriting existing checkpoints
Alternatives
Modifying the default CHECKPOINT_NAME_LAST to avoid the conflict
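For example, the clash can already be avoided manually by overriding that attribute per run (a sketch; the run-specific name is hypothetical):
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(save_last=True)
# give "last" a run-specific name so a second run does not overwrite it
checkpoint_cb.CHECKPOINT_NAME_LAST = "last-run1"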
Additional context
Discussed in #5000 (comment)
To be done after #5008 is merged.
cc @Borda @carmocca @awaelchli @ninginthecloud @jjenniferdai @rohitgr7 |
Dead Links in "Implementing your own DDP" documentation | [
"help wanted",
"good first issue",
"docs"
] | 📚 Documentation
In the "Implementing your own DDP section" (https://pytorch-lightning.readthedocs.io/en/stable/multi_gpu.html#implement-your-own-distributed-ddp-training), the text says:
If you need your own way to init PyTorch DDP you can override pytorch_lightning.core.LightningModule.().
If you also need to use your own DDP implementation, override: pytorch_lightning.core.LightningModule.configure_ddp().
Unfortunately, both of those seem to be dead links. I'm not sure what would replace them -- the custom DDP implementation should presumably point to https://pytorch-lightning.readthedocs.io/en/stable/accelerators.html#implement-a-custom-accelerator, but I'm not sure where the "overriding the init PyTorch DDP" part should live (from the Git blame, it looks like it used to call an init_ddp_connection function that doesn't seem to exist anymore).
metrics from "sanity check" do not appear on wandb / epoch 1 becomes 0 | [
"question",
"won't fix"
] | ❓ Questions and Help
What is your question?
I'm using the so-called "sanity check" to run an actual full validation pass before doing any training, using num_sanity_val_steps=-1.
However, I could not see any metrics displayed on wandb.
Eventually I had to implement custom validation_epoch_end
where instead of log_dict I'm using
self.logger.agg_and_log_metrics(metrics, step=self.current_epoch + 1)
The question is whether there is a more normal / straightforward way to do this.
I still have to implement a custom training_epoch_end, because currently validation epoch 1 corresponds to training epoch 0. Essentially, what I want is the validation curve on wandb starting from epoch 0 and the training curve starting from epoch 1, achieved by some sort of "don't decrement the epoch number" kind of parameter...
Thanks!
What's your environment?
currently using PL 1.0.8 and wandb 0.10.12 |
ddp opens processes on GPUs (and consumes memory) not assigned to the Trainer | [
"bug",
"duplicate",
"help wanted",
"priority: 0",
"distributed"
] | 🐛 Bug
When training on multiple GPUs with ddp, it is still starting processes on the unused GPUs (with a lot less memory, but still). It's not running any work there (GPU-Util is 0% in nvidia-smi).
How to fix this manually: I can run CUDA_VISIBLE_DEVICES="2,3" python train.py --gpus 0,1 and then it works, it only runs processes on GPUs 2,3 and nothing on 0,1.
It also crashes when you use GPU 0 or GPU 1 and then some others (for example 0, 2, 3 crashes).
To me it looks like the ddp rank is used to index into the list of visible GPUs and there is some error in how that is handled.
To Reproduce
Run training on GPU index 2,3 and it will also create a process on 0 and 1 (can be verified in nvidia-smi):
trainer = pl.Trainer(gpus=[2,3], accelerator='ddp')
trainer.fit(lightning_model)
Expected behavior
I should be able to run any GPU combination and the other GPUs should not be touched.
Environment
CUDA:
GPU:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
available: True
version: 10.2
Packages:
numpy: 1.18.5
pyTorch_debug: True
pyTorch_version: 1.7.0
pytorch-lightning: 1.0.8
tqdm: 4.54.1
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.5
version: #124-Ubuntu SMP Thu Oct 15 13:03:05 UTC 2020 |
train_epoch_end called before epoch_end | [
"bug",
"help wanted",
"priority: 1"
] | 🐛 Bug
Please reproduce using the BoringModel and post here
To Reproduce
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/training_loop.py#L858
Expected behavior
Environment
PyTorch Version (e.g., 1.0):
OS (e.g., Linux):
How you installed PyTorch (conda, pip, source):
Build command you used (if compiling from source):
Python version:
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information:
Additional context |
Is there a way to overide scheduler calls in pytorch_lightning ? | [
"question"
] | What I want to do is the following:
I have 2 optimizers and 2 lr_schedulers
The 1st optimizer and 1st scheduler will step normally but the 2nd optimizer and 2nd scheduler will step only after the 4th epoch.
Now I wrote this code, in which I could modify the stepping of the optimizers; how can I do the same for the schedulers?
def configure_optimizers(self):
    opt1, sch1 = ..., ...
    opt2, sch2 = ..., ...
    return [opt1, opt2], [{'scheduler': sch1, 'interval': 'step'}, {'scheduler': sch2, 'interval': 'step'}]

def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure, *args, **kwargs):
    if optimizer_idx == 0:
        optimizer.step(closure=optimizer_closure)
        optimizer.zero_grad()
    if optimizer_idx == 1:
        if epoch > 4:
            optimizer.step(closure=optimizer_closure)
            optimizer.step()
In a nutshell, what I want to do is:
if scheduler_i == 0:
    scheduler.step()
if scheduler_i == 1:
    if current_epoch > 4:
        scheduler.step()
Is this achievable in pytorch-lightning? |
`Missing logger folder Warning` due to `isdir` of `fsspec` | [
"bug",
"help wanted",
"priority: 1"
] | 🐛 Bug
I linked a folder to PROJECT_ROOT/lightning_logs, but pytorch-lightning showed the warning Missing logger folder: PROJECT_ROOT/lightning_logs, and then wrote the tensorboard event log file to PROJECT_ROOT/lightning_logs/version_0, instead of creating a new version_xxx.
I think this is because, in this case, fsspec's isdir returns False, while Python's os.path.isdir returns True. Please refer to https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/loggers/tensorboard.py#L241 for details.
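A minimal way to check the suspected discrepancy on the symlinked folder (a sketch; the path is a placeholder):
import os
import fsspec

path = "PROJECT_ROOT/lightning_logs"  # symlink to another folder
fs = fsspec.filesystem("file")
print("os.path.isdir:", os.path.isdir(path))
print("fsspec isdir:", fs.isdir(path)) |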
Error in Logger on epoch end when using Multiple GPUs | [
"bug",
"help wanted",
"priority: 0",
"strategy: dp"
] | 🐛 Bug
When using multiple GPUs with 'dp', the error RuntimeError: All input tensors must be on the same device. Received cuda:1 and cuda:0 occurs. It means the collections at epoch end can come from different devices.
Expected behavior
They should either be moved to the same device, or the aggregating function should be able to handle items from different devices.
Environment
PyTorch Version (e.g., 1.0): 1.7.0
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Python version: 3.8.0
CUDA/cuDNN version: 10.2
Any other relevant information: pytorch-lightning==1.0.8
A quick but not safe solution
modify collate_tensors function in pytorch_lightning/core/step_result.py:
def collate_tensors(items: Union[List, Tuple]) -> Union[Tensor, List, Tuple]:
    if not items or not isinstance(items, (list, tuple)) or any(not isinstance(item, Tensor) for item in items):
        # items is not a sequence, empty, or contains non-tensors
        return items
    # add the following line of code
    items = [item.type_as(items[0]) for item in items]
    if all(item.ndim == 0 for item in items):
        # all tensors are scalars, we need to stack
        return torch.stack(items)
    if all(item.ndim >= 1 and item.shape[1:] == items[0].shape[1:] for item in items):
        # we can concatenate along the first dimension
        return torch.cat(items)
    return items |
Add option for overriding scheduler step | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Motivation
Currently there is no way to override scheduler.step(). scheduler.step() is called sequentially for all the schedulers, i.e.,
for scheduler in schedulers:
    scheduler.step()
There should be a hook that enables overriding the scheduler.step() call, similar to the existing hook for optimizers, LightningModule.optimizer_step().
Pitch
Something similar to:
def scheduler_step(self, epoch, batch_idx, scheduler, scheduler_idx, *args, **kwargs):
    if scheduler_idx == 0:
        ...  # do something
        scheduler.step()
    if scheduler_idx == 1:
        ...  # do something here
        scheduler.step()
    ... |
Incorrect example in `configure_optimizers` | [
"bug",
"help wanted"
] | 🐛 Bug
The example shows using the scheduler key to pass the scheduler in the dict. The correct key is lr_scheduler instead.
The example is:
{
'scheduler': lr_scheduler, # The LR scheduler
'interval': 'epoch', # The unit of the scheduler's step size
'frequency': 1, # The frequency of the scheduler
'reduce_on_plateau': False, # For ReduceLROnPlateau scheduler
'monitor': 'val_loss', # Metric for ReduceLROnPlateau to monitor
'strict': True # Whether to crash the training if `monitor` is not found
}
but should be
{
'lr_scheduler': lr_scheduler, # The LR scheduler
'interval': 'epoch', # The unit of the scheduler's step size
'frequency': 1, # The frequency of the scheduler
'reduce_on_plateau': False, # For ReduceLROnPlateau scheduler
'monitor': 'val_loss', # Metric for ReduceLROnPlateau to monitor
'strict': True # Whether to crash the training if `monitor` is not found
}
To Reproduce
Just use the wrong scheduler key. No exception will be raised but no scheduler will be used! I think many people, like me, do not know that their scheduler is not running. I would add at least some warning.
Environment
PyTorch Version (e.g., 1.7.1):
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.8.5
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information: |
Trainer parser error: argument --fast_dev_run: invalid int value: 'True' | [
"bug",
"help wanted",
"priority: 1"
] | 🐛 Bug
When generating a parser for Trainer arguments with Trainer.add_argparse_args, the type of fast_dev_run is no longer interpreted correctly (it was working correctly on PL 1.0.8, but no longer works with PL 1.1).
When parsing an argument like the following:
python my_script.py --fast_dev_run=True [...]
now raises the following error:
error: argument --fast_dev_run: invalid int value: 'True'
I assume this regression was introduced when changing the type of fast_dev_run from bool to Union[int, bool] as per #4629, which messes up type inference when building the parser.
Here is a Colab Notebook reproducing the issue.
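For reference, a minimal script-based reproduction along the lines described above might look like this (a sketch):
from argparse import ArgumentParser
from pytorch_lightning import Trainer

parser = ArgumentParser()
parser = Trainer.add_argparse_args(parser)
# on PL 1.1 this raises: argument --fast_dev_run: invalid int value: 'True'
args = parser.parse_args(["--fast_dev_run=True"])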
Expected behavior
Even though fast_dev_run now can be an int, the automatically generated trainer CLI should continue supporting the boolean options.
Environment
Lightning 1.1 |
When saving top-k models, does it contain the best model? | [
"question"
] | When saving the top-k models, it seems that k models around the best model are saved, but the best model itself is not contained among them. |
Edge case bug when validating on multiple data sets if .log is not called at least once for each data set. | [
"bug",
"help wanted",
"won't fix",
"priority: 2",
"logging"
] | 🐛 Bug
When validating on multiple data sets, if you call self.log(..) for any data set, self.log(..) also has to have been called for all previous data sets at least once. If not, you get a KeyError when the results are auto reduced. I encountered this because for one of my validation sets I only created and logged images.
Reproduce
import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader, TensorDataset


class Module(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = torch.nn.Linear(1, 1)

    def validation_step(self, batch, batch_idx, dataset_idx):
        if dataset_idx == 0:
            # When you uncomment the following line, everything works.
            # self.log("test1", 0.)
            # Just calling self.log for dataset 0 and not for dataset 1 also works
            pass
        else:
            self.log("test2", 0.)

    def val_dataloader(self):
        return (
            DataLoader(TensorDataset(torch.ones(2, 1))),
            DataLoader(TensorDataset(torch.ones(2, 1)))
        )

    def training_step(self, batch, batch_idx):
        return torch.mean(self.model(batch[0]))

    def train_dataloader(self):
        return DataLoader(TensorDataset(torch.ones(2, 1)))

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


if __name__ == "__main__":
    trainer = pl.Trainer()
    trainer.fit(Module())
Error you get:
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 690, in run_sanity_check
_, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 622, in run_evaluation
deprecated_eval_results = self.evaluation_loop.evaluation_epoch_end()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 203, in evaluation_epoch_end
self.trainer.logger_connector.evaluation_epoch_end(self.testing)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py", line 225, in evaluation_epoch_end
self.cached_results.has_batch_loop_finished = True
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 441, in has_batch_loop_finished
self.auto_reduce_results_on_epoch_end()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 431, in auto_reduce_results_on_epoch_end
hook_result.auto_reduce_results_on_epoch_end()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 195, in auto_reduce_results_on_epoch_end
epoch_metrics = self._internals[dl_idx]
KeyError: 0
Environment
* CUDA:
- GPU:
- available: False
- version: 10.2
* Packages:
- numpy: 1.19.4
- pyTorch_debug: False
- pyTorch_version: 1.7.1
- pytorch-lightning: 1.1.0
- tqdm: 4.54.1
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.6.9
- version: #62-Ubuntu SMP Mon Nov 23 19:20:19 UTC 2020 |
ModelCheckpoint fails at garbage collecting checkpoint passed to Trainer.resume_from_checkpoint | [
"bug",
"help wanted",
"won't fix",
"priority: 1"
] | 🐛 Bug
When passing a checkpoint to the Trainer via resume_from_checkpoint, it is not tracked/garbage-collected by the ModelCheckpoint class. Instead, a new checkpoint file is created and garbage-collected/updated as usual.
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1QJrLngpOZg1MOgAtZH5kRo_s6u-Hjh0n?usp=sharing
Expected behavior
Checkpoint passed to Trainer.resume_from_checkpoint is garbage collected.
If this is not desired behavior, I think a sentence or 2 should be added to the docs on the intended behavior.
Environment
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: True
pyTorch_version: 1.7.0+cu101
pytorch-lightning: 1.1.0
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context
If this is due to epoch mismatching and not a design choice, #5007 #4655 #2401 could be possibly related. |
Optimization docs and override optimizer_step docs | [] | 📚 Documentation
The optimization docs need to be updated with the correct way of overriding the optimizer_step() method, which now takes a closure as a non-optional parameter.
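For reference, an override consistent with the closure now being required might look like the sketch below (the exact signature should be taken from the updated docs):
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu=False, using_native_amp=False, using_lbfgs=False):
    # the closure runs training_step + backward, so it must be forwarded to step()
    optimizer.step(closure=optimizer_closure)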
https://pytorch-lightning.readthedocs.io/en/latest/optimizers.html#step-optimizers-at-arbitrary-intervals |
pip install pytorch-lightning["extra"] doesn't install FairScale | [
"bug",
"help wanted",
"won't fix",
"docs",
"3rd party"
] | 🐛 Bug
The documentation [0] states that to enable sharded training one needs to install the extras package with pip install pytorch-lightning["extra"]. In my case, only following the second option, pip install https://github.com/PyTorchLightning/fairscale/archive/pl_1.1.0.zip, actually installed fairscale.
[0] https://pytorch-lightning.readthedocs.io/en/stable/multi_gpu.html#model-parallelism-beta
To Reproduce
pip install pytorch-lightning["extra"]
Expected behavior
Fairscale gets installed.
Environment
PyTorch Version 1.7.0
OS: Linux
How you installed PyTorch: pip
Python version: 3.6.9
CUDA/cuDNN version: 10.2
GPU models and configuration: 4x TITAN X |
Using gradient_clip_val only for Discriminator in GAN | [
"bug",
"feature",
"design",
"priority: 1"
] | How should I use gradient_clip_val if I only want to clip the discriminator in a GAN?
Currently, I try to clip the discriminator as follows and I get an error:
def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx, *args, **kwargs):
    optimizer.step()
    optimizer.zero_grad()
    if optimizer_idx == 0:
        for p in self.Discriminator.parameters():  # weight clipping
            p.data.clamp_(-0.01, 0.01)
The error is:
TypeError: optimizer_step() missing 1 required positional argument: 'current_epoch'
I use pl==1.1.0 on colab |
pass a variable instead of string to self.save_hyperparameters() | [
"feature",
"help wanted"
] | 🚀 Feature
pass a variable instead of string to function self.save_hyperparameters()
Motivation
>>> class ManuallyArgsModel(LightningModule):
...     def __init__(self, arg1, arg2, arg3):
...         super().__init__()
...         # manually assign arguments
...         self.save_hyperparameters('arg1', 'arg3')
I think it is too weird to pass strings as arguments to self.save_hyperparameters(); using variables instead, such as self.save_hyperparameters(arg1, arg3), is much more elegant.
Pitch
Alternatives
Additional context |
Pretty print tensors in `trainer.test()` | [
"bug",
"help wanted",
"good first issue",
"priority: 2"
] | 🐛 Bug
When printing the results of trainer.test() (saying "DATALOADER:0 TEST RESULTS"), it shows "tensor(...)", sometimes also including the device, when just a number should be shown.
Please reproduce using the BoringModel and post here
I copied the template and didn't modify it, as it already shows the issue: https://colab.research.google.com/drive/1_lBgEkebPAdoZ-045qGpjUG_E4IUL60w?usp=sharing
It shows this:
DATALOADER:0 TEST RESULTS
{'fake_test_acc': tensor(4.7004e-14)}
Expected behavior
DATALOADER:0 TEST RESULTS
{'fake_test_acc': 4.7004e-14}
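As a user-side workaround until the printing is changed, the returned results can be converted to plain Python numbers (a sketch; model and test_loader are placeholders):
results = trainer.test(model, test_dataloaders=test_loader)
pretty = [{k: (v.item() if torch.is_tensor(v) else v) for k, v in r.items()} for r in results]
print(pretty) |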
Question about transfer learining. | [
"question",
"won't fix"
] | I have a trained LightningModule named AutoEncoder. When using it inside another LightningModule, I want to know whether both AutoEncoder.freeze() and AutoEncoder.eval() are needed. |
How to save my model with half precision | [
"question"
My model includes 5 resnet18 networks; if they are saved with the default precision (float32), they occupy about 220 MB of disk space.
My idea is to reduce the storage to 110 MB, so I used model.half() to apply 16-bit precision.
I used torch.save(model.state_dict(), 'model.pt') to save my model; however, the file is still 220 MB.
Does anyone know how to deal with this? Thanks very much.
trainer = pl.Trainer(max_epochs=config['epochs'], gpus=[0, ], precision=16,
# limit_train_batches=config['limit_train_batches'],
# limit_val_batches=config['limit_val_batches'],
)
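If the goal is only a smaller file on disk, one option (a sketch, independent of the precision=16 training setting) is to cast a copy of the state dict to half precision just for saving:
state = model.state_dict()
state_fp16 = {k: (v.half() if torch.is_floating_point(v) else v) for k, v in state.items()}
torch.save(state_fp16, 'model_fp16.pt')  # roughly half the size of the fp32 file |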
Invalid usage of torch.no_grad Context manager | [
"bug",
"help wanted",
"distributed"
] | 🐛 Bug
Using no_grad context manager in the following line
pytorch-lightning/pytorch_lightning/utilities/distributed.py, line 210 (commit 127454a):
with torch.no_grad:
is incorrect, because torch.no_grad has to be called to create the context manager; the parentheses are missing.
The line should be: with torch.no_grad():
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1snyzXx4G6QCatbs6bN2GCsTFIMdItvmm?usp=sharing
To Reproduce
Expected behavior
No AttributeError on __enter__
gathered_loss == loss in BoringModel
Environment
CUDA:
GPU:
Tesla P100-PCIE-16GB
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: True
pyTorch_version: 1.7.0+cu101
pytorch-lightning: 1.1.0
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context
Training using "ddp" accelerator with arbitrary number of gpus. |
How to forbid saving the optimizer's state? | [
"question"
] | Sometimes it's time-consuming.
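One possible approach (a sketch; it assumes the checkpoint dictionary uses the optimizer_states / lr_schedulers keys, as current Lightning checkpoints do) is to strip those entries in the on_save_checkpoint hook:
from pytorch_lightning import LightningModule

class MyModel(LightningModule):
    def on_save_checkpoint(self, checkpoint):
        # drop optimizer and scheduler state to shrink the checkpoint;
        # note that such a checkpoint can no longer resume optimization exactly
        checkpoint.pop('optimizer_states', None)
        checkpoint.pop('lr_schedulers', None) |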
NeptuneObserver raises Neptune.api_exceptions.ChannelsValuesSendBatchError | [
"bug",
"help wanted",
"logger"
] | 🐛 Bug
NeptuneObserver throws
Failed to send channel value.
Traceback (most recent call last):
File "/home/wojciech/miniconda3/envs/ml/lib/python3.7/site-packages/neptune/internal/channels/channels_values_sender.py", line 156, in _send_values
self._experiment._send_channels_values(channels_with_values)
File "/home/wojciech/miniconda3/envs/ml/lib/python3.7/site-packages/neptune/experiments.py", line 1167, in _send_channels_values
self._backend.send_channels_values(self, channels_with_values)
File "/home/wojciech/miniconda3/envs/ml/lib/python3.7/site-packages/neptune/utils.py", line 211, in wrapper
return func(*args, **kwargs)
File "/home/wojciech/miniconda3/envs/ml/lib/python3.7/site-packages/neptune/internal/backends/hosted_neptune_backend.py", line 571, in send_channels_values
raise ChannelsValuesSendBatchError(experiment.id, batch_errors)
neptune.api_exceptions.ChannelsValuesSendBatchError: Received batch errors sending channels' values to experiment SAN-28. Cause: Error(code=400, message='X-coordinates must be strictly increasing for channel: e968f192-b466-4419-89ce-469fdc4cf86f. Invalid point: InputChannelValue(timestamp=2020-12-14T15:53:20.877Z, x=0.0, numericValue=0.0, textValue=null, image', type=None) (metricId: 'e968f192-b466-4419-89ce-469fdc4cf86f', x: 0.0) Skipping 1 values.
/home/wojciech/miniconda3/envs/ml/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:49: UserWarning: The dataloader, test dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 6 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
import os
import torch
from pytorch_lightning.loggers import NeptuneLogger
from torch.utils.data import Dataset
from pytorch_lightning import Trainer, LightningModule
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class BoringModel(LightningModule):
def __init__(self):
"""
Testing PL Module
Use as follows:
- subclass
- modify the behavior for what you want
class TestModel(BaseTestModel):
def training_step(...):
# do your own thing
or:
model = BaseTestModel()
model.training_epoch_end = None
"""
super().__init__()
self.layer = torch.nn.Linear(32, 2)
def forward(self, x):
return self.layer(x)
def loss(self, batch, prediction):
# An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))
def step(self, x):
x = self.layer(x)
out = torch.nn.functional.mse_loss(x, torch.ones_like(x))
return out
def training_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
self.log('train_loss', loss, on_step=True, on_epoch=True)
return {"loss": loss}
def training_step_end(self, training_step_outputs):
return training_step_outputs
def training_epoch_end(self, outputs) -> None:
torch.stack([x["loss"] for x in outputs]).mean()
def validation_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
self.log('val_loss', loss, on_step=False, on_epoch=True)
return {"x": loss}
def validation_epoch_end(self, outputs) -> None:
torch.stack([x['x'] for x in outputs]).mean()
def test_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"y": loss}
def test_epoch_end(self, outputs) -> None:
torch.stack([x["y"] for x in outputs]).mean()
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
return [optimizer], [lr_scheduler]
# NOTE: If you are using a cmd line to run your script,
# provide the cmd line as below.
# opt = "--max_epochs 1 --limit_train_batches 1".split(" ")
# parser = ArgumentParser()
# args = parser.parse_args(opt)
def run_test():
class TestModel(BoringModel):
def on_train_epoch_start(self) -> None:
print('override any method to prove your bug')
# fake data
train_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
val_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
test_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
neptune_logger = NeptuneLogger(
api_key = "ANONYMOUS",
project_name = "shared/pytorch-lightning-integration",
)
# model
model = TestModel()
trainer = Trainer(
default_root_dir=os.getcwd(),
limit_train_batches=1,
limit_val_batches=1,
max_epochs=1,
weights_summary=None,
logger=neptune_logger
)
trainer.fit(model, train_data, val_data)
trainer.test(test_dataloaders=test_data)
if __name__ == '__main__':
run_test()
Expected behavior
No exception.
Environment
This happens both with pytorch-lighting 1.0.7 and 1.1 and neptune-client-0.4.126 and 129.
CUDA:
GPU:
GeForce GTX 1080 Ti
available: True
version: 11.0
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.7.1
pytorch-lightning: 1.1.0
tqdm: 4.54.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.4
version: #36-Ubuntu SMP Wed Dec 9 09:14:40 UTC 2020
Additional context
This happens only when we log during validation_step with on_epoch=True (and on_step=False), i.e.:
self.log('val_loss', loss, on_step=False, on_epoch=True) |
Wrong name in APEX optimization levels | [
"bug",
"good first issue",
"docs",
"priority: 2"
APEX optimization levels are "O1, O2, O3" (with the letter O) and not "01, 02, 03". We should fix this in code + docs.
Make sure it is clear from the docs that the use of Apex is no longer encouraged, and recommend that users use upstream native AMP, available since PyTorch 1.6.
Attempting to unscale FP16 gradients | [
"question",
"won't fix"
] | I want to train and save my model with 16-bit precision, what should I do ?
def train():
model = Model(**config)
model.half()
data_module = MyDataModule(config['train_root'], config['folds'], config['size'],
config['batch_size'], config['num_workers'])
for i in range(config['folds']):
model.set_net(i)
trainer = pl.Trainer(max_epochs=config['epochs'], gpus=[0, ], precision=16,
# limit_train_batches=config['limit_train_batches'],
# limit_val_batches=config['limit_val_batches'],
)
trainer.fit(model=model,
train_dataloader=data_module.train_dataloader(i),
val_dataloaders=data_module.val_dataloader(i))
print:
{'n_class': 2, 'num_workers': 4, 'batch_size': 32, 'train_root': '/root/projects/Screw/data/train', 'ckpt': 'model-7res18.pt', 'epochs': 30, 'lr': 0.001, 'size': 224, 'folds': 7, 'limit_train_batches': None, 'limit_val_batches': None}
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Using native 16bit precision.
| Name | Type | Params
---------------------------------------
0 | nets | ModuleList | 78 M
1 | cur_net | Net | 11 M
/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: Your val_dataloader has `shuffle=True`, it is best practice to turn this off for validation and test dataloaders.
warnings.warn(*args, **kwargs)
Validation sanity check: 50%|ββββββββββ | 1/2 [00:00<00:00, 1.90it/s]cur net: 0
{'val_loss': tensor(0.9086, device='cuda:0'), 'val_f1': 0.5681818181818181}
Epoch 0: 0%| | 0/19 [00:00<?, ?it/s]Traceback (most recent call last):
File "/root/projects/Screw/model.py", line 251, in <module>
train()
File "/root/projects/Screw/model.py", line 223, in train
val_dataloaders=data_module.val_dataloader(i))
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 446, in fit
results = self.accelerator_backend.train()
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 64, in train
results = self.train_or_test()
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 66, in train_or_test
results = self.trainer.train()
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 495, in train
self.train_loop.run_training_epoch()
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 561, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 728, in run_training_batch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 470, in optimizer_step
optimizer, batch_idx, opt_idx, train_step_and_backward_closure, *args, **kwargs
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 124, in optimizer_step
**kwargs,
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 1372, in optimizer_step
optimizer_closure()
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 723, in train_step_and_backward_closure
self.trainer.hiddens
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 823, in training_step_and_backward
self.backward(result, optimizer, opt_idx)
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 844, in backward
result.closure_loss, optimizer, opt_idx, *args, **kwargs
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 90, in backward
closure_loss, optimizer, opt_idx, *args, **kwargs
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/pytorch_lightning/plugins/native_amp.py", line 50, in backward
self.trainer.scaler.unscale_(optimizer)
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/torch/cuda/amp/grad_scaler.py", line 240, in unscale_
optimizer_state["found_inf_per_device"] = self._unscale_grads_(optimizer, inv_scale, found_inf, False)
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/torch/cuda/amp/grad_scaler.py", line 187, in _unscale_grads_
raise ValueError("Attempting to unscale FP16 gradients.")
ValueError: Attempting to unscale FP16 gradients.
Exception ignored in: <function tqdm.__del__ at 0x7efe5330fb70>
Traceback (most recent call last):
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/tqdm/std.py", line 1128, in __del__
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/tqdm/std.py", line 1341, in close
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/tqdm/std.py", line 1520, in display
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/tqdm/std.py", line 1131, in __repr__
File "/root/anaconda3/envs/py37/lib/python3.7/site-packages/tqdm/std.py", line 1481, in format_dict
TypeError: cannot unpack non-iterable NoneType object
Process finished with exit code 1 |
Trainer test cannot load from checkpoint when training on multiple GPUs | [
"bug",
"help wanted",
"waiting on author"
] | 🐛 Bug
The Trainer.test() looks for epoch=X-v0.ckpt when only epoch=X.ckpt exists, thus the result is:
Traceback (most recent call last):
File "/home/wojciech/tmp/pytorch-lightining/main.py", line 16, in <module>
result = trainer.test()
File "/home/wojciech/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 721, in test
results = self.__test_using_best_weights(ckpt_path, test_dataloaders)
File "/home/wojciech/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 751, in __test_using_best_weights
ckpt = pl_load(ckpt_path, map_location=lambda storage, loc: storage)
File "/home/wojciech/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/cloud_io.py", line 31, in load
with fs.open(path_or_url, "rb") as f:
File "/home/wojciech/miniconda3/envs/ml/lib/python3.8/site-packages/fsspec/spec.py", line 897, in open
f = self._open(
File "/home/wojciech/miniconda3/envs/ml/lib/python3.8/site-packages/fsspec/implementations/local.py", line 115, in _open
return LocalFileOpener(path, mode, fs=self, **kwargs)
File "/home/wojciech/miniconda3/envs/ml/lib/python3.8/site-packages/fsspec/implementations/local.py", line 197, in __init__
self._open()
File "/home/wojciech/miniconda3/envs/ml/lib/python3.8/site-packages/fsspec/implementations/local.py", line 202, in _open
self.f = open(self.path, mode=self.mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/wojciech/tmp/pytorch-lightining/lightning_logs/version_10/checkpoints/epoch=0-v0.ckpt'
To Reproduce
Execute several times on >1 gpu machine:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import NeptuneLogger
import os
from typing import Any, Optional
import torch
from pytorch_lightning import LightningDataModule
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.transforms import transforms
import torch
from torch.nn import functional as F
from pytorch_lightning.core.lightning import LightningModule
from torch.optim import Adam
class MNISTModule(LightningModule):
def __init__(self):
super().__init__()
# mnist images are (1, 28, 28) (channels, width, height)
self.layer_1 = torch.nn.Linear(28 * 28, 128)
self.layer_2 = torch.nn.Linear(128, 256)
self.layer_3 = torch.nn.Linear(256, 10)
def forward(self, x):
batch_size, channels, width, height = x.size()
# (b, 1, 28, 28) -> (b, 1*28*28)
x = x.view(batch_size, -1)
x = self.layer_1(x)
x = F.relu(x)
x = self.layer_2(x)
x = F.relu(x)
x = self.layer_3(x)
x = F.log_softmax(x, dim=1)
return x
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
self.log('train_loss', loss, on_step=True, on_epoch=True)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
self.log('val_loss', loss, on_step=False, on_epoch=True)
return loss
def test_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
self.log('test_loss', loss, on_step=False, on_epoch=True)
return loss
def configure_optimizers(self):
opt = Adam(self.parameters(), lr=1e-3)
return opt
# noinspection PyAttributeOutsideInit
class MNISTDataModule(LightningDataModule):
def __init__(self):
super().__init__()
self.train_dims = None
self.vocab_size = 0
def prepare_data(self):
# called only on 1 GPU
MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())
def setup(self, stage: Optional[str] = None):
# called on every GPU
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
self.train = MNIST(os.getcwd(), train=True, download=False, transform=transform)
self.test = MNIST(os.getcwd(), train=False, download=False, transform=transform)
self.train, self.val = torch.utils.data.random_split(self.train, (50000, 10000))
def train_dataloader(self):
return DataLoader(self.train, batch_size=64, shuffle=True, drop_last=True, num_workers=2)
def val_dataloader(self):
return DataLoader(self.val, batch_size=512, drop_last=False)
def test_dataloader(self):
return DataLoader(self.test, batch_size=512, drop_last=False)
if __name__ == '__main__':
dm = MNISTDataModule()
model = MNISTModule()
params = dict(param1='a', param2=1)
trainer = Trainer(gpus=2, max_epochs=1, accelerator='ddp')
trainer.fit(model, datamodule=dm)
result = trainer.test()
print(result)
Expected behavior
No exception.
Environment
CUDA:
- GPU:
- GeForce GTX TITAN X
- GeForce GTX TITAN X
- GeForce GTX TITAN X
- GeForce GTX TITAN X
- available: True
- version: 10.2
Packages:
- numpy: 1.19.4
- pyTorch_debug: False
- pyTorch_version: 1.7.1
- pytorch-lightning: 1.1.0 [Also 1.0.8]
- tqdm: 4.54.1
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.5
- version: #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020
Additional context
This happens only when gpus=2 and accelerator='ddp'. There must be some race condition, since this problem occurs only every now and then.
Metrics reduction during logging/checkpointing | [
"bug",
"help wanted",
"priority: 0",
"checkpointing",
"logging"
] | 🐛 Bug
When logging and checkpointing/early stopping with the metrics like shown in the code below, I get:
Traceback (most recent call last):
File "pytorch_lightning/trainer/trainer.py", line 521, in train
self.train_loop.run_training_epoch()
File "pytorch_lightning/trainer/training_loop.py", line 590, in run_training_epoch
self.trainer.run_evaluation(test_mode=False)
File "pytorch_lightning/trainer/trainer.py", line 628, in run_evaluation
self.evaluation_loop.on_evaluation_end()
File "pytorch_lightning/trainer/evaluation_loop.py", line 111, in on_evaluation_end
self.trainer.call_hook('on_validation_end', *args, **kwargs)
File "pytorch_lightning/trainer/trainer.py", line 887, in call_hook
trainer_hook(*args, **kwargs)
File "pytorch_lightning/trainer/callback_hook.py", line 177, in on_validation_end
callback.on_validation_end(self, self.get_model())
File "pytorch_lightning/callbacks/early_stopping.py", line 164, in on_validation_end
self._run_early_stopping_check(trainer, pl_module)
File "pytorch_lightning/callbacks/early_stopping.py", line 205, in _run_early_stopping_check
current = torch.tensor(current, device=pl_module.device)
RuntimeError: Could not infer dtype of Accuracy
Please reproduce using the BoringModel and post here
from tests.base.boring_model import BoringModel, RandomDataset
import torch
import pytorch_lightning as pl
class CustomModel(BoringModel):
def __init__(self, *args, **kwargs):
super().__init__()
self.acc = pl.metrics.Accuracy()
def training_step(self, batch, batch_idx):
self.acc(torch.rand(1, 3, device=self.device), torch.randint(0, 2, (1,), device=self.device))
self.log('train_acc', self.acc, on_step=True, on_epoch=True)
return super().training_step(batch, batch_idx)
def validation_step(self, batch, batch_idx):
self.acc(torch.rand(1, 3, device=self.device), torch.randint(0, 2, (1,), device=self.device))
self.log('val_acc', self.acc, on_step=True, on_epoch=True)
return super().validation_step(batch, batch_idx)
if __name__ == '__main__':
early_stop = pl.callbacks.EarlyStopping(monitor='val_acc', mode='max')
checkpoint = pl.callbacks.ModelCheckpoint(
monitor='val_acc',
save_last=True,
save_top_k=5,
mode='max',
)
pl.Trainer(gpus=[0,], max_epochs=20, callbacks=[early_stop, checkpoint]).fit(CustomModel(), torch.utils.data.DataLoader(RandomDataset(32, 500)))
Environment
pl version: 1.1.0
Additional context
Sometimes I also get
File "pytorch_lightning/trainer/trainer.py", line 470, in fit
results = self.accelerator_backend.train()
File "pytorch_lightning/accelerators/gpu_accelerator.py", line 66, in train
results = self.train_or_test()
File "pytorch_lightning/accelerators/accelerator.py", line 65, in train_or_test
results = self.trainer.train()
File "pytorch_lightning/trainer/trainer.py", line 521, in train
self.train_loop.run_training_epoch()
File "pytorch_lightning/trainer/training_loop.py", line 627, in run_training_epoch
self.run_on_epoch_end_hook(epoch_output)
File "pytorch_lightning/trainer/training_loop.py", line 856, in run_on_epoch_end_hook
self.trainer.logger_connector.on_train_epoch_end()
File "pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py", line 367, in on_train_epoch_end
self.cached_results.has_batch_loop_finished = True
File "pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 441, in has_batch_loop_finished
self.auto_reduce_results_on_epoch_end()
File "pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 431, in auto_reduce_results_on_epoch_end
hook_result.auto_reduce_results_on_epoch_end()
File "pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 217, in auto_reduce_results_on_epoch_end
tbptt_outs = tbptt_outs[0].__class__.reduce_across_time(tbptt_outs)
File pytorch_lightning/core/step_result.py", line 566, in reduce_across_time
value = torch.tensor(value)
RuntimeError: Could not infer dtype of ABCMeta
which I could not reproduce with open code so far, but I assume it to be caused by the same issue.
cc @tchaton |
Allow configuring optimizer_step based on the gradients. | [
"bug",
"help wanted",
"priority: 1"
As of now, there is no way to skip optimizer_step based on the gradients after they are calculated, since the backward pass resides inside the closure and this closure is passed to optimizer.step. Also, if training_step returns None, optimizer_step is still called because of the closure, which I think it should not be if there are no pending accumulated gradients.
pytorch-lightning/pytorch_lightning/trainer/training_loop.py, lines 707 to 718 (commit fde972f):
def train_step_and_backward_closure():
    result = self.training_step_and_backward(
        split_batch,
        batch_idx,
        opt_idx,
        optimizer,
        self.trainer.hiddens
    )
    return None if result is None else result.loss

# optimizer step
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
pytorch-lightning/pytorch_lightning/core/optimizer.py, lines 127 to 136 (commit fde972f):
if trainer.on_tpu:
    with trainer.profiler.profile(profiler_name):
        xm.optimizer_step(optimizer, optimizer_args={'closure': closure, **kwargs})
elif trainer.amp_backend is not None:
    trainer.precision_connector.backend.optimizer_step(trainer, optimizer, closure)
else:
    with trainer.profiler.profile(profiler_name):
        optimizer.step(closure=closure, *args, **kwargs)
Alternate options
Use manual optimization.
Call zero_grad after backward and let optimizer_step run, which will then have no effect (see the sketch below).
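A sketch of that second (hacky) alternative, i.e. zeroing the gradients after backward so the subsequent optimizer.step() has no effect (an illustration only; the skip condition here is just an example):
def on_after_backward(self):
    # runs after loss.backward(); decide here whether this step should count
    total_norm = torch.nn.utils.clip_grad_norm_(self.parameters(), max_norm=1e9)
    if not torch.isfinite(total_norm):
        # wipe the gradients so the upcoming optimizer.step() is effectively a no-op
        self.zero_grad()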
Expected behavior
I would like to have this supported with automatic optimization, without the hacky workaround (the 2nd point above). |
Add train validity check | [
"feature",
"help wanted",
"design"
] | 🚀 Feature
Motivation
Currently, only validation check is being run.
In order to fix: #4797, we need the introduction of train validity check.
Refer to discussion in #5084.
Pitch
Alternatives
Additional context |
performance loss from 1.0.8 to 1.1.* when using 16 bit precision | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
After updating pytorch-lightning from 1.0.8 to 1.1.0/1.1.1, the use of 16-bit precision destroys performance.
In my actual object-detection code, losses are larger by a factor of 4 at the beginning compared to 32-bit, or to 16-bit with PL 1.0.8.
They converge to a much higher value, and the resulting model loses its detection capabilities completely.
To replicate, I tested the PL notebooks: 06-cifar10-baseline.ipynb also shows this, and the classification accuracy corresponds to guessing the class when switching from 32 to 16 bit.
I integrated it into the BoringModel notebook, and the problem also happens in Google Colab.
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1FqXG9Xw9gVZxnwiGnjsHpAtb-vUqFaob?usp=sharing
To Reproduce
Expected behavior
Same performance for 32 and 16 bit.
Environment
CUDA:
GPU:
Tesla P100-PCIE-16GB
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: True
pyTorch_version: 1.7.0+cu101
pytorch-lightning: 1.1.1
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context |
Reproducibility bug between LightningOptimizer activate / deactivate | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
Please reproduce using the BoringModel and post here
import os
import torch
from torch.utils.data import Dataset
from pytorch_lightning import Trainer, LightningModule, seed_everything
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class BoringModel(LightningModule):
def __init__(self):
"""
Testing PL Module
Use as follows:
- subclass
- modify the behavior for what you want
class TestModel(BaseTestModel):
def training_step(...):
# do your own thing
or:
model = BaseTestModel()
model.training_epoch_end = None
"""
super().__init__()
self.layer = torch.nn.Linear(32, 2)
def forward(self, x):
return self.layer(x)
def loss(self, batch, prediction):
# An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))
def step(self, x):
x = self.layer(x)
out = torch.nn.functional.mse_loss(x, torch.ones_like(x))
return out
def training_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
print(loss)
return {"loss": loss}
def training_step_end(self, training_step_outputs):
return training_step_outputs
def training_epoch_end(self, outputs) -> None:
torch.stack([x["loss"] for x in outputs]).mean()
def validation_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"x": loss}
def validation_epoch_end(self, outputs) -> None:
torch.stack([x['x'] for x in outputs]).mean()
def test_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"y": loss}
def test_epoch_end(self, outputs) -> None:
torch.stack([x["y"] for x in outputs]).mean()
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
return [optimizer], [lr_scheduler]
# NOTE: If you are using a cmd line to run your script,
# provide the cmd line as below.
# opt = "--max_epochs 1 --limit_train_batches 1".split(" ")
# parser = ArgumentParser()
# args = parser.parse_args(opt)
def run_test():
seed_everything(42)
# fake data
train_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
val_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
# model
before = BoringModel()
trainer = Trainer(
default_root_dir=os.getcwd(),
max_epochs=1,
limit_train_batches=4,
limit_val_batches=0,
weights_summary=None,
gpus=1,
precision=16,
)
trainer.fit(before, train_data, val_data)
seed_everything(42)
# fake data
train_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
val_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
# model
after = BoringModel()
trainer = Trainer(
default_root_dir=os.getcwd(),
max_epochs=1,
limit_train_batches=4,
limit_val_batches=0,
weights_summary=None,
gpus=1,
precision=16,
enable_pl_optimizer=False
)
trainer.fit(after, train_data, val_data)
# Assert model parameters are identical after fit
for before, after in zip(before.parameters(), after.parameters()):
assert torch.equal(before, after), 'Model parameters are different'
if __name__ == '__main__':
run_test()
Credit to @SeanNaren
To Reproduce
Expected behavior
Environment
PyTorch Version (e.g., 1.0):
OS (e.g., Linux):
How you installed PyTorch (conda, pip, source):
Build command you used (if compiling from source):
Python version:
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information:
Additional context |
How to compute metric in each step without reseting the metric | [
"feature",
"question"
] | I want to monitor a metric during the course of an epoch; that is, I want to know the metric value accumulated so far. However, if I call metric.compute() at each step, it resets my metric, which is not what I want. I read the source code and didn't find a clever way to do this. Could you tell me how to do it?
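One workaround (a sketch; it assumes a pl.metrics / torchmetrics-style Metric whose accumulated state lives in registered tensors) is to compute on a deep copy, so the state of the original metric is left untouched:
import copy

# inside training_step / validation_step, after updating self.train_acc
running_value = copy.deepcopy(self.train_acc).compute()
self.log('train_acc_running', running_value, prog_bar=True) |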
Failing computer_vision_fine_tuning example | [
"bug",
"good first issue",
"example",
"priority: 1"
] | Hello,
I noticed in the current master branch, the computer_vision_fine_tuning example fails to run with the following error.
TypeError: __init__() got an unexpected keyword argument 'root_data_path'
The error lies in the fact that root_data_path is being passed to pl.LightningModule.__init__ from main .
pytorch-lightning/pl_examples/domain_templates/computer_vision_fine_tuning.py, line 452 (commit e721b1f):
model = TransferLearningModel(dl_path=tmp_dir, **vars(args))
This can be fixed by removing the **kwargs in super().__init__(**kwargs). I don't see any arguments in the parser that actually need to be passed to the parent class, so it would not lead to bugs from ignoring the leftover arguments.
pytorch-lightning/pl_examples/domain_templates/computer_vision_fine_tuning.py, line 185 (commit e721b1f):
    super().__init__(**kwargs)
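A minimal sketch of the suggested fix (the signature is abbreviated and only illustrative; the point is the super().__init__ call):

class TransferLearningModel(pl.LightningModule):
    def __init__(self, dl_path: str, backbone: str = 'resnet50', **kwargs) -> None:
        super().__init__()   # drop **kwargs: LightningModule does not need the leftover parser arguments
        self.dl_path = dl_path
        self.backbone = backbone
        # ... rest of the original __init__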
In addition, the logging is still done in the "outdated" style of returning {'log': logging_dict} instead of calling self.log_dict(logging_dict) / self.log(name, value). Would it be in the interest of the project to update it (a sketch follows the references below)?
pytorch-lightning/pl_examples/domain_templates/computer_vision_fine_tuning.py, lines 275 to 280 (commit e721b1f):
    output = OrderedDict({'loss': train_loss,
                          'num_correct': num_correct,
                          'log': tqdm_dict,
                          'progress_bar': tqdm_dict})
    return output
pytorch-lightning/pl_examples/domain_templates/computer_vision_fine_tuning.py, lines 290 to 292 (commit e721b1f):
    return {'log': {'train_loss': train_loss_mean,
                    'train_acc': train_acc_mean,
                    'step': self.current_epoch}}
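A rough sketch of what the updated logging could look like (metric names follow the example; how train_acc is computed per batch is left out and would need to move out of the epoch-end hook):

def training_step(self, batch, batch_idx):
    # ... forward pass, loss and per-batch accuracy as before
    self.log('train_loss', train_loss, on_step=True, on_epoch=True, prog_bar=True)
    self.log('train_acc', train_acc, on_step=True, on_epoch=True)
    return train_loss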
would you be interested in me submitting a PR fixing these issues?
Kindest wishes,
Sebastian
related to #4694 |
How to run inference on a GPU? | [
"question"
] | ❓ Questions and Help
Hi.
I have trained a model with Trainer.fit(). Now I want to load the checkpoint somewhere else and perform inference, but I have no idea how to run inference on a GPU. Where can I assign a GPU for inference, just like assigning a GPU before training:
trainer = pl.Trainer(max_epochs = cfg['n_epochs'],
callbacks=[checkpoint_monitor, lr_monitor],
gpus=1)
My inference code is (simplified version):
model = plModel.load_from_checkpoint('lightning_logs/version_0/checkpoints/model_epoch=17_val_loss=95.99.ckpt')
model.eval()
test_loader = test_dataloader()
correct_count = 0
nsample = len(test_loader)
for i, data in enumerate(test_loader):
    output = model(data['x'])
    label = data['label'].data.cpu().numpy()
    pred = nn.functional.softmax(output, dim=1)
    pred = np.argmax(pred.data.cpu().numpy(), axis=1)
    correct_count += np.sum(pred == label)
accuracy = correct_count / nsample
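For reference, a minimal sketch of running the same loop on a GPU (assuming a single CUDA device is available; the key is moving both the model and each batch to the device):

device = torch.device('cuda:0')
model = plModel.load_from_checkpoint('lightning_logs/version_0/checkpoints/model_epoch=17_val_loss=95.99.ckpt')
model.to(device)
model.eval()

correct_count = 0
with torch.no_grad():
    for data in test_loader:
        output = model(data['x'].to(device))
        pred = output.argmax(dim=1).cpu().numpy()
        correct_count += np.sum(pred == data['label'].numpy())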
What's your environment?
OS: [ Linux,]
Packaging [pip]
Version [1.1.0] |
PyTorch Geometric example removed. | [
"question"
I saved this link https://github.com/PyTorchLightning/pytorch-lightning/tree/master/pl_examples/pytorch_ecosystem/pytorch_geometric a few days ago, but now I get a 404. Did it get moved somewhere else? I tried to google for it, but I can't find it anywhere.
Thanks. |
WandbLogger: add more flexibility with logging step | [
"feature",
"help wanted",
"logger"
] | 🚀 Feature
Let WandbLogger log using:
the trainer step by default (current behavior)
an auto-incremented step, independent from the trainer
Motivation
The current default of using the trainer step is a good one and seems to work fine for most people, automatically associating training and validation metrics with the right step.
However, some users have a few issues with custom logging:
#4980 wants to see a chart of validation metrics over a single batch
#5070 tries to log intermediate validation metrics, which uses a different step
wandb/client#1626
wandb/client#1507
#5050 solves some of these issues by suggesting using wandb.log(my_dict, commit=False) when logging outside of the "regular" workflow (which will automatically associate metrics to last used step) but not all of them.
In some cases it may be hard to ensure the associated step is always increasing.
Pitch
We will be able to have a new parameter in WandbLogger (TBD) such as sync_step=True:
by default True, we will have the current behavior which syncs wandb.step with the trainer step.
when set to False, the step will auto-increment at each logging (by relying on wandb.log() default behaviour) and the trainer step will just be added to logged metrics when available
In those cases (set to False), we will still be able to manually select the trainer step as x-axis in the W&B dashboard (global graphs or any specific graph) so this feature will add more flexibility in logging.
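A hypothetical usage sketch of the proposed flag (sync_step is the parameter proposed above, not an existing argument):

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

# default: the wandb step stays in sync with the trainer step (current behavior)
logger = WandbLogger(project='my-project', sync_step=True)

# proposed: let wandb auto-increment its own step and only attach the trainer step as a metric
logger = WandbLogger(project='my-project', sync_step=False)
trainer = Trainer(logger=logger)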
It will not be the default behavior mainly for people not familiar with W&B and that expect to see validation and training metrics aligned by default (see #4113). |
ImportError: cannot import name 'invoke_rpc_builtin' from 'torch.distributed' | [
"bug",
"help wanted",
"priority: 0",
"checkpointing"
] | Python 3.7.9 (default, Aug 31 2020, 07:22:35)
[Clang 10.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pytorch_lightning as pl
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/ericwiener/anaconda3/envs/donkey/lib/python3.7/site-packages/pytorch_lightning/__init__.py", line 61, in <module>
from pytorch_lightning.callbacks import Callback
File "/Users/ericwiener/anaconda3/envs/donkey/lib/python3.7/site-packages/pytorch_lightning/callbacks/__init__.py", line 19, in <module>
from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint
File "/Users/ericwiener/anaconda3/envs/donkey/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 36, in <module>
from pytorch_lightning.plugins.rpc_plugin import RPCPlugin
File "/Users/ericwiener/anaconda3/envs/donkey/lib/python3.7/site-packages/pytorch_lightning/plugins/rpc_plugin.py", line 24, in <module>
from torch.distributed import rpc
File "/Users/ericwiener/anaconda3/envs/donkey/lib/python3.7/site-packages/torch/distributed/rpc.py", line 3, in <module>
from . import invoke_rpc_builtin, invoke_rpc_python_udf, invoke_remote_builtin
ImportError: cannot import name 'invoke_rpc_builtin' from 'torch.distributed' (/Users/ericwiener/anaconda3/envs/donkey/lib/python3.7/site-packages/torch/distributed/__init__.py)
Reported on Slack by Eric Wiener:
https://pytorch-lightning.slack.com/archives/CRBLFHY79/p1608427484257200
It probably happens when torch<1.7 is installed (due to the missing rpc functionality).
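A hedged sketch of the kind of availability guard that could avoid the hard failure at import time (only an illustration, not necessarily the fix that will ship):

import torch

RPC_AVAILABLE = False
if torch.distributed.is_available():
    try:
        from torch.distributed import rpc  # noqa: F401
        RPC_AVAILABLE = True
    except ImportError:
        pass

With such a guard, the RPC-dependent plugin would simply be unavailable instead of breaking import pytorch_lightning. |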
MisconfigurationError: No TPU devices were found even when TPU is connected | [
"won't fix"
I've been frustrated over the past few hours by a problem, though it's likely a problem I created myself.
I'm trying to connect to the TPU in Colab. I'm pretty sure I've gotten all the import stuff down. My code is here. I'm not completely set on everything, so the entire document isn't functional, but you should be able to see my attempts at TPU connection.
I'm running Pytorch in version 1.5.0 and torchvision in 0.6.0 because I was finding I couldn't install XLA with anything later than 1.5.0. I'm running XLA in version 20200325.
The attached image is what seems so confusing: it states that we have a connection with xla:1, yet when I try to flag the TPU in the trainer I get an error saying no TPU devices can be found.
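For context, a short sketch of the kind of check that reports the xla:1 device versus the Trainer flag that then fails (assuming torch_xla is installed as described above; tpu_cores=8 is just an example):

import torch_xla.core.xla_model as xm
import pytorch_lightning as pl

print(xm.xla_device())             # prints something like xla:1 when a TPU is reachable

trainer = pl.Trainer(tpu_cores=8)  # this is where the MisconfigurationError is raised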
If anyone could help me, that would be amazing.
Thanks,
A |
Cannot use SyncBN with sharded DDP. | [
"bug",
"help wanted",
"3rd party"
] | 🐛 Bug
To Reproduce
Using SyncBN with sharded DDP plugin makes this error message.
AttributeError: SyncBatchNorm is only supported within torch.nn.parallel.DistributedDataParallel
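A minimal sketch of the configuration that triggers it (an assumed setup, not copied from a repro script; model stands in for any LightningModule):

import pytorch_lightning as pl

trainer = pl.Trainer(
    gpus=2,
    accelerator='ddp',
    plugins='ddp_sharded',   # sharded DDP via fairscale
    sync_batchnorm=True,     # converts BatchNorm layers to SyncBatchNorm
)
trainer.fit(model)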
I think this is not a PL bug but rather an incompatibility between PyTorch and fairscale.
Nevertheless, I think there is a way to support this combination in PL (for example, by overriding SyncBN).
Do you have any plans to support it?
Environment
- PyTorch Version (e.g., 1.0): 1.7.1
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, source): pip
- Python version: 3.8
- CUDA/cuDNN version: 10.1
- GPU models and configuration: V100 / driver == 418.39
- Any other relevant information: PL 1.1.0 |
Add more dataloader options to the trainer for TPU training. | [
"feature",
"help wanted",
"won't fix",
"accelerator: tpu"
] | 🚀 Feature
Add more dataloader options to the trainer for TPU training.
Motivation
As is described in https://pytorch.org/xla/release/1.5/_modules/torch_xla/distributed/parallel_loader.html, the signature for ParallelLoader is
def __init__(self,
             loader,
             devices,
             batchdim=0,
             fixed_batch_size=False,
             loader_prefetch_size=8,
             device_prefetch_size=4):
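For reference, a sketch of how these options are passed when using torch_xla's loader directly (train_loader and the prefetch values are just examples):

import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as xpl

device = xm.xla_device()
para_loader = xpl.ParallelLoader(
    train_loader,
    [device],
    fixed_batch_size=True,     # stop iterating as soon as a batch of a different size shows up
    loader_prefetch_size=16,
    device_prefetch_size=8,
)
for batch in para_loader.per_device_loader(device):
    ...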
The fixed_batch_size option can fix many problems, especially those caused by transformer-type models. On the other hand, prefetching can greatly improve the data-loading speed.
Pitch
I would like to add these to the init parameters of the Trainer. However, I would like to know whether a merge would be possible before continuing. |
split_batches instead of accumulate_grad_batches | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Add a split_batches argument to the trainer that splits the batches of your dataloaders into smaller mini-batches (inverse of accumulate_grad_batches), without changing any of the other parts of training.
Optimizers would then only run every n mini-batches and for all other intents and purposes, it would appear the training/validation/testing happens with the original batch size.
Motivation
Imagine the following situation:
Researcher A develops a state-of-the-art technique for a certain tasks and develops an entire training pipeline with PyTorch Lightning. This researcher is part of a big fancy institute and has access to great computing infrastructure with the best GPUs with lots of VRAM. This allows him to train his model with a big batch_size and Researcher A thus figures it is necessary to set the batch_size of his training to 128 in order to reach good results. His entire pipeline (optimizer steps, training length, etc.) is thus built around this batch_size of 128.
He finally beats the SotA in his field and publishes a paper about it!
Researcher B, from a much smaller institute, reads about this paper and immediately sees a few possibilities for improving this new model! He quickly goes to github, searches for the new technique and is super happy to find that the original authors published some code in PyTorch. Even better, they used his favorite framework, PyTorch Lightning!
He pulls the code and data, and decides to first try and train a model to check whether he can reach the same accuracy. But alas, there comes the first issue, he does not have enough memory to train a model with that big batch_size. He tries to use smaller batch sizes, but this yields worse results, as this dataset is best trained with big batch sizes. Researcher B knows PyTorch Lightning well and thus uses a smaller batch_size of 32 and sets the accumulate_grad_batches to 4, in order to fake bigger batches of 128.
However, and here comes the frustrating part in all of this, in order to make these changes, researcher B had to go into the LightningDataModule and change the batch_size. He also needed to go into the LightningModules and adapt the optimizers and schedulers, as they were setup to run with an interval of 'step' (and now need to update 4 times slower). Finally, he cannot forget to run the training with the argument accumulate_grad_batches set to 4, otherwise his entire training will behave differently.
The moral of this made-up story is that the accumulate_grad_batches setting of the Trainer is an engineering trick, but in the current implementation it has quite a substantial influence on some of the hyperparameters of the model, which are RnD settings. IMO, the various Trainer parameters should not influence the results of training (or at least have minimal effect on it).
You should be able to define your training hyperparameters from a research/scientific point of view and only have to worry about the engineering part when starting your training, without both choices influencing each other.
Pitch
I would like to add a split_batches=n argument to the Trainer class, which would reduce the batch_size of all dataloaders to batch_size / n, but then run the forward pass n times before running the optimizer/scheduler/...
For every part of the training, it would act as if the training is using batch_size, but only the actual batches would be smaller.
In contrast to accumulate_grad_batches, this flag allows you to choose a different effective (mini-)batch size depending on your system and resources, without affecting anything else in the code (or needing to modify different parts of the model/data to match the original training).
The only minor issue I see with this flag, is that your choice of batch_size might not be divisible by your settings of split_batches. In reality, batches are usually factors of 2 and finding a proper split value should be doable, but if the values do not match, we can do the following:
batch_size = 130
split_batches = 4
mini_batch_size = round(130 / 4) = 32
new_batch_size = 4 * 32 = 128
This would mean the training uses an effective batch_size of 128, which is not the same as the original 130, but is the closest value that matches and hopefully should only have a minor influence on the results.
We would also log a warning to the user that we are unable to match the original batch_size.
An alternative might be to ceil the mini-batch size, as a bigger batch_size is usually better than a small one, though this is open for discussion.
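A hypothetical usage sketch of the proposed flag (split_batches does not exist yet; the comparison with accumulate_grad_batches is only illustrative):

# today: the DataModule has to be edited to batch_size=32 and step-based schedulers retuned
trainer = Trainer(accumulate_grad_batches=4)

# proposed: keep batch_size=128 everywhere; only the engineering knob changes per machine
trainer = Trainer(split_batches=4)   # each 128-sample batch is fed as 4 mini-batches of 32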
Alternatives
Obviously, this argument is quite similar to accumulate_grad_batches, but I think I highlighted the reasons for this argument enough in the text above.
I think both arguments can exist next to each other, as long as you dont use both at the same time.
In the long run, you could decide to deprecate accumulate_grad_batches, but that decision is not mine to make.
Additional context
I would love to contribute to this project and what better way than to implement something I am requesting myself.
However, I looked through the Trainer class and noticed it is quite a complicated setup you got there.
I think we would need to adapt the training loop logic in order to run through n batches, and set self.trainer.batch_idx = batch_idx / n.
Alternatively, we could provide something like the tbptt_split_batch functionality, in order to split an existing batch further during runtime. This involves adding some logic to should_accumulate(), in order to disable the backward pass.
If you agree that this feature would be nice to add, I would appreciate some pointers on where to best start implementing it! |
enable_pl_optimizer causes optimizers to not be restored properly | [
"bug",
"help wanted",
"priority: 1"
] | 🐛 Bug
enable_pl_optimizer (default!) causes optimizers to not be restored properly from the checkpoint specified by resume_from_checkpoint.
BoringModel Colab Reproduction
The model is trained for 3 epochs and saved in a checkpoint. The checkpoint is then restored and further trained for 1 epoch (with different values of enable_pl_optimizer), the training loss is printed at each step.
The setup where enable_pl_optimizer=True shows a huge loss spike after the first optimizer step, suggesting that the optimizer is not restored properly.
https://colab.research.google.com/drive/1lHYXm4MpnmXwPZTcPem4D4wwwU5vJhHc?usp=sharing
Expected behavior
PL Optimizers are restored such that there is no huge loss spike after restore, just like when enable_pl_optimizer=False.
Environment
See Colab. |
How to disable automatic SLURM detection / signal handling? | [
"feature",
"won't fix",
"environment: slurm",
"priority: 1"
] | ❓ Questions and Help
What is your question?
I'm running single-GPU jobs on a SLURM cluster. PyTorch Lightning uses environment variables to detect that I'm on SLURM and automatically intercepts SIGTERM signals. However, when I'm debugging I don't want the SIGTERM to be bypassed; I need to know where the signal originates.
I can't seem to tell PyTorch Lightning not to use the SLURM handler, because SLURM is automatically detected via environment variables. Is there any way to disable PL's default SLURM connector / SIGTERM bypass?
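One workaround sometimes used while debugging (an assumption about the detection mechanism, not an official switch) is to scrub the SLURM-related environment variables before constructing the Trainer, so the auto-detection never triggers:

import os

for var in list(os.environ):
    if var.startswith('SLURM_'):
        os.environ.pop(var)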
OS: Linux
Packaging pip
Version 1.1.1 |
How to split my model across different GPUs? | [
"question"
] | pytorch-lightning forum
The problem is described in the linked forum post above. I would appreciate some help, thanks! |
TPU training freezes when a custom backward operation is defined | [
"bug",
"help wanted",
"won't fix",
"accelerator: tpu"
As mentioned in the title, TPU training freezes when a custom backward operation is defined. A minimal example is in the Colab notebook below; I have noted where it stops working.
https://colab.research.google.com/gist/rwbfd/57dde430bc168505cfe7d5c42a31924e/tpu_freeze.ipynb |
Returning None from training_step with multi GPU DDP training | [
"feature",
"help wanted",
"distributed",
"priority: 1"
] | 🐛 Bug
Returning None from training_step with multi GPU DDP training freezes the training without exception
To Reproduce
Starting multi-gpu training with a None-returning training_step function.
Example training_step function:
def training_step(self, batch, batch_idx):
    data, target = batch
    model_outputs = self.forward(data)
    loss = calc_loss(model_outputs, target)
    if torch.isnan(loss) or random.random() < .05:
        return None
    return loss
Example trainer:
trainer = Trainer(
    gpus=2,
    distributed_backend="ddp"
)
Expected behavior
Training should continue, skipping the current batch, as pointed out here.
Environment
No specific environment is needed to reproduce this bug.
Additional context
This issue was mentioned here: #4956 but not with specifics.
Note: while this issue is being investigated, help with a workaround would be great! |
Schedule model testing every N training epochs | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
A check_test_every_n_epoch trainer option to schedule model testing every n epochs, just like check_val_every_n_epoch for validation.
Motivation
Sometimes validation and test tasks are very different. For instance, in unsupervised anomaly detection or segmentation, the training and validation set cannot contain anomalous samples, and therefore the set of metrics that can be computed on such set is limited.
Test metrics cannot also be computed on a second validation set that contains a portion of test data, because all parameters and hyperparameter optimization should be performed on clean (anomaly-free) samples.
The only way the user has to check test metrics is to run a test epoch, but currently pytorch-lightning allows to run a test epoch only after the trainer.fit() call has ended, by explicitly calling trainer.test().
Pitch
When a Trainer is initialized with check_test_every_n_epoch=x, every x epochs the fitting automatically pauses, a test epoch is executed (computing and logging test metrics), and then the fitting resumes.
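A hypothetical sketch of the proposed usage (check_test_every_n_epoch does not exist today; the name mirrors check_val_every_n_epoch, and model/datamodule are placeholders):

trainer = Trainer(
    max_epochs=100,
    check_val_every_n_epoch=1,
    check_test_every_n_epoch=10,   # proposed: pause fitting and run a full test epoch every 10 epochs
)
trainer.fit(model, datamodule=datamodule)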
Alternatives
I tried to:
Calling self.trainer.test() from the on_train_epoch_start module hook when trainer.current_epoch becomes a multiple of the desired value. This appears to work, but after calling test the epoch counter keeps increasing from its last value while the trainer global_step is reset to the value it had when test was last called, creating the beautiful effect shown in the figure and making the logs unreadable.
Raising KeyboardInterrupt from the on_train_epoch_start module hook to stop the fitting, testing, manually saving a checkpoint, and resuming training from the saved checkpoint in a while loop. Cumbersome, poor disk usage, and training can no longer be stopped gracefully with Ctrl+C.
Computing test metrics in validation_epoch_end by manually looping on test set using pytorch only. Cumbersome, code duplication, makes test hooks useless, poor code readability.
Additional context
Tensorboard logging current epoch each step when trying solution 1: |
Why is there a difference in the loss value logged using self.log? | [
"question"
] | What is your question?
(New to Lightning.) The progress bar shows the loss for each step. Before logging other metrics using self.log, I wanted to sanity-test it by logging the loss again under a different name.
Please check the last two lines of the code. I'm logging the same loss value under "train_loss" that is being returned, so I expected the "loss" and "train_loss_step" values in the progress bar to be the same, but they aren't. The values don't seem to be time-shifted either.
Which value should I interpret as the loss of the current mini-batch and why is there a difference?
Code
class LitCNN(pl.LightningModule):
    def __init__(self):
        super(LitCNN, self).__init__()
        # Model preparation
        model = torchvision.models.resnet34(pretrained=True)
        # Freeze layers
        for params in model.parameters():
            params.requires_grad = False
        # Change the final fully-connected layer to match our number of classes
        model.fc = nn.Linear(in_features=512, out_features=5, bias=True)
        self.model = model
        self.metric_accuracy = pl.metrics.Accuracy()
        self.loss = nn.CrossEntropyLoss()

    def forward(self, x):
        # called with self(x)
        return self.model(x)  # .argmax(1) # Return class ID

    def training_step(self, batch, batch_nb):
        # REQUIRED
        X, y = batch
        logits = self.model(X)
        loss = self.loss(logits, y)
        self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)
        return {'loss': loss}
What have you tried?
I've experimented with the "reduce_fx" arg in self.log but it seems to get invoked only on epoch end.
What's your environment?
OS: Win 10
Packaging: Conda
Version: 1.1.0 |