title | labels | bodyText
---|---|---|
validation loops run the partial dataset with horovod | [
"bug",
"help wanted"
] | Hello,
It seems to be the same issue as #1161.
When I use horovod, validation_step and validation_epoch_end are called multiple times.
Thank you. |
auto_lr_find=True doesn't work with early_stop_callback | [
"bug",
"help wanted"
] | 🐛 Bug
When I use auto_lr_find=True together with early_stop_callback, I get errors like this:
Traceback (most recent call last):
File "gpu_template.py", line 92, in <module>
main(hyperparams)
File "gpu_template.py", line 53, in main
trainer.fit(model)
File "/home/hirune/anaconda3/envs/PANDA/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 734, in fit
self._run_lr_finder_internally(model)
File "/home/hirune/anaconda3/envs/PANDA/lib/python3.7/site-packages/pytorch_lightning/trainer/lr_finder.py", line 31, in _run_lr_finder_internally
lr_finder = self.lr_find(model)
File "/home/hirune/anaconda3/envs/PANDA/lib/python3.7/site-packages/pytorch_lightning/trainer/lr_finder.py", line 164, in lr_find
self.restore(str(save_path), on_gpu=self.on_gpu)
File "/home/hirune/anaconda3/envs/PANDA/lib/python3.7/site-packages/pytorch_lightning/trainer/training_io.py", line 289, in restore
self.restore_training_state(checkpoint)
File "/home/hirune/anaconda3/envs/PANDA/lib/python3.7/site-packages/pytorch_lightning/trainer/training_io.py", line 372, in restore_training_state
self.early_stop_callback.wait = checkpoint['early_stop_callback_wait']
KeyError: 'early_stop_callback_wait'
To Reproduce
Execute the following code like this.
python gpu_template.py --gpus=0
The code below is a slightly modified version of this example:
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/basic_examples/gpu_template.py
Code sample
"""
Runs a model on a single node across multiple gpus.
"""
import os
from argparse import ArgumentParser
import numpy as np
import torch
import pytorch_lightning as pl
from pl_examples.models.lightning_template import LightningTemplateModel
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.callbacks import EarlyStopping
SEED = 2334
torch.manual_seed(SEED)
np.random.seed(SEED)
def main(hparams):
"""
Main training routine specific for this project
:param hparams:
"""
early_stop_callback = EarlyStopping(
monitor='val_loss',
patience=20,
min_delta=0.0,
strict=True,
verbose=True,
mode='min'
)
# ------------------------
# 1 INIT LIGHTNING MODEL
# ------------------------
model = LightningTemplateModel(hparams)
# ------------------------
# 2 INIT TRAINER
# ------------------------
trainer = pl.Trainer(
max_epochs=hparams.epochs,
gpus=hparams.gpus,
distributed_backend=hparams.distributed_backend,
precision=16 if hparams.use_16bit else 32,
auto_lr_find=True,
early_stop_callback=early_stop_callback,
)
# ------------------------
# 3 START TRAINING
# ------------------------
trainer.fit(model)
if __name__ == '__main__':
# ------------------------
# TRAINING ARGUMENTS
# ------------------------
# these are project-wide arguments
root_dir = os.path.dirname(os.path.realpath(__file__))
parent_parser = ArgumentParser(add_help=False)
# gpu args
parent_parser.add_argument(
'--gpus',
type=int,
default=2,
help='how many gpus'
)
parent_parser.add_argument(
'--distributed_backend',
type=str,
default='dp',
help='supports three options dp, ddp, ddp2'
)
parent_parser.add_argument(
'--use_16bit',
dest='use_16bit',
action='store_true',
help='if true uses 16 bit precision'
)
# each LightningModule defines arguments relevant to it
parser = LightningTemplateModel.add_model_specific_args(parent_parser, root_dir)
hyperparams = parser.parse_args()
# ---------------------
# RUN TRAINING
# ---------------------
main(hyperparams)
Expected behavior
Environment
CUDA:
GPU:
GeForce GTX 1080 Ti
available: True
version: 10.1
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.5.0
pytorch-lightning: 0.7.5
tensorboard: 2.2.1
tqdm: 4.42.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.6
version: #97-Ubuntu SMP Wed Apr 1 03:25:46 UTC 2020
Additional context |
Return the evaluation result of Trainer.test | [
"feature",
"help wanted"
] | 🚀 Feature
This enhancement request is to let Trainer.test return the dictionary of test metrics.
Motivation
Currently, Trainer.test returns nothing; the user has to open TensorBoard or write a custom logger to see the test results. This enhancement offers simplicity, especially for beginners.
In some scenarios, such as hparams search, the calling program needs the test result of each trial, so it would be handy if Trainer.test returned a value.
def objective(trial):
...
trainer = Trainer(...)
result = trainer.test(best_model, validation_loader)
return result['test_acc']
Pitch
If test_epoch_end already defines the return value, we can return the eval_results from run_evaluation and let Trainer.test return that result.
Alternatives
Pass a reference to a mutable collection (a metrics summary) to the module; in on_test_end, the user can choose to update the collection.
Additional context
Some refactoring may be needed, as Trainer.test currently has different code paths depending on whether a model is passed and whether ddp is used.
[Examples] The UNet model has some bugs | [
"bug",
"help wanted",
"example"
] | 🐛 Bug
The UNet model definition has some bugs pertaining to bilinear interpolation.
Code sample
pl_examples/models/unet.py, lines 35 to 37, commit 2950f66:
for _ in range(num_layers - 1):
layers.append(Up(feats, feats // 2), bilinear)
feats //= 2
In the code above, there seems to be a typo. The bilinear flag should be passed to the function Up(). It has instead been passed to the .append() method of the list.
pl_examples/models/unet.py, lines 101 to 104, commit 2950f66:
if bilinear:
self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
else:
self.upsample = nn.ConvTranspose2d(in_ch, in_ch // 2, kernel_size=2, stride=2)
The number of channels after the input passes through either of these layers is different: with "bilinear" the number of channels remains the same, whereas it is halved if a ConvTranspose2d is used. This causes an error in the network's .forward() method.
I wanted to directly use the model for some other application, but I'm not sure how the second issue should be solved. Maybe use a 1x1 convolution to reduce the channels to half? |
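One possible way to address both points, sketched rather than taken from the examples: pass the bilinear flag into Up(), and when bilinear upsampling is chosen follow it with a 1x1 convolution so the channel count matches the ConvTranspose2d branch.
import torch.nn as nn

class Up(nn.Module):
    # Sketch of the relevant part only: bilinear upsample + 1x1 conv halves the
    # channels just like the ConvTranspose2d branch does. out_ch would be used
    # by the conv block that follows in the full module (omitted here).
    def __init__(self, in_ch: int, out_ch: int, bilinear: bool = False):
        super().__init__()
        if bilinear:
            self.upsample = nn.Sequential(
                nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),
                nn.Conv2d(in_ch, in_ch // 2, kernel_size=1),
            )
        else:
            self.upsample = nn.ConvTranspose2d(in_ch, in_ch // 2, kernel_size=2, stride=2)

# and in the loop, pass the flag to Up rather than to list.append:
# layers.append(Up(feats, feats // 2, bilinear))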
MNIST multi-GPU: You requested GPUs: [6, 7] But your machine only has: [0, 1] | [
"bug",
"help wanted"
] | import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
class CoolSystem(pl.LightningModule):
def __init__(self, hparams=None):
super().__init__()
# get hyperparams, etc...
self.hparams = hparams
# not the best model...
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
# called with self(x)
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_idx):
# REQUIRED
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_idx):
# OPTIONAL
x, y = batch
y_hat = self(x)
val_loss = F.cross_entropy(y_hat, y)
return {'val_loss': val_loss}
def validation_epoch_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'val_loss': avg_loss, 'log': tensorboard_logs}
def test_step(self, batch, batch_idx):
# OPTIONAL
x, y = batch
y_hat = self(x)
return {'test_loss': F.cross_entropy(y_hat, y)}
def test_epoch_end(self, outputs):
avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
tensorboard_logs = {'test_loss': avg_loss}
return {'test_loss': avg_loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.001)
def prepare_data(self):
self.mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
self.mnist_test = MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())
def train_dataloader(self):
loader = DataLoader(self.mnist_train, batch_size=32, num_workers=4)
return loader
def val_dataloader(self):
loader = DataLoader(self.mnist_test, batch_size=32, num_workers=4)
return loader
def test_dataloader(self):
loader = DataLoader(self.mnist_test, batch_size=32, num_workers=4)
return loader
from pytorch_lightning import Trainer
model = CoolSystem()
# most basic trainer, uses good defaults
trainer = Trainer(gpus=[6,7],num_nodes=2,distributed_backend='ddp', progress_bar_refresh_rate=10, max_epochs=10)
trainer.fit(model)
the output is:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home1/liuxinfang/projects/minist/model.py", line 84, in <module>
trainer = Trainer(gpus=[6,7],num_nodes=2,distributed_backend='ddp', progress_bar_refresh_rate=10, max_epochs=10)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 438, in __init__
self.data_parallel_device_ids = parse_gpu_ids(self.gpus)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 712, in parse_gpu_ids
gpus = sanitize_gpu_ids(gpus)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 678, in sanitize_gpu_ids
""")
pytorch_lightning.utilities.exceptions.MisconfigurationException:
You requested GPUs: [6, 7]
But your machine only has: [0, 1] |
Enable Deepsource checks in CI/CD | [
"feature",
"help wanted",
"won't fix",
"ci"
] | In addition to flake8, mypy and other checks, I use https://deepsource.io/ for my projects.
It is free and from time to time it finds non-trivial bugs.
When I use it for projects with PyTorch Lightning and several validation loaders, it reports a Python style-and-syntax error. To overcome this I need to add comments like
# skipcq: PYL-W0201 to my code, which I would prefer to avoid.
Would it be possible to enable DeepSource checks in the PyTorch Lightning repo? I believe it would lead to better code quality. |
Error running on ddp (can't pickle local object 'SummaryTopic) with comet logger | [
"bug",
"help wanted"
] | I have the following problem when running in ddp mode with CometLogger.
When I detach the logger from the trainer (i.e. deleting logger=comet_logger), the code runs.
Exception has occurred: AttributeError
Can't pickle local object 'SummaryTopic.__init__.<locals>.default'
File "/path/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "/path/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/path/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/path/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/path/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/path/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/path/site-packages/torch/multiprocessing/spawn.py", line 162, in spawn
process.start()
File "/path/site-packages/pytorch_lightning/trainer/trainer.py", line 751, in fit
mp.spawn(self.ddp_train, nprocs=self.num_processes, args=(model,))
File "/repo_path/train.py", line 158, in main_train
trainer.fit(model)
File "/repo_path/train.py", line 72, in main
main_train(model_class_pointer, hyperparams, logger)
File "/repo_path/train.py", line 167, in <module>
main()
File "/path/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/path/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/path/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/path/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/path/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec) |
Model checkpoint and restore via GCS on Ai platform | [
"question"
] | ❓ Questions and Help
What is your question?
I am training models on Google's AI Platform, and all checkpointing and restoration is done via Google Cloud Storage (not using pytorch-lightning for that yet). I noticed that pytorch-lightning uses torch.save, torch.load and io operations for saving and loading, which do not handle GCS by default (although I could pass a file-like object to torch.save/load). Is it possible to extend the library so that it works with GCS without forking and modifying the code directly? The most important component for me is checkpoint save and restore on GCS. Is it a matter of the Trainer class, Callbacks, or model hooks? |
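A rough workaround sketch (not a built-in Lightning feature): since torch.save and torch.load accept file-like objects, checkpoints can be streamed to and from GCS with the google-cloud-storage client; the bucket and blob names below are placeholders.
import io
import torch
from google.cloud import storage

def save_to_gcs(obj, bucket_name, blob_name):
    # Serialize to an in-memory buffer, then upload it as a blob.
    buffer = io.BytesIO()
    torch.save(obj, buffer)
    buffer.seek(0)
    storage.Client().bucket(bucket_name).blob(blob_name).upload_from_file(buffer)

def load_from_gcs(bucket_name, blob_name, map_location=None):
    # Download the blob into memory and hand the buffer to torch.load.
    buffer = io.BytesIO()
    storage.Client().bucket(bucket_name).blob(blob_name).download_to_file(buffer)
    buffer.seek(0)
    return torch.load(buffer, map_location=map_location)

Whether this should hook in via the Trainer, a Callback, or model hooks is exactly the open question above; a custom checkpoint-style callback that calls save_to_gcs is one place it could live.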
example of doing simple inference with lightning | [
"question",
"won't fix"
] | I have an existing model where I load some pre-trained weights and then do inference (one image at a time) in pytorch. I am trying to basically convert it to a pytorch lightning module and am confused about a few things.
So currently, my __init__ method for the model looks like this:
self._load_config_file(cfg_file)
# just creates the pytorch network
self.create_network()
self.load_weights(weights_file)
self.cuda(device=0) # assumes GPU and uses one. This is probably suboptimal
self.eval() # inference mode
From what I can gather from the Lightning docs, I can do pretty much the same, except for the cuda() call. So something like:
self.create_network()
self.load_weights(weights_file)
self.freeze() # inference mode
So, my first question is whether this is the correct way to use lightning? How would lightning know if it needs to use the GPU? I am guessing this needs to be specified somewhere.
Now, for the inference, I have the following setup:
def infer(frame):
img = transform(frame) # apply some transformation to the input
img = torch.from_numpy(img).float().unsqueeze(0).cuda(device=0)
with torch.no_grad():
output = self.__call__(Variable(img)).data.cpu().numpy()
return output
This is the bit that has me confused. Which functions do I need to override to make a lightning compatible inference?
Also, at the moment, the input comes as a numpy array. Is that something that would be possible from the lightning module or do things always have to use some sort of a dataloader?
At some point, I want to extend this model implementation to do training as well, so I want to make sure I do it right. While most examples focus on training models, a simple example of just doing inference at production time on a single image/data point would be useful.
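A minimal sketch of such an inference-only setup (MyLightningModel, transform and weights.ckpt are placeholders from this discussion, and it assumes the weights live in a Lightning checkpoint):
import numpy as np
import torch

model = MyLightningModel.load_from_checkpoint('weights.ckpt')
model.freeze()                                    # eval mode + no gradients
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

def infer(frame: np.ndarray) -> np.ndarray:
    img = transform(frame)                        # user-defined preprocessing
    img = torch.from_numpy(img).float().unsqueeze(0).to(device)
    with torch.no_grad():
        output = model(img)                       # calls forward()
    return output.cpu().numpy()

No training_step/validation_step overrides are needed just for this; numpy in, numpy out works fine without a DataLoader.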
I am using 0.7.5 with pytorch 1.4.0 on GPU with cuda 10.1
Thank you for your work and patience with these newbie questions. |
MNIST ddp: You requested GPUs: [6, 7] But your machine only has: [0, 1] | [
"bug",
"help wanted"
] | import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
class CoolSystem(pl.LightningModule):
def __init__(self, hparams=None):
super().__init__()
# get hyperparams, etc...
self.hparams = hparams
# not the best model...
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
# called with self(x)
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_idx):
# REQUIRED
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_idx):
# OPTIONAL
x, y = batch
y_hat = self(x)
val_loss = F.cross_entropy(y_hat, y)
return {'val_loss': val_loss}
def validation_epoch_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'val_loss': avg_loss, 'log': tensorboard_logs}
def test_step(self, batch, batch_idx):
# OPTIONAL
x, y = batch
y_hat = self(x)
return {'test_loss': F.cross_entropy(y_hat, y)}
def test_epoch_end(self, outputs):
avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
tensorboard_logs = {'test_loss': avg_loss}
return {'test_loss': avg_loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.001)
def prepare_data(self):
self.mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
self.mnist_test = MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())
def train_dataloader(self):
loader = DataLoader(self.mnist_train, batch_size=32, num_workers=4)
return loader
def val_dataloader(self):
loader = DataLoader(self.mnist_test, batch_size=32, num_workers=4)
return loader
def test_dataloader(self):
loader = DataLoader(self.mnist_test, batch_size=32, num_workers=4)
return loader
from pytorch_lightning import Trainer
model = CoolSystem()
# most basic trainer, uses good defaults
trainer = Trainer(gpus=[6,7],
num_nodes=4,
distributed_backend='ddp',
progress_bar_refresh_rate=10, max_epochs=10)
trainer.fit(model)
INFO:lightning:GPU available: True, used: True
INFO:lightning:CUDA_VISIBLE_DEVICES: [6,7]
Traceback (most recent call last):
File "", line 1, in
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="mp_main")
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home1/liuxinfang/projects/minist/model.py", line 87, in
progress_bar_refresh_rate=10, max_epochs=10)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 438, in init
self.data_parallel_device_ids = parse_gpu_ids(self.gpus)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 712, in parse_gpu_ids
gpus = sanitize_gpu_ids(gpus)
File "/home1/liuxinfang/anaconda3/envs/MomentRetrival/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 678, in sanitize_gpu_ids
""")
pytorch_lightning.utilities.exceptions.MisconfigurationException:
You requested GPUs: [6, 7]
But your machine only has: [0, 1]
Actually my machine has 8 GPUs. Since GPUs 0 and 1 are used by other users, I need to use 6 and 7, which have enough free memory. The code runs normally with a single GPU (6 or 7), but fails with more than one GPU. |
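A workaround that often helps in this situation (a sketch, not a confirmed fix): make only the two free cards visible before anything initializes CUDA, then request GPUs by count; num_nodes is the number of machines, so it should probably stay at 1 here.
import os

# must run before torch / pytorch_lightning touch CUDA
os.environ['CUDA_VISIBLE_DEVICES'] = '6,7'        # physical cards 6 and 7 appear as 0 and 1

from pytorch_lightning import Trainer

trainer = Trainer(
    gpus=2,                                       # "two visible GPUs", no hard-coded ids
    num_nodes=1,
    distributed_backend='ddp',
    progress_bar_refresh_rate=10,
    max_epochs=10,
)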
Can lightning auto-find unused GPUs | [
"question"
] | I have 2 GPUs in a machine.
PL always picks the same GPU to use.
Can it automatically find the unused GPU and run on it? |
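A small helper sketch for picking the card with the least memory in use (it relies on nvidia-smi being on PATH; newer Lightning releases also expose an auto_select_gpus flag for a related purpose):
import subprocess
from pytorch_lightning import Trainer

def least_used_gpu():
    # Query per-GPU used memory in MiB and return the index of the minimum.
    out = subprocess.check_output(
        ['nvidia-smi', '--query-gpu=memory.used', '--format=csv,noheader,nounits']
    )
    used = [int(x) for x in out.decode().strip().splitlines()]
    return min(range(len(used)), key=used.__getitem__)

trainer = Trainer(gpus=[least_used_gpu()])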
Get Caffe2 error when importing pytorch-lightning | [
"help wanted",
"question"
] | 🐛 Bug
I get a Caffe2 error when importing pytorch-lightning.
To Reproduce
Steps to reproduce the behavior:
import pytorch_lightning
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:root:Debug message: /opt/caffe2/build/lib/libcaffe2.so: undefined symbol: dnnLRNCreateForward_F32
CRITICAL:root:Cannot load caffe2.python. Error: /opt/caffe2/build/caffe2/python/caffe2_pybind11_state.so: undefined symbol: _Py_ZeroStruct
Environment
CUDA:
GPU:
Tesla P100-PCIE-16GB
available: True
version: 10.0.130
Packages:
numpy: 1.18.1
pytorch_lightning 0.7.5
pyTorch_debug: False
pyTorch_version: 1.2.0
tensorboard: 2.2.1
tqdm: 4.45.0
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.6
version: #76-Ubuntu SMP Wed Feb 12 03:02:44 UTC 2020 |
Trainer.from_argparse_args with additional kwargs causes model to not be saved | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
When using Trainer.from_argparse_args() to initialize the trainer, there are some specific arguments that we would like to keep constant and not send as part of hparams. If such an extra argument is an object, such as a TensorBoardLogger or a ModelCheckpoint, the model will not be saved, because these objects get added to hparams.
To Reproduce
Code sample
import os
from argparse import ArgumentParser
import pytorch_lightning as pl
import torch
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST
class LitModel(pl.LightningModule):
def __init__(self, hparams):
super().__init__()
self.hparams = hparams
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {"train_loss": loss}
return {"loss": loss, "log": tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.001)
def train_dataloader(self):
dataset = MNIST(
os.getcwd(), train=True, download=True, transform=transforms.ToTensor()
)
loader = DataLoader(dataset, batch_size=32, num_workers=4, shuffle=True)
return loader
def main(hparams):
logger = TensorBoardLogger(save_dir=os.getenv("HOME"), name="logs")
net = LitModel(hparams)
trainer = Trainer.from_argparse_args(
hparams, logger=logger, checkpoint_callback=True, overfit_pct=0.01,
)
trainer.fit(net)
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--sample", type=int, default=42, help="Sample Argument")
hparams = parser.parse_args()
main(hparams)
Error
TypeError: cannot pickle '_thread.lock' object
Expected behavior
The model should be saved without treating the extra keyword arguments as part of hparams.
Environment
CUDA:
GPU:
GeForce GTX 1050 Ti
available: True
version: 10.1
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.5
tensorboard: 2.2.1
tqdm: 4.45.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.2
version: 32-Ubuntu SMP Wed Apr 22 17:40:10 UTC 2020
Additional context
Possible Fix
I believe that a possible fix is to change the from_argparse_args method
from
@classmethod
def from_argparse_args(cls, args, **kwargs):
params = vars(args)
params.update(**kwargs)
return cls(**params)
to
@classmethod
def from_argparse_args(cls, args, **kwargs):
params = vars(args)
return cls(**params, **kwargs)
This ensures that the **kwargs are not added to the hparams of the model and the model gets saved successfully, but I'm not sure of the impact of this change |
Progress bar \ log dict items added to outputs in training_epoch_end | [
"bug",
"help wanted"
] | 🐛 Bug
When training_step returns something along these lines:
return { 'loss':x,
'some_value_for_epoch_end': y,
'progress_bar':{'v1':z.mean()}}
you get 'v1' as part of the outputs in epoch end. This is unexpected imo.
Also in case it is:
return { 'loss':x,
'some_value_for_epoch_end': y,
'progress_bar':{'some_value_for_epoch_end': z.mean()}}
you get 'some_value_for_epoch_end' = z.mean(), as the values get overwritten.
It originates from trainer/logging.py, lines 172 and 173, where the values are overwritten using progress_bar_outputs + log_outputs. |
Unnecessary apex information always pops up | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
import pytorch_lightning as pl
trainer = pl.Trainer(
gpus=2,
progress_bar_refresh_rate=0,
precision=16,
num_sanity_val_steps=0,
profiler=True,
gradient_clip_val=hparams.gradient_clip_val,
)
output:
Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.
Defaults for this optimization level are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
I tried manually setting _amp_state = 0 to disable this print, but it did not work:
import apex
apex.amp._amp_state = 0
Expected behavior
Provide a parameter to control this apex print behavior.
Environment
CUDA:
GPU:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
available: True
version: 10.1
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.5
tensorboard: 2.2.1
tqdm: 4.42.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.6
version: #38~18.04.1-Ubuntu SMP Tue Mar 31 04:17:56 UTC 2020 |
Is `hparams` really a good practice? | [
"help wanted",
"question",
"discussion"
] | ❓ Questions and Help
I am a bit confused about good practices in PyTorch Lightning, with hparams in mind in particular. I will share some of my thoughts on this topic. The docs say:
# YES
model = LitModel(hparams)
trainer = Trainer.from_argparse_args(hparams, early_stopping_callback=...)
# NO
# model = LitModel(learning_rate=hparams.learning_rate, ...)
# trainer = Trainer(gpus=hparams.gpus, ...)
Does it allow parametrizing the EarlyStopping callback, e.g. its patience? The only way I can think of uses the discouraged pattern:
def main(hparams):
model = LitModel(hparams)
early_stop_callback = EarlyStopping(
monitor='val_loss',
mode='min',
patience=hparams.patience,
)
trainer = Trainer.from_argparse_args(
hparams,
early_stop_callback=early_stop_callback,
)
trainer.fit(model)
if __name__ == '__main__':
parser = ArgumentParser()
parser.add_argument('--patience', type=int, default=5)
parser = LitModel.add_model_specific_args(parser)
parser = Trainer.add_argparse_args(parser)
hparams = parser.parse_args()
main(hparams)
Please let me know how to do it in a nice way.
What happens if Trainer and LitModel both use the same argument name for different purposes? In the docs, they are combined into a parent parser.
parser = ArgumentParser()
parser = Trainer.add_argparse_args(parser)
# figure out which model to use
parser.add_argument('--model_name', type=str, default='gan', help='gan or mnist')
# THIS LINE IS KEY TO PULL THE MODEL NAME
temp_args, _ = parser.parse_known_args()
# let the model add what it wants
if temp_args.model_name == 'gan':
parser = GoodGAN.add_model_specific_args(parser)
elif temp_args.model_name == 'mnist':
parser = LitMNIST.add_model_specific_args(parser)
args = parser.parse_args()
Perhaps a good practice is to parse them separately?
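One way to sketch that separation (LitModel and its add_model_specific_args are the classes from this discussion), so each parser only consumes the arguments it knows about:
from argparse import ArgumentParser
from pytorch_lightning import Trainer

# Trainer arguments first; unrecognized flags are passed on to the model parser.
trainer_parser = Trainer.add_argparse_args(ArgumentParser(add_help=False))
trainer_args, remaining = trainer_parser.parse_known_args()

model_parser = LitModel.add_model_specific_args(ArgumentParser(add_help=False))
model_args, _ = model_parser.parse_known_args(remaining)

model = LitModel(model_args)
trainer = Trainer.from_argparse_args(trainer_args)

If both parsers declare the same flag, the first parse still consumes it, so genuinely clashing names would need to be renamed in any case.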
I don't think that using the add_model_specific_args static method and passing only hparams into __init__ is a good idea.
class LitModel(LightningModule):
def __init__(self, hparams):
super().__init__()
self.hparams = hparams
@staticmethod
def add_model_specific_args(parent_parser):
parser = ArgumentParser(parents=[parent_parser], add_help=False)
parser.add_argument(...)
...
return parser
a) If I properly assigned the __init__ arguments to self, the IDE would flag an Unresolved reference error before I even run the code. It has happened to me many times to have leftovers hanging around because of this hparams pattern.
b) Basically, add_model_specific_args requires me to visually track all occurrences of self.hparams in the code! If I properly assigned the __init__ arguments to self, I would just need to take a quick look at the __init__ section.
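A small illustration of the alternative being argued for here, with explicit attributes next to the stored hparams (the attribute names are made up):
from argparse import Namespace
from pytorch_lightning import LightningModule

class LitModel(LightningModule):
    def __init__(self, hparams: Namespace):
        super().__init__()
        self.hparams = hparams                    # kept for logging / checkpointing
        # explicit attributes the IDE can resolve and flag when they go missing
        self.learning_rate = hparams.learning_rate
        self.hidden_dim = hparams.hidden_dim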
I think I know the reasons why hparams was introduced (easy tracking in tensorboard or rapid research), but I believe that this concept should be reconsidered. |
Track model configuration in addition to hyperparamters | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
I would like to separate the current hparams (for example used to log models) into two groups:
Hyperparameters: those hparams which can be optimized without changing the model's meaning and which may change performance, for example the number of samples generated, the batch size, or the learning rate.
Configuration: those hparams which actually change the 'meaning' of the model and cannot be optimized in a useful, automatic manner. Possible examples are the noise variance for a denoising task or a random seed for reproducible noise generation.
Motivation
I'd like to see both types of hparams in my logging, in order to be able to tell which model ran with what configuration. I think separating actual hyperparameters from configuration also makes it easier to optimize hyperparameters automatically.
Pitch
I guess, the easiest way would be to duplicate (maybe slightly alter) the current behavior of the LightningModule.hparams field in an additional field (e.g. LightningModule.config). |
loading hparams as tags_csv results in Dict not Namespace being passed to __init__ | [
"bug",
"help wanted",
"won't fix"
] | 🐛 Bug
The docstring indicates:
""" However, if your checkpoint weights don't have the hyperparameters saved,
use this method to pass in a .csv file with the hparams you'd like to use.
These will be converted into a :class:~argparse.Namespace and passed into your
:class:LightningModule for use.
"""
This does not appear to be true.
""" class PurchaseEventsClassifier(pl.LightningModule):
def __init__(self, hparams: Namespace):
import pdb; pdb.set_trace()
embed_params = json.loads(hparams.embed_params) """
hparams is a dict here, not a Namespace as expected.
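A workaround sketch until the loading behaviour matches the docstring: accept either form in __init__.
from argparse import Namespace
import pytorch_lightning as pl

class PurchaseEventsClassifier(pl.LightningModule):
    def __init__(self, hparams):
        super().__init__()
        # tags_csv loading currently hands over a dict, so normalize it here
        if isinstance(hparams, dict):
            hparams = Namespace(**hparams)
        self.hparams = hparams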
To Reproduce
Steps to reproduce the behavior:
Go to '...'
Run '....'
Scroll down to '....'
See error
Code sample
Expected behavior
hparams loaded from tags_csv should be a Namespace
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
PyTorch Version (e.g., 1.0):
OS (e.g., Linux):
How you installed PyTorch (conda, pip, source):
Build command you used (if compiling from source):
Python version:
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information:
Additional context |
Using `configargparse.ArgumentParser`? | [
"duplicate",
"question"
] | ❓ Questions and Help
What is your question?
Is it somehow possible to use configargparse.ArgumentParser in PyTorch Lightning? I tried the following code, but neither patience nor any of the Trainer's parameters are changed to the values from the YAML file. Do I really need to create a custom Trainer and override the add_argparse_args method?
Note that the following code works for the argparse.ArgumentParser.
Code
def main(hparams):
model = LitModel(hparams)
early_stop_callback = EarlyStopping(
monitor='val_loss',
mode='min',
patience=hparams.patience,
)
trainer = Trainer.from_argparse_args(
hparams,
early_stop_callback=early_stop_callback,
)
trainer.fit(model)
if __name__ == '__main__':
parser = configargparse.ArgumentParser(config_file_parser_class=configargparse.YAMLConfigFileParser)
parser.add_argument('-c', '--config_file', is_config_file=True)
parser.add_argument('--patience', type=int)
parser = LitModel.add_model_specific_args(parser)
parser = Trainer.add_argparse_args(parser)
hparams = parser.parse_args()
main(hparams)
What's your environment?
OS: Ubuntu 18.04
Packaging: pip
Version 0.7.5 |
How to use pytorch-lightning to run a GAN? | [
"question"
] | I want to implement a GAN with pytorch-lightning, but I have not found a demo. |
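A heavily condensed sketch of the usual two-optimizer pattern (the tiny generator/discriminator and the data side are placeholders; the repo's pl_examples folder may also contain a GAN template):
import torch
import torch.nn as nn
import pytorch_lightning as pl

class LitGAN(pl.LightningModule):
    def __init__(self, latent_dim=32, data_dim=784):
        super().__init__()
        self.latent_dim = latent_dim
        self.generator = nn.Sequential(nn.Linear(latent_dim, data_dim), nn.Tanh())
        self.discriminator = nn.Linear(data_dim, 1)

    def forward(self, z):
        return self.generator(z)

    def training_step(self, batch, batch_idx, optimizer_idx):
        real, _ = batch
        real = real.view(real.size(0), -1)
        z = torch.randn(real.size(0), self.latent_dim, device=real.device)
        bce = nn.functional.binary_cross_entropy_with_logits

        if optimizer_idx == 0:  # generator: try to fool the discriminator
            fake_logits = self.discriminator(self(z))
            g_loss = bce(fake_logits, torch.ones_like(fake_logits))
            return {'loss': g_loss, 'log': {'g_loss': g_loss}}

        if optimizer_idx == 1:  # discriminator: separate real from fake
            real_logits = self.discriminator(real)
            fake_logits = self.discriminator(self(z).detach())
            d_loss = 0.5 * (bce(real_logits, torch.ones_like(real_logits))
                            + bce(fake_logits, torch.zeros_like(fake_logits)))
            return {'loss': d_loss, 'log': {'d_loss': d_loss}}

    def configure_optimizers(self):
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=2e-4)
        # two optimizers -> Lightning passes optimizer_idx into training_step
        return [opt_g, opt_d], []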
What is the current state of per iteration (not per epoch) schedulers support? | [
"question"
] | ❓ Questions and Help
What is your question?
I am trying to use a scheduler that should call step() function on every iteration rather than every epoch (i.e., 1-cycle like scheduler). However, I am not sure what is the best way to implement it with the most recent version of pytorch-lightning.
I've seen some discussions and docs here. (Including the discussion about LR warm-up that proposes a possible solution). But I don't know if they are still relevant with the most recent version of the package. Also, they don't seem to solve exactly the same question.
What have you tried?
Currently, I am trying to do something similar to what the following snippet shows.
class Module(pl.LightningModule):
...
def configure_optimizers(self):
opt = create_optimizer(self)
sched = create_scheduler(total_steps=self.num_train_batches * self.num_epochs)
return {'optimizer': opt, 'lr_scheduler': sched}
def on_batch_end(self):
self.scheduler.step()
def on_epoch_end(self):
pass # do nothing
def optimizer_step(
self,
epoch: int,
batch_idx: int,
optimizer: Optimizer,
optimizer_idx: int,
second_order_closure: Optional[Callable] = None,
):
optimizer.step()
optimizer.zero_grad()
def training_step(self, batch, batch_no):
loss = self(batch)
return {'loss': loss}
So I am calling the step() "manually" at the end of every batch. But am I doing it right? Will the Trainer call my scheduler at the end of an epoch as well? (Not something that I would expect). What is the "right way" to implement an iteration-based scheduler?
My goal is to configure the pl.LightningModule behavior in such a way that the scheduler is called at the end of every training batch, and is not called during validation. Also, it shouldn't be called at the end of the epoch.
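Assuming the dict-based scheduler configuration for configure_optimizers is available in the installed version, the manual stepping can be replaced by declaring a per-step scheduler (create_optimizer and create_scheduler are the placeholders from the snippet above); it is then stepped after every optimizer step during training and left alone during validation and at epoch end.
def configure_optimizers(self):
    opt = create_optimizer(self)
    sched = create_scheduler(total_steps=self.num_train_batches * self.num_epochs)
    return [opt], [{
        'scheduler': sched,
        'interval': 'step',     # step the scheduler every training batch
        'frequency': 1,
    }]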
What's your environment?
OS: Ubuntu 18.04
Packaging: conda
Version: 0.7.3 |
Trainer.test's dataloader argument can't replace pre-defined dataloader | [
"bug",
"help wanted"
] | 🐛 Bug
The Trainer.test function supports a test_dataloaders argument. But if a test dataloader was already defined in the module, replacing it by passing an argument to Trainer.test doesn't work.
To Reproduce
# ... do some train stuff with the trainer
trainer.test(model, dataloader1) # run dataloader 1
trainer.test(model, dataloader2) # expect dataloader 2 for test, but dataloader 1 is called
Environment
Python 3.6
pytorch-lightning 0.7.5
Additional context
trainer.test calls run_evaluation, and overwriting Trainer.test_dataloaders is performed only when trainer.test_dataloaders is None:
# select dataloaders
if test_mode:
if self.test_dataloaders is None:
self.reset_test_dataloader(model)
Of course, if the test_dataloaders argument is given to Trainer.test, it does go through __attach_dataloaders, but that only overwrites the model's test_dataloader, so it isn't used in Trainer.run_evaluation:
def __attach_dataloaders(self, model, train_dataloader=None, val_dataloaders=None, test_dataloaders=None):
# when dataloader is passed via fit, patch the train_dataloader
# functions to overwrite with these implementations
if train_dataloader is not None:
model.train_dataloader = _PatchDataLoader(train_dataloader)
if val_dataloaders is not None:
model.val_dataloader = _PatchDataLoader(val_dataloaders)
if test_dataloaders is not None:
model.test_dataloader = _PatchDataLoader(test_dataloaders)
so Trainer.test code should be fixed from
self.testing = True
if test_dataloaders is not None:
if model:
self.__attach_dataloaders(model, test_dataloaders=test_dataloaders)
else:
self.__attach_dataloaders(self.model, test_dataloaders=test_dataloaders)
to
self.testing = True
if test_dataloaders is not None:
self.test_dataloaders = None
if model:
self.__attach_dataloaders(model, test_dataloaders=test_dataloaders)
else:
self.__attach_dataloaders(self.model, test_dataloaders=test_dataloaders) |
How to persist a pytorch lightning module that depends on external data? | [
"question",
"won't fix"
] | ❓ Questions and Help
What is your question?
Hi! We're using pytorch lightning to train language models and transformers from scratch. This includes training tokenizers and applying them text data, resulting in binarized data.
The way we've structured the process is to train a tokenizer, apply it to text data (coupling the binarized data and the tokenizer), and apply a language model on the binarized data.
Since the language model depends on the tokenizer (number of tokens, special tokens, et.c.) the pytorch lightning model needs a tokenizer/vocabulary as part of its hparams. This does not play very nicely with the way hparams and loading works: If we transfer the model from one computer to another, we would need to move a tokenizer to the exact same path on the other computer.
Generally i guess the problem boils down to this: If the inner pytorch modules of the pytorch lightning module depends on some kind of external data (e.g. a vocabulary, or a random sparsity graph), and you then wish to share the pytorch lightning module, we can't find an easy way of doing this.
on_load_checkpoint/on_save_checkpoint do not work, since they take effect after the model has been initialized, whereas we would like to persist data that the initialization logic itself depends on.
Is there an elegant way to do this in pytorch lightning? |
Wandb Group Argument | [
"feature",
"help wanted",
"logger"
] | 🚀 Feature
wandb currently supports a group argument to its init function that is not exposed by the PL API. I was hoping we could expose this argument so the user can pass it through to the wandb.init call.
Motivation
This would allow users to group their experiments on the wandb platform and keep them ordered together.
Pitch
Add an argument to the wandb logger, pass this into the wandb init call.
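The requested usage might look roughly like this; group is the argument this issue asks for, so it is not part of the current WandbLogger signature.
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

# 'group' would be forwarded to wandb.init so related runs are grouped in the UI
logger = WandbLogger(name='run-3', project='my-project', group='ablation-lr')
trainer = Trainer(logger=logger)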
Additional context
https://docs.wandb.com/library/init |
native amp does not work when rewriting the optimizer_step function | [
"bug",
"help wanted"
] | 🐛 Bug
If one rewrites optimizer_step like suggested in the docs (ie. https://pytorch-lightning.readthedocs.io/en/latest/optimizers.html#step-optimizers-at-arbitrary-intervals), the native AMP stops working. The issue happens due to the default optimizer_step performing a few native-amp-specific actions (see https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/core/lightning.py#L1176).
Please see this discussion for more detail.
https://pytorch-lightning.slack.com/archives/CRBLFHY79/p1588930218145400
Expected behavior
A naïve custom optimizer_step should work with native AMP.
I'm not sure how to really solve it. The simplest (but pretty bad in my opinion) option is to just write in the docs "If you want to use AMP and custom optimizer_step, add X, Y and Z to your code".
Additional context
The issue only happens on pytorch 1.6+ since that's where the native amp is implemented. |
`model.test()` can fail for `ddp` because `args` in `evaluation_forward` are malformed | [
"bug",
"help wanted",
"good first issue",
"priority: 0"
] | 🐛 Bug
model.test() can fail while training via dp because TrainerEvaluationLoopMixin.evaluation_forward doesn't handle an edge case.
To Reproduce
Attempt to model.test() any lightning model in dp mode (I believe it fails in any of the modes at
pytorch_lightning/trainer/evaluation_loop.py, line 420, commit 3a64260:
if self.use_ddp or self.use_dp or self.use_ddp2:
).
Note that the validation and training steps work well, but test fails.
The bottom of the stack trace isn't super elucidating but the crux of the matter is captured in
411 def evaluation_forward(self, model, batch, batch_idx, dataloader_idx, test_mode: bool = False):
412 # make dataloader_idx arg in validation_step optional
413 args = [batch, batch_idx]
414
415 if (test_mode and len(self.test_dataloaders) > 1) \
416 or (not test_mode and len(self.val_dataloaders) > 1):
417 args.append(dataloader_idx)
418
419 # handle DP, DDP forward
420 if self.use_ddp or self.use_dp or self.use_ddp2:
--> 421 output = model(*args)
422 return output
At line 421 the code that will run is output = model(*args[0][:-1]) but other things fail downstream of that hack. Note args[0] is the tuple of tensors and the last tensor is the target.
TL;DR: at this point (for test -- again val and train work perfectly) I believe that what we want is something similar to output = model.test_step(*args) instead (see later on in evaluation_forward, below the above trace).
However, i realized that the model, now a LightningDataParallel instance, no longer has the test_step that is defined in the original LightningModule, so my understanding of the system for making multi-GPU work is a limiting factor here.
This mock, I thought, would resolve the issue for me, but I then realized that the test_step method no longer existed per the above paragraph:
ORIG = pl.trainer.evaluation_loop.TrainerEvaluationLoopMixin.evaluation_forward
def _mock_evaluation_forward(self, model, batch, batch_idx, dataloader_idx, test_mode: bool = False):
if not test_mode or (not (self.use_ddp or self.use_dp or self.use_ddp2)):
return ORIG(self, model, batch, batch_idx, dataloader_idx, test_mode)
output = model.test_step(*args)
return output
from unittest import mock
@mock.patch('pytorch_lightning.trainer.evaluation_loop.TrainerEvaluationLoopMixin.evaluation_forward', _mock_evaluation_forward)
def train_my_model(): ...
Additional context
Thanks for the great library! I can't precisely determine why train and eval work and then test fails. One thing to note is that the forward method to my model takes several tensors, not just one, which is a possible factor. Everything works perfectly with dp turned off. |
[Model saving and loading] possibility to save additional content next to the checkpoint.ckpt file | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
I want to save the variables that I need to init my model in a config file in the same folder where the checkpoint is saved. Therefore, my feature request is to make the filepath variable of save_checkpoint(self, filepath) in trainer_io.py (line 247) available in on_save_checkpoint(checkpoint). This way, I can save additional parameters in the same directory as the checkpoint. Overriding the classmethod load_from_checkpoint then allows me to load whatever I want again and add it to the args and kwargs before passing them to pl.LightningModule.load_from_checkpoint(checkpoint_path, map_location, tags_csv, *args, **kwargs).
Motivation
I want to use MyLightingModule.load_from_checkpoint(PATH) without adding the kwargs manually but loading them automatically from the checkpoint path. However, I also do not want to save the kwargs of my Model in the checkpoint itself, since then the checkpoint will be loaded two times, one time to get the kwargs and another time to actually load the model.
Pitch
Add the filepath of def save_checkpoint(self, filepath) as an argument to def dump_checkpoint, and then either add the filepath as an entry of checkpoint or pass it along to model.on_save_checkpoint(checkpoint, filepath).
Like this everybody is able to save anything else next to the checkpoint location.
Alternatives
Not really a nice one. I could of course define my own trainer and overwrite the method, but I think knowing the path in on_save_checkpoint has maybe many other advantages.
Additional context
Here, is an example of how my implementation would look like in the lightning module:
def on_save_checkpoint(self, checkpoint):
with open(os.path.splitext(f"{checkpoint['filepath']}.yaml")[0] + '.yaml', 'w') as outfile:
yaml.dump(self.model_kwargs, outfile, default_flow_style=False)
@classmethod
def load_from_checkpoint(cls,
checkpoint_path: str,
map_location: Optional[Union[Dict[str, str], str, torch.device, int, Callable]] = None,
tags_csv: Optional[str] = None,
*args, **kwargs
):
kwargs = yaml.load(open(os.path.splitext(f"{checkpoint_path}.yaml")[0] + '.yaml'), Loader=yaml.FullLoader)
return pl.LightningModule.load_from_checkpoint(checkpoint_path, map_location, tags_csv, *args, **kwargs) |
How can I retrieve metrics from training and testing | [
"question",
"won't fix"
] | ❓ Questions and Help
How can I extract the metrics returned from training_step, training_epoch_end, validation_step, validation_epoch_end, test_step, test_epoch_end after a train() or a test() run?
I'd like to return some dictionary (e.g. {'loss': ..., 'log': ..., 'param a: 'a', 'param b': 'b', 'param c': {...}}) from e.g. test_epoch_end and retrieve this after calling trainer.test(net). It seems like the data is available some where, as the Weights and Biases logger prints at least the training metrics before uploading. Where can I find those metrics from training and testing? |
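One possible way to get at these values after the fact, sketched under the assumption that the trainer's callback_metrics dict keeps the latest logged values (it is what callbacks such as EarlyStopping read); net and trainer are the objects from the question.
trainer.fit(net)
print(trainer.callback_metrics)     # e.g. contains 'loss', 'val_loss', ...

trainer.test(net)
print(trainer.callback_metrics)     # now also contains the test_* entries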
Checkpoint: OverflowError: cannot serialize a string larger than 4GiB | [
"help wanted",
"question",
"won't fix"
] | 🐛 Bug
Model checkpointing fails with the error: OverflowError: cannot serialize a string larger than 4GiB and halts training
PyTorch Version (e.g., 1.0): 1.5
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): conda
Build command you used (if compiling from source):
Python version: 3.7
CUDA/cuDNN version: 10.1
GPU models and configuration: GTX 2080Ti
Any other relevant information:
Additional context
This is a known Python issue pytorch/pytorch#12085
A possible fix is to set the pickle protocol correctly. |
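For manual saves, the "set the protocol" fix can be illustrated like this (model is a placeholder); whether the internal checkpointing path exposes the same knob is a separate question.
import pickle
import torch

# pickle protocol 4 removes the 4 GiB single-object limit of protocol 2
torch.save(model.state_dict(), 'big_model.ckpt', pickle_protocol=pickle.HIGHEST_PROTOCOL)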
Allow not to freeze other optimizers' parameters | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
When using multiple optimizers, provide a flag which can control whether we should freeze other parameters or not (which is a 'FIX' in here)
Motivation
When implementing BicycleGAN with lightning, I'm having great trouble because it requires something like this:
opt_e.zero_grad()
opt_g.zero_grad()
loss_eg.backward(retain_graph=True)
opt_e.step()
opt_g.step()
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
which means stepping two optimizers in one optimizer_step in Lightning. However, this is currently not possible because only one optimizer's parameters are left unfrozen.
Pitch
Let's say we have a flag called freeze_other_parameters, then:
for opt_idx, optimizer in enumerate(self.optimizers):
if freeze_other_parameters:
# make sure only the gradients of the current optimizer's paramaters are calculated
# in the training step to prevent dangling gradients in multiple-optimizer setup.
for param in self.get_model().parameters():
param.requires_grad = False
for group in optimizer.param_groups:
for param in group['params']:
param.requires_grad = True
Alternatives
A more elegant way might be to support stepping multiple optimizers in the same optimizer_step, but that would break backward compatibility.
Additional context
This is the original repo where I port from. |
Learning rate scheduler's epoch off by one when resuming from checkpoint | [
"bug",
"duplicate",
"help wanted"
] | 🐛 Bug
Currently lr_scheduler's state is updated after the checkpoint callback, so what is being saved here is last epoch's state.
Note: I think this has the same fix as #1464, but I'm posting it here because (1) I got rekt by this again, (2) in case it's not the same bug, and (3) #1464 is not fixed.
To Reproduce
Steps to reproduce the behavior:
Install using pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
import pytorch_lightning as pl
class CoolSystem(pl.LightningModule):
def __init__(self):
super(CoolSystem, self).__init__()
# not the best model...
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
# REQUIRED
x, y = batch
y_hat = self.forward(x)
return {'loss': F.cross_entropy(y_hat, y)}
def validation_step(self, batch, batch_nb):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
return {'val_loss': F.cross_entropy(y_hat, y)}
def validation_epoch_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
return {'val_loss': avg_loss}
def configure_optimizers(self):
# REQUIRED
# can return multiple optimizers and learning_rate schedulers
optimizer = torch.optim.Adam(self.parameters(), lr=0.02)
return [optimizer], [torch.optim.lr_scheduler.MultiStepLR(optimizer, [100], gamma=0.1)]
def train_dataloader(self):
# REQUIRED
return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
def val_dataloader(self):
# OPTIONAL
return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping
model = CoolSystem()
checkpoint_callback = ModelCheckpoint(
filepath='./model_ckpt/whatever_the_name_is_gonna_be_auto_chosen',
save_top_k=-1,
verbose=True,
monitor='val_loss',
mode='auto'
)
early_stopping = EarlyStopping(
monitor='val_loss',
patience=5,
verbose=True,
mode='auto'
)
class PrintingCallback(pl.Callback):
def on_epoch_start(self, trainer, pl_module):
print('Scheduler epoch %d' % trainer.lr_schedulers[0]['scheduler'].last_epoch)
print('Trainer epoch %d' % trainer.current_epoch)
print('-'*80)
trainer = Trainer(max_nb_epochs=1000, train_percent_check=0.1,
checkpoint_callback=checkpoint_callback,
early_stop_callback=early_stopping,
callbacks=[PrintingCallback()])
trainer.fit(model)
Let the model train until convergence. And then reload a saved model and see how it continues:
trainer = Trainer(max_nb_epochs=1000, train_percent_check=0.1,
checkpoint_callback=None,
resume_from_checkpoint = 'model_ckpt/_ckpt_epoch_2.ckpt',
early_stop_callback=early_stopping,
callbacks=[PrintingCallback()])
trainer.fit(model)
The PrintingCallback would print:
Scheduler epoch 2
Trainer epoch 3
--------------------------------------------------------------------------------
Scheduler epoch 3
Trainer epoch 4
--------------------------------------------------------------------------------
...
and so on.
Expected behavior
The PrintingCallback should print:
Scheduler epoch 3
Trainer epoch 3
--------------------------------------------------------------------------------
Scheduler epoch 4
Trainer epoch 4
--------------------------------------------------------------------------------
...
Environment
This was run on Google Colab.
https://colab.research.google.com/drive/1pkCSMaApyjH40jwrdl4aQLVYjnGP3JzD?usp=sharing
Additional context
Related to #1463 and #1464. |
Tensorboard log_hyperparams(params, metrics) seems not to have effect | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
Calling self.logger.log_hyperparams(hparams_dicts, metrics_dicts) in test_epoch_end doesn't have the desired effect. It should show the entries in the Hparams section with hparams and metrics specified but it shows nothing instead.
Looking at the code it seems to be caused by self.hparams and the pre-logging of the hyperparameters at the start of the training. In this way, calls to log_hyperparams won't be able to log the hyperparameters AND the metrics properly since they will clash with the previous log, hence, showing nothing.
To Reproduce
Try to log metrics with self.logger.log_hyperparams.
Code sample
def test_epoch_end(self, outputs):
avg_recall = np.concatenate([x['recall'] for x in outputs]).mean()
tensorboard_logs = {'test/avg_ndcg': avg_ndcg, "test/avg_recall": avg_recall}
## Log metrics
self.logger.log_hyperparams(vars(self.params),tensorboard_logs)
return tensorboard_logs
Expected behavior
Tensorboard should show me the section of Hparams with each entry composed by hyperparameters and metrics.
Environment
CUDA:
- GPU:
- available: False
- version: 10.2
Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.7.5
- tensorboard: 2.2.1
- tqdm: 4.46.0
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.2
- version: #34~18.04.1-Ubuntu SMP Fri Feb 28 13:42:26 UTC 2020
Additional context |
Test demo error | [
"bug",
"help wanted"
] | Traceback (most recent call last):
File "/home/zgz/github_code/pytorch-lightning/tests/base/models.py", line 15, in <module>
from test_tube import HyperOptArgumentParser
ModuleNotFoundError: No module named 'test_tube'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/zgz/github_code/pytorch-lightning/tests/callbacks/test_callbacks.py", line 2, in <module>
import tests.base.utils as tutils
File "/home/zgz/github_code/pytorch-lightning/tests/base/__init__.py", line 33, in <module>
from tests.base.models import TestModelBase, DictHparamsModel
File "/home/zgz/github_code/pytorch-lightning/tests/base/models.py", line 18, in <module>
raise ImportError('Missing test-tube package.')
ImportError: Missing test-tube package.
When I run the test code, it reports the error ImportError: Missing test-tube package. |
Need a grammar check in README.md file. | [
"bug",
"help wanted"
] | There are some grammar issues in the README.md file. Can I work on this issue? |
Add a basic logger which does not require ports | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
It would be great to have a basic logger that does not require opening ports.
Motivation
At this time, all loggers supported by Lightning require the use of ports. There are some environments where I don't have total control and cannot open ports. It would be great to implement a logger that dumps the metrics during training and lets me easily visualize their evolution, without the need to forward a port.
Pitch
As proposed by @justusschock in the slack channel, one could add a basic logger FileLogger that saves metrics into a file (like json) and plots them on a regular basis.
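A rough sketch of such a FileLogger (the exact abstract members of LightningLoggerBase differ between versions, and in multi-GPU runs the writes should additionally be restricted to rank zero):
import json
from pytorch_lightning.loggers import LightningLoggerBase

class FileLogger(LightningLoggerBase):
    def __init__(self, path='metrics.jsonl'):
        super().__init__()
        self._path = path

    @property
    def experiment(self):
        return None            # no external experiment object to return

    @property
    def name(self):
        return 'file'

    @property
    def version(self):
        return '0'

    def log_hyperparams(self, params):
        record = vars(params) if hasattr(params, '__dict__') else dict(params)
        with open(self._path, 'a') as f:
            f.write(json.dumps({'hparams': record}, default=str) + '\n')

    def log_metrics(self, metrics, step=None):
        with open(self._path, 'a') as f:
            f.write(json.dumps({'step': step, **{k: float(v) for k, v in metrics.items()}}) + '\n')

Plotting can then be done offline from the JSON-lines file with any tool.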
Thank you for the great package ! |
LightningModule with yacs CfgNode in __init__ will failed with some training settings | [
"question"
] | What is your question?
My custom LightningModule has a yacs.config.CfgNode parameter in its __init__:
from yacs.config import CfgNode
class MyModule(pl.LightningModule):
def __init__(self, cfg: CfgNode):
super(MyModule, self).__init__()
self.cfg = cfg
...
When I use auto_lr_find=True in the Trainer, the following error occurs:
model = MyModule(cfg)
trainer = Trainer(gpus=1, auto_lr_find=True, ...)
trainer.fit(model)
File "tools/train.py", line 95, in <module>
trainer.fit(model)
File ".../lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 792, in fit
self._run_lr_finder_internally(model)
File ".../lib/python3.7/site-packages/pytorch_lightning/trainer/lr_finder.py", line 31, in _run_lr_finder_internally
lr_finder = self.lr_find(model)
File ".../lib/python3.7/site-packages/pytorch_lightning/trainer/lr_finder.py", line 169, in lr_find
self.restore(str(save_path), on_gpu=self.on_gpu)
File ".../lib/python3.7/site-packages/pytorch_lightning/trainer/training_io.py", line 288, in restore
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
File ".../lib/python3.7/site-packages/torch/serialization.py", line 529, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File ".../lib/python3.7/site-packages/torch/serialization.py", line 702, in _legacy_load
result = unpickler.load()
TypeError: __init__() takes 1 positional argument but 2 were given
Similar errors also happened when I use multi-gpu training:
trainer = Trainer(gpus=4, distributed_backend='ddp', ...)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
TypeError: __init__() takes 1 positional argument but 2 were given
Training with one gpu and not using the above settings can run normally.
Another weird behavior is that when check_val_every_n_epoch > 1, the trainer does not act properly: sometimes no validation is run and no checkpoint is saved. But setting check_val_every_n_epoch=1 costs too much time.
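One workaround sketch (my assumption, not a confirmed fix from this thread): convert the CfgNode into plain built-in types before storing it on the module, so that checkpoint restoring and process spawning only need to re-create primitives:
from yacs.config import CfgNode

def cfg_to_dict(cfg: CfgNode) -> dict:
    # CfgNode is a dict subclass, so recursively copy it into plain nested dicts
    return {k: cfg_to_dict(v) if isinstance(v, CfgNode) else v for k, v in cfg.items()}
The module could then keep self.cfg = cfg_to_dict(cfg) (a hypothetical change) instead of the CfgNode itself.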
What's your environment?
OS: [Linux]
Packaging [pip]
Version [pytorch=1.4, pytorch-lightning=0.7.5] |
How to save the model after certain steps instead of epoch? | [
"question",
"won't fix"
] | ❓ Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
I am trying to train a NN model on very large tabular data (about half a billion), and I am wondering if I can save the model every certain number of steps (a million, for example) within an epoch instead of every epoch, because a full epoch takes too much time. I don't know if this is possible in the PyTorch Lightning framework.
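One possible approach is a small callback that saves a checkpoint every N global steps; this is only a sketch, assuming the 0.7.x Callback hooks, trainer.global_step and trainer.save_checkpoint, not an official feature:
import os
from pytorch_lightning.callbacks import Callback

class PeriodicCheckpoint(Callback):
    """Save a checkpoint every `every_n_steps` training batches."""

    def __init__(self, every_n_steps: int, dirpath: str = "checkpoints"):
        self.every_n_steps = every_n_steps
        self.dirpath = dirpath

    def on_batch_end(self, trainer, pl_module):
        step = trainer.global_step
        if step > 0 and step % self.every_n_steps == 0:
            os.makedirs(self.dirpath, exist_ok=True)
            trainer.save_checkpoint(os.path.join(self.dirpath, f"step_{step}.ckpt"))
Wiring it up would presumably be Trainer(callbacks=[PeriodicCheckpoint(every_n_steps=1_000_000)]).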
Code
What have you tried?
What's your environment?
OS: linux
Packaging conda
Version 0.7.5 |
Use .comet.config file for CometLogger | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
When creating a CometML experiment normally, the API key will be read from the file ~/.comet.config or from an environment variable if it isn't passed in directly. It would be nice if the CometLogger supported these uses as well.
Motivation
Putting the API key in code is certainly a bad practice, and it's a pain to have to export it as an environment variable and then get its value in Python or else read it from the file manually. Adding this feature makes things more seamless compared to how people use CometML when not using PyTorch Lightning.
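For reference, the current workaround looks roughly like this (a sketch; the keyword arguments are my assumption about the 0.7.x CometLogger):
import os
from pytorch_lightning.loggers import CometLogger

# fetch the key yourself instead of relying on ~/.comet.config being picked up
api_key = os.environ.get("COMET_API_KEY")
comet_logger = CometLogger(api_key=api_key, project_name="my-project")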
Additional context
I have a patch written for this already; it only changes a few lines of code. From the template message when I went to create a PR, though, it seemed I should create an issue first. Let me know if you have thoughts about this.
(somewhat related - the REST API key is also deprecated; the normal API key should be used instead now. I didn't change that code, though, because I'm not sure if older versions of Comet would have any issues with that change.) |
add torchelastic docs | [
"docs"
] | |
Set precision=16 (using apex) would cause early stopping break | [
"bug",
"help wanted"
] | 🐛 Bug
The current early stopping callback initializes its best value by checking whether the function monitor_op is equal to torch.lt.
self.best = torch_inf if self.monitor_op == torch.lt else -torch_inf
pytorch-lightning/pytorch_lightning/callbacks/early_stopping.py
Line 110
in
12138ce
self.best = torch_inf if self.monitor_op == torch.lt else -torch_inf
However, when initializing with apex, it seems that torch.lt changes, so this comparison is always false and self.best is initialized to -inf instead of +inf.
To Reproduce
import torch
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
import apex.amp as amp
es = EarlyStopping()
es.monitor_op == torch.lt
Out[6]: True
model = torch.nn.Linear(5, 5).to('cuda')
optimizers = torch.optim.Adam(model.parameters(), lr=1e-3)
amp.initialize(model, optimizers)
es.monitor_op == torch.lt
Out[22]: False
And this bug leads to the initialization of self.best to be -inf instead of inf
Expected behavior
self.best should be initialized to inf instead of -inf.
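A possible fix sketch (my suggestion, not a patch from this thread): derive the starting value from the mode string rather than comparing the stored op against torch.lt, which apex can invalidate by patching torch:
import torch

torch_inf = torch.tensor(float("inf"))

def initial_best(mode: str) -> torch.Tensor:
    # 'min' means lower is better, so start from +inf; 'max' starts from -inf
    return torch_inf if mode == "min" else -torch_inf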
Environment
CUDA:
- GPU:
- TITAN Xp
- Quadro P400
- available: True
- version: 10.1
Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.4.0
- pytorch-lightning: 0.7.5
- tensorboard: 2.1.1
- tqdm: 4.43.0
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.6.10
- version: #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019
Additional context
I bumped into this bug after installing from the master branch a couple of days ago. I would guess the old version is fine, but I did not test it. |
Automatic batch-size scaling is missing properties | [
"bug",
"help wanted"
] | File "envs/demo2/lib/python3.7/site-packages/pytorch_lightning/trainer/training_tricks.py", line 267, in _run_power_scaling
trainer.fit(model)
File "envs/demo2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 839, in fit
self.single_gpu_train(model)
File "/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 499, in single_gpu_train
self.run_pretrain_routine(model)
File "pytorch_lightning/trainer/trainer.py", line 981, in run_pretrain_routine
False)
File "evaluation_loop.py", line 326, in _evaluate
eval_results = model.validation_epoch_end(outputs)
File "vae.py", line 83, in validation_epoch_end
self.logger.experiment.add_image('images', grid, 0)
AttributeError: 'NoneType' object has no attribute 'experiment'
@SkafteNicki
Looks like loggers are gone? |
Allow boolean flags to work without passing True | [
"bug",
"help wanted",
"priority: 0"
] | We tried to fix this but it's still broken
This fails when adding args to argparse automatically...
--auto_lr_find
Instead we have to do:
--auto_lr_find True
which is not great |
DDP breaks LR finder | [
"bug",
"help wanted"
] | 🐛 Bug
DDP breaks LR finder
To Reproduce
finder = trainer.lr_find(model)
print(finder.suggestion())
Traceback (most recent call last):
File "./training.py", line 107, in <module>
main(hparam_trial)
File "./training.py", line 97, in main
finder = trainer.lr_find(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/lr_finder.py", line 153, in lr_find
self.fit(model, train_dataloader=train_dataloader)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 751, in fit
mp.spawn(self.ddp_train, nprocs=self.num_processes, args=(model,))
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 162, in spawn
process.start()
File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/opt/conda/lib/python3.6/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/opt/conda/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/opt/conda/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/opt/conda/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/opt/conda/lib/python3.6/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object '_LRFinder._get_new_optimizer.<locals>.configure_optimizers'
At first I thought it was because configure_optimizers returns [opt], [sched], but returning just opt still causes the error. Training works correctly with the same code. |
Issue with Colab TPU: module 'torch_xla.core.xla_model' has no attribute 'rendezvous' | [
"help wanted",
"question"
] | Whenever I set up my Colab runtime with the code provided in the PyTorch-Lightning Documentation, Release 0.7.6rc1, on page 190 (TPU support), I get the following exception:
INFO:lightning:training on 8 TPU cores
INFO:lightning:INIT TPU local core: 0, global rank: 0
INFO:lightning:INIT TPU local core: 2, global rank: 2
Exception in device=TPU:2: module 'torch_xla.core.xla_model' has no attribute 'rendezvous'
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
INFO:lightning:INIT TPU local core: 3, global rank: 3
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 523, in tpu_train
self.run_pretrain_routine(model)
INFO:lightning:INIT TPU local core: 4, global rank: 4
Exception in device=TPU:3: module 'torch_xla.core.xla_model' has no attribute 'rendezvous'
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 853, in run_pretrain_routine
torch_xla.core.xla_model.rendezvous("pl.Trainer.run_pretrain_routine")
AttributeError: module 'torch_xla.core.xla_model' has no attribute 'rendezvous'
Exception in device=TPU:4: module 'torch_xla.core.xla_model' has no attribute 'rendezvous'
INFO:lightning:INIT TPU local core: 5, global rank: 5
Traceback (most recent call last):
Exception in device=TPU:5: module 'torch_xla.core.xla_model' has no attribute 'rendezvous'
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 523, in tpu_train
self.run_pretrain_routine(model)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 853, in run_pretrain_routine
torch_xla.core.xla_model.rendezvous("pl.Trainer.run_pretrain_routine")
AttributeError: module 'torch_xla.core.xla_model' has no attribute 'rendezvous'
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 523, in tpu_train
self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 853, in run_pretrain_routine
torch_xla.core.xla_model.rendezvous("pl.Trainer.run_pretrain_routine")
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
AttributeError: module 'torch_xla.core.xla_model' has no attribute 'rendezvous'
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 523, in tpu_train
self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 853, in run_pretrain_routine
torch_xla.core.xla_model.rendezvous("pl.Trainer.run_pretrain_routine")
INFO:lightning:INIT TPU local core: 6, global rank: 6
AttributeError: module 'torch_xla.core.xla_model' has no attribute 'rendezvous'
Exception in device=TPU:6: module 'torch_xla.core.xla_model' has no attribute 'rendezvous'
However, using the minimal code below provided in the MNIST on TPU notebook resolves the error.
VERSION = "20200325" #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
PyTorch Version (e.g., 1.0): 1.4.0a0+c272758
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Any other relevant information: Colab TPU runtime |
Summing multiple losses with single machine ddp | [
"bug"
] | Hi,
I'm summing together multiple different losses using ddp on a single machine, 2 gpus.
I've been struggling to reduce my loss to zero as a sanity check on a subset of my images.
Is there something I should be calling to synchronise loss across gpus?
I've done this with MNIST no worries.
My model output is a dictionary with 8 components and I'm calling F.nll_loss on each of them before summing together. (One training example consists of 4 images and each example can have zero, 1 or 2 classes)
Code
Both my training and validation steps are like:
x, y = batch
out = self.forward(x)
loss1 = F.nll_loss(out['CC'][:,0], y['L-CC']['be'])
loss2 = F.nll_loss(out['CC'][:,1], y['R-CC']['ben'])
loss3 = F.nll_loss(out['CC'][:,2], y['L-CC']['ca'])
loss4 = F.nll_loss(out['CC'][:,3], y['R-CC']['ca'])
loss5 = F.nll_loss(out['MLO'][:,0], y['L-MLO']['ben'])
loss6 = F.nll_loss(out['MLO'][:,1], y['R-MLO']['ben'])
loss7 = F.nll_loss(out['MLO'][:,2], y['L-MLO']['ca'])
loss8 = F.nll_loss(out['MLO'][:,3], y['R-MLO']['ca'])
lossCa = loss3 + loss4 + loss7 + loss8
lossb = loss1 + loss2 + loss5 + loss6
train_loss = lossCa + lossb
What have you tried?
I've tried each of them following: (before sum)
losses = [lossLCCb, lossRCCb, lossLCCca, lossRCCca, lossLMLOb, lossRMLOb, lossLMLOca, lossRMLOca]
for loss in losses:
loss = dist.all_reduce(loss)
loss /= dist.get_world_size()
and after sum
dist.all_reduce(train_loss)
train_loss /= dist.get_world_size()
Neither makes any difference.
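For reference, an averaging helper that avoids two pitfalls in the snippet above (all_reduce works in place and returns None, and reassigning the loop variable does not modify the list) would look roughly like this; it is a sketch, not the resolution of this issue:
import torch.distributed as dist

def average_across_gpus(t):
    # all_reduce modifies its argument in place and returns None,
    # so clone first, reduce the clone, then divide by the world size
    rt = t.clone()
    dist.all_reduce(rt, op=dist.ReduceOp.SUM)
    return rt / dist.get_world_size()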
What's your environment?
OS: ubuntu 1804
Packaging - pip
torch 1.5.0
torchvision 0.6.0
PL Version - happens with both 0.7.1 and 0.7.2
Any tips / thoughts much appreciated. Cheers. |
Question about custom backward | [
"question",
"won't fix"
] | ❓ Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
I want to write my own .backward. In the method's signature there are 4 arguments: self, use_amp, loss and optimizer:
def backward(self, use_amp, loss, optimizer)
But to do my backward I need additional tensors from training_step besides the loss tensor. What is the safe and proper way to do it? It seems there is no functionality to just add **kwargs or *args to backward and return something additional from training_step. |
Can not use Trainer.test() if train and val dataloaders are not defined | [
"bug",
"help wanted"
] | 🐛 Bug
When the model defines neither train_dataloader nor val_dataloader, we cannot use trainer.test(model, test_dataloaders=test_dl).
The configuration checks fail with a MisconfigurationException.
Code sample
model = ... # a model with no `train_dataloader`, `val_dataloader` defined
test_dl = ... # a dataloader
trainer = pl.Trainer()
trainer.test(model, test_dataloaders=test_dl)
Expected behavior
We expect the testing loop to execute. |
distributed training crashes with dp (list comprehension issue from torch?) | [
"bug",
"help wanted",
"won't fix"
] | 🐛 Bug
I ran a distributed GPU template and get an error with data parallel and scatter_gather from torch nn parallel in particular.
To Reproduce
Steps to reproduce the behavior:
install packages
git clone from master
run basic example gpu job with distributed
Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):
File "gpu_template.py", line 80, in <module>
main(hyperparams)
File "gpu_template.py", line 41, in main
trainer.fit(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 853, in fit
self.dp_train(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 578, in dp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1001, in run_pretrain_routine
False)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 277, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 424, in evaluation_forward
output = model(*args)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/overrides/data_parallel.py", line 66, in forward
return self.gather(outputs, self.output_device)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather
return gather(outputs, output_device, dim=self.dim)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
res = gather_map(outputs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_map
for k in out))
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in <genexpr>
for k in out))
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
TypeError: zip argument #1 must support iteration
Code sample
run python3 gpu_template.py --gpus 2 --distributed_backend dp
Expected behavior
should run distributed demo job without errors
Environment
CUDA:
- GPU:
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- available: True
- version: 10.2
Packages:
- numpy: 1.18.4
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.7.6
- tensorboard: 2.2.1
- tqdm: 4.46.0
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.6
- version: #201812030624 SMP Mon Dec 3 11:25:55 UTC 2018
Additional context
python3 gpu_template.py --gpus 2 --distributed_backend ddp works |
Replace val_loss with monitor_metric | [
"feature",
"help wanted",
"won't fix"
] | Validation epoch end has a special key called 'val_loss' which enables early stopping and checkpoint.
However, a better name is likely monitor_metric
def validation_epoch_end(self, outputs):
return {'monitor_metric': whatever_thing_i_want}
This makes it clear to the user that they can pass in anything (epoch, loss, accuracy, bleu, etc).
Obviously this change needs to be backward compatible |
load_from_checkpoint(): hparam_overrides only works with hparams_file | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
Using the hparam_overrides argument for load_from_checkpoint() without the hparams_file argument gives an UnboundLocalError.
To Reproduce
Code sample
Do:
MyLightningModule.load_from_checkpoint(ckpt_path, hparam_overrides={'param': 0})
and the result will be UnboundLocalError: local variable 'hparams' referenced before assignment, but the following works:
MyLightningModule.load_from_checkpoint(ckpt_path, hparams_file=hp_path, hparam_overrides={'param': 0})
Expected behavior
Overriding should be possible even if the hparams are loaded from the checkpoint file rather than the separate hparams file.
Environment
PyTorch Version (e.g., 1.0): 1.5.0
OS (e.g., Linux): Ubuntu 16.04 and 18.04
How you installed PyTorch (conda, pip, source): pip
Python version: 3.8.2
CUDA/cuDNN version: 10.1.243/7.6.5.32
GPU models and configuration: GTX1070
Additional context
This is a super simple fix---I thought about just submitting a PR directly but figured I should check that this behaviour is indeed erroneous first. |
Lightweight Hyperparameter Datastructure | [
"feature",
"help wanted"
] | 🚀 Feature
A simple and flexible way to store hyperparameters in a dict/Namespace-like object.
Motivation
Pitch
An object that behaves like this:
# just like Namespace
hparams = Hyperparameters(x=1, y=2)
# or from a dict
hparams = Hyperparameters({"x": 1, "y": 2})
# it could support nesting
hparams = Hyperparameters({"x": 1, "y": {"a": 3}})
# Namespace-like look up
x = hparams.x
a = hparams.y.a
# or like a dict
x = hparams["x"]
a = hparams["y"]["a"]
# we will have to check for invalid keys
hparams["batch_size"] # ok
hparams["batch-size"] # error
hparams["batch size"] # error
Optional features:
# could support reading from files
# useful for checkpoint loading
hparams = Hyperparameters.from_yaml("file.yml")
# could support flattening/sanitizing nested structure
# useful for logging to TensorBoard
# or to make it pickleable
clean = hparams.flatten().sanitize()
# note that these internal methods will prevent these keys:
hparams.flatten = True # Problem, because not hyperparameter!
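A rough sketch of the core behaviour (illustrative only, not the proposed implementation):
class HParams(dict):
    """dict with attribute access, nesting and key validation."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        for key, value in self.items():
            if not str(key).isidentifier():
                raise KeyError(f"invalid hyperparameter name: {key!r}")
            if isinstance(value, dict) and not isinstance(value, HParams):
                self[key] = HParams(value)

    def __getattr__(self, name):
        # only called when normal attribute lookup fails, so dict methods
        # (e.g. .items) still shadow same-named keys, illustrating the "Contra" point below
        try:
            return self[name]
        except KeyError as err:
            raise AttributeError(name) from err
Usage: hp = HParams({"x": 1, "y": {"a": 3}}) gives hp.y.a == hp["y"]["a"] == 3.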
Pro:
Behaves like dict and Namespace in one, so it will work out of the box with PL since internally we treat it as dict anyway (saving checkpoint).
Contra:
Any convenience methods/attributes we add to the class will collide with hyperparameter names. See the example above.
Need to spend extra effort making sure user does not pick bad dict keys
Considerations
"Hyperparameters" may be a too specific name for such a general datastructure.
Additional context
Discussed on slack and the idea has popped up in other issues as well.
Related to #1841, #1735 and would solve #1737 |
Training time estimation when max_epochs is given | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Time estimation of training until max_epochs is reached.
Motivation
Right now I don't see how long I am training for already and how long it is going to take to train e.g. 500 epochs.
I usually have a tqdm progress bar over the number of epochs to get an estimate of how long my training will run at most.
Pitch
If max_epochs is given, create a tqdm progress bar (probably with leave=True).
Alternatives
Keep it like it is and make rough estimations per hand.
Additional context |
prepare_data called multiple times per node for slurm and elastic training | [
"bug",
"help wanted"
] | 🐛 Bug
Slurm and elastic training create the training processes per node outside of the lightning context. This means that when the fit function calls prepare_data, the assumption that it's only being called on proc 0 is broken and it gets called for each process.
This is an issue for computational reasons (e.g. downloading a whole dataset) and for training stability if the data preparation process isn't deterministic.
See calling code here:
pytorch-lightning/pytorch_lightning/trainer/trainer.py
Line 825
in
7c7e50c
model.prepare_data()
To Reproduce
Steps to reproduce the behavior:
Add print statements to prepare_data
Train a lightning model with either slurm or elastic training
See that it's being called multiple times.
Expected behavior
Expected prepare_data to only be called once per node. |
Accumulate Metrics for K Batches | [
"question"
] | What is your question?
I would like to aggregate metrics for k minibatches before logging to Tensorboard. How can I accomplish this?
For example, I would like to average my loss and accuracy for 10 minibatches and report that value to TensorBoard, not just report the statistics on the 10th minibatch.
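One possible pattern is to buffer the per-batch values inside the LightningModule and only emit the averaged value in the 'log' dict every k batches. A sketch under the 0.7.x 'log'-dict convention (module and key names are illustrative):
from collections import deque

import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class RunningAverageModule(pl.LightningModule):
    def __init__(self, k: int = 10):
        super().__init__()
        self.layer = torch.nn.Linear(8, 2)
        self.k = k
        self._loss_buffer = deque(maxlen=k)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        self._loss_buffer.append(loss.detach())
        logs = {}
        if (batch_idx + 1) % self.k == 0:
            # report the mean over the last k minibatches instead of a single one
            logs["train_loss_avg_k"] = torch.stack(list(self._loss_buffer)).mean()
        return {"loss": loss, "log": logs}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)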
What have you tried?
I noticed there's a log_save_interval argument for the trainer, but this only logs the metrics computed on a single batch. This is not that useful for me because my batch size is very small, so I need to average my statistics across multiple minibatches.
There is a train_epoch_end method I can override, but it only gives me my outputs at the end of an epoch. I am looking for something like this where I have access to all my outputs from the previous k minibatches so I can aggregate them. |
lightning_mnist_tpu crashes | [
"bug",
"accelerator: tpu",
"working as intended"
] | 🐛 Bug
To Reproduce
Run https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3 with VERSION = "1.5"
There were other errors when trying to run with other versions from ["1.5", "20200325", "nightly"]; none of them seems to work properly, there's always some sort of bug that pops up.
At the one moment when I had finally got it to run the model, the second time I ran the cell it crashed. It could be that I had to free up memory, but I don't see why that shouldn't have been done automatically. |
hparam_overrides not working | [
"bug",
"help wanted"
] | 🐛 Bug
Error when using hparam_overrides which was recently introduced by: #1797. This is the log:
Traceback (most recent call last):
File "main.py", line 211, in <module>
model = RelationEmbeddingModelLit.load_from_checkpoint(
File "/home/ubuntu/anaconda3/envs/py38/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1601, in load_from_checkpoint
update_hparams(hparams, hparam_overrides)
UnboundLocalError: local variable 'hparams' referenced before assignment
To Reproduce
Steps to reproduce the behavior:
Load a checkpoint using the hparam_overrides argument:
MyPLModel.load_from_checkpoint(checkpoint_path, hparam_overrides={key: new_val})
Expected behavior
No bug.
Environment
CUDA:
GPU:
Tesla V100-SXM2-16GB
available: True
version: 10.1
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.7-dev
tensorboard: 2.2.1
tqdm: 4.43.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.1
version: #117-Ubuntu SMP Wed Apr 8 09:52:02 UTC 2020 |
ddp_cpu crashing on SLURM cluster because of save_spawn_weights() | [
"bug",
"help wanted"
] | 🐛 Bug
I'm seeing a seemingly similar problem as issues #1335 and #1637 on current master when using ddp_cpu on my university's SLURM cluster. It's failing at a certain epoch (not the same for every job, but in the same range) for all jobs in a job array, on all nodes. But it's not happening when load_spawn_weights() is being invoked, but when save_spawn_weights() is.
Error
slurmstepd-breitach: error: Unable to create TMPDIR [/tmp/user/30335]: Permission denied
slurmstepd-breitach: error: Setting TMPDIR to /tmp
psutil is not installed. You will not be able to abort this experiment from the UI.
psutil is not installed. Hardware metrics will not be collected.
NeptuneLogger will work in online mode
GPU available: True, used: False
No environment variable for node rank defined. Set as 0.
MASTER_ADDR environment variable is not defined. Set as localhost
initializing proc_rank 0 world 1
Set SLURM handle signals.
| Name | Type | Params
-----------------------------------------------------
0 | model | RecurrentModel | 5
1 | model.recurrent_model | RNN | 4
2 | model.fc | Linear | 1
/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:23: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
Traceback (most recent call last):
File "experiment_core.py", line 40, in <module>
main(hyperparams, parser)
File "experiment_core.py", line 25, in main
trainer.fit(model)
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 856, in fit
mp.spawn(self.ddp_train, nprocs=self.num_processes, args=(model,))
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 119, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 391, in ddp_train
self.save_spawn_weights(model)
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 401, in save_spawn_weights
self.save_checkpoint(path)
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/pytorch_lightning/trainer/training_io.py", line 265, in save_checkpoint
self._atomic_save(checkpoint, filepath)
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/pytorch_lightning/trainer/training_io.py", line 256, in _atomic_save
torch.save(checkpoint, tmp_path)
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/torch/serialization.py", line 369, in save
with _open_file_like(f, 'wb') as opened_file:
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/torch/serialization.py", line 234, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/home/sch/schillmann/anaconda3/envs/pytorch-bac/lib/python3.7/site-packages/torch/serialization.py", line 215, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/home/sch/schillmann/Bachelor-Thesis/Code/Memory-Network-Memory-Horizons/src/logs/__temp_weight_ddp_end.ckpt.part'
Expected behavior
All jobs to finish on all nodes without problems. |
Add stochastic weight averaging | [
"feature",
"help wanted"
] | Looks like we need to keep two copies of the model. Let $m_r$ define the root model and $m_c$ the current model. Then at the end of each epoch $n$, we update the weights of $m_r$ using a weighted average:
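Presumably this refers to the standard SWA running average, i.e. after epoch $n$: $m_r \leftarrow \frac{n \cdot m_r + m_c}{n + 1}$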
Anyone interested in implementing?
maybe enable as a callback? not sure this needs a flag?
@PyTorchLightning/core-contributors |
Bug in Early stopping and `check_val_every_n_epoch` | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Install PyTorch and PyTorch-Lightning
Cloning master branch
Edit the example by removing these lines
In the same place, add
early_stop = pl.callbacks.EarlyStopping(monitor="val_loss", patience=3, mode="min")
trainer = pl.Trainer(
max_epochs=hparams.epochs,
gpus=hparams.gpus,
distributed_backend=hparams.distributed_backend,
precision=16 if hparams.use_16bit else 32,
check_val_every_n_epoch=2,
early_stop_callback=early_stop,
)
Re-run the example with python gpu_template.py --gpus 1
Error message:
RuntimeError: Early stopping conditioned on metric `val_loss` which is not available. Either add `val_loss` to the return of validation_epoch end or modify your EarlyStopping callback to use any of the following: `loss`, `train_loss`
Expected behavior
No error 🤣. Or at least the error still appears when removing the early_stop_callback parameters 🤣
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
PyTorch Version (e.g., 1.0): 1.4.0
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): conda
Build command you used (if compiling from source):
Python version: 3.6
CUDA/cuDNN version: 9.2
PyTorch-Lightning Version: 0.7.6
Additional context
I have also tried setting check_val_every_n_epoch to 1; no error appears. |
When using dp mode, only torch.Tensor can be used as the return value of the *_step function. | [
"help wanted",
"won't fix"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Go to ./pl_examples/basic_examples
Run python gpu_template.py --gpus 2 --distributed_backend dp
See error
Code sample
Error is below.
Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):
File "/root/workdir/pytorch-lightning/pl_examples/basic_examples/gpu_template.py", line 80, in <module>
main(hyperparams)
File "/root/workdir/pytorch-lightning/pl_examples/basic_examples/gpu_template.py", line 41, in main
trainer.fit(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 853, in fit
self.dp_train(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 578, in dp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1001, in run_pretrain_routine
False)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 277, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 424, in evaluation_forward
output = model(*args)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/overrides/data_parallel.py", line 66, in forward
return self.gather(outputs, self.output_device)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 165, in gather
return gather(outputs, output_device, dim=self.dim)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
res = gather_map(outputs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_map
for k in out))
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in <genexpr>
for k in out))
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
TypeError: zip argument #1 must support iteration
This error has something to do with this code (https://github.com/pytorch/pytorch/blob/f4f0dd470c7eb51511194a52e87f0ceec5d4e05e/torch/nn/parallel/scatter_gather.py#L47).
And this error can be fixed by doing the following in ./pl_examples/models/lightning_template.py
def validation_step(self, batch, batch_idx):
"""
Lightning calls this inside the validation loop with the data from the validation dataloader
passed in as `batch`.
"""
x, y = batch
y_hat = self(x)
val_loss = F.cross_entropy(y_hat, y)
labels_hat = torch.argmax(y_hat, dim=1)
n_correct_pred = torch.sum(y == labels_hat).item()
return {'val_loss': val_loss, "n_correct_pred": n_correct_pred, "n_pred": len(x)}
def validation_epoch_end(self, outputs):
"""
Called at the end of validation to aggregate outputs.
:param outputs: list of individual outputs of each validation step.
"""
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
val_acc = sum([x['n_correct_pred'] for x in outputs]) / sum(x['n_pred'] for x in outputs)
tensorboard_logs = {'val_loss': avg_loss, 'val_acc': val_acc}
return {'val_loss': avg_loss, 'log': tensorboard_logs}
to
def validation_step(self, batch, batch_idx):
"""
Lightning calls this inside the validation loop with the data from the validation dataloader
passed in as `batch`.
"""
x, y = batch
y_hat = self(x)
val_loss = F.cross_entropy(y_hat, y)
labels_hat = torch.argmax(y_hat, dim=1)
n_correct_pred = torch.sum(y == labels_hat)
return {
"val_loss": val_loss,
"n_correct_pred": n_correct_pred,
"n_pred": torch.tensor(len(x)).to(val_loss.device),
}
def validation_epoch_end(self, outputs):
"""
Called at the end of validation to aggregate outputs.
:param outputs: list of individual outputs of each validation step.
"""
avg_loss = (
torch.stack([x["val_loss"].detach().cpu() for x in outputs]).mean().item()
)
val_acc = np.sum(
[x["n_correct_pred"].detach().cpu().numpy() for x in outputs]
) / np.sum([x["n_pred"].detach().cpu().numpy() for x in outputs])
tensorboard_logs = {"val_loss": avg_loss, "val_acc": val_acc}
print({"val_loss": avg_loss, "log": tensorboard_logs})
return {"val_loss": avg_loss, "log": tensorboard_logs}
But this approach is not elegant ...
Expected behavior
Return values other than torch.Tensor are allowed.
Environment
PyTorch Version : 1.5
OS (e.g., Linux): Ubuntu 18.04
How you installed PyTorch conda
Python version: 3.7.7
CUDA/cuDNN version: 10.2
Additional context |
How to log performance metrics in the validation step/loop? | [
"question",
"won't fix"
] | ❓ How to log performance metrics in the validation step/loop?
Before asking:
search the issues. Done
search the docs. Done
What is your question?
I would like to log the performance metrics of each validation batch using TestTubeLogger (I am open to using alternative loggers) as well as the loss function of each training batch. I am struggling to understand the difference between adding 'log' to the output of my validation step, and including a call to the logger itself through self.logger.experiment.log({k, v}). I have seen both methods implemented in the tutorials and docs, but I do not understand the best practices nor which method to use to achieve my end goal.
Is the call to self.logger.experiment.log({k, v}) mutually exclusive with adding 'log' to the output of a training/validation step? If not, should they be used together and in what circumstance?
Referring to "What have I tried" (3) below: why does the validation_step method not register the entries in the 'log' dictionary outputs? Are the only validation outputs that are logged those from the validation_epoch_end method?
With all of this in mind, how can I log performance metrics such as accuracy, recall, precision, from the validation loop? What are the best practices around this?
What have I tried?
I trained the network without any calls to self.logger.experiment.log({k, v}) and without adding a 'log' key to the outputs of any training/validation step or epoch end methods. This resulted in the creation of a metrics.csv file that contained only a column for the created time and epoch number.
I added in only calls to the logger experiment such that my training and validation code looked like such:
def training_step(self, batch, batch_nb):
x, y = batch
x, y = x.float(), y.float().unsqueeze(1)
y_hat = self.forward(x)
loss = self.loss_function(y_hat, y)
self.logger.experiment.log({'loss': loss})
return {
'loss': loss
}
def training_epoch_end(self, outputs):
avg_val_loss = torch.stack([x['loss'] for x in outputs]).mean()
return {
'avg_train_loss': avg_val_loss
}
def validation_step(self, batch, batch_nb):
x, y = batch
x, y = x.float(), y.float().unsqueeze(1)
y_hat = self.forward(x)
loss = self.loss_function(y_hat, y)
metrics = self.performance_metrics(F.softmax(y_hat), y)
self.logger.experiment.log({'metrics': metrics['accuracy']})
return {
'val_loss': loss,
}
def validation_epoch_end(self, outputs):
avg_val_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
return {
'avg_val_loss': avg_val_loss,
}
When running trainer.fit(model) with this arrangement the first few lines of the metrics.csv look like:
I removed all calls to the logger experiment and included only 'log' dictionary keys in the outputs such that the code looked like such:
def training_step(self, batch, batch_nb):
x, y = batch
x, y = x.float(), y.float().unsqueeze(1)
y_hat = self.forward(x)
loss = self.loss_function(y_hat, y)
return {
'loss': loss,
'log': {'train_loss': loss, 'batch_nb': batch_nb}
}
def training_epoch_end(self, outputs):
avg_val_loss = torch.stack([x['loss'] for x in outputs]).mean()
return {
'avg_train_loss': avg_val_loss,
'log': {'avg_train_loss': avg_val_loss}
}
def validation_step(self, batch, batch_nb):
x, y = batch
x, y = x.float(), y.float().unsqueeze(1)
y_hat = self.forward(x)
loss = self.loss_function(y_hat, y)
metrics = self.performance_metrics(F.softmax(y_hat), y)
return {
'val_loss': loss,
'log': {'val_loss': loss, 'metrics': metrics}
}
def validation_epoch_end(self, outputs):
avg_val_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
return {
'avg_val_loss': avg_val_loss,
'log': {'avg_val_loss': avg_val_loss}
}
Upon running trainer.fit(model) the first few lines of my metrics.csv file look like:
What's your environment?
OS: Ubuntu 18.04
Packaging: Conda
Python: 3.8
Pytorch Lightning: 0.7.5
Additional content
This is how I configure the logger and trainer:
logger = pl.loggers.TestTubeLogger(
save_dir=config['logging_params']['save_dir'],
name=config['logging_params']['name'],
debug=False,
create_git_tag=False,
)
trainer = pl.Trainer(default_save_path=f'{logger.save_dir}',
min_nb_epochs=1,
logger=logger,
log_save_interval=5,
num_sanity_val_steps=5,
**config['trainer_params'])
I am using a binary classifier and compute my performance metrics with the follorwing method:
def performance_metrics(self, y_hat, y):
'''0 is typical, 1 is novel, also works with boolean arrays'''
counts = {'TP': 0, 'FP': 0, 'FN': 0, 'TN': 0}
for quadrant in zip(y_hat, y):
if quadrant == (1, 1): # TP
counts['TP'] += 1
elif quadrant == (0, 1): # FN
counts['FN'] += 1
elif quadrant == (1, 0): # FP
counts['FP'] += 1
elif quadrant == (0, 0): # TN
counts['TN'] += 1
precision = counts['TP'] / (counts['TP'] + counts['FP'])
recall = counts['TP'] / (counts['TP'] + counts['FN'])
accuracy = (counts['TP'] + counts['TN']) / len(y)
f1score = (2 * precision * recall) / (precision + recall)
return {
'counts': counts,
'precision': precision,
'recall': recall,
'accuracy': accuracy,
'f1score': f1score
} |
How to save raw predictions? | [
"question",
"won't fix"
] | I have it hard coded in my logging callback not to print them, but trainer.test still prints them.
I'm wondering what the canonical way is to save them with wandb_logger or another callback.
(It's an NLP summarization task).
Thx! |
Option to run only the sanity check without training in 'run_pretrain_routine' | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Motivation
For a pytest-based pytorch-lightning project, the training phase is not essential in some cases (like testing only inference), so adding an argument for running the sanity check without the training phase could be reasonable.
Pitch
run_pretrain_routine(self, model) => run_pretrain_routine(self, model, run_train=True) |
Trainer.parse_argparser does not yield sensible default for default_root_dir | [
"bug",
"help wanted"
] | 🐛 Bug
Using Trainer.parse_argparser returns True for default_root_dir, however, a string is expected.
To Reproduce
Steps to reproduce the behavior:
>>> from pytorch_lightning import Trainer
>>> from argparse import ArgumentParser, Namespace
>>> parser = ArgumentParser(add_help=False)
>>> parser = Trainer.add_argparse_args(parent_parser=parser)
>>> args = Trainer.parse_argparser(parser)
>>> args
Namespace(accumulate_grad_batches=1, amp_level='O1', auto_lr_find=False, auto_scale_batch_size=False, auto_select_gpus=False, benchmark=False, check_val_every_n_epoch=1, checkpoint_callback=True, default_root_dir=True, deterministic=False, distributed_backend=True, early_stop_callback=False, fast_dev_run=False, gpus=<function Trainer._arg_default at 0x1219efdd0>, gradient_clip_val=0, log_gpu_memory=True, log_save_interval=100, logger=True, max_epochs=1000, max_steps=True, min_epochs=1, min_steps=True, num_nodes=1, num_processes=1, num_sanity_val_steps=2, overfit_pct=0.0, precision=32, print_nan_grads=False, process_position=0, profiler=True, progress_bar_callback=True, progress_bar_refresh_rate=1, reload_dataloaders_every_epoch=False, replace_sampler_ddp=True, resume_from_checkpoint=True, row_log_interval=10, terminate_on_nan=False, test_percent_check=1.0, tpu_cores=True, track_grad_norm=-1, train_percent_check=1.0, truncated_bptt_steps=True, val_check_interval=1.0, val_percent_check=1.0, weights_save_path=True, weights_summary='full') |
Allow passing dict object as batch | [
"feature",
"help wanted"
] | 🚀 Feature
Allow the batch variable to be a (nested) dict object (in training_step(), etc.)
Motivation
I was porting my code from my own implementation of a Trainer to Pytorch Lightning, as I believe this is an asset for reproducibility and clarity.
However, I stumbled across a problem: it is not possible for the variable batch to be a (nested) dict object; it can only be a Tensor, tuple or list.
Pitch
The dataset structure that I am using is the following:
import torch
from torch.utils.data import Dataset
class MyDataset(Dataset):
def __init__(
self, n: int
):
self.n = n
def __len__(self):
return self.n
def __getitem__(self, item):
frame = {
"index": item,
"A": {
"image": torch.randn(3, 128, 128),
"R": torch.randn(3, 3),
"t": torch.randn(3, 1),
"K": torch.randn(3, 3),
},
"B": {
"image": torch.randn(3, 128, 128),
"R": torch.randn(3, 3),
"t": torch.randn(3, 1),
"K": torch.randn(3, 3),
},
}
return frame
The PyTorch Dataloader is able to make a batch of frames with the default collate_fn.
To move things onto the GPU, I am currently doing:
def move(self, d: dict, device) -> dict:
for k in d:
if isinstance(d[k], dict):
d[k] = self.move(d[k], device)  # pass the device through the recursion
elif isinstance(d[k], (Tensor)):
d[k] = d[k].to(device=device, non_blocking=True)
return d
There is probably a nicer way to do it, but it does the job.
Could you please add this feature, as passing a tuple is very impractical when the number of variables is high?
Thanks in advance,
Guillaume |
Images not being logged after using auto_lr_find | [
"help wanted",
"won't fix"
] | 🐛 Bug
When I use the auto LR finder, the images that I logged are no longer being displayed in Tensorboard. The logging works normally when setting it to False or when not specifying it.
To Reproduce
Steps to reproduce the behavior:
Set auto_lr_find = True for the trainer
Log image during test/validation step as self.logger.experiment.add_image(title, image, step)
Load the Tensorboard WebUI and see that the image is not being logged
Expected behavior
The image that I logged should be visible in Tensorboard, even when using auto LR finder.
Environment
* CUDA:
- GPU:
- GeForce GTX 1050 Ti with Max-Q Design
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.3
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.7.6
- tensorboard: 2.2.1
- tqdm: 4.46.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.2
- version: #41~1586790036~18.04~600aeb5-Ubuntu SMP Mon Apr 13 17:47:15 UTC
How you installed PyTorch (conda, pip, source): pip
Additional context
Not sure if relevant, but I couldn't figure out how to pass the params to the Trainer as I'm using from_argparse_args(hparams), where hparams contains the keys and values obtained from the command-line ArgumentParser. So I set hparams.auto_lr_finder = True before instantiating the Trainer. The LR finder works, so I assume there's no issue with that. |
Multi GPU training (ddp) gets very slow when using list of tensors in Dataset | [
"bug",
"help wanted"
] | 🐛 Bug
We are migrating to PyTorch Lightning from a custom implementation using Torchbearer before.
Our dataset stores a list of PyTorch tensors in memory, because the tensors are all of different dimensions. When migrating to PyTorch Lightning from a custom implementation, this seems to slow our training down in the multi GPU setup very significantly (training twice as long as before!).
To Reproduce
I built a repository which has just random data and a straightforward architecture to reproduce this both with (minimal.py) and without PyTorch Lightning (custom.py).
The repository and further details are located here: https://github.com/mpaepper/pytorch_lightning_multi_gpu_speed_analysis
When training with PyTorch Lightning for only 10 epochs, it takes 105 seconds when using a big PyTorch tensor (without a list), but it increases to 310 seconds (3x slower) when using a list of tensors.
The data size and model is exactly the same, it's just that one time it's stored differently.
When using my custom implementation, no such effect is observed (takes 97-98 seconds no matter if with or without lists).
To run the PyTorch Lightning version use:
python minimal.py --gpus 4 # Baseline
python minimal.py --gpus 4 --use_list # Extremely slow
One important thing to note: when using the list approach it seems that every tensor of that list is stored as a separate filesystem memory pointer, so you might need to increase your file limit: ulimit -n 99999.
It seems the issue is that the DataLoader gets very slow because it needs to read so many files?
Is there a way around this?
Code sample
See https://github.com/mpaepper/pytorch_lightning_multi_gpu_speed_analysis
Expected behavior
I would expect the same dataset stored as a list of tensors to also train quickly.
Environment
CUDA:
GPU:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
available: True
version: 10.1
Packages:
numpy: 1.16.4
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.7-dev
tensorboard: 1.14.0
tqdm: 4.46.0
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.3
version: #100-Ubuntu SMP Wed Apr 22 20:32:56 UTC 2020 |
val_dataloader participate in training? | [
"question",
"won't fix"
] | ❓ Questions and Help
It's very strange. I just use
trainer.fit(net, train_dataloader=data.train_loader, val_dataloaders=data.val_loader)
trainer.test(test_dataloaders=data.test_loader)
to train, validate and test.
I am sure train_loader, val_loader and test_loader are from independent datasets.
But the test result turns out to be much worse than the val result, and the val result is close to the train result, which makes it seem that the val set takes part in training. I can't figure out what happened. Is this a bug, or is my way wrong? |
Trainer.from_argparse_args fails on unknown kwargs | [
"help wanted"
] | 🐛 Bug
Since **kwargs was removed from Trainer's init in #1820, initializing Trainer objects fails if you have any non Trainer specific arguments in your parser.
If this is the expected behavior, the docs should be updated to reflect the workaround I mention below, as a few of them would currently fail.
To Reproduce
This works
parser = ArgumentParser()
parser = Trainer.add_argparse_args(parser)
args = parser.parse_args("")
trainer = Trainer.from_argparse_args(args)
Trainer init fails on unexpected kwarg 'script_specific_arg'
parser = ArgumentParser()
parser = Trainer.add_argparse_args(parser)
parser.add_argument('--script_specific_arg', type=str, default='hope this works')
args = parser.parse_args('')
trainer = Trainer.from_argparse_args(args)
Trainer init fails on unexpected kwarg 'some_argument'
class SomeModel(LightningModule):
def __init__(self, hparams):
super().__init__()
self.hparams = hparams
@staticmethod
def add_model_specific_args(parent_parser):
parser = ArgumentParser(parents=[parent_parser], add_help=False)
parser.add_argument('--some_argument', type=int, default=128)
return parser
parser = ArgumentParser()
parser = Trainer.add_argparse_args(parser)
parser = SomeModel.add_model_specific_args(parser)
args = parser.parse_args("")
trainer = Trainer.from_argparse_args(args)
You can get around it if you init Trainer on temp args
parser = ArgumentParser()
parser = Trainer.add_argparse_args(parser)
# Grab only the trainer args and init it right away
temp_args, _ = parser.parse_known_args('')
trainer = Trainer.from_argparse_args(temp_args)
parser.add_argument('--script_specific_arg', type=str, default='hope this works')
args = parser.parse_args("")
Expected behavior
Trainer.from_argparse_args should ignore unknown kwargs.
Environment
Google Colab
Current master branch (0.7.7.dev0)
Additional context
No error if using stable PyPi version (0.7.6) |
How to access training and validation losses from callbacks? | [
"question"
For example, if my validation_epoch_end in the trainer returns {'avg_loss': loss, 'log': logs}, how do I get the loss value from a callback method like def on_validation_end(trainer, pl_module)? |
Setting of PYTHONHASHSEED has no effect | [
"duplicate"
] | Problem
pytorch-lightning/pytorch_lightning/trainer/seed.py
Line 32
in
caa9c67
os.environ["PYTHONHASHSEED"] = str(seed)
It is not possible to change PYTHONHASHSEED inside the current program.
To see this, run the following two commands:
PYTHONHASHSEED=1 python -c "import os; print(hash('a'))"
PYTHONHASHSEED=1 python -c "import os; os.environ['PYTHONHASHSEED']='2'; print(hash('a'))"
The commands should output the same value, meaning that setting PYTHONHASHSEED after the process has started has no effect.
The following commands should output different values, also indicating that setting PYTHONHASHSEED after the process has started has no effect:
unset PYTHONHASHSEED # make sure it is not already set
python -c "import os; os.environ['PYTHONHASHSEED']='2'; print(hash('a'))"
python -c "import os; os.environ['PYTHONHASHSEED']='2'; print(hash('a'))"
edit: fixed second example
Solutions
Some ways I can think of to solve this:
Emit a warning if PYTHONHASHSEED is not 0 (0 means hash randomization is disabled)
Restarts the current process with PYTHONHASHSEED defined, see my snippet below. (This should be done as early as possible, to avoid non-idempotent code being executed twice.)
Personally I set PYTHONHASHSEED=0 in all of my .zshrc, but I also use the ensure_pythonhashseed function below.
For pytorch-lightning, I prefer solution 1, because it is less complicated, both to implement correctly and for users to reason about.
Snippet
def ensure_pythonhashseed(seed=0):
current_seed = os.environ.get("PYTHONHASHSEED")
seed = str(seed)
if current_seed is None or current_seed != seed:
print(f'Setting PYTHONHASHSEED="{seed}"')
os.environ["PYTHONHASHSEED"] = seed
# restart the current process
os.execl(sys.executable, sys.executable, *sys.argv) |
How to return a final val loss in trainer? | [
"feature",
"question",
"priority: 0"
] | What is your question?
Most optimisation packages, e.g. Ray Tune / Hyperopt, expect the training loop to return a final accuracy for the optimiser to decide what to try next.
How do I do this with the Trainer module for Pytorch Lightning?
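One possible pattern (a sketch; it assumes that metrics logged during validation end up in trainer.callback_metrics, which is what the checkpoint/early-stopping callbacks read):
from pytorch_lightning import Trainer

def train_fn(config):
    model = MyLightningModule(config)  # hypothetical module built from the search config
    trainer = Trainer(max_epochs=10, logger=False, checkpoint_callback=False)
    trainer.fit(model)
    # return the last validation loss so Ray Tune / Hyperopt can minimise it
    return float(trainer.callback_metrics["val_loss"])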
What's your environment?
OS: Linux
Packaging pip
Version 0.7.6 |
Seeming disparity in calling `on_load_checkpoint` in hpc load/save | [
"question",
"won't fix"
] | I noticed that before calling model.on_hpc_save(checkpoint) on L464, .hpc_save() gives the model a chance to add a few things by calling .dump_checkpoint() which later calls model.on_save_checkpoint(checkpoint) on L369.
However .hpc_load() calls only model.on_hpc_load(checkpoint) at training_io.py#L502, and does not seem to call model.on_load_checkpoint neither via .restore_training_state(), nor directly. This seems to be in contrast to what .restore() does on L305.
Is this the intended behaviour? |
Support torch.jit.script on LightningModules | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
There are a number of advantages to converting a model with TorchScript (e.g. static optimizations, better saving / loading, especially into non-Python environments for deployment). However, no LightningModules can be converted using torch.jit.script. Here's a simple example with the error produced (note that this works as-is if we inherit from nn.Module instead of pl.LightningModule):
import pytorch_lightning as pl
import torch
# class Model(nn.Module): # works fine
class Model(pl.LightningModule):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(5, 10)
def forward(self, x):
return self.layer(x)
torch.jit.script(Model())
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-70-1fe19c1470da> in <module>
10 return self.layer(x)
11
---> 12 torch.jit.script(Model())
~/anaconda/envs/vsrl/lib/python3.7/site-packages/torch/jit/__init__.py in script(obj, optimize, _frames_up, _rcb)
1259
1260 if isinstance(obj, torch.nn.Module):
-> 1261 return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
1262
1263 qualified_name = _qualified_name(obj)
~/anaconda/envs/vsrl/lib/python3.7/site-packages/torch/jit/_recursive.py in create_script_module(nn_module, stubs_fn, share_types)
295 if share_types:
296 # Look into the store of cached JIT types
--> 297 concrete_type = concrete_type_store.get_or_create_concrete_type(nn_module)
298 else:
299 # Get a concrete type directly, without trying to re-use an existing JIT
~/anaconda/envs/vsrl/lib/python3.7/site-packages/torch/jit/_recursive.py in get_or_create_concrete_type(self, nn_module)
254 return nn_module._concrete_type
255
--> 256 concrete_type_builder = infer_concrete_type_builder(nn_module)
257
258 nn_module_type = type(nn_module)
~/anaconda/envs/vsrl/lib/python3.7/site-packages/torch/jit/_recursive.py in infer_concrete_type_builder(nn_module)
133 # Constants annotated via `Final[T]` rather than being added to `__constants__`
134 for name, ann in class_annotations.items():
--> 135 if torch._jit_internal.is_final(ann):
136 constants_set.add(name)
137
~/anaconda/envs/vsrl/lib/python3.7/site-packages/torch/_jit_internal.py in is_final(ann)
681
682 def is_final(ann):
--> 683 return ann.__module__ == 'typing_extensions' and \
684 (getattr(ann, '__origin__', None) is typing_extensions.Final)
685 except ImportError:
AttributeError: 'ellipsis' object has no attribute '__module__'
Digging into this a little, we have
print(Model.__annotations__)
# {'_device': Ellipsis, '_dtype': typing.Union[str, torch.dtype]}
and the _device annotation comes from DeviceDtypeModuleMixin (one of the super-classes of LightningModule). Here's the relevant snippet:
class DeviceDtypeModuleMixin(torch.nn.Module):
_device: ...
This seems to be the only issue because this code works:
import pytorch_lightning as pl
import torch
class Model(pl.LightningModule):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(5, 10)
def forward(self, x):
return self.layer(x)
# Model.__annotations__ = {} # this works too but doesn't seem as nice
Model.__annotations__["_device"] = torch.device
torch.jit.script(Model())
However, if I try to set the annotation to typing.Union[str, torch.device] (which seems to be the true type based on this line), then I get ValueError: Unknown type annotation: 'typing.Union[str, torch.device]' in torch.jit.script.
Is the str type for _device actually used? I don't see that anywhere, and I actually do see at least one place where there would be an error if self.device returned a string (here). I'll just go ahead and submit a PR to update the annotations, but feel free to comment here or on the PR if there's something I'm missing about the type annotations.
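For reference, a minimal sketch of what the annotation fix could look like inside the mixin itself, assuming _device only ever holds a torch.device (the str case would then be normalized in the setter); this is a sketch of the proposed change, not the current source:
import torch

class DeviceDtypeModuleMixin(torch.nn.Module):
    # a concrete annotation that TorchScript's class-annotation scan can handle
    _device: torch.device

    def __init__(self):
        super().__init__()
        self._device = torch.device('cpu')
With an annotation like this in place, torch.jit.script(Model()) should no longer hit the AttributeError above. |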
Images take a lot of GPU space | [
"question"
] | Right now my dataset is about 15 GB of PNG images, with each image taking about 0.15 MB. I'm working on a V100 GPU with 32 GB of memory. My model takes only about 1 GB on the GPU; however, I'm not able to use big batches, and the maximum that I can use is 32 images per batch. Is this a normal situation? Or is there a possibility that I have a memory leak? |
Separate *_percent_check for each *_dataloader | [
"feature",
"help wanted",
"won't fix",
"discussion"
] | 🚀 Feature
Can we have *_percent_check be a list too, where len(*_percent_check) == len(*_dataloaders)? If a single value is passed, it would be the same for all the dataloaders. Not sure how useful this would be in every case, just a thought.
Motivation
Pitch
For each val_dataloader or test_dataloader we can have an option to pass *_percent_check as a list with a percent_check for each of the dataloaders. For example, val_percent_check = [0.1, 0.4] and val_dataloaders = [val_dl1, val_dl2].
Alternatives
Additional context
Later we can do the same for training as well if #1959 gets merged.
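Until something like this exists, a minimal sketch of a workaround, assuming the goal is simply to validate on a fixed fraction of each dataset (the helper and variable names below are illustrative):
from torch.utils.data import DataLoader, Subset

def fraction_loader(dataset, fraction, batch_size=32):
    # keep only the first `fraction` of the samples for validation
    n = max(1, int(len(dataset) * fraction))
    return DataLoader(Subset(dataset, range(n)), batch_size=batch_size)

# e.g. 10% of the first val set and 40% of the second
# val_dataloaders = [fraction_loader(val_ds1, 0.1), fraction_loader(val_ds2, 0.4)]
val_percent_check itself would then be left at its default. |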
ModelCheckpointCallback.on_validation_end does not listen to val_check_interval? | [
"help wanted"
] | Evidence: I have a unit test where I train for 2 epochs with val_check_interval=0.5 and I only get two model checkpoints. I set breakpoints, and indeed it is only hit twice; I would expect 4 times. |
Simple NCCL example with TCP/IP backend | [
"question",
"won't fix"
] | Simple NCCL with TCP/IP example
What is your question?
I can't figure out how to "add all_reduce" so that Lightning would be usable in a very simple NCCL TCP/IP environment. I have a very simple PyTorch program that does exactly that; I can't see how I could integrate this with Lightning (even though there might be a really simple solution). I like the concepts of Lightning very much, but what I am missing is something like a "custom all_reduce" that's called after each sample / batch / epoch (I think it depends on which one makes the most sense). Here's a RUNNING example of what I would like to integrate, and here's just the basic structure:
init_method = 'tcp://192.168.1.154:23456'
backend = 'nccl'
world_size = 2 # n
manual_seed = 12345
class Partition(object):
""" Dataset partitioning helper """
...
class DataPartitioner(object):
""" Dataset partitioning """
...
class Net(nn.Module):
...
def average_gradients(model):
""" Distributed Gradient averaging. """
size = float(dist.get_world_size())
for param in model.parameters():
dist.all_reduce(param.grad.data, op=torch.distributed.ReduceOp.SUM)
param.grad.data /= size
def run(rank, size):
torch.manual_seed(1234)
...
for epoch in range(10):
epoch_loss = 0.0
for data, target in train_set:
...
print('Rank ', dist.get_rank(), ', epoch ',
epoch, ': ', epoch_loss / num_batches)
average_gradients(model)
def start(rank, world_size):
dist.init_process_group(
backend=backend,
init_method=init_method,
rank=rank,
world_size=world_size
)
run(rank, world_size)
rank = int(sys.argv[1])
size = int(sys.argv[2])
start(rank, size)
What have you tried?
This can easily be run (Node 0: #python3 whatever.py 0 2; Node 1: #python3 whatever.py 1 2), and given NCCL is available and both nodes "see" each other on a TCP/IP level, this works.
I can see how I can just make the model a pl.LightningModule, and data loading is pretty much how Lightning likes it, too. I simply have no clue how to get the average_gradients method into Lightning so that it would use dist.all_reduce(param.grad.data, op=torch.distributed.ReduceOp.SUM). The example is mostly from the PyTorch distributed docs.
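One possible direction, purely as a sketch: assuming you keep your own init_process_group call (as above) instead of Lightning's built-in DDP wrapper, the averaging could live in the on_after_backward hook of the LightningModule (hook availability and names have shifted between versions, so verify against yours):
import torch.distributed as dist
import pytorch_lightning as pl

class Net(pl.LightningModule):
    # forward / training_step / configure_optimizers defined as usual

    def on_after_backward(self):
        # same gradient averaging as average_gradients() above,
        # run right after loss.backward() and before optimizer.step()
        world_size = float(dist.get_world_size())
        for param in self.parameters():
            if param.grad is not None:
                dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
                param.grad.data /= world_size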
What's your environment?
OS: Linux
Packaging pip, conda, nvidia-docker-runtime
Version: cloned 2020/05/27 from GitHub; pip install .
Any help would be very appreciated. |
DDP is incompatible with large datasets | [
"help wanted"
] | I'm trying to stream a large data file instead of loading it so it doesn't have to be pickled for multi-processing. However, open-file objects give a TypeError: cannot serialize '_io.TextIOWrapper' object error, so I have to open it within a subprocess instead. But the train_dataloader and val_dataloader methods get called in the main process of pytorch-lightning! How can I bypass this issue without changing the source code?
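A minimal sketch of one workaround, assuming the data is a line-per-example text file: keep only the path in the dataset object and open the file lazily in __iter__, so no open file handle ever has to be pickled when the worker or DDP processes are spawned (the path and class name below are illustrative):
from torch.utils.data import IterableDataset, DataLoader

class StreamingTextDataset(IterableDataset):
    def __init__(self, path):
        self.path = path  # only the path string gets pickled, not a file object

    def __iter__(self):
        # the file is opened inside the worker / child process
        with open(self.path) as f:
            for line in f:
                yield line.strip()

# train_dataloader in the LightningModule can then just return
# DataLoader(StreamingTextDataset("/path/to/data.txt"), batch_size=32)
train_dataloader can still be defined in the main process; only the cheap path string crosses the process boundary. |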
Lr_finder based on the validation loss rather than training loss | [
"question",
"won't fix"
] | What is your question?
For the default LR Range Test in PyTorch lightning, i.e., "lr_finder", is the reported loss curve based on training loss, test loss, or generalization loss? For me, it would be more reasonable to select the learning rate based on the test loss rather than training loss.
I noticed that there are "val_dataloader" and "train_dataloader" arguments in "lr_finder", but it is not clear what they do.
What's your environment?
OS: iOS
Packaging: pip
Version: 0.7.6 |
Crash trying to construct module_arguments when module is created in a method | [
"help wanted"
] | Last line here crashes:
import pytorch_lightning as pl
class Module(pl.LightningModule):
def forward(self):
return 0
def test_outside():
a = Module()
print(a.module_arguments)
class A:
def test(self):
a = Module()
print(a.module_arguments)
def test2(self):
test_outside()
test_outside() # prints {}
A().test2() # prints {}
A().test() # crashes
For context, this happens when we want to instantiate LightningModules inside unit test functions. |
What the NeptuneLogger records as params or properties | [
"question",
"won't fix",
"logger"
] | ❓ Questions and Help
What is your question?
Currently, NeptuneLogger seems to log what is passed as hparams as experiment properties.
pytorch-lightning/pytorch_lightning/loggers/neptune.py, line 233, in commit c3cf33d:
@rank_zero_only
def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None:
params = self._convert_params(params)
params = self._flatten_dict(params)
for key, val in params.items():
self.experiment.set_property(f'param__{key}', val)
But neptune-client has already logged the hparams that were passed to it when it called project.create_experiment. So the current implementation is trying to log hparams twice, in different ways.
Now I am trying to log a large number of hparams using hydra.
It doesn't take much time to log the hparams during project.create_experiment, but it takes a long time to repeat set_property in the log_hyperparams loop.
So I am overriding it as follows and running it without any problems.
Is there a problem with changing it in this way? Also, is there any chance that this will become the default?
@rank_zero_only
def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None:
pass
What's your environment?
OS: [Linux]
Packaging [pip]
Version [0.7.6] |
Dynamically change optimizer frequency | [
"question"
] | ❓ Questions and Help
What is your question?
I have a WGAN and the ratio between iterations on the discriminator and on the generator is fixed at 5:1. I accomplished this by passing the frequency parameter in the configure_optimizers method
res_1 = {
'optimizer': optimizer_d,
'frequency': 5,
'lr_scheduler': scheduler_d
}
same for generator
res_2 = {
'optimizer': optimizer_g,
'frequency': 1,
'lr_scheduler': scheduler_g
}
How can I dynamically change the frequency parameter, such that for the first n iterations I have a frequency x and afterwards a frequency y?
Code
Don't know how to do it.
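One possible direction, purely as a sketch: drop the frequency entry and decide per step by overriding optimizer_step, switching the ratio once a global-step threshold is reached. The threshold and the exact optimizer_step signature below are assumptions (the signature has changed between Lightning versions):
import pytorch_lightning as pl

class WGAN(pl.LightningModule):
    # forward / training_step / configure_optimizers as usual

    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                       second_order_closure=None):
        # phase 1: 5 discriminator steps per generator step; phase 2: 1:1
        n_critic = 5 if self.trainer.global_step < 10000 else 1
        if optimizer_idx == 0:  # discriminator steps every batch
            optimizer.step()
            optimizer.zero_grad()
        elif optimizer_idx == 1 and batch_idx % n_critic == 0:  # generator
            optimizer.step()
            optimizer.zero_grad()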
What have you tried?
What's your environment?
OS: OS independent
Packaging: pip
Version: 0.7.6 |
How to store dataset in shared memory for ddp? | [
"feature",
"help wanted"
] | How can I share memory across my processes in DDP? I'm getting OOM errors with 2 GPUs and a 6 GB dataset. My script would also load faster if it wasn't pickling the dataset and copying it to the other processes.
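A minimal sketch of one approach, assuming the dataset can be dumped once to a memory-mapped file on disk: every DDP process then maps the same file through the OS page cache instead of holding (and pickling) its own in-memory copy. The path, shape, and dtype below are illustrative:
import numpy as np
import torch
from torch.utils.data import Dataset

class MemmapDataset(Dataset):
    def __init__(self, path, shape, dtype=np.float32):
        self.path, self.shape, self.dtype = path, shape, dtype
        self._data = None  # mapped lazily in each process

    def _lazy_init(self):
        if self._data is None:
            self._data = np.memmap(self.path, mode="r", dtype=self.dtype, shape=self.shape)

    def __len__(self):
        return self.shape[0]

    def __getitem__(self, idx):
        self._lazy_init()
        return torch.from_numpy(np.array(self._data[idx]))
Because the map is opened lazily, nothing heavyweight is pickled when the DDP processes or DataLoader workers start. |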
Stopping the code along with a graceful shutdown. | [
"question"
] | Is there a way to stop the training from within the model when some criteria are satisfied? Something along the lines of:
class myCallback(Callback):
def __init__(self):
...
def on_epoch_end(self, trainer, pl_module):
if criteria:
model.stop_training = True # stops the training; need help here
Note that I also want to keep the early stopping feature where 'val_loss' is monitored, but I want to stop running the code if some other criterion is satisfied as well. Also, is my approach of putting this in a callback correct, or should I inherit from the early stopping callback?
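A minimal sketch of one way this could look, assuming a Lightning version where the Trainer exposes a should_stop flag (treat the attribute name as an assumption to verify against your version); criteria_met is a hypothetical flag you maintain on your module:
from pytorch_lightning.callbacks import Callback

class StopOnCriteria(Callback):
    def on_epoch_end(self, trainer, pl_module):
        # stop as soon as the custom criteria are met
        if getattr(pl_module, "criteria_met", False):
            trainer.should_stop = True  # the training loop exits at its next check
The val_loss-based early stopping can stay as the usual EarlyStopping callback alongside this one. |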
Add Docs Demo'ing Test-Set Loaders With Trainer.Test() | [
"docs"
] | 📚 Documentation
I was wondering about the best-practice way to specify a new test_loader when using trainer.test(), similar to how we can for trainer.fit(). The code was already written; the docs just needed to be updated to sync with it.
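For reference, a short sketch of the pattern in question, assuming the 0.7/0.8-era signatures where loaders can be passed directly to fit() and test(); the dataset and model names are placeholders:
import pytorch_lightning as pl
from torch.utils.data import DataLoader

train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=32)
test_loader = DataLoader(test_dataset, batch_size=32)

trainer = pl.Trainer(max_epochs=10)
trainer.fit(model, train_dataloader=train_loader, val_dataloaders=val_loader)
# a fresh loader can be handed to test() the same way
trainer.test(model, test_dataloaders=test_loader)
If test_dataloaders is omitted, test() falls back to the LightningModule's own test_dataloader(). |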
Transfer learning | [
"feature",
"help wanted",
"won't fix",
"discussion"
] | The problem
The standard flow for transfer learning is as follows:
Start from a pretrained model and create a new head
Freeze all layers but the head
Train
Unfreeze all (or some) layers
Train (with differential learning rates)
Let's call each "freeze/unfreeze -> train" step a phase.
What can change between phases?
The lr_scheduler should restart
New parameter groups can be added to / removed from the optimizer and the lr_scheduler (we don't really add a parameter group to the lr_scheduler, but we should pass a list of learning rates, one for each group; this is called differential learning rates)
Parameter groups
I've implemented a first idea for this as a ModuleMixin here; the idea is for the user to override model_splits, which returns the parameter groups of the model.
You can see how this gets used for differential learning rates here
Changing phases
There are two main ways of changing phases, but currently none of them are smooth.
Use multiple trainers
Use a separate trainer for each phase, always resuming the last state by using resume_from_checkpoint.
A scratch notebook for this idea is here
The problem is that trainer also resumes the lr_scheduler state, but we need it to start from zero
Implement the logic in on_epoch_start
Idea notebook
Check the current_epoch and change phases when needed
Might need to add/remove parameter groups from the optimizer/lr_scheduler
Changing the scheduler is causing problems when used together with LearningRateLogger
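To make the phases concrete, a minimal sketch in plain PyTorch, assuming a torchvision backbone and illustrative learning rates (this is not the mixin API from the linked notebooks):
import torch
import torchvision

backbone = torchvision.models.resnet18(pretrained=True)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 10)  # new head

# phase 1: freeze everything but the head
for name, p in backbone.named_parameters():
    p.requires_grad = name.startswith("fc")
opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
# ... train ...

# phase 2: unfreeze and use differential learning rates per parameter group
for p in backbone.parameters():
    p.requires_grad = True
opt = torch.optim.Adam([
    {"params": [p for n, p in backbone.named_parameters() if not n.startswith("fc")], "lr": 1e-5},
    {"params": backbone.fc.parameters(), "lr": 1e-4},
])
scheduler = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=[1e-5, 1e-4], total_steps=1000)
Re-creating the optimizer and scheduler at the phase boundary is exactly the state that clashes with resume_from_checkpoint restoring the old scheduler. |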
Log a warning/ raise an error when lightning replaces an existing Sampler | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
First of all, thanks for this awesome project! Really enjoying using it!
Feature Request: Log a warning or raise a MisconfigurationException when lightning replaces an existing sampler with DistributedSampler.
Even though this behaviour is documented, it's not intuitive. Also, if someone has defined a sampler, it will lead to very different training results between single GPU and DDP. So it's best to warn the user to remove the sampler from their code if using replace_sampler_ddp=True.
Motivation
I'm pretty new to lightning and recently moved one of my models to it. Unaware of the details, I chose the ddp training strategy (since it was recommended). For a long time, my model was not converging on the val set, and it took me a lot of time to realise what was happening: lightning had replaced my DataLoader's sampler with DistributedSampler! In my particular model, balanced sampling is key to convergence (since the data is highly imbalanced). Lightning's behaviour makes sense in hindsight, and is well documented, but a simple warning or error when this happens would have saved me a lot of debugging time.
Additional context
The code change is very trivial. We can take advantage of the fact that DataLoader's default sampler is SequentialSampler (if shuffle=False and sampler=None).
data_loading.py:115
if self.replace_sampler_ddp and need_dist_sampler:
if not isinstance(dataloader.dataset.sampler, SequentialSampler):
raise MisconfigurationException(...)
Does this make sense? Happy to submit a PR if it does.
On a separate note, can anyone here explain why exactly DistributedSampler is needed when using DDP? I can see that it will help if I have shuffle=False. It will probably also help maintain the sanity of an epoch (since otherwise the epoch length will become num_processes * the single-process epoch length).
But let's say I am doing random shuffling and am cognizant of the fact that my epoch sizes will be higher; there is no real reason that I need to use DistributedSampler for DDP, right? Or am I missing something?
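For completeness, a short sketch of the existing escape hatch, assuming you do want to keep a custom balanced sampler under DDP and accept (or handle yourself) the per-rank partitioning; the weights and dataset names are placeholders:
import pytorch_lightning as pl
from torch.utils.data import DataLoader, WeightedRandomSampler

sampler = WeightedRandomSampler(weights=sample_weights, num_samples=len(sample_weights))
train_loader = DataLoader(train_dataset, batch_size=64, sampler=sampler)

# tell Lightning not to swap the sampler for DistributedSampler
trainer = pl.Trainer(gpus=2, distributed_backend="ddp", replace_sampler_ddp=False)
With replace_sampler_ddp=False, each rank draws independently from the full dataset, so an epoch effectively sees num_processes times more samples unless you partition manually. |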
TestTube version attribute does not perform as documented | [
"bug",
"help wanted",
"good first issue"
] | 🐛 Bug
The docs for TestTube logger state:
If version is not specified the logger inspects the save for existing versions, then automatically assigns the next available version.
However, this does not happen.
To Reproduce
>>> from pytorch_lightning.loggers import TestTubeLogger
>>> logger = TestTubeLogger('tb_logs', name='my_model')
>>> logger.version
>>> assert logger.version is None
Expected behavior
I would expect the TestTube logger to do something similar to:
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/loggers/tensorboard.py#L177 |
Is it ok to translate the Docs into Chinese? | [
"feature",
"help wanted",
"docs"
] | 🚀 Feature
Motivation
Let more Chinese deep learning enthusiasts get to know PyTorch Lightning |
Error using Hydra | [
"bug",
"question"
] | Hi all,
Thanks a lot for the awesome library. I'm trying to use Hydra with pytorch-lightning. I'm using the latest release of pytorch-lightning.
However, I got the following error after my training step:
...
File "/home/.local/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 241, in on_validation_end
self._do_check_save(filepath, current, epoch)
File "/home/.local/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 275, in _do_check_save
self._save_model(filepath)
File "/home/.local/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 142, in _save_model
self.save_function(filepath)
File "/home/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_io.py", line 260, in save_checkpoint
checkpoint = self.dump_checkpoint()
File "/home/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_io.py", line 353, in dump_checkpoint
raise ValueError(
ValueError: ('The acceptable hparams type is dict or argparse.Namespace,', ' not DictConfig')
Exception ignored in: <function tqdm.__del__ at 0x7f1f9e379a60>
It says that DictConfig is not supported for saving, but I saw a pull request in which this problem had been corrected.
Can you point me in the right direction on how to fix this?
Code
@hydra.main("config/config.yaml")
def main(cfg=None):
wrap_tb_logger()
model = hydra.utils.instantiate(cfg.model, cfg)
trainer = pl.Trainer(
gpus=list(cfg.gpus),
max_epochs=cfg.epochs,
train_percent_check=0.4
)
trainer.fit(model)
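One possible stop-gap until a release that handles DictConfig, purely as a sketch: convert the config to a plain Namespace before it becomes the module's hparams. OmegaConf.to_container and argparse.Namespace are real APIs; MyModel and the config fields are placeholders:
import hydra
import pytorch_lightning as pl
from argparse import Namespace
from omegaconf import OmegaConf

@hydra.main("config/config.yaml")
def main(cfg=None):
    # flatten the DictConfig into something the checkpoint code accepts
    hparams = Namespace(**OmegaConf.to_container(cfg, resolve=True))
    model = MyModel(hparams)  # MyModel stores hparams like any LightningModule
    trainer = pl.Trainer(gpus=list(cfg.gpus), max_epochs=cfg.epochs)
    trainer.fit(model)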
What's your environment?
OS: Linux
Packaging pip
Version 0.7.1 |
Proposal for help | [] | Hi @williamFalcon! I saw your project and I am very pleased by the idea. I wish to help you write production-level code. Please let me know in what way I can help! |
Thanks for sharing! | [] | Good repo! Thanks for sharing! |
Adding visualization module | [] | Would you consider adding visualization capabilities? For example, adding a TensorBoard utility to visualize the validation curve, scalar changes, etc. |
AttributeError: 'Experiment' object has no attribute 'get_meta_copy' | [] | https://github.com/williamFalcon/pytorch-lightning/blob/b8c7baa8acce9e363c33d2580eb1abcca322a211/pytorch_lightning/models/trainer.py#L419
I encountered this problem when I used ddp. |
Allow optimizers to alternate at arbitrary intervals | [
"feature",
"help wanted"
] | For GANs or similar approaches, we may want optimizer A to step every batch while optimizer B might step every k batches.
This feature will enable this behavior.
Approach still needs to be scoped out. Open to suggestions here. |
AttributeError: 'TTNamespace' object has no attribute 'drop_prob' | [] | Got the following error while running the Demo examples:
python single_gpu_node_template.py --gpus "0,1,2,3"
Traceback (most recent call last):
File "single_gpu_node_template.py", line 112, in <module>
main(hyperparams)
File "single_gpu_node_template.py", line 33, in main
model = LightningTemplateModel(hparams)
File "/home/dgueraco/projects/pytorch-lightning/pytorch_lightning/examples/new_project_templates/lightning_module_template.py", line 37, in __init__
self.__build_model()
File "/home/dgueraco/projects/pytorch-lightning/pytorch_lightning/examples/new_project_templates/lightning_module_template.py", line 49, in __build_model
self.c_d1_drop = nn.Dropout(self.hparams.drop_prob)
AttributeError: 'TTNamespace' object has no attribute 'drop_prob' |
Evaluate reduce removal from validation_step | [
"feature",
"help wanted"
] | It's possible that the reduce function after validation_step might make it hard to save outputs such as videos, audio, etc.
Need to evaluate whether it makes sense to remove. |
Consider: ability to set seed | [] | I dunno if this is in scope (feel free to close if not), but when experimenting, setting a fixed seed is handy since you can remove one source of randomness (Karpathy's recipe even includes it as an important beginning step).
Basically, being able to set the seeds for the random, numpy, torch, and other common modules in the config would be handy. |
Typo in module's overview | [] | Hi,
Thanks for developing this module.
There is a small typo in the Lightning module's overview. |