title | labels | bodyText
---|---|---|
on_after_backward for multiple optimizers
|
[
"feature",
"help wanted"
] |
🚀 Feature
on_after_backward is a perfect place to log gradients. Currently, on_after_backward makes no distinction between optimizers; for GAN applications it would be nice to pass optimizer_idx as a parameter to on_after_backward.
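A minimal sketch of what the proposed hook could look like (the optimizer_idx argument is the suggested addition, not the current API; the generator/discriminator attribute names are illustrative):
import pytorch_lightning as pl
class GAN(pl.LightningModule):
    def on_after_backward(self, optimizer_idx=0):  # proposed signature, not yet in Lightning
        module = self.generator if optimizer_idx == 0 else self.discriminator
        prefix = 'gen' if optimizer_idx == 0 else 'disc'
        for name, p in module.named_parameters():
            if p.grad is not None:
                self.logger.experiment.add_histogram(f'{prefix}/{name}_grad', p.grad, self.global_step)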
|
maybe typo in readme
|
[
"docs"
] |
📚 Documentation
riguously -> rigorously
unecessary -> unnecessary
Thank you for the wonderful project!
|
AttributeError: 'Tensor' object has no attribute 'items'
|
[
"bug",
"help wanted"
] |
Hi, I'm not sure what's going on. I tried to follow the tutorial to organize my code into a LightningModule. Can anyone help?
During trainer.fit(), I got this error:
Epoch 1: 0%| | 0/12831 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/allen_wu/.vscode-server-insiders/extensions/ms-python.python-2020.3.69010/pythonFiles/lib/python/debugpy/wheels/debugpy/__main__.py", line 45, in <module>
cli.main()
File "/home/allen_wu/.vscode-server-insiders/extensions/ms-python.python-2020.3.69010/pythonFiles/lib/python/debugpy/wheels/debugpy/../debugpy/server/cli.py", line 427, in main
run()
File "/home/allen_wu/.vscode-server-insiders/extensions/ms-python.python-2020.3.69010/pythonFiles/lib/python/debugpy/wheels/debugpy/../debugpy/server/cli.py", line 264, in run_file
runpy.run_path(options.target, run_name="__main__")
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/allen_wu/sota_lm_dev/codebase/gpt2/Gpt2SeqClassifier.py", line 169, in <module>
trainer.fit(model)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 630, in fit
self.run_pretrain_routine(model)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 830, in run_pretrain_routine
self.train()
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 343, in train
self.run_training_epoch()
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 413, in run_training_epoch
output = self.run_training_batch(batch, batch_idx)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 562, in run_training_batch
loss = optimizer_closure()
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 529, in optimizer_closure
split_batch, batch_idx, opt_idx, self.hiddens)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 703, in training_forward
output = self.process_output(output, train=True)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/logging.py", line 107, in process_output
for k, v in output.items():
AttributeError: 'Tensor' object has no attribute 'items'
|
bug(logger): wandb fails on sweep
|
[
"bug",
"help wanted",
"logger"
] |
🐛 Bug
When using wandb sweeps for hyperparameters search, I get this error:
wandb: ERROR Attempted to change value of key "dropout_std" from 0.030424838979365657 to 0.030424838979365654
The reason is I ran:
wandb_logger.log_hyperparams(params)
I guess this runs into a floating-point precision issue?
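Two possible workarounds, sketched under the assumption that params and wandb_logger are the objects from the report (not an official fix):
import wandb
# Option 1: explicitly allow the numerically near-identical values to be re-set
wandb.config.update(params, allow_val_change=True)
# Option 2: only log hyperparameters the sweep has not already fixed
new_params = {k: v for k, v in params.items() if k not in wandb.config.keys()}
wandb_logger.log_hyperparams(new_params)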
|
Metrics: Base Metric
|
[
"feature",
"help wanted"
] |
🚀 Feature
Add a base class for proper metric implementation
|
Metrics: AUC
|
[
"feature",
"help wanted"
] |
🚀 Feature
Implement general AUC (to be combined with other metrics like ROC)
|
[Metrics] IOU
|
[
"feature",
"help wanted",
"good first issue"
] |
🚀 Feature
Implement (differentiable) IOU
|
[Metrics] SSIM
|
[
"feature",
"help wanted",
"good first issue"
] |
🚀 Feature
Implement SSIM
|
Add gradient checkpointing
|
[
"feature",
"won't fix"
] |
Would be great to support gradient checkpointing for a whole LightningModule.
@tullie
|
How to properly move submodules to GPU?
|
[
"question"
] |
I've coded up a TransformerEncoder that relies on submodules. Specifically, I have a main lightning module (MainTransfomer.py) with two regular torch submodules: one is BertModel and one is a custom TransformerEncoderLayer.
The TransformerEncoderLayer has two submodules of its own, named MultiHeadAttn and PosFeedForward. When I try to run my code via the lightning trainer, it seems that all my tensors/params are being moved to GPU except when it tries to do operations within the MultiHeadAttn and PosFeedForward modules. Specifically, the weight matrix of an nn.Linear() within MultiHeadAttn is still sitting on the CPU.
So my question boils down to: what is the correct way to ensure all submodules are moved to the (correct) GPU within the lightning framework? In plain PyTorch I would just explicitly call .to(device), but I've read that this is not recommended within lightning.
If I explicitly call .to(device) on my TransformerEncoderLayer in my MainTransformerEncoder init, I don't run into this issue.
I can supply code if need be, but the general setup is as described above. Main lightning module inits a torch module, which itself has 2 sub torch modules (attn & ffnn). The attn and ffnn modules don't seem to be moved to GPU by the lightning trainer.
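For reference, .to(device) (which the trainer calls on the whole LightningModule) only recurses into submodules registered as nn.Module attributes or wrapped in nn.ModuleList/nn.ModuleDict; modules kept in plain Python lists or created inside forward() stay on the CPU. A minimal sketch of the usual failure mode and the fix (class and attribute names are illustrative):
import torch.nn as nn
class MultiHeadAttn(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # registered as attributes -> moved automatically by .to(device)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        # NOT moved: a plain Python list hides modules from .to() and .parameters()
        # self.extra = [nn.Linear(dim, dim) for _ in range(2)]
        # moved: wrap them in an nn.ModuleList instead
        self.extra = nn.ModuleList(nn.Linear(dim, dim) for _ in range(2))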
Env:
Ubuntu16.04
conda/py3.8
Lightning v0.7.1
|
pytorch_lightning.utilities.debugging.MisconfigurationException
|
[
"bug",
"help wanted"
] |
Hi, I encountered a problem like #899, but I checked and my pytorch is not the CPU version. Can anyone help? Thanks!
Traceback (most recent call last):
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/allen_wu/.vscode-server-insiders/extensions/ms-python.python-2020.3.69010/pythonFiles/lib/python/debugpy/wheels/debugpy/__main__.py", line 45, in <module>
cli.main()
File "/home/allen_wu/.vscode-server-insiders/extensions/ms-python.python-2020.3.69010/pythonFiles/lib/python/debugpy/wheels/debugpy/../debugpy/server/cli.py", line 427, in main
run()
File "/home/allen_wu/.vscode-server-insiders/extensions/ms-python.python-2020.3.69010/pythonFiles/lib/python/debugpy/wheels/debugpy/../debugpy/server/cli.py", line 264, in run_file
runpy.run_path(options.target, run_name="__main__")
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/allen_wu/sota_lm_dev/codebase/gpt2/Gpt2SeqClassifier.py", line 200, in <module>
trainer = Trainer(gpus=1)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 366, in __init__
self.data_parallel_device_ids = parse_gpu_ids(self.gpus)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 622, in parse_gpu_ids
gpus = sanitize_gpu_ids(gpus)
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 592, in sanitize_gpu_ids
raise MisconfigurationException(message)
pytorch_lightning.utilities.debugging.MisconfigurationException:
You requested GPUs: [0]
But your machine only has: []
My environment packages:
# Name Version Build Channel
_libgcc_mutex 0.1 main
absl-py 0.9.0 pypi_0 pypi
attrs 19.3.0 py_0 conda-forge
backcall 0.1.0 py_0 conda-forge
blas 1.0 mkl
bleach 3.1.3 pyh8c360ce_0 conda-forge
boto3 1.12.24 pypi_0 pypi
botocore 1.15.24 pypi_0 pypi
ca-certificates 2019.11.28 hecc5488_0 conda-forge
cachetools 4.0.0 pypi_0 pypi
certifi 2019.11.28 py37hc8dfbb8_1 conda-forge
chardet 3.0.4 pypi_0 pypi
click 7.1.1 pypi_0 pypi
cudatoolkit 10.1.243 h6bb024c_0
decorator 4.4.2 py_0 conda-forge
defusedxml 0.6.0 py_0 conda-forge
docutils 0.15.2 pypi_0 pypi
entrypoints 0.3 py37hc8dfbb8_1001 conda-forge
filelock 3.0.12 pypi_0 pypi
freetype 2.9.1 h8a8886c_1
future 0.18.2 pypi_0 pypi
google-auth 1.11.3 pypi_0 pypi
google-auth-oauthlib 0.4.1 pypi_0 pypi
grpcio 1.27.2 pypi_0 pypi
icu 64.2 he1b5a44_1 conda-forge
idna 2.9 pypi_0 pypi
importlib-metadata 1.5.0 py37hc8dfbb8_1 conda-forge
importlib_metadata 1.5.0 1 conda-forge
intel-openmp 2020.0 166
ipykernel 5.1.4 py37h5ca1d4c_0 conda-forge
ipython 7.13.0 py37h43977f1_1 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
ipywidgets 7.5.1 pypi_0 pypi
jedi 0.16.0 py37hc8dfbb8_1 conda-forge
jinja2 2.11.1 py_0 conda-forge
jmespath 0.9.5 pypi_0 pypi
joblib 0.14.1 pypi_0 pypi
jpeg 9b h024ee3a_2
json5 0.9.0 py_0 conda-forge
jsonschema 3.2.0 py37hc8dfbb8_1 conda-forge
jupyter_client 6.0.0 py_0 conda-forge
jupyter_core 4.6.3 py37hc8dfbb8_1 conda-forge
jupyterlab 2.0.1 py_0 conda-forge
jupyterlab_server 1.0.7 py_0 conda-forge
ld_impl_linux-64 2.33.1 h53a641e_7
libedit 3.1.20181209 hc058e9b_0
libffi 3.2.1 hd88cf55_4
libgcc-ng 9.1.0 hdf63c60_0
libgfortran-ng 7.3.0 hdf63c60_0
libpng 1.6.37 hbc83047_0
libsodium 1.0.17 h516909a_0 conda-forge
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.1.0 h2733197_0
libuv 1.34.0 h516909a_0 conda-forge
markdown 3.2.1 pypi_0 pypi
markupsafe 1.1.1 py37h8f50634_1 conda-forge
mistune 0.8.4 py37h516909a_1000 conda-forge
mkl 2020.0 166
mkl-service 2.3.0 py37he904b0f_0
mkl_fft 1.0.15 py37ha843d7b_0
mkl_random 1.1.0 py37hd6b4f25_0
nbconvert 5.6.1 py37_0 conda-forge
nbformat 5.0.4 py_0 conda-forge
ncurses 6.2 he6710b0_0
ninja 1.9.0 py37hfd86e86_0
nodejs 13.10.1 hf5d1a2b_0 conda-forge
notebook 6.0.3 py37_0 conda-forge
numpy 1.18.1 py37h4f9e942_0
numpy-base 1.18.1 py37hde5b4d6_1
oauthlib 3.1.0 pypi_0 pypi
olefile 0.46 py37_0
openssl 1.1.1e h516909a_0 conda-forge
pandas 1.0.2 py37h0573a6f_0
pandoc 2.9.2 0 conda-forge
pandocfilters 1.4.2 py_1 conda-forge
parso 0.6.2 py_0 conda-forge
pexpect 4.8.0 py37hc8dfbb8_1 conda-forge
pickleshare 0.7.5 py37hc8dfbb8_1001 conda-forge
pillow 7.0.0 py37hb39fc2d_0
pip 20.0.2 py37_1
prometheus_client 0.7.1 py_0 conda-forge
prompt-toolkit 3.0.4 py_0 conda-forge
protobuf 3.11.3 pypi_0 pypi
ptyprocess 0.6.0 py_1001 conda-forge
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pygments 2.6.1 py_0 conda-forge
pyrsistent 0.15.7 py37h8f50634_1 conda-forge
python 3.7.6 h0371630_2
python-dateutil 2.8.1 py_0 conda-forge
python_abi 3.7 1_cp37m conda-forge
pytorch 1.4.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
pytorch-lightning 0.7.1 pypi_0 pypi
pytz 2019.3 py_0
pyzmq 19.0.0 py37hac76be4_1 conda-forge
readline 7.0 h7b6447c_5
regex 2020.2.20 pypi_0 pypi
requests 2.23.0 pypi_0 pypi
requests-oauthlib 1.3.0 pypi_0 pypi
rsa 4.0 pypi_0 pypi
s3transfer 0.3.3 pypi_0 pypi
sacremoses 0.0.38 pypi_0 pypi
scikit-learn 0.22.2.post1 pypi_0 pypi
scipy 1.4.1 pypi_0 pypi
send2trash 1.5.0 py_0 conda-forge
sentencepiece 0.1.85 pypi_0 pypi
setuptools 46.0.0 py37_0
six 1.14.0 py37_0
sklearn 0.0 pypi_0 pypi
sqlite 3.31.1 h7b6447c_0
tensorboard 2.1.1 pypi_0 pypi
terminado 0.8.3 py37hc8dfbb8_1 conda-forge
testpath 0.4.4 py_0 conda-forge
tk 8.6.8 hbc83047_0
tokenizers 0.5.2 pypi_0 pypi
torchtext 0.5.0 pypi_0 pypi
torchvision 0.5.0 py37_cu101 pytorch
tornado 6.0.4 py37h8f50634_1 conda-forge
tqdm 4.43.0 pypi_0 pypi
traitlets 4.3.3 py37hc8dfbb8_1 conda-forge
transformers 2.5.1 pypi_0 pypi
urllib3 1.25.8 pypi_0 pypi
wcwidth 0.1.8 py_0 conda-forge
webencodings 0.5.1 py_1 conda-forge
werkzeug 1.0.0 pypi_0 pypi
wheel 0.34.2 py37_0
widgetsnbextension 3.5.1 pypi_0 pypi
xz 5.2.4 h14c3975_4
zeromq 4.3.2 he1b5a44_2 conda-forge
zipp 3.1.0 py_0 conda-forge
zlib 1.2.11 h7b6447c_3
zstd 1.3.7 h0b5b093_0
|
Automatic environment check
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
Lightning could automatically detect a requirements.txt or environment.yml file and check if the packages in the current environment meet the specified versions. If these are not met, it could warn the user.
Motivation
Lightning facilitates and encourages reproducibility of research code. A feature like this could further improve this part and make a user's life easier.
Pitch
Check if there is a requirements.txt (pip, pipenv) or environment.yml (conda) file in the same path as the main script.
If there is, check the versions and warn the user if dependencies are not met.
Optional: Automatically upgrade/downgrade packages via pip / conda call (not sure if this is smart)
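A minimal sketch of such a check, using pkg_resources to compare the current environment against a requirements.txt next to the main script (the file name, helper name, and warning format are assumptions, not a proposed API):
import sys
from pathlib import Path
import pkg_resources
def check_requirements(script_path):
    """Warn if the installed packages do not satisfy requirements.txt next to the script."""
    req_file = Path(script_path).resolve().parent / 'requirements.txt'
    if not req_file.exists():
        return
    for line in req_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        try:
            pkg_resources.require(line)
        except (pkg_resources.DistributionNotFound, pkg_resources.VersionConflict) as err:
            print(f'WARNING: requirement not met: {line} ({err})', file=sys.stderr)
check_requirements(__file__)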
Alternatives
Keep as is. The users have to take care of this themselves.
Additional context
I have already implemented this for myself to keep track when working on multiple machines and code repositories.
|
Trainer.add_argparse_args(parser) breaks the default Tensorboard hparams logging.
|
[
"bug",
"help wanted"
] |
🐛 Bug
Trainer.add_argparse_args(parser) breaks the default Tensorboard hparams logging.
To Reproduce
Steps to reproduce the behavior:
I pretty much just put together the sample code from the Hyperparameters section of the docs and it throws the error below.
Code sample
class LitMNIST(pl.LightningModule):
def __init__(self, hparams):
super(LitMNIST, self).__init__()
self.hparams = hparams
self.layer_1 = torch.nn.Linear(28 * 28, hparams.layer_1_dim)
def forward(self, x):
return self.layer_1(x)
def train_dataloader(self):
return DataLoader(mydata(), batch_size=self.hparams.batch_size)
def configure_optimizers(self):
return Adam(self.parameters(), lr=self.hparams.learning_rate)
def main(args):
model = LitMNIST(args)
trainer = pl.Trainer()
trainer.fit(model)
if __name__ == "__main__":
parser = ArgumentParser()
# parametrize the network
parser.add_argument('--layer_1_dim', type=int, default=128)
parser.add_argument('--learning_rate', type=float, default=1e-3)
# add all the available options to the trainer
parser = pl.Trainer.add_argparse_args(parser)
args = parser.parse_args()
main(args)
Traceback (most recent call last):
File "tmp.py", line 56, in <module>
main(args)
File "tmp.py", line 40, in main
trainer.fit(model)
File "/Users/phuc/miniconda3/envs/thinc/lib/python3.7/site-packages/pytorch_ligh
tning/trainer/trainer.py", line 630, in fit
self.run_pretrain_routine(model)
File "/Users/phuc/miniconda3/envs/thinc/lib/python3.7/site-packages/pytorch_ligh
tning/trainer/trainer.py", line 748, in run_pretrain_routine
self.logger.log_hyperparams(ref_model.hparams)
File "/Users/phuc/miniconda3/envs/thinc/lib/python3.7/site-packages/pytorch_ligh
tning/loggers/base.py", line 18, in wrapped_fn
fn(self, *args, **kwargs)
File "/Users/phuc/miniconda3/envs/thinc/lib/python3.7/site-packages/pytorch_ligh
tning/loggers/tensorboard.py", line 113, in log_hyperparams
exp, ssi, sei = hparams(params, {})
File "/Users/phuc/miniconda3/envs/thinc/lib/python3.7/site-packages/torch/utils/
tensorboard/summary.py", line 156, in hparams
raise ValueError('value should be one of int, float, str, bool, or torch.Tenso
r')
ValueError: value should be one of int, float, str, bool, or torch.Tensor
The value it fails at is key callback with value [].
Expected behavior
Trainer.add_argparse_args(parser) should not break TensorBoard hparams logging.
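A possible workaround until this is fixed (a sketch, not an official API): drop non-primitive values from the parsed args before constructing the module, so the TensorBoard logger only receives values its hparams() call accepts:
from argparse import Namespace
def sanitize_hparams(args):
    """Keep only values TensorBoard's hparams() accepts (int, float, str, bool)."""
    clean = {k: v for k, v in vars(args).items() if isinstance(v, (int, float, str, bool))}
    return Namespace(**clean)
# in main(): model = LitMNIST(sanitize_hparams(args))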
Environment
PyTorch Version (e.g., 1.0): 1.3.1
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Python version: 3.7.5
|
Single-node multi-gpu ddp backend tries to delete model checkpoints from all processes
|
[
"bug",
"duplicate",
"help wanted"
] |
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Go to '...'
Run '....'
Scroll down to '....'
See error
Code sample
Expected behavior
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
PyTorch Version (e.g., 1.0):
OS (e.g., Linux):
How you installed PyTorch (conda, pip, source):
Build command you used (if compiling from source):
Python version:
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information:
Additional context
|
MultiGPU Training. Logging problem
|
[
"bug",
"help wanted"
] |
🐛 Bug
When we try logging the loss to TensorBoard during the last step of an epoch with more than one GPU, we can get an exception.
To Reproduce
Start training any model on any dataset with more than one GPU, where the last batch contains samples for only some of the GPUs.
I get the following error:
ValueError: only one element tensors can be converted to Python scalars
Expected behavior
Logging to TensorBoard succeeds.
Environment
OS: Ubuntu
pytorch-lightning==0.7.1
Additional context
I suppose the problem is in this method (trainer/logging.py):
def reduce_distributed_output(self, output, num_gpus)
# reduce only metrics that have the same number of gpus
elif output[k].size(0) == num_gpus:
reduced = torch.mean(output[k])
output[k] = reduced
If the last batch is not full, we should still take the mean, shouldn't we?
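A sketch of the kind of change being suggested (a standalone illustration, not the actual Trainer method): average whatever per-GPU results are present instead of requiring exactly num_gpus entries:
import torch
def reduce_distributed_output(output, num_gpus):
    # sketch: reduce any per-GPU tensor, even when the last batch filled only some GPUs
    for k, v in output.items():
        if isinstance(v, torch.Tensor) and v.dim() > 0 and v.size(0) <= num_gpus:
            output[k] = torch.mean(v)
    return output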
|
Trainer DDP should invoke load_spawn_weights() only in proc_rank == 0
|
[
"bug",
"help wanted"
] |
🐛 Bug
Trainer DDP load_spawn_weights should happen only in proc_rank == 0, since only in that process (node) does save_spawn_weights actually save the checkpoint.
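A sketch of the proposed guard (illustrative only; proc_rank and load_spawn_weights follow the 0.7.x Trainer attributes that appear in the traceback below):
# inside Trainer.fit(), after the DDP processes have finished (sketch of the proposed fix)
if self.proc_rank == 0:
    self.load_spawn_weights(model)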
To Reproduce
Steps to reproduce the behavior:
setup two-node cluster.
set SLURM_NODEID on each node: '0' on node 0 and '1' on node 1.
run the script python app.py on each node.
see stdout on node 1:
Traceback (most recent call last):
File "app.py", line 166, in <module>
main_() # pylint: disable=no-value-for-parameter
File "app.py", line 162, in main_
trainer.fit(model)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 593, in fit
self.load_spawn_weights(model)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 368, in load_spawn_weights
loaded_model = original_model.__class__.load_from_checkpoint(path)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 1353, in load_from_checkpoint
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/torch/serialization.py", line 525, in load
with _open_file_like(f, 'rb') as opened_file:
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/torch/serialization.py", line 212, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/torch/serialization.py", line 193, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/home/ubuntu/pytorch-lightning-intro-guide/__temp_weight_ddp_end.ckpt'
Code sample
app.py:
import pathlib
import pytorch_lightning as pl
import torch
from torch.nn import functional as F
from torch.optim import Adam
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms
class LitMNIST(pl.LightningModule):
def __init__(self):
super().__init__()
self.layer_1 = torch.nn.Linear(28 * 28, 128)
self.layer_2 = torch.nn.Linear(128, 256)
self.layer_3 = torch.nn.Linear(256, 10)
self.train_dataset = None
self.val_dataset = None
self.test_dataset = None
def forward(self, x):
batch_size, channels, width, height = x.size()
x = x.view(batch_size, -1)
x = self.layer_1(x)
x = F.relu(x)
x = self.layer_2(x)
x = F.relu(x)
x = self.layer_3(x)
x = F.log_softmax(x, dim=1)
return x
def prepare_data(self):
# transform
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
# download
data_dir = pathlib.Path.home() / 'data'
mnist_train = datasets.MNIST(data_dir, train=True,
download=True, transform=transform)
mnist_test = datasets.MNIST(data_dir, train=False,
download=True, transform=transform)
# train/val split
mnist_train, mnist_val = random_split(mnist_train, [55000, 5000])
# assign to use in dataloaders
self.train_dataset = mnist_train
self.val_dataset = mnist_val
self.test_dataset = mnist_test
def train_dataloader(self):
return DataLoader(self.train_dataset, batch_size=64)
def val_dataloader(self):
return DataLoader(self.val_dataset, batch_size=64)
def test_dataloader(self):
return DataLoader(self.test_dataset, batch_size=64)
def configure_optimizers(self):
return Adam(self.parameters(), lr=1e-3)
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
# add logging
logs = {'loss': loss}
return {'loss': loss, 'log': logs}
def validation_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
return {'val_loss': loss}
def test_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
return {'val_loss': loss}
def test_epoch_end(self, outputs):
avg_loss = torch.stack( # pylint: disable=no-member
[x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def init_ddp_connection(self, proc_rank: int, world_size: int) -> None:
torch.distributed.init_process_group(
'nccl', rank=proc_rank, world_size=world_size)
def main():
model = LitMNIST()
gpus = 1
num_nodes = 2
trainer = pl.Trainer(gpus=gpus,
num_nodes=num_nodes,
distributed_backend='ddp',
max_epochs=3)
trainer.fit(model)
if __name__ == '__main__':
main()
Expected behavior
All workers on all nodes should finish without errors.
Environment
On each node:
cuda:
GPU:
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
available: True
version: 10.1
packages:
numpy: 1.16.6
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.1
tensorboard: 2.2.0
tqdm: 4.44.1
system:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.7
version: #113-Ubuntu SMP Wed Jan 29 14:54:54 UTC 2020
Additional context
|
Native Amp Support
|
[
"feature",
"help wanted"
] |
Native automatic mixed precision support (torch.cuda.amp) is finally merged:
https://pytorch.org/docs/master/amp.html
https://pytorch.org/docs/master/notes/amp_examples.html
Apex Amp has many known pain points (extension builds, forward/backward compatibility, DataParallel support, flaky checkpointing, I don't even know if it can be hacked to handle double backward/gradient penalty, others…). torch.cuda.amp fixes all these, the interface is more flexible and intuitive, and the tighter integration brings more future performance optimizations into scope.
If you want to talk about adding torch.cuda.amp to Lightning, with an eye towards it becoming the true source of mixed precision and replacing Apex, message me on Pytorch slack anytime. I pinged you there as well, but I'm not sure if you monitor it habitually.
|
Accessing measured time per epoch shown in progress bar.
|
[
"question"
] |
What is your question?
How do I access the measured time per epoch shown in progress bar?
Epoch 8: : 150it [00:03, 46.84it/s, loss=1.625, v_num=0]
It clearly says 00:03 i.e. 3 seconds and I could parse the logs as a hack, but I was wondering if there was any clean way to access the measured time elapsed per training epoch, as I would like to write it to a file.
And if not, how would you recommend timing each epoch? I could put a time.time() in training_step but then would need to add it all up in training_epoch_end, would this work? Thank you!
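One clean option (a sketch using the standard on_epoch_start/on_epoch_end hooks; the output file name is arbitrary) is to record a timestamp when the epoch starts and write the elapsed time when it ends, rather than summing per-step timings:
import time
import pytorch_lightning as pl
class TimedModule(pl.LightningModule):
    def on_epoch_start(self):
        self._epoch_start_time = time.time()
    def on_epoch_end(self):
        elapsed = time.time() - self._epoch_start_time
        with open('epoch_times.txt', 'a') as f:
            f.write(f'epoch {self.current_epoch}: {elapsed:.2f}s\n')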
What's your environment?
OS: Mac OSX
Packaging: Pip-installed packages in a conda environment
Version: 0.7.1
|
How to log (TestTube's metrics.csv) per epoch
|
[
"question",
"won't fix",
"logger"
] |
What is your question?
I use the TestTube logger as well as Neptune.ai and want to keep the local TestTube .csv logs as small and as clean as possible. For that reason I locally only log per epoch. I am not returning a log in training_step, but the metrics.csv still logs created_at for every (10th?) timestep:
Code
def on_epoch_end(self):
# Logging loss per epoch
train_loss_mean = np.mean(self.training_losses)
self.logger[0].experiment.log_metric('epoch/mean_absolute_loss', y=train_loss_mean, x=self.current_epoch)
self.logger[1].experiment.log({'epoch/mean_absolute_loss': train_loss_mean, 'epoch': self.current_epoch}, global_step=self.current_epoch)
self.training_losses = [] # reset for next epoch
trainer = pl.Trainer(
checkpoint_callback=False,
logger=[neptune_logger, testtube_logger],
gpus=hparams.cuda,
val_percent_check=0,
early_stop_callback=early_stopping,
default_save_path=src.settings.LOG_DIR,
max_epochs=hparams.epochs,
row_log_interval=hparams.n_datasamples,
log_save_interval=hparams.n_datasamples
)
What have you tried?
As you can see, I tried changing the log_save_interval and row_log_interval parameters to make logging cheaper and the log files smaller. That works, but not perfectly. My metrics.csv file now looks like this:
created_at,epoch/mean_absolute_loss,epoch
2020-04-02 10:34:58.104557,,
2020-04-02 10:35:02.600094,0.1986402783766389,0.0
2020-04-02 10:35:02.606550,,
2020-04-02 10:35:06.822270,0.11120840712822974,1.0
2020-04-02 10:35:06.827882,,
2020-04-02 10:35:11.068734,0.07875163345225156,2.0
Before, there were many more "empty" lines, but I still have one line with only datetime/created_at for every line with actual data. Is there any way to change that?
|
Make Pytorch-Lightning DDP work without SLURM
|
[
"feature",
"help wanted"
] |
🚀 Feature
Allow pytorch-lightning DDP mode to work everywhere ordinary pytorch DDP can work.
Basically if every node in a cluster defines the following environment variables it should work:
MASTER_PORT: A free port on the machine that will host the process with rank 0.
MASTER_ADDR: IP address of the machine that will host the process with rank 0.
WORLD_SIZE: The total number of processes, so that the master knows how many workers to wait for.
RANK: Rank of each process, so they will know whether it is the master or a worker.
See pytorch documentation
Motivation
Pytorch-lightning positions itself as a framework wrapper around pytorch. One of its differentiating features is the ease of distributed training, and it is very counterintuitive that it doesn't work in cases where vanilla pytorch does.
For example, in Kubeflow there is a special operator, PyTorchJob, that spawns worker nodes with the proper environment variables so that torch.distributed.init_process_group establishes communication between processes.
Pitch
While the user is able to override LightningModule.init_ddp_connection to the following:
def init_ddp_connection(self, proc_rank: int, world_size: int) -> None:
torch.distributed.init_process_group(
'nccl', rank=proc_rank, world_size=world_size)
there's at least one more place that is tightly coupled with SLURM and impedes running it inside an ordinary pytorch distributed environment: the TrainerDDPMixin.ddp_train method:
def ddp_train(self, gpu_idx, model):
"""
Entry point into a DP thread
:param gpu_idx:
:param model:
:param cluster_obj:
:return:
"""
# node rank using relative slurm id
# otherwise default to node rank 0
try:
node_id = os.environ['SLURM_NODEID']
self.node_rank = int(node_id)
except Exception:
self.node_rank = 0
One possible solution is to also check os.environ['RANK'] instead of just assigning rank 0 to the node when the SLURM variable is missing.
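A sketch of that fallback (illustrative; it mirrors the try/except fragment of ddp_train quoted above):
import os
# sketch: prefer SLURM_NODEID, fall back to RANK, default to 0
node_id = os.environ.get('SLURM_NODEID', os.environ.get('RANK', 0))
self.node_rank = int(node_id)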
Alternatives
Additional context
|
on_train_end seems to get called before logging of last epoch has finished
|
[
"bug",
"help wanted",
"priority: 0",
"logger"
] |
🐛 Bug
Maybe not a bug, but unexpected behavior. When using the on_train_end method to either upload a model's latest .csv file created by TestTube to neptune, or to print the last numeric channel value of a metric sent to neptune, the values from the final epoch have not yet been logged. When training has finished, the last line of metrics.csv is 2020-04-02 17:23:16.029189,0.04208208369463682,30.0, but for the outputs/uploads of on_train_end see the code below:
Code sample
def on_epoch_end(self):
# Logging loss per epoch
train_loss_mean = np.mean(self.training_losses)
# Saves loss of final epoch for later visualization
self.final_loss = train_loss_mean
self.logger[0].experiment.log_metric('epoch/mean_absolute_loss', y=train_loss_mean, x=self.current_epoch)
self.logger[1].experiment.log({'epoch/mean_absolute_loss': train_loss_mean, 'epoch': self.current_epoch}, global_step=self.current_epoch)
self.training_losses = [] # reset for next epoch
def on_train_end(self):
save_dir = Path(self.logger[1].experiment.get_logdir()).parent/'metrics.csv'
self.logger[0].experiment.log_artifact(save_dir)
Last line of uploaded metrics.csv: 2020-04-02 15:27:57.044250 0.04208208404108882 29.0
def on_train_end(self):
log_last = self.logger[0].experiment.get_logs()
print('Last logged values: ', log_last)
Output: Last logged values: {'epoch/mean_absolute_loss': Channel(channelType='numeric', id='b00cd0e5-a427-4a3c-a10c-5033808a930e', lastX=29.0, name='epoch/mean_absolute_loss', x=29.0, y='0.04208208404108882')}
When printing self.final_loss in on_train_end I get the correct last value though.
Expected behavior
The on_train_end method should only be called after the last values have been logged.
|
Long time between calls to training_step when there are multiple optimizers
|
[
"help wanted",
"question",
"won't fix",
"priority: 0"
] |
I have a GAN model with two optimizers that is running 30-40% slower in lightning than without. I've discovered that the lost time comes between the end of training_step for optimizer_idx 0 and the start of the call for optimizer_idx 1. There is 120ms of time (cpu, not wall) spent there. 30ms of that time is the backwards step. The other 90ms is unaccounted for. Note that after optimizer_idx 1 is run, there is only 20ms cpu time before optimizer_idx 0 is called again for the next batch.
So why might there be extra time between the optimizers?
This is happening in both the latest release as well as master.
Thanks!
|
Dockerize test env
|
[
"feature",
"help wanted",
"ci"
] |
🚀 Feature
The simplest way to speed up builds is to have a docker image with all dependencies and to preserve the pip cache; that means we will create a docker image which will be pulled.
Motivation
For that, the simplest way is having it on Docker Hub, as it is a native location for almost all CI.
These "devel-lightning" docker images can simply be used by any contributor for testing.
Pitch
We may also have a docker image with lightning and all requirements installed, which would make getting started with lightning even easier and would also be useful for people working in production.
Alternatives
Additional context
Later I may configure a cron job to do this docker build every week.
|
Tensorboard logger error: lightning_logs directory does not exist in multi-node DDP on nodes with rank != 0
|
[
"bug",
"help wanted"
] |
🐛 Bug
In multi-node DDP training mode, on all nodes except rank 0, an error appears at the start of training, caused by the tensorboard logger accessing the lightning_logs directory, which does not exist yet at that moment.
To Reproduce
Steps to reproduce the behavior:
setup multi-node cluster (without SLURM)
set environment variables on each node:
export MASTER_ADDR=<rank 0 node IP>
export MASTER_PORT=23456
export RANK=<node id>
export SLURM_NODEID=<node id>
export WORLD_SIZE=<world-size>
install dependencies:
pip install torch torchvision hydra-core pytorch-lightning
copy app.py and conf.yaml to each node
run script on each node
python app.py
see the error:
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 342, in ddp_train
self.run_pretrain_routine(model)
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in run_pretrain_routine
self.configure_checkpoint_callback()
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_config.py", line 45, in configure_checkpoint_callback
f'version_{self.logger.version}',
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/loggers/tensorboard.py", line 161, in version
self._version = self._get_next_version()
File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/loggers/tensorboard.py", line 167, in _get_next_version
for d in os.listdir(root_dir):
FileNotFoundError: [Errno 2] No such file or directory: '/home/ubuntu/pytorch-lightning-intro-guide/outputs/2020-04-04/15-53-26/lightning_logs'
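A likely fix, sketched against the 0.7.1 tensorboard logger shown in the traceback (attribute names are assumptions): create the root log directory before listing it in _get_next_version:
import os
def _get_next_version(self):
    root_dir = os.path.join(self.save_dir, self.name)
    # sketch: create the directory so os.listdir cannot fail on non-zero ranks
    os.makedirs(root_dir, exist_ok=True)
    existing_versions = [int(d.split('_')[1]) for d in os.listdir(root_dir)
                         if os.path.isdir(os.path.join(root_dir, d)) and d.startswith('version_')]
    return max(existing_versions) + 1 if existing_versions else 0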
Code sample
app.py:
import pathlib
import hydra
import pytorch_lightning as pl
import torch
from omegaconf import OmegaConf
from torch.nn import functional as F
from torch.optim import Adam
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms
class LitMNIST(pl.LightningModule):
def __init__(self):
super().__init__()
self.layer_1 = torch.nn.Linear(28 * 28, 128)
self.layer_2 = torch.nn.Linear(128, 256)
self.layer_3 = torch.nn.Linear(256, 10)
self.train_dataset = None
self.val_dataset = None
self.test_dataset = None
def forward(self, x):
batch_size, channels, width, height = x.size()
x = x.view(batch_size, -1)
x = self.layer_1(x)
x = F.relu(x)
x = self.layer_2(x)
x = F.relu(x)
x = self.layer_3(x)
x = F.log_softmax(x, dim=1)
return x
def prepare_data(self):
# transform
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
# download
data_dir = pathlib.Path.home() / 'data'
mnist_train = datasets.MNIST(data_dir, train=True,
download=True, transform=transform)
mnist_test = datasets.MNIST(data_dir, train=False,
download=True, transform=transform)
# train/val split
mnist_train, mnist_val = random_split(mnist_train, [55000, 5000])
# assign to use in dataloaders
self.train_dataset = mnist_train
self.val_dataset = mnist_val
self.test_dataset = mnist_test
def train_dataloader(self):
return DataLoader(self.train_dataset, batch_size=64)
def val_dataloader(self):
return DataLoader(self.val_dataset, batch_size=64)
def test_dataloader(self):
return DataLoader(self.test_dataset, batch_size=64)
def configure_optimizers(self):
return Adam(self.parameters(), lr=1e-3)
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
# add logging
logs = {'loss': loss}
return {'loss': loss, 'log': logs}
def validation_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
return {'val_loss': loss}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack( # pylint: disable=no-member
[x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def test_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
return {'val_loss': loss}
def test_epoch_end(self, outputs):
avg_loss = torch.stack( # pylint: disable=no-member
[x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def init_ddp_connection(self, proc_rank: int, world_size: int) -> None:
torch.distributed.init_process_group(
'nccl', rank=proc_rank, world_size=world_size)
@hydra.main(config_path='conf.yaml')
def main(conf: OmegaConf):
model = LitMNIST()
trainer = pl.Trainer(gpus=conf.gpus,
num_nodes=conf.num_nodes,
distributed_backend=conf.distributed_backend,
max_epochs=3)
trainer.fit(model)
if __name__ == '__main__':
main() # pylint: disable=no-value-for-parameter
conf.yaml:
gpus: 1
num_nodes: 2
distributed_backend: ddp
Expected behavior
Training should proceed without errors.
Environment
cuda:
GPU:
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
Tesla K80
available: True
version: 10.1
packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.1
tensorboard: 2.2.0
tqdm: 4.45.0
system:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.10
version: #113-Ubuntu SMP Wed Jan 29 14:54:54 UTC 2020
Additional context
|
Epoch progress bar only showing training steps
|
[
"feature",
"help wanted",
"discussion"
] |
Maybe I am missing something obvious, but my epoch progress bar also includes validation steps. This means that when one of my training epochs is over, the progress bar is still around half-way through the number of steps, and then a validation progress bar starts and both increase at the same time.
It makes sense that an epoch should end when training and validation are over, but is there a way to decouple these two, so there is a bar only for training (the epoch progress bar) and another only for validation?
|
Use isinstance() instead of type() in trainer.distrib_parts.check_gpus_data_type
|
[
"bug",
"feature",
"help wanted"
] |
🐛 Bug
When instantiating a Trainer object, it makes sense to be able to pass a subclass of list.
Ideally, this would be something even more general like collections.abc.Sequence, but I'm not too familiar with Lightning's codebase and that change would have a greater likelihood of breaking things.
To Reproduce
Instantiate a Trainer with the gpus parameter being a subclass of list.
Code sample
>>> from pytorch_lightning import Trainer
>>> class MyList(list):
... pass
...
>>> gpus = MyList([0])
>>> t = Trainer(gpus=gpus)
This produces
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda/miniconda3/envs/ai/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 366, in __init__
self.data_parallel_device_ids = parse_gpu_ids(self.gpus)
File "/opt/anaconda/miniconda3/envs/ai/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 613, in parse_gpu_ids
check_gpus_data_type(gpus)
File "/opt/anaconda/miniconda3/envs/ai/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 561, in check_gpus_data_type
raise MisconfigurationException("GPUs must be int, string or list of ints or None.")
pytorch_lightning.utilities.debugging.MisconfigurationException: GPUs must be int, string or list of ints or None.
Expected behavior
Trainer is instantiated normally as it would had a list been passed.
Environment
PyTorch Version: 1.4.0
PyTorch Lightning Version: 0.7.1
OS: Ubuntu 19.10
How you installed PyTorch: pip
Python version: 3.7
Potential Fix
In pytorch_lightning/trainer/distrib_parts.py check types using isinstance() instead of type():
def check_gpus_data_type(gpus):
# if gpus is not None and type(gpus) not in (int, str, list):
if gpus is not None and not isinstance(gpus, (int, str, list)):
raise MisconfigurationException("GPUs must be int, string or list of ints or None.")
I'll put in a PR if this change sounds good
|
How to publicize blog post
|
[
"question"
] |
Hi!
I wrote a blog post on how to use Optuna with PyTorch Lightning. If you could retweet, or post somewhere, that would be appreciated!
Thanks!
|
give "validation sanity check" flag for "validation_epoch_end" & "validation_step"
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
Motivation
When using a custom saver or logger in the validation functions (validation_epoch_end, validation_step) with Trainer.fit(), the validation sanity check always executes, so messy logs come out.
Pitch
def validation_step(self, batch, batch_nb, sanity_check):
if sanity_check:
...
def validation_epoch_end(self, outputs, sanity_check):
if sanity_check:
...
or
def validation_step(self, batch, batch_nb):
if self.sanity_check:
...
def validation_epoch_end(self, outputs):
if self.sanity_check:
...
Alternatives
None
Additional context
None
|
Add an option to disable Trainer.detect_nan_tensors
|
[
"feature",
"help wanted"
] |
🚀 Feature
Add an option to disable Trainer.detect_nan_tensors
Motivation
This function tends to be pretty slow when your network has a lot of parameters, especially when they are spread across many small tensors. For example, in my case it took ~0.5s per training iteration.
Pitch
Add an option to the Trainer class that disables calling detect_nan_tensors every epoch.
Alternatives
Remove it altogether. Bad idea.
Additional context
|
Add dataloader arg to Trainer.test()
|
[
"feature",
"help wanted",
"priority: 0",
"discussion",
"let's do it!"
] |
🚀 Feature
It would be nice if you could use a model for inference using:
Trainer.test(model, test_dataloaders=test_loader)
Motivation
This will match the calling structure for Trainer.fit() and allow for test to be called on any dataset multiple times
Pitch
Here's a use case. After training a model using 5-fold cross-validation, you may want to stack the 5 checkpoints across multiple models, which will require a) out-of-fold (OOF) predictions and b) the 5 test predictions (which will be averaged). It would be cool if a & b could be generated as follows:
for f in folds:
model1.load_from_checkpoint(f'path/to/model1_fold{f}.ckpt')
trainer.test(model1, test_dataloaders=valid_loader)
trainer.test(model1, test_dataloaders=test_loader)
model2.load_from_checkpoint(f'path/to/model2_fold{f}.ckpt'))
trainer.test(model2, test_dataloaders=valid_loader)
trainer.test(model2, test_dataloaders=test_loader)
Alternatives
Maybe I'm misunderstanding how test works and there is an easier way? Or perhaps the best way to do this is to write an inference function as you would in pure PyTorch?
Additional context
|
Model multiple parameters on TPU
|
[
"bug",
"help wanted"
] |
🐛 Bug
load_from_checkpoint fails for model with additional required parameters (besides hparams) in model constructor on TPU with more than 1 core.
To Reproduce
Steps to reproduce the behavior:
Add additional required parameter (besides hparams) in model constructor e.g. dataset
Run training on TPU with more than 1 core
See error
Traceback (most recent call last):
File "train.py", line 83, in <module>
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 721, in fit
self.load_spawn_weights(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 372, in load_spawn_weights
loaded_model = original_model.__class__.load_from_checkpoint(path)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/lightning.py", line 1512, in load_from_checkpoint
model = cls._load_model_state(checkpoint, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/lightning.py", line 1543, in _load_model_state
model = cls(*model_args)
TypeError: __init__() missing 1 required positional argument: 'dataset'
Code sample
Google Colab Notebook
from pytorch_lightning import Trainer
from argparse import Namespace
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
class CoolSystem(pl.LightningModule):
def __init__(self, hparams, dataset):
super(CoolSystem, self).__init__()
# not the best model...
self.l1 = torch.nn.Linear(28 * 28, 10)
self.hparams = hparams
def forward(self, x):
# called with self(x)
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_idx):
# REQUIRED
x, y = batch
y_hat = self.forward(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_idx):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
return {'val_loss': F.cross_entropy(y_hat, y)}
def validation_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.0004)
def prepare_data(self):
self.mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
self.mnist_test = MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())
def train_dataloader(self):
loader = DataLoader(self.mnist_train, batch_size=32, num_workers=2)
return loader
def val_dataloader(self):
loader = DataLoader(self.mnist_test, batch_size=32)
return loader
class Dataset():
pass
model = CoolSystem({ "test_param": 2 }, Dataset())
trainer = Trainer(num_tpu_cores=8, train_percent_check=0.02, val_percent_check=0.1, max_epochs=1)
trainer.fit(model)
Expected behavior
Model parameters are saved and loaded correctly.
Environment
PyTorch Version (e.g., 1.0): 1.6.0a0+3e5d25f
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source): -
Python version: 3.6
CUDA/cuDNN version: -
GPU models and configuration: TPU
Any other relevant information: PyTorch Lightning from master branch
|
TPU error: RAM full, page stopped responding and slower than GPU on google colab
|
[
"bug",
"help wanted",
"accelerator: tpu"
] |
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Open lightning_mnist_tpu.ipynb
Run the code
Expected behavior
The code runs normally and faster than GPU.
Error
The webpage stopped responding soon after running the trainer, on several devices such as a PC, a phone, and the Puffin browser, with RAM reaching 100% on the PC (both GPU and TPU).
Iteration speed for TPU calculations is ~30 it/s while iteration speed for GPU is >90 it/s.
Additional context
Running the demo notebook Lightning-demo.ipynb on TPU solved the first error, but the iteration speed is still slower for TPU, with prepare_data added.
|
Implement Asynchronous GPU transfer and Training with Multithreading
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
Asynchronous GPU transfer can be achieved by utilizing pinned memory with multithreading
Minimal example code
https://github.com/HenryJia/Lighter/blob/master/lighter/train/loaders.py
Motivation
Parallelising GPU transfer and training will cut down the time the GPU is stuck waiting for data from the CPU
https://devblogs.nvidia.com/how-overlap-data-transfers-cuda-cc/
Pitch
Everyone likes faster training and maximal GPU utilisation
Alternatives
Not Applicable
Additional context
None
|
Auto move input to proper device for inference
|
[
"feature",
"help wanted",
"discussion",
"let's do it!"
] |
Does PyTorch Lightning provide abstractions for inference? In particular, does it provide ways of automatically handling the transfer to/from GPU when I call model(x), or do I need to roll my own code for that?
Example Use Case
I have a use case where I train a model on slices of a sliding window of an audio spectrogram (i.e., let's say 1 second chunks). When training is finished, I'd like to see the performance of the model on an entire file. Pseudocode:
# generate training data
X, Y = [], []
for audio_file in audio_files:
for x, y in sliding_window(audio_file):
X.append(x); Y.append(y)
X, Y = shuffle(X, Y) # shuffle the slices of all files
# Train model on slices
model = ExampleModel(X, Y)
trainer = Trainer(gpus=1)
trainer.fit(model)
# Plot the performance on a whole test file:
test_Y = []
for x, _ in sliding_window(test_file):
test_Y.append(model(x))
plt.plot(test_Y)
Notice that during training, the notion of a file is entirely gone, but when I plot my test file, I reintroduce that. Of course, in my real code, my training data X, Y is split into training, validation and test, as usual. The plotting step is an additional verification; sort of like putting the pieces together.
Problem
When the model runs on the GPU, The last part of the code becomes:
# Plot the performance on a whole test file:
model.eval()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
test_Y = []
for x, _ in sliding_window(test_file):
y = model(x.to(device)).cpu()
test_Y.append(y)
plt.plot(test_Y)
This isn't the end of the world, but it's not as nice as the other code that PyTorch Lightning helped me refactor. I also can't call x.type_as(...) since in that loop, I have no reference type that lives on the CPU/GPU that I could refer to (or maybe I can, but I haven't figured it out).
A workaround to this is to save the model and load it again, on a CPU.
# Train model on slices
# ...
trainer.fit(model)
trainer.save_checkpoint("model.ckpt")
model = ExampleModel.load_from_checkpoint("model.ckpt")
# Plot the performance on a whole test file:
model.eval()
test_Y = []
for x, _ in sliding_window(test_file):
test_Y.append(model(x))
plt.plot(test_Y)
While this removes the noise of the .to(device) and .cpu() calls, it adds the overhead of having to save the model every time. I also still have to manually call model.eval(). The use case of running my model on an entire audio file is not for metrics but for visual inspection; as such I always only sample a few audio files. Running the model on a CPU instead of a GPU for inference thus isn't a problem.
Question
Is there a more elegant way to achieve the above?
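For what it's worth, one way to keep the loop device-agnostic without reloading the model (a sketch, not a Lightning API; sliding_window and test_file are from the pseudocode above) is to read the device off the model's own parameters:
import torch
model.eval()
device = next(model.parameters()).device  # wherever the trainer left the model
test_Y = []
with torch.no_grad():
    for x, _ in sliding_window(test_file):
        test_Y.append(model(x.to(device)).cpu())
plt.plot(test_Y)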
|
EarlyStopping restore_best_weights argument similar to Keras
|
[
"duplicate",
"feature",
"help wanted"
] |
🚀 Feature
EarlyStopping argument restore_best_weights restores model weights from the epoch with the best value of the monitored loss, similar to Keras. Would not need to run ModelCheckpoint every epoch.
Motivation
Good to have to avoid even more pytorch boilerplate
Alternatives
Calling ModelCheckpoint every epoch
|
Not auto add DistributedSampler for DDP training
|
[
"bug",
"help wanted"
] |
🐛 Bug
In 0.7.2, even if we don't set a sampler, pytorch_lightning will not add a DistributedSampler for us.
To Reproduce
The reason is that in pytorch, if we don't set a sampler, pytorch adds a default one for us.
In pytorch's dataloader.py:
if sampler is None: # give default samplers
if self._dataset_kind == _DatasetKind.Iterable:
# See NOTE [ Custom Samplers and IterableDataset ]
sampler = _InfiniteConstantSampler()
else: # map-style
if shuffle:
sampler = RandomSampler(dataset)
else:
sampler = SequentialSampler(dataset)
But in pytorch_lightning we check whether the sampler is None to decide whether to add one.
In data_loading.py, function auto_add_sampler:
no_sampler_added = dataloader.sampler is None
Because pytorch has already given us a default sampler, which is not None, pytorch_lightning will not automatically add the DistributedSampler.
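A sketch of a more robust check (an assumption about how it could be fixed, not the actual patch): treat PyTorch's injected RandomSampler/SequentialSampler as "no user sampler" instead of testing for None:
from torch.utils.data import RandomSampler, SequentialSampler
# sketch: the default samplers PyTorch injects count as "no custom sampler"
no_custom_sampler = isinstance(dataloader.sampler, (RandomSampler, SequentialSampler))
if no_custom_sampler:
    # safe for Lightning to replace the sampler with a DistributedSampler here
    ...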
|
What is the best practice to define a model?
|
[
"question"
] |
I want to try many architectures and it is not convenient to copy all the loading/train/val/test logic from file to file. Currently, I create a "container", where I define all the common logic, and then I inherit that class. It looks like this:
class ModelContainer(pl.LightningModule):
def __init__(self,hparams):
super().__init__()
def prepare_data(self):
....
def configure_optimizers(self):
....
def train_dataloader(self):
....
And in another file I create a model, where I redefine only a forward method:
class MyCoolNet(ModelContainer):
def __init__(self, hparams):
super().__init__(hparams)
def forward(self, x):
...
Am I doing it the right way?
|
Replacing the use of Mixins with composition
|
[
"feature",
"help wanted",
"discussion"
] |
🚀 Feature
Use composition instead of inheritance and mixins.
A typical way of using this would go something like the following:
In such a world, the trainer can be instantiated like so:
trainer_instance = Trainer(dataset_reader_instance, model_instance, training_loop_manager, callback_instances_list, platform_specific_kwargs ...)
Here, everything non-essential, including logging, early-stopping, etc., goes into the callbacks. Every callback, while registering itself with the trainer, will tell the trainer what attributes it requires on the trainer to function. Once all the callbacks have registered themselves, the trainer does a sanity check: warn if multiple callbacks want to access the same attribute, error out if two callbacks have asked for exclusive write access to an attribute, etc.
Additional benefit:
Now, the issue would be that the user has to instantiate these different instances in their final training script. We can automate that using a config manager like in AllenNLP and provide the user with a standard CLI train, test, predict commands. This way if a user wants to use their own dataset to train a model, they need to define a reader class for it in a separate file, modify one line in an existing config JSON or YAML file and use CLI command train new_config.yaml
Concern:
If the worry is that the user needs to learn about all the callbacks to even start training a model, then that can be addressed by providing a sample/default config file which includes the configuration for all important callbacks. So in essence, if the user wants to use the default training setup, they just need to copy the default training config file and change the model and dataset configuration.
Motivation
Code using mixins is hard to read, debug, extend and test if the mixins are not completely decoupled. Moreover, using composition coupled with an automatic way of instantiating classes will make for a very clean and fast user interface.
Pitch
Currently, the same as what is provided above in the Feature Proposal. Will update as/if things become more concrete.
Alternatives
Decouple the mixins, define their responsibilities clearly and have detailed "for developer" documentation describing these.
Additional context
Slack conversation link
|
Test metrics are not being reported to TensorBoard since 0.7.2
|
[
"bug",
"help wanted",
"priority: 0"
] |
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
https://colab.research.google.com/drive/1fM6xL140u9pU0vcmJf6qKzHwczjcMpcF
Code sample
Please see the colab above.
Expected behavior
The test metrics should be reported.
Environment
The Colab environment:
cuda:
GPU:
available: False
version: 10.1
packages:
numpy: 1.18.2
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.2
tensorboard: 2.2.0
tqdm: 4.38.0
system:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Wed Feb 19 05:26:34 PST 2020
Additional context
Regression from 0.7.1
|
Strange behaviour when using Module Lists
|
[
"question"
] |
Upon starting my training pipeline, I got the following summary of my model, where I've made heavy use of module lists:
| Name | Type | Params
0 | Shared_Layers | ModuleList | 8 K
1 | Shared_Layers.0 | Linear | 8 K
2 | Pooled_Layers | ModuleList | 0
3 | Decode_Layers | ModuleList | 73 K
4 | Decode_Layers.0 | Linear | 73 K
5 | Classification_Layer | Linear | 257
I'm quite concerned about what these .0's mean and why Pooled layers in particular lacks a .0 in there. Any assistance in knowing if this means there is a problem or not would be appreciated.
the init for the module is below
def __init__(self, config):
super(InfoMax, self).__init__()
self.config = config
self.af = Act_Dict[self.config["Network Architecture"]["Activation Function"]]
self.network_config = self.config["Network Architecture"]
self.bottleneck = self.network_config['Pooling_Layers'][-1]
self.max_dim = self.network_config['Shared_Layers'][-1]
self.start_dim = 32
self.Shared_Layers = nn.ModuleList(torch.nn.Linear(i, o, 1) for i, o in pairwise([self.start_dim, *self.network_config['Shared_Layers']]))
self.Pooled_Layers = nn.ModuleList(torch.nn.Linear(i, o, 1) for i, o in pairwise(self.network_config['Pooling_Layers']))
self.Decode_Layers = nn.ModuleList(torch.nn.Linear(i, o, 1) for i, o in pairwise([self.bottleneck+self.start_dim, *self.network_config['Discrimination_Layers']]))
self.Classification_Layer = torch.nn.Linear(self.Decode_Layers[-1].__dict__['out_features'], 1)
|
More granular callbacks
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Make callback system more granular
Motivation
I am currently implementing #765 (make progress bar into a callback) and I need additional callback methods to do this.
Pitch
introduce these new callback methods:
on_train_batch_start (currently named on_batch_start)
on_train_batch_end (currently named on_batch_end)
on_val_batch_start
on_val_batch_end
on_test_batch_start
on_test_batch_end
and make on_batch_start run on any of the above *_start (same for on_batch_end)
Further suggestions:
introduce on_train_epoch_start, on_val_epoch_start, on_test_epoch_start and corresponding *_end methods.
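As a rough illustration only (these hook names follow the pitch above and do not exist in the current Callback API), a validation-aware callback could then look like this:
from pytorch_lightning.callbacks import Callback

class ValAwareProgress(Callback):
    # sketch: consumes the proposed hooks; method names are taken from the pitch
    def on_val_batch_start(self, trainer, pl_module):
        print('validation batch starting')

    def on_val_batch_end(self, trainer, pl_module):
        print('validation batch finished, update the progress bar here')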
Alternatives
Keep as is, but I don't know how to implement the progress bar callback otherwise for validation/test updates.
|
Integrate toma for automatic batch sizing
|
[
"feature",
"help wanted"
] |
@BlackHC want to integrate into lightning?
https://github.com/BlackHC/toma
|
Disable log_save_interval
|
[
"feature",
"help wanted"
] |
What is the correct way (if there is one) to disable log_save_interval? I want to log information to WandB only at the end of each epoch, but cannot do so because logs are produced during validation steps. Even if I set log_save_interval to a very large number logs are still saved after the first validation step.
|
add support for prefetch_generator
|
[
"feature",
"help wanted",
"won't fix"
] |
π Feature
https://github.com/justheuristic/prefetch_generator
it can be used to speed up the dataloader.
Perhaps related to #1316 , but prefetch_generator is quite different from DALI
Motivation
Pitch
Alternatives
Additional context
|
ddp causes an error when my model class has a lambda function
|
[
"bug",
"help wanted"
] |
π Bug
To Reproduce
Steps to reproduce the behavior:
Add self.fn_error = lambda x: x to the model (e.g., your_model).
Run the trainer with ddp backend.
It causes an error like AttributeError: Can't pickle local object 'your_model.__init__.<locals>.<lambda>'.
Code sample
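A minimal sketch of the setup described in the steps above (class and attribute names are illustrative, not taken from the report):
import torch
import pytorch_lightning as pl

class YourModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)
        # the lambda stored on the module is what ddp's spawn/pickle step chokes on;
        # a module-level named function or a staticmethod would be picklable instead
        self.fn_error = lambda x: x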
Expected behavior
When I use dp backend, everything is ok.
Environment
cuda:
GPU:
TITAN RTX
TITAN RTX
available: True
version: 10.2
packages:
numpy: 1.17.2
pyTorch_debug: False
pyTorch_version: 1.6.0a0+b55dee9
pytorch-lightning: 0.7.4-dev
tensorboard: 2.2.0
tqdm: 4.45.0
system:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.4
version: #86~16.04.1-Ubuntu SMP Mon Jan 20 11:02:50 UTC 2020
Additional context
|
Leaked semaphores with DDP training
|
[
"bug",
"help wanted"
] |
I constantly get this warning when training on an AWS instance (8 GPUs, using DDP). It does not crash, but the training hangs for a few seconds before continuing.
/usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 3 leaked semaphores to clean up at shutdown
I can share my docker container if necessary, as it might be an issue with library versions.
|
EarlyStopping reinitializes to .wait=0 even with Trainer resume_from_checkpoint
|
[
"feature"
] |
π Bug
When using Trainer's resume_from_checkpoint with EarlyStopping callback, the callback's patience progress (i.e. self.wait) is loaded according to the checkpoint, but is getting reset by its on_train_start method, making the checkpoint restoration moot.
Also, the EarlyStopping's .best is not saved or restored at all, making its restoration further unusable.
To Reproduce
Steps to reproduce the behavior:
Install using pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
import pytorch_lightning as pl
class CoolSystem(pl.LightningModule):
def __init__(self):
super(CoolSystem, self).__init__()
# not the best model...
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
# REQUIRED
x, y = batch
y_hat = self.forward(x)
return {'loss': F.cross_entropy(y_hat, y)}
def validation_step(self, batch, batch_nb):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
return {'val_loss': F.cross_entropy(y_hat, y)}
def validation_epoch_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
return {'val_loss': avg_loss}
def configure_optimizers(self):
# REQUIRED
# can return multiple optimizers and learning_rate schedulers
return torch.optim.Adam(self.parameters(), lr=0.02)
def train_dataloader(self):
# REQUIRED
return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
def val_dataloader(self):
# OPTIONAL
return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping
model = CoolSystem()
checkpoint_callback = ModelCheckpoint(
filepath='./model_ckpt/whatever_the_name_is_gonna_be_auto_chosen',
save_top_k=-1,
verbose=True,
monitor='val_loss',
mode='auto'
)
class EarlyStoppingPrinting(EarlyStopping):
def on_train_start(self, trainer, pl_module):
print('EarlyStoppingPrinting before on_train_start')
print('self.wait = ', self.wait)
super().on_train_start(trainer, pl_module)
print('EarlyStoppingPrinting after on_train_start')
print('self.wait = ', self.wait)
def on_epoch_end(self, trainer, pl_module):
ret = super().on_epoch_end(trainer, pl_module)
if self.wait:
print('Early stopping patience: %d/%d' % (self.patience-self.wait, self.patience))
return ret
early_stopping = EarlyStoppingPrinting(
monitor='val_loss',
patience=5,
verbose=True,
mode='auto'
)
trainer = Trainer(max_nb_epochs=1000, train_percent_check=0.1,
checkpoint_callback=checkpoint_callback,
early_stop_callback=early_stopping)
trainer.fit(model)
And then use KeyboardInterrupt on the training when early_stopping.wait>0. Load the corresponding checkpoint (let's say it's model_ckpt/_ckpt_epoch_5.ckpt) and resume with
trainer = Trainer(max_nb_epochs=1000, train_percent_check=0.1,
checkpoint_callback=None,
resume_from_checkpoint = 'model_ckpt/_ckpt_epoch_5.ckpt',
early_stop_callback=early_stopping)
trainer.fit(model)
The early_stopping callback would print:
EarlyStoppingPrinting before on_train_start
self.wait = 2
EarlyStoppingPrinting after on_train_start
self.wait = 0
And for self.best, I mean it's not even saved; do I need to write the code?
Expected behavior
Checkpoint value of self.wait should be preserved rather than reset:
EarlyStoppingPrinting before on_train_start
self.wait = 2
EarlyStoppingPrinting after on_train_start
self.wait = 2
And self.best should be saved and loaded from the checkpoint.
Environment
This is ran on Google colab.
https://colab.research.google.com/drive/1ZdiFf6ksNpgsqOdSKM6lMO0yIhqpnTHD
Additional context
It is confusing, from reading the tutorials, which member variables of the model Lightning saves into the checkpoints -- it's implied it saves a wide range of things, but what is actually saved is very specific.
Also, confusingly, there are many ways to restore a checkpoint (the model's load_from_checkpoint method, the trainer's resume_from_checkpoint parameter, and using test_tube). These are not well documented (at least I didn't find this page before searching GitHub) and I have no idea if I used the right one.
|
Weight Initialization blocks learning
|
[
"bug",
"help wanted",
"won't fix"
] |
π Bug
When trying to perform a custom weight initialization, such as xavier_uniform_ or orthogonal_ from torch.nn.init, the weights are kept fixed and not updated during backpropagation.
To Reproduce
Steps to reproduce the behavior:
Create a simple model, even with just FC layers and ReLU
Initialize the weight with torch.nn.init.xavier_uniform_ or another method
Start Training
Weights are not getting updated
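A minimal sketch of the initialization pattern described in the steps above (layer names are illustrative):
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, 10)
        # custom weight initialization, as in step 2
        nn.init.xavier_uniform_(self.fc1.weight)
        nn.init.orthogonal_(self.fc2.weight)

    def forward(self, x):
        return self.fc2(nn.functional.relu(self.fc1(x)))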
Expected behavior
I'd expect the initialization not to block the training process
Environment
packages:
numpy: 1.17.2
pyTorch_debug: False
pyTorch_version: 1.3.0
pytorch-lightning: 0.7.3
tensorboard: 2.0.0
tqdm: 4.45.0
system:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #163-Ubuntu SMP Mon Sep 24 13:14:43 UTC 2018
|
transfering of data to gpu with custom data type
|
[
"question",
"won't fix"
] |
β Questions and Help
What is your question?
The batch that should be passed to the train/val step consists of a list of torch_geometric Data objects (https://pytorch-geometric.readthedocs.io/en/latest/modules/data.html). The batch is not transferred to the corresponding GPUs. I assumed this would happen in the pytorch_lightning/trainer/distrib_parts.py transfer methods; however, the code never reaches these methods. Where do I have to change the Lightning code to allow for custom data types?
Code
What have you tried?
If I change the batch to a list of tensors, they are correctly transferred, so I assume the rest of my code works fine. Transferring the data by adding the corresponding .to(device) calls in the training step also works.
What's your environment?
Ubuntu 18.04
pip
Version '0.7.1'
dgx2 cluster, no slurm
|
Mixing hparams and arguments in LightningModule.__init__() crashes load_from_checkpoint()
|
[
"bug",
"help wanted"
] |
π Bug
Right now, if you initialize a Lightning Module with a mixture of a Namespace (hparams) as well as additional arguments (say to a Dataset), load_from_checkpoint can't recover.
To Reproduce
Create a LightningModule as follows:
class Model(pl.LightningModule):
    def __init__(self, hparams, train_dataset, val_dataset):
        super().__init__()
        self.hparams = hparams
        self.train_dset, self.val_dset = train_dataset, val_dataset
        ...
Run training, then try to restore from checkpoint, via:
nn = Model.load_from_checkpoint(<PATH>, train_dataset=None, val_dataset=None)
Expected behavior
Ideally, you'd just be able to pass in the additional arguments (as above) and everything would work.
|
Load model from checkpoint when model is not instantiated in __init__
|
[
"feature",
"help wanted",
"good first issue",
"won't fix",
"discussion"
] |
π Feature
Be able to load a model from a checkpoint path when the model is not instantiated in init
Motivation
Imagine I can only instantiate the model after looking at the train dataset. Ex: for the creation of emb layers you need the number of categories in categorical features.
Pitch
I would like an option where when loading a model from a checkpoint I could tell it I need to run prepare data first so I can instantiate the model for instance.
Alternatives
There are probably better alternatives to this to the option presented in pitch.
|
Customizing hparams after loading checkpoint
|
[
"question"
] |
β Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
I'm wondering what the best practice for loading a model with different hparams than what is stored in the checkpoint?
I realize I could just load the model and set them afterwards e.g.:
model = model.load_from_checkpoint(args.checkpoint_file) # Load model
# Set hparams etc..
model.hparams.arg1 = 0.0
model.hparams.arg2 = 1.0
But the problem is that my model init function depends on the hparams arg1 and arg2 so they're set too late.
I could also do
checkpoint = torch.load(args.checkpoint_file)
checkpoint['hparams']['arg1'] = 0.0
checkpoint['hparams']['arg2'] = 1.0
model = model._load_state_dict(checkpoint)
The problem here is that I'm using the protected function _load_state_dict. Is there another way of solving this that I've missed? Or could we consider making _load_state_dict public?
|
on_before_zero_grad hook
|
[
"docs"
] |
π Documentation
The documentation reports the method on_before_zero_grad. Strangely, this method is not shown in the hooks lifecycle documentation. Moreover, when it is defined in a LightningModule, it is not called.
Hence the question: is it a discontinued hook? If so, we could remove its mention from the docs.
Thanks.
|
wandb logger 'global_step' affects other logger
|
[
"bug",
"help wanted",
"logger"
] |
π Bug
The wandb logger adds a 'global_step' entry to the metric dict, which then appears in all other loggers (e.g. TensorBoard). Only the wandb logger adds 'global_step' to the metrics, and I think it is not necessary. Another side effect is that 'global_step' is also added to empty dicts, which are then logged, resulting in strange graphs.
I also wrote a simple logger class to print out metrics. I got this output:
Step 0
{'global_step': 0}
Step 10
{'global_step': 10}
[...]
Step 190
{'global_step': 190}
Step 200
{'global_step': 200}
Step 0
{'val/mse': 0.01713273860514164, 'train/mse': 0.04259789362549782, 'global_step': 0}
Step 207
{'global_step': 207}
Step 217
{'global_step': 217}
[...]
Step 397
{'global_step': 397}
Step 407
{'global_step': 407}
Step 1
{'val/mse': 0.013123581185936928, 'train/mse': 0.01449404377490282, 'global_step': 1}
Step 414
{'global_step': 414}
Step 424
{'global_step': 424}
...
Step 604
{'global_step': 604}
Step 614
{'global_step': 614}
Step 2
{'val/mse': 0.012394818477332592, 'train/mse': 0.012575697153806686, 'global_step': 2}
[...]
Step 5
{'val/mse': 0.012411396019160748, 'train/mse': 0.011899641714990139, 'global_step': 5}
Step 1242
{'global_step': 1242}
Step 1252
{'global_step': 1252}
[...]
Step 1432
{'global_step': 1432}
Step 1442
{'global_step': 1442}
Step 6
{'val/mse': 0.01244258601218462, 'train/mse': 0.011944737285375595, 'global_step': 6}
Step 1449
{'global_step': 1449}
Step 1459
{'global_step': 1459}
[...]
Step 1639
{'global_step': 1639}
Step 1649
{'global_step': 1649}
Step 7
{'val/mse': 0.01261985208839178, 'train/mse': 0.011924241669476032, 'global_step': 7}
Step 1656
{'global_step': 1656}
Step 1666
{'global_step': 1666}
[...]
Step 1846
{'global_step': 1846}
Step 1856
{'global_step': 1856}
Step 8
{'val/mse': 0.012863481417298317, 'train/mse': 0.011850016191601753, 'global_step': 8}
Step 1863
{'global_step': 1863}
Step 1873
[...]
Step 2053
{'global_step': 2053}
Step 2063
{'global_step': 2063}
Also notice: I set max_epochs to 10, so I expected 10 measurements. The last one is missing, but this could be handled in another issue.
To Reproduce
Steps to reproduce the behavior:
Use training_epoch_end and validation_epoch_end to log metrics like {'log': {'loss': loss}} (see code below)
Run training with wandb logger and one more logger of your choice.
See global_step graphs.
Code sample
Important LightningModule Methods:
def training_step(self, batch, batch_idx):
# calculate actual model prediction given batch
# and calculate loss
x, y = batch
y_hat = self(x)
# print out current loss on training every n-th iteration
loss = F.mse_loss(y_hat, y)
return {
"loss": loss
}
def training_epoch_end(self, outputs):
loss_mean = torch.stack([x["loss"] for x in outputs]).mean().item()
return {
"log": {
"train/mse": loss_mean,
"step": self.current_epoch
}
}
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
return {'val_loss': F.mse_loss(y_hat, y)}
def validation_epoch_end(self, outputs):
val_loss_mean = torch.stack([x["val_loss"] for x in outputs]).mean().item()
return {
"val_loss": val_loss_mean,
"log": {
"val/mse": val_loss_mean,
"step": self.current_epoch
}
}
Training:
clbk_terminal = TerminalCallback()
checkpoint = ModelCheckpoint(filepath="ckpts/" + name + "{_val_loss:.5f}_{epoch:03d}", prefix="BasicNN_", monitor="val_loss", verbose=False, save_top_k=3, save_weights_only=True)
earlystopping = EarlyStopping(monitor="val_loss", patience=25, verbose=True)
loggers = [
WandbLogger(project="nwp-energy-load", name=name, log_model=True),
TensorBoardLogger(save_dir="tb_logs", name=name, version=0),
MyLogger() # only prints metric; can also be ignored
]
trainer = Trainer(gpus=-1, max_epochs=10, progress_bar_refresh_rate=0, logger=loggers, log_save_interval=1, row_log_interval=10,
callbacks=[], early_stop_callback=earlystopping, checkpoint_callback=checkpoint)
Expected behavior
Is 'global_step' needed in wandb logger? If so, it should not affect other loggers. Also if there is nothing to log (e.g. in training_step) the logger should log nothing.
Environment
Linux Arch
Python 3.8.2
Pytorch 1.4.0
Pytorch_Lightning 0.7.3
|
Incorrect MisconfigurationException for models without dataloaders.
|
[
"bug",
"help wanted"
] |
π Bug
I have a model that does not have train, val and test dataloaders defined internally (it's a production system and it doesn't really make sense to have dataloaders). If I try to run fit() on it by passing in train_dataloader and val_dataloaders, it raises
pytorch_lightning.utilities.exceptions.MisconfigurationException: You have defined `test_step()`, but have not passed in a `test_dataloader()`.
This means that it's now impossible to train a model without dataloaders defined, as there's no way of passing in test dataloaders. I believe this was caused by this PR: #1434. This is happening at the tip of master.
To Reproduce
Steps to reproduce the behavior:
Checkout the master branch
Define a model without data loaders
Run fit() with train_dataloader and val_dataloaders
See the exception
Code sample
# MyModel doesn't have train_dataloader, val_dataloader or test_dataloader
model = MyModel()
trainer = pl.Trainer()
trainer.fit(model, train_dataloader=train, val_dataloaders=val) # exception raised here
trainer.test(test_dataloaders=test)
Expected behavior
There should be no exception raised during fit().
Environment
cuda:
GPU:
available: False
version: None
packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.4-dev
tensorboard: 2.1.1
tqdm: 4.43.0
system:
OS: Darwin
architecture:
64bit
processor: i386
python: 3.7.3
version: Darwin Kernel Version 18.7.0: Mon Feb 10 21:08:45 PST 2020; root:xnu-4903.278.28~1/RELEASE_X86_64
Additional context
|
Rank Zero Property Mixin
|
[
"feature",
"help wanted"
] |
π Feature
This is an alternative to the change proposed in #1408 in case the global variable approach doesn't work.
I propose to add a mixin class for the rank property, e.g., named RankZeroPropertyMixin
Motivation
There are some parts of the code base that use a rank property in combination with the @rank_zero_only decorator. Refactoring this into a mixin would avoid code duplication and it would make it straightforward to add to a new callback for example.
PR #1408 already solves the problem of code duplication.
Pitch
class RankZeroPropertyMixin:
def __init__(self):
self._rank = 0
@property
def rank(self) -> int:
return self._rank
@rank.setter
def rank(self, value: int) -> None:
self._rank = value
In the Trainer init or in the distributed parts, we would check each callback and logger for the rank property and set it to the appropriate value.
Then, when we add a new callback:
class NewFancyCallback(RankZeroPropertyMixin, Callback):
def __init__(self):
...
@rank_zero_only
def on_train_start(self, trainer, pl_module):
print('only on rank 0')
This does not just apply to callbacks of course, could be added to Logger, etc.
Alternatives
Leave as is or go with #1408. In the future, Lightning will probably add more callbacks and features that are restricted to rank 0, so it would lead to code duplication.
|
Feasibility of multi-task training in lightning with dynamic model size
|
[
"question"
] |
Questions and Help
Hello all. I am interested in using lightning for my research project. However I'm having trouble assessing the feasibility of my architecture in lightning due to some particularities.
The typical train loop that lightning abstracts looks like this:
for epoch in range(epochs):
...train code...
However my structure looks something more like this.
for task_number in range(number_of_tasks):
dataloader = DataLoader(task=task_number) # The dataloader is task-dependent.
if task_number == 0:
for epoch in range(epochs):
...regular train code...
else:
for epoch in range(epochs):
...selective retraining... # This uses pytorch hooks to only train certain nodes by setting grads to 0
model = split(model) # Logic that may add new nodes to the model (size change), also does training of newly added nodes
if loss > loss_threshold:
model = dynamic_expansion(model) # More logic that will do a size change and training
As you can see, there are some challenges that don't easily translate to Lightning: the concept of tasks, task-dependent loaders (for example, the first task is a subset of MNIST, the second a different subset), and more complex task-dependent logic which may change the model size and require newly added nodes to be trained.
I'm interested in using Lightning, but I'm having trouble seeing how this architecture could fit.
Thank you.
|
0.7.3 breaks reusable dataloaders in DDP
|
[
"bug",
"help wanted",
"priority: 0"
] |
π Bug
0.7.3 breaks reusable dataloaders in DDP
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 345, in ddp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 864, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 296, in train
self.reset_train_dataloader(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/data_loading.py", line 128, in reset_train_dataloader
self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/data_loading.py", line 112, in auto_add_sampler
dataloader = type(dataloader)(**dl_args)
File "../main/dataset.py", line 15, in __init__
super().__init__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'iterator'
Code sample
class _RepeatSampler(object):
def __init__(self, sampler):
self.sampler = sampler
def __iter__(self):
while True:
yield from iter(self.sampler)
class FastDataLoader(torch.utils.data.dataloader.DataLoader):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
self.iterator = super().__iter__()
def __len__(self):
return len(self.batch_sampler.sampler)
def __iter__(self):
for i in range(len(self)):
yield next(self.iterator)
replace Dataloader with FastDataLoader in lightning
(this snippet is from pytorch/pytorch#15849)
Expected behavior
Dataloaders initialize correctly and are reused between train/val/epochs (works as expected in 0.7.1)
Probable Cause
#1425
|
Batch is not split in 'dp' mode when dataloader output is not a tensor
|
[
"bug",
"help wanted"
] |
My dataloader is returning a list of lists (for multi-label classification) for labels and a tensor of images for each batch. When I'm using DataParallel mode, labels are not getting split into "sub-batches" and I'm getting all the labels on each GPU. Is there a way to implement this splitting also for non-tensors?
class CustomDataset(Dataset):
...
def collate(self, batch):
images, labels = list(zip(*batch))
return torch.stack(images), [ label_set for label_set in labels ]
class LitModel(pl.LightningModule):
...
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
print(f'Val loss {y.shape}, {y_hat.shape}' )
This example will print y with the length of the full batch, and y_hat with the same length as the x tensor (which is smaller here). The x tensor got split correctly, while y didn't.
Is this an issue with Lightning or with the DataParallel module?
|
Memory (CPU and GPU) leaks during the 1st epoch
|
[
"bug",
"help wanted"
] |
π Bug
Hello.
This memory leak occurs during the first epoch. If one has a long epoch time (mine was > 10 days), an OOM error will eventually occur. Interestingly, in precision=16 mode it leaks on both the GPU and the CPU. If we switch amp optimization off (precision=32), the leak occurs only on the CPU.
Also, I checked the number of tensors tracked by the garbage collector. It appeared to increase linearly during the first epoch, and then (when the 2nd epoch starts) it falls back to the initial value and begins increasing again.
Let me provide the plots:
Experiment 1: amp_level='O2', precision=16
The number of tensors, tracked by garbage collector
GPU (the 2nd in my case) usage, tracked by pytorch-lightning
CPU memory usage by the process (bytes)
Experiment 2: amp_level=None, precision=None
The number of tensors, tracked by garbage collector
GPU (the 2nd in my case) usage, tracked by pytorch-lightning
CPU memory usage by the process (bytes)
As you can see, both cases have a CPU leak. The "amp"-case also has a GPU leak.
Also, it's clear that this leaky behavior stops when the 2nd epoch starts.
On these plots, the 2nd epoch starts on the 2nd "saw tooth" of the "Num-of-tensors" plot.
Another observation: the number of tracked tensors increases by 1001 per step. This is my training step method:
def training_step(self, batch, batch_idx):
losses = self.forward(batch)
num_of_tensors = get_num_of_tensors()
log = {'Num-of-tensors': num_of_tensors, 'Cpu-mem-usg': get_cpu_mem()}
for i, loss in enumerate(losses):
log[f'loss{i}'] = loss
print(num_of_tensors)
return {'loss': losses[0], 'log': log}
Here I return exactly 1001 tensors: one for the loss and 1000 for the log.
In my real experiments I had only 3 tensors. It took ~2-3 days to get OOM. But in the current example (see To Reproduce) it will crash much faster.
To Reproduce
Steps to reproduce the behavior:
Execute Code sample (this script has no arguments, so change needed values manually in script).
Go to the tensorboard to check plots.
Code sample
https://gist.github.com/alexeykarnachev/47de06b93a717ab0664eded42ed2826a
Expected behavior
The number of tensors, GPU and CPU memory does not increase during the training.
Environment
PyTorch version: 1.4.0
OS: Ubuntu 16.04.6 LTS
Python version: 3.7
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] pytorch-lightning==0.7.3
[pip] torch==1.4.0
[pip] torchvision==0.5.0
Additional context
Sorry for the messy flow of information, but I don't know how to structure it more clearly.
|
save_function() not set with save_model callback?
|
[
"question",
"won't fix"
] |
This is the callback in trainer()
trainer = pl.Trainer(
callbacks=[ModelCheckpoint(monitor='val_loss',
filepath=os.path.join(hparams.default_root_dir,
'{epoch}-{val_loss:.2f}-{test_acc:.2f}'), verbose=True) ],
But the app crashes in the first epoch with the following error:
Exception has occurred: ValueError
.save_function() not set
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 133, in _save_model
raise ValueError(".save_function() not set")
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 240, in _do_check_save
self._save_model(filepath)
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 208, in on_validation_end
self._do_check_save(filepath, current, epoch)
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_hook.py", line 63, in on_validation_end
callback.on_validation_end(self, self.get_model())
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 792, in call_checkpoint_callback
self.on_validation_end()
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 477, in run_training_epoch
self.call_checkpoint_callback()
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 363, in train
self.run_training_epoch()
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 865, in run_pretrain_routine
self.train()
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 477, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 705, in fit
self.single_gpu_train(model)
File "/home/AAA/PycharmProjects/DL2020LiorWolf/train.py", line 110, in main_train
trainer.fit(model)
File "/home/AAA/PycharmProjects/DL2020LiorWolf/train.py", line 40, in main
main_train(model_class_pointer, hyperparams, logger)
File "/home/AAA/PycharmProjects/DL2020LiorWolf/train.py", line 118, in <module>
main()
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/AAA/anaconda3/envs/BBB/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
From the docs, the model_checkpoint module seems to be plug-and-play; do I need to implement something else?
Actually, going through the source code, it seems that save_function is never set.
|
change method signature for better pycharm auto complete
|
[
"feature",
"help wanted",
"won't fix"
] |
π Feature
Improve the PyCharm auto-complete experience.
Motivation
The auto complete result when typing def training_step
def training_step(self, *args, **kwargs):
I want it to be
def training_step(self, batch, batch_idx):
Many Thanks
|
Track_grad_norm only tracks the parameters of the last optimizer defined
|
[
"bug",
"feature",
"help wanted",
"priority: 1"
] |
π Bug
When you enable track_grad_norm in Trainer, you expect it to track the gradients of all the parameters defined in your LightningModule. It seems like it only tracks the parameters for the last optimizer defined in configure_optimizers().
To Reproduce
Steps to reproduce the behavior:
Create a pytorch lightning module with more than one optimizer
Enable logging to tensorboard and track_grad_norm in the trainer
Train the model and observe that only gradients for the last optimizer are tracked
Delete all tensorboard data and restart tensorboard
Train a model with the optimizers returned in the different order. Observe which gradients are tracked again
Code sample
import pytorch_lightning as pl
from torch.utils.data import TensorDataset, DataLoader
from pytorch_lightning.loggers.tensorboard import TensorBoardLogger
from torch.optim import SGD
import torch.nn as nn
import torch
class MWENet(pl.LightningModule):
def __init__(self):
super(MWENet, self).__init__()
self.first = nn.Conv2d(1, 1, 3)
self.second = nn.Conv2d(1, 1, 3)
self.loss = nn.L1Loss()
def train_dataloader(self):
xs, ys = torch.zeros(16, 1, 10, 10), torch.ones(16 ,1, 6, 6)
ds = TensorDataset(xs, ys)
return DataLoader(ds)
def forward(self, xs):
out = self.first(xs)
out = self.second(out)
return out
def configure_optimizers(self):
first = SGD(self.first.parameters(), lr=0.01)
second = SGD(self.second.parameters(), lr=0.01)
return [second, first]
def training_step(self, batch, batch_idx, optimizer_idx):
xs, ys = batch
out = self.forward(xs)
return {'loss': self.loss(out, ys)}
net = MWENet()
logger = TensorBoardLogger('somedir', name='testing')
trainer = pl.Trainer(track_grad_norm=2,
logger = logger)
trainer.fit(net)
Expected behavior
I expect that a pytorch module will report the gradient of all parameters when track_grad_norm is enabled.
Environment
cuda:
GPU:
GeForce RTX 2080 Ti
available: True
version: 10.1
packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.3
tensorboard: 2.2.1
tqdm: 4.45.0
system:
OS: Windows
architecture:
64bit
WindowsPE
processor: Intel64 Family 6 Model 158 Stepping 12, GenuineIntel
python: 3.7.7
version: 10.0.18362
|
Getting Started with an Existing Pipeline
|
[
"question",
"won't fix"
] |
I have an existing training pipeline based on PyTorch, but I am interested in applying pytorch_lightning to my workflow. I am also trying to do this in a way that is as least disruptive as possible to my existing code. I noticed there is no way to wrap existing torch.nn.Module models with pytorch_lightning.LightningModule, so I figured I should ask my questions here before spending hours debugging code.
Is this an appropriate way to structure a potential model?
if importlib.util.find_spec('pytorch_lightning') is not None:
import pytorch_lightning as pl
backend = pl.LightningModule
else:
import torch
backend = torch.nn.Module
class Model(backend):
def __init__(self, **kwargs):
pass
def forward(self, **kwargs):
pass
This would allow the model to work with either base class depending on the environment. However, in the examples you include a bunch of different functions in the model definition (dataloaders, optimizers, training steps, etc.) that are not used in the normal PyTorch model structure. Are those additional functions necessary for the model to work and/or needed for optimal efficiency? Or can training be done the same way as normal?
|
load checkpoint from URL
|
[
"feature",
"help wanted",
"good first issue",
"let's do it!"
] |
Let's enable loading weights from a URL directly
Option 1:
Automate it with our current API
Trainer.load_from_checkpoint('http://')
Option 2:
Have a separate method
Trainer.load_from_checkpoint_at_url('http://')
Resources
We can use this under the hood:
(https://pytorch.org/docs/stable/hub.html#torch.hub.load_state_dict_from_url)
Any thoughts on which one is better?
@PyTorchLightning/core-contributors
|
`num_tpu_cores=8` does not work on kaggle
|
[
"bug",
"help wanted",
"priority: 0"
] |
π Bug
When I try to train a model on Kaggle TPUs with num_tpu_cores set to 8, I receive the error Exception: process 2 terminated with exit code 1. It would be great if this worked on Kaggle.
To Reproduce
Steps to reproduce the behavior:
Run this notebook:
https://www.kaggle.com/lezwon/pytorch-on-tpu-with-pytorch-lightning
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-9-9251330963d1> in <module>
3 # most basic trainer, uses good defaults (1 TPU)
4 trainer = pl.Trainer(num_tpu_cores=8)
----> 5 trainer.fit(mnist_model)
/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, test_dataloaders)
714
715 # train
--> 716 xmp.spawn(self.tpu_train, args=(model,), nprocs=self.num_tpu_cores, start_method=start_method)
717
718 # load weights if not interrupted
/opt/conda/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py in spawn(fn, args, nprocs, join, daemon, start_method)
180 join=join,
181 daemon=daemon,
--> 182 start_method=start_method)
/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)
156
157 # Loop on join until it returns True or raises an exception.
--> 158 while not context.join():
159 pass
160
/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py in join(self, timeout)
111 raise Exception(
112 "process %d terminated with exit code %d" %
--> 113 (error_index, exitcode)
114 )
115
Exception: process 3 terminated with exit code 1
Code sample
trainer = pl.Trainer(num_tpu_cores=8, precision=16)
Expected behavior
Run the model utilizing all 8 TPU cores.
Environment
cuda:
GPU:
available: False
version: None
packages:
numpy: 1.18.2
pyTorch_debug: False
pyTorch_version: 1.6.0a0+30e7055
pytorch-lightning: 0.7.3
tensorboard: 2.1.1
tqdm: 4.42.0
system:
OS: Linux
architecture:
64bit
processor:
python: 3.6.6
version: #1 SMP Sat Apr 4 00:12:45 PDT 2020
|
ValueError: host not found: Name or service not known in _env_rendezvous_handler
|
[
"bug",
"help wanted"
] |
π Bug
To Reproduce
Steps to reproduce the behavior:
Go to pl_examples/basic_examples/
modify the script to fit environment
# SLURM SUBMIT SCRIPT
#SBATCH --account=gpu-s2-intelperf-0
#SBATCH --partition=gpu-s2-core-0
#SBATCH --nodes=2
#SBATCH --gres=gpu:2
#SBATCH --time=0-02:00:00
#SBATCH --ntasks-per-node=2
# activate conda env
source activate $1
export NCCL_DEBUG=INFO
export PYTHONFAULTHANDLER=1
export NCCL_SOCKET_IFNAME=^ib0,lo
# ib0 is looked up from ifconfig
module load cuda10.1
# run script from above
srun python multi_node_ddp_demo.py
See error
Traceback (most recent call last):
File "multi_node_ddp_demo.py", line 51, in <module>
main(hyperparams)
File "multi_node_ddp_demo.py", line 37, in main
trainer.fit(model)
File "/data/gpfs/home/hsinpaic/anaconda3/envs/pose/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 684, in fit
self.ddp_train(task, model)
File "/data/gpfs/home/hsinpaic/anaconda3/envs/pose/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 308, in ddp_train
model.init_ddp_connection(self.proc_rank, self.world_size)
File "/data/gpfs/home/hsinpaic/anaconda3/envs/pose/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 951, in init_ddp_connection
torch_distrib.init_process_group('nccl', rank=proc_rank, world_size=world_size)
File "/data/gpfs/home/hsinpaic/anaconda3/envs/pose/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 397, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "/data/gpfs/home/hsinpaic/anaconda3/envs/pose/lib/python3.6/site-packages/torch/distributed/rendezvous.py", line 168, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon)
ValueError: host not found: Name or service not known
srun: error: gpu-1: task 2: Exited with exit code 1
srun: error: gpu-1: task 3: Exited with exit code 1
srun: error: gpu-0: task 0: Exited with exit code 1
srun: error: gpu-0: task 1: Exited with exit code 1
Code sample
Expected behavior
I was following the README in the basic_examples folder; the single-node example works, but this error appears with multiple nodes.
Environment
CUDA:
- GPU:
- Tesla P100-SXM2-16GB
- Tesla P100-SXM2-16GB
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.4.0
- pytorch-lightning: 0.7.3
- tensorboard: 1.14.0
- tqdm: 4.45.0
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.6.10
- version: #1 SMP Mon Jul 29 17:46:05 UTC 2019
* CUDA:
- GPU:
- Tesla P100-SXM2-16GB
- Tesla P100-SXM2-16GB
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.4.0
- pytorch-lightning: 0.7.3
- tensorboard: 1.14.0
- tqdm: 4.45.0
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.6.10
- version: #1 SMP Mon Jul 29 17:46:05 UTC 2019```
|
Metric aggregation is broken for LoggerCollection
|
[
"bug",
"help wanted",
"priority: 0"
] |
π Bug
After the changes in #1278, it is no longer possible to log testing metrics after training while using several loggers.
To Reproduce
Say we want to run an MNIST example and also want to add a change: log testing metrics after training. For that we define a Callback
class TestCallback(Callback):
def on_train_end(self, trainer, pl_module):
# note that it would crash if you don't pass the `pl_module`
trainer.test(pl_module)
and pass it to trainer callbacks argument.
We would also like to use several loggers to track all metrics, say MLFlowLogger and TensorBoardLogger. For this we create instances of these loggers and pass them into Trainer in a list.
Expected behavior
Testing metrics should be logged, but they aren't, as there's no final aggregation when our logger is a LoggerCollection.
Additional context
In my opinion, the logic in agg_and_log_metrics and _finalize_agg_metrics is hard to follow, so I'd be happy if the user could choose the plain old log_metrics, which worked nicely.
|
'bad value(s) in fds_to_keep' error in DDP mode
|
[
"bug",
"help wanted"
] |
π Bug
To Reproduce
If I put spectral_norm in the model, it outputs the error message "bad value(s) in fds_to_keep".
Even the example provided by pytorch-lightning has this kind of issue.
Steps to reproduce the behavior:
Change the example model lightning_template.py to:
self.c_d1 = nn.Linear(in_features=self.hparams.in_features,
                      out_features=self.hparams.hidden_dim)
self.c_d1 = spectral_norm(self.c_d1)
self.c_d1_bn = nn.BatchNorm1d(self.hparams.hidden_dim)
self.c_d1_drop = nn.Dropout(self.hparams.drop_prob)
self.c_d2 = nn.Linear(in_features=self.hparams.hidden_dim,
                      out_features=self.hparams.out_features)
self.c_d2 = spectral_norm(self.c_d2)
run the example with
python3 gpu_template.py --gpus 2 --distributed_backend ddp
we will get the error message:
Traceback (most recent call last):
  File "gpu_template.py", line 80, in <module>
    main(hyperparams)
  File "gpu_template.py", line 41, in main
    trainer.fit(model)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 692, in fit
    mp.spawn(self.ddp_train, nprocs=self.num_gpus, args=(model,))
  File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 162, in spawn
    process.start()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 105, in start
    self._popen = self._Popen(self)
  File "/usr/lib/python3.6/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 59, in _launch
    cmd, self._fds)
  File "/usr/lib/python3.6/multiprocessing/util.py", line 417, in spawnv_passfds
    False, False, None)
ValueError: bad value(s) in fds_to_keep
Environment
CUDA:
GPU:
Tesla V100-SXM2-32GB
Tesla V100-SXM2-32GB
Tesla V100-SXM2-32GB
Tesla V100-SXM2-32GB
Tesla V100-SXM2-32GB
Tesla V100-SXM2-32GB
Tesla V100-SXM2-32GB
Tesla V100-SXM2-32GB
available: True
version: 10.1
Packages:
numpy: 1.18.2
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.3
tensorboard: 2.2.0
tqdm: 4.45.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.6.9
version: #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019
|
"example_input_array" depends on ordering of modules
|
[
"bug",
"help wanted"
] |
π Bug
To Reproduce
Go to the pl_examples/basic_examples/LightningTemplateModel.py
Change the order of modules in the __build_model method from
def __build_model(self):
self.c_d1 = nn.Linear(in_features=self.hparams.in_features,
out_features=self.hparams.hidden_dim)
self.c_d1_bn = nn.BatchNorm1d(self.hparams.hidden_dim)
self.c_d1_drop = nn.Dropout(self.hparams.drop_prob)
self.c_d2 = nn.Linear(in_features=self.hparams.hidden_dim,
out_features=self.hparams.out_features)
to:
def __build_model(self):
self.c_d1 = nn.Linear(in_features=self.hparams.in_features,
out_features=self.hparams.hidden_dim)
# move the layer definition up here
self.c_d2 = nn.Linear(in_features=self.hparams.hidden_dim,
out_features=self.hparams.out_features)
self.c_d1_bn = nn.BatchNorm1d(self.hparams.hidden_dim)
self.c_d1_drop = nn.Dropout(self.hparams.drop_prob)
We get an error message because input size does not match (for this order).
Expected behavior
Input output sizes are computed in order of execution, not definition. This is important because PyTorch graphs are dynamically built on each forward, so order of execution of each layer is not known beforehand.
Proposed Fix
I propose to install a forward hook on each submodule and compute the sizes that way.
I have started to validate the fix already and would like to submit a PR very soon if you agree.
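For reference, a rough sketch of the hook-based idea described above (an illustration only, not the actual Lightning implementation; names are made up):
import torch
from torch import nn

def sizes_in_execution_order(model: nn.Module, example_input: torch.Tensor):
    recorded = []
    handles = []

    def hook(module, inputs, output):
        # forward hooks fire in execution order, regardless of definition order
        recorded.append((module.__class__.__name__,
                         tuple(inputs[0].shape), tuple(output.shape)))

    for child in model.children():
        handles.append(child.register_forward_hook(hook))
    with torch.no_grad():
        model(example_input)
    for handle in handles:
        handle.remove()
    return recorded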
Additional Context
It could be confusing to a user to see this error, they might think something is wrong with their code.
|
Compatibilty with PyTorch Geometric
|
[
"question"
] |
β Questions and Help
What is your question?
I'm currently using PyTorch Geometric to solve a classifying task for 3D objects. I was hoping that I could rework this small PyTorch Geometric example over to PyTorch Lightning, but I encounter the following data type-related error when reaching the dataloader part:
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'torch_geometric.data.data.Data'>.
As far as I understand PyTorch Geometric Data simply stores PyTorch tensors in a specific structure.
I have two questions:
Is PyTorch Geometric supported by PyTorch Lightning?
If not, does anyone have a tip on how to either convert the data type to work correctly with Lightning, or maybe even a small working example from which I could learn?
Code
My example code looks currently like this:
import torch
from torch import nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms
from torch.nn import functional as F
from torch_geometric.datasets import FAUST
import torch_geometric.transforms as T
from torch_geometric.nn import SplineConv
import os
import os.path as osp
class Net(pl.LightningModule):
def __init__(self):
super(Net, self).__init__()
self.conv1 = SplineConv(1, 32, dim=3, kernel_size=5, aggr="add")
self.conv2 = SplineConv(32, 64, dim=3, kernel_size=5, aggr="add")
self.conv3 = SplineConv(64, 64, dim=3, kernel_size=5, aggr="add")
self.conv4 = SplineConv(64, 64, dim=3, kernel_size=5, aggr="add")
self.conv5 = SplineConv(64, 64, dim=3, kernel_size=5, aggr="add")
self.conv6 = SplineConv(64, 64, dim=3, kernel_size=5, aggr="add")
self.lin1 = torch.nn.Linear(64, 256)
self.lin2 = torch.nn.Linear(256, 6890)
def forward(self, data):
x, edge_index, pseudo = data.x, data.edge_index, data.edge_attr
x = F.elu(self.conv1(x, edge_index, pseudo))
x = F.elu(self.conv2(x, edge_index, pseudo))
x = F.elu(self.conv3(x, edge_index, pseudo))
x = F.elu(self.conv4(x, edge_index, pseudo))
x = F.elu(self.conv5(x, edge_index, pseudo))
x = F.elu(self.conv6(x, edge_index, pseudo))
x = F.elu(self.lin1(x))
x = F.dropout(x, training=self.training)
x = self.lin2(x)
return F.log_softmax(x, dim=1)
def cross_entropy_loss(self, logits, labels):
return F.nll_loss(logits, labels)
def training_step(self, train_batch, batch_idx):
x, y = train_batch
logits = self.forward(x)
loss = self.cross_entropy_loss(logits, y)
logs = {"train_loss": loss}
return {"loss": loss, "log": logs}
def validation_step(self, val_batch, batch_idx):
x, y = val_batch
logits = self.forward(x)
loss = self.cross_entropy_loss(logits, y)
return {"val_loss": loss}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
tensorboard_logs = {"val_loss": avg_loss}
return {"avg_val_loss": avg_loss, "log": tensorboard_logs}
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
return optimizer
def prepare_data(self):
path = osp.join(osp.dirname(osp.realpath(__file__)), "..", "data", "FAUST")
self.pre_transform = T.Compose([T.FaceToEdge(), T.Constant(value=1)])
self.train_dataset = FAUST(path, True, T.Cartesian(), self.pre_transform)
self.test_dataset = FAUST(path, False, T.Cartesian(), self.pre_transform)
def train_dataloader(self):
return DataLoader(self.train_dataset, batch_size=1, shuffle=True)
def val_dataloader(self):
return DataLoader(self.test_dataset, batch_size=1)
model = Net()
trainer = pl.Trainer(gpus=1)
trainer.fit(model)
What have you tried?
I tried converting the data into pure tensors as well as lists and dicts of tensors, but this resulted in a slew of errors with PyTorch Geometric. Unfortunately, I have not found a working PyTorch Geometric example within the PyTorch Lightning framework online.
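One direction that might be worth trying (an unverified sketch, not a confirmed answer): PyTorch Geometric ships its own DataLoader that collates Data objects into batches, so swapping it in for the torch.utils.data one may avoid the default_collate error:
from torch_geometric.data import DataLoader as GeometricDataLoader

def train_dataloader(self):
    # PyG's loader batches Data objects instead of calling default_collate
    return GeometricDataLoader(self.train_dataset, batch_size=1, shuffle=True)

def val_dataloader(self):
    return GeometricDataLoader(self.test_dataset, batch_size=1)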
What's your environment?
OS: Windows 10
Packaging: pip
Version 0.7.3
|
Batch being moved to gpu repeatedly with multiple optimizers and single gpu training
|
[
"bug",
"help wanted"
] |
If you have multiple optimizers, then transfer_batch_to_gpu winds up getting called once per opt_idx, and the batch is copied each time via copy.copy(batch) in training_forward. Why copy the batch when there is only a single gpu? By removing the copy.copy() my GAN model moves from 8.53it/s to 9.25it/s. Pretty significant speedup.
|
Samplers are auto-added in DDP with no mechanism to override
|
[
"bug",
"help wanted"
] |
π Bug
Lightning automatically adds DistributedSampler when you turn on ddp, ddp2 or TPU:
pytorch-lightning/pytorch_lightning/trainer/data_loading.py
Line 86
in
17f58d2
def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader:
This seems to be a recent change.
This is surprising behavior and not always something that's warranted. For example, it is common (at least in several of our large scale vision trainers) for each worker to read a specific partition of a large warehouse table. In this case, the automatic addition of the DistributedSampler will only provide access to a portion of the loaded data, which is unintended.
Worse, there's no mechanism at all to override this.
Possible fixes
At the very least, provide some way to override this functionality
If the dataset is iterable-style, never auto-add a Sampler
|
Docstring for `on_after_backward`
|
[
"won't fix",
"docs"
] |
π Documentation
Hi !
In the docstring for on_after_backward there is a puzzling piece of code that is suggested (link) :
# example to inspect gradient information in tensorboard
if self.trainer.global_step % 25 == 0: # don't make the tf file huge
params = self.state_dict()
for k, v in params.items():
grads = v
name = k
self.logger.experiment.add_histogram(tag=name, values=grads,
global_step=self.trainer.global_step)
It isn't stated in the PyTorch documentation that enumerating the state dict key-values gives the gradients: the state dict is usually used to load saved model weights (so grads here would actually be the weights, not the gradients).
Adding a reference (which I couldn't find) would probably help pick up the logic behind it.
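For comparison, a hedged sketch of what gradient logging could look like using named_parameters(), whose .grad attributes do hold gradients after backward() (assuming a TensorBoard-style logger, as in the quoted snippet):
def on_after_backward(self):
    if self.trainer.global_step % 25 == 0:  # don't make the tf file huge
        for name, param in self.named_parameters():
            if param.grad is not None:
                self.logger.experiment.add_histogram(
                    tag=name, values=param.grad,
                    global_step=self.trainer.global_step)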
|
How to count training batches with support for distributed training
|
[
"question",
"won't fix"
] |
I am trying to write minimal code to track the total number of training batches seen so far in the logs for validation.
For non-distributed training, I simply add a training_batches_so_far variable in my lightning module init, increment it on training_step() and add it to the progress_bar and log fields in the output.
However, I want to make sure I am doing this properly for distributed training. What is the simplest way to do this? Ideally, I would like to be able to control how various metrics are accumulated (sum, avg, max). In this case, the aggregation would be to sum the training steps seen by each worker and add that to the central total. I found related issues #702 and #1165, but it is unclear to me what the simplest / best-practice approach is.
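One possible sketch (an assumption, not an established best practice) is to all-reduce a per-worker counter; this requires the default process group to be initialized, as it is under DDP:
import torch
import torch.distributed as dist

def global_batch_count(local_count: int, device) -> int:
    # sum the per-worker counters across all processes
    counter = torch.tensor(local_count, device=device, dtype=torch.long)
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(counter, op=dist.ReduceOp.SUM)
    return int(counter.item())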
|
Any way to make Lightning work with petastorm custom DataLoaders?
|
[
"question"
] |
Is it possible to use petastorm (https://github.com/uber/petastorm) pytorch data loaders with pytorch lightning?
The issue is that petastorm's DataLoaders need to be re-initialized for each epoch.
A sample code looks like this:
for epoch in range(1, loop_epochs + 1):
train_loader = DataLoader(...)
train(model, device, train_loader, args.log_interval, optimizer, epoch)
test_loader = DataLoader(...)
test(model, device, test_loader)
The dataloader keeps its state, so refactoring the snippet as below breaks for epochs > 1:
train_loader = DataLoader(...)
test_loader = DataLoader(...)
for epoch in range(1, loop_epochs + 1):
train(model, device, train_loader, args.log_interval, optimizer, epoch)
test(model, device, test_loader)
Thanks for your help, guys.
|
Named tuples converted to regular tuples when sent to the GPU
|
[
"bug",
"help wanted"
] |
π Bug
Named tuples returned from Dataset get converted to regular tuples when sent to the gpu.
This happens because isinstance(instance_of_a_named_tuple, tuple) evaluates to True in distrib_parts.py
pytorch-lightning/pytorch_lightning/trainer/distrib_parts.py
Line 463
in
67d5f4d
if isinstance(batch, tuple):
To Reproduce
import pytorch_lightning as pl
from collections import namedtuple
import torch
import numpy
NamedTupleDemoInput = namedtuple('DemoInput', ['x1', 'x2', 'y'])
class NamedTupleDemoDataset:
def __len__(self):
return 30000
def __getitem__(self, index):
x1 = numpy.random.uniform(0, 100)
x2 = numpy.random.uniform(0, 100)
y = 2*x1 + 3*x2 + numpy.random.normal(0, 0.05)
return NamedTupleDemoInput(x1, x2, y)
class WeightedSum(torch.nn.Module):
def __init__(self):
super(WeightedSum, self).__init__()
self.a = torch.nn.Parameter(torch.zeros(1))
self.b = torch.nn.Parameter(torch.zeros(1))
def forward(self, x1, x2):
return self.a * x1 + self.b * x2
class NamedTupleDemo(pl.LightningModule):
def __init__(self):
super(NamedTupleDemo, self).__init__()
self.model = WeightedSum()
def forward(self, x1, x2):
return self.model(x1, x2)
def train_dataloader(self):
return torch.utils.data.DataLoader(NamedTupleDemoDataset(), batch_size=128)
def training_step(self, batch, batch_index):
yhat = self.forward(batch.x1, batch.x2)
return {'loss': torch.nn.functional.mse_loss(batch.y, yhat)}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=1e-2)
if __name__ == '__main__':
module = NamedTupleDemo()
pl.Trainer(max_epochs=20, gpus=1).fit(module)
print(f'a={float(module.model.a)} b={float(module.model.b)}')
Traceback (most recent call last):
File "demo.py", line 48, in <module>
pl.Trainer(max_epochs=20, gpus=1).fit(module)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 749, in fit
self.single_gpu_train(model)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/distrib_parts.py", line 491, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 910, in run_pretrain_routine
self.train()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 384, in train
self.run_training_epoch()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 456, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 633, in run_training_batch
loss, batch_output = optimizer_closure()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 597, in optimizer_closure
output_dict = self.training_forward(split_batch, batch_idx, opt_idx, self.hiddens)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 770, in training_forward
output = self.model.training_step(*args)
File "demo.py", line 40, in training_step
yhat = self.forward(batch.x1, batch.x2)
AttributeError: 'tuple' object has no attribute 'x1'
Expected behavior
Named tuples returned from the dataset should keep their original fields.
Environment
CUDA:
- GPU:
- GeForce RTX 2080 Ti
- available: True
- version: 10.2
Packages:
- numpy: 1.18.3
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.7.4rc5
- tensorboard: 2.2.1
- tqdm: 4.45.0
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor:
- python: 3.8.2
- version: #1 SMP PREEMPT Sun, 05 Apr 2020 05:13:14 +0000
|
[Discussion] Callback interface/architechture
|
[
"won't fix"
] |
PR #1544 opened the discussion that aspects of the callback interface need to be rethought. This issue will keep track of future discussion. From the PR, these points were made:
Trainer callback arguments: Currently there are 3 arguments in Trainer (callbacks, early_stopping_callback and checkpoint_callback). It should be discussed what the user can pass to the different arguments. Mostly it seems that people are in favor of only allowing bool arguments for early_stopping_callback and checkpoint_callback, which would add a default version of the respective callback. Anything else should be passed via callbacks.
Callback order: As pointed out, early stopping needs to be called before model checkpoint, because ModelCheckpoint saves the early stopping callback's stats. This implies that some form of dependency ordering should be implemented in the callback interface.
|
How to remove `v_num` from the progress bar ?
|
[
"help wanted",
"good first issue",
"question",
"docs",
"logger"
] |
Version: 0.7.3
The v_num is automatically added to progress_bar when some logger is used
It is not much of a problem for TensorBoard, where v_num is just a simple number,
but v_num for MLflow takes a lot of space.
Training step
def training_step(self, batch, batch_nb):
....
log = { "trn_loss": 0.1, "lr": 0.001 }
return {"loss": loss, "log": log, "progress_bar": log}
Progress bar when using with mlflow logger
[00:33<00:46, 1.62s/it, loss=0.740, lr=8e-6, trn_loss=0.659, v_num=18_28bc973b1f0e42e8b4d664d1ef7812f6]
Also, loss is automatically added to progress_bar
|
How can I log (to tensorboard for example) at process 0 only?
|
[
"question"
] |
β Questions and Help
What is your question?
I'm using two GPUs. It seems training_step with batch_idx = 0 is called twice.
I want to log something when the current process is 0.
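A minimal sketch of one way to guard the logging call (assuming torch.distributed backs the two-GPU setup; the helper name is illustrative):
import torch.distributed as dist

def is_rank_zero() -> bool:
    # true when not running distributed, or when this is process 0
    return not (dist.is_available() and dist.is_initialized()) or dist.get_rank() == 0

# inside training_step:
#     if is_rank_zero():
#         self.logger.experiment.add_scalar('debug/metric', value, self.global_step)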
|
Tensorboard loss graph differs from command-line output when using accumulated gradients
|
[
"bug",
"help wanted",
"won't fix"
] |
π Bug
Tensorboard loss graph differs from command-line output when using accumulated gradients
To Reproduce
Steps to reproduce the behavior:
Run a model with accumulated gradients
Compare printed loss to tensorboard loss
See error
Expected behavior
The loss displayed via tensorboard should agree with the command-line output.
Environment
CUDA:
- GPU:
- GeForce RTX 2070 with Max-Q Design
- available: True
- version: 10.2
Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.7.3
- tensorboard: 2.2.1
- tqdm: 4.45.0
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.2
- version: #41158678979119.10~9593806-Ubuntu SMP Mon Apr 13 17:50:40 UTC
|
Add hparams metrics to tensorboard
|
[
"feature",
"help wanted",
"won't fix"
] |
Currently, we track hparams but not metrics, which is a new feature.
https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter.add_hparams
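For reference, a minimal sketch of the underlying torch API this feature would build on (plain SummaryWriter, outside Lightning; the metric values are dummies):
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('runs/hparams_demo')  # arbitrary log directory
writer.add_hparams(
    {'lr': 1e-3, 'batch_size': 64},                     # hparams we already track
    {'hparam/val_loss': 0.25, 'hparam/val_acc': 0.91},  # metrics: the missing piece
)
writer.close()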
|
Feature to automatically choose batch size
|
[
"feature",
"help wanted"
] |
Let's add a flag:
# default False
Trainer(auto_scale_batch_size=True)
This should do binary search on batch size:
Run a few train steps using the current batch size.
If OOM, batch_size = batch_size / 2.
If no OOM, batch_size = batch_size * 1.5.
And so on until we find the optimal batch size. At this point log it so the user knows (including tensorboard), and continue training with the new batch size.
Ideally the user fixes the batch size in future runs to tune the learning rate.
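A rough, framework-agnostic sketch of the proposed search loop (not a Trainer implementation; run_few_steps is a hypothetical callable standing in for a few training steps):
def scale_batch_size(run_few_steps, init_batch_size=32, max_trials=10, growth=1.5):
    """Sketch of the proposed search: halve on OOM, otherwise grow by `growth`."""
    batch_size = init_batch_size
    best = None
    for _ in range(max_trials):
        try:
            run_few_steps(batch_size)               # a few training steps at this size
            best = batch_size
            batch_size = int(batch_size * growth)   # no OOM -> grow
        except RuntimeError as err:
            if 'out of memory' not in str(err).lower():
                raise
            batch_size = max(1, batch_size // 2)    # OOM -> halve
    return best


# Toy usage: pretend anything above 100 samples per batch runs out of memory.
def fake_steps(batch_size):
    if batch_size > 100:
        raise RuntimeError('CUDA out of memory (simulated)')


print(scale_batch_size(fake_steps))  # settles on a size under the simulated limit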
|
Early stopping + checkpoint key
|
[
"feature",
"help wanted",
"won't fix"
] |
Consider updating how we condition early stopping or checkpointing:
return {'early_stop_on': mse_loss, 'checkpoint_on': other_metric}
Instead of:
# only if val_loss is present
return {'val_loss': val_loss}
|
horovod cicd tests are failing on ubuntu 18.04 python 3.6 latest
|
[
"bug",
"help wanted",
"priority: 0"
] |
π Bug
The failed job: https://github.com/PyTorchLightning/pytorch-lightning/runs/620109522
We see two errors:
RuntimeError: Failed to determine if NCCL support has been built. Run again with --verbose for more details.
ImportError: /opt/hostedtoolcache/Python/3.6.10/x64/lib/python3.6/site-packages/horovod/torch/mpi_lib_v2.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZTIN3c1021AutogradMetaInterfaceE
My hunch is that both are caused by the same horovod compilation issue.
Another thing to note is that the same tests are passing on ubuntu 18.04 python 3.6 minimal.
@tgaddair maybe you have an idea?
To Reproduce
Run the cicd test suite.
|
How many epochs will my model train for?
|
[
"question"
] |
How many epochs will my model train for if I don't set the max and min epoch values in my trainer?
trainer = Trainer(gpus=1,max_epochs=4)
I know that I can specify max and min epochs. What if I don't specify them and just call fit() without min or max epochs? What value does it use to stop training my model? Is it the loss value returned from training_step()? Also, will it check whether that loss is in a similar range as my validation loss so it does not over- or under-fit?
Thanks
|
Bug in DDP, but not DP modes.
|
[
"bug",
"help wanted"
] |
Pytorch 1.5
In [3]: pytorch_lightning.__version__
Out[3]: '0.7.5rc1'
In DP everything works.
In DDP it fails with:
File "/home/vladimir/anaconda3/envs/solaris/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/vladimir/anaconda3/envs/solaris/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/home/vladimir/anaconda3/envs/solaris/lib/python3.7/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'torch._C._VariableFunctions'>: it's not the same object as torch._C._VariableFunctions
|
Trainer DDP invoking load_spawn_weights() on each node
|
[
"bug",
"help wanted",
"priority: 0"
] |
π Bug
On a SLURM cluster, I am seeing the same problem as issue #1335 , despite that issue's fix being applied.
To Reproduce
Steps to reproduce the behavior:
Allocate 4 nodes on a SLURM-managed cluster
srun the script pl.py on each allocated node
See the errors on 3 of 4 nodes:
INFO:lightning:GPU available: True, used: True
INFO:lightning:VISIBLE GPUS: 0,1
INFO:lightning:GPU available: True, used: True
INFO:lightning:VISIBLE GPUS: 0,1
INFO:lightning:GPU available: True, used: True
INFO:lightning:VISIBLE GPUS: 0,1
INFO:lightning:GPU available: True, used: True
INFO:lightning:VISIBLE GPUS: 0,1
/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/utilities/warnings.py:18: RuntimeWarning: You have defined a `test_dataloader()` and have defined a `test_step()`, you may also want to define `test_epoch_end()` for accumulating stats.
warnings.warn(*args, **kwargs)
/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/utilities/warnings.py:18: RuntimeWarning: You have defined a `test_dataloader()` and have defined a `test_step()`, you may also want to define `test_epoch_end()` for accumulating stats.
warnings.warn(*args, **kwargs)
/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/utilities/warnings.py:18: RuntimeWarning: You have defined a `test_dataloader()` and have defined a `test_step()`, you may also want to define `test_epoch_end()` for accumulating stats.
warnings.warn(*args, **kwargs)
/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/utilities/warnings.py:18: RuntimeWarning: You have defined a `test_dataloader()` and have defined a `test_step()`, you may also want to define `test_epoch_end()` for accumulating stats.
warnings.warn(*args, **kwargs)
d-12-3-1:47016:47016 [0] NCCL INFO Bootstrap : Using [0]ens2f0:172.31.130.132<0>
d-12-3-1:47016:47016 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
d-12-3-1:47016:47016 [0] NCCL INFO NET/IB : Using [0]mlx5_0:1/RoCE ; OOB ens2f0:172.31.130.132<0>
NCCL version 2.4.8+cuda10.1
d-12-3-1:47022:47022 [1] NCCL INFO Bootstrap : Using [0]ens2f0:172.31.130.132<0>
d-12-3-1:47022:47022 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
d-12-4-1:51815:51815 [0] NCCL INFO Bootstrap : Using [0]ens2f0:172.31.130.134<0>
d-12-4-1:51820:51820 [1] NCCL INFO Bootstrap : Using [0]ens2f0:172.31.130.134<0>
d-12-4-1:51820:51820 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
d-12-4-1:51815:51815 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
d-12-4-2:51993:51993 [0] NCCL INFO Bootstrap : Using [0]ens2f0:172.31.130.135<0>
d-12-4-2:51997:51997 [1] NCCL INFO Bootstrap : Using [0]ens2f0:172.31.130.135<0>
d-12-3-2:43991:43991 [1] NCCL INFO Bootstrap : Using [0]ens2f0:172.31.130.133<0>
d-12-3-2:43985:43985 [0] NCCL INFO Bootstrap : Using [0]ens2f0:172.31.130.133<0>
d-12-4-2:51993:51993 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
d-12-4-2:51997:51997 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
d-12-3-2:43991:43991 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
d-12-3-2:43985:43985 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
d-12-3-1:47022:47022 [1] NCCL INFO NET/IB : Using [0]mlx5_0:1/RoCE ; OOB ens2f0:172.31.130.132<0>
d-12-3-1:47022:47036 [1] NCCL INFO Setting affinity for GPU 1 to ffff,f00000ff,fff00000
d-12-4-2:51993:51993 [0] NCCL INFO NET/IB : Using [0]mlx5_0:1/RoCE ; OOB ens2f0:172.31.130.135<0>
d-12-4-2:51997:51997 [1] NCCL INFO NET/IB : Using [0]mlx5_0:1/RoCE ; OOB ens2f0:172.31.130.135<0>
d-12-4-1:51820:51820 [1] NCCL INFO NET/IB : Using [0]mlx5_0:1/RoCE ; OOB ens2f0:172.31.130.134<0>
d-12-3-2:43991:43991 [1] NCCL INFO NET/IB : Using [0]mlx5_0:1/RoCE ; OOB ens2f0:172.31.130.133<0>
d-12-3-2:43985:43985 [0] NCCL INFO NET/IB : Using [0]mlx5_0:1/RoCE ; OOB ens2f0:172.31.130.133<0>
d-12-4-1:51815:51815 [0] NCCL INFO NET/IB : Using [0]mlx5_0:1/RoCE ; OOB ens2f0:172.31.130.134<0>
d-12-4-2:51993:52008 [0] NCCL INFO Setting affinity for GPU 0 to ffff,f00000ff,fff00000
d-12-4-2:51997:52010 [1] NCCL INFO Setting affinity for GPU 1 to ffff,f00000ff,fff00000
d-12-4-1:51820:51830 [1] NCCL INFO Setting affinity for GPU 1 to ffff,f00000ff,fff00000
d-12-3-2:43991:44003 [1] NCCL INFO Setting affinity for GPU 1 to ffff,f00000ff,fff00000
d-12-3-2:43985:44002 [0] NCCL INFO Setting affinity for GPU 0 to ffff,f00000ff,fff00000
d-12-4-1:51815:51832 [0] NCCL INFO Setting affinity for GPU 0 to ffff,f00000ff,fff00000
d-12-3-1:47016:47034 [0] NCCL INFO Setting affinity for GPU 0 to ffff,f00000ff,fff00000
d-12-4-2:51997:52010 [1] NCCL INFO CUDA Dev 1[1], IB NIC distance : NODE
d-12-4-2:51993:52008 [0] NCCL INFO CUDA Dev 0[0], IB NIC distance : NODE
d-12-4-1:51815:51832 [0] NCCL INFO CUDA Dev 0[0], IB NIC distance : NODE
d-12-4-1:51820:51830 [1] NCCL INFO CUDA Dev 1[1], IB NIC distance : NODE
d-12-3-2:43991:44003 [1] NCCL INFO CUDA Dev 1[1], IB NIC distance : NODE
d-12-3-2:43985:44002 [0] NCCL INFO CUDA Dev 0[0], IB NIC distance : NODE
d-12-3-1:47022:47036 [1] NCCL INFO CUDA Dev 1[1], IB NIC distance : NODE
d-12-3-1:47016:47034 [0] NCCL INFO CUDA Dev 0[0], IB NIC distance : NODE
d-12-3-1:47016:47034 [0] NCCL INFO Channel 00 : 0 1 2 3 4 5 6 7
d-12-3-1:47016:47034 [0] NCCL INFO Ring 00 : 7 -> 0 [receive] via NET/IB/0
d-12-4-2:51993:52008 [0] NCCL INFO Ring 00 : 5 -> 6 [receive] via NET/IB/0
d-12-3-2:43985:44002 [0] NCCL INFO Ring 00 : 1 -> 2 [receive] via NET/IB/0
d-12-4-1:51815:51832 [0] NCCL INFO Ring 00 : 3 -> 4 [receive] via NET/IB/0
d-12-3-2:43985:44002 [0] NCCL INFO Ring 00 : 2[0] -> 3[1] via direct shared memory
d-12-4-2:51993:52008 [0] NCCL INFO Ring 00 : 6[0] -> 7[1] via direct shared memory
d-12-4-1:51815:51832 [0] NCCL INFO Ring 00 : 4[0] -> 5[1] via direct shared memory
d-12-3-1:47016:47034 [0] NCCL INFO Ring 00 : 0[0] -> 1[1] via direct shared memory
d-12-4-2:51997:52010 [1] NCCL INFO Ring 00 : 7 -> 0 [send] via NET/IB/0
d-12-3-2:43991:44003 [1] NCCL INFO Ring 00 : 3 -> 4 [send] via NET/IB/0
d-12-4-1:51820:51830 [1] NCCL INFO Ring 00 : 5 -> 6 [send] via NET/IB/0
d-12-3-1:47022:47036 [1] NCCL INFO Ring 00 : 1 -> 2 [send] via NET/IB/0
d-12-3-2:43991:44003 [1] NCCL INFO Ring 00 : 3[1] -> 2[0] via direct shared memory
d-12-4-1:51820:51830 [1] NCCL INFO Ring 00 : 5[1] -> 4[0] via direct shared memory
d-12-4-2:51997:52010 [1] NCCL INFO Ring 00 : 7[1] -> 6[0] via direct shared memory
d-12-3-1:47022:47036 [1] NCCL INFO Ring 00 : 1[1] -> 0[0] via direct shared memory
d-12-4-1:51820:51830 [1] NCCL INFO Trees [0] 4->5->-1/-1/-1
d-12-4-2:51997:52010 [1] NCCL INFO Trees [0] 6->7->-1/-1/-1
d-12-3-2:43991:44003 [1] NCCL INFO Trees [0] 2->3->-1/-1/-1
d-12-3-1:47022:47036 [1] NCCL INFO Trees [0] 0->1->-1/-1/-1
d-12-4-1:51820:51830 [1] NCCL INFO comm 0x7fa5380022f0 rank 5 nranks 8 cudaDev 1 nvmlDev 1 - Init COMPLETE
d-12-3-2:43991:44003 [1] NCCL INFO comm 0x7ff3f40022f0 rank 3 nranks 8 cudaDev 1 nvmlDev 1 - Init COMPLETE
d-12-4-2:51997:52010 [1] NCCL INFO comm 0x7f84600022f0 rank 7 nranks 8 cudaDev 1 nvmlDev 1 - Init COMPLETE
d-12-3-1:47022:47036 [1] NCCL INFO comm 0x7f1f780022f0 rank 1 nranks 8 cudaDev 1 nvmlDev 1 - Init COMPLETE
d-12-4-1:51815:51832 [0] NCCL INFO Ring 00 : 2 -> 4 [receive] via NET/IB/0
d-12-3-2:43985:44002 [0] NCCL INFO Ring 00 : 2 -> 4 [send] via NET/IB/0
d-12-4-2:51993:52008 [0] NCCL INFO Ring 00 : 6 -> 4 [send] via NET/IB/0
d-12-3-1:47016:47034 [0] NCCL INFO Ring 00 : 4 -> 0 [receive] via NET/IB/0
d-12-4-1:51815:51832 [0] NCCL INFO Ring 00 : 6 -> 4 [receive] via NET/IB/0
d-12-3-2:43985:44002 [0] NCCL INFO Ring 00 : 4 -> 2 [receive] via NET/IB/0
d-12-4-1:51815:51832 [0] NCCL INFO Ring 00 : 4 -> 0 [send] via NET/IB/0
d-12-4-2:51993:52008 [0] NCCL INFO Ring 00 : 4 -> 6 [receive] via NET/IB/0
d-12-3-1:47016:47034 [0] NCCL INFO Ring 00 : 0 -> 4 [send] via NET/IB/0
d-12-4-1:51815:51832 [0] NCCL INFO Ring 00 : 0 -> 4 [receive] via NET/IB/0
d-12-3-1:47016:47034 [0] NCCL INFO Trees [0] -1->0->1/4/-1
d-12-3-1:47016:47034 [0] NCCL INFO Using 256 threads, Min Comp Cap 7, Trees enabled up to size 79999
d-12-4-1:51815:51832 [0] NCCL INFO Ring 00 : 4 -> 2 [send] via NET/IB/0
d-12-3-1:47016:47034 [0] NCCL INFO comm 0x7fa36c0022f0 rank 0 nranks 8 cudaDev 0 nvmlDev 0 - Init COMPLETE
d-12-3-1:47016:47016 [0] NCCL INFO Launch mode Parallel
d-12-4-1:51815:51832 [0] NCCL INFO Ring 00 : 4 -> 6 [send] via NET/IB/0
d-12-3-2:43985:44002 [0] NCCL INFO Trees [0] 4->2->3/-1/-1
d-12-3-2:43985:44002 [0] NCCL INFO comm 0x7febb00022f0 rank 2 nranks 8 cudaDev 0 nvmlDev 0 - Init COMPLETE
d-12-4-1:51815:51832 [0] NCCL INFO Trees [0] 0->4->5/2/6
d-12-4-2:51993:52008 [0] NCCL INFO Trees [0] 4->6->7/-1/-1
d-12-4-1:51815:51832 [0] NCCL INFO comm 0x7fab7c0022f0 rank 4 nranks 8 cudaDev 0 nvmlDev 0 - Init COMPLETE
d-12-4-2:51993:52008 [0] NCCL INFO comm 0x7f2bec0022f0 rank 6 nranks 8 cudaDev 0 nvmlDev 0 - Init COMPLETE
INFO:lightning:Set SLURM handle signals.
INFO:lightning:Set SLURM handle signals.
INFO:lightning:Set SLURM handle signals.
INFO:lightning:
| Name | Type | Params
-------------------------------
0 | layer_1 | Linear | 100 K
1 | layer_2 | Linear | 33 K
2 | layer_3 | Linear | 2 K
INFO:lightning:Set SLURM handle signals.
INFO:lightning:Set SLURM handle signals.
INFO:lightning:Set SLURM handle signals.
INFO:lightning:Set SLURM handle signals.
INFO:lightning:Set SLURM handle signals.
spr= 5 snr= 2 sng= 2 gpu_idx= 1
rank= 5 world_size= 8
Root node= d-12-3-1
spr= 6 snr= 3 sng= 2 gpu_idx= 0
rank= 6 world_size= 8
Root node= d-12-3-1
spr= 0 snr= 0 sng= 2 gpu_idx= 0
rank= 0 world_size= 8
Root node= d-12-3-1
spr= 3 snr= 1 sng= 2 gpu_idx= 1
rank= 3 world_size= 8
Root node= d-12-3-1
spr= 4 snr= 2 sng= 2 gpu_idx= 0
rank= 4 world_size= 8
Root node= d-12-3-1
spr= 1 snr= 0 sng= 2 gpu_idx= 1
rank= 1 world_size= 8
Root node= d-12-3-1
spr= 2 snr= 1 sng= 2 gpu_idx= 0
rank= 2 world_size= 8
Root node= d-12-3-1
spr= 7 snr= 3 sng= 2 gpu_idx= 1
rank= 7 world_size= 8
Root node= d-12-3-1
/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/utilities/warnings.py:18: RuntimeWarning: Displayed epoch numbers in the progress bar start from "1" until v0.6.x, but will start from "0" in v0.8.0.
warnings.warn(*args, **kwargs)
Epoch 2: 100%|##########| 118/118 [00:19<00:00, 5.98it/s, loss=0.197, v_num=436408]
before lsw: self.proc_rank= 0
load_spawn_weights called for self.proc_rank= 0
before lsw: self.proc_rank= 0
load_spawn_weights called for self.proc_rank= 0
Traceback (most recent call last):
File "pl.py", line 119, in <module>
main()
File "pl.py", line 111, in main
trainer.fit(model)
File "/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 697, in fit
self.load_spawn_weights(model)
File "/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 376, in load_spawn_weights
loaded_model = original_model.__class__.load_from_checkpoint(path)
File "/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1504, in load_from_checkpoint
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2020b/lib/python3.6/site-packages/torch/serialization.py", line 525, in load
with _open_file_like(f, 'rb') as opened_file:
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2020b/lib/python3.6/site-packages/torch/serialization.py", line 212, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2020b/lib/python3.6/site-packages/torch/serialization.py", line 193, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/home/gridsan/groups/anaconda/dpc/lightning/__temp_weight_ddp_end.ckpt'
before lsw: self.proc_rank= 0
load_spawn_weights called for self.proc_rank= 0
Traceback (most recent call last):
File "pl.py", line 119, in <module>
main()
File "pl.py", line 111, in main
trainer.fit(model)
File "/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 697, in fit
self.load_spawn_weights(model)
File "/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 376, in load_spawn_weights
loaded_model = original_model.__class__.load_from_checkpoint(path)
File "/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1504, in load_from_checkpoint
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2020b/lib/python3.6/site-packages/torch/serialization.py", line 525, in load
with _open_file_like(f, 'rb') as opened_file:
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2020b/lib/python3.6/site-packages/torch/serialization.py", line 212, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2020b/lib/python3.6/site-packages/torch/serialization.py", line 193, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/home/gridsan/groups/anaconda/dpc/lightning/__temp_weight_ddp_end.ckpt'
before lsw: self.proc_rank= 0
load_spawn_weights called for self.proc_rank= 0
Traceback (most recent call last):
File "pl.py", line 119, in <module>
main()
File "pl.py", line 111, in main
trainer.fit(model)
File "/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 697, in fit
self.load_spawn_weights(model)
File "/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 376, in load_spawn_weights
loaded_model = original_model.__class__.load_from_checkpoint(path)
File "/home/gridsan/dcampbell/.local/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1504, in load_from_checkpoint
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2020b/lib/python3.6/site-packages/torch/serialization.py", line 525, in load
with _open_file_like(f, 'rb') as opened_file:
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2020b/lib/python3.6/site-packages/torch/serialization.py", line 212, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2020b/lib/python3.6/site-packages/torch/serialization.py", line 193, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/home/gridsan/groups/anaconda/dpc/lightning/__temp_weight_ddp_end.ckpt'
srun: error: d-12-4-1: task 2: Exited with exit code 1
srun: error: d-12-4-2: task 3: Exited with exit code 1
srun: error: d-12-3-1: task 0: Exited with exit code 1
Code sample
sbatch script:
#!/bin/bash -l
# SLURM SUBMIT SCRIPT
#SBATCH --nodes=4
#SBATCH --gres=gpu:volta:2
#SBATCH --mem=0
#SBATCH --time=0-02:00:00
#SBATCH --partition=gaia
# activate conda env
#source activate $1
# -------------------------
# debugging flags (optional)
export NCCL_DEBUG=INFO
export PYTHONFAULTHANDLER=1
# on your cluster you might need these:
# set the network interface
# export NCCL_SOCKET_IFNAME=^docker0,lo
# might need the latest cuda
# module load NCCL/2.4.7-1-cuda.10.0
# -------------------------
module load mpi/openmpi-4.0
module load anaconda/2020b
export MASTER_PORT=`comm -23 <(seq 12000 18000 | sort) <(ss -Htan | awk '{print $4}' | cut -d':' -f2 | sort -u) | shuf | head -n 1`
# run script from above
srun python pl.py
pl.py (derived from the lightning tutorial's MNIST example)
import torch
from torch import nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, random_split
from torch.nn import functional as F
from torchvision.datasets import MNIST
from torchvision import datasets, transforms
import os
import torch.multiprocessing as mp
from localtools import slurm_torch
from hostlist import expand_hostlist
class LightningMNISTClassifier(pl.LightningModule):
def __init__(self):
super(LightningMNISTClassifier, self).__init__()
# mnist images are (1, 28, 28) (channels, width, height)
self.layer_1 = torch.nn.Linear(28 * 28, 128)
self.layer_2 = torch.nn.Linear(128, 256)
self.layer_3 = torch.nn.Linear(256, 10)
def forward(self, x):
batch_size, channels, width, height = x.size()
# (b, 1, 28, 28) -> (b, 1*28*28)
x = x.view(batch_size, -1)
# layer 1 (b, 1*28*28) -> (b, 128)
x = self.layer_1(x)
x = torch.relu(x)
# layer 2 (b, 128) -> (b, 256)
x = self.layer_2(x)
x = torch.relu(x)
# layer 3 (b, 256) -> (b, 10)
x = self.layer_3(x)
# probability distribution over labels
x = torch.log_softmax(x, dim=1)
return x
def cross_entropy_loss(self, logits, labels):
return F.nll_loss(logits, labels)
def training_step(self, train_batch, batch_idx):
x, y = train_batch
logits = self.forward(x)
loss = self.cross_entropy_loss(logits, y)
logs = {'train_loss': loss}
return {'loss': loss, 'log': logs}
def test_step(self, test_batch, batch_idx):
x, y = test_batch
logits = self.forward(x)
loss = self.cross_entropy_loss(logits, y)
return {'test_loss': loss}
def validation_step(self, val_batch, batch_idx):
x, y = val_batch
logits = self.forward(x)
loss = self.cross_entropy_loss(logits, y)
return {'val_loss': loss}
def validation_epoch_end(self, outputs):
# called at the end of the validation epoch
# outputs is an array with what you returned in validation_step for each batch
# outputs = [{'loss': batch_0_loss}, {'loss': batch_1_loss}, ..., {'loss': batch_n_loss}]
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def prepare_data(self):
# transforms for images
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
# prepare transforms standard to MNIST
mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transform)
mnist_test = MNIST(os.getcwd(), train=False, download=True, transform=transform)
self.mnist_train, self.mnist_val = random_split(mnist_train, [55000, 5000])
def train_dataloader(self):
return DataLoader(self.mnist_train, num_workers=16, batch_size=64)
def val_dataloader(self):
return DataLoader(self.mnist_val, num_workers=16, batch_size=64)
def test_dataloader(self):
return DataLoader(self.mnist_test, num_workers=16, batch_size=64)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
return optimizer
# train
def main ():
rank, size = slurm_torch.torch_env()
nn = ' '.join(expand_hostlist( os.environ['SLURM_NODELIST']))
os.environ['SLURM_NODELIST']=nn
model = LightningMNISTClassifier()
trainer = pl.Trainer(gpus=2, num_nodes=size, distributed_backend='ddp', max_epochs=2)
trainer.fit(model)
if __name__ == '__main__':
root_dir = os.path.dirname(os.path.realpath(__file__))
# TRAIN
main()
Expected behavior
Workers on all nodes run to completion without errors
Environment
* CUDA:
- GPU:
- Tesla V100-PCIE-32GB
- Tesla V100-PCIE-32GB
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.4.0
- pytorch-lightning: 0.7.3
- tensorboard: 2.1.0
- tqdm: 4.45.0
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.6.10
- version: #1 SMP Fri Apr 3 11:13:11 EDT 2020
How you installed PyTorch: pip
Additional context
I'm not positive that this isn't a user-introduced problem as I had to do a little bit of tweaking to the supplied example application in order to run in my environment.
I have added some outputs to attempt to determine what's going on, and it seems as though self.proc_rank is 0 for the python process on each node at the point that it's attempting to load the spawn weights, so the check introduced to fix #1335 isn't preventing the attempts to load a non-existent file.
|
Allow the scheduler to be None
|
[
"feature",
"help wanted",
"won't fix"
] |
π Feature
Allow a scheduler to be None. This means that one could have configure_optimizers() return [optimizer], [None].
Motivation
Allowing configure_optimizers() to return [optimizer], [None] improves how one could write clean, dynamic code. One could for instance do something like this:
def configure_optimizers(self):
self.optim = create_optimizer(self.model, **self.optim_opts)
self.lr_scheduler = create_lr_scheduler(self.optim, **self.sched_opts)
return [self.optim], [self.lr_scheduler]
where create_lr_scheduler() can return None when no scheduler is needed.
Alternatives
I am aware that one could rewrite it as follows, but this is a lot less clean, and the source code could easily be changed to support the cleaner form.
def configure_optimizers(self):
self.optim = create_optimizer(self.model, **self.optim_opts)
self.lr_scheduler = create_lr_scheduler(self.optim, **self.sched_opts)
if self.lr_scheduler is None:
return self.optim
else:
return [self.optim], [self.lr_scheduler]
Additional context
I can do a PR for this. It should be as easy as adding a condition in configure_schedulers()
pytorch-lightning/pytorch_lightning/trainer/optimizers.py
Line 83
in
e79ae18
def configure_schedulers(self, schedulers: list):
namely, allowing:
elif scheduler is None:
continue
|
no val_dataloader when lr_find
|
[
"bug",
"help wanted"
] |
π Bug
To Reproduce
If you pass the dataloaders as parameters during fitting (so training_step and validation_step are defined, but not train_dataloader and val_dataloader) and you want to run the learning rate finder, it returns the following error: pytorch_lightning.utilities.exceptions.MisconfigurationException: You have defined 'validation_step()', but have not passed in a val_dataloader().
Code sample
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
from pytorch_lightning import Trainer
class LitModel(pl.LightningModule):
def __init__(self):
super().__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
return {'val_loss': F.cross_entropy(y_hat, y)}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'val_loss': avg_loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.001)
train_dataset = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
train_loader = DataLoader(train_dataset, batch_size=32, num_workers=4, shuffle=True)
val_dataset = MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())
val_loader = DataLoader(val_dataset, batch_size=32, num_workers=4, shuffle=True)
model = LitModel()
trainer = Trainer(gpus=1)
lr = trainer.lr_find(model, train_loader)
Expected behavior
Simply determines the best learning rate
Environment
* CUDA:
- GPU:
- GeForce RTX 2080 Ti
- available: True
- version: 10.1.243
* Packages:
- numpy: 1.17.4
- pyTorch_debug: False
- pyTorch_version: 1.3.1
- pytorch-lightning: 0.7.5
- tensorboard: 2.0.2
- tqdm: 4.35.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.6.9
- version: #1 SMP Tue Oct 29 08:30:10 EDT 2019
|
Wrong logic in `finalize_agg_metrics` routine
|
[
"bug",
"help wanted",
"won't fix"
] |
π Bug
1. The TrainerLoggingMixin.log_metrics method makes a call:
pytorch-lightning/pytorch_lightning/trainer/logging.py
Lines 73 to 74
in
7919624
self.logger.agg_and_log_metrics(scalar_metrics, step=step)
self.logger.save()
2. Now, let's check this save method in LightningLoggerBase:
pytorch-lightning/pytorch_lightning/loggers/base.py
Lines 223 to 225
in
7919624
def save(self) -> None:
"""Save log data."""
self._finalize_agg_metrics()
3. Go deeper. Now, _finalize_agg_metrics in LightningLoggerBase:
pytorch-lightning/pytorch_lightning/loggers/base.py
Lines 108 to 111
in
7919624
def _finalize_agg_metrics(self):
"""This shall be called before save/close."""
agg_step, metrics_to_log = self._reduce_agg_metrics()
self._metrics_to_agg = []
Please, pay attention to:
self._metrics_to_agg = []
At this point, we clean up the self._metrics_to_agg.
Now, let's recall the first statement, where the TrainerLoggingMixin performs self.logger.save().
The mixin performs this call on every step, not only on each accumulation step. So we execute steps 2 and 3 before the actual metrics aggregation. This means we do not aggregate metrics; instead, we log them on every non-accumulation step.
To Reproduce
Steps to reproduce the behavior:
Copy this test and execute it in tests/loggers/test_base.py harness.
def test_with_accumulate_grad_batches_and_trainer():
class StoreHistoryLogger(CustomLogger):
def __init__(self):
super().__init__()
self.steps_logged = []
@rank_zero_only
def log_metrics(self, metrics, step):
self.steps_logged.append(step)
hparams = tutils.get_default_hparams()
model = LightningTestModel(hparams)
logger = StoreHistoryLogger()
trainer = Trainer(
accumulate_grad_batches=5,
logger=logger,
max_epochs=2,
row_log_interval=1
)
trainer.fit(model)
number_of_unique_steps_logged = len(set(logger.steps_logged))
number_of_total_steps_logged = len(logger.steps_logged)
assert number_of_unique_steps_logged == number_of_total_steps_logged
It will fail because the actual number of logged steps is greater than required (the required number equals the actual number divided by accumulate_grad_batches).
Expected behavior
Metrics must be aggregated and logged every n_accum steps, not logged on every step.
Environment
Environment
OS: Linux
architecture: 64bit
processor: x86_64
python: 3.7.6
version: #97~16.04.1-Ubuntu SMP Wed Apr 1 03:03:31 UTC 2020
pytorch-lightning: 0.7.5
Additional context
@Borda, I think you can help me with this issue. Something is wrong with the save and finalize logger routines.
|
Improve ModelCheckpoint
|
[
"feature",
"help wanted",
"good first issue",
"won't fix"
] |
π Feature
Add two optional features:
Save the trainer checkpoint just before shutdown: add an optional argument (e.g. save_on_shutdown) in ModelCheckpoint to save the current trainer state before shutdown. Value of save_on_shutdown can only be None or the file path for saving.
Maintain a file (e.g. latest.ckpt) linking to the latest saved model (across multiple runs of training): add an optional argument (e.g. create_link_for_latest), the value can only be None or file path for saving.
Motivation
For the first one, if training is interrupted in the middle, no checkpoint is left after the last save, which could be several epochs ago. If I want to continue, I can only resume, at best, from the checkpoint saved at the last epoch.
For the second one, this is a feature I always implement; maybe it is not essential for everyone. It is useful when I am training frequently and have to dig all the way down to the exact model saved last time. So I create a file called latest.ckpt somewhere easy to reach, linking to the latest model.
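A rough user-side workaround sketch for both requests until they exist as ModelCheckpoint options (it assumes the Trainer exposes a save_checkpoint(path) method in your version; the file names are arbitrary):
import os
import pytorch_lightning as pl


def fit_with_shutdown_save(trainer: pl.Trainer, model: pl.LightningModule,
                           shutdown_path: str = 'on_shutdown.ckpt',
                           latest_link: str = 'latest.ckpt') -> None:
    """Save a checkpoint on interrupt/shutdown and maintain a `latest.ckpt` link."""
    try:
        trainer.fit(model)
    finally:
        # 1) Save the current trainer state before exiting, even on Ctrl-C.
        #    (Assumption: trainer.save_checkpoint(path) is available in this version.)
        trainer.save_checkpoint(shutdown_path)
        # 2) Keep an easy-to-reach link pointing at the latest saved model.
        if os.path.islink(latest_link) or os.path.exists(latest_link):
            os.remove(latest_link)
        os.symlink(os.path.abspath(shutdown_path), latest_link)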
|
Checkpoint adding "version_" at the start of the logger name
|
[
"feature"
] |
To reproduce :
logger = pl.loggers.TensorBoardLogger(
save_dir='.',
version='my_name',
name='lightning_logs'
)
trainer = pl.Trainer(logger=logger, log_gpu_memory='all', max_epochs=10)
This gives as a result:
/lightning_logs/my_name: where the logs are saved
/lightning_logs/version_my_name: where the checkpoints are saved
Possible Explanation:
It seems like checkpoint saving always prepends "version_" to the logger version, even when it has been given as a string parameter:
pytorch-lightning/pytorch_lightning/trainer/callback_config.py
Lines 52 to 57
in
3e8f2d9
ckpt_path = os.path.join(
save_dir,
self.logger.name,
f'version_{self.logger.version}',
"checkpoints"
)
Even though in the TensorBoard logger there is no "version_" prefix when the version is provided as a string:
pytorch-lightning/pytorch_lightning/loggers/tensorboard.py
Line 81
in
8b82ce0
version = self.version if isinstance(self.version, str) else f"version_{self.version}"
|
Using default path in ModelCheckpoint does not work
|
[
"won't fix"
] |
According to the implementation, passing None as the model path uses the default path for checkpoints. However, the following test prevents this feature from being used when save_top_k > 0:
pytorch-lightning/pytorch_lightning/callbacks/model_checkpoint.py
Line 89
in
3e8f2d9
if save_top_k > 0 and os.path.isdir(filepath) and len(os.listdir(filepath)) > 0:
It results in the following error:
st = os.stat(s)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
Also, as a side issue, the example for model checkpointing in the documentation suggests using os.getcwd() as the path and save_top_k=True, which results in the warning being displayed and potentially all files of the current directory being removed. New users have to be very careful if they don't want their source code deleted.
|
Is it a bug or a feature that validating starts before training ends?
|
[
"bug",
"help wanted"
] |
pytorch-lightning version:0.7.5
1 gpu.
I found that validation started when training was near the end of the epoch (about 98%), and then training and validation ran simultaneously.
Has anyone ever noticed this?
I would expect validation to start after the training loop ends in every epoch. Am I wrong?
|
Trainer hooks for before training start and after test/val ends
|
[
"question"
] |
β Questions and Help
What is your question?
I am wondering if there are existing hooks that allow me to do something before training starts and after test ends.
Use case:
Training start: initialization work, like calling self.logger.watch(self). Right now I am using prepare_data as a hack since I could not find a hook like this.
On test end/val end: my research requires me to build some plots with the predictions (and ground truth) of the learned model, and I need all datapoints to do some aggregated processing. So I cannot just use the current batch, and I cannot use the epoch_end hooks since I really want to do this as a last step (at the end of the last epoch). I used to do this as a last step in Ignite (since then I have moved to Lightning and love it).
EDIT: On second thought, if I call test() after fit() has completed, will test_epoch_end() work, since it will be called only once?
EDIT2: Is there a place in the documentation where I can see a comprehensive list of callbacks and hooks and the order in which everything is called? I am a little confused between hooks and callbacks and when to use which. Is my use case better suited to callbacks, and should I build one myself?
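A minimal sketch of the pattern being asked about, assuming the on_train_start model hook is available in this version and a wandb-style logger that supports watch():
import torch
from torch.nn import functional as F
import pytorch_lightning as pl


class HookedModel(pl.LightningModule):
    """Hypothetical module: one-time setup before training, aggregation after test."""

    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.l1(x.view(x.size(0), -1))

    def on_train_start(self):
        # Runs once before the first training batch (hook availability assumed).
        if hasattr(self.logger, 'watch'):
            self.logger.watch(self)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {'loss': F.cross_entropy(self(x), y)}

    def test_step(self, batch, batch_idx):
        x, y = batch
        return {'preds': self(x).argmax(dim=1), 'targets': y}

    def test_epoch_end(self, outputs):
        # Called once after trainer.test(), so all datapoints are available here
        # for aggregated plots or metrics.
        preds = torch.cat([o['preds'] for o in outputs])
        targets = torch.cat([o['targets'] for o in outputs])
        return {'test_acc': (preds == targets).float().mean().item()}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)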
|
How to download the prediction file from the model developed with pytorch lightning?
|
[
"question",
"won't fix"
] |
I am trying to implement this,
https://github.com/bighuang624/DSANet
and I am also making a few changes to my code. I would like to know whether there is a recommended format or way to extract a prediction file after training and validation.
PS. I am also following this documentation,
https://pytorch-lightning.readthedocs.io/_/downloads/en/latest/pdf/
Any help would be appreciated! @williamFalcon
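There is no built-in prediction-file export; a common hand-rolled approach is to collect outputs in test_step and write them out once in test_epoch_end. A hedged sketch (the CSV file name and format are arbitrary choices), meant to be mixed into a LightningModule before calling trainer.test():
import csv

import torch


class PredictionSavingMixin:
    """Hypothetical mixin for a LightningModule: dumps test predictions to a CSV file."""

    def test_step(self, batch, batch_idx):
        x, y = batch
        return {'preds': self(x).argmax(dim=1), 'targets': y}

    def test_epoch_end(self, outputs):
        preds = torch.cat([o['preds'] for o in outputs]).cpu().tolist()
        targets = torch.cat([o['targets'] for o in outputs]).cpu().tolist()
        with open('predictions.csv', 'w', newline='') as f:
            writer = csv.writer(f)
            writer.writerow(['prediction', 'target'])
            writer.writerows(zip(preds, targets))
        return {}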
|
ModelCheckpoint.filepath cannot be None
|
[
"bug",
"help wanted"
] |
π Bug
The ModelCheckpoint callback's filepath argument cannot be None even though the docs say:
Can also be set to None, then it will be set to default location during trainer construction.
The exception is raised:
Traceback (most recent call last):
File "/home/pi3ni0/Workspace/random/siamese_cnn_glove.py", line 209, in <module>
checkpoint_callback = ModelCheckpoint(
File "/home/pi3ni0/.venv/dev/lib/python3.6/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 89, in __init__
if save_top_k > 0 and os.path.isdir(filepath) and len(os.listdir(filepath)) > 0:
File "/home/pi3ni0/.venv/dev/lib/python3.6/genericpath.py", line 42, in isdir
st = os.stat(s)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
This is because os.path.isdir cannot be applied to a None argument.
Code sample
checkpoint_callback = ModelCheckpoint(
filepath=None,
)
trainer = pl.Trainer(
checkpoint_callback=checkpoint_callback,
)
...
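A tiny standalone sketch of the kind of None-safe guard that would avoid the crash (an illustration of the idea, not the actual patch):
import os
from typing import Optional


def existing_nonempty_dir(filepath: Optional[str], save_top_k: int) -> bool:
    """None-safe version of the check that currently raises TypeError when filepath is None."""
    return (
        save_top_k > 0
        and filepath is not None              # the missing guard
        and os.path.isdir(filepath)
        and len(os.listdir(filepath)) > 0
    )


print(existing_nonempty_dir(None, save_top_k=1))  # False instead of a TypeError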
Environment
* CUDA:
- GPU:
- GeForce GTX 1650 with Max-Q Design
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.2
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.7.5
- tensorboard: 2.2.1
- tqdm: 4.45.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
|
Model checkpoint claims test_step() not defined
|
[
"bug",
"help wanted"
] |
π Bug
I'm attempting to make my model easily checkpointable for later testing. I have no issues with it creating the checkpoints, and loading the model in as follows seems to at least "work":
model = MyCoolModel.load_from_checkpoint(checkpoint_path, tags_csv=meta_path)
Here checkpoint_path points to the .ckpt file and meta_path to the tags.csv. My model works perfectly fine in a normal run: I have working training epochs, validation steps, and a final test step called at the end. The problem begins when I load my model in; I am greeted by an error saying I have never defined test_step():
Traceback (most recent call last):
File "main.py", line 74, in <module>
run_model(hparams)
File "main.py", line 64, in run_model
trainer.test()
File "/users2/mmatero/anaconda3/envs/project/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 904, in test
self.run_evaluation(test_mode=True)
File "/users2/mmatero/anaconda3/envs/project/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 329, in run_evaluation
raise MisconfigurationException(
pytorch_lightning.utilities.exceptions.MisconfigurationException: You called `.test()` without defining model's `.test_step()`. Please define and try again
Expected behavior
I have clearly defined both test_step and test_epoch_end within my model's definition, and they run completely fine when not loading from a checkpoint. I reuse my validation calls since the only difference is the dataloader they use; the operations are exactly the same.
def test_step(self, batch, batch_idx):
return self.validation_step(batch, batch_idx)
def test_epoch_end(self, outputs):
return self.validation_epoch_end(outputs)
So I'd expect them to still be defined after loading. I had other issues with pytorch-lightning ignoring my test_step definitions on other versions (specifically 0.7.5) but I have downgraded to one that works for a normal train/val/test loop.
Environment
PyTorch Version (e.g., 1.0): 1.4.0
Lightning Version: 0.7.3
OS (e.g., Linux): Ubuntu 16.04
How you installed PyTorch (conda, pip, source): Conda
Python version: 3.8.1
CUDA/cuDNN version: 10.1
GPU models and configuration: Titan XP x3
|
Folder names inconsistent if version is string and using `TensorBoardLogger`
|
[
"won't fix"
] |
A) According to TensorBoardLogger, if version is a string then it is used as the run-specific subdirectory name, otherwise 'version_${version}' is used.
B) However, in the callback 'version_${version}' is always used.
LINKS:
A)
pytorch-lightning/pytorch_lightning/loggers/tensorboard.py
Line 40
in
8b82ce0
If it is a string then it is used as the run-specific subdirectory name,
B)
pytorch-lightning/pytorch_lightning/trainer/callback_config.py
Line 55
in
3e8f2d9
f'version_{self.logger.version}',
Edit: This issue comes up as I generally use git commit IDs as version numbers.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.