title | labels | bodyText
---|---|---
TensorBoardLogger not working as expected with accumulate_grad_batches>1 | [
"bug",
"help wanted",
"logger"
] | 🐛 Bug
When logging to TensorBoard inside training_step while using accumulate_grad_batches > 1 in pl.Trainer(), the behavior is not as expected.
With accumulate_grad_batches = 1 everything looks good.
With accumulate_grad_batches = 8, multiple values are reported at the same step.
To Reproduce (sorry for not using colab)
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader, random_split
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger
from torchvision.datasets.mnist import MNIST
from torchvision import transforms
class LitClassifier(pl.LightningModule):
def __init__(self, hidden_dim=128, learning_rate=1e-3):
super().__init__()
self.save_hyperparameters()
self.l1 = torch.nn.Linear(28 * 28, self.hparams.hidden_dim)
self.l2 = torch.nn.Linear(self.hparams.hidden_dim, 10)
def forward(self, x):
x = x.view(x.size(0), -1)
x = torch.relu(self.l1(x))
x = torch.relu(self.l2(x))
return x
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
self.log("train_loss",loss)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
def run_test(accumulate_grad_batches, batch_size):
dataset = MNIST('', train=True, download=True, transform=transforms.ToTensor())
mnist_train, mnist_val = random_split(dataset, [55000, 5000])
train_loader = DataLoader(mnist_train,batch_size)
val_loader = DataLoader(mnist_val,batch_size)
model = LitClassifier()
trainer = pl.Trainer(
logger=TensorBoardLogger(os.getcwd(), name='bug'),
accumulate_grad_batches=accumulate_grad_batches,
max_epochs=2
)
trainer.fit(model, train_loader, val_loader)
run_test(1,32)
run_test(8,32)
Expected behavior
Take a mean of the values logged at same step?
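One possible user-side workaround (my sketch, not part of the report; it assumes trainer.accumulate_grad_batches is readable and relies on the torch / F imports above): buffer the per-micro-batch losses and log their mean only once per optimizer step.
# Drop-in replacement for the training_step above (workaround sketch)
def training_step(self, batch, batch_idx):
    x, y = batch
    loss = F.cross_entropy(self(x), y)
    self._loss_buffer = getattr(self, "_loss_buffer", [])
    self._loss_buffer.append(loss.detach())
    if (batch_idx + 1) % self.trainer.accumulate_grad_batches == 0:
        # one logged value per optimizer step instead of one per micro-batch
        self.log("train_loss", torch.stack(self._loss_buffer).mean())
        self._loss_buffer.clear()
    return loss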
Environment
PyTorch Version: 1.6
OS: Linux
How you installed PyTorch: conda
Python version: 3.8
Tensorboard: 2.3.0 |
callbacks: some callback hooks can be cleaned up | [
"feature",
"help wanted",
"refactor",
"callback"
] | 🚀 Feature
Motivation
There are some duplicate callback hooks which can be removed.
ON TRAIN START
ON EPOCH START <-- same 1
ON TRAIN EPOCH START <-- same 1
ON BATCH START <-- same 2
ON TRAIN BATCH START <-- same 2
ON BATCH END
ON TRAIN BATCH END
ON BATCH START
ON TRAIN BATCH START
ON BATCH END
ON TRAIN BATCH END
ON VALIDATION START
ON VALIDATION EPOCH START
ON VALIDATION BATCH START
ON VALIDATION BATCH END
ON VALIDATION EPOCH END
ON VALIDATION END
ON EPOCH END
ON TRAIN EPOCH END
ON TRAIN END
ON FIT END
TEARDOWN
SETUP
ON FIT START
ON PRETRAIN ROUTINE START
ON PRETRAIN ROUTINE END
ON TEST START
ON TEST EPOCH START
ON TEST BATCH START
ON TEST BATCH END
ON TEST EPOCH END
ON TEST END
ON FIT END
TEARDOWN
TEARDOWN
Pitch
ON EPOCH START
ON TRAIN START
ON TRAIN EPOCH START
ON TRAIN BATCH START
ON TRAIN BATCH END
ON TRAIN EPOCH END
ON TRAIN END
ON VALIDATION START
ON VALIDATION EPOCH START
ON VALIDATION BATCH START
ON VALIDATION BATCH END
ON VALIDATION EPOCH END
ON VALIDATION END
ON EPOCH END
ON FIT END
TEARDOWN
SETUP
ON FIT START
ON PRETRAIN ROUTINE START
ON PRETRAIN ROUTINE END
ON TEST START
ON TEST EPOCH START
ON TEST BATCH START
ON TEST BATCH END
ON TEST EPOCH END
ON TEST END
ON FIT END
TEARDOWN
TEARDOWN
Alternatives
Because research workflows vary, the hook-calling scheme above can't fit every need.
But it would be more intuitive to see something like the following
(one caveat: the user may think of TRAIN / VALIDATION as belonging to separate epochs)
ON TRAIN START
ON TRAIN EPOCH START
ON TRAIN BATCH START
ON TRAIN BATCH END
ON TRAIN EPOCH END
ON TRAIN END
ON VALIDATION START
ON VALIDATION EPOCH START
ON VALIDATION BATCH START
ON VALIDATION BATCH END
ON VALIDATION EPOCH END
ON VALIDATION END
ON FIT END
TEARDOWN
SETUP
ON FIT START
ON PRETRAIN ROUTINE START
ON PRETRAIN ROUTINE END
ON TEST START
ON TEST EPOCH START
ON TEST BATCH START
ON TEST BATCH END
ON TEST EPOCH END
ON TEST END
ON FIT END
TEARDOWN
TEARDOWN
Additional context
https://colab.research.google.com/drive/1xVpEVaMKsbnLL4cwLB60lhz8zL3FcWKo?usp=sharing |
Checkpoint hparams.yaml does not save current self.hparms, but only at self.save_hyperparameters | [
"help wanted",
"question",
"working as intended",
"design",
"checkpointing"
] | 🐛 Bug
I expect lightning_logs/version_0/hparams.yaml to match self.hparams when the checkpoint is written, or at least to match self.hparams when trainer.fit(model) is called.
Instead, hparams.yaml only contains the arguments captured by self.save_hyperparameters() (if called with no arguments, all __init__ arguments).
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1dmZjS_3SETwF3hAAeCq0xdrd0NQGs9nt?usp=sharing
To Reproduce
def __init__(self, output_neurons):
super().__init__()
# !!!!! CHANGED !!!!!
self.save_hyperparameters()
self.hparams["output_neurons"] = 3
self.layer = torch.nn.Linear(32, self.hparams["output_neurons"])
...
model = BoringModel(output_neurons=2)
Expected behavior
Inside hparams.yaml the following line:
output_neurons: 3
Actual behavior
Inside hparams.yaml the following line:
output_neurons: 2
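A possible user-side workaround (my assumption, not from the report): write the live self.hparams out yourself after mutating it, so the YAML on disk reflects the values actually used.
import yaml

def dump_current_hparams(model, path="hparams_current.yaml"):
    # self.hparams behaves like a dict (AttributeDict), so it can be dumped directly
    with open(path, "w") as f:
        yaml.safe_dump(dict(model.hparams), f)

# e.g. dump_current_hparams(model) after constructing/mutating the model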
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
Colab
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cu101
pytorch-lightning: 1.0.3
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Any other relevant information:
Additional context
I listed this as a bug, as it would go against (my) intuition. If this is intended behavior, this should be made clear in the documentation.
How I came across this issue
I trained a model and saved it with to_torchscript(). I wanted to share the hparams of this new model.pt with my team, but found out that hparams.yaml did not contain the hparam values I used to train the model with. I'm also adding additional information to self.hparams later in the pl.LightningModule, but this was also not included.
Imagine doing transfer learning. You set self.hparams["output_neurons"] = 10 to match the output size when loading the weights of a pre-trained model. Then you change self.hparams["output_neurons"] = 5 and do a self.output_layer = torch.nn.Linear(32, self.hparams["output_neurons"]) to train on your new task.
You would expect hparams.yaml to contain this new output size, but in the current PL version it does not. |
After DDP train processes have different best val paths | [
"bug",
"help wanted",
"priority: 0",
"distributed"
] | 🐛 Bug
Tied to huggingface/transformers#7852
There is no synchronisation/communication to ensure the model has finished saving before loading. If you look at ddp_spawn/ddp_cpu there is communication to ensure that each process has the same best_val_path stored in the model after save.
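A minimal sketch of the kind of synchronisation this refers to (illustrative only, not the actual Lightning internals; broadcast_object_list requires torch >= 1.8):
import torch.distributed as dist

def sync_best_model_path(checkpoint_callback):
    if dist.is_available() and dist.is_initialized():
        dist.barrier()  # wait until rank 0 has finished writing the .ckpt
        path = [checkpoint_callback.best_model_path]
        dist.broadcast_object_list(path, src=0)  # all ranks agree on the path
        checkpoint_callback.best_model_path = path[0]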
Run below on multi-gpu:
# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------
# --------------------------------------------
# --------------------------------------------
# USE THIS MODEL TO REPRODUCE A BUG YOU REPORT
# --------------------------------------------
# --------------------------------------------
# --------------------------------------------
import glob
import os
from tempfile import TemporaryDirectory
import torch
from torch.utils.data import Dataset
from pytorch_lightning import Trainer, LightningModule
from pytorch_lightning.callbacks import ModelCheckpoint
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class BoringModel(LightningModule):
def __init__(self):
"""
Testing PL Module
Use as follows:
- subclass
- modify the behavior for what you want
class TestModel(BaseTestModel):
def training_step(...):
# do your own thing
or:
model = BaseTestModel()
model.training_epoch_end = None
"""
super().__init__()
self.layer = torch.nn.Linear(32, 2)
def forward(self, x):
return self.layer(x)
def loss(self, batch, prediction):
# An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))
def step(self, x):
x = self.layer(x)
out = torch.nn.functional.mse_loss(x, torch.ones_like(x))
return out
def training_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
self.log('loss', loss)
return {"loss": loss}
def training_step_end(self, training_step_outputs):
return training_step_outputs
def validation_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
self.log('x', loss)
def test_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
self.log('y', loss)
def configure_optimizers(self):
optimizer = torch.optim.AdamW(self.layer.parameters(), lr=0.1)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
return [optimizer], [lr_scheduler]
def run_test():
class TestModel(BoringModel):
def validation_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
self.log('x', loss)
# fake data
train_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
val_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
# model
model = TestModel()
tmp_dir = 'temp/'
if os.path.exists(tmp_dir):
os.rmdir(tmp_dir)
trainer = Trainer(
default_root_dir=os.getcwd(),
max_epochs=2,
accelerator='ddp',
gpus=2,
checkpoint_callback=ModelCheckpoint(
dirpath=tmp_dir,
monitor='x',
mode='min',
save_top_k=1
)
)
trainer.fit(model, train_data, val_data)
checkpoints = list(sorted(glob.glob(os.path.join(tmp_dir, "*.ckpt"), recursive=True)))
print("checkpoints", checkpoints)
print(trainer.checkpoint_callback.best_model_path)
assert os.path.exists(
trainer.checkpoint_callback.best_model_path), f'Could not find checkpoint at rank {trainer.global_rank}'
if __name__ == '__main__':
run_test()
Output:
Traceback (most recent call last):
File "/home/jovyan/transformers/reproduce.py", line 139, in <module>
run_test()
File "/home/jovyan/transformers/reproduce.py", line 135, in run_test
trainer.checkpoint_callback.best_model_path), f'Could not find checkpoint at rank {trainer.global_rank}'
AssertionError: Could not find checkpoint at rank 1
Expected behavior
Assertion does not fail |
Get progress bar in VS Code instead of text/stream based progress bar | [
"feature",
"won't fix"
] | Hi, I'm one of the maintainers of the Python extension that provides Jupyter functionality in VS Code.
Here's the original issue:
https://github.com/microsoft/vscode-python/issues/14476#issuecomment-714895448
What is your question?
When running the code in the classic Jupyter notebook, we get the widget (I think tqdm) progress bar.
However, when running the exact same code in VS Code, we get the classic text-stream-based progress bar.
I would like the (tqdm) progress bar to be rendered in the output when running cells in VS Code.
Does this package auto-detect the host (whether it's JupyterLab/Notebook) and then render the progress bar as widgets, while rendering it via text streams when executed from another front end like VS Code?
Please note: we are not using Jupyter to run the code; we're starting the Python kernel and communicating with it over ZMQ directly. Hence Jupyter is not available.
Finally, VS Code does support IPyWidgets functionality.
Thanks for your help.
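For context, this is roughly how tqdm itself decides between the two bars (my understanding; whether Lightning's bar goes through exactly this path is an assumption): tqdm.auto returns the ipywidgets-based bar only when it detects an IPython kernel with ipywidgets available, and otherwise falls back to the text bar.
from tqdm.auto import tqdm  # widget bar in notebook front ends, text bar elsewhere

for _ in tqdm(range(100), desc="demo"):
    pass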
Code
import torch
from torch import nn
from torch.nn import functional as F
from pl_bolts.datamodules.mnist_datamodule import MNISTDataModule
import pytorch_lightning as pl
class LitLeNet(pl.LightningModule):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.conv1 = nn.Conv2d(1, 6, 5, padding=2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.classifier = nn.Linear(84, 10)
self.accuracy = pl.metrics.Accuracy()
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, (2, 2))
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, (2, 2))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.classifier(x)
x = F.softmax(x, dim=0)
return x
def configure_optimizers(self):
return torch.optim.Adam(self.parameters())
def _shared_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
self.log('acc', self.accuracy(y_hat, y), prog_bar=True)
return loss
def training_step(self, batch, batch_idx):
return self._shared_step(batch, batch_idx)
def validation_step(self, batch, batch_idx):
return self._shared_step(batch, batch_idx)
def test_step(self, batch, batch_idx):
return self._shared_step(batch, batch_idx)
def validation_epoch_end(self, outputs):
self.log('acc', self.accuracy.compute(), prog_bar=True)
mnist = MNISTDataModule('./data', num_workers=2)
model = LitLeNet()
trainer = pl.Trainer(max_epochs=1)
trainer.fit(model, datamodule=mnist)
Here's the output in VS Code
Epoch 0: 92%|ββββββββββ| 1718/1874 [00:17<00:01, 98.09it/s, loss=2.047, v_num=19, acc=0.875]
Validating: 0it [00:00, ?it/s]
Here's the output in Jupyter Notebooks
What have you tried?
What's your environment?
OS: [e.g. iOS, Linux, Win] - OSX
Packaging [e.g. pip, conda] - Pip
Version [e.g. 0.5.2.1]
Python 3.8.2
pytorch_lightning 1.0.3
pytorch-lightning-bolts 0.2.5
torch 1.6.0
torchvision 0.7.0 |
Remove tensorboard dependency | [
"feature",
"help wanted",
"won't fix",
"discussion"
] | 🚀 Feature
It would be cool if TB was removed as a dependency, as some people (like me) don't really use it.
Motivation
Tensorboard dependency currently represents close to 50% of the download volume of PL (assuming you start with pytorch installed) - 6.2 MB out of 12.6MB for me, when I install from conda. So removing it would make the package much slimmer.
I use only Wandb for logging - or just the progress bar for quick test runs.
Alternatives
Tensorboard could be put into the loggers part of extras_require in setup.py, as sketched below.
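A hypothetical setup.py arrangement illustrating this (package names and version pins here are examples, not the project's actual metadata):
from setuptools import setup

setup(
    name="pytorch-lightning",
    install_requires=["torch>=1.3"],  # core dependencies only (illustrative)
    extras_require={
        # installed via: pip install "pytorch-lightning[loggers]"
        "loggers": ["tensorboard>=2.2.0", "wandb"],
    },
) |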
Saving / loading hparams type in checkpoint | [
"help wanted",
"docs"
] | 🐛 Bug
There seems to be an issue with saving or loading hyperparams type in checkpoints. Related to #924 (unrelated to #3998).
Please reproduce using the BoringModel and post here
Here is the BoringModel gist using the snippet from @awaelchli in #924
(running in colab has been unreliable, see #3998)
https://gist.github.com/chiragraman/c235f6f2a25b2432bde1a08ae6ed1b03
Behavior
With my local master at 155f4e9, I'm getting the following error:
type of hparams <class 'pytorch_lightning.utilities.parsing.AttributeDict'>
class of hparams type AttributeDict
accessing hparams 1 no problem
GPU available: True, used: False
TPU available: False, using: 0 TPU cores
/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: GPU available but not used. Set the --gpus flag when calling the script.
warnings.warn(*args, **kwargs)
/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
Epoch 0: 100%|ββββββββββββββββββ| 2/2 [00:00<00:00, 427.58it/s, loss=2.280, v_num=11]
type of hparams <class 'pytorch_lightning.utilities.parsing.AttributeDict'>
class of hparams type AttributeDict
Traceback (most recent call last):
File "/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/parsing.py", line 160, in __getattr__
return self[key]
KeyError: 'something'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "ckpt-loading.py", line 118, in <module>
run_test()
File "ckpt-loading.py", line 106, in run_test
model = BoringModel.load_from_checkpoint(ckpt_path)
File "/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 154, in load_from_checkpoint
model = cls._load_model_state(checkpoint, strict=strict, **kwargs)
File "/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 198, in _load_model_state
model = cls(**_cls_kwargs_filtered)
File "ckpt-loading.py", line 40, in __init__
print("accessing hparams", self.hparams.something, "no problem")
File "/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/parsing.py", line 162, in __getattr__
raise AttributeError(f'Missing attribute "{key}"') from exp
AttributeError: Missing attribute "something"
Also, for isolating the problem, look at this code:
pytorch-lightning/pytorch_lightning/trainer/connectors/checkpoint_connector.py
Lines 297 to 306 in 207ff72:
if model.hparams:
if hasattr(model, '_hparams_name'):
checkpoint[LightningModule.CHECKPOINT_HYPER_PARAMS_NAME] = model._hparams_name
# add arguments to the checkpoint
if OMEGACONF_AVAILABLE:
checkpoint[LightningModule.CHECKPOINT_HYPER_PARAMS_KEY] = model.hparams
if isinstance(model.hparams, Container):
checkpoint[LightningModule.CHECKPOINT_HYPER_PARAMS_TYPE] = type(model.hparams)
else:
checkpoint[LightningModule.CHECKPOINT_HYPER_PARAMS_KEY] = dict(model.hparams)
If you add a breakpoint or log line for debugging after the inner if, so that:
if isinstance(model.hparams, Container):
print("Saving hparams type")
checkpoint[LightningModule.CHECKPOINT_HYPER_PARAMS_TYPE] = type(model.hparams)
you should see that it doesn't get triggered.
Environment
PyTorch Lightning Version: 1.0.2
PyTorch Version (e.g., 1.0) : 1.6
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): conda
Python version: 3.8.5 |
Get help of argparse from docstring. | [
"feature",
"help wanted"
] | 🚀 Feature
Generate argparse help strings from the docstring.
Motivation
In-code TODO: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/utilities/argparse_utils.py#L163. A rough sketch of the idea follows.
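Rough illustration of the requested behaviour (my sketch, not the project's implementation): parse ":param name: description" lines out of a docstring and feed them to argparse as help strings.
import inspect
from argparse import ArgumentParser

def add_argparse_args_with_docstring_help(cls, parser: ArgumentParser) -> ArgumentParser:
    doc = inspect.getdoc(cls.__init__) or inspect.getdoc(cls) or ""
    help_by_name = {}
    for line in doc.splitlines():
        line = line.strip()
        if line.startswith(":param "):
            name, _, text = line[len(":param "):].partition(":")
            help_by_name[name.strip()] = text.strip()
    for name, param in inspect.signature(cls.__init__).parameters.items():
        if name == "self":
            continue
        default = None if param.default is inspect.Parameter.empty else param.default
        parser.add_argument(f"--{name}", default=default, help=help_by_name.get(name, ""))
    return parser |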
[Docs] redundant whitespace included in debugging section code, new-project.rst | [
"docs"
] | 📚 Documentation
For typos and doc fixes, please go ahead and:
Create an issue.
Fix the typo.
Submit a PR.
Thanks!
Typo
https://pytorch-lightning.readthedocs.io/en/latest/new-project.html#debugging
# train only 20% of an epoch
trainer = pl. Trainer(limit_train_batches=0.2)
This works, but I think it should be aligned in appearance with the rest of the code.
Fix
trainer = pl.Trainer(limit_train_batches=0.2) # remove redundant whitespace
I will submit a PR later |
Metrics fail on DP and multiple GPU | [
"feature",
"help wanted",
"strategy: dp"
] | 🐛 Bug
When using a metric such as Accuracy from pytorch_lightning.metrics on a machine with 4 GPUs in 'dp' mode, there is an error because the metric state is accumulated on a different device. In the case of Accuracy, at line:
pytorch-lightning/pytorch_lightning/metrics/classification/accuracy.py
Line 108 in c8ccec7:
self.correct += torch.sum(preds == target)
The arguments to torch.sum are on the device the metric is being called from, but self.correct is on a different one. The traceback is as follows:
self.accuracy_val(y_hat, y)
File "/home/***/.conda/envs/***/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/***/.conda/envs/***/lib/python3.8/site-packages/pytorch_lightning/metrics/metric.py", line 153, in forward
self.update(*args, **kwargs)
File "/home/***/.conda/envs/***/lib/python3.8/site-packages/pytorch_lightning/metrics/metric.py", line 199, in wrapped_func
return update(*args, **kwargs)
File "/home/***/.conda/envs/***/lib/python3.8/site-packages/pytorch_lightning/metrics/classification/accuracy.py", line 109, in update
self.correct += torch.sum(preds == target)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1zcU1ADuHZj82clrBysv-EGfgqG7SxUhN#scrollTo=V7ELesz1kVQo
To Reproduce
The shared colab is not going to be able to replicate the bug since it needs 'dp' on multiple gpus, but it should give an idea of when the error occurs. So setting
gpus=4,
accelerator="dp",
in the Trainer and then using a metric should bring up the issue. I have tested it with Accuracy but other users in the Slack channel reported it for other metrics such as Precision or Recall.
Expected behavior
The devices should be the same when the values are added together. I am not sure which would be the correct approach; I have crudely worked around it with:
self.correct += torch.sum(preds.cuda(self.correct.device.index) == target.cuda(self.correct.device.index))
self.total += target.cuda(self.correct.device.index).numel()
in the case of Accuracy, but that is quite an ugly way of dealing with it.
Update: Although this doesn't produce the error, the accuracy is not properly computed, as values get reset to 0 for some reason between steps.
Environment
CUDA:
- GPU:
- GeForce GTX 1080 Ti
- GeForce GTX 1080 Ti
- GeForce GTX 1080 Ti
- GeForce GTX 1080 Ti
- available: True
- version: 10.2
Packages:
- numpy: 1.19.2
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 1.0.3
- tqdm: 4.50.2
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor:
- python: 3.8.5
- version: #1 SMP Debian 4.19.152-1 (2020-10-18) |
Does AccuracyMetric compute accumulated or mean values of each batch? | [
"question"
] | There are two ways to calculate accuracy:
calculate the accuracy of each batch, then take the mean in validation_epoch_end
accumulate each batch's correct and total counts, then compute the accuracy once in validation_epoch_end (both are sketched below)
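For illustration (my sketch, not the library code), the two options differ whenever batch sizes vary, e.g. when the last batch is smaller:
import torch

def accuracy_mean_of_batches(batch_preds, batch_targets):
    # option 1: average the per-batch accuracies (each batch weighted equally)
    accs = [(p == t).float().mean() for p, t in zip(batch_preds, batch_targets)]
    return torch.stack(accs).mean()

def accuracy_accumulated(batch_preds, batch_targets):
    # option 2: accumulate correct/total (each sample weighted equally)
    correct = sum((p == t).sum() for p, t in zip(batch_preds, batch_targets))
    total = sum(t.numel() for t in batch_targets)
    return correct.float() / total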
Which one does AccuracyMetric use? |
Adding a Metric to LightningModule prevents loading of a checkpoint / weights | [
"bug",
"help wanted",
"checkpointing"
] | 🐛 Bug
Adding a Metric like Accuracy prevents the loading of a .ckpt due to missing keys:
RuntimeError: Error(s) in loading state_dict for BoringModel2:
Missing key(s) in state_dict: "pl_accuracy.correct", "pl_accuracy.total".
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1km0SE2TVRuif6R4uF8vihqUl-FshXu_u?usp=sharing
To Reproduce
class BoringModel2(BoringModel):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# NEW adding metric
self.pl_accuracy = Accuracy()
model2 = BoringModel2.load_from_checkpoint(checkpoint_path="example.ckpt")
Expected behavior
Being able to add new metrics to a model without changing the layers (e.g. in transfer learning settings), but still be able to load the weights of a model without those metrics.
Actual behavior
RuntimeError Traceback (most recent call last)
<ipython-input-15-50d6f204e428> in <module>()
----> 1 model2 = BoringModel2.load_from_checkpoint(checkpoint_path="example.ckpt") # , strict=False
2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
1043 if len(error_msgs) > 0:
1044 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1045 self.__class__.__name__, "\n\t".join(error_msgs)))
1046 return _IncompatibleKeys(missing_keys, unexpected_keys)
1047
RuntimeError: Error(s) in loading state_dict for BoringModel2:
Missing key(s) in state_dict: "pl_accuracy.correct", "pl_accuracy.total".
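A possible workaround, already hinted at by the commented-out argument in the repro (assuming load_from_checkpoint forwards strict to load_state_dict):
# tolerate the missing metric buffers when restoring the weights
model2 = BoringModel2.load_from_checkpoint("example.ckpt", strict=False)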
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
Colab:
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cu101
pytorch-lightning: 1.0.3
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context |
show progressbar only on progress_rank 0 on ddp_slurm | [
"feature",
"help wanted"
] | 🐛 Bug
The progress bar is shown repeatedly (once per process) when using SLURM for multi-GPU/multi-node training.
To Reproduce
Using pytorch_lightning
To run this template just do:
python generative_adversarial_net.py
After a few epochs, launch TensorBoard to see the images being generated at every batch:
tensorboard --logdir default
import os
from argparse import ArgumentParser, Namespace
from collections import OrderedDict
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from pytorch_lightning.core import LightningModule
from pytorch_lightning.trainer import Trainer
class Generator(nn.Module):
def __init__(self, latent_dim, img_shape):
super().__init__()
self.img_shape = img_shape
def block(in_feat, out_feat, normalize=True):
layers = [nn.Linear(in_feat, out_feat)]
if normalize:
layers.append(nn.BatchNorm1d(out_feat, 0.8))
layers.append(nn.LeakyReLU(0.2, inplace=True))
return layers
self.model = nn.Sequential(
*block(latent_dim, 128, normalize=False),
*block(128, 256),
*block(256, 512),
*block(512, 1024),
nn.Linear(1024, int(np.prod(img_shape))),
nn.Tanh()
)
def forward(self, z):
img = self.model(z)
img = img.view(img.size(0), *self.img_shape)
return img
class Discriminator(nn.Module):
def __init__(self, img_shape):
super().__init__()
self.model = nn.Sequential(
nn.Linear(int(np.prod(img_shape)), 512),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(512, 256),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(256, 1),
nn.Sigmoid(),
)
def forward(self, img):
img_flat = img.view(img.size(0), -1)
validity = self.model(img_flat)
return validity
class GAN(LightningModule):
def __init__(self,
latent_dim: int = 100,
lr: float = 0.0002,
b1: float = 0.5,
b2: float = 0.999,
batch_size: int = 64, **kwargs):
super().__init__()
self.latent_dim = latent_dim
self.lr = lr
self.b1 = b1
self.b2 = b2
self.batch_size = batch_size
# networks
mnist_shape = (1, 28, 28)
self.generator = Generator(latent_dim=self.latent_dim, img_shape=mnist_shape)
self.discriminator = Discriminator(img_shape=mnist_shape)
self.validation_z = torch.randn(8, self.latent_dim)
self.example_input_array = torch.zeros(2, self.latent_dim)
def forward(self, z):
return self.generator(z)
def adversarial_loss(self, y_hat, y):
return F.binary_cross_entropy(y_hat, y)
def training_step(self, batch, batch_idx, optimizer_idx):
imgs, _ = batch
# sample noise
z = torch.randn(imgs.shape[0], self.latent_dim)
z = z.type_as(imgs)
# train generator
if optimizer_idx == 0:
# generate images
self.generated_imgs = self(z)
# log sampled images
sample_imgs = self.generated_imgs[:6]
grid = torchvision.utils.make_grid(sample_imgs)
self.logger.experiment.add_image('generated_images', grid, 0)
# ground truth result (ie: all fake)
# put on GPU because we created this tensor inside training_loop
valid = torch.ones(imgs.size(0), 1)
valid = valid.type_as(imgs)
# adversarial loss is binary cross-entropy
g_loss = self.adversarial_loss(self.discriminator(self(z)), valid)
tqdm_dict = {'g_loss': g_loss}
return {'loss':g_loss}
# train discriminator
if optimizer_idx == 1:
# Measure discriminator's ability to classify real from generated samples
# how well can it label as real?
valid = torch.ones(imgs.size(0), 1)
valid = valid.type_as(imgs)
real_loss = self.adversarial_loss(self.discriminator(imgs), valid)
# how well can it label as fake?
fake = torch.zeros(imgs.size(0), 1)
fake = fake.type_as(imgs)
fake_loss = self.adversarial_loss(
self.discriminator(self(z).detach()), fake)
# discriminator loss is the average of these
d_loss = (real_loss + fake_loss) / 2
tqdm_dict = {'d_loss': d_loss}
return {'loss':d_loss}
def configure_optimizers(self):
lr = self.lr
b1 = self.b1
b2 = self.b2
opt_g = torch.optim.Adam(self.generator.parameters(), lr=lr, betas=(b1, b2))
opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=lr, betas=(b1, b2))
return [opt_g, opt_d], []
def train_dataloader(self):
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize([0.5], [0.5])])
dataset = MNIST(os.getcwd(), train=True, download=True, transform=transform)
return DataLoader(dataset, batch_size=self.batch_size)
def on_epoch_end(self):
pass
def main(args: Namespace) -> None:
# ------------------------
# 1 INIT LIGHTNING MODEL
# ------------------------
model = GAN(**vars(args))
# ------------------------
# 2 INIT TRAINER
# ------------------------
# If use distubuted training PyTorch recommends to use DistributedDataParallel.
# See: https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel
trainer = Trainer(gpus=8, num_nodes = 1, distributed_backend= 'ddp', profiler=True , max_epochs=10)
# ------------------------
# 3 START TRAINING
# ------------------------
trainer.fit(model)
if __name__ == '__main__':
parser = ArgumentParser()
parser.add_argument("--batch_size", type=int, default=64, help="size of the batches")
parser.add_argument("--lr", type=float, default=0.0002, help="adam: learning rate")
parser.add_argument("--b1", type=float, default=0.5,
help="adam: decay of first order momentum of gradient")
parser.add_argument("--b2", type=float, default=0.999,
help="adam: decay of first order momentum of gradient")
parser.add_argument("--latent_dim", type=int, default=100,
help="dimensionality of the latent space")
hparams = parser.parse_args()
main(hparams)
Submit
#!/bin/bash
#SBATCH --gres=gpu:8
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
conda activate your_env
srun python gan.py
Expected behavior
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 6 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
GPU available: True, used: True
LOCAL_RANK: 2 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 4 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
initializing ddp: GLOBAL_RANK: 6, MEMBER: 7/8
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 3 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
initializing ddp: GLOBAL_RANK: 2, MEMBER: 3/8
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
initializing ddp: GLOBAL_RANK: 4, MEMBER: 5/8
Multi-processing is handled by Slurm.
LOCAL_RANK: 7 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
initializing ddp: GLOBAL_RANK: 3, MEMBER: 4/8
initializing ddp: GLOBAL_RANK: 7, MEMBER: 8/8
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/8
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 5 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/8
initializing ddp: GLOBAL_RANK: 5, MEMBER: 6/8
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
| Name | Type | Params | In sizes | Out sizes
----------------------------------------------------------------------------
0 | generator | Generator | 1 M | [2, 100] | [2, 1, 28, 28]
1 | discriminator | Discriminator | 533 K | ? | ?
Training: 0it [00:00, ?it/s]
Training: 0%| | 0/118 [00:00<?, ?it/s]
Epoch 0: 1%| | 1/118 [00:00<01:10, 1.65it/s, loss=0.710, v_num=194638]
Epoch 0: 2%|β | 2/118 [00:00<00:36, 3.17it/s, loss=0.686, v_num=194638]
Epoch 0: 3%|β | 3/118 [00:00<00:24, 4.61it/s, loss=0.664, v_num=194638]
Epoch 0: 3%|β | 4/118 [00:00<00:19, 5.97it/s, loss=0.646, v_num=194638]
Epoch 0: 4%|β | 5/118 [00:00<00:15, 7.25it/s, loss=0.630, v_num=194638]
Epoch 0: 5%|β | 6/118 [00:00<00:13, 8.47it/s, loss=0.630, v_num=194638]
Epoch 0: 6%|β | 7/118 [00:00<00:11, 9.62it/s, loss=0.606, v_num=194638]
Epoch 0: 7%|β | 8/118 [00:00<00:10, 10.71it/s, loss=0.597, v_num=194638]
Epoch 0: 8%|β | 9/118 [00:00<00:09, 11.74it/s, loss=0.589, v_n
...
...
Actual behavior
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 6 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
GPU available: True, used: True
LOCAL_RANK: 2 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 4 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
initializing ddp: GLOBAL_RANK: 6, MEMBER: 7/8
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 3 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
initializing ddp: GLOBAL_RANK: 2, MEMBER: 3/8
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
initializing ddp: GLOBAL_RANK: 4, MEMBER: 5/8
Multi-processing is handled by Slurm.
LOCAL_RANK: 7 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
initializing ddp: GLOBAL_RANK: 3, MEMBER: 4/8
initializing ddp: GLOBAL_RANK: 7, MEMBER: 8/8
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/8
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Multi-processing is handled by Slurm.
LOCAL_RANK: 5 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/8
initializing ddp: GLOBAL_RANK: 5, MEMBER: 6/8
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
Set SLURM handle signals.
| Name | Type | Params | In sizes | Out sizes
----------------------------------------------------------------------------
0 | generator | Generator | 1 M | [2, 100] | [2, 1, 28, 28]
1 | discriminator | Discriminator | 533 K | ? | ?
Training: 0it [00:00, ?it/s]
Training: 0%| | 0/118 [00:00<?, ?it/s]
Epoch 0: 0%| | 0/118 [00:00<?, ?it/s]
Epoch 0: 1%| | 1/118 [00:00<01:10, 1.65it/s]
Epoch 0: 1%| | 1/118 [00:00<01:10, 1.65it/s, loss=0.710, v_num=194638]
Epoch 0: 2%|β | 2/118 [00:00<00:36, 3.17it/s, loss=0.686, v_num=194638]
Epoch 0: 3%|β | 3/118 [00:00<00:24, 4.61it/s, loss=0.664, v_num=194638]
Epoch 0: 3%|β | 4/118 [00:00<00:19, 5.97it/s, loss=0.646, v_num=194638]
Epoch 0: 4%|β | 5/118 [00:00<00:15, 7.25it/s, loss=0.630, v_num=194638]
Epoch 0: 5%|β | 6/118 [00:00<00:13, 8.47it/s, loss=0.630, v_num=194638]
Epoch 0: 5%|β | 6/118 [00:00<00:13, 8.47it/s, loss=0.617, v_num=194638]
Epoch 0: 6%|β | 7/118 [00:00<00:11, 9.62it/s, loss=0.606, v_num=194638]
Epoch 0: 7%|β | 8/118 [00:00<00:10, 10.71it/s, loss=0.597, v_num=194638]
Epoch 0: 8%|β | 9/118 [00:00<00:09, 11.74it/s, loss=0.589, v_n
Training: 0it [00:00, ?it/s]
Training: 0%| | 0/118 [00:00<?, ?it/s]
Epoch 0: 0%| | 0/118 [00:00<?, ?it/s]
Epoch 0: 1%| | 1/118 [00:00<01:10, 1.65it/s]
Epoch 0: 1%| | 1/118 [00:00<01:11, 1.65it/s, loss=0.710, v_num=194638]
Epoch 0: 2%|β | 2/118 [00:00<00:36, 3.17it/s, loss=0.686, v_num=194638]
Epoch 0: 3%|β | 3/118 [00:00<00:24, 4.61it/s, loss=0.665, v_num=194638]
Epoch 0: 3%|β | 4/118 [00:00<00:19, 5.97it/s, loss=0.647, v_num=194638]
Epoch 0: 4%|β | 5/118 [00:00<00:15, 7.25it/s, loss=0.631, v_num=194638]
Epoch 0: 5%|β | 6/118 [00:00<00:13, 8.47it/s, loss=0.631, v_num=194638]
Epoch 0: 5%|β | 6/118 [00:00<00:13, 8.46it/s, loss=0.618, v_num=194638]
Epoch 0: 6%|β | 7/118 [00:00<00:11, 9.61it/s, loss=0.606, v_num=194638]
Epoch 0: 7%|β | 8/118 [00:00<00:10, 10.70it/s, loss=0.597, v_num=194638]
Epoch 0: 8%|β | 9/118 [00:00<00:09, 11.74it/s, loss=0.590, v_n
Training: 0it [00:00, ?it/s]
Training: 0%| | 0/118 [00:00<?, ?it/s]
Epoch 0: 0%| | 0/118 [00:00<?, ?it/s]
Epoch 0: 1%| | 1/118 [00:00<01:10, 1.65it/s]
Epoch 0: 1%| | 1/118 [00:00<01:10, 1.65it/s, loss=0.711, v_num=194638]
Epoch 0: 2%|β | 2/118 [00:00<00:36, 3.17it/s, loss=0.686, v_num=194638]
Environment
environment
OS:CentOS Linux
Python version: 3.8
install packages
pytorch-lightning 1.0.1
torch 1.6.0
torchvision 0.7.0
How to fix (my understanding)
ddp_slurm_accelerator.py
# toggle prog bar
if self.trainer.global_rank == 0 and self.trainer.progress_bar_callback is not None:
self.trainer.progress_bar_callback.disable()
To
# toggle prog bar
if self.trainer.global_rank != 0 and self.trainer.progress_bar_callback is not None:
self.trainer.progress_bar_callback.disable() |
Not-yet-existing resume_from_checkpoint for auto-resubmit | [
"feature",
"help wanted",
"checkpointing"
] | 🚀 Feature
Accept Not-yet-existing resume_from_checkpoint in Trainer for automatic training resume / auto-resubmit.
Motivation
Cloud ML training services (e.g. Google AI Platform Training, AWS SageMaker, AWS Batch) offer a job auto-retry feature.
If we can specify a checkpoint path, job auto-retry can be used for training resume / resubmit.
Unfortunately, PyTorch Lightning cannot accept a not-yet-existing file as the resume_from_checkpoint argument of Trainer; it simply raises an error.
The motivation of this feature request is to enable training resume through a not-yet-existing resume_from_checkpoint.
(This feature looks similar to the auto-resubmit of PL's SLURM support, but I am a total newbie about it, so this could be nonsense.)
Pitch
current checkpoint restore process:
pytorch-lightning/pytorch_lightning/trainer/connectors/checkpoint_connector.py
Lines 57 to 60 in 3abfec8:
We attempt to restore weights in this order:
1. HPC weights.
2. if no HPC weights restore checkpoint_path weights
3. otherwise don't restore weights
It uses the (existing) resume_from_checkpoint.
If no file exists at the specified path, it raises an error.
What I hope for is:
1. HPC weights.
2. if no HPC weights, "try to" restore checkpoint_path weights
3. otherwise don't restore weights
This means that if the checkpoint_path (resume_from_checkpoint) file does not exist, it is simply ignored and training starts from scratch.
In that case, training starts normally from scratch and PL then saves checkpoints.
If we set the save path equal to resume_from_checkpoint, the latest checkpoint file will exist at the resume_from_checkpoint path.
When job auto-retry is triggered, a checkpoint file now exists at resume_from_checkpoint, so the retried job loads it and training resumes properly.
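Until then, a user-side workaround sketch (paths are illustrative): only pass resume_from_checkpoint when the file already exists, and always save to that same path.
import os
from pytorch_lightning import Trainer

ckpt_path = "/mnt/checkpoints/last.ckpt"  # hypothetical retry-stable path
trainer = Trainer(
    resume_from_checkpoint=ckpt_path if os.path.exists(ckpt_path) else None,
)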
Alternatives
Use hpc_save & hpc_load's resume system for normal training.
As far as I can tell from the code, the "HPC weights load" (for SLURM?) enables auto-resubmit based on a directory (not a file) plus a file-name rule (hpc_ckpt_{ckpt_number}.ckpt).
If we accept a checkpoint directory (e.g. resume_from_checkpoint_dir), the same mechanism can be used for resume/resubmit.
model.to_onnx() fails if self.example_input_array is a list | [
"feature",
"help wanted"
] | 🐛 Bug
Using model.to_onnx() does not work if the defined self.example_input_array is a list or tuple of multiple inputs.
The reason is that it tries to call inputs.to(device) on the list object:
Traceback (most recent call last):
File "C:\Users\Tobias\Anaconda3\envs\pytorch_local\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 509, in train
self.train_loop.on_train_end()
File "C:\Users\Tobias\Anaconda3\envs\pytorch_local\lib\site-packages\pytorch_lightning\trainer\training_loop.py", line 182, in on_train_end
self.trainer.call_hook('on_train_end')
File "C:\Users\Tobias\Anaconda3\envs\pytorch_local\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 829, in call_hook
output = hook_fx(*args, **kwargs)
File "C:\Users\Tobias\PycharmProjects\QG-GAN\src\utils\QG_GAN.py", line 178, in on_train_end
self.to_onnx(onnx_file, export_params=True)
File "C:\Users\Tobias\Anaconda3\envs\pytorch_local\lib\site-packages\pytorch_lightning\core\lightning.py", line 1527, in to_onnx
input_data = input_data.to(self.device)
AttributeError: 'list' object has no attribute 'to'
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1ZxxIBLRVXF-F0rCn68iYi2WX-GxILS9Z?usp=sharing
Expected behavior
Save the onnx-file, even if the defined self.example_input_array is a list of arrays.
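A sketch of the kind of handling requested (illustrative helper, not the Lightning implementation): recurse into lists/tuples instead of calling .to() on the container.
import torch

def move_input_to_device(inputs, device):
    if isinstance(inputs, (list, tuple)):
        return type(inputs)(move_input_to_device(x, device) for x in inputs)
    return inputs.to(device) if isinstance(inputs, torch.Tensor) else inputs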
Environment
CUDA:
GPU:
GeForce RTX 2080 SUPER
available: True
version: 10.2
Packages:
numpy: 1.19.1
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 1.0.3
tqdm: 4.49.0
System:
OS: Windows
architecture:
64bit
WindowsPE
processor: Intel64 Family 6 Model 158 Stepping 13, GenuineIntel
python: 3.8.5
version: 10.0.18362 |
LearningRateMonitor produces inconsistent logs with logging_interval="epoch" | [
"bug",
"help wanted",
"logger"
] | 🐛 Bug
LearningRateMonitor uses step=trainer.current_epoch whereas in other places, it is always step=trainer.global_step.
This creates inconsistencies and makes the log hard to process:
https://colab.research.google.com/drive/1bucI1oGCc_xNsnP_lvBWTz3x_bauH3o7
[{'lr-SGD': 0.1, 'step': 0},
{'epoch': 0, 'step': 49, 'training_loss': 6.862762529635802e-05},
{'epoch': 0, 'step': 99, 'training_loss': 4.8670010244222794e-09},
{'epoch': 0, 'step': 149, 'training_loss': 1.5898393712632242e-13},
{'epoch': 0, 'step': 199, 'training_loss': 4.135580766728708e-14},
{'epoch': 0, 'step': 249, 'training_loss': 5.2458037913538647e-14},
{'epoch': 0, 'step': 299, 'training_loss': 4.879430193227563e-14},
{'epoch': 0, 'step': 312, 'validation_loss': 4.705802669193945e-14},
{'lr-SGD': 0.010000000000000002, 'step': 1},
{'epoch': 1, 'step': 349, 'training_loss': 4.884981308350689e-14},
{'epoch': 1, 'step': 399, 'training_loss': 4.6407322429331543e-14},
{'epoch': 1, 'step': 449, 'training_loss': 4.679590048795035e-14},
{'epoch': 1, 'step': 499, 'training_loss': 4.3798298321462426e-14},
{'epoch': 1, 'step': 549, 'training_loss': 4.368727601899991e-14},
{'epoch': 1, 'step': 599, 'training_loss': 4.335420911161236e-14},
{'epoch': 1, 'step': 625, 'validation_loss': 4.4799982544226416e-14},
{'lr-SGD': 0.0010000000000000002, 'step': 2},
{'epoch': 2, 'step': 649, 'training_loss': 4.5352610555937645e-14},
{'epoch': 2, 'step': 699, 'training_loss': 4.551914400963142e-14},
{'epoch': 2, 'step': 749, 'training_loss': 4.4853010194856324e-14},
{'epoch': 2, 'step': 799, 'training_loss': 4.2021941482062175e-14},
{'epoch': 2, 'step': 849, 'training_loss': 4.651834473179406e-14},
{'epoch': 2, 'step': 899, 'training_loss': 4.296563105299356e-14},
{'epoch': 2, 'step': 938, 'validation_loss': 4.4716627725953015e-14},
{'fake_test_acc': 4.718447854656915e-14, 'step': 939}]
For lr-SGD rows, step is the epoch, otherwise, step is the global step.
Expected behavior
I expect step to be used consistently, i.e. it should be always be step=trainer.global_step.
Additionally, LearningRateMonitor could provide a value for epoch=trainer.current_epoch.
Environment
See Colab notebook. |
WandbLogger _sanitize_callable_params throws AttributeError if param does not have __name__ | [
"bug",
"help wanted",
"logger"
] | 🐛 Bug
Using WandB logger throws an error:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/adrian/.conda/envs/lightning/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/home/adrian/repositories/pytorch-lightning/pytorch_lightning/accelerators/ddp_spawn_accelerator.py", line 145, in ddp_train
self.trainer.train_loop.setup_training(model)
File "/home/adrian/repositories/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 135, in setup_training
self.trainer.logger.log_hyperparams(ref_model.hparams_initial)
File "/home/adrian/repositories/pytorch-lightning/pytorch_lightning/utilities/distributed.py", line 35, in wrapped_fn
return fn(*args, **kwargs)
File "/home/adrian/repositories/pytorch-lightning/pytorch_lightning/loggers/wandb.py", line 138, in log_hyperparams
params = self._sanitize_callable_params(params)
File "/home/adrian/repositories/pytorch-lightning/pytorch_lightning/loggers/base.py", line 194, in _sanitize_callable_params
return {key: _sanitize_callable(val) for key, val in params.items()}
File "/home/adrian/repositories/pytorch-lightning/pytorch_lightning/loggers/base.py", line 194, in <dictcomp>
return {key: _sanitize_callable(val) for key, val in params.items()}
File "/home/adrian/repositories/pytorch-lightning/pytorch_lightning/loggers/base.py", line 191, in _sanitize_callable
return val.__name__
File "/home/adrian/.conda/envs/lightning/lib/python3.7/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'Backbone' object has no attribute '__name__'
To Reproduce
from argparse import ArgumentParser
import torch
import pytorch_lightning as pl
from torch.nn import functional as F
from torch.utils.data import DataLoader, random_split
from pytorch_lightning.loggers import WandbLogger
try:
from torchvision.datasets.mnist import MNIST
from torchvision import transforms
except Exception as e:
from tests.base.datasets import MNIST
class Backbone(torch.nn.Module):
def __init__(self, hidden_dim=128):
super().__init__()
self.l1 = torch.nn.Linear(28 * 28, hidden_dim)
self.l2 = torch.nn.Linear(hidden_dim, 10)
def forward(self, x):
x = x.view(x.size(0), -1)
x = torch.relu(self.l1(x))
x = torch.relu(self.l2(x))
return x
class LitClassifier(pl.LightningModule):
def __init__(self, backbone, learning_rate=1e-3):
super().__init__()
self.save_hyperparameters()
self.backbone = backbone
def forward(self, x):
# use forward for inference/predictions
embedding = self.backbone(x)
return embedding
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.backbone(x)
loss = F.cross_entropy(y_hat, y)
self.log('train_loss', loss)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self.backbone(x)
loss = F.cross_entropy(y_hat, y)
self.log('valid_loss', loss)
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = self.backbone(x)
loss = F.cross_entropy(y_hat, y)
self.log('test_loss', loss)
def configure_optimizers(self):
# self.hparams available because we called self.save_hyperparameters()
return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
@staticmethod
def add_model_specific_args(parent_parser):
parser = ArgumentParser(parents=[parent_parser], add_help=False)
parser.add_argument('--learning_rate', type=float, default=0.0001)
return parser
def cli_main():
pl.seed_everything(1234)
# ------------
# args
# ------------
parser = ArgumentParser()
parser.add_argument('--batch_size', default=32, type=int)
parser.add_argument('--hidden_dim', type=int, default=128)
parser = pl.Trainer.add_argparse_args(parser)
parser = LitClassifier.add_model_specific_args(parser)
args = parser.parse_args()
# ------------
# data
# ------------
dataset = MNIST('', train=True, download=True, transform=transforms.ToTensor())
mnist_test = MNIST('', train=False, download=True, transform=transforms.ToTensor())
mnist_train, mnist_val = random_split(dataset, [55000, 5000])
train_loader = DataLoader(mnist_train, batch_size=args.batch_size)
val_loader = DataLoader(mnist_val, batch_size=args.batch_size)
test_loader = DataLoader(mnist_test, batch_size=args.batch_size)
# ------------
# model
# ------------
model = LitClassifier(Backbone(hidden_dim=args.hidden_dim), args.learning_rate)
logger = WandbLogger(project="test", name="test")
# ------------
# training
# ------------
trainer = pl.Trainer.from_argparse_args(args, max_steps=1, limit_train_batches=2, logger=logger)
trainer.fit(model, train_loader, val_loader)
# ------------
# testing
# ------------
result = trainer.test(model, test_dataloaders=test_loader)
print(result)
if __name__ == '__main__':
cli_main()
Expected behavior
Environment
environment does not matter
Additional context
A recent PR #4320 introduced the function that throws the error.
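A defensive variant of the helper (my sketch of a possible fix, not the merged patch): fall back to the class name when a callable has no __name__, as is the case for nn.Module instances such as the Backbone above.
def _sanitize_callable(val):
    if callable(val):
        # nn.Module instances are callable but define __name__ only on the class
        return getattr(val, "__name__", type(val).__name__)
    return val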
cc @tchaton |
Role of internal `hpc_load` & `hpc_save` | [
"question"
] | ❓ Questions and Help
What is your question?
What is role/responsibility of hpc_load & hpc_save?
Is it same with that of restore?
Motivation
PL has two internal ways of saving/loading: the restore way and the hpc_load/hpc_save way.
They do similar dumping/loading, but have different checkpoint selection mechanisms.
If these two methods have the same role/responsibility in terms of dumping/loading, we could refactor them to share common dump/load code.
There is already some disparity (#1947), which could be a potential source of bugs.
The motivation of this question is to understand their roles/responsibilities before refactoring.
What have you tried?
Survey public documents and internal codes.
hpc_save
No public API (search result in docs), only used in internal SLURMConnector (search result in repo)
pytorch-lightning/pytorch_lightning/trainer/connectors/slurm_connector.py
Line 88 in 66e58f5:
self.trainer.checkpoint_connector.hpc_save(self.trainer.weights_save_path, self.trainer.logger)
hpc_load
No public API (search result in docs), only used in internal CheckpointConnector.hpc_load (search result in repo)
pytorch-lightning/pytorch_lightning/trainer/connectors/checkpoint_connector.py
Line 202 in 3abfec8:
self.hpc_load(folderpath, self.trainer.on_gpu) |
Pre-computing total number of training steps | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
As it stands, Lightning does not expose the anticipated total number of training steps for a given run. For instance, let's say we specify max_epochs as 100. Our data loader has a total length of 24, our batch size is 4, and we use 2 GPUs. From this we could easily calculate that we'll perform 100 * 24 / (2 * 4) = 300 optimizer steps. It would be convenient to expose this directly, e.g. as a module attribute like self.total_steps.
Given that training duration can be specified either via max_steps or max_epochs, some decision logic is required.
Motivation
Some LR schedulers need to know in advance how many steps will be taken by the optimizer. For instance, if we do linear LR decay to zero over the full duration of training, we need to know this number ahead of time to calculate an appropriate per-step decay factor (example here: https://huggingface.co/transformers/_modules/transformers/optimization.html#get_linear_schedule_with_warmup).
Alternatives
We're currently calculating this value manually and so are some other configurations out there (example here: https://github.com/nateraw/hf-text-classification/blob/ef4caef22239c9988e4a704512d53bd629d52e8f/train.py#L38). However, it's tedious boilerplate that would be nice to abstract away.
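The boilerplate in question looks roughly like this (my sketch; it assumes DDP-style splitting of the dataset across devices plus gradient accumulation):
import math

def total_training_steps(dataset_size, batch_size, max_epochs,
                         num_devices=1, accumulate_grad_batches=1):
    steps_per_epoch = math.ceil(dataset_size / (batch_size * num_devices * accumulate_grad_batches))
    return steps_per_epoch * max_epochs

# matches the example above: 100 epochs, 24 samples, batch size 4, 2 GPUs -> 300
assert total_training_steps(24, 4, 100, num_devices=2) == 300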
Additional context
I've discussed this on Slack: https://pytorch-lightning.slack.com/archives/CRBLFHY79/p1603786080478900 The upshot was that it's currently not available but could be of interest. To track this properly, I'm adding a feature request here. |
loss from progress bar appears to be sum of loss across all GPUs in Lightning 1.0.3 | [
"bug",
"help wanted",
"logger"
] | 🐛 Bug
The loss from progress bar during the training appears to be sum of loss across of all GPUs
Please reproduce using the BoringModel and post here
To Reproduce
One can pick any model and dataset and vary the number of GPUs for training. In my example, I see a loss of roughly 0.7 with 1 GPU, 1.4 with 2 GPUs and 0.28 with 4 GPUs for the first few training batches, indicating the displayed loss is roughly the sum rather than the mean across GPUs.
I used the standard from the doc:
def training_step(self, batch, batch_idx):
loss = self.forward(batch)
self.log('loss/train', loss, on_step=True, on_epoch=False, sync_dist=True, sync_dist_op='mean')
return loss
Note: the loss logged in tensorboard appears to be correct (mean of loss)
Expected behavior
The loss from progress bar during the training should be mean of loss across of all GPUs
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
PyTorch Version (e.g., 1.0): 1.4
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.7
CUDA/cuDNN version: 10.1
GPU models and configuration:
Any other relevant information: PyTorch Lightning 1.0.3
Additional context |
Does one need to reset Metrics during the end of epoch in Lightning 1.0.3 | [
"question"
] | ❓ Questions and Help
Before asking:
What is your question?
Given the new metrics in 1.0.0 and later (which I really like!), I have three accuracy metrics for training, validation and test initialized in the __init__ function. Do I need to reset them at the end of the training and validation epochs, given they will be used multiple times?
Code
What have you tried?
What's your environment?
OS: [e.g. iOS, Linux, Win]
Packaging [e.g. pip, conda]
Version [e.g. 0.5.2.1] |
Spec for DDP tests | [
"ci"
] | |
on_epoch logging in validation_step appears to miss the data for the 1st epoch: Lightning 1.0.3 | [
"bug",
"help wanted",
"waiting on author",
"logger"
] | 🐛 Bug
On_epoch logging in validation_step appears to miss the data for the 1st epoch. My epoch has 313 steps, and therefore I expect the first on_epoch log from validation_step to be at step 312 (0-indexed), but I saw it at step 625, the end of the 2nd epoch.
Please reproduce using the BoringModel and post here
To Reproduce
I have 4 GPUs and used DDP and the following validation_step:
def validation_step(self, batch, batch_idx):
loss = self.forward(batch)
self.log('loss/valid', loss, on_epoch=True, on_step=False, sync_dist=True, sync_dist_op='mean')
Expected behavior
The first on_epoch logging should occur at the end of the first epoch, not the second.
Environment
PyTorch Version (e.g., 1.0): 1.4
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.7
CUDA/cuDNN version: 10.1
GPU models and configuration:
Any other relevant information: PyTorch Lightning 1.0.3, 4 GPUs with DDP
Additional context |
Add tests for .write in step result | [
"help wanted",
"ci"
] | |
Add tests for parsing.py | [
"help wanted",
"ci"
] | |
Enable trainer val_check_interval to be greater than number of the training batches | [
"feature",
"help wanted",
"design"
] | Currently I can't set val_check_interval greater than the number of training batches
Error occurs
/site-packages/pytorch_lightning/trainer/data_loading.py", line 203, in reset_train_dataloader
raise ValueError(
ValueError: `val_check_interval` (1000000) must be less than or equal to the number of the training batches (5). If you want to disable validation set `limit_val_batches` to 0.0 instead.
But it is a useful feature when validation takes considerable time. I want to run validation after multiple training epochs (but after a fixed number of steps, so I can't use check_val_every_n_epoch).
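For reference, a sketch of the configuration that currently raises the error (values taken from the message above):
trainer = pl.Trainer(val_check_interval=1000000)  # ValueError when the train dataloader only has 5 batches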
cc @Borda @tchaton @justusschock @awaelchli |
Metrics inside a dictionary are not properly moved to GPU | [
"help wanted",
"working as intended"
] | π Bug
The following code WORKS:
self.accuracy = Accuracy()
The following code DOES NOT WORK (throws CPU error):
self.metrics['accuracy'] = Accuracy()
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1oLE2Ts4AkQwd2oLSaz8qNvFdiygyvgrZ?usp=sharing
To Reproduce
Expected behavior
Move metric to proper device (GPU) whether assigned to self.some_metrics or self.all_metrics['some_metric']
Actual behavior
--> 113 self.correct += torch.sum(preds == target)
RuntimeError: Trying to pass too many CPU scalars to CUDA kernel!
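A possible workaround sketch: register the metrics through torch.nn.ModuleDict so they are seen as submodules and moved by .to()/.cuda():
import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # a plain dict is invisible to nn.Module; ModuleDict registers each metric as a submodule
        self.metrics = torch.nn.ModuleDict({'accuracy': pl.metrics.Accuracy()})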
Environment
COLLAB
CUDA:
GPU:
Tesla P100-PCIE-16GB
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cu101
pytorch-lightning: 1.0.4
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context
I put the Metrics in a dict, so I can use the following code:
for metric_name, metric in self.metrics.items():
self.log(f"{metric_name}/{phase}_step", metric(predictions, targets)) |
Remove check_val_every_n_epoch | [
"feature",
"help wanted",
"good first issue",
"refactor"
] | π Feature
I think check_val_every_n_epoch trainer flag should be removed: val_check_interval already contains* all of its functionality.
* The only thing needed would be to interpret val_check_interval=2.0 as check every two epochs. This is just a straightforward extension of the current functionality |
Collapse videos in documentation | [
"won't fix",
"docs"
] | π Feature
Hide the documentation videos.
Motivation
Videos take up a significant portion of screen space in the documentation, making each documentation page much longer than it needs to be.
The videos appear to be intended as a learning tool. I believe doc pages are much more often used for quick look-ups than for learning.
Inexperienced users will likely view them a few times. After that, they are in the way.
Case in point: This is a section of the Trainer documentation, where the videos take up just shy of 50% of vertical screen space
Pitch
Collapse the videos. "Click here to watch the video" |
Can't TorchScript LightningModule when using Metric | [
"bug",
"help wanted"
] | π Bug
Please reproduce using the BoringModel and post here
able to reproduce it in https://colab.research.google.com/drive/1MscNHxIc_LIbZxALHbZOAkooNu0TzVly?usp=sharing
To Reproduce
Expected behavior
Able to TorchScript a LightningModule regardless of whether a Metric is used
It seems hard to make Metric torchscriptable as *args and **kwargs are useful in Python but not supported in torchscript.
As Metric is not needed for inference, I think it should be excluded when calling LightningModule.to_torchscript().
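A hypothetical workaround sketch (the attribute name accuracy is only an example): drop the Metric attributes from a copy of the model before scripting:
import copy

export_model = copy.deepcopy(model)
del export_model.accuracy  # hypothetical: whatever attribute holds the Metric
scripted = export_model.to_torchscript()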
Environment
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cu101
pytorch-lightning: 0.10.0
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context |
Support teardown for Lightning DataModule | [
"feature",
"help wanted",
"won't fix",
"data handling"
] | π Feature
teardown as a hook can be useful for data modules.
Motivation
This could be used for:
Clean up downloaded data after training finishes
Closing any open connections a dataloader makes
etc
Pitch
This has natural connections to prepare_data and setup and could be implemented very similarly to how those are supported across the data module and lightning module.
By default this should do nothing
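A sketch of how the proposed hook could look on a user-defined DataModule (illustrative only):
import pytorch_lightning as pl

class MyDataModule(pl.LightningDataModule):
    def teardown(self, stage=None):
        # proposed hook: delete downloaded data, close dataloader connections, etc.
        pass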
cc @nateraw |
NCCL error using DDP and PyTorch 1.7 | [
"bug",
"help wanted",
"priority: 0",
"distributed",
"3rd party"
] | π Bug
Getting this error when attempting to use ddp with the "getting started" autoencoder example:
Stack Trace:
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2]
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2]
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/2
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2
Traceback (most recent call last):
File "01_getting_started_autoencoder.py", line 66, in <module>
modle, trainer = cli_main()
File "01_getting_started_autoencoder.py", line 60, in cli_main
trainer.fit(model, train_dl)
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
Traceback (most recent call last):
File "/home/user/development/_training/ml/pl-playground/01_getting_started_autoencoder.py", line 66, in <module>
results = self.accelerator_backend.train()
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 138, in train
results = self.ddp_train(process_idx=self.task_idx, model=model)
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 231, in ddp_train
self.trainer.is_slurm_managing_tasks
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 213, in init_ddp_connection
torch_backend, rank=global_rank, world_size=world_size
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 442, in init_process_group
barrier()
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1947, in barrier
modle, trainer = cli_main()
File "/home/user/development/_training/ml/pl-playground/01_getting_started_autoencoder.py", line 60, in cli_main
trainer.fit(model, train_dl)
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 138, in train
work = _default_pg.barrier()
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, invalid usage, NCCL version 2.7.8
results = self.ddp_train(process_idx=self.task_idx, model=model)
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 231, in ddp_train
self.trainer.is_slurm_managing_tasks
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 213, in init_ddp_connection
torch_backend, rank=global_rank, world_size=world_size
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 442, in init_process_group
barrier()
File "/home/user/anaconda3/envs/playground-pl/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1947, in barrier
work = _default_pg.barrier()
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, invalid usage, NCCL version 2.7.8
To Reproduce
Follow the code in the getting-started example with these parameters to Trainer:
model = LitAutoEncoder()
trainer = pl.Trainer(gpus='1,2', distributed_backend='ddp')
trainer.fit(model, train_dl)
Expected behavior
For it to train on multiple GPUs :)
Environment
PyTorch Version 1.7:
OS (e.g., Linux): Ubuntu 18.04
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source): n/a
Python version: 3.7
CUDA/cuDNN version: 10.2/7.6.5
GPU models and configuration: 2 1080Tis
Any other relevant information: n/a |
Gradient accumulation fails with fp16 precision | [
"bug",
"help wanted"
] | π Bug
Setting accumulate_grad_batches > 1 and precision = 16 causes the following error:
RuntimeError: unscale_() has already been called on this optimizer since the last update().
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1_7pxqPlpc79k0VYlRdtRXE0JQbhSBWHy?usp=sharing
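For reference, a minimal sketch of the combination that triggers the error:
import pytorch_lightning as pl

trainer = pl.Trainer(gpus=1, precision=16, accumulate_grad_batches=8)  # fails; accumulate_grad_batches=1 works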
Environment
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cu101
pytorch-lightning: 1.0.4
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 |
Colab TPU Exception in device=TPU:4: Could not run 'torchvision::nms' with arguments from the 'XLA' backend | [
"bug",
"help wanted",
"accelerator: tpu",
"3rd party"
] | π Bug
Getting this error on Colab:
Exception in device=TPU:4: Could not run 'torchvision::nms' with arguments from the 'XLA' backend. 'torchvision::nms' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
CPU: registered at /pytorch/vision/torchvision/csrc/vision.cpp:64 [kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradXLA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
Tracer: fallthrough registered at /pytorch/torch/csrc/jit/frontend/tracer.cpp:970 [backend fallback]
Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:555 [backend fallback]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
Exception in device=TPU:2: Could not run 'torchvision::nms' with arguments from the 'XLA' backend. 'torchvision::nms' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
Environment
Colab Notebook: https://colab.research.google.com/drive/1HNn5YAQTenVXkllY9cITIVYFAo8ITJBX?usp=sharing
* CUDA:
GPU:
available: False
version: None
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.8.0a0+d0df29a
pytorch-lightning: 0.8.1
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 |
add fsspec suport to tuner | [
"feature",
"help wanted",
"good first issue",
"trainer: tune"
] | see #4424 |
Some test directories clash with standard library modules | [
"bug",
"help wanted",
"ci"
] | Namely:
https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests/trainer/warnings
https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests/trainer/logging
(I haven't checked if there are more)
This is problematic, for example, when you run tests using a tool like PyCharm. This is because local modules take precedence over stdlib modules.
So you get errors like this one:
Traceback (most recent call last):
File "/home/carmocca/.local/share/JetBrains/Toolbox/apps/PyCharm-C/ch-0/202.6397.98/plugins/python-ce/helpers/pydev/pydevd.py", line 22, in <module>
from _pydevd_bundle.pydevd_constants import IS_JYTH_LESS25, IS_PYCHARM, get_thread_id, get_current_thread_id, \
File "/home/carmocca/.local/share/JetBrains/Toolbox/apps/PyCharm-C/ch-0/202.6397.98/plugins/python-ce/helpers/pydev/_pydevd_bundle/pydevd_constants.py", line 216, in <module>
from _pydev_imps._pydev_saved_modules import thread
File "/home/carmocca/.local/share/JetBrains/Toolbox/apps/PyCharm-C/ch-0/202.6397.98/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_saved_modules.py", line 21, in <module>
import xmlrpc.client as xmlrpclib
File "/usr/lib/python3.8/xmlrpc/client.py", line 136, in <module>
import http.client
File "/usr/lib/python3.8/http/client.py", line 71, in <module>
import email.parser
File "/usr/lib/python3.8/email/parser.py", line 12, in <module>
from email.feedparser import FeedParser, BytesFeedParser
File "/usr/lib/python3.8/email/feedparser.py", line 27, in <module>
from email._policybase import compat32
File "/usr/lib/python3.8/email/_policybase.py", line 9, in <module>
from email.utils import _has_surrogates
File "/usr/lib/python3.8/email/utils.py", line 28, in <module>
import random
File "/usr/lib/python3.8/random.py", line 40, in <module>
from warnings import warn as _warn
ImportError: cannot import name 'warn' from 'warnings' (/home/carmocca/projects/carmocca-lightning/tests/trainer/warnings/__init__.py)
We should rename them. |
Random CUDA OOM error when starting SLURM jobs | [
"bug",
"help wanted",
"won't fix",
"priority: 0",
"environment: slurm"
] | π Bug
When submitting jobs to SLURM, some jobs (around 1-2% of them) will randomly encounter a CUDA OOM error during the setup prior to training. I can confirm it's not an issue with the configuration of the job vs hardware itself, since I can resubmit the exact same job script and it will work. I also know that my resource consumption is not even close to the limit, since the jobs that do work use ~10Gb on 32Gb GPUs.
I already checked with nvidia-smi that the GPUs I get allocated are empty at the start of the training, confirming that it's not a problem with the cluster:
echo $(nvidia-smi --query-gpu=memory.used --format=csv,noheader) # At the beginning of the script
# Output: 0 MiB
Here is the output I get when my jobs crash:
CometLogger will be initialized in online mode
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Using native 16bit precision.
Traceback (most recent call last):
File "my_code.py", line XX, in XX
trainer.fit(model)
File "my_virtualenv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 424, in fit
self.accelerator_backend.setup(model)
File "my_virtualenv/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 36, in setup
model.cuda(self.trainer.root_gpu)
File "my_virtualenv/lib/python3.8/site-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda
return super().cuda(device=device)
File "my_virtualenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 463, in cuda
return self._apply(lambda t: t.cuda(device))
File "my_virtualenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "my_virtualenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "my_virtualenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "my_virtualenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 381, in _apply
param_applied = fn(param)
File "my_virtualenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 463, in <lambda>
return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA error: out of memory
Please reproduce using the BoringModel and post here
Unfortunately, given the nature of the bug, it's not something I can reproduce with the BoringModel.
To Reproduce
Here are any of the SLURM parameters that could be relevant (only excludes logging and job time):
#SBATCH --gres=gpu:v100l:1
#SBATCH --cpus-per-task=8
#SBATCH --mem=48000M
The related non-default parameter to the trainer and dataloaders are the following:
Trainer(..., benchmark=true, precision=16, ...)
# Inside the lightning module
DataLoader(..., shuffle=True, num_workers=7, pin_memory=self.on_gpu, ...)
From experience, I set num_workers to 1 lower than the number of CPUs I request. On my local machine, that's never caused me any issues.
Expected behavior
SLURM jobs should not encounter random CUDA OOM errors when configured with the necessary resources.
Environment
PyTorch and CUDA are provided by the cluster's manager, so that they are optimized for the hardware. PyTorch Lightning I installed myself using pip.
CUDA:
GPU:
Tesla V100-SXM2-32GB
available: True
version: 10.2
Packages:
numpy: 1.19.1
pyTorch_debug: True
pyTorch_version: 1.7.0
pytorch-lightning: 1.0.4
tqdm: 4.50.2
System:
OS: Linux
architecture:
64bit
ELF
processor:
python: 3.8.0
version: #1 SMP Tue Feb 4 23:02:59 UTC 2020
Additional context
The PyTorch version reported here is 1.7, but I updated it just yesterday. I encountered the reported bug using both PyTorch 1.6 and 1.7.
EDIT: Fixed mistakes when detailing trainer & dataloader arguments. |
wave2vec integration in NeMo | [] | To promote NeMo/Lightning further in a Medium article showing accessibility and ease of making headway in audio research. |
AttributeError: 'dict' object has no attribute 'get_epoch_log_metrics' | [
"question"
] | I have created a LightningModule which works fine with a single validation dataset but throws the following error when using multiple validation datasets:
self._log_on_evaluation_epoch_end_metrics(epoch_logs)
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/logger_connector.py", line 186, in _log_on_evaluation_epoch_end_metrics
logger_metrics = reduced_epoch_metrics.get_epoch_log_metrics()
AttributeError: 'dict' object has no attribute 'get_epoch_log_metrics'
Here's my validation_step method. To support a single as well as multiple validation datasets I have kept dataset_idx = 0:
def validation_step(self, batch, batch_idx, dataset_idx=0):
qa_pairs, labels = batch
# labels = labels.type(torch.LongTensor)
labels = labels.long()#(torch.LongTensor)
# run the model for the inputs
outputs = self(qa_pairs)
# compute the loss
if self.loss.lower() == "cross_entropy":
loss = F.cross_entropy(outputs, labels)
self.valid_acc(outputs, labels)
self.log('val_loss', loss, on_step=True)
self.log('val_acc', self.valid_acc, on_step=True, on_epoch=True, prog_bar=True)
Looking at the error messages, I commented out the logging part, which seems to get rid of the error.
system:
pytorch: 1.6
ubuntu 18.04
pytorch-lightning: 1.0.3
StackOverflow Post |
Data loading hangs before first validation step | [
"help wanted",
"won't fix",
"waiting on author"
] | π Bug
After training epoch, before the first validation step, training gets stuck somewhere in the data loaders (I think).
I can't provide a reproduction script unfortunately: Getting the training into the specific situation takes a long time (must train for long enough for the situation to arise).
I train on 4x 1080 Ti using DDP and num_workers=20. After the first training epoch, before the first validation, training gets stuck. All GPUs are reported to have 100% compute and memory utilization, but only 50/250 W power consumption. Only the 4 main Python threads seem to be doing any work (busy looping?). The 20 worker processes seem to have been stopped already.
To me it looks like the main threads are still busy waiting for new samples, while the dataloaders have already gone.
Note that I use limit_train_batches=0.1, maybe this is the cause?
Unfortunately I don't have ptrace capability on the machine, so can't use GDB etc. I printed the stack traces of all Python threads every 10s using a debugging thread. Logs of the hang situation are here: https://gist.github.com/jonashaag/b74ae9fc9267bde2cecd35ae316232c0
I am currently training without limit_train_batches to see if it's due to that setting. EDIT: No, I can also reproduce without limit_train_batches set.
Environment
* CUDA:
- GPU:
- GeForce GTX 1080 Ti
- GeForce GTX 1080 Ti
- GeForce GTX 1080 Ti
- GeForce GTX 1080 Ti
- available: True
- version: 11.0
* Packages:
- numpy: 1.19.2
- pyTorch_debug: True
- pyTorch_version: 1.8.0.dev20201028
- pytorch-lightning: 0.10.0
- tqdm: 4.51.0
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.8
- version: #88~16.04.1-Ubuntu SMP Wed Feb 12 04:19:15 UTC 2020 |
Training_step outputs not propagated | [
"docs"
] | cc @ananthsub @awaelchli @justusschock
Hi, after updating to version 1.0.4, I think the approach below no longer works as desired:
def training_step(...):
return {'loss': loss, 'random_thing': [1, 'a', Tensor(), ...]}
def training_epoch_end(self, training_step_outputs):
for d in training_step_outputs:
random_thing = d['random_thing']
training_step_outputs is always empty. On further debugging I find that training_step never seems to get called; instead, the function training_step_and_backward is called within run_training_batch.
Any help is appreciated, and apologies if I am making a mistake.
Originally posted by @nazim1021 in #3681 (comment) |
Is there a way to supress all printing and warnings in the Trainer? | [
"question"
] | Hello,
I am aware of the progress_bar_refresh_rate and weights_summary parameters, but even when I disable them I still get these GPU warning-like messages:
I would like to disable all warnings and printing from the Trainer. Is this possible?
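A possible workaround sketch (assumes these messages go through Python's standard logging module under the "lightning" / "pytorch_lightning" logger names — not verified):
import logging
import warnings

import pytorch_lightning as pl

logging.getLogger('lightning').setLevel(logging.ERROR)
logging.getLogger('pytorch_lightning').setLevel(logging.ERROR)
warnings.filterwarnings('ignore')

trainer = pl.Trainer(progress_bar_refresh_rate=0, weights_summary=None)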
Why? Because I want to perform several training runs in a loop and monitor them with tqdm, and intermediate printing would ruin the tqdm progress bar. |
stepwise learning rate scheduler | [
"question"
] | Hello,
I am trying to manage LR scheduling in my training with PL. The main methods rely on epoch-wise LR updates; is there a way to switch this to step-wise updates?
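A minimal sketch of the usual approach: return the scheduler from configure_optimizers as a dict with 'interval': 'step':
def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.5)
    # 'interval': 'step' makes Lightning call scheduler.step() after every optimizer step
    return [optimizer], [{'scheduler': scheduler, 'interval': 'step', 'frequency': 1}]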
OS: [Linux]
Packaging [pip]
Version [1.0.4] |
calling `self.log(..., on_step=True)` in optimizer_step fails | [
"feature",
"help wanted"
] | π Bug
calling self.log(..., on_step=True) in optimizer_step fails.
Expected behavior
It should get treated same way if I were to .log() in training_step
Additional context
I believe it's connected to #4439. Tried debugging and hotfixing it but the whole .log() behavior (controlled by self._current_fx_name) seems not to be implemented too well. |
ModelCheckpoint filenames and metric names containing / (slash) | [
"bug",
"help wanted",
"checkpointing"
] | π Bug
The current behaviour of ModelCheckpoint is problematic when the checkpoint's name includes a metric whose name contains a slash character.
Since the implementation always includes the name of the metric along with its value, a format specifier like {some/name} results in the checkpoint being stored in a subdirectory. Everything appears to work correctly apart from the inconvenience of having a subdirectory which is not deleted when deleting old checkpoints.
Reproducer
https://colab.research.google.com/drive/1E4EKc2Ndt4XluRjpGC83hyohyF0BQESU?usp=sharing
The two lines marked as # STEP 1: ... and # STEP 2: ... are sufficient to reproduce the issue, the checkpoints are stored in lightning_logs/version_0/checkpoints.
Expected behavior
I haven't found a way to avoid this problem, because the substitution from {metric} to metric=value is hardcoded in PyTorch Lightning code.
I don't particularly care about the specific name of my checkpoints, so I'd propose for PL to automatically escape the slashes to some other character. Alternatively, PL should provide a way to manually specify a different name for the metric in the filename, or a way to suppress the name altogether.
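A possible stopgap sketch: log a slash-free alias alongside the grouped name and let ModelCheckpoint monitor only the alias:
from pytorch_lightning.callbacks import ModelCheckpoint

# inside the LightningModule
self.log('val/loss', loss)  # grouped nicely in the wandb dashboard
self.log('val_loss', loss)  # slash-free alias used only for checkpoint filenames

checkpoint_cb = ModelCheckpoint(monitor='val_loss', filename='{epoch}-{val_loss:.3f}')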
Environment
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cu101
pytorch-lightning: 1.0.4
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context
Supporting metrics with a slash in the name is desirable for me because wandb then automatically groups them in the dashboard. |
Help with understanding unknown 'c10::Error' thrown during DDP training | [
"bug",
"help wanted"
] | π Bug
This is not going to be a good and concise description, since I have no way of reproducing the error with 100% certainty. However, more and more often I get these types of c10::Error instances thrown during training, both during the training phase and during the validation phase. I am running DDP on a single node with 20 CPUs and 4 GPUs. The error is not tied to any specific process.
I don't think it's necessarily a PyTorch Lightning issue, so I am unsure whether it is appropriate here, but maybe someone has had similar experiences?
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: initialization error
Exception raised from insert_events at /opt/conda/conda-bld/pytorch_1595629427478/work/c10/cuda/CUDACachingAllocator.cpp:717 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7fcc65f5177d in /home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x1130 (0x7fcc661a2370 in /home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fcc65f3db1d in /home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x53f29a (0x7fcca3ab529a in /home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x1241eb (0x55fadc0131eb in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #5: <unknown function> + 0x1243e6 (0x55fadc0133e6 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #6: _PyObject_GC_Malloc + 0x84 (0x55fadc0134d4 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #7: _PyObject_GC_New + 0xe (0x55fadc0134fe in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #8: PyFunction_NewWithQualName + 0x2e (0x55fadc0525be in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #9: _PyEval_EvalFrameDefault + 0x1918 (0x55fadc0bd9b8 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #10: _PyEval_EvalCodeWithName + 0xc30 (0x55fadc004160 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #11: _PyFunction_FastCallKeywords + 0x387 (0x55fadc054107 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #12: _PyEval_EvalFrameDefault + 0x416 (0x55fadc0bc4b6 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #13: _PyEval_EvalCodeWithName + 0x2f9 (0x55fadc003829 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #14: _PyFunction_FastCallKeywords + 0x387 (0x55fadc054107 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #15: _PyEval_EvalFrameDefault + 0x416 (0x55fadc0bc4b6 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #16: _PyEval_EvalCodeWithName + 0xc30 (0x55fadc004160 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #17: _PyFunction_FastCallKeywords + 0x387 (0x55fadc054107 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #18: _PyEval_EvalFrameDefault + 0x4a89 (0x55fadc0c0b29 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #19: _PyEval_EvalCodeWithName + 0xc30 (0x55fadc004160 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #20: _PyFunction_FastCallKeywords + 0x387 (0x55fadc054107 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #21: _PyEval_EvalFrameDefault + 0x6a0 (0x55fadc0bc740 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #22: _PyFunction_FastCallDict + 0x10b (0x55fadc00485b in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #23: _PyEval_EvalFrameDefault + 0x1e4a (0x55fadc0bdeea in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #24: _PyFunction_FastCallKeywords + 0xfb (0x55fadc053e7b in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #25: _PyEval_EvalFrameDefault + 0x6a0 (0x55fadc0bc740 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #26: _PyFunction_FastCallKeywords + 0xfb (0x55fadc053e7b in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #27: _PyEval_EvalFrameDefault + 0x6a0 (0x55fadc0bc740 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #28: _PyFunction_FastCallKeywords + 0xfb (0x55fadc053e7b in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #29: _PyEval_EvalFrameDefault + 0x6a0 (0x55fadc0bc740 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #30: _PyFunction_FastCallDict + 0x10b (0x55fadc00485b in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #31: _PyObject_Call_Prepend + 0x63 (0x55fadc0234d3 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #32: <unknown function> + 0x16bd5a (0x55fadc05ad5a in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #33: _PyObject_FastCallKeywords + 0x128 (0x55fadc05b968 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #34: _PyEval_EvalFrameDefault + 0x49e6 (0x55fadc0c0a86 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #35: _PyFunction_FastCallKeywords + 0xfb (0x55fadc053e7b in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #36: _PyEval_EvalFrameDefault + 0x4a89 (0x55fadc0c0b29 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #37: _PyFunction_FastCallKeywords + 0xfb (0x55fadc053e7b in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #38: _PyEval_EvalFrameDefault + 0x4a89 (0x55fadc0c0b29 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #39: _PyFunction_FastCallKeywords + 0xfb (0x55fadc053e7b in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #40: _PyEval_EvalFrameDefault + 0x6a0 (0x55fadc0bc740 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #41: _PyEval_EvalCodeWithName + 0x5da (0x55fadc003b0a in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #42: _PyFunction_FastCallDict + 0x1d5 (0x55fadc004925 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #43: _PyObject_Call_Prepend + 0x63 (0x55fadc0234d3 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #44: <unknown function> + 0x16bd5a (0x55fadc05ad5a in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #45: _PyObject_FastCallKeywords + 0x128 (0x55fadc05b968 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #46: _PyEval_EvalFrameDefault + 0x49e6 (0x55fadc0c0a86 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #47: _PyFunction_FastCallDict + 0x10b (0x55fadc00485b in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #48: <unknown function> + 0x16349a (0x55fadc05249a in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #49: PyObject_GetIter + 0x16 (0x55fadc002d76 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #50: <unknown function> + 0x1b91dc (0x55fadc0a81dc in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #51: _PyMethodDef_RawFastCallKeywords + 0x1e4 (0x55fadc054884 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #52: _PyCFunction_FastCallKeywords + 0x21 (0x55fadc054a31 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #53: _PyEval_EvalFrameDefault + 0x46f5 (0x55fadc0c0795 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #54: <unknown function> + 0x16cfb4 (0x55fadc05bfb4 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #55: <unknown function> + 0x177b6f (0x55fadc066b6f in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #56: <unknown function> + 0x16d575 (0x55fadc05c575 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #57: _PyMethodDef_RawFastCallKeywords + 0xe9 (0x55fadc054789 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #58: _PyCFunction_FastCallKeywords + 0x21 (0x55fadc054a31 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #59: _PyEval_EvalFrameDefault + 0x46f5 (0x55fadc0c0795 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #60: <unknown function> + 0x16cfb4 (0x55fadc05bfb4 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #61: _PyEval_EvalFrameDefault + 0xa46 (0x55fadc0bcae6 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #62: _PyFunction_FastCallKeywords + 0xfb (0x55fadc053e7b in /home/groups/mignot/miniconda3/envs/nml/bin/python)
frame #63: _PyEval_EvalFrameDefault + 0x6a0 (0x55fadc0bc740 in /home/groups/mignot/miniconda3/envs/nml/bin/python)
Traceback (most recent call last):
File "/home/users/alexno/sleep-staging/train.py", line 194, in <module>
run_training()
File "/home/users/alexno/sleep-staging/train.py", line 147, in run_training
trainer.fit(model, dm)
File "/home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "/home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 146, in train
results = self.ddp_train(process_idx=self.task_idx, model=model)
File "/home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 279, in ddp_train
results = self.train_or_test()
File "/home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 66, in train_or_test
results = self.trainer.train()
File "/home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 483, in train
self.train_loop.run_training_epoch()
File "/home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 541, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 687, in run_training_batch
self.log_training_step_metrics(opt_closure_result, batch_callback_metrics, batch_log_metrics)
File "/home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 501, in log_training_step_metrics
self.trainer.logger_connector.add_progress_bar_metrics(step_pbar_metrics)
File "/home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/logger_connector.py", line 105, in add_progress_bar_metrics
v = v.item()
File "/home/groups/mignot/miniconda3/envs/nml/lib/python3.7/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 51292) is killed by signal: Aborted.
Environment
* CUDA:
- GPU:
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- available: True
- version: 10.2
* Packages:
- numpy: 1.19.2
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 1.0.2
- tensorboard: 2.3.0
- tqdm: 4.51.0
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.9
- version: #1 SMP Mon Jul 29 17:46:05 UTC 2019
Additional context
Please let me know if this is more suited towards the PyTorch team. |
import pytorch_lightning as pl: Segmentation fault (core dump) | [] | Pytorch_lightning immediately segfaults upon loading at the first executable line in the file:
import pytorch_lightning as pl
apparently it's doing this upon attempting to load /opt/anaconda3/lib/python3.8/site-packages/torchtext/_torchtext.so, which exists and is not protected, after not crashing when it loaded /opt/anaconda3/lib/python3.8/site-packages/torchvision/_C.so. It seems to be crashing at file:line /opt/anaconda3/lib/python3.8/ctypes/__init__.py:373 which is
if handle is None:
    self._handle = _dlopen(self._name, mode)
inside the init function of class object CDLL(), called from _Ops.load_library( ) at file:line /opt/anaconda3/lib/python3.8/site-packages/torch/_ops.py:105 with
with dl_open_guard():
    # Import the shared library into the process, thus running its
    # static (global) initialization code in order to register custom
    # operators with the JIT.
    ctypes.CDLL(path)
I have no idea what's going on here.
sys.platform is "linux"
machine is running Lubuntu 20.04 with an AMD Ryzen 7 processor and single GeForce GTX 1660-Ti GPU
Python 3.8.3
nvcc: Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
NVIDIA Driver Version: 450.80.02
lightning had this issue both times when using the standard pip version:
pip install pytorch-lightning
Downloading pytorch_lightning-1.0.4-py3-none-any.whl (554 kB)
[double-check: pip install pytorch-lightning reports: Requirement already satisfied: pytorch-lightning in /opt/anaconda3/lib/python3.8/site-packages (1.0.4)]
and the development version:
pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade
Building wheels for collected packages: pytorch-lightning
Building wheel for pytorch-lightning (PEP 517) ... done
Created wheel for pytorch-lightning: filename=pytorch_lightning-1.0.4-py3-none-any.whl size=563410
Successfully built pytorch-lightning
Installing collected packages: pytorch-lightning
Attempting uninstall: pytorch-lightning
Found existing installation: pytorch-lightning 1.0.4
Uninstalling pytorch-lightning-1.0.4:
Successfully uninstalled pytorch-lightning-1.0.4
Successfully installed pytorch-lightning-1.0.4
My best tentative guess is that the machine has perhaps been locked down with some kind of tight security such as systemctl / ufw, and maybe it's not letting a forked Python process access the disk? But this really wouldn't make much sense, as everything is under /opt. My second-best guess is that torchtext's _torchtext.so needs some kind of fixing...
This is a showstopper for NeMo. |
Allow Trainer to accept dictionary as input? | [
"feature",
"question",
"discussion"
] | I searched issues and the forum but couldn't find anyone who mentioned this, so here I'm asking: can we allow Trainer to accept dictionaries as an input argument?
This came up as I'm writing a script to run experiments under various configurations. For each experiment, I'm passing a configuration file (in JSON) into an experiment runner which builds callbacks, Trainer, and models internally. At the moment, I'm doing something that looks like:
trainer = pl.Trainer(
gpus = config.trainer.gpus,
auto_select_gpus = config.trainer.auto_select_gpus,
precision = config.trainer.precision,
deterministic = config.trainer.deterministic,
max_epochs = config.trainer.max_epochs,
auto_lr_find = config.trainer.auto_lr_find,
gradient_clip_val = config.trainer.gradient_clip_val,
logger = logger,
callbacks = callbacks,
)
However, I would like to be able to do:
trainer = pl.Trainer(config.trainer)
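For what it's worth, a sketch of what seems possible already, assuming config.trainer only holds valid Trainer arguments:
from argparse import Namespace

trainer = pl.Trainer(**config.trainer)        # if config.trainer is a plain dict
trainer = pl.Trainer(**vars(config.trainer))  # if it is a Namespace-like object
# or reuse the argparse integration and still override a few things:
trainer = pl.Trainer.from_argparse_args(Namespace(**config.trainer), logger=logger, callbacks=callbacks)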
I think running experiments would be so much more flexible this way unless there is a technical reason that prevents this from happening. |
Logging with "self.log" in training_step does not create any outputs in progress bar or external Logger when loss isn't returned | [
"bug",
"help wanted",
"priority: 0",
"logger"
] | π Bug
I think the newly introduced log function does not log properly when used in training_step. The same code in validation_step produces the desired results.
def training_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
self.log("loss", loss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
self.log("my_metric_train", 1001, prog_bar=True, logger=True, on_step=True, on_epoch=True)
##### Doesn't Work #######
def validation_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
self.log("val_loss", loss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
self.log("my_metric_val", 1001, prog_bar=True, logger=True, on_step=True, on_epoch=True)
##### Works #######
Please reproduce using
https://gist.github.com/tobiascz/bb2c6de83263eb38181052840062b5ac
Expected behavior
Logs created in training_step should show up in the prog_bar and loggers (such as tensorboard logger). Same code in the validation_step creates the desired results.
Environment
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cu101
pytorch-lightning: 0.10.0
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
In [ ]: |
Validation step is skipped when on_train_batch_start returns -1 | [
"help wanted",
"working as intended"
] | π Bug
It seems that when on_train_batch_start returns -1 the validation step is skipped.
Expected behavior
The trainer should skip the training epoch and start the evaluation.
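For illustration, a sketch of the hook in question (the condition is hypothetical):
def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
    if self._should_skip_rest_of_epoch(batch):  # hypothetical condition
        return -1  # expected: skip the remaining training batches, then run validation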
- pytorch-lightning version 1.0.4
- PyTorch Version (e.g., 1.0): 1.7
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): conda
- Python version: 3.7 |
Why metric inherits torch.nn.Module? | [
"question"
] | If I define a complex metric whose computation converts torch.Tensor to numpy, can this metric be used in DDP?
If I want to train the model with train_dataloader in DDP but run validation on val_dataloader in a single process with a customized metric, how do I do that? |
using autograd in neural network calculation breaks the validation step | [
"bug",
"help wanted"
] | π Bug
Hi,
just a small bug report, maybe this can be a feature in the future.
Assuming you want your neural network to output a derivative (computed via autograd),
the option torch.set_grad_enabled(False), which is activated during validation, breaks the PyTorch Lightning module.
So if you want to train and validate a model like this (see Code) in PyTorch Lightning, a workaround is to call torch.set_grad_enabled(True) at the beginning of the validation step.
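For example, a minimal sketch of that workaround inside a LightningModule (the target below is just a placeholder):
def validation_step(self, batch, batch_idx):
    torch.set_grad_enabled(True)  # re-enable autograd, which the validation loop disables
    out = self(batch.requires_grad_(True))  # assumes `batch` is the input tensor
    return torch.nn.functional.l1_loss(out, torch.zeros_like(out))  # placeholder target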
import torch
class Feedforward(torch.nn.Module):
def __init__(self, input_size, hidden_size):
super(Feedforward, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size)
self.relu = torch.nn.ReLU()
self.fc2 = torch.nn.Linear(self.hidden_size, 1)
def forward(self, x):
hidden = self.fc1(x)
relu = self.relu(hidden)
output = self.fc2(relu)
output = output.sum()
output =torch.autograd.grad(outputs=output, inputs=x, retain_graph=True, create_graph=True)
return output[0]
test_input = torch.rand((10,3),requires_grad=True)
test_output = torch.rand((10,3))
model = Feedforward(3,10)
optim = torch.optim.Adam(model.parameters())
optim.zero_grad()
loss_fn = torch.nn.L1Loss()
model.train()
out = model(test_input)
loss = loss_fn(out, test_output)
loss.backward()
optim.step()
I don't know if this behavior is intended or whether you want to find a workaround, but I leave this here as a feature request / bug report. |
Trainer.test() fail when trys to log_hyperparameter | [
"bug",
"help wanted",
"priority: 2"
] | π Bug
This is the error:
Traceback (most recent call last):
File "test.py", line 31, in <module>
cli_main()
File "test.py", line 28, in cli_main
trainer.test(model, test_dataloaders=dm.test_dataloader())
File "C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 710, in test
results = self.__test_given_model(model, test_dataloaders)
File "C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 775, in __test_given_model
results = self.fit(model)
File "C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\pytorch_lightning\accelerators\gpu_accelerator.py", line 51, in train
self.trainer.train_loop.setup_training(model)
File "C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\pytorch_lightning\trainer\training_loop.py", line 135, in setup_training
self.trainer.logger.log_hyperparams(ref_model.hparams_initial)
File "C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\pytorch_lightning\utilities\distributed.py", line 35, in wrapped_fn
return fn(*args, **kwargs)
File "C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\pytorch_lightning\loggers\tensorboard.py", line 159, in log_hyperparams
params = self._flatten_dict(params)
File "C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\pytorch_lightning\loggers\base.py", line 228, in _flatten_dict
return {delimiter.join(keys): val for *keys, val in _dict_generator(params)}
File "C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\pytorch_lightning\loggers\base.py", line 228, in <dictcomp>
return {delimiter.join(keys): val for *keys, val in _dict_generator(params)}
TypeError: sequence item 1: expected str instance, numpy.int32 found
I think this error is caused by something related to the saved hparams,
so I tried to print the hparams of my trained model. This is the result of printing my model.hparams:
"batch_size": 16
"callback": False
"ckpt_path": checkpoints\sgd_cnn_rnn_5\last.ckpt
"decoder": {1: 'aa', 2: 'la', 3: 'sh', 4: 'ra', 5: 'ya', 6: 'ay', 7: 'da', 8: 'wa', 9: 'sp', 10: 'ta', 11: 'te', 12: 'ka', 13: 'sa', 14: 'gh', 15: 'ee', 16: 'kh', 17: 'na', 18: 'de', 19: 'fa', 20: 'ha', 21: 'ba', 22: 'he', 23: 'hamala', 24: 'to', 25: 'ma', 26: 'th', 27: 'ke', 28: 'se', 29: 'ze', 30: 'aala', 31: 'dh', 32: 'ae', 33: 'za', 34: 'al', 35: 'aela', 36: 'ja', 37: 'hh', 38: 'mala', 39: 'ah', 40: '7', 41: '0', 42: '2', 43: 'jala', 44: 'hala', 45: 'hana', 46: '8', 47: '1', 48: 'khla', 49: '9', 50: '6', 51: 'am', 52: 'ahla'}
"epochs": 400
"factor": 0.5
"height": 32
"hidden_dims": 64
"learning_rate": 0.003
"model_type": cnn_rnn
"momentum": 0.9
"n_classes": 53
"name": sgd_cnn_rnn_5_continue
"notes": None
"optimizer": sgd
"patience": 5
"rnn_dims": 256
"test_paths": None
"train_paths": None
"val_paths": None
"weight_decay": 0.0001
This is my testing code
parser = ArgumentParser()
parser.add_argument('--test_paths', type=str, default=None, help="comma separate different dirs")
parser.add_argument('--ckpt_path', type=str, required=True)
parser.add_argument('--height', type=int, default=32)
parser.add_argument('--batch_size', type=int, default=16)
args = parser.parse_args()
if args.test_paths:
test_dirs = args.test_paths.split(',')
dm = IFN_ENITDataModule(height=args.height, batch_size=args.batch_size, test_dirs=test_dirs)
else:
dm = IFN_ENITDataModule(height=args.height, batch_size=args.batch_size)
dm.setup("test")
model = ResNetwork.load_from_checkpoint(checkpoint_path=args.ckpt_path, n_classes=dm.num_classes+1, decoder=dm.decoder)
print(model.hparams_initial)
wandb = WandbLogger(project='Word Recognition', name=model.hparams.name, version=model.hparams.name)
trainer = pl.Trainer(gpus=1, precision=16)
trainer.test(model, test_dataloaders=dm.test_dataloader())
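A hypothetical workaround sketch: make sure every hparam key is a plain Python str (not numpy.int32) before it gets saved/logged:
decoder = {str(k): v for k, v in dm.decoder.items()}
model = ResNetwork.load_from_checkpoint(
    checkpoint_path=args.ckpt_path, n_classes=dm.num_classes + 1, decoder=decoder)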
I have pytorch_lightning version 1.0.4 |
Expose lr_scheduler to pl.LightningModule as a property (self.scheduler)? | [
"question"
] | What is your question?
Dear all,
Is lr_scheduler (or a list of them) exposed to pl.LightningModule as self.scheduler or something similar? I only see self.trainer, but self.scheduler would really complement the trainer!
What have you tried?
I tried looking for self.scheduler (and different variants) in the source / PDF manual but found nothing
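A sketch of what appears to be reachable today (assuming the Trainer keeps the configured schedulers in trainer.lr_schedulers, a list of dicts — not an official self.scheduler property):
# inside the LightningModule
sched = self.trainer.lr_schedulers[0]['scheduler']  # the underlying torch scheduler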
What's your environment?
OS: CentOS 6
Packaging: conda
Version: 1.0.4 |
Packages fails to import when system site-packages contain a different version | [
"bug",
"help wanted",
"3rd party"
] | π Bug
The package mixes imports from the user's site-packages and system's site-packages, which leads to import errors or other unexpected behavior.
To Reproduce
Install older version of pytorch-lightning as system package, e.g.:
sudo pip3 install pytorch-lightning==0.8.0
Install newer version of pytorch-lightning as user package, e.g.:
pip3 install pytorch-lightning==1.0.4 --user
Try to import the package:
Python 3.8.2 (default, Feb 26 2020, 02:56:10)
[GCC 7.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pytorch_lightning
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mybot/.local/lib/python3.8/site-packages/pytorch_lightning/__init__.py", line 79, in <module>
__import__('pkg_resources').declare_namespace(__name__)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 2293, in declare_namespace
_handle_ns(packageName, path_item)
File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 2226, in _handle_ns
loader.load_module(packageName)
File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/__init__.py", line 53, in <module>
from pytorch_lightning.core import LightningModule, data_loader
ImportError: cannot import name 'data_loader' from 'pytorch_lightning.core' (/home/mybot/.local/lib/python3.8/site-packages/pytorch_lightning/core/__init__.py)
Note that Python prefers user packages over system packages, so the first import uses a file in /home/mybot/.local/ while the others start with /usr/local/lib/ (system packages).
Lines from the relevant file /home/mybot/.local/lib/python3.8/site-packages/pytorch_lightning/core/__init__.py:
# for compatibility with namespace packages
__import__('pkg_resources').declare_namespace(__name__)
Expected behavior
The import use only one version of the package.
Environment
Standard Ubuntu 18.04 docker image with CUDA 10.
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Pip version:
pip 20.2.4 from /home/tclbot/.local/lib/python3.8/site-packages/pip (python 3.8)
Pytorch: 1.6.0
Additional context
This scenario occurs in practice. My docker image contains preinstalled packages from requirements.txt installed as system packages. Additionally at the start of the pod I defensively run pip install -r requirements.txt, just in case my docker image was outdated. I use system packages as a sort of cache.
In my case the workaround is quite simple, but this bug can lead to other, much more confusing issues. |
How to use lightning without any explicit train_dataloader? | [
"question",
"won't fix"
] | β Questions and Help
What is your question?
I'm working on Neural Style Transfer (the original paper by Gatys et al.). The method is essentially specific to one style and one content image only, which can be loaded in the __init__() method itself. And there's no other dataset required (training or validation). Once we have the style and content images, we iteratively generate the stylized image (and the logic resides in training_step()).
However, Lightning requires at minimum that train_dataloader() be defined. How should I move forward now? Should I create a dummy dataloader? Is there a recommended way to get around this?
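One option, as asked above, is a dummy single-item dataloader — a minimal sketch:
import torch
from torch.utils.data import DataLoader, Dataset

class _SingleItem(Dataset):
    def __len__(self):
        return 1
    def __getitem__(self, idx):
        return torch.zeros(1)  # the batch is ignored; the generation logic lives in training_step

def train_dataloader(self):
    return DataLoader(_SingleItem(), batch_size=1)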
What's your environment?
OS: Linux, Win
Packaging: conda
Version: 1.0.4 |
Test drone on pytorch 1.7, 1.8 | [
"help wanted",
"ci"
] | extend tested PyTorch versions with multi-GPU |
TypeError: __init__() got an unexpected keyword argument 'logger' | [] | I am new to developing with Pytorch-Lightning. I have constructed a VideoClassifier following the examples and templates.
I am currently running into the following error when I try to run the code
TypeError: __init__() got an unexpected keyword argument 'logger'
Back tracing the error also does not give any pointers on how to solve it.
Is there a version change of the package that can solve this error?
My code is as follows:
def main(args: Namespace) -> None:
if args.seed is not None:
pl.seed_everything(args.seed)
if args.distributed_backend == 'ddp':
# When using a single GPU per process and per
# DistributedDataParallel, we need to divide the batch size
# ourselves based on the total number of GPUs we have
args.batch_size = int(args.batch_size / max(1, args.gpus))
args.workers = int(args.workers / max(1, args.gpus))
data_dir = get_video_dir()
dataset = VideoDataModule(data_dir)
logger = TensorBoardLogger('tb_logs', name='video_classifier')
model = VideoClassifier(**vars(args))
trainer = pl.Trainer.from_argparse_args(args, logger=logger)
if args.evaluate:
trainer.test(model)
else:
train_dataloader = dataset.train_dataloader()
val_dataloader = dataset.val_dataloader()
trainer.fit(model, train_dataloader, val_dataloader)
def run_cli():
parent_parser = ArgumentParser(add_help=False)
parent_parser = pl.Trainer.add_argparse_args(parent_parser)
parent_parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true',
help='evaluate model on validation set')
parent_parser.add_argument('--seed', type=int, default=42,
help='seed for initializing training.')
parser = VideoClassifier.add_model_specific_args(parent_parser)
parser.set_defaults(
profiler=True,
deterministic=True,
max_epochs=30,
)
args = parser.parse_args()
main(args)
if __name__ == '__main__':
run_cli()
Any help is greatly appreciated! |
Profiler is not reset after calling trainer.tune() with auto_lr_find=True | [
"bug",
"help wanted",
"trainer: tune",
"priority: 1"
] | π Bug
When using SimpleProfiler together with the Learning Rate Finder, only the timings from the call to train within the Learning Rate Finder are logged. For the actual training, no timings are recorded.
Please reproduce using [the BoringModel and post here]
https://colab.research.google.com/drive/1FUOW-A7gJk1fuR7ODQdMSCQkrkgamtru?usp=sharing
Expected behavior
Profiler is reset after trainer.tune() is called OR the profiler is not active at all during tuning.
Environment
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cu101
pytorch-lightning: 0.10.0
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context
Besides this issue, I'm wondering if calling the LR Finder manually as suggested here (lr_finder = trainer.tuner.lr_find(model)) might lead to some problems compared to trainer.tune(). In the Trainer class there are several things happening before lr_find is called. However, as far as I know the first method is necessary to obtain e.g. the plot. |
Videos in the documentation quality change for lower speed connections | [
"docs"
] | π Feature
adding the option to change the quality of the tutorial videos in the documentation page.
Motivation
The videos inside the documentation seem very useful; however, every time I try to watch them I can't, because they are high quality and my internet connection can't load them in a reasonable time.
Alternatives
Simply upload them to YouTube. |
Error when disabling an optimizer with native AMP turned on | [
"bug",
"help wanted",
"priority: 1"
] | π Bug
When running my Lightning code with:
fp16 native AMP
Multiple optimizers
One of the optimizers disabled (in this case by returning None for it in training_step)
I'm getting the following stacktrace:
Traceback (most recent call last):
File "./train_stage1.py", line 353, in <module>
trainer.fit(model)
File "/home/wj359634/venv/lib64/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "/home/wj359634/venv/lib64/python3.6/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 54, in train
results = self.train_or_test()
File "/home/wj359634/venv/lib64/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 68, in train_or_test
results = self.trainer.train()
File "/home/wj359634/venv/lib64/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 485, in train
self.train_loop.run_training_epoch()
File "/home/wj359634/venv/lib64/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 544, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/home/wj359634/venv/lib64/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 713, in run_training_batch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/home/wj359634/venv/lib64/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 453, in optimizer_step
optimizer, batch_idx, opt_idx, train_step_and_backward_closure
File "/home/wj359634/venv/lib64/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 122, in optimizer_step
using_lbfgs=is_lbfgs
File "/home/wj359634/venv/lib64/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1209, in optimizer_step
self.trainer.scaler.step(optimizer)
File "/home/wj359634/venv/lib64/python3.6/site-packages/torch/cuda/amp/grad_scaler.py", line 318, in step
assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
AssertionError: No inf checks were recorded for this optimizer.
Exception ignored in: <object repr() failed>
Traceback (most recent call last):
File "/home/wj359634/venv/lib64/python3.6/site-packages/tqdm/std.py", line 1086, in __del__
File "/home/wj359634/venv/lib64/python3.6/site-packages/tqdm/std.py", line 1293, in close
File "/home/wj359634/venv/lib64/python3.6/site-packages/tqdm/std.py", line 1471, in display
File "/home/wj359634/venv/lib64/python3.6/site-packages/tqdm/std.py", line 1089, in __repr__
File "/home/wj359634/venv/lib64/python3.6/site-packages/tqdm/std.py", line 1433, in format_dict
TypeError: 'NoneType' object is not iterable
To Reproduce
(I'm hoping those are all conditions that have to be met)
Run a Lightning model with
fp16 native AMP
Multiple optimizers
One of the optimizers disabled (in this case by returning None for it in training_step)
Expected behavior
The code should skip this optimizer
Environment
* CUDA:
- GPU:
- Tesla V100-PCIE-32GB
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.4
- pyTorch_debug: True
- pyTorch_version: 1.7.0
- pytorch-lightning: 1.0.4
- tqdm: 4.46.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.6.8
- version: #1 SMP Tue Aug 25 17:23:54 UTC 2020
Additional context |
save_hyperparameters() doesn't work with decorated __init__() | [
"bug",
"feature",
"help wanted",
"checkpointing"
] | π Bug
save_hyperparameters() breaks when __init__() is decorated
https://colab.research.google.com/drive/1njXP32G3FSg4nWvwo7kuhVr6iqH3eabQ?usp=sharing
Expected behavior
hyperparameters should be saved
Environment
* CUDA:
- GPU:
- Tesla T4
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.5
- pyTorch_debug: True
- pyTorch_version: 1.7.0+cu101
- pytorch-lightning: 1.0.0
- tqdm: 4.41.1
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.6.9
- version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context
I think the right way to detect parameters is to use inspect.signature(cls.__init__).parameters instead of inspect.getfullargspec(Reinforce.__init__), since signature() cooperates with functools.wraps(). You can use the kind attribute of each Parameter object to detect variable positional and keyword arguments.
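To make the difference concrete, here is a minimal sketch (the decorator and class are made up for illustration, not taken from the original code):
import functools
import inspect

def my_decorator(fn):  # hypothetical decorator standing in for whatever wraps __init__
    @functools.wraps(fn)  # copies metadata and sets __wrapped__ on the wrapper
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

class Net:
    @my_decorator
    def __init__(self, hidden_dim=128, lr=1e-3, **extra):
        pass

print(inspect.getfullargspec(Net.__init__).args)      # [] - only sees the wrapper's *args/**kwargs
params = inspect.signature(Net.__init__).parameters   # follows __wrapped__ back to the real __init__
print(list(params))                                   # ['self', 'hidden_dim', 'lr', 'extra']
print([p.name for p in params.values() if p.kind is inspect.Parameter.VAR_KEYWORD])  # ['extra']
That should let save_hyperparameters() resolve the real parameter names even when __init__() is wrapped. |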
Does log_every_n_steps work in validation_step? | [
"question"
] | I find that global_step counts training steps; if I want to log something every k validation steps, how do I implement that?
log_every_n_steps works in the train loop but does not work in the validation loop.
Should we add global_training_step, global_val_step and global_test_step?
self.log only supports on_step and on_epoch; could this API support logging every k steps or k epochs (training, validation and test)? |
Testing metrics on all batches and all ranks | [
"question"
] | What is your question?
I don't understand why under the comment "check on all batches on all ranks" in function _class_test only NUM_BATCHES examples are taken whereas all preds and targets contain NUM_BATCHES * worldsize examples.
Thank you very much for your help and excuse me if the question is stupid. |
Create an Enum for tracking Train, validation, test stage | [
"feature",
"help wanted",
"good first issue",
"refactor",
"priority: 2"
] | π Feature
Motivation
The goal is to create a single source of truth about stages for the trainer, so we don't get typo errors.
Pitch
Alternatives
Additional context |
prefix argument in loggers | [
"feature",
"logger"
] | π Feature
A simple prefix argument in loggers, similar to what we have in ModelCheckpoint, that would prepend this value to metric names.
Motivation
One use-case where I need this is while doing Kfold training. For now, I have to do it manually by updating the metric_name in self.log, but this can be done by the loggers using this prefix parameter. There might be other use-cases too.
Pitch
if prefix='fold1'
and we do self.log('val_loss', ...)
logged metric will be fold1_val_loss.
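Until such an argument exists, a possible workaround is to wrap an existing logger and rename the metrics there; a minimal sketch (it only assumes the logger exposes log_metrics, which TensorBoardLogger does):
from pytorch_lightning.loggers import TensorBoardLogger

class PrefixedTBLogger(TensorBoardLogger):
    def __init__(self, *args, metric_prefix="", **kwargs):
        super().__init__(*args, **kwargs)
        self.metric_prefix = metric_prefix

    def log_metrics(self, metrics, step=None):
        # prepend the prefix to every metric name before handing off to TensorBoard
        if self.metric_prefix:
            metrics = {f"{self.metric_prefix}_{name}": value for name, value in metrics.items()}
        super().log_metrics(metrics, step)

# usage: Trainer(logger=PrefixedTBLogger("tb_logs", name="cv", metric_prefix="fold1"))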
Let me know if this is a good feature to add :)
cc @PyTorchLightning/core-contributors |
Torch-Summary Integration to replace ModelSummary | [
"feature",
"help wanted",
"won't fix",
"discussion"
] | π Feature, Motivation, Pitch
Hello, I am the current maintainer of torch-summary, which is a rewrite of yet another torchsummary library. Currently, ModelSummary is a builtin PyTorch-Lightning feature, but it is limited in scope and support, especially in more complex models. The proposal here, seconded by @carmocca , is to deprecate/remove the current ModelSummary implementation in pytorch-lightning and instead move to torch-summary as a more sustainable and long-term solution to the problem of visualizing model details.
Implementations
There are several ways to implement this idea, all of which I am open to:
Using torch-summary as a required dependency for pytorch-lightning by adding it to requirements.txt and integrating it into applicable models. Lightning would use the current public API for torch-summary.
Bring the torch-summary code in-house as an upgrade to the existing code as a one time upgrade.
Adopt torch-summary (or a copy of the repository) into the PytorchLightning organization as a Lightning-specific repository that can be developed independently of the original project. (in other words, surfacing an API that only works for Lightning models). I hope that this would add additional contributors that will help the project grow to fit more diverse kinds of models.
Additional context
As the maintainer, I will say that this project is far from perfect, and there are many kinds of models that torch-summary does not yet fully support. e.g. functional layers, lists or dicts of tensors, other types of CUDA, etc. The project is currently only maintained by me, which is sustainable for the current user base, but not sustainable if Lightning users' needs eventually outpace the project.
However, I do think that separating the ModelSummary functionality from Lightning is important, and I think that expanding this feature is something that a lot of users would enjoy.
(Continuing discussion from #4521) |
Replace AttributeDict in with dict in checkpoint | [
"bug",
"checkpointing",
"priority: 1"
] | π Feature
When we save hyperparameters to the checkpoint, we save them in a Lightning datastructure called AttributeDict.
We should replace this with a regular dict when saving the checkpoint.
Motivation
Allows loading checkpoints with torch.load in environments where Lightning is not installed.
Pitch
Convert AttributeDict to dict when saving, convert dict to AttributeDict when loading.
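Since AttributeDict is a dict subclass, the conversion itself is tiny; a sketch (model and checkpoint are placeholders, and "hyper_parameters" is assumed to be the key used for hparams in the checkpoint):
from pytorch_lightning.utilities.parsing import AttributeDict

# saving: keep only plain builtins in the checkpoint
checkpoint["hyper_parameters"] = dict(model.hparams)

# loading: restore the convenient attribute access on the Lightning side
hparams = AttributeDict(checkpoint["hyper_parameters"])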
cc @Borda |
Class version of AUROC metric | [
"feature",
"help wanted",
"design"
] | π Feature
Class version of AUROC metric following the v1.x.x standard, so we can do:
auroc = AUROC(num_classes=4)
auroc(pred, target)
auroc.compute()
Motivation
Class based Metrics can be nicer to use in the PyTorch Lightning workflow, so why not add one for AUROC?
Pitch
AUROC will automatically use the correct functional implementation, auroc or multiclass_auroc, based on whether num_classes == 2 or num_classes > 2, respectively.
I'm not sure why auroc and multiclass_auroc are two separate functions in the first place.
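A rough sketch of what the class could look like under the 1.0 Metric API (accumulating raw predictions, since AUROC cannot be reduced per batch; the functional imports mirror the existing module layout, and proper DDP gathering of the list states is left out of this sketch):
from typing import Any, Optional
import torch
from pytorch_lightning.metrics.metric import Metric
from pytorch_lightning.metrics.functional.classification import auroc, multiclass_auroc

class AUROC(Metric):
    def __init__(self, num_classes: int = 2, compute_on_step: bool = True,
                 dist_sync_on_step: bool = False, process_group: Optional[Any] = None):
        super().__init__(compute_on_step=compute_on_step,
                         dist_sync_on_step=dist_sync_on_step,
                         process_group=process_group)
        self.num_classes = num_classes
        # AUROC is not decomposable per batch, so accumulate raw preds/targets
        self.add_state("preds", default=[], dist_reduce_fx=None)
        self.add_state("target", default=[], dist_reduce_fx=None)

    def update(self, preds: torch.Tensor, target: torch.Tensor):
        self.preds.append(preds)
        self.target.append(target)

    def compute(self):
        preds, target = torch.cat(self.preds), torch.cat(self.target)
        if self.num_classes == 2:
            return auroc(preds, target)
        return multiclass_auroc(preds, target, num_classes=self.num_classes)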
Alternatives
Implement the AUROC Metric yourself.
Additional context
Considerations:
How to handle update()? While it's true that most likely the whole dataset will include all classes in pred and targets, this cannot be assumed for a single batch. |
Clarify ModelCheckpoint behavior when self.monitor == None | [
"won't fix",
"docs"
] | π Documentation
In ModelCheckpoint, setting self.monitor to None (as for the built-in checkpoint callback in the Trainer) causes it to default to either val_loss or checkpoint_on, as per L459. This, in turn, overrides the behavior described in the docs, which is caused by L500.
Incriminated docs at L53:
monitor: quantity to monitor. By default it is None which saves a checkpoint only for the last epoch.
I think that this should be clarified to reflect the (pretty complex and a bit tricky, IMHO) behavior of ModelCheckpoint, perhaps by adding this to the doc entry:
monitor: quantity to monitor. By default it is None which saves a checkpoint only for the last epoch; if the user logs either val_loss or checkpoint_on, it will instead monitor their quantity.
I will happily submit a pull request once I hear from a maintainer! :) |
Gpu memory leak with self.log on_epoch=True | [
"bug",
"help wanted",
"priority: 0",
"logger"
] | pl 1.0.5
Using the new logging API, I want to log a metric in a LightningModule
self.log(";;;;;;;;;;;;;;;;;;;", 1, on_step=False, on_epoch=True)
This is a dummy example but it is sufficient to add to LightningModule's training_step to cause a memory leak on gpu.
What could go wrong? We want to log a metric which is not even a cuda tensor. How could it lead to a gpu memory leak?
Well thanks to the magic of metric epoch aggregation stuff
Let's dig in and take a look at here
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Lines 550 to 569
in
b3db197
# ------------------------------------
# TRAINING_STEP + TRAINING_STEP_END
# ------------------------------------
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
# when returning -1 from train_step, we end epoch early
if batch_output.signal == -1:
break
# only track outputs when user implements training_epoch_end
# otherwise we will build up unnecessary memory
epoch_end_outputs = self.process_train_step_outputs(
batch_output.training_step_output_for_epoch_end,
self.early_stopping_accumulator,
self.checkpoint_accumulator,
)
# hook
# TODO: add outputs to batches
self.on_train_batch_end(epoch_output, epoch_end_outputs, batch, batch_idx, dataloader_idx)
Here we run batch, convert batch_output to epoch_end_outputs if on_epoch was set and append epoch_end_outputs to epoch_output inside on_train_batch_end
epoch_output is defined here
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Line 540
in
b3db197
epoch_output = [[] for _ in range(self.num_optimizers)]
Everything seems normal, but inside batch_output there is a surprise - a loss value stored on the GPU.
I think you can guess by now what could go wrong if we store a lot of separate CUDA tensors in a very long epoch_output.
Yeah, the GPU memory is going to run out and you'll get the famous
RuntimeError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 1; 10.92 GiB total capacity; 9.39 GiB already allocated; 27.38 MiB free; 10.24 GiB reserved in total by PyTorch)
Where is the loss appended to output? Here
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Lines 396 to 427
in
b3db197
def _process_training_step_output_1_0(self, training_step_output, split_batch):
result = self.trainer.get_model()._results
loss = None
hiddens = None
# handle dict return
if isinstance(training_step_output, dict):
loss = training_step_output.pop("loss", None)
hiddens = training_step_output.pop("hiddens", None)
result["extra"] = training_step_output
# handle scalar return
elif isinstance(training_step_output, torch.Tensor):
loss = training_step_output
result["extra"] = {}
# map to results under the hood
result.minimize = loss
result.hiddens = hiddens
# track batch for manual reduction with result
result.track_batch_size(len(split_batch))
# track metrics without grads for epoch reduction
training_step_output_for_epoch_end = copy(result)
training_step_output_for_epoch_end.detach()
# what flows back into the system
training_step_output = result
return training_step_output_for_epoch_end, training_step_output
In the first line we get a pretty result without the loss in it, and at line 414 the loss gets appended - and our memory-leak chain of events begins.
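A possible mitigation on the framework side would be to keep only a CPU copy of the loss in the object that is retained for the whole epoch; just a sketch against the snippet above, not a tested fix:
# after the detached copy is made for epoch aggregation, move the loss off the GPU
training_step_output_for_epoch_end = copy(result)
training_step_output_for_epoch_end.detach()
training_step_output_for_epoch_end['minimize'] = training_step_output_for_epoch_end['minimize'].cpu()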
How does it affect training? It can lead to an error only during the first epoch. If you've got enough memory to hold a list of GPU losses during the 1st epoch there won't be any exceptions, as subsequent epochs will have the same list of losses; if not, you'll get it somewhere in the middle of the 1st epoch. And of course the more steps you have in an epoch, the more memory this list of GPU losses will require, as one loss is stored per step.
Here is the comparison for my task. My GPU could hold 2k steps before the memory error.
With self.log
Without self.log
You can see a rapid growth in the first minute in both cases, as the model is loaded and fed the 1st batch.
The difference is in the subsequent minutes: in the former case the list of losses eats 7 GB of GPU memory and leads to a crash, while in the latter nothing happens and training goes on.
Pretty cool how a single self.log call can eat twice as much GPU memory as the actual training process. |
Attribute finder doesn't check datamodule when hparams is set | [
"bug",
"help wanted"
] | π Bug
Issue is seen here too: #3233 (comment)
See: https://github.com/rnett/pytorch-lightning/blob/6e5f232f5cec2b5e635ae34fa365c6b969d0902e/pytorch_lightning/utilities/parsing.py#L177
In pytorch_lightning.utilities.parsing.lightning_hasattr (and getattr and setattr), because hasattr(model, 'hparams') is used in the elif, checking only continues (i.e. to the dataloader) if hparams is not present. It does not continue if hparams is present, but does not have the attribute.
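To make the intended semantics concrete, here is a simplified sketch of the getter behaviour I would expect (the real helper also covers setattr/getattr; this only shows hasattr):
def lightning_hasattr(model, attribute):
    # 1. plain attribute on the module itself
    if hasattr(model, attribute):
        return True
    # 2. inside hparams (namespace or dict) -- but keep looking if the key is simply missing there
    if hasattr(model, 'hparams'):
        hparams = model.hparams
        in_hparams = attribute in hparams if isinstance(hparams, dict) else hasattr(hparams, attribute)
        if in_hparams:
            return True
    # 3. finally, fall back to the datamodule registered on the trainer
    datamodule = getattr(getattr(model, 'trainer', None), 'datamodule', None)
    return datamodule is not None and hasattr(datamodule, attribute)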
I can make a PR, but the semantics need to be decided on first, especially when hparams is a dict: do we only see the attribute if it is already present in the dict? This is particularly important for making setting and getting match. |
TypeError: __init__() got an unexpected keyword argument 'max_nb_epochs' | [
"question"
] | from pytorch_lightning import Trainer
from argparse import ArgumentParser
# from research_seed.digit_recognition.mnist import MNISTRecognizer
# def main():
model = MNISTRecognizer()
trainer = Trainer(max_nb_epochs=50,gpus=1)
trainer.fit(model)
pytorch-lightning==0.9.1rc1
There is an error:
TypeError: __init__() got an unexpected keyword argument 'max_nb_epochs' |
IOU Class Metric Module | [
"feature",
"help wanted"
] | π Feature
Is there a reason why IOU doesn't have a class metric module (currently the only classification class metrics implemented are: Accuracy, Precision, Recall, Fbeta)? Is this already on the roadmap?
I implemented a version below. Does this look good? If so, should I submit a PR?
I was initially worried that the reason it wasn't implemented yet was non-trivial issues with syncing across devices in ddp, but I don't see any issues with my implementation... Did I miss something?
import torch
from typing import Any, Callable, Optional
import pytorch_lightning as pl
from pytorch_lightning.metrics.metric import Metric
from pytorch_lightning.metrics.functional.classification import stat_scores_multiple_classes
from pytorch_lightning.metrics.functional.reduction import reduce
class IOU(Metric):
"""
Computes IOU.
Args:
...
compute_on_step:
Forward only calls ``update()`` and return None if this is set to False. default: True
dist_sync_on_step:
Synchronize metric state across processes at each ``forward()``
before returning the value at the step. default: False
process_group:
Specify the process group on which synchronization is called. default: None (which selects the entire world)
dist_sync_fn:
Callback that performs the allgather operation on the metric state. When `None`, DDP
will be used to perform the allgather. default: None
"""
def __init__(
self,
num_classes: int,
ignore_index: Optional[int] = None,
absent_score: float = 0.0,
reduction: str = 'elementwise_mean',
compute_on_step: bool = True,
dist_sync_on_step: bool = False,
process_group: Optional[Any] = None,
#dist_sync_fn: Callable = None,
):
super().__init__(
compute_on_step=compute_on_step,
dist_sync_on_step=dist_sync_on_step,
process_group=process_group,
#dist_sync_fn=dist_sync_fn,
)
self.num_classes = num_classes
self.ignore_index = ignore_index
self.absent_score = absent_score
self.reduction = reduction
self.add_state("tps", default=torch.zeros(num_classes), dist_reduce_fx="sum")
self.add_state("fps", default=torch.zeros(num_classes), dist_reduce_fx="sum")
self.add_state("fns", default=torch.zeros(num_classes), dist_reduce_fx="sum")
self.add_state("sups", default=torch.zeros(num_classes), dist_reduce_fx="sum")
def update(self, preds: torch.Tensor, target: torch.Tensor):
"""
Update state with predictions and targets.
Args:
preds: Predictions from model
target: Ground truth values
"""
tps, fps, _, fns, sups = stat_scores_multiple_classes(preds, target, self.num_classes)
self.tps += tps
self.fps += fps
self.fns += fns
self.sups += sups
def compute(self):
"""
Computes IOU over the accumulated state.
"""
scores = torch.zeros(self.num_classes, device=self.tps.device, dtype=torch.float32)
for class_idx in range(self.num_classes):
if class_idx == self.ignore_index:
continue
tp = self.tps[class_idx]
fp = self.fps[class_idx]
fn = self.fns[class_idx]
sup = self.sups[class_idx]
# If this class is absent in the target (no support) AND absent in the pred (no true or false
# positives), then use the absent_score for this class.
if sup + tp + fp == 0:
scores[class_idx] = self.absent_score
continue
denom = tp + fp + fn
# Note that we do not need to worry about division-by-zero here since we know (sup + tp + fp != 0) from above,
# which means ((tp+fn) + tp + fp != 0), which means (2tp + fp + fn != 0). Since all vars are non-negative, we
# can conclude (tp + fp + fn > 0), meaning the denominator is non-zero for each class.
score = tp.to(torch.float) / denom
scores[class_idx] = score
# Remove the ignored class index from the scores.
if self.ignore_index is not None and self.ignore_index >= 0 and self.ignore_index < self.num_classes:
scores = torch.cat([
scores[:self.ignore_index],
scores[self.ignore_index + 1:],
])
print(scores)
return reduce(scores, reduction=self.reduction)
Thanks |
progress bar refresh_rate prevents old bars being cleared | [
"bug",
"help wanted"
If you set refresh_rate=100, then the sanity check and validation bars continue being displayed forever even though leave=False. Also, the validation bars only show as partially complete.
I note also that the mininterval parameter for tqdm bars is ignored. |
Native AMP effectively broken when rewriting the optimizer_step function | [
"bug",
"help wanted",
"priority: 0"
] | π Bug
If I turn on the native AMP (--precision 16) and modify optimizer_step like it's recommended in the docs (https://pytorch-lightning.readthedocs.io/en/latest/optimizers.html#step-optimizers-at-arbitrary-intervals), the training stops converging. The issue is caused by
pytorch-lightning/pytorch_lightning/core/lightning.py
Line 1229
in
e81707b
self.trainer.scaler.step(optimizer)
not being called since the entire LightningModule.optimizer_step function is getting overridden.
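For reference, an override that keeps native AMP working probably needs to mirror the default implementation and go through the scaler itself; a rough sketch inside the LightningModule (parameter names as in the 1.0.x hook, TPU/apex/LBFGS branches omitted):
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
    if using_native_amp:
        # native AMP does not support closures yet: run it manually, then step through the scaler
        optimizer_closure()
        self.trainer.scaler.step(optimizer)
    else:
        optimizer.step(closure=optimizer_closure)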
To Reproduce
Turn on native AMP, overwrite the optimizer_step function (it can be the "default" implementation from the docs) and start training. NaNs should appear given a complex enough network.
Expected behavior
The training should converge correctly.
Additional context
It appears that #1759 has not properly been fixed. I had encountered this bug trying to hotfix #4524.
I'm not sure what a correct fix is, my guess is either 1) a proper support for closures in AMP (possibly with some type of middleware) or 2) disabling the possibility of overwriting optimizer_step when using automatic optimization completely. One way or another, the solution will probably have to deal with #4524 and this bug at the same time. |
Mergify never merge PullRequest | [
"working as intended"
] | π Bug
.mergify.yml requires at least 54 status checks to pass, but the CI "All checks have passed" summary reports only 53 successful checks.
Automatic merge never triggered.
To Reproduce
Make a pull request that passes all tests.
Mergify unexpectedly does not merge this PR.
Expected behavior
Mergify automatically merge the PR.
Environment
master latest (ee35907)
Example non-merged PR: #4526
Additional context
Current mergify config:
pytorch-lightning/.mergify.yml
Lines 26 to 27
in
ee35907
# this serves as ALL check has to pass as we have actually around 40 tests in total
- "#status-success>=54"
The CI "All checks have passed" summary reports 53 successful checks (see the image above),
but the config requires at least 54.
This seems to be the cause of the bug (though that could be nonsense, as I am a total newbie with Mergify). |
Tensorboard logging only part of the logged metrics and images. | [
"question",
"won't fix"
] | I am trying to use Tensorboard and log images generated by a GAN at the end of each epoch. To do that, I set up a callback that automatically logs an image into tensorboard. Tensorboard is showing the image that was generated after the first epoch only. In addition, I am also trying to log some losses such as gradient penalty and minmax loss and it seems like tensorboard is showing only part of the losses that I am trying to log. I am not sure if I am doing something wrong or if there's a bug somewhere within lightning.
The image saving callback code:
class GenerateCollage(Callback):
def __init__(self, config):
super().__init__()
test_set = ds_fac[config['test_dataset_cfg']['type']](config['test_dataset_cfg'])
self.test_data_loader = DataLoader(test_set,
batch_size=config['test_dataloader_cfg']['batch_size'],
shuffle=config['test_dataloader_cfg']['shuffle'],
num_workers=config['test_dataloader_cfg']['num_workers'],
drop_last=config['test_dataloader_cfg']['drop_last'],
pin_memory=config['test_dataloader_cfg']['pin_memory'])
self.test_every = config['test_every']
def on_train_epoch_end(self, trainer, pl_module, outputs):
pl_module.gen.eval()
test_batch = next(iter(self.test_data_loader))
real = test_batch['real'].to(pl_module.device)
noisy = test_batch['noisy'].to(pl_module.device)
collage_size = 10
with torch.no_grad():
g_out = pl_module.gen(y=noisy, z=False, encoder_assistance=True)
if type(g_out) is list:
g_out = g_out[0]
g_out = g_out.to(torch.device("cpu"))
all_imgs_to_show = [utils.tensor2uint(elem.to(torch.device("cpu"))) for elem in real]
all_imgs_to_show.extend([utils.tensor2uint(elem.to(torch.device("cpu"))) for elem in noisy])
all_imgs_to_show.extend([utils.tensor2uint(elem) for elem in g_out])
for i in range(collage_size):
with torch.no_grad():
out = pl_module.gen(y=noisy, z=True, encoder_assistance=True)
if type(out) is list:
out = out[0]
out = out.to(torch.device("cpu"))
all_imgs_to_show.extend([utils.tensor2uint(elem) for elem in out])
titles = ["Clean", "Noisy", "Denoised"] + ["Generated"] * collage_size
num_cols = len(titles)
batch_size = noisy.shape[0]
fig = plt.figure(figsize=(num_cols * 2, batch_size * 2))
grid = ImageGrid(fig, 111, nrows_ncols=(batch_size, num_cols), axes_pad=0.02, direction="column")
for i, (ax, im) in enumerate(zip(grid, all_imgs_to_show)):
# Iterating over the grid returns the Axes.
ax.imshow(im)
ax.set_axis_off()
for ax, title in zip(grid.axes_row[0], titles):
ax.set_title(title, size='xx-large')
fig.set_tight_layout(True)
buf = io.BytesIO()
fig.savefig(buf)
buf.seek(0)
image = PIL.Image.open(buf)
image = ToTensor()(image)
buf.close()
pl_module.logger.experiment.add_image('generated_collage', image, trainer.batch_idx)
pl_module.gen.train()
The way I am returning the losses, for example from a module that calculates the discriminator loss (look at the __call__ function):
class DiscWGAN(pl.LightningModule):
def __init__(self, config, gen, disc):
super().__init__()
self.gp_reg = config['gp_reg']
self.gen = gen
self.disc = disc
def __call__(self, real, gen_input, step):
with torch.no_grad():
fake = self.gen(y=gen_input, z=True, encoder_assistance=True)
if type(fake) is list:
fake = fake[0]
batch_size = real.shape[0]
num_channels = real.shape[1]
patch_h = real.shape[2]
patch_w = real.shape[3]
# Gradient penalty calculation
alpha = torch.rand((batch_size, 1), device=self.device)
alpha = alpha.expand(batch_size, int(real.nelement() / batch_size)).contiguous()
alpha = alpha.view(batch_size, num_channels, patch_h, patch_w)
interpolates = alpha * real.detach() + (1 - alpha) * fake.detach()
interpolates.requires_grad_(True)
disc_interpolates = self.disc(x=interpolates)
gradients = torch.autograd.grad(outputs=disc_interpolates, inputs=interpolates,
grad_outputs=torch.ones(disc_interpolates.size(), device=self.device),
create_graph=True, retain_graph=True, only_inputs=True)[0]
gradients = gradients.view(gradients.size(0), -1)
gradients_norm = gradients.norm(2, dim=1)
gp = ((gradients_norm - 1) ** 2).mean()
d_out_fake_mean = torch.mean(self.disc(x=fake))
d_out_real_mean = torch.mean(self.disc(x=real))
minmax_loss = d_out_fake_mean - d_out_real_mean
return OrderedDict({"log": {"disc_gp": gp, "disc_minmax": minmax_loss},
"loss": minmax_loss + self.gp_reg * gp})`
I tried to return a regular dictionary, to use self.log instead, to use self.logger.experiment.add_scalar instead, etc...
Also, I noticed that LightningModules nested inside another LightningModule do not share the same logger, which might be a good feature to add (for example, in my code I build a loss function as a LightningModule and initialize it within another module).
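In the meantime, a pattern that does reach the logger is to return the raw values from the nested loss module and call self.log from the outer LightningModule's training_step; a sketch (self.disc_loss is a hypothetical attribute holding the DiscWGAN instance from the snippet above):
# inside the outer LightningModule's training_step
out = self.disc_loss(real, gen_input, step)
self.log("disc_gp", out["log"]["disc_gp"], on_step=True, on_epoch=True)
self.log("disc_minmax", out["log"]["disc_minmax"], on_step=True, on_epoch=True)
return out["loss"]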
I am running CUDA 10.2 with tensorboard 2.3
Thanks! |
upgrading PL via pip uninstalls pytorch-1.8 | [
"bug",
"help wanted",
"3rd party",
"priority: 1"
] | π Bug
To Reproduce
$ pip install pytorch-lightning -U
[...]
Installing collected packages: torch, pytorch-lightning
Attempting uninstall: torch
Found existing installation: torch 1.8.0.dev20201106+cu110
Uninstalling torch-1.8.0.dev20201106+cu110:
Successfully uninstalled torch-1.8.0.dev20201106+cu110
Expected behavior
It shouldn't uninstall pytorch-1.8.
Nvidia RTX-30* don't work with pytorch < 1.8.
Environment
* CUDA:
- GPU:
- GeForce RTX 3090
- GeForce GTX 1070 Ti
- available: True
- version: 10.2
* Packages:
- numpy: 1.19.4
- pyTorch_debug: True
- pyTorch_version: 1.7.0
- pytorch-lightning: 1.0.5
- tqdm: 4.50.2
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.5
- version: #57-Ubuntu SMP Thu Oct 15 10:57:00 UTC 2020
Additional context |
How to load from a checkpoint to the same device when a pretrained encoder was used | [
"question"
] | β Questions and Help
I implemented a ClassificationNet (see below) that's using a pretrained encoder. After training, I'm trying to load it to CPU using ClassificationNet.load_from_checkpoint(pth, map_location=torch.device("cpu")), but since map_location in get_encoder is None, the encoder tries to load to GPU. How can I inform get_encoder to load to the same map_location?
Since I just started using Lightning, I guess there's a much smarter way to circumvent this situation altogether -- I look forward to your suggestions :) Thanks!
Code
class ClassificationNet(LightningModule):
...
self.encoder = get_encoder(pretrained=True)
def get_encoder(pretrained=False, map_location=None):
model = FancyModel()
if pretrained:
ckpt_data = torch.utils.model_zoo.load_url(url, map_location=map_location)
....
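One way to sidestep this is to default the pretrained download to CPU and let Lightning/torch move the module to the right device afterwards; load_from_checkpoint(..., map_location="cpu") will overwrite the encoder weights anyway. A sketch of the same get_encoder (FancyModel and url are the placeholders from above):
def get_encoder(pretrained=False, map_location="cpu"):  # default the download to CPU
    model = FancyModel()
    if pretrained:
        ckpt_data = torch.utils.model_zoo.load_url(url, map_location=map_location)
        ...  # load ckpt_data into model exactly as before
    return model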
OS: Manjaro Linux
Version 1.0.5 |
How to change the Datamodule during training with a callback? | [
"question",
"won't fix",
"data handling"
] | What is your question?
How to change the Datamodule during training with a callback?
More details:
I am looking for a way to reinitialize my DataModule with different parameters. I am currently passing the height of my images as an argument to my DataModule, and I want to change this height at some point during training. The simple way is to call trainer.fit multiple times with different DataModules, but I am wondering whether there is a way to do this in a callback, in the same way as you do when you change the optimizer or lr_scheduler?
Something similar to this:
def on_train_epoch_start(self, trainer, pl_module):
sch = optim.lr_scheduler.StepLR(optimizer, 1, 0.96)
scheduler = {
'scheduler': sch,
'interval': interval, # or 'step'
'monitor': 'train_loss',
'reduce_on_plateau': False,
'frequency': 1,
}
trainer.lr_schedulers = trainer.configure_schedulers([scheduler]) |
Test step to handle non-scalar outputs | [
"feature",
"help wanted",
"won't fix"
] | π Feature
Handle output from test loop not being a single value.
Motivation
I often need to use a callback to do some processing on test values (to make plots, etc.), which I like to separate from the core module code. In this case, I would like to use on_test_batch_end to build a list of predicted values, calculated in the core test_step.
Pitch
To make this work, I need to output an object from test_step, something like {"loss": loss, "predictions": preds, "truth": truth}. However, the test loop runs .item() on any torch tensors, which doesn't work if the outputs are non-scalar. It would be cool if the test loop handled this situation, otherwise the output from test loop (and therefore any inputs to callbacks) is limited to scalar tensors. |
self.log does not work in on_train_batch_start/end hooks | [
"bug",
"help wanted"
] | π Bug
Logging with self.log doesn't seem to work properly in the on_train_batch_start and on_train_batch_end model hooks. Specifically:
when put in on_train_batch_start it crashes because self._current_fx_name is set to validation_epoch_end, which seems like incorrect behaviour. (It seems like it should be set to training_step)
when put in on_train_batch_end it just seems to never log anything (the example I wrote tries to log to the progress bar but nothing is logged). It is also not present in tensorboard when I run this code locally.
Is this intended behaviour? The docs suggest that self.log might only work in (training|validation)_step_(end)? methods, but it also says explicitly that self.log can be called from anywhere in the module. If this is the intended behaviour then I think the documentation could be clarified.
Link to boring model showcasing the bug
To Reproduce
See colab
Expected behavior
In the colab, both metric1 and metric2 should appear in the progress bar, but only metric1 does. metric2 is logged in on_train_batch_start (similar results occur with on_train_batch_end)
Environment
IDE: Colab
Colab Notebook:
CUDA:
- GPU:
- Tesla T4
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.5
- pyTorch_debug: True
- pyTorch_version: 1.7.0+cu101
- pytorch-lightning: 1.0.5
- tqdm: 4.41.1
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.6.9
- version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
You can get the script and run it with: |
Strange issue with DataParallel | [
"bug",
"help wanted",
"won't fix",
"strategy: dp",
"priority: 2"
] | π Bug
I'm having an issue with tensors being on different devices when using distributed_backend=dp. But it only occurs under what seems like some pretty specific circumstances. I can only reproduce the bug when I have done BOTH of the following:
replaced the forward method of the internal network (see the code sample below)
replaced the BatchSampler in the training dataloader with my own version
Everything is fine if I do only one of those two things. And I don't have any problem doing both of them if I train on a single GPU. What's doubly strange is that I only replace the BatchSampler for the train loader, but I encounter the error during the initial validation sanity check. I really have no idea what's going on here.
To Reproduce
Here's code to reproduce. Sorry it's a little long.
import os
from collections import defaultdict
import random
import torch
from torch.utils.data import Dataset
from pytorch_lightning import Trainer, LightningModule
import torchvision
class BalancedBatchSampler(torch.utils.data.sampler.BatchSampler):
'''Samples a batch by first randomly choosing n_classes classes and then
n_samples images from each class'''
def __init__(self, dataset, batch_size, n_samples):
self.targets = dataset.label
self.target_idx = defaultdict(list)
for i, t in enumerate(self.targets):
self.target_idx[t].append(i)
for ii in self.target_idx.values():
random.shuffle(ii)
self.used_count = {t: 0 for t in self.target_idx}
self.n_classes = batch_size // n_samples
self.n_samples = n_samples
self.dataset = dataset
self.batch_size = self.n_samples * self.n_classes
def __iter__(self):
self.count = 0
nsamp = self.n_samples
while self.count + self.batch_size < len(self.dataset):
classes = random.sample(self.target_idx.keys(), self.n_classes)
indices = []
for c in classes:
used = self.used_count[c]
indices.extend(self.target_idx[c][used:used+nsamp])
self.used_count[c] += nsamp
if self.used_count[c] + nsamp > len(self.target_idx[c]):
random.shuffle(self.target_idx[c])
self.used_count[c] = 0
yield indices
self.count += self.n_classes * self.n_samples
def __len__(self):
return len(self.dataset) // self.batch_size
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, *size)
self.label = torch.randint(0,10,(length,))
def __getitem__(self, index):
return self.data[index], self.label[index]
def __len__(self):
return self.len
def ResnetBase(resnet_model, forward_func=None):
'''Return a torchvision.models.ResNet model without the average pooling
and fully connected layers at the end. An optional forward_func can be
passed to redefine the forward pass of the model (for instance, to return
intermediate layer outputs).'''
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
return x
if forward_func is None: forward_func = forward
cout = resnet_model.fc.in_features
resnet_model.cout = cout
resnet_model.forward = forward_func.__get__(resnet_model)
delattr(resnet_model, 'avgpool')
delattr(resnet_model, 'fc')
return resnet_model
class BoringModel(LightningModule):
def __init__(self, layer):
"""
Testing PL Module
Use as follows:
- subclass
- modify the behavior for what you want
class TestModel(BaseTestModel):
def training_step(...):
# do your own thing
or:
model = BaseTestModel()
model.training_epoch_end = None
"""
super().__init__()
net = torchvision.models.resnet18(pretrained=False)
#self.net = net # This works fine
self.net = ResnetBase(net) # This does not
self.layer = torch.nn.Conv2d(3, 32, 3)
self.use_layer = layer
def forward(self, x):
if self.use_layer:
x = self.layer(x)
else:
x = self.net(x)
return x
def loss(self, batch, prediction):
# An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))
def training_step(self, batch, batch_idx):
x, y = batch
output = self(x)
loss = self.loss(batch, output)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
output = self(x)
loss = self.loss(batch, output)
return {"x": loss}
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
return [optimizer], [lr_scheduler]
def run_test(args):
# fake data
train_data = RandomDataset((3,64,64), 128)
val_data = RandomDataset((3,64,64), 128)
if args.sampler:
sampler = BalancedBatchSampler(train_data, 32, 4)
else:
sampler = None
train_data = torch.utils.data.DataLoader(train_data, batch_sampler=sampler)
val_data = torch.utils.data.DataLoader(val_data)
# model
model = BoringModel(args.layer)
trainer = Trainer.from_argparse_args(
args,
default_root_dir=os.getcwd(),
limit_train_batches=5,
limit_val_batches=5,
max_epochs=1,
weights_summary=None,
distributed_backend='dp',
gpus=2,
)
trainer.fit(model, train_data, val_data)
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--sampler', action='store_true')
parser.add_argument('--layer', action='store_true')
parser = Trainer.add_argparse_args(parser)
args = parser.parse_args()
run_test(args)
If you save the above in bug.py and then run
python bug.py --sampler
You should get the following error
Traceback (most recent call last):
File "dpbug.py", line 195, in <module>
run_test(args)
File "dpbug.py", line 185, in run_test
trainer.fit(model, train_data, val_data)
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/accelerators/dp_accelerator.py", line 97, in train
results = self.train_or_test()
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 68, in train_or_test
results = self.trainer.train()
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 485, in train
self.train_loop.run_training_epoch()
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 544, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 713, in run_training_batch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in optimizer_step
self.trainer.accelerator_backend.optimizer_step(
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 115, in optimizer_step
model_ref.optimizer_step(
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1216, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/catalys1/pylt/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper
return wrapped(*args, **kwargs)
File "/home/catalys1/pylt/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/home/catalys1/pylt/lib/python3.8/site-packages/torch/optim/sgd.py", line 86, in step
loss = closure()
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 703, in train_step_and_backward_closure
result = self.training_step_and_backward(
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 798, in training_step_and_backward
result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 305, in training_step
training_step_output = self.trainer.accelerator_backend.training_step(args)
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/accelerators/dp_accelerator.py", line 111, in training_step
output = self.trainer.model(*args)
File "/home/catalys1/pylt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/overrides/data_parallel.py", line 87, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/overrides/data_parallel.py", line 151, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/overrides/data_parallel.py", line 310, in parallel_apply
raise output
File "/home/catalys1/pylt/lib/python3.8/site-packages/pytorch_lightning/overrides/data_parallel.py", line 263, in _worker
output = module.training_step(*input, **kwargs)
File "dpbug.py", line 147, in training_step
output = self(x)
File "/home/catalys1/pylt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "dpbug.py", line 138, in forward
x = self.net(x)
File "/home/catalys1/pylt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "dpbug.py", line 85, in forward
x = self.conv1(x)
File "/home/catalys1/pylt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/catalys1/pylt/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 419, in forward
return self._conv_forward(input, self.weight)
File "/home/catalys1/pylt/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 415, in _conv_forward
return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 2 does not equal 1 (while checking arguments for cudnn_convolution)
Running it as either
python bug.py --sampler --layer
or
python bug.py
gives no error
I'm on lightning version 1.0.4.
Expected behavior
Normal operation without strange device mismatch error.
Environment
PyTorch Version (e.g., 1.0): 1.0.4
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Python version: 3.8
CUDA/cuDNN version: 10.2
GPU models and configuration: 1080 Ti |
Memory leak using AMP and transformers | [
"bug",
"help wanted",
"3rd party"
] | Context can be found here: huggingface/transformers#8403
10x memory consumption increase with native amp on PL
I'm running pt1.5+apex side by side with pt1.6+native amp under the debugger, huggingface/transformers#8403
and found a first issue of objects not being freed.
I ruled out GradScaler. I think there is a huge memory leak somewhere.
Typically this happens due to circular references, but I don't know the PL codebase... so asking for help.
Oh, and I ruled out transformers - the HF trainer using the same model code doesn't have this issue. |
Weird logging to console behavior. | [
"bug",
"help wanted",
"logging"
] | π Bug
Logging to console prints some stuff twice, and does not output my custom logging. Verbose EarlyStopping does also not output to console:
|segmentation|base|py-3.8.5 Stanley in ~/Repos/segmentation
Β± |master U:1 ?:1 β| β python train.py
GPU available: True, used: True
INFO:lightning:GPU available: True, used: True
TPU available: False, using: 0 TPU cores
INFO:lightning:TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO:lightning:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Using native 16bit precision.
INFO:lightning:Using native 16bit precision.
Missing logger folder: ./logs/11-11-2020-04-29-21_LR0_001_BS5_IS512
WARNING:lightning:Missing logger folder: ./logs/11-11-2020-04-29-21_LR0_001_BS5_IS512
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1
INFO:lightning:initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1
| Name | Type | Params | In sizes | Out sizes
-------------------------------------------------------------------------------------------------------------------
0 | criterion | BCEWithLogitsLoss | 0 | ? | ?
1 | in_conv | DoubleConvolution | 38 K | [5, 3, 512, 512] | [5, 64, 512, 512]
2 | down_conv_1 | Down | 221 K | [5, 64, 512, 512] | [5, 128, 256, 256]
3 | down_conv_2 | Down | 885 K | [5, 128, 256, 256] | [5, 256, 128, 128]
4 | down_conv_3 | Down | 3 M | [5, 256, 128, 128] | [5, 512, 64, 64]
5 | down_conv_4 | Down | 4 M | [5, 512, 64, 64] | [5, 512, 32, 32]
6 | up_conv_1 | Up | 5 M | [[5, 512, 32, 32], [5, 512, 64, 64]] | [5, 256, 64, 64]
7 | up_conv_2 | Up | 1 M | [[5, 256, 64, 64], [5, 256, 128, 128]] | [5, 128, 128, 128]
8 | up_conv_3 | Up | 369 K | [[5, 128, 128, 128], [5, 128, 256, 256]] | [5, 64, 256, 256]
9 | up_conv_4 | Up | 110 K | [[5, 64, 256, 256], [5, 64, 512, 512]] | [5, 64, 512, 512]
10 | out_conv | OutConvolution | 65 | [5, 64, 512, 512] | [5, 1, 512, 512]
INFO:lightning:
| Name | Type | Params | In sizes | Out sizes
-------------------------------------------------------------------------------------------------------------------
0 | criterion | BCEWithLogitsLoss | 0 | ? | ?
1 | in_conv | DoubleConvolution | 38 K | [5, 3, 512, 512] | [5, 64, 512, 512]
2 | down_conv_1 | Down | 221 K | [5, 64, 512, 512] | [5, 128, 256, 256]
3 | down_conv_2 | Down | 885 K | [5, 128, 256, 256] | [5, 256, 128, 128]
4 | down_conv_3 | Down | 3 M | [5, 256, 128, 128] | [5, 512, 64, 64]
5 | down_conv_4 | Down | 4 M | [5, 512, 64, 64] | [5, 512, 32, 32]
6 | up_conv_1 | Up | 5 M | [[5, 512, 32, 32], [5, 512, 64, 64]] | [5, 256, 64, 64]
7 | up_conv_2 | Up | 1 M | [[5, 256, 64, 64], [5, 256, 128, 128]] | [5, 128, 128, 128]
8 | up_conv_3 | Up | 369 K | [[5, 128, 128, 128], [5, 128, 256, 256]] | [5, 64, 256, 256]
9 | up_conv_4 | Up | 110 K | [[5, 64, 256, 256], [5, 64, 512, 512]] | [5, 64, 512, 512]
10 | out_conv | OutConvolution | 65 | [5, 64, 512, 512] | [5, 1, 512, 512]
Epoch 3: 70%|███████████████████████ | 5173/7395 [33:13<14:16, 2.60it/s, loss=0.327, v_num=0]
Testing: 100%|██████████| 1752/1752 [43:15<00:00, 1.49s/it]--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_f1': tensor(0.9091, device='cuda:0'),
'test_loss': tensor(0.2796, device='cuda:0'),
'test_precision': tensor(0.9091, device='cuda:0'),
'test_recall': tensor(0.9091, device='cuda:0'),
'train_f1': tensor(0.9245, device='cuda:0'),
'train_loss': tensor(0.2836, device='cuda:0'),
'train_precision': tensor(0.9245, device='cuda:0'),
'train_recall': tensor(0.9245, device='cuda:0'),
'val_f1': tensor(0.9164, device='cuda:0'),
'val_loss': tensor(0.2818, device='cuda:0'),
'val_precision': tensor(0.9164, device='cuda:0'),
'val_recall': tensor(0.9164, device='cuda:0')}
--------------------------------------------------------------------------------
Testing: 100%|██████████| 1752/1752 [43:16<00:00, 1.48s/it]
|segmentation|base|py-3.8.5 Stanley in ~/Repos/segmentation
Β± |master U:1 ?:2 β| β
To Reproduce
Here is my training code:
import logging
import os
import sys
from argparse import ArgumentParser
from datetime import datetime
from knockknock import discord_sender
import torch
from dotenv import load_dotenv
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping
from pytorch_lightning.loggers import TensorBoardLogger
from torch.backends import cudnn
from unet.unet_model import UNet
load_dotenv(verbose=True)
@discord_sender(webhook_url=os.getenv("DISCORD_WH"))
def main():
"""
Main training loop.
"""
parser = ArgumentParser()
parser = UNet.add_model_specific_args(parser)
parser = Trainer.add_argparse_args(parser)
args = parser.parse_args()
prod = bool(os.getenv("PROD"))
logging.getLogger("lightning").setLevel(logging.INFO)
if prod:
logging.info("Training i production mode, disabling all debugging APIs")
torch.autograd.set_detect_anomaly(False)
torch.autograd.profiler.profile(enabled=False)
torch.autograd.profiler.emit_nvtx(enabled=False)
else:
logging.info("Training i development mode, debugging APIs active.")
torch.autograd.set_detect_anomaly(True)
torch.autograd.profiler.profile(
enabled=True, use_cuda=True, record_shapes=True, profile_memory=True
)
torch.autograd.profiler.emit_nvtx(enabled=True, record_shapes=True)
model = UNet(**vars(args))
logging.info(
f"Network:\n"
f"\t{model.hparams.n_channels} input channels\n"
f"\t{model.hparams.n_classes} output channels (classes)\n"
f'\t{"Bilinear" if model.hparams.bilinear else "Transposed conv"} upscaling'
)
cudnn.benchmark = True # cudnn Autotuner
cudnn.enabled = True # look for optimal algorithms
early_stop_callback = EarlyStopping(
monitor="val_loss",
min_delta=0.00,
mode="min",
patience=3 if not os.getenv("EARLY_STOP") else int(os.getenv("EARLY_STOP")),
verbose=True,
)
run_name = "{}_LR{}_BS{}_IS{}".format(
datetime.now().strftime("%d-%m-%Y-%H-%M-%S"),
args.lr,
args.batch_size,
args.image_size,
).replace(".", "_")
log_folder = (
"./logs" if not os.getenv("DIR_ROOT_DIR") else os.getenv("DIR_ROOT_DIR")
)
if not os.path.isdir(log_folder):
os.mkdir(log_folder)
logger = TensorBoardLogger(log_folder, name=run_name)
try:
trainer = Trainer.from_argparse_args(
args,
gpus=-1,
precision=16,
distributed_backend="ddp",
logger=logger,
callbacks=[early_stop_callback],
accumulate_grad_batches=1.0
if not os.getenv("ACC_GRAD")
else int(os.getenv("ACC_GRAD")),
gradient_clip_val=0.0
if not os.getenv("GRAD_CLIP")
else float(os.getenv("GRAD_CLIP")),
max_epochs=100 if not os.getenv("EPOCHS") else int(os.getenv("EPOCHS")),
val_check_interval=0.1
if not os.getenv("VAL_INT_PER")
else float(os.getenv("VAL_INT_PER")),
default_root_dir=os.getcwd()
if not os.getenv("DIR_ROOT_DIR")
else os.getenv("DIR_ROOT_DIR"),
)
trainer.fit(model)
trainer.test(model)
except KeyboardInterrupt:
torch.save(model.state_dict(), "INTERRUPTED.pth")
logging.info("Saved interrupt")
try:
sys.exit(0)
except SystemExit:
os._exit(0)
if __name__ == "__main__":
main()
Expected behavior
Environment
CUDA:
GPU:
GeForce RTX 2070 SUPER
available: True
version: 11.0
Packages:
numpy: 1.19.4
pyTorch_debug: True
pyTorch_version: 1.7.0+cu110
pytorch-lightning: 1.0.5
tqdm: 4.51.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.5
version: #57-Ubuntu SMP Thu Oct 15 10:57:00 UTC 2020 |
Keeping DDP override in sync with upstream torch | [
"discussion",
"distributed",
"refactor"
] | From @ananthsub:
How should Lightning keep its DDP override in sync with the upstream torch DistributedDataParallel? These implementations have now diverged. I think this leads to performance degradations with Lightning + gradient accumulation, since the require_backward_grad_sync attribute isn't checked before the backward pass |
Evaluation over the validation set | [
"feature",
"refactor",
"design"
] | What is the recommended way of performing just one evaluation over the validation set? Basically, I'm looking for the equivalent of trainer.test(...) but for the validation set.
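For reference, one workaround is to reuse the test loop on the validation data; a sketch (it assumes test_step simply delegates to validation_step):
from pytorch_lightning import LightningModule, Trainer

class MyModel(LightningModule):
    # ... training_step / validation_step / dataloaders as usual ...

    def test_step(self, batch, batch_idx):
        # run exactly the validation logic over whatever dataloader trainer.test receives
        return self.validation_step(batch, batch_idx)

model = MyModel()
trainer = Trainer(gpus=1)
trainer.test(model, test_dataloaders=model.val_dataloader())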
Maybe this is possible using trainer.fit(...) and some combination of arguments to Trainer.__init__? |
how to define my own sampler in ddp training | [
"question"
] | When using ddp as the accelerator, i want to define my own sampler in dataloader, how to do it? Noramally, i do it by overriding _collate_fn. But in pytorch-lightning, it seems that is not correct. |
Unneeded elements in state dict | [
"bug",
"help wanted",
"checkpointing"
] | State_dict keys after training resnet18:
dict_keys(['model.conv1.weight', 'model.bn1.weight', 'model.bn1.bias', 'model.bn1.running_mean', 'model.bn1.running_var', 'model.bn1.num_batches_tracked', 'model.layer1.0.conv1.weight', 'model.layer1.0.bn1.weight', 'model.layer1.0.bn1.bias', 'model.layer1.0.bn1.running_mean', 'model.layer1.0.bn1.running_var', 'model.layer1.0.bn1.num_batches_tracked', 'model.layer1.0.conv2.weight', 'model.layer1.0.bn2.weight', 'model.layer1.0.bn2.bias', 'model.layer1.0.bn2.running_mean', 'model.layer1.0.bn2.running_var', 'model.layer1.0.bn2.num_batches_tracked', 'model.layer1.1.conv1.weight', 'model.layer1.1.bn1.weight', 'model.layer1.1.bn1.bias', 'model.layer1.1.bn1.running_mean', 'model.layer1.1.bn1.running_var', 'model.layer1.1.bn1.num_batches_tracked', 'model.layer1.1.conv2.weight', 'model.layer1.1.bn2.weight', 'model.layer1.1.bn2.bias', 'model.layer1.1.bn2.running_mean', 'model.layer1.1.bn2.running_var', 'model.layer1.1.bn2.num_batches_tracked', 'model.layer2.0.conv1.weight', 'model.layer2.0.bn1.weight', 'model.layer2.0.bn1.bias', 'model.layer2.0.bn1.running_mean', 'model.layer2.0.bn1.running_var', 'model.layer2.0.bn1.num_batches_tracked', 'model.layer2.0.conv2.weight', 'model.layer2.0.bn2.weight', 'model.layer2.0.bn2.bias', 'model.layer2.0.bn2.running_mean', 'model.layer2.0.bn2.running_var', 'model.layer2.0.bn2.num_batches_tracked', 'model.layer2.0.downsample.0.weight', 'model.layer2.0.downsample.1.weight', 'model.layer2.0.downsample.1.bias', 'model.layer2.0.downsample.1.running_mean', 'model.layer2.0.downsample.1.running_var', 'model.layer2.0.downsample.1.num_batches_tracked', 'model.layer2.1.conv1.weight', 'model.layer2.1.bn1.weight', 'model.layer2.1.bn1.bias', 'model.layer2.1.bn1.running_mean', 'model.layer2.1.bn1.running_var', 'model.layer2.1.bn1.num_batches_tracked', 'model.layer2.1.conv2.weight', 'model.layer2.1.bn2.weight', 'model.layer2.1.bn2.bias', 'model.layer2.1.bn2.running_mean', 'model.layer2.1.bn2.running_var', 'model.layer2.1.bn2.num_batches_tracked', 'model.layer3.0.conv1.weight', 'model.layer3.0.bn1.weight', 'model.layer3.0.bn1.bias', 'model.layer3.0.bn1.running_mean', 'model.layer3.0.bn1.running_var', 'model.layer3.0.bn1.num_batches_tracked', 'model.layer3.0.conv2.weight', 'model.layer3.0.bn2.weight', 'model.layer3.0.bn2.bias', 'model.layer3.0.bn2.running_mean', 'model.layer3.0.bn2.running_var', 'model.layer3.0.bn2.num_batches_tracked', 'model.layer3.0.downsample.0.weight', 'model.layer3.0.downsample.1.weight', 'model.layer3.0.downsample.1.bias', 'model.layer3.0.downsample.1.running_mean', 'model.layer3.0.downsample.1.running_var', 'model.layer3.0.downsample.1.num_batches_tracked', 'model.layer3.1.conv1.weight', 'model.layer3.1.bn1.weight', 'model.layer3.1.bn1.bias', 'model.layer3.1.bn1.running_mean', 'model.layer3.1.bn1.running_var', 'model.layer3.1.bn1.num_batches_tracked', 'model.layer3.1.conv2.weight', 'model.layer3.1.bn2.weight', 'model.layer3.1.bn2.bias', 'model.layer3.1.bn2.running_mean', 'model.layer3.1.bn2.running_var', 'model.layer3.1.bn2.num_batches_tracked', 'model.layer4.0.conv1.weight', 'model.layer4.0.bn1.weight', 'model.layer4.0.bn1.bias', 'model.layer4.0.bn1.running_mean', 'model.layer4.0.bn1.running_var', 'model.layer4.0.bn1.num_batches_tracked', 'model.layer4.0.conv2.weight', 'model.layer4.0.bn2.weight', 'model.layer4.0.bn2.bias', 'model.layer4.0.bn2.running_mean', 'model.layer4.0.bn2.running_var', 'model.layer4.0.bn2.num_batches_tracked', 'model.layer4.0.downsample.0.weight', 'model.layer4.0.downsample.1.weight', 'model.layer4.0.downsample.1.bias', 
'model.layer4.0.downsample.1.running_mean', 'model.layer4.0.downsample.1.running_var', 'model.layer4.0.downsample.1.num_batches_tracked', 'model.layer4.1.conv1.weight', 'model.layer4.1.bn1.weight', 'model.layer4.1.bn1.bias', 'model.layer4.1.bn1.running_mean', 'model.layer4.1.bn1.running_var', 'model.layer4.1.bn1.num_batches_tracked', 'model.layer4.1.conv2.weight', 'model.layer4.1.bn2.weight', 'model.layer4.1.bn2.bias', 'model.layer4.1.bn2.running_mean', 'model.layer4.1.bn2.running_var', 'model.layer4.1.bn2.num_batches_tracked', 'model.fc.weight', 'model.fc.bias',
'train_accuracy.correct', 'train_accuracy.total', 'val_accuracy.correct', 'val_accuracy.total'])
I have elements with keys
'train_accuracy.correct', 'train_accuracy.total', 'val_accuracy.correct', 'val_accuracy.total' in the state dict, which are metrics logged during training.
If this is a feature, it is a strange one: now I need to manually delete these entries before I can use the checkpoint with a plain resnet18 model. |
Keep the setting of user created DataLoader in replacing DistributedSampler | [
"feature",
"help wanted"
] | π Feature
Motivation
As mentioned in #2789, the default behavior of replace_sampler_ddp is to create a new DistributedSampler. The shuffle setting depends on the kind of dataloader (train or val/test dataloader). However, this behavior overrides the settings of the user-defined dataloader, such as shuffle or drop_last. A more reasonable solution is to read these settings directly from the user-created dataloader and apply the same settings to the DistributedSampler.
Pitch
For example, we can get the shuffle setting from dataloader.sampler: if it is an instance of SequentialSampler, then shuffle=False. A rough sketch follows.
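Illustrative only, not Lightning's actual implementation; the helper name is made up:
from torch.utils.data import DataLoader, DistributedSampler, SequentialSampler

def replace_sampler_keep_settings(dataloader, num_replicas, rank):
    # DataLoader(shuffle=False) attaches a SequentialSampler, so infer shuffle from it
    shuffle = not isinstance(dataloader.sampler, SequentialSampler)
    sampler = DistributedSampler(dataloader.dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
    # rebuild the loader, carrying over the user's settings instead of the defaults
    return DataLoader(
        dataloader.dataset,
        batch_size=dataloader.batch_size,
        sampler=sampler,
        num_workers=dataloader.num_workers,
        collate_fn=dataloader.collate_fn,
        pin_memory=dataloader.pin_memory,
        drop_last=dataloader.drop_last,
    )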
Alternatives
Set replace_sampler_ddp=False, and handle it by hand.
Additional context |
LR scheduler lags behind after resume_from_checkpoint | [
"bug",
"help wanted",
"checkpointing",
"priority: 1"
] | π Bug
MultiStepLR lags behind by one epoch after resuming from checkpoint in Trainer.
See this Colab to reproduce.
In the example below,
class BoringModel(LightningModule):
    ...
    def training_step(self, batch, batch_idx):
        print(f'Epoch {self.trainer.current_epoch} / Step {self.trainer.global_step}: lr {self.trainer.optimizers[0].param_groups[0]["lr"]}')
        ...
    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.layer.parameters(), lr=100)
        lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[4], gamma=.1)
        return [optimizer], [lr_scheduler]

def test_x(resume_from_checkpoint=None):
    model = BoringModel()
    trainer = pl.Trainer(
        callbacks=[pl.callbacks.ModelCheckpoint(save_top_k=-1)],
        resume_from_checkpoint=resume_from_checkpoint
    )
    trainer.fit(model, train)

test_x()
test_x('.../checkpoints/epoch=2.ckpt')
in the run from scratch the learning rate is changed at epoch 4
Epoch 0 / Step 0: lr 100
Epoch 0 / Step 1: lr 100
Epoch 1 / Step 2: lr 100
Epoch 1 / Step 3: lr 100
Epoch 2 / Step 4: lr 100
Epoch 2 / Step 5: lr 100
Epoch 3 / Step 6: lr 100
Epoch 3 / Step 7: lr 100
Epoch 4 / Step 8: lr 10.0 # lr changed
Epoch 4 / Step 9: lr 10.0
Epoch 5 / Step 10: lr 10.0
Epoch 5 / Step 11: lr 10.0
and in the resumed run the learning rate is changed at epoch 5, i.e., the scheduler lags by one epoch.
Epoch 3 / Step 6: lr 100
Epoch 3 / Step 7: lr 100
Epoch 4 / Step 8: lr 100
Epoch 4 / Step 9: lr 100
Epoch 5 / Step 10: lr 10.0 # lr changed
Epoch 5 / Step 11: lr 10.0
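To see where the lag comes from, the scheduler state stored in the checkpoint can be inspected. A hedged sketch, assuming the standard Lightning checkpoint keys ('epoch', 'global_step', 'lr_schedulers') and reusing the same elided path as above:
import torch

ckpt = torch.load('.../checkpoints/epoch=2.ckpt', map_location='cpu')
print(ckpt['epoch'], ckpt['global_step'])
# MultiStepLR tracks progress via last_epoch; if this value is restored one step
# behind the trainer's epoch counter, the milestone at epoch 4 fires one epoch late
print(ckpt['lr_schedulers'][0]['last_epoch'])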
Environment
The output from the environment collection script:
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: True
pyTorch_version: 1.7.0+cu101
pytorch-lightning: 1.0.6
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 |
[Trainer] flush_logs_every_n_steps not working | [
"bug",
"help wanted",
"won't fix",
"priority: 1",
"logging"
] | π Bug
Hi all! Thanks a lot for this great module :)
Trainer has this neat argument flush_logs_every_n_steps, and does indeed take it into account before calling logger.save() in training_loop.py.
Yet, the LoggerConnector flushes at each log_metrics call by calling self.trainer.logger.save().
Is that the expected behaviour of flush_logs_every_n_steps?
Shouldn't it be something like
if self.trainer.logger is not None:
    if self.trainer.is_global_zero:
        self.trainer.logger.agg_and_log_metrics(scalar_metrics, step=step)
        if self.should_flush_logs:
            self.trainer.logger.save()
in logger_connector.py ?
To Reproduce
Print when Logger.save is called.
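For example, one can wrap the logger (a rough sketch; the subclass name is made up) and count how often save() is printed relative to flush_logs_every_n_steps:
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

class NoisyTensorBoardLogger(TensorBoardLogger):
    def save(self):
        print("Logger.save() called")  # currently printed on every logging step
        super().save()

trainer = Trainer(
    logger=NoisyTensorBoardLogger("logs"),
    flush_logs_every_n_steps=100,
    max_epochs=1,
)
# trainer.fit(model, train_loader) then shows the print far more often than every 100 steps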
Expected behavior
Only call logger.save() every n steps.
Environment
I've tried to reproduce the bug in Colab but I get an error as test_tube is not installed even though I did pip install test_tube .
CUDA:
GPU:
available: False
version: 10.2
Packages:
numpy: 1.19.4
pyTorch_debug: True
pyTorch_version: 1.7.0
pytorch-lightning: 1.0.6
tqdm: 4.51.0
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.9
version: #1 SMP Tue Aug 11 16:36:14 UTC 2020
Cheers,
Emile |
Gradient Clipping w/ Multiple Optimizers | [
"question",
"won't fix"
] | β Questions and Help
Before asking:
No docs helped.
What is your question?
I have a GAN with two optimizers, one for the generator and one for the discriminator. I would like to clip only the generator's parameters.
What is the cleanest way to accomplish this?
Currently I'm just using a standard training_step and configure_optimizers with automatic optimization enabled.
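One option is manual optimization, clipping only the generator's gradients between backward and step. A hedged sketch (the generator/discriminator modules and the loss helpers are placeholders; the exact manual-optimization API, such as how it is enabled, the manual_backward signature and self.optimizers(), varies between Lightning versions):
import torch
import pytorch_lightning as pl

class GAN(pl.LightningModule):
    def __init__(self, generator, discriminator):
        super().__init__()
        self.generator = generator
        self.discriminator = discriminator
        # manual optimization: newer versions set self.automatic_optimization = False here,
        # older versions pass Trainer(automatic_optimization=False)

    def training_step(self, batch, batch_idx):
        g_opt, d_opt = self.optimizers()

        # generator update, with gradient clipping
        g_loss = self.generator_loss(batch)  # placeholder
        g_opt.zero_grad()
        self.manual_backward(g_loss, g_opt)  # older versions expect the optimizer argument
        torch.nn.utils.clip_grad_norm_(self.generator.parameters(), max_norm=1.0)
        g_opt.step()

        # discriminator update, no clipping
        d_loss = self.discriminator_loss(batch)  # placeholder
        d_opt.zero_grad()
        self.manual_backward(d_loss, d_opt)
        d_opt.step()

    def configure_optimizers(self):
        g_opt = torch.optim.Adam(self.generator.parameters(), lr=2e-4)
        d_opt = torch.optim.Adam(self.discriminator.parameters(), lr=2e-4)
        return [g_opt, d_opt]
The discriminator is left untouched while the generator's gradient norm is capped; leaving gradient_clip_val at its default of 0 on the Trainer means nothing else is clipped. |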
What is the difference of the two function, train_epoch_end and on_train_epoch_end | [
"question"
It seems that they don't have any difference. I hope someone can explain this to me. Thanks a lot.
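For what it is worth, my current reading (hedged, and assuming the LightningModule variant is spelled training_epoch_end) is that training_epoch_end receives everything returned by training_step over the epoch, while on_train_epoch_end is a plain lifecycle hook that also exists on Callbacks and gets no aggregated outputs:
import torch
import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    def training_step(self, batch, batch_idx):
        loss = self.compute_loss(batch)  # placeholder
        return {"loss": loss}

    def training_epoch_end(self, outputs):
        # outputs is the list of dicts returned by training_step across the whole epoch
        avg_loss = torch.stack([o["loss"] for o in outputs]).mean()
        self.log("avg_train_loss", avg_loss)

    def on_train_epoch_end(self, *args, **kwargs):
        # plain hook with no aggregated outputs; handy for side effects at epoch boundaries
        pass
If this reading is wrong, a correction would be appreciated. |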
load_from_checkpoint not working when using save_hyperparameters(conf) | [
"bug",
"help wanted"
] | π Bug
Following the docs (https://pytorch-lightning.readthedocs.io/en/latest/hyperparameters.html#lightningmodule-hyperparameters), precisely point 4.:
class Example(LightningModule):
    def __init__(self, cfg, *args, **kwargs):
        super().__init__()
        self.save_hyperparameters(cfg)

    (...)

Example(dict(key="value", key2="value2"))
Attempting to load a previously saved checkpoint through Example.load_from_checkpoint yields the following error:
Traceback (most recent call last):
File "./decode.py", line 101, in <module>
model = DeeplabSearch.load_from_checkpoint(args.checkpoint)
File "/home/wj359634/venv/lib64/python3.6/site-packages/pytorch_lightning/core/saving.py", line 154, in load_from_checkpoint
model = cls._load_model_state(checkpoint, strict=strict, **kwargs)
File "/home/wj359634/venv/lib64/python3.6/site-packages/pytorch_lightning/core/saving.py", line 194, in _load_model_state
model = cls(**_cls_kwargs)
TypeError: __init__() missing 1 required positional argument: 'cfg'
I will try to reproduce using the BoringModel once I find some time if someone requests it.
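In the meantime, a possible workaround sketch (hedged: it relies on load_from_checkpoint forwarding extra keyword arguments to __init__ and on the hyperparameters being stored under the 'hyper_parameters' key; the checkpoint path is a placeholder):
import torch

ckpt_path = "path/to/checkpoint.ckpt"  # placeholder
checkpoint = torch.load(ckpt_path, map_location="cpu")

# cfg was stored by save_hyperparameters(cfg); hand it back explicitly so that
# __init__ receives its required positional argument
cfg = checkpoint["hyper_parameters"]
model = Example.load_from_checkpoint(ckpt_path, cfg=cfg)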
PS
I'll definitely appreciate a hotfix if one comes to someone's mind. |
self.log on validation_step is broken on pre 1.1 [nightly] | [
"bug",
"help wanted",
"priority: 0"
] | https://colab.research.google.com/drive/1tSphAIaCdy3tC9Lzhe1GEK_fH_0a6oYj?usp=sharing |
What are the `outputs` in the `on_train_batch_end` callback? | [
"question"
] | β Questions and Help
Before asking:
Try to find answers to your questions in the Lightning Forum!
Search for similar issues.
Search the docs.
What is your question?
For my application, I need to save the raw outputs of the model to disk for every training and validation example. I think a callback is the right thing to use for this-- PL already has hooks in "on_train_batch_end". According to the latest docs, this method takes an outputs arg, which I presume to be the outputs of the pl_module, or the value returned by the training_step function. However, no matter what I change in the training_step, outputs is always an empty list. Likewise, the outputs in on_train_epoch_end is an empty list of lists.
class SaverCallback(Callback):
    def __init__(self):
        super().__init__()

    def on_train_epoch_end(self, trainer, pl_module, outputs):
        print('train epoch outputs: {}'.format(outputs))

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
        print('train outputs: {}'.format(outputs))

    def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
        # import pdb; pdb.set_trace()
        print('val outputs: {}'.format(outputs))

    def on_validation_epoch_end(self, trainer, pl_module):
        pass
Here are the relevant portions of my Lightning Module:
def training_step(self, batch_dict, batch_i):
    ...
    return {'loss': loss, 'testing': 'testing'}

def validation_step(self, batch_dict, batch_i):
    ...
    return {'loss': loss, 'testing': 'testing'}
Results:
train outputs: []
val outputs: {'loss': tensor(0.0395, device='cuda:0', dtype=torch.float64), 'testing': 'testing'}
train epoch outputs: [[]]
Where are train outputs defined?
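In the meantime, a workaround sketch (hedged; it simply side-steps the empty outputs argument by stashing what I need on the module):
import pytorch_lightning as pl
from pytorch_lightning.callbacks import Callback

class LitModel(pl.LightningModule):
    def training_step(self, batch_dict, batch_i):
        loss = self.compute_loss(batch_dict)  # placeholder for the real loss computation
        # keep a copy around for callbacks, since the `outputs` argument arrives empty
        self.last_train_outputs = {"loss": loss.detach(), "testing": "testing"}
        return {"loss": loss, "testing": "testing"}

class SaverCallback(Callback):
    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
        print("train outputs: {}".format(getattr(pl_module, "last_train_outputs", None)))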
Possibly related issues:
#3864
#3592
#4609
What's your environment?
OS: Linux
Packaging pip
Version 1.0.4, installed from master |
batch size behaves strange for ddp | [
"bug",
"help wanted"
] | π Bug
I am running SimCLR code. I notice the batch size plays an important role for contrastive learning, so I want to increase batch size.
I can run batch_size=64 on a single 2080Ti. When I use 4 GPUs by setting distributed_backend='ddp', I still have to set batch_size=64, since each GPU is fully occupied with this setting when I check nvidia-smi. To be clear, if I increase batch_size I get a CUDA out-of-memory error.
However, from what I understand I should be able to set batch_size = 4*64 or so.
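For reference, this is roughly how training is launched (a hedged sketch; the dataset and model construction are the ones from the SimCLR example and are omitted here):
from torch.utils.data import DataLoader
import pytorch_lightning as pl

# with ddp every process builds its own DataLoader, so batch_size=64 is per GPU
train_loader = DataLoader(train_dataset, batch_size=64, num_workers=8)

trainer = pl.Trainer(gpus=4, distributed_backend="ddp", max_epochs=100)
trainer.fit(model, train_loader)  # effective global batch per step: 4 * 64 = 256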
Please reproduce using [the BoringModel and post here]
I use the SimCLR example provided by pl.
To Reproduce
Expected behavior
I should be able to set batch_size = 4*64 or so.
Environment
I am using version 1.0.2 from conda-forge. I understand the latest version is 1.0.5 or 1.0.6, but I am pretty sure there are other bugs; with the latest version I cannot even start training SimCLR.
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: PyCharm on Ubuntu
Additional context |
trainer.test(datamodule=dm) stores reference to wrong checkpoint | [
"bug",
"help wanted"
] | π Bug
When fine-tuning from saved weights in bolts, trainer.test() picks up a reference to checkpoints which have already been deleted or not yet created.
Checkpoint created using default trainer options, no callbacks added from the user's side.
Please reproduce using [the BoringModel and post here]
Not sure how to reproduce fine-tuning from a checkpoint using the boring model.
To Reproduce
clone bolts using git clone https://github.com/PyTorchLightning/pytorch-lightning-bolts.git
cd to pl_bolts/models/self_supervised/swav/
wget 'https://pl-bolts-weights.s3.us-east-2.amazonaws.com/swav/checkpoints/swav_stl10.pth.tar'
python swav_finetuner.py --ckpt swav_stl10.pth.tar --dataset stl10 --batch_size 256 --gpus 1 --learning_rate 0.1
The latest saved checkpoint is, say, 'epoch=33.ckpt', but line 712 in trainer.py looks for other saved checkpoints, which might be from epochs before or after the one present in the checkpoints folder.
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 712, in test
results = self.__test_using_best_weights(ckpt_path, test_dataloaders)
Error:
FileNotFoundError: [Errno 2] No such file or directory: '/home/jovyan/pytorch_lightning_bolts/pl_bolts/models/self_supervised/swav/lightning_logs/version_3/checkpoints/epoch=7.ckpt'
FileNotFoundError: [Errno 2] No such file or directory: '/home/jovyan/pytorch_lightning_bolts/pl_bolts/models/self_supervised/swav/lightning_logs/version_3/checkpoints/epoch=21.ckpt'
FileNotFoundError: [Errno 2] No such file or directory: '/home/jovyan/pytorch_lightning_bolts/pl_bolts/models/self_supervised/swav/lightning_logs/version_3/checkpoints/epoch=37.ckpt'
Expected behavior
trainer.test(datamodule=dm) should pick up the reference to the correct checkpoint saved in lightning_logs/version_x/checkpoints.
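A workaround in the meantime (a hedged sketch; the path below is a placeholder for the checkpoint that actually exists on disk) is to hand test() the checkpoint path explicitly instead of letting it resolve the best checkpoint:
ckpt = "lightning_logs/version_3/checkpoints/epoch=33.ckpt"  # placeholder
trainer.test(datamodule=dm, ckpt_path=ckpt)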
Environment
PyTorch Lightning version 1.0.4+ (tested with both 1.0.4 and 1.0.6)
bolts from master
PyTorch Version (e.g., 1.0): 1.6
OS (e.g., Linux): linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.7
CUDA/cuDNN version:
GPU models and configuration: V100s
Any other relevant information:
Additional context |