title | labels | bodyText
---|---|---|
Shouldn't LightningDataModule inherit abc.ABC so that the @abstractmethod decorator works properly?
|
[
"feature",
"data handling"
] |
pytorch-lightning/pytorch_lightning/core/datamodule.py
Line 89
in
a55c481
class LightningDataModule(object, metaclass=_DataModuleWrapper): # pragma: no cover
To ensure all abstract methods are overridden, one should inherit abc.ABC or set metaclass=abc.ABCMeta.
Like below,
class _DataModuleWrapperABCMeta(_DataModuleWrapper, ABCMeta):
    pass

class LightningDataModule(metaclass=_DataModuleWrapperABCMeta):  # pragma: no cover
However, this approach would require the user to explicitly define all three (train/val/test) dataloaders, even if some are not actually needed.
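A minimal sketch of the combined-metaclass idea, assuming a stand-in wrapper metaclass (not the library's actual implementation), just to show that ABCMeta then enforces overrides:

from abc import ABCMeta, abstractmethod

class _DataModuleWrapper(type):
    """Stand-in for Lightning's wrapper metaclass (assumption for illustration)."""
    pass

class _DataModuleWrapperABCMeta(_DataModuleWrapper, ABCMeta):
    pass

class BaseDataModule(metaclass=_DataModuleWrapperABCMeta):
    @abstractmethod
    def train_dataloader(self):
        ...

class MyDataModule(BaseDataModule):
    pass

# Instantiating MyDataModule now raises TypeError because train_dataloader
# was not overridden:
# MyDataModule()  # TypeError: Can't instantiate abstract class MyDataModule ...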
|
When is `on_validation_epoch_start` / `on_validation_epoch_end` being called?
|
[
"question"
] |
❓ Questions and Help
What is your question?
When is on_validation_epoch_start / on_validation_epoch_end being called?
Is there a doc that explains the order in which the callback functions are called?
I need a callback function that will be called at the end of every validation epoch.
I read the callback docs and I think on_validation_epoch_end is the target function I need to override. But after some trial and error, I found that it is on_validation_end that meets my needs.
What have you tried?
I also found that in the latest version, nothing calls on_validation_epoch_start except the function on_validation_epoch_start itself in "callback_hook.py". Does that mean on_validation_epoch_start will never be called?
~/pytorch-lightning$ grep -rH "on_validation_epoch_start"
pytorch_lightning/core/hooks.py: def on_validation_epoch_start(self) -> None:
pytorch_lightning/callbacks/base.py: def on_validation_epoch_start(self, trainer, pl_module):
pytorch_lightning/trainer/callback_hook.py: def on_validation_epoch_start(self):
pytorch_lightning/trainer/callback_hook.py: callback.on_validation_epoch_start(self, self.get_model())
What's your environment?
OS: Linux
Packaging: pip
Version: 0.9.0
|
Issue with resume_from_checkpoint (on CPU and GPU)
|
[
"bug",
"help wanted"
] |
🐛 Bug
Hi,
I've upgraded recently pytorch-lightning from 0.7.5 to 0.8.5, and I have encountered an issue with the resume_from_checkpoint from the Trainer class.
To Reproduce
The dummy example below shows the behaviour:
Run the script for a few loops in order to create a first checkpoint.
Stop.
Re-run the code, it should resume from the previously created checkpoint.
This script works well and resumes properly on 0.7.5; however, it does not on 0.8.5.
Code sample
from munch import Munch
from pathlib import Path
import torch
from torch.utils.data import Dataset, DataLoader
from torch.nn import Linear, MSELoss
from torch.optim import Adam
from pytorch_lightning import LightningModule, Trainer
class MyDataset(Dataset):
def __init__(self, n):
self.n = n
def __len__(self):
return self.n
def __getitem__(self, item):
x = torch.randn(1)
y = 1.5 * x + 2
return {
"x": x,
"y": y,
}
class MyModule(LightningModule):
def __init__(self):
super(MyModule, self).__init__()
self.model = Linear(in_features=1, out_features=1)
self.criterion = MSELoss()
def configure_optimizers(self):
return Adam(self.model.parameters())
def forward(self, x):
x = self.model(x)
return x
def training_step(self, input, batch_idx):
input = Munch.fromDict(input)
output = Munch()
output.y = self(input.x)
loss = self.criterion(input=output.y, target=input.y)
return {"loss": loss}
if __name__ == "__main__":
dataset = MyDataset(n=2 ** 15)
train_dataloader = DataLoader(dataset, batch_size=32, num_workers=8)
model = MyModule()
path = Path("/home/guillaume/projects/test/models/scratch")
checkpoints = sorted(path.rglob("*.ckpt"))
if checkpoints:
checkpoint = checkpoints[-1]
else:
checkpoint = None
print(checkpoint)
trainer = Trainer(
default_root_dir=path,
gpus=1,
auto_select_gpus=True,
resume_from_checkpoint=checkpoint,
)
trainer.fit(model=model, train_dataloader=train_dataloader)
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
--------------------------------------
0 | model | Linear | 2
1 | criterion | MSELoss | 0
Traceback (most recent call last):
File "/home/guillaume/projects/test/src/scratch.py", line 75, in <module>
trainer.fit(model=model, train_dataloader=train_dataloader)
File "/home/guillaume/miniconda3/envs/terrestrial/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1003, in fit
results = self.single_gpu_train(model)
File "/home/guillaume/miniconda3/envs/terrestrial/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 186, in single_gpu_train
results = self.run_pretrain_routine(model)
File "/home/guillaume/miniconda3/envs/terrestrial/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1160, in run_pretrain_routine
self.restore_weights(model)
File "/home/guillaume/miniconda3/envs/terrestrial/lib/python3.7/site-packages/pytorch_lightning/trainer/training_io.py", line 182, in restore_weights
self.restore(self.resume_from_checkpoint, on_gpu=self.on_gpu)
File "/home/guillaume/miniconda3/envs/terrestrial/lib/python3.7/site-packages/pytorch_lightning/trainer/training_io.py", line 295, in restore
checkpoint = pl_load(checkpoint_path, map_location=lambda storage, loc: storage)
File "/home/guillaume/miniconda3/envs/terrestrial/lib/python3.7/site-packages/pytorch_lightning/utilities/cloud_io.py", line 8, in load
if urlparse(path_or_url).scheme == '' or Path(path_or_url).drive: # no scheme or with a drive letter
File "/home/guillaume/miniconda3/envs/terrestrial/lib/python3.7/urllib/parse.py", line 367, in urlparse
url, scheme, _coerce_result = _coerce_args(url, scheme)
File "/home/guillaume/miniconda3/envs/terrestrial/lib/python3.7/urllib/parse.py", line 123, in _coerce_args
return _decode_args(args) + (_encode_result,)
File "/home/guillaume/miniconda3/envs/terrestrial/lib/python3.7/urllib/parse.py", line 107, in _decode_args
return tuple(x.decode(encoding, errors) if x else '' for x in args)
File "/home/guillaume/miniconda3/envs/terrestrial/lib/python3.7/urllib/parse.py", line 107, in <genexpr>
return tuple(x.decode(encoding, errors) if x else '' for x in args)
AttributeError: 'PosixPath' object has no attribute 'decode'
Expected behavior
This exact same code works well with pytorch-lightning: 0.7.5, which is the version I used previously.
Environment
* CUDA:
- GPU:
- GeForce GTX 970
- available: True
- version: 10.2
* Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.8.5
- tensorboard: 2.3.0
- tqdm: 4.48.2
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.7
- version: #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020
|
Resume training with resetting / increasing max number of epochs
|
[
"feature",
"help wanted",
"won't fix"
] |
Hi! I would like to know how one can continue training from an existing checkpoint when resuming restores the saved learning rate, current epoch and other significant state, which then interrupts training immediately.
Let's say I train a classifier using ReduceLROnPlateau and save the best epoch via the ModelCheckpoint callback. I set max_epochs to 10 and train for a while; at epoch 9 the LR scheduler is activated and the metric improves, so the learning rate is reduced at epoch 10 and the best checkpoint also corresponds to epoch 10.
Then I resume training from this checkpoint. I again set max_epochs to 10 and want to start from a different learning rate. But all I get is my current epoch set to 10, the learning rate reverted to the one with which the checkpoint was saved, and training stops because 10 is the last epoch. How can we improve such situations?
This would also be very useful when training in stages. You might have a first pretraining stage of 100 epochs and then want to train for another 50 epochs on another dataset, but the checkpoint might be from, say, epoch 77, and you will not be able to train the second stage because max_epochs would be set to 50.
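For the simple resume-and-keep-training case, one workaround is a sketch like the following, assuming the restored epoch counter is only compared against max_epochs (the checkpoint path is hypothetical):

from pytorch_lightning import Trainer

ckpt_path = "checkpoints/epoch=9.ckpt"  # hypothetical path for illustration

# The checkpoint records that 10 epochs already ran, so max_epochs has to
# exceed that for training to actually continue after resuming.
trainer = Trainer(resume_from_checkpoint=ckpt_path, max_epochs=20)
# trainer.fit(model)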
|
Disabling automatic .train() for loss criteria
|
[
"question",
"won't fix"
] |
❓ Questions and Help
What is your question?
I have a loss module that is loaded as part of my lightning module and has its own inner network (the output is passed through that network and the result is used to compute the loss).
The problem is that when a training step starts, Lightning automatically puts the entire module into .train() mode, which recursively switches the loss's inner network as well. As the inner network has batch-normalization layers, this effectively makes the loss criterion change during training, which is of course not desired.
Is there a right way to incorporate such a loss function?
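One pattern that addresses this, sketched here as a plain wrapper (not an official Lightning API), is to make the inner network ignore .train() calls so it stays in eval mode:

import torch

class FrozenLossNet(torch.nn.Module):
    """Wraps a pretrained network used inside a loss so that .train() on the
    parent LightningModule never flips its batch-norm/dropout behaviour."""

    def __init__(self, net: torch.nn.Module):
        super().__init__()
        self.net = net.eval()
        for p in self.net.parameters():
            p.requires_grad_(False)

    def train(self, mode: bool = True):
        # Ignore the requested mode and always stay in eval mode.
        return super().train(False)

    def forward(self, x):
        return self.net(x)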
|
Fix DDP logging
|
[
"bug",
"priority: 0",
"distributed"
] |
Add a global_zero_only=True flag; if False, create individual files prefixed with the machine number
Write a logging callback that will do map reduce
Can we do this in the metrics?
Aggregate all tensors on global zero first
(might run into memory issues)
Gather each output individually in CPU memory
We want to preserve the fact that logging happens on rank 0
|
Trainer.on_gpu incorrectly set to False when specifying `gpus=0`
|
[
"help wanted",
"docs"
] |
🐛 Bug
When creating a trainer with the arg gpus=0, the field on_gpu is always set False, even on machines with CUDA available.
The existing logic for on_gpu is:
self.on_gpu = True if (gpus and torch.cuda.is_available()) else False
This is buggy because 0 is "falsy". It should probably be:
self.on_gpu = gpus is not None and torch.cuda.is_available()
To Reproduce
trainer = trainer.Trainer(gpus=0, ...)
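A quick illustration of the truthiness difference between the current and proposed checks (plain Python, no Trainer involved):

import torch

gpus = 0  # as in Trainer(gpus=0, ...)

# Current check: 0 is falsy, so on_gpu ends up False even with CUDA present.
on_gpu_current = True if (gpus and torch.cuda.is_available()) else False

# Check proposed in this report: only gpus=None disables GPU usage.
on_gpu_proposed = gpus is not None and torch.cuda.is_available()

print(on_gpu_current, on_gpu_proposed)  # False, True on a CUDA machine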
|
Support Slash Separator for TrainResult / EvalResult for TensorBoard
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
TensorBoard supports and recommends grouping tags prefixed with slashes. Tags are grouped under the same drop-down with other tags that share the prefix. For example:
step
step/train_loss
step/val_loss
epoch
epoch/train_loss
epoch/val_loss
Currently, both TrainResult and EvalResult prefix the results with an underscore (i.e. step_ and epoch_). For compatibility with TensorBoard, please support the slash separator or provide a user-configurable argument to set the separator.
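For reference, a sketch of what slash-grouped logging looks like when writing directly to the underlying TensorBoard SummaryWriter (the tag names and values are just examples):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/example")

# Tags sharing the "step/" and "epoch/" prefixes are grouped under one
# drop-down each in the TensorBoard scalars view.
writer.add_scalar("step/train_loss", 0.42, global_step=100)
writer.add_scalar("step/val_loss", 0.57, global_step=100)
writer.add_scalar("epoch/train_loss", 0.40, global_step=1)
writer.add_scalar("epoch/val_loss", 0.55, global_step=1)
writer.close()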
|
Cannot pickle custom metric with DDP mode.
|
[
"bug",
"help wanted"
] |
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
just run this script.
Code sample
import torch
import torch.nn.functional as F
import pytorch_lightning as pl
from pytorch_lightning.metrics import Metric, TensorMetric
class MetricPerplexity(Metric):
"""
Computes the perplexity of the model.
"""
def __init__(self, pad_idx: int, *args, **kwargs):
super().__init__(name='ppl')
self.pad_idx = pad_idx
def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
loss = F.cross_entropy(pred, target, reduction='none')
non_padding = target.ne(self.pad_idx)
loss = loss.masked_select(non_padding).sum()
num_words = non_padding.sum()
ppl = torch.exp(
torch.min(loss / num_words, torch.tensor([100]).type_as(loss))
)
return ppl
class TensorPerplexity(TensorMetric):
"""
Computes the perplexity of the model.
"""
def __init__(self, pad_idx: int, *args, **kwargs):
super().__init__(name='ppl', *args, **kwargs)
self.pad_idx = pad_idx
def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
loss = F.cross_entropy(pred, target, reduction='none')
non_padding = target.ne(self.pad_idx)
loss = loss.masked_select(non_padding).sum()
num_words = non_padding.sum()
ppl = torch.exp(
torch.min(loss / num_words, torch.tensor([100]).type_as(loss))
)
return ppl
class ModelWithMetric(pl.LightningModule):
def __init__(self):
super().__init__()
self.lin = torch.nn.Linear(50, 1)
self.ppl = MetricPerplexity(100)
def training_step(self, batch, batch_nb):
return {'loss': torch.mean(self(batch))}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), 0.01)
def forward(self, batch):
return self.lin(batch[0])
class ModelWithTensorMetric(pl.LightningModule):
def __init__(self):
super().__init__()
self.lin = torch.nn.Linear(50, 1)
self.ppl = TensorPerplexity(100)
def training_step(self, batch, batch_nb):
return {'loss': torch.mean(self(batch))}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), 0.01)
def forward(self, batch):
return self.lin(batch[0])
if __name__ == "__main__":
m1 = ModelWithMetric()
m2 = ModelWithTensorMetric()
t1 = pl.Trainer(distributed_backend='ddp_cpu', fast_dev_run=True)
t2 = pl.Trainer(distributed_backend='ddp_cpu', fast_dev_run=True)
loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(torch.rand(10, 50)), batch_size=5)
t1.fit(m1, train_dataloader=loader)
print('works well')
t2.fit(m2, train_dataloader=loader)
Error log
AttributeError: Can't pickle local object '_apply_to_outputs.<locals>.decorator_fn.<locals>.new_func'
Expected behavior
I do not know how to explain this behaviour; I think both of these classes should work well with DDP.
Environment
OS: MacOS
CUDA:
- GPU:
- available: False
- version: None
Packages:
- numpy: 1.18.5
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0rc9
- tensorboard: 2.2.2
- tqdm: 4.48.0
System:
- OS: Darwin
- architecture:
- 64bit
- processor: i386
- python: 3.7.7
- version: Darwin Kernel Version 19.6.0: Sun Jul 5 00:43:10 PDT 2020
Additional context
|
Support for Scheduler taking a value to the step function
|
[
"feature",
"won't fix",
"discussion"
] |
Looking at the code in training_loop.py, it seems the only scheduler that can take a value in its step function is ReduceLROnPlateau; however, there is the CosineAnnealingWarmRestarts scheduler, and custom schedulers can also take an epoch/step id or any other value in their step function.
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Line 1148
in
9ab0715
if lr_scheduler['reduce_on_plateau']:
|
Failing docker-Conda build
|
[
"bug",
"help wanted",
"priority: 0"
] |
🐛 Bug
There seems to be a connection issue while creating the Conda env.
To Reproduce
https://github.com/PyTorchLightning/pytorch-lightning/runs/957741187
Additional context
|
Some questions about checkpoints and learning rate
|
[
"question"
] |
How do I pass the learning rate to the progress bar, and how do I choose the metric used for saving model weights? Thanks!
|
Throw warning for using monitor in trainer
|
[
"bug",
"feature",
"help wanted",
"let's do it!"
] |
In the init method of the checkpoint monitor, we should throw a warning for using the monitor key inside the Trainer.
|
The total number of batches shown by the sanity-check progress bar is wrong
|
[
"bug",
"help wanted",
"priority: 0"
] |
🐛 Bug
The total of the sanity check progress bar is set by
pytorch-lightning/pytorch_lightning/callbacks/progress.py
Line 296
in
4d0406e
self.val_progress_bar.total = convert_inf(trainer.num_sanity_val_steps * len(trainer.val_dataloaders))
The progress bar will always show trainer.num_sanity_val_steps even if the length of the validation DataLoader is less than trainer.num_sanity_val_steps.
Maybe the total could be computed by
from pytorch_lightning.trainer import data_loading
num_full_val_dataloader_batches = [
len(dataloader) if data_loading._has_len(dataloader) else float('inf')
for dataloader in trainer.val_dataloaders
]
self.val_progress_bar.total = convert_inf(
sum(min(num_batches, trainer.num_sanity_val_steps)
for num_batches in num_full_val_dataloader_batches))
We use the private function data_loading._has_len to check if dataloader has __len__, maybe we could make data_loading._has_len public.
Or we could make num_full_val_dataloader_batches (and num_full_train_dataloader_batches) a member variable of Trainer and update the value in pytorch_lightning.trainer.data_loading.TrainerDataLoadingMixin.
To Reproduce
The progress bar of the sanity check in the following code (num_sanity_val_steps == 999 and len(val_data_loader) == 10) shows
Validation sanity check: 1%| | 9/999 [00:09<16:31, 1.00s/it]
Code sample
import time
import pytorch_lightning as pl
from torch.utils import data
class Dataset(data.Dataset):
def __init__(self, length):
self._elements = list(range(length))
def __getitem__(self, item):
return self._elements[item]
def __len__(self):
return len(self._elements)
class Model(pl.LightningModule):
def forward(self, *args, **kwargs):
pass
def training_step(self, *args, **kwargs):
pass
def train_dataloader(self):
pass
def configure_optimizers(self):
pass
def validation_step(self, *args, **kwargs):
time.sleep(1)
return pl.EvalResult()
if __name__ == '__main__':
model = Model()
val_dataset_length = 10
val_dataset = Dataset(val_dataset_length)
val_data_loader = data.DataLoader(val_dataset)
trainer = pl.Trainer(num_sanity_val_steps=999, limit_val_batches=999,
max_epochs=0)
trainer.fit(model, val_dataloaders=val_data_loader)
Expected behavior
The program above should be
Validation sanity check: 100%|██████████| 10/10 [00:10<00:00, 1.00s/it]
Environment
CUDA:
GPU:
available:
version:
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cpu
pytorch-lightning: 0.9.0rc11
tensorboard: 1.15.0
tqdm: 4.48.2
System:
OS: Windows
architecture:
64bit
WindowsPE
processor:
python: 3.7.3
version: 10.0.18362
Additional context
|
unexpected keyword argument 'amp_type' in trainer __init__()
|
[
"bug",
"help wanted"
] |
🐛 Bug
Versions used:
Pytorch: 1.6.0
Pytorch Lightning: 0.9.12rc.
trainer = Trainer(amp_type='apex', ...)
Error message: __init__() got an unexpected keyword argument 'amp_type'
To Reproduce
Init trainer as shown above.
|
Optimizer initialization with DDP
|
[
"feature",
"discussion"
] |
❓ Questions and Help
What is your question?
I would have expected optimizers to always be initialized after parameters have been moved to their destination device.
However, some ddp backends such as
ddp_backend, ddp_spawn_backend, ddp2_backend
initialize the optimizer with the CPU parameters before moving the model to the GPU while others such as gpu_backend pass the GPU parameters.
I'm currently trying to understand two things:
(1) Where does the linking from the CPU to GPU parameters happen?
(2) Is it actually necessary to initialize the optimizer before moving to the specific device or could it be done afterwards? (Most tutorials initialize the optimizer after placing the parameters on the corresponding device, e.g. this one)
The reason I'm asking is that I did some parameter/gradient bending to make them views into other tensors. This does not work with the current implementation, as the optimizer keeps references to the CPU parameters with these tweaks, but it works fine when adapting the pytorch_lightning code to create the optimizer after the model has been moved to the correct device.
|
Batchsize and learning rate scheduler
|
[
"question"
] |
I was wondering if I need to adjust the batch size when using TPUs. I had a memory error when trying to run a batch of size 128 (images 256x256x3), which works perfectly fine on GPUs.
Furthermore, do I need to adjust my custom learning rate scheduler, which on GPUs runs every batch (not just every epoch):
def configure_optimizers(self):
    optimizer = get_optimizer(self)
    one_cycle_scheduler = OneCycleScheduler(optimizer, EPOCHS * steps_per_epoch // GRAD_ACCUMULATE)
    scheduler = {'scheduler': one_cycle_scheduler, "interval": "step"}
    return [optimizer], [scheduler]
pl version: 0.8.5
system: kaggle
|
How to use multiple metric monitors in ModelCheckpoint callback?
|
[
"question",
"discussion",
"design"
] |
❓ Questions and Help
What is your question?
How can I use multiple metric monitors in ModelCheckpoint? Put another way, how can I use multiple ModelCheckpoint callbacks? It seems that the Trainer only accepts a single ModelCheckpoint in the checkpoint_callback argument.
Code
site-packages/pytorch_lightning/trainer/callback_config.py", line 46, in configure_checkpoint_callback
checkpoint_callback.save_function = self.save_checkpoint
AttributeError: 'list' object has no attribute 'save_function'
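One workaround in this version is a small custom callback per monitored metric. This is only a sketch: it relies on trainer.callback_metrics carrying the monitored values and on the public trainer.save_checkpoint, and it bypasses ModelCheckpoint entirely.

from pytorch_lightning.callbacks import Callback

class SaveOnImprovement(Callback):
    """Saves a checkpoint whenever the monitored metric improves (sketch)."""

    def __init__(self, monitor: str, dirpath: str, mode: str = "min"):
        self.monitor = monitor
        self.dirpath = dirpath
        self.mode = mode
        self.best = None

    def on_validation_end(self, trainer, pl_module):
        value = trainer.callback_metrics.get(self.monitor)
        if value is None:
            return
        value = float(value)
        better = (
            self.best is None
            or (self.mode == "min" and value < self.best)
            or (self.mode == "max" and value > self.best)
        )
        if better:
            self.best = value
            trainer.save_checkpoint(f"{self.dirpath}/best_{self.monitor}.ckpt")

# callbacks=[SaveOnImprovement("val_loss", "ckpts"), SaveOnImprovement("val_acc", "ckpts", mode="max")]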
What's your environment?
OS: Ubuntu 16.04
Packaging: pip
Version: pytorch-lightning==0.9.0rc12
|
load_from_checkpoint: TypeError: __init__() missing 1 required positional argument
|
[
"bug",
"question"
] |
❓ Questions and Help
What is your question?
load_from_checkpoint: TypeError: init() missing 1 required positional argument
I have read the earlier issues, but the difference here is that my LightningModule inherits from my own self-defined LightningModule.
How to solve this problem or what is the best practice better suited to my needs?
Code
To reproduce the error:
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
from pytorch_lightning import Trainer
from argparse import Namespace
class _LitModel(pl.LightningModule):
def __init__(self, hparams):
super().__init__()
if isinstance(hparams, dict):
hparams = Namespace(**hparams)
self.hparams = hparams
self.l1 = torch.nn.Linear(28 * 28, hparams.classes)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
return {'val_loss': loss}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
return {'val_loss': avg_loss}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.001)
class LitModel(_LitModel):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument('--classes', type=int, default=10)
parser.add_argument('--checkpoint', type=str, default=None)
hparams = parser.parse_args()
mnist_train = MNIST(os.getcwd(), train=True, download=True,
transform=transforms.ToTensor())
mnist_train = DataLoader(mnist_train, num_workers=1)
mnist_val = MNIST(os.getcwd(), train=False, download=False,
transform=transforms.ToTensor())
mnist_val = DataLoader(mnist_val, num_workers=1)
# A bit weird here. I just want to show `load_from_checkpoint` will fail.
if hparams.checkpoint is None:
model = LitModel(hparams)
else:
model = LitModel.load_from_checkpoint(hparams.checkpoint)
trainer = Trainer(max_epochs=2, limit_train_batches=2,
limit_val_batches=2, progress_bar_refresh_rate=0)
trainer.fit(model, mnist_train, mnist_val)
Error msg
Traceback (most recent call last):
File "main.py", line 64, in <module>
model = LitModel.load_from_checkpoint(hparams.checkpoint)
File "/home/siahuat0727/.local/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 138, in load_from_checkpoint
model = cls._load_model_state(checkpoint, *args, **kwargs)
File "/home/siahuat0727/.local/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 174, in _load_model_state
model = cls(*cls_args, **cls_kwargs)
File "main.py", line 46, in __init__
super().__init__(*args, **kwargs)
TypeError: __init__() missing 1 required positional argument: 'hparams'
How to run to get the error
$ python3 main.py
$ python3 main.py --checkpoint lightning_logs/version_0/checkpoints/epoch\=1.ckpt
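One workaround worth trying, shown here as a sketch that reuses LitModel and hparams from the script above: supply the missing constructor argument explicitly. The traceback shows extra arguments flow into cls(*cls_args, **cls_kwargs), though whether they reach __init__ unchanged depends on the exact 0.9 release.

from argparse import Namespace

# Hypothetical invocation: pass the missing `hparams` positionally so it is
# forwarded to LitModel.__init__ when loading.
model = LitModel.load_from_checkpoint(hparams.checkpoint, Namespace(classes=10))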
What's your environment?
OS: Linux
Packaging: pip
Version 0.9.0rc12
|
Checkpoint monitor str in default Trainer
|
[
"question",
"won't fix"
] |
❓ Questions and Help
Hi,
In the default setting (with checkpoint_callback=True), when the method configure_checkpoint_callback is invoked by the constructor, the model hasn't yet been attached to the trainer (as run_pretrain_routine is only invoked from fit()).
Thus, when calling self.configure_checkpoint_callback(True), the check self.is_overridden('validation_step') in the line train_step_only = not self.is_overridden('validation_step') always returns False, even if validation_step was overridden in the user class that extends pl.LightningModule.
This can obviously be solved by instantiating an exterior ModelCheckpoint(), but I guess this is not what was intended.
I'm new to this community, so I might be missing something here.
Thanks in advance!
|
DistributedDataParallel with nccl backend produces zombie processes
|
[
"bug"
] |
Hey lightning community,
first I want to thank you for this nice project. It helped me a lot to improve my research code and I'm happy to recommend it to my colleagues whenever they are complaining about their code mess.
I have a problem with some kind of race condition which is reproducible in a SLURM environment with multiple GPUs. I use DistributedDataParallel with the 'nccl' backend.
The default implementation of PyTorch Lightning can produce zombie processes, which reserve GPU memory and prevent further use of that memory. This happens mainly if the main process is stopped or crashes.
You can reproduce this with the given code. If training starts and the main process is killed with a hard signal like SIGKILL, the child processes stay alive in most cases.
I could solve the problem for myself by overriding init_ddp_connection. The overridden method sets NCCL_BLOCKING_WAIT=1 and additionally reduces the timeout. (The torch documentation mentions that the timeout is only used for nccl if NCCL_BLOCKING_WAIT is 1.)
Is there a better way to get rid of this problem?
Or should we adjust the default behavior of lightning for the nccl backend?
Best regards
Leon Varga
Code
import pytorch_lightning as lightning
import torch
from torch.utils.data import DataLoader
class MyModule(lightning.LightningModule):
def __init__(self):
super(MyModule, self).__init__()
self.model = torch.nn.Linear(1000, 1000)
self.criterion = torch.nn.MSELoss()
def train_dataloader(self) -> DataLoader:
data = torch.randn((int(1e5), 2, 1000))
training_generator = DataLoader(data)
return training_generator
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.model.parameters())
return [optimizer]
def training_step(self, batch, batch_idx):
pred = self.model(batch[:, 0])
loss = self.criterion(pred, batch[:, 1])
return {
'loss': loss,
}
def forward(self, x):
return self.model(x)
if __name__ == "__main__":
model = MyModule()
trainer = lightning.Trainer(default_root_dir='/tmp/test',
max_epochs=100, gpus=-1,
distributed_backend='ddp')
trainer.fit(model)
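A minimal sketch of the environment-variable part of the workaround described above (the timeout reduction would additionally require overriding init_ddp_connection, whose signature varies between versions):

import os

# Make NCCL collectives fail after the timeout instead of blocking forever,
# so crashed ranks are less likely to leave zombie processes holding GPU memory.
os.environ["NCCL_BLOCKING_WAIT"] = "1"

# ... build MyModule and the Trainer exactly as above, then trainer.fit(model)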
What's your environment?
Linux
Pytorch 1.5 / 1.6
PytorchLightning 0.8.5
|
ModelCheckpoint with custom filepath don't support training on multiple nodes
|
[
"bug",
"help wanted",
"priority: 0"
] |
🐛 Bug
When training on multiple nodes using ModelCheckpoint with a custom filepath, a FileExistsError is raised by the following line of code: model_checkpoint.py#L127.
Maybe a try-except block is needed?
|
RNN batch_first performance considerations based on backing kernel
|
[
"question",
"won't fix"
] |
Question:
Can I use batch_first=True everywhere without worrying about performance differences on CPU, GPU, TPU?
The defaults PyTorch sets are batch_first=False for all RNNs (RNN, LSTM, GRU). PyTorch Lightning mandates batch_first=True for truncated_bptt_steps, however. Will setting it to True mean relatively faster performance on CPU and slower on GPU (for example) compared to setting it to False?
If batch_first affects performance specific to device (CPU, GPU, TPU), should we check for device and set batch_first accordingly? (In this case, seems like a prime target for this framework.)
(Aside: Something I've always wondered: if the CUDA kernel is better (better time/space?) when batch_first=False, why doesn't it internally transpose(0, 1), compute, and transpose(1, 0) back instead of exposing this complexity to the user?
i.e., in the general case, might it not be better to use batch_first everywhere and perform the gymnastics internally within the specific implementation?)
Not sure if this project is the best place to ask this question, but I was implementing a project using this wonderful framework and it seems to me this question falls within the "engineering code" purview of Pytorch Lightning.
|
Is limit_train_batches shuffled or random?
|
[
"question"
] |
Hi, I am using limit_train_batches. If it is set, does it mean a subset of the whole train dataset is used, similar to torch.utils.data.random_split?
|
warm up LR causes crash
|
[
"bug",
"help wanted"
] |
My ResNet encoder and transformer decoder are not training well, so I am trying all kinds of things.
My latest attempt at improvement is to use a warm-up learning rate as described here:
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/docs/source/optimizers.rst
My code is an exact copy:
def optimizer_step(self, epoch_nb, batch_nb, optimizer, optimizer_i, opt_closure):
    if self.trainer.global_step < 500:
        lr_scale = min(1., float(self.trainer.global_step + 1) / 500.)
        for pg in optimizer.param_groups:
            pg['lr'] = lr_scale * self.hparams.learning_rate
            print(f"lr={pg['lr']}")
    optimizer.step()
    optimizer.zero_grad()
The crash is as follows:
Epoch 1: 0%|β | 15/3544 [00:27<1:48:33, 1.85s/it, loss=nan, v_num=19]Traceback (most recent call last):
File "kiss_transformer.py", line 539, in <module>
trainer.fit(model, train_loader)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 34, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1023, in fit
self.accelerator_backend.train(model, nprocs=self.num_processes)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_spawn_backend.py", line 42, in train
mp.spawn(self.ddp_train, nprocs=nprocs, args=(self.mp_queue, model,))
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 119, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_spawn_backend.py", line 154, in ddp_train
results = self.trainer.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1211, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 393, in train
self.run_training_epoch()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 490, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 887, in run_training_batch
grad_norm_dic = self.run_batch_backward_pass(split_batch, batch_idx, opt_idx, optimizer)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 948, in run_batch_backward_pass
self.call_optimizer_step(optimizer, opt_idx, batch_idx, split_batch)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 986, in call_optimizer_step
using_native_amp=native_amp)
TypeError: optimizer_step() got an unexpected keyword argument 'using_native_amp'
I have tried both these versions and same crash:
#RUN pip install pytorch-lightning==0.8.5
RUN pip install pytorch-lightning==0.9.0rc12
Not that I believe these things are related, but my code has the following:
trainer = pl.Trainer(gpus=[0, 1], accumulate_grad_batches=16, callbacks=[lr_logger]) #
trainer.fit(model, train_loader)
|
Trainer "optimizers" attribute is None when saving checkpoint and callbacks list is not empty
|
[
"bug",
"help wanted",
"waiting on author",
"checkpointing"
] |
🐛 Bug
I'm training a GAN and I'm running a few custom callbacks as well. When the model attempts to save at the end of the first epoch, it crashes. Here's the very strange thing: I have the exact same code in a Jupyter notebook and the error doesn't occur.
To Reproduce
Steps to reproduce the behavior:
The bug does not occur when the callbacks list passed into the trainer is empty. None of the callbacks I'm using have anything to do with saving checkpoints, they're all for logging certain things about the model. Enabling any one of them causes the error. Running the exact same code in Jupyter results in no crashes.
Stack trace:
███████████████████████████████████-| 98.33% [590/600 00:05<00:00 loss: -0.558, v_num: 1, d_loss: -1.120, g_loss: -0.016]
Traceback (most recent call last):
File "mnist-dense-gan-convergence.py", line 55, in <module>
main(args)
File "mnist-dense-gan-convergence.py", line 45, in main
trainer.fit(gan)
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1044, in fit
results = self.run_pretrain_routine(model)
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine
self.train()
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
self.run_training_epoch()
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 502, in run_training_epoch
self.check_checkpoint_callback(should_check_val)
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 513, in check_checkpoint_callback
[c.on_validation_end(self, self.get_model()) for c in checkpoint_callbacks]
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 513, in <listcomp>
[c.on_validation_end(self, self.get_model()) for c in checkpoint_callbacks]
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 12, in wrapped_fn
return fn(*args, **kwargs)
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 309, in on_validation_end
self._do_check_save(filepath, current, epoch)
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 346, in _do_check_save
self._save_model(filepath)
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 168, in _save_model
self.save_function(filepath, self.save_weights_only)
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_io.py", line 268, in save_checkpoint
checkpoint = self.dump_checkpoint(weights_only)
File "/Users/robbie/.conda/envs/ganresearch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_io.py", line 350, in dump_checkpoint
for i, optimizer in enumerate(self.optimizers):
TypeError: 'NoneType' object is not iterable
Code sample
Here is the relevant part of my setup code:
inception_callback = GANInceptionScorer(classifier, logits=True, sample_size=1000, input_shape=(-1, 1, 28, 28))
log_dir = os.path.abspath('../logs/mnist-dense-gan-convergence')
params = ParameterMatrixCallback()
callbacks = [
GANProgressBar(),
GANTensorboardImageView(),
params,
inception_callback
]
trainer_args = {
'max_epochs': 100,
'default_root_dir': log_dir,
'callbacks': callbacks,
'progress_bar_refresh_rate': 0
}
print(log_dir)
try:
    trainer = Trainer(gpus=1, **trainer_args)
except MisconfigurationException:
    trainer = Trainer(**trainer_args)
trainer.fit(gan)
Expected behavior
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
and the same code in Jupyter:
inception_callback = GANInceptionScorer(classifier, logits=True, sample_size=1000, input_shape=(-1, 1, 28, 28))
log_dir = os.path.abspath('../logs/mnist-gan-dense')
params = ParameterMatrixCallback()
trainer_args = {
'max_epochs': 200,
'callbacks': [GANProgressBar(), GANTensorboardImageView(n=4), params, inception_callback],
'progress_bar_refresh_rate': 0,
'default_root_dir': log_dir
}
t = Trainer(**trainer_args)
PyTorch Version (e.g., 1.0): 1.3.1
OS (e.g., Linux): macOS
How you installed PyTorch (conda, pip, source): conda
Python version: 3.7
Any other relevant information: pytorch-lightning 0.8.5
|
mlflow checkpoints in the wrong location
|
[
"bug",
"help wanted"
] |
I'm not sure if I'm doing something wrong; I'm using MLflow instead of TensorBoard as a logger. I've used the defaults, i.e.
mlflow = loggers.MLFlowLogger()
trainer = pl.Trainer.from_argparse_args(args, logger=mlflow)
I'm ending up with the following folder structure
\mlflow
\mlflow\1
\mlflow\1\{guid}\artifacts
\mlflow\1\{guid}\metrics
\mlflow\1\{guid}\params
\mlflow\1\{guid}\meta.yaml
\1\{guid}\checkpoints
i.e. the checkpoints are in the wrong location, they should be in the \mlflow folder.
Perhaps this is an mlflow rather than pytorch-lightning issue?
I'm using pytorch-lightning 0.8.5 on macos running in python 3.7.6
|
Custom Checkpoint callback for multiple models
|
[
"question"
] |
❓ Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
I am looking to write my own callback for checkpointing for a list of models I initialize in init().
Code
Let's say I create 10 time-series models and 1 image model. Each model inherits LightningModule.
So LITFusionExp has 11 models.
When I save the checkpoint I can only see cnn_model's checkpoint and not ts_models.
However, I can see that trainer updates my ts_models.
The problem thus is when I reload the checkpoint all the ts_models are just randomly initialized. How to save ts_models too?
Thanks for the help
class LITFusionExp(LightningModule):
    def __init__(self, hparams):
        super().__init__()
        self.ts_models = [Conv1dmultivariate(input_channels=10).cuda() for _ in range(10)]
        self.cnn_model = LITConvAEexp(hparams)

trainer.fit(LITFusionExp(hparams))
trainer.save_checkpoint('mypath.ckpt')
###
my_ckpt = torch.load('mypath.ckpt')
# my_ckpt['state_dict'] has only keys with respect to the CNN model
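The usual fix for submodules stored in a plain Python list is to register them in an nn.ModuleList so they appear in state_dict() and therefore in the checkpoint. A sketch based on the snippet above (Conv1dmultivariate and LITConvAEexp are the author's classes):

import torch
from pytorch_lightning import LightningModule

class LITFusionExp(LightningModule):
    def __init__(self, hparams):
        super().__init__()
        # nn.ModuleList registers each model, so their parameters appear in
        # state_dict() and are saved/restored with the checkpoint. Lightning
        # also moves them to the GPU, so the explicit .cuda() is unnecessary.
        self.ts_models = torch.nn.ModuleList(
            Conv1dmultivariate(input_channels=10) for _ in range(10)
        )
        self.cnn_model = LITConvAEexp(hparams)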
What's your environment?
OS: Linux
Packaging [e.g. conda]
Version [e.g. 0.8.5]
|
Issue with pl.Trainer.from_argparse_args(...)
|
[
"bug",
"help wanted"
] |
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Use parser = pl.Trainer.add_argparse_args(parser)
Run python main.py --overfit_batches 1
The training runs over the whole dataset instead of running on a single batch
Code sample
Expected behavior
Only one batch should have run.
Environment
CUDA:
GPU:
Tesla P100-PCIE-16GB
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cu101
pytorch-lightning: 0.8.5
tensorboard: 2.3.0
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context
|
Using IterableDatasets without __len__ for Training
|
[
"bug",
"help wanted"
] |
Calling fit(model, trainloader, evalloader) internally calls enforce_datamodule_dataloader_override. This function
has the if statement if (train_dataloader or val_dataloaders) and datamodule:.
pytorch-lightning/pytorch_lightning/trainer/configuration_validator.py
Line 13
in
2c935d0
if (train_dataloader or val_dataloaders) and datamodule:
This is similar to PR #1560: the problem is that if (dl) translates to if bool(dl), but there is no DataLoader.__bool__, so bool() falls back to DataLoader.__len__() > 0. And DataLoader.__len__ uses IterableDataset.__len__ for IterableDatasets, for which __len__ is undefined.
The fix is also the same: the if dl should be replaced by if dl is not None.
I will open a PR fixing this.
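A tiny illustration of why the truthiness check fails for IterableDataset-backed loaders (plain PyTorch, no Lightning):

from torch.utils.data import DataLoader, IterableDataset

class Stream(IterableDataset):
    def __iter__(self):
        yield from range(5)

loader = DataLoader(Stream(), batch_size=2)

# bool(loader) falls back to len(loader), and DataLoader.__len__ raises
# TypeError for datasets that do not define __len__.
try:
    bool(loader)
except TypeError as e:
    print("truthiness check failed:", e)

# The safe check used in the proposed fix:
print(loader is not None)  # True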
|
Adaptive Gradient Clipping
|
[
"feature",
"help wanted"
] |
🚀 Feature
See code here: https://github.com/pseeth/autoclip
Motivation
a simple method for automatically and adaptively choosing a gradient clipping threshold, based on the history of gradient norms observed during training. Experimental results show that applying AutoClip results in improved generalization performance for audio source separation networks. Observation of the training dynamics of a separation network trained with and without AutoClip show that AutoClip guides optimization into smoother parts of the loss landscape. AutoClip is very simple to implement and can be integrated readily into a variety of applications across multiple domains.
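A rough sketch of the idea as a Lightning hook; the percentile value and the lazy history handling are assumptions based on the linked description, not the authors' reference code:

import numpy as np
import torch

class AutoClipMixin:
    """Mixin for a LightningModule: clips gradients to a percentile of the
    gradient-norm history observed so far (sketch)."""

    autoclip_percentile = 10.0

    def on_after_backward(self):
        # Lazily create the history the first time the hook fires.
        if not hasattr(self, "_grad_norm_history"):
            self._grad_norm_history = []

        norms = [p.grad.detach().norm() for p in self.parameters() if p.grad is not None]
        if not norms:
            return
        total_norm = torch.norm(torch.stack(norms)).item()
        self._grad_norm_history.append(total_norm)

        clip_value = np.percentile(self._grad_norm_history, self.autoclip_percentile)
        torch.nn.utils.clip_grad_norm_(self.parameters(), clip_value)

# Usage sketch: class MyModel(AutoClipMixin, pl.LightningModule): ...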
|
TensorBoardLogger not saving hparams without metrics
|
[
"bug",
"help wanted"
] |
🐛 Bug
log_hyperparams for TensorBoardLogger saves no data with default metrics=None, only hparam entries/names show up in sidebar
To Reproduce
Steps to reproduce the behavior:
import pytorch_lightning as pl
logger = pl.loggers.TensorBoardLogger("./test_logs")
test_dict = {"test":0}
logger.log_hyperparams(test_dict) ## no data saved
logger.log_hyperparams(test_dict, test_dict) ## works
logger.log_metrics(test_dict) ## works
logger.experiment.add_hparams(test_dict, test_dict) ## works but saves in a different events file
Expected behavior
hparams data is saved and viewable via tensorboard with default args
Proposed solution
For default hparams logging without metrics, add a placeholder metric? I can do a PR if this is appropriate.
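The placeholder-metric workaround alluded to looks roughly like this (the metric name is arbitrary; the two-argument call is the one the report already shows working):

import pytorch_lightning as pl

logger = pl.loggers.TensorBoardLogger("./test_logs")
hparams = {"test": 0}

# Passing a (dummy) metrics dict makes TensorBoard write an hparams session
# with data, so the values actually show up in the HPARAMS tab.
logger.log_hyperparams(hparams, {"hp_metric": 0})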
|
Partially overwrite parameters when using load_from_checkpoint
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
I noticed that PL now supports arbitrary nested dictionaries as hparams; I'm really happy with that. But I found a small problem when using load_from_checkpoint. This function accepts kwargs to override hparams from the checkpoint, but it uses dict.update, and Python's built-in update cannot handle nested updates well. I use addict.Dict to pass parameters around, and what I want to achieve is exactly what it does: https://github.com/mewwts/addict#update
Which means that if the config in the checkpoint is {a: {b: 3, c: 4}} and I override the parameters via load_from_checkpoint(..., kwargs={a: {c: 5}}), the hparams should become {a: {b: 3, c: 5}} instead of {a: {c: 5}}.
I do initialization in __init__ of my module, so it's a bit tricky to change parameters after the module is initialized. Is there another way to change the hparams from the checkpoint before the module is initialized? Right now I have to tweak the checkpoint config directly...
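A small sketch of the nested update being asked for; the helper name is made up, and it mirrors addict's behaviour for plain dicts:

def deep_update(base: dict, overrides: dict) -> dict:
    """Recursively merge `overrides` into `base`, updating nested dicts
    instead of replacing them wholesale."""
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_update(base[key], value)
        else:
            base[key] = value
    return base

hparams = {"a": {"b": 3, "c": 4}}
deep_update(hparams, {"a": {"c": 5}})
print(hparams)  # {'a': {'b': 3, 'c': 5}}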
|
AttributeError: module 'pytorch_lightning' has no attribute 'TrainResult'
|
[
"bug",
"help wanted"
] |
🐛 Bug
Following https://pytorch-lightning.readthedocs.io/en/latest/new-project.html
I tried to log loss values to Tensorboard:
def training_step(self, batch, batch_idx):
    loss = ...
    result = pl.TrainResult(minimize=loss)
    result.log('train_loss', loss)
    return result
It seems "TrainResult" is not present in pytorch-lightning; I'm using the latest version (0.8.5).
After browsing various github and SO issues I tried
import TrainResult from pytorch_lightning
but it still doesn't work
To Reproduce
Just follow example in https://pytorch-lightning.readthedocs.io/en/latest/new-project.html
def training_step(self, batch, batch_idx):
    loss = ...
    result = pl.TrainResult(minimize=loss)
    result.log('train_loss', loss)
    return result
Expected behavior
I expect code sample in documentation to work (not crash when importing a pytorch-lightning module)
Environment
PyTorch Version (e.g., 1.0): 1.5.0
OS (e.g., Linux): linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source): N/A
Python version: 3.6.9
CUDA/cuDNN version: docker image nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04
GPU models and configuration: Nvidia GTI 1080
|
CrossEntropyLoss with weights
|
[
"question"
] |
I need weights in CrossEntropyLoss (actually multiple losses, but it's the same issue). The documentation talks about tensors copied from other tensors, but there is no tensor to copy from in __init__. So I'm stuck.
To make the weights unquestionably simple, I use ones.
class JJG_Transformer(pl.LightningModule):
    def __init__(self, alphanet_plus_2, letter_weights_per_position):
        super(JJG_Transformer, self).__init__()
        self.criterions = []
        for weight in letter_weights_per_position:
            weight = torch.ones(94)
            self.criterions.append(torch.nn.CrossEntropyLoss(weight=weight))

    def validation_step(self, batch, batch_idx):
        batch_im, batch_true_value_NT, batch_letter_transformer_input = batch
        out_NTA = self(batch_im, batch_letter_transformer_input)
        loss0 = self.criterions[0](out_NTA[:, 0, :], batch_true_value_NT[:, 0])
        loss1 = self.criterions[1](out_NTA[:, 1, :], batch_true_value_NT[:, 1])
        loss = loss0 + loss1
        tensorboard_logs = {'val_loss': loss, 'val_loss0': loss0, 'val_loss1': loss1}
        return {'val_loss': loss, 'log': tensorboard_logs}
File "/home/john/Documents/GitHub/Offline_Handwriting_Recognition/Solutions/Aug2020_simple_transformer/src/kiss_transformer.py", line 254, in <module>
trainer.fit(model, train_dataloader=train_loader, val_dataloaders=val_loader)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 34, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1017, in fit
self.accelerator_backend.train(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 56, in train
self.ddp_train(process_idx=self.task_idx, mp_queue=None, model=model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 219, in ddp_train
results = self.trainer.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1196, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1229, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 325, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 609, in evaluation_forward
output = model(*args)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/overrides/data_parallel.py", line 160, in forward
output = self.module.validation_step(*inputs[0], **kwargs[0])
File "/home/john/Documents/GitHub/Offline_Handwriting_Recognition/Solutions/Aug2020_simple_transformer/src/kiss_transformer.py", line 128, in validation_step
loss0 = self.criterions[0](out_NTA[:,0,:], batch_true_value_NT[:,0])
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 948, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2422, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2218, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'weight' in call to _thnn_nll_loss_forward
Traceback (most recent call last):
File "kiss_transformer.py", line 254, in <module>
trainer.fit(model, train_dataloader=train_loader, val_dataloaders=val_loader)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 34, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1030, in fit
results = self.accelerator_backend.spawn_ddp_children(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 118, in spawn_ddp_children
results = self.ddp_train(local_rank, mp_queue=None, model=model, is_master=True)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 219, in ddp_train
results = self.trainer.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1196, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1229, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 325, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 609, in evaluation_forward
output = model(*args)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/overrides/data_parallel.py", line 160, in forward
output = self.module.validation_step(*inputs[0], **kwargs[0])
File "kiss_transformer.py", line 128, in validation_step
loss0 = self.criterions[0](out_NTA[:,0,:], batch_true_value_NT[:,0])
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 948, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2422, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2218, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'weight' in call to _thnn_nll_loss_forward
trainer = pl.Trainer( gpus=[0, 1],
accumulate_grad_batches=16,
max_epochs=500,
check_val_every_n_epoch=1,
distributed_backend='ddp',
pl.__version__: 0.9.0rc12
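One way to avoid this device mismatch, sketched against the class above rather than the author's full model: register the criteria as submodules (e.g. in an nn.ModuleList) so their weight buffers are moved to the GPU together with the LightningModule.

import torch
import pytorch_lightning as pl

class JJG_Transformer(pl.LightningModule):
    def __init__(self, letter_weights_per_position):
        super().__init__()
        # CrossEntropyLoss stores `weight` as a buffer, so putting the criteria
        # in an nn.ModuleList lets Lightning move the weights to the GPU
        # together with the rest of the module, avoiding the cpu/cuda mismatch
        # shown in the traceback above.
        self.criterions = torch.nn.ModuleList(
            torch.nn.CrossEntropyLoss(weight=torch.ones(94))
            for _ in letter_weights_per_position
        )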
|
Gradient Clipping for discriminator only
|
[
"feature",
"help wanted"
] |
How can I clip the weights of only the discriminator while training a GAN?
Thanks
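For the gradient-clipping reading of the question (per the issue title), a sketch via a LightningModule hook; self.discriminator and the max_norm value are assumptions:

import torch
import pytorch_lightning as pl

class GAN(pl.LightningModule):
    # generator / discriminator / training_step omitted; `self.discriminator`
    # is assumed to be the discriminator sub-module.

    def on_after_backward(self):
        # Clip only the discriminator's gradients; generator parameters are
        # left untouched. Keep the Trainer's gradient_clip_val at its default
        # (0) so global clipping does not run on top of this.
        torch.nn.utils.clip_grad_norm_(self.discriminator.parameters(), max_norm=0.5)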
|
validation_epoch_end won't log if no logging is done in validation_step
|
[
"bug",
"help wanted",
"checkpointing"
] |
🐛 Bug
@edenlightning looks like setting both logger=False and prog_bar=False won't do anything. If this is intended, maybe we should add a warning or something.
I also saw another issue: if I don't log anything in validation_step, then values logged in validation_epoch_end won't be logged either, even if we set logger=True. I updated the notebook attached above to verify this.
Code sample
https://colab.research.google.com/drive/1Os9oSPK_rwwutcFdZTDsBaMc2IrqkSrP#scrollTo=3BJmPqHwX6WA
Environment
pl: master
env: colab
|
ModelCheckpoint does not create full path
|
[
"bug",
"help wanted",
"priority: 0",
"checkpointing"
] |
🐛 Bug
To Reproduce
Run checkpoint_callback = ModelCheckpoint('my/path/')
Only the "my" folder is created.
I think this line discards the trailing slash, so the directories are not created as intended when the path is split.
Expected behavior
Path should be fully created.
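Until the path handling is fixed, a simple user-side workaround (a sketch) is to create the directory tree up front:

import os
from pytorch_lightning.callbacks import ModelCheckpoint

ckpt_dir = "my/path/"
os.makedirs(ckpt_dir, exist_ok=True)  # ensure the full directory tree exists
checkpoint_callback = ModelCheckpoint(ckpt_dir)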
|
Validation step isn't being ran
|
[
"question"
] |
❓ Questions and Help
What is your question?
I have been trying to get the trainer to call the validation_step function, but it doesn't seem to ever get called. I assume I am missing something obvious, but having looked at the tutorials and docs I haven't been able to find it. The code for the model and trainer is below. What might I be missing? Thank you for the help!
Code
class SegModel(pl.LightningModule):
def __init__(self, batch_size, lr):
super(SegModel, self).__init__()
self.batch_size = batch_size
self.learning_rate = lr
self.net = UNet(num_classes=1)
self.transform = transforms.Compose([
transforms.ToTensor()
])
self.trainset = Stacker(input_images, truth_images, transform=self.transform)
self.validset = Stacker(input_images, truth_images, transform=self.transform)
self.testset = Stacker(input_images, truth_images, transform=self.transform)
def forward(self, x):
return self.net(x)
def training_step(self, batch, batch_nb):
img, mask = batch
img = img.float()
mask = mask.long()
out = self.forward(img)
loss_val = dice_loss(mask, out)
return {'loss': loss_val, 'log': {'train_loss': loss_val}}
def validation_step(self, batch, batch_nb):
print("RUNNING VALIDATION")
img, mask = batch
img = img.float()
mask = mask.long()
out = self.forward(img)
loss_val = dice_loss(mask, out)
return {'val_loss': loss_val,
'val_dice': dice(out, mask),
'val_iou': IoU(out, mask)
}
def test_step(self, batch, batch_nb):
img, mask = batch
img = img.float()
mask = mask.long()
out = self.forward(img)
loss_val = dice_loss(mask, out)
return {'test_loss': loss_val,
'test_dice': dice(out, mask),
'test_iou': IoU(out, mask)
}
def validation_end(self, outputs):
if len(outputs)==0: return {}
val_loss_mean = torch.stack([x['val_loss'] for x in outputs]).mean()
val_dice_mean = torch.stack([x['val_dice'] for x in outputs]).mean()
val_iou_mean = torch.stack([x['val_iou'] for x in outputs]).mean()
return {'val_loss': val_loss_mean,
'log': {
'val_loss': val_loss_mean,
'val_dice': val_dice_mean,
'val_iou': val_iou_mean
}}
def test_end(self, outputs):
if len(outputs)==0: return {}
test_loss_mean = torch.stack([x['test_loss'] for x in outputs]).mean()
test_dice_mean = torch.stack([x['test_dice'] for x in outputs]).mean()
test_iou_mean = torch.stack([x['test_iou'] for x in outputs]).mean()
print(test_dice_mean, test_iou_mean)
return {'test_loss': test_loss_mean,
'log': {
'test_loss': test_loss_mean,
'test_dice': test_dice_mean,
'test_iou': test_iou_mean
}}
def configure_optimizers(self):
opt = torch.optim.Adam(self.net.parameters(), lr=self.learning_rate)
sch = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=10)
return [opt], [sch]
@pl.data_loader
def train_dataloader(self):
return DataLoader(self.trainset, batch_size=self.batch_size, shuffle=True)
@pl.data_loader
def valid_dataloader(self):
return DataLoader(self.validset, batch_size=self.batch_size, shuffle=False)
@pl.data_loader
def test_dataloader(self):
return DataLoader(self.testset, batch_size=self.batch_size, shuffle=False)
model = SegModel(1, 0.001)
trainer = pl.Trainer(
gpus=[0],
early_stop_callback=None,
max_epochs=40,
check_val_every_n_epoch=1,
)
trainer.fit(model)
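One thing worth checking (an assumption about the cause, not confirmed in the thread): Lightning looks for a hook named val_dataloader, so a method named valid_dataloader is never picked up and validation is skipped. A sketch of the rename, keeping the rest of the class as above:

import pytorch_lightning as pl
from torch.utils.data import DataLoader

class SegModel(pl.LightningModule):
    # ... everything else exactly as in the snippet above ...

    @pl.data_loader
    def val_dataloader(self):  # was `valid_dataloader`, which Lightning never calls
        return DataLoader(self.validset, batch_size=self.batch_size, shuffle=False)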
What's your environment?
OS: Windows
Packaging: conda
Version: 0.6.1
|
add a self.lr to trainer
|
[
"feature",
"help wanted"
] |
🚀 Feature
Motivation
Pitch
Alternatives
Additional context
|
calling Trainer.test(model) will perform test twice
|
[
"bug",
"help wanted"
] |
🐛 Bug
When calling Trainer.test(model) separately after training, the test routine is called twice.
To Reproduce
Steps to reproduce the behavior:
model = my_lightning_model(some_hparams)
trainer = Trainer()
trainer.fit(model)
trainer.test(model) #the test routine will be performed twice
Expected behavior
The test should be only performed once via trainer.test(model).
Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
Nvidia driver version: 430.14
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] pytorch-lightning==0.6.1.dev0
[pip3] torch==1.4.0
[pip3] torchvision==0.5.0
[conda] Could not collect
Potential reason causing this bug
By diving into PL's source code, I found the reason might be related to pytorch_lightning/trainer/trainer.py at line 1104 & line 1177:
line 1104:
# when testing requested only run test and return
if self.testing:
    # only load test dataloader for testing
    self.reset_test_dataloader(ref_model)
    self.run_evaluation(test_mode=True)
    return
line 1177:
self.testing = True
if model is not None:
self.fit(model)
self.run_evaluation(test_mode=True)
When calling trainer.test(model) at line 1177, self.testing is set to True; since model is not None, self.fit(model) is called; then at line 1104 inside fit(), self.testing is True, so self.run_evaluation is called a first time. After that, the self.run_evaluation in test() (as in line 1180) is called again, so the test evaluation runs twice.
Potential fix
change the code snippet in test() to
self.testing = True
if model is not None:
    # as self.testing is True, self.fit(model) will only perform the test
    self.fit(model)
else:
    # no model was passed, so just run evaluation on the model already attached to the trainer
    self.run_evaluation(test_mode=True)
might fix the problem?
P.S. if possible, I'd be happy to submit a PR on this :)
|
TypeError: validation_step() takes 3 positional arguments but 4 were given
|
[
"bug",
"help wanted"
] |
π Bug
When running my model I get the error message: TypeError: validation_step() takes 3 positional arguments but 4 were given
Stacktrace:
line 106, in <module> trainer.fit(model)
line 707, in fit self.run_pretrain_routine(model)
line 812, in run_pretrain_routine self.evaluate(model, self.get_val_dataloaders(),self.num_sanity_val_steps, self.testing)
line 234, in evaluate
test)
line 365, in evaluation_forward
output = model.validation_step(*args)
To Reproduce
Steps to reproduce the behavior:
Install PyTorch-lightning with pip install pytorch-lightning
Follow the tutorial from William Falcon
Run it
See error
Code sample
Here is my validation step:
def validation_step(self, val_batch, batch_idx):
x, y = val_batch
logits = self.forward(x)
loss = self.cross_entropy_loss(logits, y)
return {'val_loss': loss}
Expected behavior
No error
Environment
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.2.89
GPU models and configuration: GPU 0: GeForce GTX 1050 Ti
Nvidia driver version: 442.19
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.16.1
[conda] blas 1.0 mkl
[conda] mkl 2020.0 166
[conda] mkl-service 2.3.0 py37hb782905_0
[conda] mkl_fft 1.0.15 py37h14836fe_0
[conda] mkl_random 1.1.0 py37h675688f_0
[conda] pytorch 1.4.0 py3.7_cuda101_cudnn7_0 pytorch
[conda] pytorch-ignite 0.4.0.dev20200229 pypi_0 pypi
[conda] pytorch-lightning 0.6.0 pypi_0 pypi
[conda] torchvision 0.4.1 pypi_0 pypi
[conda] torchviz 0.0.1 pypi_0 pypi
Additional context
Related Issue I think:
#105
|
I can't find callbacks on pl.Trainer()
|
[
"question"
] |
I saw in the documentation that the Trainer accepts a 'callbacks' argument, but when I run it I get an error.
The same happens for the 'num_tpu_cores' argument.
TypeError Traceback (most recent call last)
<ipython-input-23-62ad7a703918> in <module>()
2
3 model = LightningBirdsClassifier(num_classes=200)
----> 4 trainer = pl.Trainer(gpus=-1, max_nb_epochs=120, callbacks=[ProgressBar()])
5
6 trainer.fit(model)
TypeError: __init__() got an unexpected keyword argument 'callbacks'
|
The difference between Module and Trainer load from checkpoint?
|
[
"docs"
] |
LightningModule has a function load_from_checkpoint, while the Trainer also has an argument, namely resume_from_checkpoint. What's the difference between them?
By the way, I want to print the best result stored in the checkpoint whenever I resume from it; how can I do this? I override on_load_checkpoint of LightningModule, but nothing happens...
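For the second question, a minimal sketch of the hook (which keys are present in the checkpoint dict depends on what was saved, so the prints below are only illustrative):
import pytorch_lightning as pl

class MyModule(pl.LightningModule):
    def on_load_checkpoint(self, checkpoint):
        # the hook receives the raw checkpoint dict when weights are restored
        print("available keys:", list(checkpoint.keys()))
        print("resumed from epoch:", checkpoint.get("epoch"))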
|
Error: object has no attribute 'num_train_imgs' in version master, but not 0.6.0
|
[
"bug",
"help wanted"
] |
I am running on Ubuntu 18, and trying to run the vq_vae.yaml config from this package:
https://github.com/AntixK/PyTorch-VAE
It runs fine under version 0.6.0 with Python 3.7, but when I update to lightning-master (0.6.1) I get this error:
AttributeError: 'VAEXperiment' object has no attribute 'num_train_imgs'
The relevant .py file is attached:
experiment.zip
Is there a changelog that would tell me what might have changed? AntixK wrote this package for 0.6.0, and I'd like to use it with Apex, which isn't supported on 0.6.0.
|
Why is there no training_epoch_end?
|
[
"feature",
"help wanted",
"let's do it!"
] |
π Feature
If I want to calculate and log average statistics for the training epoch, it seems like there is no option to define a "training_epoch_end" in the LightningModule, as there is for validation_epoch_end and test_epoch_end.
Motivation
Seems very intuitive to have this function. I know the on_epoch_end hook exists, but the "outputs" object with training history for that epoch is not available.
Pitch
Same behavior of validation_epoch_end and test_epoch_end in training.
Sorry if there is something like this already, just started to use Pl. (the master version).
|
fast_dev_run -> unit_test
|
[
"feature",
"discussion"
] |
Anyone want to make this change?
rename fast_dev_run -> unit_test
add checking the test set as well (currently only checks val, train).
|
[distributed] set_nvidia_flags doesn't affect dp, does affect ddp
|
[
"bug",
"help wanted",
"good first issue",
"priority: 0"
] |
π Bug
When default CUDA GPU order differs from PCI_BUS_ID order the user won't use the same GPUs in DP and DDP modes.
Trainer(gpus='0,1,2', distributed_backend='dp') uses gpus with PCI_BUS_ID's: '0,1,3' (on my machine) whereas Trainer(gpus='0,1,2', distributed_backend='ddp') uses gpus with PCI_BUS_ID's: '0,1,2'
It seems that setting environment variables programmatically doesn't actually affect that process, but it does affect spawned processes (ex. torch.multiprocessing.spawn)
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = $GPU_STR
To Reproduce
Steps to reproduce the behavior (I don't know how to reproduce GPU ordering, let me know if I should provide my own outputs)
Have machine with multiple GPUs ordered differently by CUDA and PCI_BUS_ID
pip install pytorch-lightning (or just pip install torch, mine is 1.4.0)
Use loop to check device order (and environment variables) before setting os.environ:
for x in range($NUM_GPUS):
    print(f'Data parallel device id: {x}, name: {torch.cuda.get_device_name(x)}')
Set os.environ programmatically as in pytorch-lightning distrib set_nvidia_flags
Use loop in 2. and observe no change
Multiprocessing success example gist available:
2a. Run gist with cuda_gpu_test.py . Observe Device order is changed only in spawned processes
2b. Run gist with CUDA_DEVICE_ORDER=PCI_BUS_ID cuda_gpu_test.py. Observe Device order is same in spawned processes.
2c. [optional] run gist with CUDA_VISIBLE_DEVICES=0,1 cuda_gpu_test.py. Observe pytorch error because certain gpus are not available
Expected behavior
Using the same gpus parameter / flag for trainer should use same local gpus no matter dp or ddp.
Setting CUDA_DEVICE_ORDER to PCI_BUS_ID should change the order no matter dp or ddp.
Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 19.10
GCC version: (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
GPU 2: GeForce GT 710
GPU 3: GeForce RTX 2080 Ti
Nvidia driver version: 435.21
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] pytorch-lightning==0.7.1
[pip] torch==1.4.0
[conda] pytorch-lightning 0.7.1 pypi_0 pypi
[conda] torch 1.4.0 pypi_0 pypi
(Installed pytorch-lightning from master in this environment, but pytorch is the relevant package here, as pytorch-lightning code I'm talking about hasn't changed in 5 months)
Additional context
Obviously a niche case, mostly wanted to bring to attention that the programmatic "PCI_BUS_ID" order doesn't actually do anything in DP mode. I think for most users they won't notice this, but my build can't seem to figure out which slots go to which GPUs.
This does have implications about CUDA_VISIBLE_DEVICES in ddp vs dp, as the environment variable goes into effect on ddp spawned processes but doesn't affect dp.
If the user doesn't set CUDA_VISIBLE_DEVICES from the command line I believe all devices are available to torch, but if the user explicitly sets that env variable they will be limited to the env variable (tested on minimal gist, torch throws "invalid device id" if cuda makes the device not visible).
An obnoxious workaround is importing torch after setting CUDA_DEVICE_ORDER, as noted on stack overflow. I tried this in an edit to the minimal gist but I'm pretty sure that's not going to be very useful to pytorch-lightning.
It seems to me that DP might need to launch a new process to put the lightning defined environment variables in effect. This would make it functionally more similar to ddp and ddp2 (haven't tested ddp2 personally), but does not make much sense from the pytorch / python side of things.
I don't have a minimal example with a pytorch lightning module as I ran into this today on a research project I've been working on. Would be glad to set one up with one of the lightning examples, but this might be specific to my machine.
Been loving the repo so far, cheers!
|
Update CHANGELOG for 0.7.x
|
[
"help wanted"
] |
π Bug
Update the CHANGELOG according to the recent changes (roughly the last two weeks), especially deprecated items like data_loader or xxxxx_end.
Additional context
https://github.com/PyTorchLightning/pytorch-lightning/milestone/4
|
Training on TPU stuck at "Waiting to connect to client mesh master (300 seconds) localhost:54541"
|
[
"bug",
"help wanted"
] |
π Bug
I am training GPT2 model on TPU but training is getting stuck with following as the last line:
tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:54541
To Reproduce
I have followed all steps as outlined in https://github.com/mgrankin/ru_transformers/tree/master/tpu to train a GPT2 model on TPU on Google Cloud. As mentioned there, I was able to successfully run MNIST example without any issue
python /pytorch/xla/test/test_train_mp_mnist.py
But when I ran the full training on a small dataset (10 MB), just to make sure it runs successfully, the training got stuck at the above line and didn't proceed further. When I press Ctrl-C, I can see it is waiting in socket polling. I have tried restarting the TPU, but the same problem is observed.
Steps to reproduce the behavior:
Run the fit.sh present in the repo here: https://github.com/mgrankin/ru_transformers after all the necessary configuration.
Logs
TPU Hang.log
Expected behavior
Training should complete successfully.
Environment
Collecting environment information...
PyTorch version: 1.5.0a0+65bad41
Is debug build: No
CUDA used to build PyTorch: None
OS: Debian GNU/Linux 9 (stretch)
GCC version: (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
CMake version: version 3.14.0
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] numpydoc==0.9.1
[pip] torch==1.5.0a0+65bad41
[pip] torch-xla==0.8+98a2790
[pip] torchvision==0.6.0a0+b6f28ec
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py36he904b0f_0
[conda] mkl_fft 1.0.14 py36ha843d7b_0
[conda] mkl_random 1.1.0 py36hd6b4f25_0
[conda] torch 1.5.0a0+65bad41 <pip>
[conda] torch-xla 0.8+98a2790 <pip>
[conda] torchvision 0.6.0a0+b6f28ec <pip>
Additional context
This is my first time using TPU for training.
|
Github 0.7.1 release
|
[
"docs"
] |
If I install with pip install pytorch-lightning I get version 0.7.1; however, there is no official 0.7.1 release on GitHub.
Is this intentional?
Note: there is a 0.7.1 git tag.
|
Callback derived class is called without module argument
|
[
"bug",
"help wanted"
] |
π Bug
The class Callback(abc.ABC) API expects trainer and sometimes pl_module to be supplied. E.g.:
def on_init_start(self, trainer):
    pass
See the definition
However, the caller TrainerCallbackHookMixin calls the callback methods without the module argument. E.g.:
for callback in self.callbacks:
    callback.on_init_start(self)  # self is Trainer / TrainerCallbackHookMixin
See the calls
To Reproduce
Follow the documentation for callbacks
and expect similar output
Traceback (most recent call last):
File "./train_encdec_pl.py", line 33, in <module>
callbacks=[SavePredictionCB])
File "/workspace/oplatek/code/demo/venv/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 239, in __init__
self.on_init_start()
File "/workspace/oplatek/code/demo/venv/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_hook.py", line 18, in on_init_start
callback.on_init_start(self)
TypeError: on_init_start() missing 1 required positional argument: 'trainer'
|
Issue running on Colab TPUs
|
[
"bug",
"help wanted"
] |
I am trying to train my model using Colab TPUs, but I am getting the following error and am a bit baffled. Any help or guidance would be greatly appreciated.
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/decorators.py:13: UserWarning: data_loader decorator deprecated in 0.7.0. Will remove 0.9.0
warnings.warn(w)
2020-03-10 01:18:06.634594: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57985
2020-03-10 01:18:06.638992: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57985
2020-03-10 01:18:06.645716: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57985
2020-03-10 01:18:06.653893: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57985
2020-03-10 01:18:06.660726: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57985
2020-03-10 01:18:06.663941: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57985
2020-03-10 01:18:06.669584: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57985
2020-03-10 01:18:15.441052: E tensorflow/compiler/xla/xla_client/tf_logging.cc:11] Check failed: session.Run({tensorflow::Output(result, 0)}, &outputs) == ::tensorflow::Status::OK() (Internal: From /job:tpu_worker/replica:0/task:0:
Global core array does not contain host cores [0x0x0_TC0,0x0x0_TC1,1x0x0_TC0,1x0x0_TC1,0x1x0_TC0,0x1x0_TC1,1x1x0_TC0,1x1x0_TC1]
[[{{node configure_distributed_tpu/_0}}]] vs. OK)
*** Begin stack trace ***
tensorflow::CurrentStackTrace[abi:cxx11]()
xla::XrtComputationClient::InitializeAndFetchTopology(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tensorflow::ConfigProto const&)
xla::XrtComputationClient::InitializeDevices(std::unique_ptr<tensorflow::tpu::TopologyProto, std::default_delete<tensorflow::tpu::TopologyProto> >)
xla::XrtComputationClient::XrtComputationClient(xla::XrtComputationClient::Options, std::unique_ptr<tensorflow::tpu::TopologyProto, std::default_delete<tensorflow::tpu::TopologyProto> >)
xla::ComputationClient::Create()
xla::ComputationClient::Get()
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Traceback (most recent call last):
File "main.py", line 114, in <module>
main(hparams)
File "main.py", line 95, in main
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 614, in fit
xmp.spawn(self.tpu_train, args=(model,), nprocs=self.num_tpu_cores, start_method=start_method)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 182, in spawn
start_method=start_method)
File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 119, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 116, in _start_fn
_setup_replication()
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 108, in _setup_replication
device = xm.xla_device()
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 137, in xla_device
devkind=[devkind] if devkind is not None else None)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 41, in get_xla_supported_devices
xla_devices = torch_xla._XLAC._xla_get_devices()
RuntimeError: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:1198 : Check failed: session.Run({tensorflow::Output(result, 0)}, &outputs) == ::tensorflow::Status::OK() (Internal: From /job:tpu_worker/replica:0/task:0:
Global core array does not contain host cores [0x0x0_TC0,0x0x0_TC1,1x0x0_TC0,1x0x0_TC1,0x1x0_TC0,0x1x0_TC1,1x1x0_TC0,1x1x0_TC1]
[[{{node configure_distributed_tpu/_0}}]] vs. OK)
*** Begin stack trace ***
tensorflow::CurrentStackTrace[abi:cxx11]()
xla::XrtComputationClient::InitializeAndFetchTopology(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tensorflow::ConfigProto const&)
xla::XrtComputationClient::InitializeDevices(std::unique_ptr<tensorflow::tpu::TopologyProto, std::default_delete<tensorflow::tpu::TopologyProto> >)
xla::XrtComputationClient::XrtComputationClient(xla::XrtComputationClient::Options, std::unique_ptr<tensorflow::tpu::TopologyProto, std::default_delete<tensorflow::tpu::TopologyProto> >)
xla::ComputationClient::Create()
xla::ComputationClient::Get()
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
|
Advices on some cases hard to remove cuda() call.
|
[
"question"
] |
β Questions and Help
What is your question?
I understand that it's beneficial to remove .cuda() or .to() calls to make the code flexible, but in some cases I find it hard to know which device my tensor should be on.
In the following code, my batch consists of raw strings, so it is passed as a pure Python list, not a tensor. In this case, how can I use .type_as() to move my newly created tensor to the relevant device? Help needed.
Code
def training_step(self, batch, batch_idx):
    x, y = batch  # x = list(), not a tensor
    # put the z on the appropriate gpu or tpu core
    z = sample_noise()
    z = z.type_as(x.type())  # still I need to move z to the relevant device depending on the situation. What should I do?
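One possible workaround, sketched under the assumption that sample_noise() returns a CPU tensor and that the module has at least one registered parameter, is to borrow the device from the model itself rather than from the batch:
def training_step(self, batch, batch_idx):
    x, y = batch  # x is a plain Python list of strings
    z = sample_noise()
    # any registered parameter already lives on the correct device (GPU/TPU core),
    # so its device can be reused for freshly created tensors
    z = z.to(next(self.parameters()).device)
    ...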
|
TensorBoardLogger should be able to add metric names in hparams
|
[
"feature",
"help wanted",
"won't fix"
] |
π Feature
TensorBoard allows investigating the effect of hyperparameters in the hparams tab. Unfortunately, the log_hyperparams function in TensorBoardLogger cannot add any information about which of the logged metrics is actually a "metric" which can be used for such a comparison.
Motivation
I would like to use the built-in hparams module of TensorBoard to evaluate my trainings.
Pitch
PyTorch-Lightning should give me the possibility to define the metrics of my model in some way such that any logger is able to derive which metric may be used for hyperparameter validation, as well as other possible characteristics which may be defined for those.
Additional context
The hparams method of a summary takes the following parameters:
def hparams(hparam_dict=None, metric_dict=None):
metric_dict is basically a dictionary mapping metric names to values, whereas the values are omitted in the function itself.
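For illustration, this is roughly how the underlying TensorBoard call can be used directly (the hyperparameter names and values below are made up):
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/hparams_demo")
# keys of metric_dict become selectable metrics in the HPARAMS tab;
# their values can later be updated with add_scalar under the same tags
writer.add_hparams(
    hparam_dict={"lr": 1e-3, "batch_size": 32},
    metric_dict={"hparam/val_loss": 0.25},
)
writer.close()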
|
Enable artifact logging for mlflow logger
|
[
"feature",
"help wanted",
"logger"
] |
π Feature
mlflow provides the ability to log artifacts (a local file or directory). However, the MLFlowLogger class does not have the wrapper method for this functionality.
Motivation
I always use the log_artifacts method of mlflow to log things like the last weights file for the run, confusion matrix image for the last set of weights, etc. It would be great to have these features integrated with the lightning logger.
Pitch
I'm currently using the mlflow logger and not the pytorch lightning's MLFlowLogger class but having this functionality would make the migration better.
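As a stop-gap, a sketch of what is possible today (assuming the logger exposes its MlflowClient via .experiment and its run id via .run_id; the file path is a placeholder):
from pytorch_lightning.loggers import MLFlowLogger

mlf_logger = MLFlowLogger(experiment_name="my_experiment")
# the wrapped MlflowClient can log artifacts directly until a wrapper method exists
mlf_logger.experiment.log_artifact(mlf_logger.run_id, "confusion_matrix.png")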
Additional context
https://www.mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_artifact
https://www.mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_artifacts
https://www.mlflow.org/docs/latest/concepts.html#referencing-artifacts
|
Wandb logger doesn't upload saved model checkpoint for final epoch
|
[
"bug",
"help wanted",
"logger"
] |
π Bug
When training a model on the TPU and using the wandb logger, the checkpoint for the last epoch trained doesn't get uploaded to wandb.
To Reproduce
Colab notebook: https://colab.research.google.com/drive/1oPaRWGZcz6YEol012xFADN42LV-jowtT
|
Hyperparameter Search
|
[
"question"
] |
Hi,
It looks like the hyperparameter search example is broken (https://github.com/optuna/optuna/blob/master/examples/pytorch_lightning_simple.py). Since this is such a common task, is there any example that documents how to properly integrate with a library like Optuna or Ray Tune?
Thanks!
|
configure_optimizers with OneCycleLR and Pretrain Freeze/Unfreeze
|
[
"question",
"won't fix"
] |
Hello. Thanks for the work on this framework - it's something I've been looking for, and I am currently transitioning all my own work from fast.ai to pytorch-lightning. I'm currently stuck on the configure_optimizers step.
For those not familiar, the core workflow of fast.ai goes something like this:
#create model with frozen pretrained resnet backbone and untrained linear head
model = MyResnetBasedModel()
learner = Learner(model, ...)
#train the head
learner.fit_one_cycle(5)
#unfreeze pretrained layers and train whole model
learner.unfreeze()
learner.fit_one_cycle(5)
fast.ai uses its own system for implementing the OneCycleScheduler, and it's not the most transparent system. PyTorch has an implementation of the OneCycleScheduler which their documentation illustrates as follows:
data_loader = torch.utils.data.DataLoader(...)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.01, steps_per_epoch=len(data_loader), epochs=10)
Note that OneCycleLR needs to know the total number of steps (or steps per epoch + epochs, from which it determines total steps) in order to generate the correct schedule. configure_optimizers does not appear to offer a way of accessing the necessary values to initialize OneCycleLR, as in my code below.
def configure_optimizers(self):
    optimizer = torch.optim.AdamW(self.parameters(), lr=self.hparams.lr)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, self.hparams.lr, ???)  # <---
    return optimizer, scheduler
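One possible workaround, sketched under the assumption that self.hparams also carries the number of epochs and that calling self.train_dataloader() inside configure_optimizers is acceptable:
def configure_optimizers(self):
    optimizer = torch.optim.AdamW(self.parameters(), lr=self.hparams.lr)
    steps_per_epoch = len(self.train_dataloader())  # measure the loader directly
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer,
        max_lr=self.hparams.lr,
        steps_per_epoch=steps_per_epoch,
        epochs=self.hparams.epochs,
    )
    # stepping per batch so the schedule matches the computed total steps
    return [optimizer], [{'scheduler': scheduler, 'interval': 'step'}]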
Additionally, it's unclear how the fast.ai flow of freeze, train, unfreeze, train works with Lightning, as it appears that configure_optimizers is called only once internally by the trainer. It may be possible to train frozen, checkpoint, load and unfreeze, but this adds some extra code overhead.
How can I arrange my code to use OneCycleLR with pretrained freezing/unfreezing? Any guidance on how to approach this would be appreciated.
Thanks.
|
Do not have combined train+val progress bar, keep bar output after epoch is finished
|
[
"feature",
"help wanted",
"won't fix"
] |
π Feature
The progress bar labeled "Epoch" should be renamed to "Train" and the validation data should be displayed only in a separate bar.
Additionally, each epoch should leave the final training and validation bar on-screen for visual inspection.
Motivation
It's confusing that training and validation are shown in a single bar, and it destroys the information about the iterations/sec from training. Since validation is faster, the value spikes up right at the end.
Pitch
Have two separate bars, one for training, one for validation, and none combined.
After a bar fills up, move the cursor down a line to not overwrite it.
|
Better message when DataLoader is wrong
|
[
"bug",
"let's do it!"
] |
On the verge between bug and improvement.
There was a bug in my validation DataLoader and it was returning irrelevant stuff. As a result, its length was accidentally 0. Probably an edge-case combination. The error I was getting during the validation sanity check was quite cryptic:
Traceback (most recent call last):
File "UNet_WaveProp.py", line 174, in <module>
trainer.fit(model)
File "/mnt/RDS/home/code/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 629, in fit
self.run_pretrain_routine(model)
File "/mnt/RDS/home/code/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 809, in run_pretrain_routine
False)
File "/mnt/RDS/home/code/pytorch-lightning/pytorch_lightning/trainer/evaluation_loop.py", line 300, in evaluate
eval_results = model.validation_epoch_end(outputs)
File "UNet_WaveProp.py", line 138, in validation_epoch_end
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
RuntimeError: stack expects a non-empty TensorList
I had to go through the pytorch-lightning code for a few hours to understand what was happening.
Maybe a more informative message would make more sense?
One option would be to check whether the DataLoader's size is 0.
What do you think? I could take a stab at a PR.
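Something along these lines is what I have in mind (names and the exception type are illustrative, not the actual Lightning internals):
if len(dataloader) == 0:
    raise ValueError(
        "The validation DataLoader returned 0 batches; "
        "check its dataset and sampler configuration."
    )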
|
Colab TPU error
|
[
"question",
"accelerator: tpu"
] |
I'm trying to run an LSTM model on a TPU with Colab. It throws the following error.
Exception in device=TPU:1: Aborted: Session 0275bc9f6430801b is not found.
Exception in device=TPU:3: Aborted: Session 780bc43376b5f650 is not found.
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 500, in tpu_train
self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 500, in tpu_train
self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 809, in run_pretrain_routine
False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 809, in run_pretrain_routine
False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 251, in evaluate
for batch_idx, batch in enumerate(dataloader):
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 251, in evaluate
for batch_idx, batch in enumerate(dataloader):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 31, in __next__
return self.next()
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 31, in __next__
return self.next()
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 34, in next
xm.mark_step()
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 34, in next
xm.mark_step()
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 405, in mark_step
wait=xu.getenv_as('XLA_SYNC_WAIT', bool, False))
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 405, in mark_step
wait=xu.getenv_as('XLA_SYNC_WAIT', bool, False))
RuntimeError: Aborted: Session 0275bc9f6430801b is not found.
RuntimeError: Aborted: Session 780bc43376b5f650 is not found.
Exception in device=TPU:5: Aborted: Session e191a99b56d63c29 is not found.
Traceback (most recent call last):
Does anyone have an idea what this could mean?
Thanks in advance!
|
Change the way the configure_optimizers() returns
|
[
"feature",
"help wanted",
"won't fix",
"discussion"
] |
π Feature
Force the method LightningModule.configure_optimizers() to return two lists.
Motivation
Right now you offer the flexibility of returning one of the following:
- Single optimizer
- List or Tuple - List of optimizers
- Two lists - The first list has multiple optimizers, the second a list of LR schedulers
But that can be simplified to just returning two lists, one for optimizers and one for lr_schedulers, since it does not make sense to offer that flexibility when it is converted to two lists internally anyway.
Pitch
Forcing the return of two lists adds minimal overhead for researchers and coders while remaining compatible with your existing codebase.
The change aims at reducing variability while maintaining the same degree of flexibility. This will make the behaviour of the framework easier and more straightforward to memorize.
It can be done as follows:
- Single optimizer - [optimizer], []
- List or Tuple - List of optimizers, [optimizer_1, optimizer_2], []
- Two lists - Multiple optimizers, single lr_scheduler - [optimizer_1, optimizer_2], [lr_scheduler]
- Two lists - Multiple optimizers, multiple lr_schedulers - [optimizer_1, optimizer_2], [lr_scheduler_1, lr_scheduler_2]
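A sketch of what the single convention would look like in user code (the optimizer and scheduler choices are arbitrary placeholders):
def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
    # always two lists, even with a single optimizer and scheduler
    return [optimizer], [scheduler]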
Alternatives
N/A
Additional context
Past Slack discussion: https://pytorch-lightning.slack.com/archives/CRBLFHY79/p1584064294116400
|
IterableDataset Issue, OverflowError: cannot convert float infinity to integer
|
[
"bug",
"help wanted"
] |
Hey all,
I am very new to ML, PyTorch and PyTorch Lightning, so if this is a simple problem, sorry to bother you.
However, I am struggling to switch from PyTorch to PyTorch Lightning. The PyTorch code runs with no error on Google Colab, hence I think the structure is fine.
Now I am trying to implement Lightning following: https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09
These are the link to my codes:
https://github.com/ykukkim/MLevent/blob/master/final_model.py, PyTorch
https://github.com/ykukkim/MLevent/blob/master/lightningtest.py PyTorch Lightning
However, I get the following error, and it seems to be related to IterableDataset. My datasets are imbalanced, meaning that I do not have a constant dataset length and there are far more 0's than 1's (approximately 100:1), hence I need to penalise the 0's by multiplying them with an arbitrary number.
I guess that this issue has been raised a few times, and I am not too sure whether there is a general fix or I have to play around with my dataset.
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py", line 1434, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/YKK/Documents/GitHub/mlevent/lightningtest.py", line 201, in <module>
main(hparams)
File "/Users/YKK/Documents/GitHub/mlevent/lightningtest.py", line 182, in main
trainer.fit(model)
File "/Users/YKK/anaconda3/envs/LMBTrain/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 707, in fit
self.run_pretrain_routine(model)
File "/Users/YKK/anaconda3/envs/LMBTrain/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 771, in run_pretrain_routine
self.get_dataloaders(ref_model)
File "/Users/YKK/anaconda3/envs/LMBTrain/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 200, in get_dataloaders
self.init_train_dataloader(model)
File "/Users/YKK/anaconda3/envs/LMBTrain/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 79, in init_train_dataloader
self.val_check_batch = int(self.num_training_batches * self.val_check_interval)
**OverflowError: cannot convert float infinity to integer**
|
Race condition and repeated os.remove in load_spawn_weights
|
[
"question",
"won't fix"
] |
In multi-gpu, multi-node training, after training is finished, the following error occurs:
0: File ".../estevens/pytorch/lightning_resnet50.py", line 126, in <module>
0: ResnetLightningExample.init_from_cli(sys.argv[1:]).main()
0: File "...//lightning_utils/lightning_utils.py", line 215, in main
0: trainer.fit(self)
0: File ".../pytorch_lightning/trainer/trainer.py", line 593, in fit
0: self.load_spawn_weights(model)
0: File ".../pytorch_lightning/trainer/distrib_data_parallel.py", line 372, in load_spawn_weights
0: os.remove(path)
0: FileNotFoundError: [Errno 2] No such file or directory: '/mnt/sun-pcs01/estevens/resnet50/__temp_weight_ddp_end.ckpt'
I think this is a similar issue to other os.remove bugs, but I'm worried that merely adding an if rank == 0 will move the issue to a race condition (where rank 0 removes the file before other processes can load it).
|
Add support for hierarchical dict
|
[
"feature",
"let's do it!"
] |
π Feature
Motivation
Since v0.7.0, LightningModule accepts dict hparams; however, TensorBoardLogger still raises an error with a hierarchical dict. Considering compatibility with other packages, especially Hydra #807, hierarchical dicts should be accepted by any logger.
Pitch
Flatten hierarchical dict before hparam logging
Alternatives
The function _convert_params in loggers/base.py will be changed like:
def _convert_params(self, params: Union[Dict[str, Any], Namespace]) -> Dict[str, Any]:
    # in case converting from namespace
    if isinstance(params, Namespace):
        params = vars(params)
    # added part
    params = flatten_dict(params)  # convert {'a': {'b': 'c'}} -> {'a/b': 'c'}
    if params is None:
        params = {}
    return params
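A possible sketch of the flatten_dict helper referenced above (not a final implementation; the delimiter choice is an assumption):
def flatten_dict(params, delimiter='/'):
    # {'a': {'b': 'c'}} -> {'a/b': 'c'}, recursing into nested dicts
    flat = {}
    for key, value in params.items():
        if isinstance(value, dict):
            for sub_key, sub_value in flatten_dict(value, delimiter).items():
                flat[f'{key}{delimiter}{sub_key}'] = sub_value
        else:
            flat[key] = value
    return flat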
|
epoch_end Logs Default to Steps Instead of Epochs
|
[
"bug",
"help wanted"
] |
π Bug
Logs generated within validation_epoch_end have their iteration set to the number of steps instead of number of epochs.
To Reproduce
Steps to reproduce the behavior:
Create LightningModile with the below functions.
Train the module for 1 or more epochs
Run Tensorboard
Note the iteration for logs generated in validation_epoch_end
Code Sample
def training_step(self, train_batch, batch_idx):
    x, y = train_batch
    y_hat = self.forward(x)
    loss = self.cross_entropy_loss(y_hat, y)
    accuracy = self.accuracy(y_hat, y)
    logs = {'batch/train_loss': loss}
    return {'loss': loss, 'log': logs}

def validation_step(self, val_batch, batch_idx):
    x, y = val_batch
    y_hat = self.forward(x)
    loss = self.cross_entropy_loss(y_hat, y)
    accuracy = self.accuracy(y_hat, y)
    return {'batch/val_loss': loss, 'batch/accuracy': accuracy}

def validation_epoch_end(self, outputs):
    avg_loss = torch.stack([x['batch/val_loss'] for x in outputs]).mean()
    avg_accuracy = torch.stack([x['batch/accuracy'] for x in outputs]).mean()
    tensorboard_logs = {'epoch/val_loss': avg_loss, 'epoch/accuracy': avg_accuracy}
    return {'avg_val_loss': avg_loss, 'accuracy': avg_accuracy, 'log': tensorboard_logs}
Version
v0.7.1
Screenshot
Expected behavior
Logs generated in *_epoch_end functions use the epoch as their iteration on the y axis
The documentation seems unclear on the matter. The Loggers documentation is empty. The LightningModule class doesn't describe logging in detail and the Introduction Guide has a bug in the example self.logger.summary.scalar('loss', loss) -> AttributeError: 'TensorBoardLogger' object has no attribute 'summary'.
Is there a workaround for this issue?
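One possible workaround (a sketch, assuming the TensorBoardLogger so that self.logger.experiment is a SummaryWriter) is to bypass the 'log' dict and write the epoch-level values directly with the epoch as the step:
def validation_epoch_end(self, outputs):
    avg_loss = torch.stack([x['batch/val_loss'] for x in outputs]).mean()
    # write directly to the SummaryWriter with the epoch index as global_step
    self.logger.experiment.add_scalar('epoch/val_loss', avg_loss, self.current_epoch)
    return {'avg_val_loss': avg_loss}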
|
Question about return value of `validation_epoch_end`
|
[
"question",
"won't fix"
] |
β Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
I'm a bit confused about what to return from methods like validation_epoch_end and what to put inside their log member.
Based on the docs, is the log member of the return value of validation_epoch_end mainly for logging and plotting?
In the MNIST example, if I change the validation_epoch_end method to
def validation_epoch_end(self, outputs):
    # OPTIONAL
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    tensorboard_logs = {'val_loss': avg_loss}
    return {'avg_val_loss': avg_loss}
I will get a RuntimeWarning: Can save best model only with val_loss available, skipping. It seems that it looks for metrics inside the log member to determine the best model.
If I change the training_step method to
def training_step(self, batch, batch_nb):
    # REQUIRED
    x, y = batch
    y_hat = self.forward(x)
    loss = F.cross_entropy(y_hat, y)
    tensorboard_logs = {'train_loss': loss}
    return {'log': tensorboard_logs}
and only put train_loss inside log, I will get a RuntimeError: No loss value in the dictionary returned from model.training_step(). It seems that some procedure looks for the value in the return dict itself, not in its log member.
I'm confused about what to put inside these methods' return value and their log member.
Updated:
Now I have encountered this issue, and I'm getting more and more confused about why the test result is found in the progress_bar member of the return value...
Maybe I'm missing something, but I didn't find details of all this in the docs.
Versions
pytorch-lightning: 0.7.1.
|
No validation checks when overfit_pct is set
|
[
"bug",
"help wanted"
] |
π Bug
When setting the overfit_pct to any value between 0 and 1 (exclusive) in trainer, the validation checks are disabled.
To Reproduce
I have worked on a minimal example to reproduce the bug:
import pytorch_lightning as pl
import torch
import torch.nn.functional as F


class Dataset(torch.utils.data.Dataset):
    def __init__(self, input_dim, output_dim):
        super(Dataset, self).__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim

    def __getitem__(self, idx):
        X = torch.rand(1, self.input_dim)
        y = torch.randint(0, self.output_dim, (1,))
        return X, y

    def __len__(self):
        return 1000


class Model(pl.LightningModule):
    def __init__(self, input_dim, output_dim):
        super(Model, self).__init__()
        self.layer = torch.nn.Linear(input_dim, output_dim)
        self.dataset = Dataset(input_dim, output_dim)

    def forward(self, x, y):
        yhat = torch.softmax(self.layer(x), -1)
        return F.nll_loss(yhat, y)

    def train_dataloader(self):
        return torch.utils.data.DataLoader(self.dataset, batch_size=64)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def training_step(self, batch, batch_idx):
        loss = self.forward(*batch)
        return {'loss': loss, 'log': {'loss': loss}}

    def validation_step(self, batch, batch_idx):
        loss = self.forward(*batch)
        return {'val_loss': loss, 'log': {'val_loss': loss}}


if __name__ == '__main__':
    model = Model(100, 10)
    trainer = pl.Trainer(overfit_pct=.01)
    trainer.fit(model)
Expected behavior
Validation checks occur normally
Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Manjaro Linux
GCC version: (GCC) 8.3.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: No
CUDA runtime version: 10.2.89
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: /usr/lib/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] pytorch-lightning==0.7.1
[pip] torch==1.4.0
[pip] torchvision==0.5.0
[conda] mkl 2020.0 166
[conda] pytorch 1.4.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] pytorch-lightning 0.7.1 pypi_0 pypi
[conda] torchvision 0.5.0 py37_cu101 pytorch
|
Learning Rate Schedulers' default dictionary parameters should be set via the Trainer
|
[
"feature",
"help wanted"
] |
π Feature
The default Learning Rate Schedulers (LRS) dictionary parameters should be settable from the Trainer constructor.
Motivation
The documentation doesn't seem to be clear that the LRS have the following additional parameters available to be set when you configure the optimizers:
{
    'interval': 'epoch',          # default: every epoch
    'frequency': 1,               # default: every epoch/batch
    'reduce_on_plateau': False,   # most often not a ReduceLROnPlateau scheduler
    'monitor': 'val_loss'
}
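For reference, this is roughly how those keys are supplied from user code today (a sketch; the optimizer and scheduler choices are placeholders):
def configure_optimizers(self):
    optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
    # the dict carries the extra parameters listed above
    return [optimizer], [{
        'scheduler': scheduler,
        'interval': 'epoch',
        'frequency': 1,
        'reduce_on_plateau': True,
        'monitor': 'val_loss',
    }]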
Pitch
Set those defaults in the constructor of the Trainer and the user can set them themselves.
Alternatives
Force the return type of the function LightningModule.configure_optimizers() to be List[Optimizer], List[Dict]
|
multi-gpu ddp calls validation and testing loops too many times
|
[
"bug",
"help wanted"
] |
When using ddp with multiple gpus, each validation and test loop is called with the entire validation dataset for each gpu.
Expected behavior is that the dataset is divided appropriately across the gpus.
I am using current master (cloned Mar 14), Ubuntu 19.10, Cuda 10.1, python 3.7.5, pytorch 1.4, venv environment.
The problem appears to be in auto_add_sampler() in data_loading.py. It does not create a DistributedSampler for validation or test datasets.
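In the meantime, a manual workaround sketch (self.val_dataset and the batch size are placeholders, not from my actual code) is to attach a DistributedSampler to the validation loader explicitly:
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def val_dataloader(self):
    # shard the validation set across the ddp processes until auto_add_sampler covers val/test
    sampler = DistributedSampler(self.val_dataset, shuffle=False)
    return DataLoader(self.val_dataset, batch_size=32, sampler=sampler)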
|
No Callbacks for Validation Batch Step - How To Get Progress of Validation?
|
[
"feature",
"help wanted"
] |
π Feature
The Callback class has two hooks, on_batch_start and on_batch_end. The documentation and code execution show that these are only called for training batches, not validation (or test) batches. I am building a Callback for my own logging/dashboarding via Streamlit, and I need to track the progress of both the train batch steps and the validation batch steps during training. I see that there is a validation progress bar in the console during training - how is this implemented (if not via a Callback)?
Current Code
class MyCallback(pl.Callback):
    def on_train_start(self, trainer, pl_module):
        self.train_batch_count = 0
        self.train_batch_total = len(trainer.train_dataloader)

    def on_batch_end(self, trainer, pl_module):
        self.train_batch_count += 1
        percent_complete = self.train_batch_count / self.train_batch_total
        # display on screen

    def on_epoch_end(self, trainer, pl_module):
        self.train_batch_count = 0
|
training_epoch needs to return a "loss" key in the dict
|
[
"docs"
] |
π Documentation
Hi everyone!
In the docs detailing the usage of the logging function training_epoch_end, if "loss": loss is not explicitly passed as part of the return value, the code will fail.
The docs at https://pytorch-lightning.readthedocs.io/en/latest/experiment_reporting.html#log-metrics are not correct:
def training_epoch_end(self, outputs):
    loss = some_loss()
    ...
    logs = {'train_loss': loss}
    results = {'log': logs}
    return results
should be changed to
def training_epoch_end(self, outputs):
    loss = some_loss()
    ...
    logs = {'train_loss': loss}
    results = {'loss': loss, 'log': logs}  # <------------------------ Here is the change
    return results
Use case: I want to monitor the training metrics to check whether my network is able to overfit the data (for this functionality, refer to #1076 )
|
Additional dataloader created and discarded when training with reload_dataloaders_every_epoch
|
[
"bug",
"help wanted"
] |
π Bug
I am training with reload_dataloaders_every_epoch and I've noticed it instantiates an extra DataLoader before training, for which nothing is run. This is an issue for me because I train on chunks that get loaded every epoch, and the extra loader messes with the order I load them in, especially if I reload a checkpoint; it would also be an issue for people who generate a new dataset every epoch, as they waste computation. The tqdm bar also keeps the information of the first, discarded DataLoader (in the screenshot, the number of iterations is the same for both, whereas they should be different sizes).
To Reproduce
Run the code sample below, which runs for one epoch and displays a message every time a DataLoader is created.
A DataLoader gets instantiated a first time at line 286 in training_loop.py, outside of the epoch loop (that's when it normally gets instantiated when not reloading every epoch). Then, when using reload_dataloaders_every_epoch, another one is created at the start of every epoch at line 386, inside the loop, so for the first epoch there's an extra one.
Code sample
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, Dataset
from time import sleep


class MinimalDataset(Dataset):
    def __init__(self, index):
        self.data = torch.Tensor(64 * index, 1024)

    def __getitem__(self, item):
        return self.data[item]

    def __len__(self):
        return len(self.data)


class MinimalModule(pl.LightningModule):
    def __init__(self):
        super(MinimalModule, self).__init__()
        self.nn = torch.nn.Linear(1024, 1)
        self.current_index = 0

    def forward(self, batch):
        return self.nn(batch)

    def training_step(self, batch, batch_idx):
        sleep(0.1)
        loss = self.nn(batch)[0]
        return {'loss': loss}

    def validation_step(self, batch, batch_idx):
        loss = self.nn(batch)[0]
        return {'val_loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.01)

    def train_dataloader(self):
        # REQUIRED
        self.current_index += 1
        print(f"initializing DataLoader n{self.current_index}")
        data_loader = DataLoader(MinimalDataset(self.current_index))
        return data_loader


model = MinimalModule()
trainer = pl.Trainer(reload_dataloaders_every_epoch=True, num_sanity_val_steps=0, val_check_interval=8, max_epochs=1)
trainer.fit(model)
Expected behavior
Only one dataloader should be created; two are. The tqdm bar should show 128 iterations as that is the dataset size the second time; but it shows 64 instead (I added the sleep(0.1) to leave time to observe that)
Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce RTX 2070 with Max-Q Design
Nvidia driver version: 435.21
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] pytorch-lightning==0.7.1
[pip3] torch==1.4.0
[pip3] torchvision==0.4.2
[conda] Could not collect
Additional context
|
Restarts ignores `val_check_interval`
|
[
"bug",
"help wanted",
"won't fix"
] |
π Bug
With val_check_interval != 1, checkpoints are saved in the middle of the epoch, but that location is not saved in the checkpoint, and after a restart, it always begins from the beginning of the last epoch.
To reproduce:
Run any training with val_check_interval=0.2
Kill training after, say, 40% of an epoch
Restart from the saved checkpoint
You will find that it is starting from 0% of that checkpoint
Solution:
Save and load batch_idx
Save and load the dataloader (or just the sampler). According to this https://discuss.pytorch.org/t/how-to-save-dataloader/62813/4, it is just torch.save(dataloader), but I haven't tried.
|
neptune.ai logger console error: X-coordinates must be strictly increasing
|
[
"bug",
"help wanted",
"logger"
] |
π Bug
When using the neptune.ai logger, epochs automatically get logged, even though I never explicitly told it to do so. Also, I get an error, yet everything seems to get logged correctly (apart from the epochs, which also get logged every training step and not every epoch):
WARNING:neptune.internal.channels.channels_values_sender:Failed to send channel value: Received batch errors sending channels' values to experiment BAC-32. Cause: Error(code=400, message='X-coordinates must be strictly increasing for channel: 2262414e-e5fc-4a8f-b3f8-4a8d84d7a5e2. Invalid point: InputChannelValue(2020-03-18T17:18:04.233Z,Some(164062.27599999998),None,Some(Epoch 35: 99%|#######', type=None) (metricId: '2262414e-e5fc-4a8f-b3f8-4a8d84d7a5e2', x: 164062.27599999998) Skipping 3 values.
To Reproduce
Steps to reproduce the behavior:
Write a lightning module and use the neptune.ai logger. See my own code below.
Code sample
class MemoryTest(pl.LightningModule):
# Main Testing Unit for Experiments on Recurrent Cells
def __init__(self, hp):
super(MemoryTest, self).__init__()
self.predict_col = hp.predict_col
self.n_datasamples = hp.n_datasamples
self.dataset = hp.dataset
if self.dataset is 'rand':
self.seq_len = None
else:
self.seq_len = hp.seq_len
self.hparams = hp
self.learning_rate = hp.learning_rate
self.training_losses = []
self.final_loss = None
self.model = RecurrentModel(1, hp.n_cells, hp.n_layers, celltype=hp.celltype)
def forward(self, input, input_len):
return self.model(input, input_len)
def training_step(self, batch, batch_idx):
x, y, input_len = batch
features_y = self.forward(x, input_len)
loss = F.mse_loss(features_y, y)
mean_absolute_loss = F.l1_loss(features_y, y)
self.training_losses.append(mean_absolute_loss.item())
tensorboard_logs = {'batch/train_loss': loss, 'batch/mean_absolute_loss': mean_absolute_loss}
return {'loss': loss, 'batch/mean_absolute_loss': mean_absolute_loss, 'log': tensorboard_logs}
def on_epoch_end(self):
train_loss_mean = np.mean(self.training_losses)
self.final_loss = train_loss_mean
self.logger.experiment.log_metric('epoch/mean_absolute_loss', y=train_loss_mean, x=self.current_epoch)
self.training_losses = [] # reset for next epoch
def on_train_end(self):
self.logger.experiment.log_text('network/final_loss', str(self.final_loss))
def configure_optimizers(self):
return torch.optim.SGD(self.parameters(), lr=self.learning_rate)
@pl.data_loader
def train_dataloader(self):
train_dataset = dg.RandomDataset(self.predict_col, self.n_datasamples)
if self.dataset == 'rand_fix':
train_dataset = dg.RandomDatasetFix(self.predict_col, self.n_datasamples, self.seq_len)
if self.dataset == 'correlated':
train_dataset = dg.CorrelatedDataset(self.predict_col, self.n_datasamples)
train_loader = DataLoader(dataset=train_dataset, batch_size=1)
return train_loader
@staticmethod
def add_model_specific_args(parent_parser):
# MODEL specific
model_parser = ArgumentParser(parents=[parent_parser])
model_parser.add_argument('--learning_rate', default=1e-2, type=float)
model_parser.add_argument('--n_layers', default=1, type=int)
model_parser.add_argument('--n_cells', default=5, type=int)
model_parser.add_argument('--celltype', default='LSTM', type=str)
# training specific (for this model)
model_parser.add_argument('--epochs', default=500, type=int)
model_parser.add_argument('--patience', default=200, type=int)
model_parser.add_argument('--min_delta', default=0.01, type=float)
# data specific
model_parser.add_argument('--n_datasamples', default=1000, type=int)
model_parser.add_argument('--seq_len', default=10, type=int)
model_parser.add_argument('--dataset', default='rand', type=str)
model_parser.add_argument('--predict_col', default=2, type=int)
return model_parser
def main(hparams):
neptune_logger = NeptuneLogger(
project_name="dunrar/bachelor-thesis",
params=vars(hparams),
)
early_stopping = EarlyStopping('batch/mean_absolute_loss', min_delta=hparams.min_delta, patience=hparams.patience)
model = MemoryTest(hparams)
trainer = pl.Trainer(logger=neptune_logger,
gpus=hparams.cuda,
val_percent_check=0,
early_stop_callback=early_stopping,
max_epochs=hparams.epochs)
trainer.fit(model)
Expected behavior
Epochs should not be logged without my explicit instruction. Also there should be no error when running that code.
|
hparams need to allow None values
|
[
"help wanted",
"question"
] |
π Bug
I can't set hparams.gpus to None:
I0318 11:12:45.466972 15554 lightning_utils.py:182] <class '__main__.ResnetLightningExample'> hparams: Namespace(amp_level='O2', backend='', batch_size=16, debug_print_env=False, debug_skip_loaded_hparams_check=False, do_test=False, early_stop_metric='val_loss', early_stop_mode='min', early_stop_patience=10, enable_batch_size_scaling=True, enable_early_stop=False, gpus=None, learning_rate=0.01, max_epochs=1, min_epochs=1, model_load_checkpoint_path='', model_save_path='', nodes=1, use_amp=False)
Traceback (most recent call last):
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/zoox/experimental/estevens/pytorch/lightning_resnet50.py", line 128, in <module>
ResnetLightningExample.init_from_cli(sys.argv[1:]).main()
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/zoox/tflight/lightning_utils/lightning_utils.py", line 255, in main
trainer.fit(self)
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/pypi__pytorch_lightning_python3_deps/pytorch_lightning/trainer/trainer.py", line 630, in fit
self.run_pretrain_routine(model)
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/pypi__pytorch_lightning_python3_deps/pytorch_lightning/trainer/trainer.py", line 748, in run_pretrain_routine
self.logger.log_hyperparams(ref_model.hparams)
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/pypi__pytorch_lightning_python3_deps/pytorch_lightning/loggers/base.py", line 18, in wrapped_fn
fn(self, *args, **kwargs)
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/pypi__pytorch_lightning_python3_deps/pytorch_lightning/loggers/tensorboard.py", line 113, in log_hyperparams
exp, ssi, sei = hparams(params, {})
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/pypi__torch_python3_deps/torch/utils/tensorboard/summary.py", line 156, in hparams
raise ValueError('value should be one of int, float, str, bool, or torch.Tensor')
ValueError: value should be one of int, float, str, bool, or torch.Tensor
To Reproduce
To reproduce, have parser.add_argument('--gpus', default=None, type=str) and then don't give the --gpus CLI argument.
Expected behavior
Screen out None values before sending them on.
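A sketch of the kind of screening meant here (the helper name is made up, not an existing Lightning function):
from argparse import Namespace

def drop_none_values(params: dict) -> dict:
    # TensorBoard's hparams() only accepts int, float, str, bool or torch.Tensor,
    # so None entries are filtered out (they could also be stringified instead)
    return {k: v for k, v in params.items() if v is not None}

hparams = Namespace(gpus=None, learning_rate=0.01)
clean_params = drop_none_values(vars(hparams))  # {'learning_rate': 0.01}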
PyTorch Version (e.g., 1.0): 1.4.0
OS (e.g., Linux): Ubuntun 14.04
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.6.8
CUDA/cuDNN version: 10.0
GPU models and configuration: 2080 Ti
Any other relevant information:
|
Refactor fit/train/run_pretrain_routine/evaluate/test
|
[
"discussion"
] |
Right now, a lot of model setup happens in fit and run_pretrain_routine. However, that model setup needs to happen before we run evaluate, test, etc., so we end up calling fit even when we want to do testing. This makes the code hard to follow.
We should refactor model setup out into its own method that fit, train, test, etc. all call to make things a little more clear. I need to do some more work to figure out what the current state is and what the new state we should aim for is, but I wanted to get an issue created to track this.
|
Early stopping not working on 0.7.1
|
[
"bug",
"help wanted"
] |
Bug
Early stopping does not work anymore. When I downgrade from 0.7.1 or the current dev version to 0.6.0 early stopping works again, with the same code.
Code sample
def main(hparams):
if hparams.early_stopping == 'yes':
early_stopping = EarlyStopping(
monitor='batch/mean_absolute_loss',
min_delta=hparams.min_delta,
patience=hparams.patience,
mode='min'
)
else:
early_stopping = False
model = MemoryTest(hparams)
trainer = pl.Trainer(
val_percent_check=0,
early_stop_callback=early_stopping,
default_save_path=src.settings.LOG_DIR,
max_epochs=hparams.epochs
)
trainer.fit(model)
class MemoryTest(pl.LightningModule):
# Main Testing Unit for Experiments on Recurrent Cells
def __init__(self, hp):
super(MemoryTest, self).__init__()
self.predict_col = hp.predict_col
self.n_datasamples = hp.n_datasamples
self.dataset = hp.dataset
        if self.dataset == 'rand':
self.seq_len = None
else:
self.seq_len = hp.seq_len
self.hparams = hp
self.learning_rate = hp.learning_rate
self.training_losses = []
self.final_loss = None
self.model = RecurrentModel(1, hp.n_cells, hp.n_layers, celltype=hp.celltype)
def forward(self, input, input_len):
return self.model(input, input_len)
def training_step(self, batch, batch_idx):
x, y, input_len = batch
features_y = self.forward(x, input_len)
loss = F.mse_loss(features_y, y)
mean_absolute_loss = F.l1_loss(features_y, y)
self.training_losses.append(mean_absolute_loss.item())
neptune_logs = {'batch/train_loss': loss, 'batch/mean_absolute_loss': mean_absolute_loss}
return {'loss': loss, 'batch/mean_absolute_loss': mean_absolute_loss, 'log': neptune_logs}
def on_epoch_end(self):
train_loss_mean = np.mean(self.training_losses)
self.final_loss = train_loss_mean
self.training_losses = [] # reset for next epoch
def configure_optimizers(self):
return torch.optim.SGD(self.parameters(), lr=self.learning_rate)
@pl.data_loader
def train_dataloader(self):
train_dataset = dg.RandomDataset(self.predict_col, self.n_datasamples)
if self.dataset == 'rand_fix':
train_dataset = dg.RandomDatasetFix(self.predict_col, self.n_datasamples, self.seq_len)
if self.dataset == 'correlated':
train_dataset = dg.CorrelatedDataset(self.predict_col, self.n_datasamples)
train_loader = DataLoader(dataset=train_dataset, batch_size=1)
return train_loader
@staticmethod
def add_model_specific_args(parent_parser):
# MODEL specific
model_parser = ArgumentParser(parents=[parent_parser])
model_parser.add_argument('--learning_rate', default=1e-2, type=float)
model_parser.add_argument('--n_layers', default=1, type=int)
model_parser.add_argument('--n_cells', default=5, type=int)
model_parser.add_argument('--celltype', default='LSTM', type=str)
# training specific (for this model)
model_parser.add_argument('--epochs', default=500, type=int)
model_parser.add_argument('--patience', default=5, type=int)
model_parser.add_argument('--min_delta', default=0.1, type=float)
model_parser.add_argument('--early_stopping', default='yes', type=str)
# data specific
model_parser.add_argument('--n_datasamples', default=1000, type=int)
model_parser.add_argument('--seq_len', default=10, type=int)
model_parser.add_argument('--dataset', default='rand', type=str)
model_parser.add_argument('--predict_col', default=1, type=int)
return model_parser
Expected behavior
Early-stopping to take effect again.
|
Logging the learning rate
|
[
"feature",
"help wanted",
"discussion"
] |
Hey,
I think it would be a cool feature to add a flag enabling the logging of the learning rate(s).
Thanks for your amazing work!
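In the meantime, a minimal sketch of how this can be done with a callback (hypothetical class name; it reads the learning rate straight from the trainer's optimizers and forwards it to whatever logger the Trainer was configured with):
from pytorch_lightning.callbacks import Callback

class LearningRateLoggerCallback(Callback):
    """Hypothetical callback that logs the learning rate of every optimizer/param group."""
    def on_batch_end(self, trainer, pl_module):
        lrs = {}
        for opt_idx, optimizer in enumerate(trainer.optimizers):
            for group_idx, group in enumerate(optimizer.param_groups):
                lrs['lr/optimizer_{}/group_{}'.format(opt_idx, group_idx)] = group['lr']
        # Forward the values to the configured logger, keyed by the global step.
        if trainer.logger is not None:
            trainer.logger.log_metrics(lrs, step=trainer.global_step)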
|
Allow custom scatter function in data parallel
|
[
"help wanted",
"question",
"won't fix",
"strategy: dp"
] |
Feature
Allow custom scatter function to be passed in data parallel module.
Motivation
Is there a way to customize the scattering process in data parallel? My use case is that I have sparse tensors represented in COO format; they cannot be stored in a single tensor and instead require a list. Because of this, the built-in scatter cannot split the list properly: it tries to iterate over the list and split each of its items.
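For illustration, a rough sketch of the general mechanism in plain PyTorch (not how Lightning wires DP internally): subclass DataParallel and override scatter, assuming the module takes a single positional argument that is a list of per-sample sparse tensors and at least one sample per device.
import torch
from torch.nn.parallel import DataParallel

class ListScatterDataParallel(DataParallel):
    """Hypothetical DataParallel with a custom scatter for list-valued inputs."""
    def scatter(self, inputs, kwargs, device_ids):
        (sparse_list,) = inputs  # assumption: one positional arg, a list of COO tensors
        n = len(device_ids)
        # Split the list round-robin and move each chunk to its target device.
        chunks = [
            [t.to(torch.device('cuda', dev)) for t in sparse_list[i::n]]
            for i, dev in enumerate(device_ids)
        ]
        # Each replica receives its chunk as the sole positional argument; kwargs are
        # simply replicated here (assumed device-agnostic in this sketch).
        return [(chunk,) for chunk in chunks], [dict(kwargs) for _ in chunks]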
|
update docs to recommend __call__ for forward passes
|
[
"docs"
] |
Documentation
We should update the docs to recommend usage of self(x) for calculating the forward pass rather than self.forward(x). Calling forward() directly can cause issues when you're using PyTorch model hooks (e.g. see the additional logic in nn.Module.__call__).
Although most people don't play around with hooks and using self.forward() will work fine for them, we should probably follow the best practice (as I understand it) of using __call__ for calculating forward passes.
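A tiny illustration of the difference (plain PyTorch, nothing Lightning-specific):
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(8, 4)

y = model(x)          # preferred: goes through nn.Module.__call__, so hooks fire
y = model.forward(x)  # computes the same output, but bypasses the hook machinery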
Related issues: #632
|
Dataloader starving the gpu
|
[
"bug",
"help wanted",
"won't fix"
] |
Hey,
Thank you for this amazing library!
I'm using pytorch_lightning to train a segmentation model on 3D images. Augmentation of these images is quite slow, mostly because I do full-volume elastic transforms, which take ~2s per image on a single CPU.
I'm running a large U-Net with precision 16 and amp_level 'O2', with PyTorch 1.4 and pytorch-lightning 0.7.1.
I noticed that these augmentations lead to under-usage of the GPU, which seems to be waiting for the data: not surprising, since my augmentation is slow. But in fact, the time it takes to run an epoch is (much) larger than the time to run an epoch without augmentation plus the time to do the augmentation.
To measure that, I changed my dataset so that it 'preloads' all the augmentations before the epoch starts, i.e. loads into RAM the tensors ready to be fed to the network, and then just returns these tensors in the __getitem__ method of my dataset. In this setting, I'm roughly 50% faster than the original setup, where the augmentation happens in __getitem__!
Is this expected behaviour? Are the dataloaders pre-computing __getitem__ so that the GPU is fed as fast as possible? More generally, when augmentation is slow, what do you advise to best feed the GPU?
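For reference, a minimal sketch of the standard multi-worker dataloader setup this question is about (the numbers are arbitrary):
from torch.utils.data import DataLoader

# Each worker runs __getitem__ (and therefore the elastic transform) in parallel,
# so new batches are augmented while the GPU is busy with the previous step.
train_loader = DataLoader(
    train_dataset,      # assumed to do the augmentation inside __getitem__
    batch_size=2,
    num_workers=8,      # arbitrary: enough workers to hide the ~2 s per-image transform
    pin_memory=True,    # faster host-to-GPU copies
)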
Additional question: I couldn't make the pytorch_lightning profiler work. I tried profiler=True in the Trainer declaration, and also setting it to BaseProfiler or AdvancedProfiler, but nothing seems to happen. Is this the right way to use it?
Best,
Maxime
|
tensorboard hyperparameters don't update
|
[
"bug",
"help wanted"
] |
Bug
Given two sets of HPARAMS, h_1 and h_2 where h_1 is a strict subset of h_2.
If you run pytorch lightning with parameters h_1, then h_2, the additional parameters from h_2 are not shown in tensorboard
If you run pytorch lightning with parameters h_2, then h_1, the missing parameters from h_1 are shown empty in tensorboard
Case 2 is fine, Case 1 is not.
I already raised this with TensorBoard but was directed back here.
To Reproduce
Run code
start tensorboard --logdir=lightning_logs in same directory
Go to HPARAMS in website
See only layer_1_dim
Code sample
import pytorch_lightning as pl
from argparse import ArgumentParser
import torch
class LitMNIST(pl.LightningModule):
def __init__(self, hparams):
super(LitMNIST, self).__init__()
self.hparams = hparams
self.layer_1 = torch.nn.Linear(28 * 28, self.hparams.layer_1_dim)
def forward(self, *args, **kwargs):
pass
if __name__ == '__main__':
parser = ArgumentParser()
parser.add_argument('--layer_1_dim', type=int, default=10)
args = parser.parse_args()
# print(args)
## > Namespace(layer_1_dim=10)
model = LitMNIST(hparams=args)
trainer = pl.Trainer()
try:
trainer.fit(model)
except:
pass
parser = ArgumentParser()
parser.add_argument('--layer_1_dim', type=int, default=10)
parser.add_argument('--another_hyperparameter', type=int, default=10)
args = parser.parse_args()
# print(args)
## > Namespace(another_hyperparameter=10, layer_1_dim=10)
model = LitMNIST(hparams=args)
trainer = pl.Trainer()
try:
trainer.fit(model)
except:
pass
Change that "solves" the problem: First call net with both parameters
import pytorch_lightning as pl
from argparse import ArgumentParser
import torch
class LitMNIST(pl.LightningModule):
def __init__(self, hparams):
super(LitMNIST, self).__init__()
self.hparams = hparams
self.layer_1 = torch.nn.Linear(28 * 28, self.hparams.layer_1_dim)
def forward(self, *args, **kwargs):
pass
if __name__ == '__main__':
parser = ArgumentParser()
parser.add_argument('--layer_1_dim', type=int, default=10)
parser.add_argument('--another_hyperparameter', type=int, default=10)
args = parser.parse_args()
# print(args)
## > Namespace(another_hyperparameter=10, layer_1_dim=10)
model = LitMNIST(hparams=args)
trainer = pl.Trainer()
try:
trainer.fit(model)
except:
pass
parser = ArgumentParser()
parser.add_argument('--layer_1_dim', type=int, default=10)
args = parser.parse_args()
# print(args)
## > Namespace(layer_1_dim=10)
model = LitMNIST(hparams=args)
trainer = pl.Trainer()
try:
trainer.fit(model)
except:
pass
Expected behavior
Run code
start tensorboard --logdir=lightning_logs in same directory
Go to HPARAMS in website
See layer_1_dim and another_hyperparameter
but another_hyperparameter empty in version0
Hackaround:
Run the second code sample. The trick is to call the net with all hyperparameters first; TensorBoard then picks up another_hyperparameter
Environment
PyTorch Version (e.g., 1.0): py3.7_cuda101_cudnn7_0
OS (e.g., Linux): Win10
How you installed PyTorch: conda
Build command you used (if compiling from source): -
Python version: 3.7.0
CUDA/cuDNN version: 10.1
GPU models and configuration: GTX1650
Any other relevant information:
conda list for pytorch: pytorch-lightning 0.7.1 pypi_0 pypi
|
WandbLogger does not log test results
|
[
"bug",
"help wanted",
"logger"
] |
Bug
The WandbLogger does not log test results when testing happens right after training.
To Reproduce
When running the MNIST example with the WandbLogger like in the following snippet, the test results do not get logged to wandb because it syncs before testing starts:
Code sample
...
trainer = pl.Trainer(logger=WandbLogger(...))
trainer.fit(model)
trainer.test()
Expected behavior
WandbLogger should also log the test results by default like TensorBoardLogger or TestTubeLogger to be a real alternative for the other loggers.
Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: version 3.10.2
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[conda] pytorch-lightning 0.7.1 pypi_0 pypi
[conda] torch 1.4.0 pypi_0 pypi
[conda] torchvision 0.5.0 pypi_0 pypi
|
Save checkpoint under the lightning_logs/version_X/ directory
|
[
"duplicate",
"help wanted",
"question"
] |
Bug
After running training the output file structure looks like
epoch=9_vl_val_loss=10.10.ckpt
lightning_logs/
├── version_0
│   ├── events.out.tfevents.1585053395.dltn.22357.0
│   └── meta_tags.csv
but the expected file structure looks like
lightning_logs/
├── version_0
│   ├── events.out.tfevents.1585053395.dltn.22357.0
│   ├── meta_tags.csv
│   └── epoch=9_vl_val_loss=10.10.ckpt
To Reproduce
Steps to reproduce the behavior:
Use PyTorch 1.4 and PL 0.7.1
Run the following snippet "checkpoint_demo.py"
Code sample
#!/usr/bin/env python
"""checkpoint_demo.py"
from torch.utils import data
import torch
import torch.nn as nn
import torch.optim as optim
from pytorch_lightning import Trainer
from pytorch_lightning import LightningModule
from pytorch_lightning.callbacks import ModelCheckpoint
class ConstantDataset(data.Dataset):
def __len__(self): return 6
def __getitem__(self, idx):
c = torch.tensor(7.0, dtype=torch.float)
return c, c
class CheckpointDemo(LightningModule):
def __init__(self):
super(CheckpointDemo, self).__init__()
self.linear = nn.Linear(1, 1)
@staticmethod
def createModelCheckpoint():
return ModelCheckpoint(monitor='val_loss', mode='min',
filepath='./{epoch}_vl_{val_loss:.2f}',
# filepath='{epoch}_vl_{val_loss:.2f}', # if just filename it raises exception
# "/workspace/oplatek/code/.../venv/lib/python3.6/site-packages/pytorch_lightning/callbacks/model_checkpoint.py",
# os.makedirs(self.dirpath, exist_ok=True)
# File "/workspace/bin/anaconda3/lib/python3.6/os.py", line 220, in makedirs
# mkdir(name, mode)
# FileNotFoundError: [Errno 2] No such file or directory: ''
save_weights_only=False,
verbose=True)
def forward(self, x):
return self.linear(x)
def train_dataloader(self):
return data.DataLoader(ConstantDataset(), batch_size=1)
def val_dataloader(self):
return data.DataLoader(ConstantDataset(), batch_size=1)
def configure_optimizers(self):
return optim.Adam(self.parameters(), lr=1.0)
def validation_epoch_end(self, outputs):
val_loss = torch.stack([o['val_loss'] for o in outputs]).mean()
return {'val_loss': val_loss, 'log': {'val_loss': val_loss}}
def training_step(self, batch, batch_idx):
x, y = batch
return {f'loss': torch.nn.functional.mse_loss(self.forward(x), y)}
def validation_step(self, batch, batch_idx):
return {f'val_loss': torch.tensor(10 + (1 / (self.current_epoch + 1)))}
if __name__ == "__main__":
model = CheckpointDemo()
trainer = Trainer(max_epochs=10, checkpoint_callback=CheckpointDemo.createModelCheckpoint())
trainer.fit(model)
|
How to log hparams to Tensorboard?
|
[
"question"
] |
Hello! I'm trying to view my hparams in TensorBoard, but can't actually see them there. As I understood from the documentation, to log hparams one should assign self.hparams in the __init__ of the LightningModule. Here's what I'm doing:
class MyModule(pl.LightningModule):
def __init__(self,hparams):
super().__init__()
self.hparams = Namespace(**{'learning_rate': hparams.learning_rate,
'batch_size':hparams.batch_size,
'normalize_embeddings':hparams.normalize_embeddings,
'normalize': hparams.normalize,
'k_neighbors':hparams.k_neighbors,
'melspec_dir':hparams.melspec_dir})
My hparams also contain Trainer hparams, so to log only the right ones I wrote this workaround. Even so, I could not see any of my hparams. What could be the problem?
|
data preprocess tool using hdf5 or tfrecord
|
[
"feature",
"help wanted",
"won't fix",
"design",
"3rd party"
] |
Feature
A subpackage or tool using hdf5 or tfrecord to preprocess data into one single file.
Motivation
In some fields like ASR or CV, just using the plain PyTorch dataloader can cost speed, because the data is processed online (e.g. computing fbank features for ASR, or heavy transforms for CV). HDF5 or TFRecord can be a good choice to avoid the IO and CPU bottlenecks, and I think it would be very helpful if our project had a subpackage or tool to do that, for both writing and reading. texar-pytorch has such a function, see:
https://texar-pytorch.readthedocs.io/en/latest/code/data.html#recorddata
Also, the dataloader utilities should be adapted to this, because it may require an iterable dataset together with num_workers > 0 in the dataloader, and the missing dataset length can be a problem for the training process.
Pitch
The link above can serve as an example, but there is still a need for writing and loading variable-length processed features (tensor shape like [1, sequence_length, feature_dim]) with HDF5 (this can be a little complex).
I tried to write a little tool for this purpose: https://github.com/tongjinle123/tfrecord_builder
but when I used it in our project some months ago, I found it hard to use directly, because working with an iterable dataset is awkward.
There are also some good existing tools for this purpose, like https://github.com/vahidk/tfrecord
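As a rough sketch of what the reading side could look like under these assumptions (h5py, one variable-length feature array stored per example under keys '0', '1', ...; not a finished tool):
import h5py
import torch
from torch.utils.data import Dataset

class HDF5FeatureDataset(Dataset):
    """Hypothetical map-style dataset over pre-processed variable-length features."""
    def __init__(self, path):
        self.path = path
        with h5py.File(path, 'r') as f:
            self.length = len(f)  # one HDF5 dataset per example, keys '0'..'N-1'
        self.file = None  # opened lazily so each dataloader worker gets its own handle

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        if self.file is None:
            self.file = h5py.File(self.path, 'r')
        feat = self.file[str(idx)][()]  # e.g. shape [sequence_length, feature_dim]
        return torch.from_numpy(feat)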
Alternatives
Additional context
It would be very helpful if the project took this into consideration; please forgive my English :)
I hope I have expressed my idea clearly.
|
AdvancedProfiler error
|
[
"bug",
"help wanted"
] |
Hi, as others have pointed out, the profiler doesn't seem to work (it prints nothing). Trying out the AdvancedProfiler as in https://pytorch-lightning.readthedocs.io/en/latest/profiler.html, like this:
from pytorch_lightning.profiler import AdvancedProfiler
profiler = AdvancedProfiler(output_filename="prof.txt")
trainer = Trainer(profiler=profiler, ...)  # other params here
gives me the following error:
Validation sanity check: 50it [00:00, 212.11it/s] Traceback (most recent call last):
File "/Users/sdumitre/work/style/training.py", line 177, in <module>
main(hparams)
File "/Users/sdumitre/work/style/training.py", line 77, in main
trainer.fit(model)
File "/Users/sdumitre/virtual/p3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 630, in fit
self.run_pretrain_routine(model)
File "/Users/sdumitre/virtual/p3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 810, in run_pretrain_routine
_, _, _, callback_metrics, _ = self.process_output(eval_results)
File "/Users/sdumitre/virtual/p3/lib/python3.7/site-packages/pytorch_lightning/trainer/logging.py", line 117, in process_output
callback_metrics[k] = v.item()
ValueError: only one element tensors can be converted to Python scalars
Process finished with exit code 1
Any pointers?
My env: torch 1.4 installed with pip, Python 3.7, no GPU, on macOS.
Thanks for the great lib you're developing!
|
Validation progress bar with metrics
|
[
"feature",
"help wanted",
"good first issue",
"let's do it!"
] |
Feature
Logging validation metrics throughout the duration of validation (e.g., current batch loss/acc or average batch loss/acc so far).
Motivation
If the validation set is large, it'd help to know within a couple of iterations whether I've loaded the wrong checkpoint or loaded a checkpoint in the wrong way, rather than waiting to run through the entire validation set.
Pitch
I want to be able to see basically the same progress bar during training as I do during validation/testing.
Alternatives
Not sure of any alternatives - this is quite helpful for debugging and is probably a not-too-large change since training already supports this feature.
|
Allow callbacks to access internal variables of training loop
|
[
"feature",
"help wanted",
"question",
"won't fix",
"let's do it!"
] |
Feature
Internal variables (batch, predictions, etc) of the training loop (training + validation step) should be made transparent to callbacks. Right now, there is no way to access these internal variables of the training loop through callbacks without making them as attributes of lightning module. This doesn't sound optimal as it pollutes the lightning module with non-essential code.
Motivation
Use case: Visualize images and predictions from a training batch. Right now, there are two ways:
Add a log method in pl module and call this from training_step method of pl module. By doing this, we are essentially polluting pl module with non-essential code.
Write a visualization callback. As of now, a callback has access to the pl module and trainer, but it still can't access the variables (images, predictions, etc.) in the training step. We can make these variables attributes of the pl module, but then updating them in every training step (so that the callback can access them) would also count as "non-essential code", which defeats the point of the callback. It also spoils the neat separation between callbacks and the pl module, since we'd be caching/updating attributes in the pl module even when the callback is switched off.
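A sketch of what the requested hook could look like (a hypothetical signature, not the current API): the trainer would hand the batch and the step outputs straight to the callback.
from pytorch_lightning.callbacks import Callback

class VisualizationCallback(Callback):
    """Hypothetical callback that receives the batch and step outputs directly."""
    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        images, targets = batch                 # assumes an (images, targets) batch
        preds = outputs['preds']                # assumes training_step returned 'preds'
        # Assuming a TensorBoard-backed logger, log a sample of images now and then.
        if trainer.logger is not None and batch_idx % 100 == 0:
            trainer.logger.experiment.add_images('train/images', images, trainer.global_step)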
|
add TPU tests
|
[
"feature",
"help wanted",
"good first issue",
"ci",
"accelerator: tpu"
] |
Feature
we shall also cover TPU usage as we are supporting it
Motivation
now all changes are tested for GPUs and CPU but we do not have a check for TPU yet
Pitch
getting coverage back to ~99%
|
[metrics] Automatic reduction of metrics from several validation steps
|
[
"feature",
"help wanted",
"discussion"
] |
Feature
As per the slack, it could be cool to implement this. More detail below.
Motivation
To avoid the user having to do this
logits = torch.cat([x['logits'] for x in output])
labels = torch.cat([x['labels'] for x in output])
and so on ...
Pitch
Something like this:
def collate_metrics(self, output):
"""
Function to collate the output from several validation steps
"""
collated_output = {}
keys = output[0].keys()
for key in keys:
tensor_dim = output[0][key].dim()
if tensor_dim > 0:
collated_output[key] = torch.cat([x[key] for x in output])
elif tensor_dim == 0:
# Reduce scalars by mean
collated_output[key] = torch.tensor([x[key] for x in output]).mean()
return collated_output
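For context, a hedged example of how such a helper might be called from validation_epoch_end (it assumes each validation_step returned a scalar 'val_loss' among its keys):
def validation_epoch_end(self, outputs):
    collated = self.collate_metrics(outputs)   # helper sketched above
    val_loss = collated['val_loss']            # scalar entries were mean-reduced
    return {'val_loss': val_loss, 'log': {'val_loss': val_loss}}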
Alternatives
Can just add the above to lightning module and use it anyway.
|
Validation every epoch with non-finite dataloader
|
[
"feature",
"help wanted"
] |
Feature
Providing a way to do validation every epoch with non-finite (__len__ not implemented) dataloaders.
Motivation
Doing validation every epoch is a natural choice, and with a finite dataloader you can do it easily by setting val_check_interval=1.0. However, with a non-finite dataloader you cannot set val_check_interval to a float. There's no simple way to work around this yet.
Pitch
There can be several way to make it happen.
One solution off the top of my head is to allow val_check_interval to be None. If it is None, just run validation according to check_val_every_n_epoch at the end of every epoch.
Alternatives
Alternatively, allow the user to set val_check_interval to 1.0 even with non-finite dataloaders; anything below 1.0 would remain invalid, with 1.0 as the only valid float. If it is 1.0, run validation according to check_val_every_n_epoch at the end of every epoch.
Additional context
Not all dataloaders without __len__ implemented are infinite. Some simply cannot determine their length in advance. With these dataloaders the concept of an 'epoch' is still valid, and PyTorch Lightning should serve this kind of dataloader better.
|
Support for passing dataloader to trainer.test()
|
[
"duplicate",
"feature",
"help wanted"
] |
Feature
dl = DataLoader(...)
trainer = Trainer(...)
trainer.test(model, dl)
Motivation
In most of my cases, I have a fixed training/validation dataset, but collaborators send me different datasets to evaluate my model on. In that case I'd rather not modify the code inside the CoolSystem. I prefer to build different dataloaders for different collaborators in an evaluation.py, which sounds more reasonable than putting unseen test data into the training process.
I found there are similar features in trainer.fit(). I am wondering if it is possible to have this feature for trainer.test().
Current Solution
My current solution follows a complex way but wish to have a feature to simplify it.
class MyCoolSystem(pl.LightningModule):
    ...

class CoolSystemForDataA(MyCoolSystem):
    def test_dataloader(self):
        # Overridden test dataloader for this collaborator's data.
        ...
|
better checking of data returned from training_step
|
[
"feature",
"good first issue",
"won't fix"
] |
Feature
let's add more validation checks on what's returned from training_step and provide the user with useful error messages when they're not returning the right values.
Motivation
I feel like I've seen a lot of users confused about what they're supposed to return in training_step and validation_step. Additionally, I don't think we document very well how we treat extra keys as "callback metrics".
Pitch
What do you think about adding some structure and validation for the Trainer's process_output method?
Right now, we have expectations about a set of keys {progress_bar, log, loss, hiddens} and assume everything else is a callback metric. However, this is a silent assumption.
We could instead enforce a more rigid structure:
{
'loss': loss # REQUIRED
'log': {} # optional dict
'progress_bar': {} # optional dict
'hiddens': [h0, c0] # optional collection of tensors
'metrics': {} # optional dict
}
moreover, we can leverage pydantic to do validation automatically and provide useful error message out of the box when data validation fails.
cc @PyTorchLightning/core-contributors
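A sketch of how pydantic could back this validation (hypothetical schema, pydantic v1 style; field names follow the structure above):
from typing import Any, Dict, List, Optional
import torch
from pydantic import BaseModel, validator

class TrainingStepOutput(BaseModel):
    """Hypothetical schema for the dict returned by training_step."""
    class Config:
        arbitrary_types_allowed = True  # allow torch.Tensor fields

    loss: torch.Tensor                      # REQUIRED
    log: Dict[str, Any] = {}                # optional dict
    progress_bar: Dict[str, Any] = {}       # optional dict
    hiddens: Optional[List[torch.Tensor]] = None
    metrics: Dict[str, Any] = {}

    @validator('loss')
    def loss_must_be_scalar(cls, v):
        if v.dim() != 0:
            raise ValueError('loss must be a scalar tensor')
        return v

# Raises a readable ValidationError if training_step returned something malformed.
checked = TrainingStepOutput(loss=torch.tensor(0.5), log={'train_loss': 0.5})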
Alternatives
Do nothing, keep things as they are.
Additional context
This would be a backwards incompatible change.
|
Neptune.ai logger slow, lags behind training
|
[
"bug",
"help wanted",
"logger"
] |
Bug
When running a script that trains multiple models one after another, I came across the problem that the neptune.ai logger lags behind my training quite severely (when the model has finished training, the logger is only about halfway there). This would not be such a big problem if the next model started training while the logger was still processing the logs of the previous model, but training continues only after the logger has finished.
I tried logging fewer things, only logging metrics per epoch, and disabling hardware monitoring by uninstalling psutil. I don't know if this is really a bug, but even if it is not, is there a way to bypass/mitigate this? Could it be related to the Windows timing error from #1186, @jakubczakon?
To Reproduce
Steps to reproduce the behavior:
Use the neptune.ai logger (with Windows)
Log a metric
Expected behavior
The logger should keep up with training or continue logging previous models while another model is already being trained.
|
incorrect run on the test set with overwritten validation_end and test_epoch_end
|
[
"bug",
"help wanted"
] |
Bug
If I override validation_end and test_epoch_end, TrainerEvaluationLoopMixin.evaluate works incorrectly on the test set
Suppose we override validation_epoch_end and test_end, but not validation_end and test_epoch_end. (I actually did this since I am a newbie and haven't yet figured out how everything works; also it seems validation_end is the same as validation_epoch_end, and test_end seems to be the same as test_epoch_end.) Suppose I run trainer.test(model). Consider lines 300-312 in evaluation_loop.py. Then we have (test_mode and self.is_overriden('test_end', model=model)) == True, so the first if block is executed, that is eval_results = model.test_end(outputs). But look at the second if and its elif. We have (test_mode and self.is_overriden('test_epoch_end', model=model)) == False, hence the elif of the second if will also be executed, that is eval_results = model.validation_epoch_end(outputs). And we will have validation results recorded as test results, which is a mistake.
This problem is present in commit 60b8246. The inverse problem (which happens if we override only test_epoch_end and validation_end) is present in 0.7.1.
|
model summarize can not log to stdout
|
[
"question"
] |
pytorch-lightning/pytorch_lightning/core/lightning.py
Line 1446
in
ac6692d
log.info('\n' + model_summary.__str__())
During training, the default model summarize function does not log the model summary information to stdout. I guess it is because the logger is not referring to the actual logger.
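One workaround sketch: explicitly route the library logger's INFO records to stdout with standard Python logging (the logger name 'lightning' is an assumption in this sketch):
import logging
import sys

lightning_logger = logging.getLogger('lightning')
lightning_logger.setLevel(logging.INFO)
lightning_logger.addHandler(logging.StreamHandler(sys.stdout))  # send summary lines to stdout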
|
bug(tqdm): creating multiple rows after 70%
|
[
"bug",
"help wanted"
] |
Bug
When I run a training loop, each epoch the tqdm bar renders fine until roughly 70%, and then it creates additional rows until the epoch finishes:
Expected behavior
No new tqdm rows
Environment
cuda:
GPU:
GeForce GTX 1080 Ti
available: True
version: 10.1
packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.7.1
tensorboard: 2.1.0
tqdm: 4.43.0
system:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.6
version: #1 SMP Fri Dec 6 15:49:49 UTC 2019
Additional context
Running this using PyCharm, not directly on the terminal.
This seems to happen directly in the terminal as well, in tmux
|