title | labels | bodyText
---|---|---|
Early Stopping Callback not working
|
[
"help wanted"
] |
🐛 Bug
See #2038. Early stopping stopped too early when using Lightning 0.7.7.dev0 (only on a Slurm cluster, not locally, but I might have been using slightly different Lightning versions). After upgrading to current master, early stopping does not work anymore, at all. I had a similar problem because of not having a val_step (see #1201), maybe something like that crept back in.
To Reproduce
See #2038 but use current master
|
Suggestion to add the default interval of scheduler in the documentation
|
[
"help wanted",
"good first issue",
"docs"
] |
📚 Documentation
The default interval of the scheduler is per epoch. However, this is not explicitly mentioned in the documentation; I had to dig into the code to figure it out.
pytorch-lightning/pytorch_lightning/trainer/optimizers.py
Line 86
in
0914873
default_config = {'interval': 'epoch', # default every epoch
I understand it is sort of common sense for people who are already familiar with training networks in PyTorch, but if Lightning is aiming at a more general audience, then adding this explicitly to the documentation may help people better understand the training pipeline of their network.
Maybe we need to add all the default ("interval", "frequency", "reduce_on_plateau", "monitor") into the documentation explicitly.
By the way, I am not sure whether the default scheduler interval in TF is per step or per epoch, but some popular networks (e.g. DeepLabV3) use step instead of epoch as the default interval for their scheduler.
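As an illustration, here is a hedged sketch of setting the interval explicitly from configure_optimizers inside a LightningModule (the dictionary keys mirror the default_config shown above; exact supported keys may differ by version):
def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.5)
    # override the per-epoch default and step the scheduler every batch instead
    return [optimizer], [{'scheduler': scheduler, 'interval': 'step', 'frequency': 1}]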
|
Hydra MLFlow Clash
|
[
"bug",
"help wanted",
"good first issue",
"logger"
] |
🐛 Bug
When using the MLflow logger with Hydra, the parameters passed to the LightningModule are a DictConfig, so the condition in loggers/base.py is not met.
pytorch-lightning/pytorch_lightning/loggers/base.py
Line 177
in
8211256
if isinstance(input_dict, dict):
To Reproduce
Use Hydra and MLFlow together.
Traceback (most recent call last):
File "/home/siavash/KroniKare/kwae2/kwae_ma/models/pl_train_segmentation_model.py", line 115, in <module>
main()
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/main.py", line 24, in decorated_main
strict=strict,
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/_internal/utils.py", line 174, in run_hydra
overrides=args.overrides,
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 86, in run
job_subdir_key=None,
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/plugins/common/utils.py", line 109, in run_job
ret.return_value = task_function(task_cfg)
File "/home/siavash/KroniKare/kwae2/kwae_ma/models/pl_train_segmentation_model.py", line 111, in main
trainer.fit(wound_seg_pl)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 765, in fit
self.single_gpu_train(model)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 492, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 843, in run_pretrain_routine
self.logger.log_hyperparams(ref_model.hparams)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 275, in log_hyperparams
[logger.log_hyperparams(params) for logger in self._logger_iterable]
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 275, in <listcomp>
[logger.log_hyperparams(params) for logger in self._logger_iterable]
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 10, in wrapped_fn
return fn(*args, **kwargs)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/pytorch_lightning/loggers/mlflow.py", line 105, in log_hyperparams
self.experiment.log_param(self.run_id, k, v)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/mlflow/tracking/client.py", line 206, in log_param
self._tracking_client.log_param(run_id, key, value)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/mlflow/tracking/_tracking_service/client.py", line 177, in log_param
_validate_param_name(key)
File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/mlflow/utils/validation.py", line 120, in _validate_param_name
INVALID_PARAMETER_VALUE)
mlflow.exceptions.MlflowException: Invalid parameter name: ''. Names may be treated as files in certain cases, and must not resolve to other names when treated as such. This name would resolve to '.'
Expected behavior
Check whether the instance is a dict or a DictConfig in the given line.
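A hedged sketch of that check (assumes omegaconf is importable at that point; the real patch may look different):
from omegaconf import DictConfig, OmegaConf

def is_param_dict(input_dict) -> bool:
    # accept an OmegaConf DictConfig as well as a plain dict
    return isinstance(input_dict, (dict, DictConfig))

# both return True:
print(is_param_dict({'lr': 0.001}))
print(is_param_dict(OmegaConf.create({'lr': 0.001})))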
|
Keyboard interrupt launches test but WandbLogger kills the process
|
[
"help wanted",
"question",
"won't fix"
] |
🐛 Bug
I am using WandbLogger, and when I have a run that I want to stop manually with a keyboard interrupt, the model correctly stops training and starts executing the test function. The problem is that at the same time WandbLogger starts uploading the data, and when it is done it kills the process, so the test function is not finished and, of course, that data is not uploaded to the logger.
To Reproduce
Steps to reproduce the behavior:
Execute a run with wandblogger that has (trainer.fit(model); trainer.test())
Ctrl + C
Expected behavior
Execute train -> Ctrl+C -> stop the training -> launch the test -> upload everything to the logger
|
No validation dataset
|
[
"question"
] |
I am just starting to use pytorch_lightning. I have one question regarding the validation set. I may or may not have a validation set during my training. How should the validation_step be structured in this case?
Is it enough to do something like this:
@pl.data_loader
def val_dataloader(self):
if has_valset:
return self.__dataloader(train=False)
return None
So if it returns None, then no validation loop calls will be made?
Or do I need something like:
def validation_step(self, batch, batch_idx):
# Is this check necessary?
if not has_valset:
return
# process the validation batch
|
0.8.0-dev doesn't save hparams.yaml
|
[
"bug",
"help wanted"
] |
🐛 Bug
I installed pytorch-lightning from master (version 0.8.0-dev); the new version doesn't save arguments in hparams.yaml. The old version, 0.7.6, saves them correctly.
Environment
PyTorch Version (e.g., 1.0): 1.5
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): source
Build command you used (if compiling from source):
Python version:
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information:
|
custom training with 0.8.0.dev0 gives import error
|
[
"help wanted",
"question"
] |
Due to another issue and the advice to upgrade to master, I upgraded to 0.8.0.dev0. Now, the same model with no code changes gives the error:
Traceback (most recent call last):
File "/home/luca/project/apps/train_siamnet.py", line 3, in <module>
from ..models.siamnet import SiameseNet
ImportError: attempted relative import with no known parent package
This did not happen before and does not make sense TBH as there is no such invalid import.
After that, it just hangs during the DDP/GPU initialisation phase:
initializing ddp: LOCAL_RANK: 0/3 WORLD_SIZE:4
I am trying to train my model on multiple GPUs and the training code is:
model = SiameseNet(hparams)
if torch.cuda.is_available():
trainer = Trainer(gpus=-1, distributed_backend='ddp')
else:
trainer = Trainer()
trainer.fit(model)
The model def is:
class SiameseNet(pl.LightningModule):
"""
Implement a siamese network as a feature extractor with a Lightning module
"""
def __init__(self,
hparams):
"""
Build the network
"""
super(SiameseNet, self).__init__()
self.net = self._build_net()
self.hparams = hparams
self.train_data_path = hparams.get('train_data_path', None)
self.test_data_path = hparams.get('test_data_path', None)
self.val_data_path = hparams.get('val_data_path', None)
self.train_dataset = None
self.val_dataset = None
self.test_dataset = None
self.lossfn = TripletLoss(margin=1.0)
def forward_once(self, x):
output = self.net(x)
output = torch.squeeze(output)
return output
def forward(self, input1, input2, input3=None):
output1 = self.forward_once(input1)
output2 = self.forward_once(input2)
if input3 is not None:
output3 = self.forward_once(input3)
return output1, output2, output3
return output1, output2
@staticmethod
def _build_net():
net = nn.Sequential(
nn.Conv2d(3, 32,kernel_size=3,stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(32),
nn.Conv2d(32, 64, kernel_size=3, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(64),
nn.Conv2d(64, 128, kernel_size=3, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(128),
nn.Conv2d(128, 256, kernel_size=1, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(256),
nn.Conv2d(256, 256, kernel_size=1, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(256),
nn.Conv2d(256, 512, kernel_size=3, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(512),
nn.Conv2d(512, 1024, kernel_size=1, stride=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(1024))
return net
def prepare_data(self):
transform = torchvision.transforms.Compose([
torchvision.transforms.Resize((128, 128)),
torchvision.transforms.ColorJitter(hue=.05, saturation=.05),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.RandomRotation(20, resample=PIL.Image.BILINEAR),
torchvision.transforms.ToTensor()
])
if self.train_data_path:
train_folder_dataset = dset.ImageFolder(root=self.train_data_path)
self.train_dataset = SiameseTriplet(image_folder_dataset=train_folder_dataset,
transform=transform)
if self.val_data_path:
val_folder_dataset = dset.ImageFolder(root=self.val_data_path)
self.val_dataset = SiameseTriplet(image_folder_dataset=val_folder_dataset)
if self.test_data_path:
test_folder_dataset = dset.ImageFolder(root=self.test_data_path)
self.test_dataset = SiameseTriplet(image_folder_dataset=test_folder_dataset)
def training_step(self, batch, batch_idx):
anchor, positive, negative = batch
anchor_out, positive_out, negative_out = self.forward(anchor, positive, negative)
loss_val = self.lossfn(anchor_out, positive_out, negative_out)
return {'loss': loss_val}
def configure_optimizers(self):
optimizer = optim.Adam(self.parameters(), lr=self.hparams.get('learning_rate', 0.001))
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
return [optimizer], [scheduler]
@pl.data_loader
def train_dataloader(self):
if self.train_dataset:
return DataLoader(self.train_dataset,
self.hparams.get('batch_size', 64),
num_workers=12)
return None
|
Adding NVIDIA-SMI like information
|
[
"feature",
"help wanted",
"good first issue",
"let's do it!"
] |
🚀 Feature
Add the GPU usage information during training.
Motivation
Most of the research is done on HPC. Therefore, if I want to see the GPU RAM and usage of my job, I have to open a secondary screen to run "watch nvidia-smi" or "nvidia-smi dmon".
Having this info saved in the logs will help to:
See if I have space for larger batches
Report the correct resources needed to replicate my experiment.
Pitch
When training starts, report the GPU RAM and the GPU usage together with loss and v_num
Alternatives
After the first epoch is loaded into the GPU, log the GPU RAM and the GPU usage
Additional context
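As additional context, here is a rough sketch of the kind of per-GPU numbers that could be logged (the function name and keys are illustrative, not an existing Lightning API):
import torch

def gpu_memory_stats(device: int = 0) -> dict:
    # memory currently held by tensors vs. memory reserved by the caching allocator
    if not torch.cuda.is_available():
        return {}
    return {
        'allocated_mb': torch.cuda.memory_allocated(device) / 1024 ** 2,
        'reserved_mb': torch.cuda.memory_reserved(device) / 1024 ** 2,
    }

print(gpu_memory_stats())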
|
CUDA error: an illegal memory access was encountered after updating to the latest stable packages
|
[
"help wanted",
"won't fix"
] |
Can anyone help with this CUDA error: an illegal memory access was encountered?
It runs fine for several iterations...
🐛 Bug
Traceback (most recent call last):
File "train_gpu.py", line 237, in <module>
main_local(hparam_trial)
File "train_gpu.py", line 141, in main_local
trainer.fit(model)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 859, in fit
self.single_gpu_train(model)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 503, in single_gpu_train
self.run_pretrain_routine(model)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1015, in run_pretrain_routine
self.train()
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 347, in train
self.run_training_epoch()
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 419, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 604, in run_training_batch
self.batch_loss_value.append(loss)
File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/supporters.py", line 44, in append
x = x.to(self.memory)
RuntimeError: CUDA error: an illegal memory access was encountered
To Reproduce
Environment
CUDA:
- GPU:
- Quadro P6000
- available: True
- version: 10.2
Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.7.6
- tensorboard: 2.2.2
- tqdm: 4.46.1
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.0
- version: #47~18.04.1-Ubuntu SMP Thu May 7 13:10:50 UTC 2020
|
0.8.0-dev hydra changed working directory for other GPUs with DDP
|
[
"help wanted"
] |
🐛 Bug
When I moved from 0.7.6 to 0.8.0-dev for DictConfig support when saving model hparams, I found that the working directory changed for the other GPUs in the DDP setting. My code is modified from huggingface.
Code sample
python: can't open file '/home/joe/summarization/models/bart/outputs/2020-06-05/11-57-25/finetune.py': [Errno 2] No such file or directory
python: can't open file '/home/joe/summarization/models/bart/outputs/2020-06-05/11-57-25/finetune.py': [Errno 2] No such file or directory
python: can't open file '/home/joe/summarization/models/bart/outputs/2020-06-05/11-57-25/finetune.py': [Errno 2] No such file or directory
python: can't open file '/home/joe/summarization/models/bart/outputs/2020-06-05/11-57-25/finetune.py': [Errno 2] No such file or directory
python: can't open file '/home/joe/summarization/models/bart/outputs/2020-06-05/11-57-25/finetune.py': [Errno 2] No such file or directory
python: can't open file '/home/joe/summarization/models/bart/outputs/2020-06-05/11-57-25/finetune.py': [Errno 2] No such file or directory
python: can't open file '/home/joe/summarization/models/bart/outputs/2020-06-05/11-57-25/finetune.py': [Errno 2] No such file or directory
initializing ddp: LOCAL_RANK: 0/7 WORLD_SIZE:8
[2020-06-05 11:57:57,549][lightning][INFO] - initializing ddp: LOCAL_RANK: 0/7 WORLD_SIZE:8
where finetune.py is in /home/joe/summarization/models/bart/
and the program just gets stuck here, seemingly forever.
My hydra config.yaml is
defaults:
- trainer: train
- data: example
model_name_or_path: bart-large
output_dir: /data/models/
cache_dir: ""
config_name: bart-large
tokenizer_name: bart-large
downgrade: True
gradient_accumulation_steps: 1
max_grad_norm: 1
fp16: False
n_tpu_cores: 0
weight_decay: 0
adam_epsilon: 1e-8
Expected behavior
Processes on the other GPUs should load finetune.py from /home/joe/summarization/models/bart/
Environment
version: 0.8.0-dev
PyTorch Version (e.g., 1.0): 1.4
OS (e.g., Linux): ubuntu 18.04
How you installed PyTorch (conda, pip, source): conda
Python version: 3.6
|
Add multi GPU tests for torchelastic nightly
|
[
"help wanted",
"ci"
] |
Add a test to CircleCI that spawns via torchelastic and verifies that training works and that the trained model returned is correct.
|
[ddp] New ddp implementation doesn't work in notebooks / using scripts
|
[
"bug",
"help wanted",
"priority: 0"
] |
Using .spawn() to spin off DDP subprocesses had a few problems:
Everything needs to be picklable.
It doesn't work well with num_workers on dataloaders because of spawn.
fit(model) trains the model in a subprocess, so the original model is not updated.
Those are not limitations of lightning, but of pytorch and python.
As a result, we removed .spawn and instead call the script under the hood.
This approach solves all problems above, but it assumes you can call your model like
python train.py ... and does not support other ways of calling the script.
We should decide how to support DDP on Jupyter notebooks.
|
Compatibility when DataLoader returns multiple batches, for prefetching purposes
|
[
"question"
] |
I'm trying to convert https://github.com/xcmyz/FastSpeech to Pytorch Lightning. The code does something complicated in the DataLoader where batch_size**2 batches are placed on the device, then iterated over, effectively prefetching batch_size batches to use in an inner loop of your typical pytorch training loop (here)
Is there an easy way to make this compatible with Pytorch Lightning without giving up this prefetching optimization?
Edit: I think I was incorrect here, it's not prefetching, he's just defined a DataLoader that returns batches of batches. I think I can probably fix this.
|
Change prefix in learning rate logger from "lr-" to "lr/"
|
[
"feature",
"help wanted",
"won't fix"
] |
This allows all lr related values to be folded in tensorboard.
|
CI: run doctests only on GPU machine
|
[
"feature",
"help wanted",
"ci"
] |
Currently the doctests run on CPU and GPU but we can't include any GPU related doctests, otherwise they would fail on CPU-only machines.
I propose to turn off CI doctests for CPU-only machines and only let them run on machines with GPU (i.e. drone).
This would allow us to write doctests also for GPU related stuff.
@Borda
|
check_model_configuration throws error for no val_dataloader even if val_check_percent == 0
|
[
"won't fix"
] |
The check_model_configuration method raises an error when val_dataloader is None even if val_check_percent is 0, due to the line below.
pytorch-lightning/pytorch_lightning/trainer/trainer.py
Line 1145
in
c09317e
if self.is_overridden('validation_step', model):
This ends up being a bit annoying, since a common use case is to have a LightningModule set up that has a validation_step, but for some part of development temporarily pass in no validation dataset. The current implementation already has a way for disabling validation, which is to set val_check_percent to 0. However, in this case this does not avoid the error in the line above.
Fix should be simple, simply changing the condition to
if self.is_overridden('validation_step', model) and self.val_percent_check != 0:
Or it might also make sense to simply accept the case where no val_dataloader is passed and skip the validation_step, showing a warning to the user. There seems to already be a related warning that says UserWarning: One of given dataloaders is None and it will be skipped..
|
LR finder broken
|
[
"bug",
"help wanted",
"priority: 0"
] |
#614 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
model = TestModel()
trainer = pl.Trainer(gpus=1, default_save_path=exp_path, max_epochs=100)
def configure_optimizers(self):
optim = torch.optim.Adam(self.parameters(), lr=self.lr)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, 'min')
return [optim], [sched]
# Run learning rate finder
lr_finder = trainer.lr_find(model)
# Results can be found in
lr_finder.results
# Plot with
fig = lr_finder.plot(suggest=True)
fig.show()
The following error is returned consistently:
optimizer got an empty parameter list
The regular .fit method works as expected.
PL version: '0.7.6'
|
Which places are .item() to be used in?
|
[
"question"
] |
Hi,
I read some of the example code using Lightning, and in some places there is usage of .item() in the *_epoch_end() methods to convert the loss to a scalar. Is it needed or not? In PyTorch, I know this is used to prevent memory leaks, but I am unsure whether this is done automatically in Lightning behind the scenes.
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
return {'avg_val_loss': avg_loss.item()}
What's your environment?
OS: Linux
Packaging: conda
|
Tensorboard logging by epoch instead of by step
|
[
"question",
"logger"
] |
Short question concerning the tensorboard logging:
I am using it like this:
def training_epoch_end(self, outputs):
avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
tensorboard_logs = {'train/loss': avg_loss}
for name in self.metrics:
tensorboard_logs['train/{}'.format(name)] = torch.stack([x['metr'][name] for x in outputs]).mean()
return {'loss': avg_loss, 'log': tensorboard_logs}
It works very well, but the x-axis in the plots is the step, so each batch is one step. Is it possible to have the x-axis be the epoch, as I want to plot the metrics only per epoch?
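One hedged workaround (a sketch, not necessarily the recommended pattern): write the epoch-level scalar directly to the underlying SummaryWriter and pass the epoch number as the step value.
import torch

def training_epoch_end(self, outputs):
    avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
    # self.logger.experiment is the TensorBoard SummaryWriter when using TensorBoardLogger
    self.logger.experiment.add_scalar('train/loss', avg_loss, self.current_epoch)
    return {}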
|
verify ddp and ddp_spawn implementation
|
[
"bug",
"help wanted",
"priority: 0"
] | |
OmegaConf save to hparams
|
[
"bug",
"help wanted",
"priority: 0"
] |
🐛 Bug
I have updated to the latest version, 0.8.0rc1, and wanted to test out the new OmegaConf support. I can pass an OmegaConf object into my model, but saving to hparams says that OmegaConf is an unsupported type. I may be doing this wrong, but I followed the updated docs, which show this should be supported.
Traceback (most recent call last):
File "test_n.py", line 105, in <module>
mnist_model = MNISTModel(cfg)
File "test_n.py", line 26, in __init__
self.hparams = hparams
File "/Users/a.bisulco/robotics/.venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 620, in __setattr__
object.__setattr__(self, name, value)
File "/Users/a.bisulco/robotics/.venv/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1682, in hparams
self.save_hyperparameters(hp, frame=inspect.currentframe().f_back.f_back)
File "/Users/a.bisulco/robotics/.venv/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1657, in save_hyperparameters
self._set_hparams(hp)
File "/Users/a.bisulco/robotics/.venv/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1667, in _set_hparams
raise ValueError(f'Unsupported config type of {type(hp)}.')
To Reproduce
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
from omegaconf import OmegaConf
class MNISTModel(pl.LightningModule):
def __init__(self, hparams):
super(MNISTModel, self).__init__()
# not the best model...
# self.hparams = hparams
self.hparams = hparams
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
# called with self(x)
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
# REQUIRED
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {"train_loss": loss}
return {"loss": loss, "log": tensorboard_logs}
def validation_step(self, batch, batch_nb):
# OPTIONAL
x, y = batch
y_hat = self(x)
return {"val_loss": F.cross_entropy(y_hat, y)}
def validation_epoch_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
tensorboard_logs = {"val_loss": avg_loss}
return {"val_loss": avg_loss, "log": tensorboard_logs}
def test_step(self, batch, batch_nb):
# OPTIONAL
x, y = batch
y_hat = self(x)
return {"test_loss": F.cross_entropy(y_hat, y)}
def test_epoch_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
logs = {"test_loss": avg_loss}
return {"test_loss": avg_loss, "log": logs, "progress_bar": logs}
def configure_optimizers(self):
# REQUIRED
# can return multiple optimizers and learning_rate schedulers
# (LBFGS it is automatically supported, no need for closure function)
return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)
def train_dataloader(self):
# REQUIRED
return DataLoader(
MNIST(
os.getcwd(), train=True, download=True, transform=transforms.ToTensor()
),
batch_size=32,
)
def val_dataloader(self):
# OPTIONAL
return DataLoader(
MNIST(
os.getcwd(), train=True, download=True, transform=transforms.ToTensor()
),
batch_size=32,
)
def test_dataloader(self):
# OPTIONAL
return DataLoader(
MNIST(
os.getcwd(), train=False, download=True, transform=transforms.ToTensor()
),
batch_size=32,
)
# train!
train_loader = DataLoader(
MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()),
batch_size=32,
)
cfg = OmegaConf.create({"lr": 0.001})
mnist_model = MNISTModel(cfg)
# most basic trainer, uses good defaults (1 gpu)
trainer = pl.Trainer(gpus=0, max_epochs=1, logger=None)
trainer.fit(mnist_model)
Expected behavior
The object should be saved to the hparams object.
Environment
CUDA:
GPU:
available: False
version: None
Packages:
numpy: 1.17.0
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.8.0rc1
tensorboard: 2.1.1
tqdm: 4.45.0
System:
OS: Darwin
architecture:
64bit
processor: i386
python: 3.6.9
version: Darwin Kernel Version 18.7.0: Sat Oct 12 00:02:19 PDT 2019; root:xnu-4903.278.12~1/RELEASE_X86_64
|
`save_last` should only keep the most recent checkpoint (along with the top k)
|
[
"feature",
"help wanted",
"good first issue"
] |
🚀 Feature
After save_last saves a checkpoint, it removes the previous "last" (i.e. latest) checkpoint (i.e. separate from top k).
Motivation
For example, for someone limited by disk space, a good strategy during training would be to always save the best checkpoint as well as the latest checkpoint to restore from in case training gets interrupted (and ideally with an option to automatically remove the latest checkpoint when training finishes, but that might belong in a separate issue). Setting save_top_k=1 and save_last=True almost achieves this, but the latter actually results in all checkpoints being saved, which is unnecessary for the purpose of restoring training. So I propose that save_last keep only one latest checkpoint at all times, separate from the top k checkpoints. If a user really wants to keep all checkpoints around, they can always use save_top_k=-1 anyway.
Alternatives
Some other config option that allows always keeping one latest checkpoint.
|
How to use metrics classes of 0.8.0
|
[
"question"
] |
❓ Questions and Help
0.8.0 has a new Metric class that can auto-reduce in DDP, but there are no examples of it. Can you give some examples of how to use it?
|
How do you save a trained model in standard pytorch format?
|
[
"question"
] |
I've been googling how to save the model on its own so anyone with torch can just load it and start making predictions, but I've found it difficult to find documentation on this. My assumption was that there would be some way to directly access the underlying PyTorch model and just pickle it, but I'm unsure how to do this.
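For what it's worth, here is a hedged sketch of one option: since a LightningModule is a torch.nn.Module, its weights can be exported as a standard state dict that plain PyTorch can reload (the Linear layer below is just a stand-in for a trained module):
import torch
import torch.nn as nn

net = nn.Linear(8, 2)                       # stand-in for your trained module
torch.save(net.state_dict(), 'weights.pt')  # plain PyTorch checkpoint

restored = nn.Linear(8, 2)                  # the architecture must still be rebuilt
restored.load_state_dict(torch.load('weights.pt'))
prediction = restored(torch.randn(1, 8))
Note this still requires code to rebuild the architecture; only the weights end up in a framework-standard format.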
|
Wrong order of arguments to loss function in template example for finetuning
|
[] |
Hi,
I observed that the argument order for the loss function in the fine-tuning example is switched. Instead of
self.loss(y_true, y_logits) it should be self.loss(y_logits, y_true)
Below is the exact location of the error:
pytorch-lightning/pl_examples/domain_templates/computer_vision_fine_tuning.py
Line 246
in
bd49b07
train_loss = self.loss(y_true, y_logits)
|
Very slow training on colab with TPU
|
[
"help wanted",
"accelerator: tpu"
] |
https://colab.research.google.com/drive/1OxoEcbNVCF5aj_9o0axTnKAh8p5I4Ikw?usp=sharing
|
Early stopping callback
|
[
"bug",
"help wanted"
] |
🐛 Bug
Early stopping does not have the desired effect when creating a custom callback. Even when creating a custom callback with the default values, training stops before the early-stopping conditions are met.
To Reproduce
Create callback
early_stop_callback = EarlyStopping(
monitor='val_loss',
min_delta=0.00,
patience=3,
verbose=False,
mode='min',
strict=True
)
Create trainer
trainer = Trainer.from_argparse_args(Namespace(**dict(train_config)), early_stop_callback=early_stop_callback)
Train
trainer.fit(model)
Here are the validation steps in the model:
def validation_step(self, batch, batch_idx):
batch, y = batch
y_hat = self(batch)
loss = F.cross_entropy(y_hat, y.long())
labels_hat = torch.argmax(y_hat, dim=1)
n_correct_pred = torch.sum(y == labels_hat).item()
return {'val_loss': loss, "n_correct_pred": n_correct_pred, "n_pred": len(y)}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
val_acc = sum([x['n_correct_pred'] for x in outputs]) / sum(x['n_pred'] for x in outputs)
tensorboard_logs = {'val_loss': avg_loss, 'val_acc': val_acc}
return {'val_loss': avg_loss, 'log': tensorboard_logs}
Expected behavior
In my case, training stops after 2 epochs, whether the validation loss increases or not. The callback behavior should be the same as the default. When I don't pass a custom callback, it works fine. I'm probably doing something wrong.
Environment
PyTorch Version : 1.4.0+cu100
OS: Ubuntu 18.04
How you installed PyTorch (conda, pip, source): pip
Python version: 3.6.9
CUDA/cuDNN version: 10.0.130/7.6.4
GPU models and configuration: GeForce GTX 860M
Thanks!
|
Setting of PYTHONHASHSEED has no effect
|
[
"help wanted",
"question"
] |
🐛 Bug
(Previously submitted here: #1939, but I didn't use the correct template, so now I'm resubmitting)
In
pytorch-lightning/pytorch_lightning/trainer/seed.py
Line 32
in
9045b6c
os.environ["PYTHONHASHSEED"] = str(seed)
, PYTHONHASHSEED is assigned a value in order to ensure reproducibility. However, this assignment has no effect. In fact, it might mislead the user or any logging software into believing that PYTHONHASHSEED has a specific value, when in fact it has another.
To see that setting PYTHONHASHSEED inside the current program has no effect, run the following two commands:
PYTHONHASHSEED=1 python -c "import os; print(hash('a'))"
PYTHONHASHSEED=1 python -c "import os; os.environ['PYTHONHASHSEED']='2'; print(hash('a'))"
The commands should output the same value, meaning that setting PYTHONHASHSEED after the process has started has no effect.
The following commands will likely output different values, also indicating that setting PYTHONHASHSEED after the process has started has no effect:
unset PYTHONHASHSEED # make sure it is not already set
python -c "import os; os.environ['PYTHONHASHSEED']='2'; print(hash('a'))"
python -c "import os; os.environ['PYTHONHASHSEED']='2'; print(hash('a'))"
To Reproduce
Steps to reproduce the behavior:
Start python terminal with PYTHONHASHSEED=1 python
Run
import pytorch_lightning as pl
pl.seed_everything(100)
print(hash('a'))
# >>> 8432517439229126278
Start new python terminal with PYTHONHASHSEED=2 python
Run
import pytorch_lightning as pl
pl.seed_everything(100)
print(hash('a'))
# >>> -8333094867672744108
Expected behavior
The output of the hash function is expected to be the same in both cases. The examples demonstrate that this is not what happens.
Environment
* CUDA:
- GPU:
- available: False
- version: 10.2
* Packages:
- numpy: 1.18.5
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.7.6
- tensorboard: 2.2.2
- tqdm: 4.46.1
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor:
- python: 3.8.3
- version: #1 SMP PREEMPT Wed May 27 20:25:12 UTC 2020
|
tensorboard logger should support remote directories
|
[
"feature",
"help wanted"
] |
🚀 Feature
TensorBoard allows you to write to GCS, S3, HDFS, etc. by specifying paths with the right prefix, e.g. logDir='hdfs://path/to/logs/'.
However, the Lightning logger breaks this; see tensorboard.py#L99
Motivation
Training often occurs on remote clusters which don't persist the local disk at the time the job ends. The local disk is also not accessible from outside tools so tensorboard can not access the results while training is in progress.
Pitch
Replace all directory operations with some remote-aware tool. TensorBoard itself provides a gfile-compatible handle; there are other options as well. TensorBoard supports these things natively, so maybe we can avoid doing any local file operations altogether and leverage the TensorBoard library to write remotely.
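As an illustration of the "remote-aware tool" idea (using fsspec here purely as an example; the pitch above suggests TensorBoard's own gfile handle instead):
import fsspec

def remote_aware_makedirs(path: str) -> None:
    # works for local paths as well as URLs such as s3://bucket/logs,
    # provided the matching fsspec backend is installed
    fs, root = fsspec.core.url_to_fs(path)
    fs.makedirs(root, exist_ok=True)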
Alternatives
Some other option would be to write locally but add hooks to sync the logs to a remote storage.
|
The docker image tagged with Pytorch 1.5 and Python 3.8, has Pytorch 1.4 installed and is running Python 3.7
|
[
"bug",
"help wanted"
] |
🐛 Bug
The docker image tagged with PyTorch 1.5, e.g. 0.8.0rc1-py3.8-torch1.5, has torch 1.4 installed in it, as seen via pip list. Also, it is running Python 3.7 instead of Python 3.8 as the tag indicates.
To Reproduce
Steps to reproduce the behavior:
Pull docker image: docker pull pytorchlightning/pytorch_lightning:0.8.0rc1-py3.8-torch1.5
Run the container: docker run --rm -it --init pytorchlightning/pytorch_lightning:0.8.0rc1-py3.8-torch1.5
Check version of python and pytorch:
containeruser@c5c87a61b71c:~$ python --version
Python 3.7.7
containeruser@71d3c9f95de7:~$ pip list | grep torch
pytorch-lightning 0.8.0rc1
torch 1.4.0
Expected behavior
Since the tag of the docker image is 0.8.0rc1-py3.8-torch1.5, I expect to see it running Python 3.8 and Pytorch 1.5.
Environment
N/A
|
Save the whole model object
|
[
"duplicate",
"question"
] |
Is there any way of saving a whole model object with PyTorch Lightning?
e.g. I want something like this:
model = load("mypath")
prediction = model.predict(x)
without needing access to the original Model class.
I know how to load things from a checkpoint, but that requires having access to the model class.
Is this possible? Any help would be much appreciated
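A hedged sketch of one possibility (TorchScript, not an official Lightning feature): scripting the module produces a file that can be reloaded and run without the original Python class definition:
import torch
import torch.nn as nn

net = nn.Linear(4, 2)                         # stand-in for the trained model (or a submodule)
torch.jit.script(net).save('model_scripted.pt')

loaded = torch.jit.load('model_scripted.pt')  # no class definition needed here
prediction = loaded(torch.randn(1, 4))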
|
torch.no_grad() during validation step
|
[
"question"
] |
Does PyTorch lightning call torch.no_grad() under the hood during a validation step?
The documentation here implies it does NOT but I think it definitely should... can someone confirm?
https://pytorch-lightning.readthedocs.io/en/stable/new-project.html
|
Global Gradient calculation is turned off during validation step.
|
[
"bug",
"help wanted"
] |
If an error occurs during the validation step, gradient calculation stays turned off for the rest of the runtime; you have to either specifically re-enable it or restart the runtime!
|
How to run algorithms where there isn't a need for dataloaders?
|
[
"question",
"won't fix"
] |
What is your question?
In on-policy algorithms in reinforcement learning, rollouts are generated on the fly and there is no need for a replay buffer and consequently a dataloader. In these cases, the loss function is calculated according to the current states obtained (generally by using multiple parallel environments).
Hence, is it possible to bypass the dataloader requirement for lightning?
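One hedged workaround (not an official Lightning recipe): wrap the on-the-fly rollout generation in an IterableDataset so the Trainer still receives a DataLoader, for example:
import torch
from torch.utils.data import DataLoader, IterableDataset

class RolloutDataset(IterableDataset):
    def __init__(self, generate_rollout, steps_per_epoch: int = 1000):
        self.generate_rollout = generate_rollout  # user-supplied callable
        self.steps_per_epoch = steps_per_epoch

    def __iter__(self):
        for _ in range(self.steps_per_epoch):
            yield self.generate_rollout()

# dummy generator standing in for real environment interaction:
loader = DataLoader(RolloutDataset(lambda: torch.randn(4), steps_per_epoch=8), batch_size=None)
for sample in loader:
    pass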
|
TypeError: can't pickle _thread.lock objects
|
[
"bug",
"help wanted"
] |
❓ Questions and Help
What is your question?
Hi, everyone. I ran into this problem and I really do not know how to solve it. I've been stuck on it for three or four hours.
Code
This is the error:
Traceback (most recent call last):
File "/home/jq/PycharmProjects/Unet/Code/Lit_train.py", line 50, in <module>
trainer.fit(model)
File "/home/jq/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 859, in fit
self.single_gpu_train(model)
File "/home/jq/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 503, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/jq/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1015, in run_pretrain_routine
self.train()
File "/home/jq/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 347, in train
self.run_training_epoch()
File "/home/jq/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 451, in run_training_epoch
self.run_evaluation(test_mode=self.testing)
File "/home/jq/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 391, in run_evaluation
self.log_metrics(log_metrics, {})
File "/home/jq/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/logging.py", line 74, in log_metrics
self.logger.save()
File "/home/jq/.local/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py", line 10, in wrapped_fn
return fn(*args, **kwargs)
File "/home/jq/.local/lib/python3.6/site-packages/pytorch_lightning/loggers/tensorboard.py", line 161, in save
save_hparams_to_yaml(hparams_file, self.hparams)
File "/home/jq/.local/lib/python3.6/site-packages/pytorch_lightning/core/saving.py", line 151, in save_hparams_to_yaml
yaml.dump(hparams, fp)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/__init__.py", line 290, in dump
return dump_all([data], stream, Dumper=Dumper, **kwds)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all
dumper.represent(data)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent
node = self.represent_data(data)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
return self.represent_mapping('tag:yaml.org,2002:map', data)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 52, in represent_data
node = self.yaml_multi_representers[data_type](self, data)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 343, in represent_object
'tag:yaml.org,2002:python/object:'+function_name, state)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 52, in represent_data
node = self.yaml_multi_representers[data_type](self, data)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 346, in represent_object
return self.represent_sequence(tag+function_name, args)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 92, in represent_sequence
node_item = self.represent_data(item)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 52, in represent_data
node = self.yaml_multi_representers[data_type](self, data)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 343, in represent_object
'tag:yaml.org,2002:python/object:'+function_name, state)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 52, in represent_data
node = self.yaml_multi_representers[data_type](self, data)
File "/home/jq/.local/lib/python3.6/site-packages/yaml/representer.py", line 317, in represent_object
reduce = data.__reduce_ex__(2)
TypeError: can't pickle _thread.lock objects
Exception ignored in: <object repr() failed>
Traceback (most recent call last):
File "/home/jq/.local/lib/python3.6/site-packages/tqdm/std.py", line 1086, in __del__
File "/home/jq/.local/lib/python3.6/site-packages/tqdm/std.py", line 1293, in close
File "/home/jq/.local/lib/python3.6/site-packages/tqdm/std.py", line 1471, in display
File "/home/jq/.local/lib/python3.6/site-packages/tqdm/std.py", line 1089, in __repr__
File "/home/jq/.local/lib/python3.6/site-packages/tqdm/std.py", line 1433, in format_dict
TypeError: 'NoneType' object is not iterable
This is my lightning model code:
def training_step(self, batch, batch_idx):
inputs, targets = batch["img"][DATA], batch["label"][DATA]
logits = self(inputs)
prob = torch.sigmoid(logits)
dice, iou = get_dice_score(prob, targets)
if int(batch_idx) != 0 and self.hparams.show_plot and int(batch_idx) % 15 == 0:
slices = BrainSlices(inputs, targets, logits)
slices.visualize(int(batch_idx), self.current_epoch,
outdir=Path(__file__).resolve().parent / "log" / "plot")
loss = F.binary_cross_entropy_with_logits(logits, targets)
tensorboard_logs = {"train_loss": loss, "train_IoU": iou.mean(), "train_dice": dice.mean()}
return {'loss': loss, "log": tensorboard_logs}
def validation_step(self, batch, batch_id):
inputs, targets = batch["img"][DATA], batch["label"][DATA]
logits = self(inputs)
prob = torch.sigmoid(logits)
loss = F.binary_cross_entropy_with_logits(logits, targets)
dice, iou = get_dice_score(prob, targets)
tensorboard_logs = {"val_loss": loss, "val_IoU": iou.mean(), "val_dice": dice.mean()}
return {'val_loss': loss, 'log': tensorboard_logs}
# Called at the end of the validation epoch with the outputs of all validation steps.
def validation_epoch_end(self, outputs):
# torch.stack: Concatenates sequence of tensors along a new dimension.
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss_avg': avg_loss}
return {'val_loss': avg_loss, 'log': tensorboard_logs}
# default used by the Trainer
checkpoint_callback = ModelCheckpoint(
filepath="log/checkpoint/{epoch}-{val_loss:.2f}",
save_top_k=1,
verbose=True,
monitor='val_loss',
mode='min',
prefix=''
)
early_stop_callback = EarlyStopping(
monitor='val_loss',
patience=3,
strict=False,
verbose=False,
mode='min'
)
model = LitUnet(args)
trainer = Trainer.from_argparse_args(
args,
gpus=1,
check_val_every_n_epoch=1,
checkpoint_callback=checkpoint_callback,
early_stop_callback=early_stop_callback,
# runs 1 train, val, test batch and program ends
fast_dev_run=True,
default_root_dir='log/checkpoint',
profiler=True
)
trainer.fit(model)
|
Can you make a new progress bar for each epoch?
|
[
"question"
] |
The progress bar is very slick, but a big problem with it is that it overwrites itself. For example, if you are at epoch 10, you cannot see what the validation and training losses were for epoch 9. Could the progress bar perhaps be made to work more like in Keras, so that you can see the losses and accuracies of previous epochs?
|
Reloading Models for use elsewhere?
|
[
"help wanted",
"question",
"discussion"
] |
What is your question?
When we save models with a Checkpoint Callback, we can only load it up by having the original LightningModule that we used to create the model and the checkpoint. Is there some extension of save_checkpoint so that I can save out everything that I would need to reload the module and load the checkpoints as part of a save function?
What is best practice around this?
I have created a modular LightningModule that can change depending on hyperparams, which makes it more difficult to just reload and use the module. I would need to have the hyperparam file as well to make sure that my module is loaded in the same way.
|
Let's add a `suggested_num_workers()` method?
|
[
"feature",
"good first issue",
"design"
] |
V1 could be:
import multiprocessing

def suggest_num_workers(num_accelerators):
    num_cpus = multiprocessing.cpu_count()
    return num_cpus * num_accelerators
@PyTorchLightning/core-contributors
Any other heuristics you guys use?
|
OmegaConf/hydra error unsupported `0.8.0rc2`
|
[
"bug",
"help wanted",
"priority: 0"
] |
Hi,
I am using Hydra to set up my configuration variables for my trainer and model parameters. I'm using PL version 0.8.0rc2. When I pass the params to my PL module and set the passed params to self.hparams, I get the following error:
ValueError: Unsupported config type of <class 'omegaconf.dictconfig.DictConfig'>.
I could add more code if that will help.
@Borda
|
Batch weight for accumulating gradients
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
Support different weights for each batch during batch gradient accumulation.
Motivation
In many cases, not all batches are equal. For example, when the inputs are sequences of tokens of variable length, the mean loss over several batches is not the average of the individual batch losses, since each batch can have a varying number of samples.
Pitch
In training_loop.py, add a 'batch_total_weight' attribute initialized to zero. When fetching returns from training_step, if a 'batch_weight' key is present in the returned dictionary, the batch's loss is scaled by 'batch_weight' during the backward step and 'batch_weight' is added to 'batch_total_weight'. Before the optimization step, the gradients are normalized by 'batch_total_weight', which is then reset to zero for the next accumulation steps.
A typical scenario is when 'batch_weight' is the number of samples within each batch, so that the average across multiple batches is computed correctly.
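For illustration, a small self-contained example of the weighting arithmetic described in the pitch (the names are made up; this is not the proposed trainer change itself):
import torch

def accumulate_weighted(losses_and_weights):
    # combine per-batch mean losses with per-batch sample counts so the result
    # equals the mean loss over the concatenated samples
    total_weight = sum(w for _, w in losses_and_weights)
    return sum(loss * w for loss, w in losses_and_weights) / total_weight

# two batches with 3 and 5 samples:
combined = accumulate_weighted([(torch.tensor(0.9), 3), (torch.tensor(0.5), 5)])
# combined == (0.9 * 3 + 0.5 * 5) / 8 == 0.65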
|
Is there a callback for before "configure_optimizers" is called?
|
[
"question",
"won't fix"
] |
Is there a callback for before configure_optimizers is called?
I didn't notice anything in the docs, so I'm wondering if such a callback exists, or if running code just prior to configure_optimizers should be handled elsewhere.
|
[hparams] support hparams and params in the init
|
[
"feature",
"help wanted",
"accelerator: tpu"
] |
🐛 Bug
Gives TypeError while running on Google Colab TPU
To Reproduce
Steps to reproduce the behavior:
https://colab.research.google.com/drive/1K5i4kXzZCvbq3jc8IHPD_bd_EdUSjFml?usp=sharing
running trainer.fit(model) on TPU
TypeError Traceback (most recent call last)
in ()
21 )
22
---> 23 trainer.fit(model)
3 frames
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders)
872
873 # load weights if not interrupted
--> 874 self.load_spawn_weights(model)
875 self.model = model
876
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_data_parallel.py in load_spawn_weights(self, original_model)
415 # load weights saved in ddp
416 path = os.path.join(self.default_root_dir, '__temp_weight_ddp_end.ckpt')
--> 417 loaded_model = original_model.class.load_from_checkpoint(path)
418
419 # copy loaded weights to old model
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/lightning.py in load_from_checkpoint(cls, checkpoint_path, map_location, hparams_file, tags_csv, hparam_overrides, *args, **kwargs)
1596 update_hparams(hparams, hparam_overrides)
1597
-> 1598 model = cls._load_model_state(checkpoint, *args, **kwargs)
1599 return model
1600
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/lightning.py in _load_model_state(cls, checkpoint, *args, **kwargs)
1630 if cls_takes_hparams:
1631 kwargs.update(hparams=hparams)
-> 1632 model = cls(*args, **kwargs)
1633 model.load_state_dict(checkpoint['state_dict'])
1634
TypeError: init() missing 3 required positional arguments: 'df_train', 'df_test', and 'fold'
Code sample
Expected behavior
I am trying to implement 5-fold CV and expected it to proceed to the next fold normally.
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
CUDA:
GPU:
available: False
version: None
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0a0+541814f
pytorch-lightning: 0.7.6
tensorboard: 2.2.2
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Wed Feb 19 05:26:34 PST 2020
Additional context
Running on Google Colab TPU
|
module 'pytorch_lightning' has no attribute 'metrics'
|
[
"bug",
"help wanted"
] |
module 'pytorch_lightning' has no attribute 'metrics'.
To Reproduce
I am using master branch installation
pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade
pl.metrics.AUROC() throws error.
Expected behavior
As far as I can see, the metrics are present on the master branch, but they are not exposed in the package __init__.
|
Best practices when Module __init__ contains Dataset?
|
[
"question"
] |
❓ Questions and Help
What is your question?
What are best practices when the Module init contains the Dataset? This is useful when the input and output sizes are derived from the data set, and not hparams.
However, Module.load_from_checkpoint(...) fails with the following error:
TypeError: __init__() missing 1 required positional argument: 'patchDistanceDataset'
What have you tried?
There could be a module method to load the dataset. However, this will not help init to build the network because it will be unaware of the input and output sizes.
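One possible workaround (a hedged, untested sketch): load_from_checkpoint forwards extra keyword arguments to __init__ (as seen in the traceback of another issue above), so the dataset could be supplied at load time. 'build_patch_distance_dataset' and 'MyModule' are hypothetical names:
dataset = build_patch_distance_dataset()   # hypothetical helper that rebuilds the dataset
model = MyModule.load_from_checkpoint(
    'path/to/checkpoint.ckpt',
    patchDistanceDataset=dataset,          # the constructor argument named in the error above
)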
What's your environment?
OS: OSX
Packaging pip3
Version 0.7.6
|
How to programmatically determine the checkpoint directory?
|
[
"question"
] |
❓ Questions and Help
What is your question?
How do you programmatically determine the checkpoint directory?
PyTorch Lightning has automatic support for checkpoints, and those checkpoints are stored in lightning_logs/VERSION/checkpoints/epoch=BEST.ckpt. However, I don't know how to programmatically determine what the VERSION or directory is, in order to copy the checkpoint and save it for later in another directory.
Another reason I would like to do this is to store auxiliary training data (e.g. Dataset) with the checkpoint, and I don't know how to do this based upon the docs.
What have you tried?
I have tried manually finding the directory in lightning_logs.
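One hedged workaround that relies only on the directory layout described above (lightning_logs/VERSION/checkpoints/...), with no version-specific Lightning attributes:
import glob
import os

def latest_checkpoint(root: str = 'lightning_logs'):
    # pick the most recently modified version directory, then its newest checkpoint
    versions = sorted(glob.glob(os.path.join(root, 'version_*')), key=os.path.getmtime)
    if not versions:
        return None
    ckpts = glob.glob(os.path.join(versions[-1], 'checkpoints', '*.ckpt'))
    return max(ckpts, key=os.path.getmtime) if ckpts else None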
What's your environment?
OS: OSX
Packaging pip3
Version 0.7.6
|
CPU/GPU Template
|
[
"bug",
"help wanted",
"priority: 0"
] |
🐛 Bug
The GPU and CPU templates do not currently run on master, after the changes that introduced the setup hook.
python -m pl_examples.basic_examples.gpu_template --gpus 4 --distributed_backend ddp
python -m pl_examples.basic_examples.cpu_template
CPU Template Error:
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py", line 53, in <module>
main(args)
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py", line 34, in main
trainer.fit(model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 952, in fit
self.run_pretrain_routine(model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 1063, in run_pretrain_routine
self.reset_val_dataloader(ref_model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 331, in reset_val_dataloader
self._reset_eval_dataloader(model, 'val')
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 253, in _reset_eval_dataloader
dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 352, in request_dataloader
dataloader = dataloader_fx()
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py", line 158, in val_dataloader
return DataLoader(self.mnist_test, batch_size=self.batch_size, num_workers=4)
File "/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'LightningTemplateModel' object has no attribute 'mnist_test'
GPU Template Error:
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py", line 64, in __init__
self.c_d1_drop = nn.Dropout(self.drop_prob)
File "/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/dropout.py", line 10, in __init__
if p < 0 or p > 1:
TypeError: '<' not supported between instances of 'Namespace' and 'int'
Environment
CUDA:
- GPU:
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- available: True
- version: 10.2
Packages:
- numpy: 1.18.4
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.8.0
- tensorboard: 2.2.1
- tqdm: 4.46.0
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.6.8
- version: #44~18.04.2-Ubuntu SMP Thu Apr 23 14:27:18 UTC 2020
|
Precision 16 not transforming inputs to Float16 nor having LSTMs as halfs
|
[
"bug",
"help wanted"
] |
🐛 Bug
When using precision 16 to train a model, the LSTM layers are not converted to accept FP16 and the inputs to the model are FP32 (as mentioned in issue #1876). This can be seen with simple modifications of the MNIST model you provide in Colab.
To Reproduce
Execute the following code:
import os
import pytorch_lightning as pl
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST
class LitClassifier(pl.LightningModule):
def __init__(self):
super().__init__()
self.l0 = torch.nn.Linear(28 * 28, 32)
self.l1 = torch.nn.LSTM(32, 10, bidirectional=True)
def forward(self, x):
print('forward() input dtype: ', x.dtype)
x1 = self.l0(x.view(x.size(0), -1))
return torch.relu(self.l1(x1.view(x1.size(0), 1, -1))[0])
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x).squeeze(1), y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
print('Pytorch version: ', torch.__version__)
print('PL version: ', pl.__version__)
train_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
model = LitClassifier()
trainer = pl.Trainer(gpus=1, precision=16)
trainer.fit(model, train_loader)
This gives the following error:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Program Files\JetBrains\PyCharm 2018.3.5\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2018.3.5\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/rodri/PycharmProjects/TTMelGAN/src/data/__init__.py", line 38, in <module>
trainer.fit(model, train_loader)
File "C:\Users\rodri\Miniconda3\envs\TTMelGAN\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 921, in fit
self.single_gpu_train(model)
File "C:\Users\rodri\Miniconda3\envs\TTMelGAN\lib\site-packages\pytorch_lightning\trainer\distrib_parts.py", line 171, in single_gpu_train
self.run_pretrain_routine(model)
File "C:\Users\rodri\Miniconda3\envs\TTMelGAN\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1091, in run_pretrain_routine
self.train()
File "C:\Users\rodri\Miniconda3\envs\TTMelGAN\lib\site-packages\pytorch_lightning\trainer\training_loop.py", line 374, in train
self.run_training_epoch()
File "C:\Users\rodri\Miniconda3\envs\TTMelGAN\lib\site-packages\pytorch_lightning\trainer\training_loop.py", line 457, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "C:\Users\rodri\Miniconda3\envs\TTMelGAN\lib\site-packages\pytorch_lightning\trainer\training_loop.py", line 633, in run_training_batch
loss, batch_output = optimizer_closure()
File "C:\Users\rodri\Miniconda3\envs\TTMelGAN\lib\site-packages\pytorch_lightning\trainer\training_loop.py", line 594, in optimizer_closure
output_dict = self.training_forward(split_batch, batch_idx,
File "C:\Users\rodri\Miniconda3\envs\TTMelGAN\lib\site-packages\pytorch_lightning\trainer\training_loop.py", line 767, in training_forward
output = self.model.training_step(*args)
File "C:/Users/rodri/PycharmProjects/TTMelGAN/src/data/__init__.py", line 24, in training_step
loss = F.cross_entropy(self(x).squeeze(1), y)
File "C:\Users\rodri\Miniconda3\envs\TTMelGAN\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "C:/Users/rodri/PycharmProjects/TTMelGAN/src/data/__init__.py", line 20, in forward
return torch.relu(self.l1(x1.view(x1.size(0), 1, -1))[0])
File "C:\Users\rodri\Miniconda3\envs\TTMelGAN\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\rodri\Miniconda3\envs\TTMelGAN\lib\site-packages\torch\nn\modules\rnn.py", line 576, in forward
result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM
If we remove the l0 layer from the example, the LSTM receives the input directly (in FP32) and works without problems. When we keep it, l0 converts the FP32 input into FP16 and the LSTM call breaks.
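A possible workaround (a sketch, assuming Lightning is running the forward under native amp autocast here) would be to keep the LSTM in full precision:
from torch.cuda.amp import autocast

def forward(self, x):
    x1 = self.l0(x.view(x.size(0), -1))
    # keep the recurrent part in full precision; the fused cuDNN LSTM kernel
    # appears to reject the fp16 activations produced under autocast
    with autocast(enabled=False):
        out, _ = self.l1(x1.float().view(x1.size(0), 1, -1))
    return torch.relu(out)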
Environment
Pytorch version: 1.6.0.dev20200618
PL version: 0.8.0
(both downloaded before executing to check that I have the latest version in both cases. Error also occurs in Linux.)
CUDA:
GPU:
GeForce MX150
available: True
version: 10.2
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.6.0.dev20200618
pytorch-lightning: 0.8.0
tensorboard: 2.2.2
tqdm: 4.46.1
System:
OS: Windows
architecture:
64bit
WindowsPE
processor: Intel64 Family 6 Model 142 Stepping 10, GenuineIntel
python: 3.8.3
version: 10.0.18362
|
model.setup and model.on_fit_start not working
|
[] |
self.model not yet set so self.is_function_implemented('on_fit_start') and self.is_function_implemented('setup') in trainer.fit will both always return false.
Resolve by checking if model instead of self has the relevant function, e.g.
if callable(getattr(model, f_name, None)): getattr(model, f_name)()
relevant lines:
pytorch-lightning/pytorch_lightning/trainer/trainer.py
Line 851
in
f8c10fb
if self.is_function_implemented('on_fit_start'):
pytorch-lightning/pytorch_lightning/trainer/trainer.py
Line 855
in
f8c10fb
if self.is_function_implemented('setup'):
|
Steps not incremented correctly with accumulate gradients
|
[
"bug",
"help wanted"
] |
π Bug
global_step and current_epoch do not match up anymore after more than 1 epoch when setting accumulate gradients greater than 1.
I think at the end of each epoch optimizer_step (and on_before_zero_grad) is not called in that case.
To Reproduce
Create a pl.LightningModule that logs current_epoch and global_step in every training_step_end.
Have a dataset with 20k samples, set the dataloader batch size to 1171, and set accumulate_grad_batches=7 in the trainer
Train for 100 steps
Look at logs and overall number of steps and epoch
Expected behavior
with 100 steps we should be at roughly epoch 33 (20k samples / 1171 per batch gives 18 batches per epoch, which with accumulate_grad_batches=7 should yield 3 optimizer steps per epoch)
in the logs you should see that when current_epoch gets incremented, global_step gets incremented as well
Actual behavior
with 100 steps we are at epoch 50
in the logs we see that global_step gets incremented after every accumulation of 7 batches, but not when current_epoch gets incremented
in this particular example global_step is basically missing every 3rd increment, presumably because the partial accumulation window at the end of each epoch never triggers an optimizer step
Code sample
Below is basically what I have.
I am adjusting the learning rate with every global step.
The learning rate adjustment and each training_step_end call gets printed.
class MyModule(pl.LightningModule):
def __init__(self, hparams: dict):
super().__init__()
self.hparams = hparams
self.model = model_fact()
def training_step_end(self, outputs: dict):
print(f'Epoch: {self.current_epoch} Step: {self.global_step} Batch size: {len(outputs["logits"])}')
def on_before_zero_grad(self, optimizer: torch.optim.Optimizer):
current_lr = [d['lr'] for d in optimizer.param_groups][0]
print(f'Step: {self.global_step} LR: {current_lr:.4e}')
def train_dataloader(self):
return DataLoader(MyDataset(), batch_size=1171, shuffle=False)
def forward(self, x):
return self.model(x)
def training_step(self, batch, batch_idx: int) -> dict:
inputs, targets = batch
logits = self.forward(*inputs)
loss = self.criterion(logits, targets)
return {'loss': loss, 'logits': logits}
def configure_optimizers(self):
return optim.Adam(self.model.parameters())
def optimizer_step(self, epoch: int, batch_idx: int, optimizer: optim.Optimizer,
optimizer_idx: int, second_order_closure: callable = None):
# modify learning rate...
optimizer.step()
self.on_before_zero_grad(optimizer)
optimizer.zero_grad()
trainer = pl.Trainer(
max_steps=100,
max_epochs=int(1e6),
gpus=N_GPUS if N_GPUS > 0 else None,
distributed_backend=None if N_GPUS < 2 else 'dp',
num_sanity_val_steps=0,
progress_bar_refresh_rate=0,
accumulate_grad_batches=7,
early_stop_callback=False)
Below is some sample output.
You can see the end of the epoch where the last 93 samples are processed.
Then, current_epoch increases, but global_step does not increase.
Additionally, the learning rate print is missing, so on_before_zero_grad
was not called.
Step: 53 LR: 3.3861e-04
Epoch: 26 Step: 54 Batch size: 1171
Epoch: 26 Step: 54 Batch size: 1171
Epoch: 26 Step: 54 Batch size: 1171
Epoch: 26 Step: 54 Batch size: 93
Epoch: 27 Step: 54 Batch size: 1171
Epoch: 27 Step: 54 Batch size: 1171
Epoch: 27 Step: 54 Batch size: 1171
Epoch: 27 Step: 54 Batch size: 1171
Epoch: 27 Step: 54 Batch size: 1171
Epoch: 27 Step: 54 Batch size: 1171
Epoch: 27 Step: 54 Batch size: 1171
Step: 54 LR: 3.3267e-04
Epoch: 27 Step: 55 Batch size: 1171
Epoch: 27 Step: 55 Batch size: 1171
Epoch: 27 Step: 55 Batch size: 1171
Epoch: 27 Step: 55 Batch size: 1171
Epoch: 27 Step: 55 Batch size: 1171
Epoch: 27 Step: 55 Batch size: 1171
Epoch: 27 Step: 55 Batch size: 1171
Step: 55 LR: 3.2673e-04
Epoch: 27 Step: 56 Batch size: 1171
Epoch: 27 Step: 56 Batch size: 1171
Epoch: 27 Step: 56 Batch size: 1171
Epoch: 27 Step: 56 Batch size: 93
Epoch: 28 Step: 56 Batch size: 1171
Epoch: 28 Step: 56 Batch size: 1171
Epoch: 28 Step: 56 Batch size: 1171
Epoch: 28 Step: 56 Batch size: 1171
Epoch: 28 Step: 56 Batch size: 1171
Epoch: 28 Step: 56 Batch size: 1171
Environment
pytorch-lightning==0.7.5
torch==1.5.0 (conda)
Ubuntu 18.04.4 LTS
python 3.7.7
CUDA/cuDNN version: Cuda compilation tools, release 10.1, V10.1.243
GPU models and configuration: 6 GPUs in data parallel mode
|
dataframes passed outside hparams causing issues
|
[
"bug",
"help wanted"
] |
π Bug
In 0.8.0 (updating from 0.7.5), assigning self.hparams raises an exception when a dataframe is passed to the module constructor alongside the parsed hyperparameters.
To Reproduce
import argparse
import pandas as pd
import pytorch_lightning as pl
parser = argparse.ArgumentParser()
parser.add_argument('--hparam1', default=8, type=int)
parser.add_argument('--hparam2', default="test", type=str)
parser.add_argument('--hparam3', default=True, type=bool)
hparams = parser.parse_args()
some_data = pd.DataFrame([1, 2, 3])
class module(pl.LightningModule):
def __init__(self, hparams, some_data):
super(module, self).__init__()
self.hparams = hparams
self.data = some_data
def train_dataloader(self):
return
def val_dataloader(self):
return
def configure_optimizers(self):
return
def forward(self, x):
return x
def training_step(self, batch, batch_idx):
return batch
def validation_step(self, batch, batch_idx):
return batch
model = module(hparams, some_data)
Traceback (most recent call last):
File "train_test.py", line 37, in <module>
model = module(hparams, some_data)
File "train_test.py", line 16, in __init__
self.hparams = hparams
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 638, in __setattr__
object.__setattr__(self, name, value)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1695, in hparams
self.save_hyperparameters(hp, frame=inspect.currentframe().f_back.f_back)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1662, in save_hyperparameters
cand_names = [k for k, v in init_args.items() if v == hp]
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1662, in <listcomp>
cand_names = [k for k, v in init_args.items() if v == hp]
File "/opt/conda/lib/python3.6/site-packages/pandas/core/generic.py", line 1479, in __nonzero__
f"The truth value of a {type(self).__name__} is ambiguous. "
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
Expected behavior
hparams is saved correctly as previous versions, disregarding the dataframes
|
Weird DP behaviour on AWS P2 16xlarge with Wandb Logger
|
[
"question",
"logger",
"3rd party"
] |
β Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
Wandb Logger seems to not log all steps.
AWS P2 16xLarge has 16 K80s on 1 node.
I am using dp distrib mode.
Code
My training steps:
def training_step(self, batch, batch_idx):
loss, logits, *_ = self(batch)
output = {"loss": loss, "log": {"loss": loss}}
return output
My logs:
When I change my training_step to explicitly log metrics, all works again:
def training_step(self, batch, batch_idx):
loss, logits, *_ = self(batch)
self.logger.log_metrics({"loss": loss.cpu()})
output = {"loss": loss}
return output
What's your environment?
OS: [e.g. iOS, Linux, Win]: Linux
Packaging [e.g. pip, conda]: Pip
Version [e.g. 0.5.2.1]: 0.8.0
|
Full batch training
|
[
"question"
] |
β Questions and Help
For smaller datasets, it makes sense to do full-batch training, not minibatch. How do you implement full-batch training in pytorch lightning, given that the train and validation sets might be different sizes?
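The only approach I can think of is to set each DataLoader's batch size to the length of its own dataset, roughly like this (a sketch, assuming both datasets fit in memory; self.train_dataset / self.val_dataset are placeholders for however the datasets are stored):
from torch.utils.data import DataLoader

# inside the LightningModule
def train_dataloader(self):
    return DataLoader(self.train_dataset, batch_size=len(self.train_dataset))

def val_dataloader(self):
    # validation uses its own length, so the two sizes don't need to match
    return DataLoader(self.val_dataset, batch_size=len(self.val_dataset))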
|
Improve Exception Handling
|
[
"feature",
"help wanted",
"good first issue",
"let's do it!",
"priority: 2"
] |
π Code Quality Improvement
I came across this a few times already:
try:
# something
except Exception:
# something
It is the worst possible way to handle Exceptions. It is better to catch the specific exception or at least log a message.
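For example, something along these lines (a sketch, not tied to any particular call site; `dataloader` is just a placeholder):
import logging

log = logging.getLogger(__name__)

try:
    num_batches = len(dataloader)
except TypeError as err:
    # catch the specific failure and tell the user what happened
    log.warning('dataloader has no __len__, assuming an infinite loader: %s', err)
    num_batches = float('inf')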
Alternatives
None. Sooner or later someone has to deal with this anyway :)
|
'Trainer' object has no attribute 'proc_rank'
|
[
"help wanted",
"question"
] |
π Bug
1st epoch runs to completion and the above error is thrown in the is_logger() method.
To Reproduce
AttributeError Traceback (most recent call last)
<ipython-input-14-1b9ebf437115> in <module>()
3 trainer = pl.Trainer(**train_params)
4
----> 5 trainer.fit(model)
8 frames
<ipython-input-3-bb983543bb31> in is_logger(self)
8
9 def is_logger(self):
---> 10 return self.trainer.proc_rank <= 0
11
12 def forward(
AttributeError: 'Trainer' object has no attribute 'proc_rank'
Code sample
class T5FineTuner(pl.LightningModule):
def __init__(self, hparams):
super(T5FineTuner, self).__init__()
self.hparams = hparams
self.model = T5ForConditionalGeneration.from_pretrained(hparams.model_name_or_path)
self.tokenizer = T5Tokenizer.from_pretrained(hparams.tokenizer_name_or_path)
def is_logger(self):
return self.trainer.proc_rank <= 0
def forward(
self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, lm_labels=None
):
return self.model(
input_ids,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
lm_labels=lm_labels,
)
def _step(self, batch):
lm_labels = batch["target_ids"]
lm_labels[lm_labels[:, :] == self.tokenizer.pad_token_id] = -100
outputs = self(
input_ids=batch["source_ids"],
attention_mask=batch["source_mask"],
lm_labels=lm_labels,
decoder_attention_mask=batch['target_mask']
)
loss = outputs[0]
return loss
def training_step(self, batch, batch_idx):
loss = self._step(batch)
tensorboard_logs = {"train_loss": loss}
return {"loss": loss, "log": tensorboard_logs}
def training_epoch_end(self, outputs):
avg_train_loss = torch.stack([x["loss"] for x in outputs]).mean()
tensorboard_logs = {"avg_train_loss": avg_train_loss}
return {"avg_train_loss": avg_train_loss, "log": tensorboard_logs, 'progress_bar': tensorboard_logs}
def validation_step(self, batch, batch_idx):
loss = self._step(batch)
return {"val_loss": loss}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
tensorboard_logs = {"val_loss": avg_loss}
return {"avg_val_loss": avg_loss, "log": tensorboard_logs, 'progress_bar': tensorboard_logs}
def configure_optimizers(self):
"Prepare optimizer and schedule (linear warmup and decay)"
model = self.model
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": self.hparams.weight_decay,
},
{
"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
optimizer = AdamW(optimizer_grouped_parameters, lr=self.hparams.learning_rate, eps=self.hparams.adam_epsilon)
self.opt = optimizer
return [optimizer]
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None):
if self.trainer.use_tpu:
xm.optimizer_step(optimizer)
else:
optimizer.step()
optimizer.zero_grad()
self.lr_scheduler.step()
def get_tqdm_dict(self):
tqdm_dict = {"loss": "{:.3f}".format(self.trainer.avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]}
return tqdm_dict
def train_dataloader(self):
train_dataset = get_dataset(tokenizer=self.tokenizer, type_path="train", args=self.hparams)
dataloader = DataLoader(train_dataset, batch_size=self.hparams.train_batch_size, drop_last=True, shuffle=True, num_workers=4)
t_total = (
(len(dataloader.dataset) // (self.hparams.train_batch_size * max(1, self.hparams.n_gpu)))
// self.hparams.gradient_accumulation_steps
* float(self.hparams.num_train_epochs)
)
scheduler = get_linear_schedule_with_warmup(
self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=t_total
)
self.lr_scheduler = scheduler
return dataloader
def val_dataloader(self):
val_dataset = get_dataset(tokenizer=self.tokenizer, type_path="val", args=self.hparams)
return DataLoader(val_dataset, batch_size=self.hparams.eval_batch_size, num_workers=4)
Expected behavior
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
PyTorch Version (e.g., 1.0):
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.6.9
CUDA/cuDNN version: 10.1
GPU models and configuration: nVidia K80
Any other relevant information:
Additional context
This code does run in 0.7.6 version, but it breaks in the latest release
|
Could I convert lightning module to onnx? Thanks!
|
[
"feature",
"let's do it!"
] |
π Feature
PyTorch Lightning works very well, but I cannot find any comments or examples that explain how to convert a pretrained Lightning model to ONNX. Is the LightningModule only meant for research purposes, without support for ONNX cross-platform deployment?
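Since a LightningModule is a regular torch.nn.Module, I would have expected the plain torch.onnx.export route to work, something like the sketch below (MyModule, the checkpoint path and the input shape are placeholders for my own model):
import torch

model = MyModule.load_from_checkpoint('path/to/checkpoint.ckpt')
model.eval()
dummy_input = torch.randn(1, 28 * 28)  # must match what forward() expects
torch.onnx.export(model, dummy_input, 'model.onnx', opset_version=11)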
|
TensorMetric not updated to cuda device
|
[
"bug",
"help wanted",
"priority: 0"
] |
π Bug
When tensors located on the GPU are passed through a TensorMetric forward method, they are transfered to the CPU by this method. It seems that the 'cpu' device is not updated when training on a gpu. The cpu device seems to be hardcoded here
To Reproduce
Steps to reproduce the behavior:
import torch
import torch.nn.functional as F
from pytorch_lightning.metrics import TensorMetric
class FocalTverskyMetric(TensorMetric):
def __init__(self, alpha: float, beta: float, gamma: float, smooth=1e-8):
super().__init__("FocalTversky")
self.alpha = alpha
self.beta = beta
self.gamma = gamma
self.smooth = smooth
# This line works as a workaround
# self._device = torch.device('cuda')
def _tversky_index_c(self, p: torch.Tensor, g: torch.Tensor):
c = p.shape[1]
p = p.permute(0, 2, 3, 1).reshape((-1, c))
g = F.one_hot(g.flatten().long(), c)
tp = torch.sum(torch.mul(p, g), dim=0)
fn = torch.sum(torch.mul(1. - p, g), dim=0)
fp = torch.sum(torch.mul(p, 1. - g), dim=0)
return (tp + self.smooth) / (tp + self.alpha * fn + self.beta * fp + self.smooth)
def forward(self, x, y):
ti = self._tversky_index_c(x, y)
res = (1 - ti).pow(1 / self.gamma)
return torch.sum(res, dim=0)
if __name__ == '__main__':
metric = FocalTverskyMetric(alpha=0.5, beta=0.5, gamma=1.)
preds = torch.Tensor([[[[1.]], [[0.]]], [[[1.]], [[0.]]], [[[0.]], [[1.]]]]).cuda()
assert preds.is_cuda # Passes
labels = torch.Tensor([[[1]], [[1]], [[0]]]).cuda()
assert labels.is_cuda # Passes
loss = metric(preds, labels)
assert loss.is_cuda # Fails
When training in DDP mode with 16-bit precision, these metrics throws the stack trace below. This disappears in 32-bit mode since it's the amp grad_scaler asserting that the loss value is a cuda tensor.
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
CUDA_VISIBLE_DEVICES: [0,1,2,3]
Using 16bit precision.
Using 16bit precision.
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/4
Using 16bit precision.
initializing ddp: GLOBAL_RANK: 2, MEMBER: 3/4
Using 16bit precision.
initializing ddp: GLOBAL_RANK: 3, MEMBER: 4/4
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/4
----------------------------------------------------------------------------------------------------
distributed_backend=ddp
All DDP processes registered. Starting ddp with 4 processes
----------------------------------------------------------------------------------------------------
| Name | Type | Params
-------------------------------------------------
0 | model | DeepLabV3 | 60 M
1 | calc_loss | FocalTverskyMetric | 0
2 | calc_iou | IoU | 0
Epoch 1: 0% 0/493 [00:00<?, ?it/s] Traceback (most recent call last):
File "/opt/code/deeplabv3_lightning.py", line 269, in <module>
'pred': predict
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/opt/code/deeplabv3_lightning.py", line 224, in train
trainer.fit(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 895, in fit
self.ddp_train(task, model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 526, in ddp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1091, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 374, in train
self.run_training_epoch()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 457, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 633, in run_training_batch
loss, batch_output = optimizer_closure()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 611, in optimizer_closure
model_ref.backward(self, closure_loss, optimizer, opt_idx)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/core/hooks.py", line 179, in backward
self.trainer.scaler.scale(loss).backward()
File "/opt/conda/lib/python3.7/site-packages/torch/cuda/amp/grad_scaler.py", line 156, in scale
assert outputs.is_cuda
AssertionError
Traceback (most recent call last):
File "/opt/code/deeplabv3_lightning.py", line 269, in <module>
'pred': predict
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/opt/code/deeplabv3_lightning.py", line 224, in train
trainer.fit(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 895, in fit
self.ddp_train(task, model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 526, in ddp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1091, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 374, in train
self.run_training_epoch()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 457, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 633, in run_training_batch
loss, batch_output = optimizer_closure()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 611, in optimizer_closure
model_ref.backward(self, closure_loss, optimizer, opt_idx)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/core/hooks.py", line 179, in backward
self.trainer.scaler.scale(loss).backward()
File "/opt/conda/lib/python3.7/site-packages/torch/cuda/amp/grad_scaler.py", line 156, in scale
assert outputs.is_cuda
AssertionError
Traceback (most recent call last):
File "deeplabv3_lightning.py", line 269, in <module>
'pred': predict
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "deeplabv3_lightning.py", line 224, in train
trainer.fit(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 910, in fit
self.spawn_ddp_children(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 442, in spawn_ddp_children
self.ddp_train(local_rank, model, is_master=True)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 526, in ddp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1091, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 374, in train
self.run_training_epoch()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 457, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 633, in run_training_batch
loss, batch_output = optimizer_closure()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 611, in optimizer_closure
model_ref.backward(self, closure_loss, optimizer, opt_idx)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/core/hooks.py", line 179, in backward
self.trainer.scaler.scale(loss).backward()
File "/opt/conda/lib/python3.7/site-packages/torch/cuda/amp/grad_scaler.py", line 156, in scale
assert outputs.is_cuda
AssertionError
Traceback (most recent call last):
File "/opt/code/deeplabv3_lightning.py", line 269, in <module>
'pred': predict
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/opt/conda/lib/python3.7/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/opt/code/deeplabv3_lightning.py", line 224, in train
trainer.fit(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 895, in fit
self.ddp_train(task, model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 526, in ddp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1091, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 374, in train
self.run_training_epoch()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 457, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 633, in run_training_batch
loss, batch_output = optimizer_closure()
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 611, in optimizer_closure
model_ref.backward(self, closure_loss, optimizer, opt_idx)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/core/hooks.py", line 179, in backward
self.trainer.scaler.scale(loss).backward()
File "/opt/conda/lib/python3.7/site-packages/torch/cuda/amp/grad_scaler.py", line 156, in scale
assert outputs.is_cuda
AssertionError
Exception ignored in: <function tqdm.__del__ at 0x7ff26f244a70>
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1086, in __del__
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1293, in close
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1471, in display
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1089, in __repr__
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1433, in format_dict
TypeError: cannot unpack non-iterable NoneType object
Expected behavior
The TensorMetric should have self._device updated to equal the current Trainer device during initialization.
Environment
CUDA:
GPU:
GeForce GTX 1050
available: True
version: 10.2
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0.dev20200618
pytorch-lightning: 0.8.0
tensorboard: 2.1.1
tqdm: 4.46.1
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.3
version: #41-Ubuntu SMP Wed Jun 3 18:57:02 UTC 2020
The bug is also present on AWS p3.8xlarge instances using the same environment but with 4 Nvidia Tesla V100s
|
_has_len does not handle NotImplementedError (raised by torchtext)
|
[
"bug",
"help wanted"
] |
π Bug
When using torchtext.data.Iterator with a batch_size_fn function, calling len on the iterator raises a NotImplementedError, which is not caught by the _has_len function.
A bug fix is very simple: just return False if a NotImplementedError is raised. This is unlikely to have any negative side effects since it corresponds with what _has_len is expected to do. The fix allowed me to train my model using torchtext.
I plan to submit a pull request with the fix above.
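In essence, the guard would look like this (a sketch of the idea, not the exact Lightning code):
def _has_len(dataloader) -> bool:
    """Return True only if the dataloader exposes a usable __len__."""
    try:
        return len(dataloader) > 0
    except (TypeError, NotImplementedError):
        # e.g. torchtext iterators with a batch_size_fn raise NotImplementedError
        return False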
There are no additional dependencies required; however this problem occurred when using torchtext.
Example stack trace:
Traceback (most recent call last):
File "/Users/thomas/scm/OakDataPrep/oakSkipThoughtTrainer.py", line 18, in <module>
trainer.fit(model)
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 952, in fit
self.run_pretrain_routine(model)
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1091, in run_pretrain_routine
self.train()
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 334, in train
self.reset_train_dataloader(model)
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 201, in reset_train_dataloader
if not _has_len(self.train_dataloader):
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 49, in _has_len
if len(dataloader) == 0:
File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/torchtext/data/iterator.py", line 136, in __len__
raise NotImplementedError
NotImplementedError
To Reproduce
Sorry I currently don't have a minimal example. The issue will always occur when torchtext.data.Iterator gets a batch_size_fn passed in. If the fix is not convincing I can take the time and construct a code example. Hope this is not necessary.
Code sample
I created my own Iterator for a Skip-Thought model, that dynamically batches sentences together. This might be unnecessary complex, or even not really useful however it revealed that issue described above when using torchtext. For context here is a code excerpt that creates the issue:
import torchtext
...
global max_src_in_batch, max_tgt_in_batch
def batch_size_fn(new, count, sofar):
"Keep augmenting batch and calculate total number of tokens + padding."
global max_src_in_batch, max_tgt_in_batch
if count == 1:
max_src_in_batch = 0
max_tgt_in_batch = 0
max_src_in_batch = max(max_src_in_batch, len(new.current))
max_tgt_in_batch = max(max_tgt_in_batch, len(new.next) + 2)
src_elements = count * max_src_in_batch
tgt_elements = count * max_tgt_in_batch
return max(src_elements, tgt_elements)
class MyIterator(torchtext.data.Iterator):
def create_batches(self):
if self.train:
def pool(d, random_shuffler):
for p in data.batch(d, self.batch_size * 100):
p_batch = data.batch(
sorted(p, key=self.sort_key),
self.batch_size, self.batch_size_fn)
for b in random_shuffler(list(p_batch)):
yield b
self.batches = pool(self.data(), self.random_shuffler)
else:
self.batches = []
for b in data.batch(self.data(), self.batch_size,
self.batch_size_fn):
self.batches.append(sorted(b, key=self.sort_key))
...
class SkipThoughts(pl.LightningModule):
...
@pl.data_loader
def train_dataloader(self):
train_iter = MyIterator(self.my_train_dataloader, batch_size=self.batch_size, repeat=False,
sort_key=lambda x:
data.interleave_keys(len(x.current),
data.interleave_keys(len(x.prev),
len(x.next))),
batch_size_fn=batch_size_fn, train=True,
shuffle=True)
return train_iter
But this happens whenever a batch_size_fn is used in torchtext. Because it is unknown how many batches the dataset will produce, torchtext's __len__ method raises a NotImplementedError. See the code snippet below:
def __len__(self):
if self.batch_size_fn is not None:
raise NotImplementedError
return math.ceil(len(self.dataset) / self.batch_size)
Expected behavior
The function _has_len tests whether len is available and then returns True, otherwise False. It should also return False if a NotImplementedError is raised.
Environment
/Users/thomas/virtualenv/Python3/PyTorch/env/bin/python /Users/thomas/scm/OakDataPrep/collect_env_details.py
CUDA:
GPU:
available: False
version: None
Packages:
numpy: 1.18.2
pyTorch_debug: False
pyTorch_version: 1.5.0
pytorch-lightning: 0.8.0
tensorboard: 2.2.0
tqdm: 4.45.0
System:
OS: Darwin
architecture:
64bit
processor: i386
python: 3.7.7
version: Darwin Kernel Version 19.5.0: Tue May 26 20:41:44 PDT 2020; root:xnu-6153.121.2~2/RELEASE_X86_64
Process finished with exit code 0
Additional context
Issue occur with Pytorch-Lighning 0.8 and Torchtext 0.6
|
Clarification for Lr Scheduler ReduceLROnPlateau in PL
|
[
"question"
] |
Maybe I missed it in the PL documentation; however, the learning rate scheduler ReduceLROnPlateau appears to take an argument specifying which metric to monitor during training.
Does PL handle passing this argument? If so, is there any way for the user to specify which metric is passed to the scheduler?
The example from the PyTorch docs shows that scheduler.step takes val_loss as the argument used to decide when to reduce the learning rate for this particular scheduler.
Example
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = ReduceLROnPlateau(optimizer, 'min')
for epoch in range(10):
train(...)
val_loss = validate(...)
# Note that step should be called after validate()
scheduler.step(val_loss)
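From what I can tell, the dict-style return from configure_optimizers might be the way to tell Lightning which metric to feed the scheduler, something like the sketch below (I am not sure the 'monitor' key is respected in every version):
def configure_optimizers(self):
    optimizer = torch.optim.SGD(self.parameters(), lr=0.1, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min')
    # 'monitor' names the logged metric the plateau scheduler should watch
    return [optimizer], [{'scheduler': scheduler, 'interval': 'epoch', 'monitor': 'val_loss'}]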
|
example_input_array dtype
|
[
"bug",
"discussion"
] |
Currently the example_input_array dtype is assumed to be equal to the model dtype. This is not necessarily correct, e.g. if the input is a vector of ints.
pytorch-lightning/pytorch_lightning/core/memory.py
Line 192
in
7dc58bd
input_ = apply_to_collection(input_, torch.Tensor, lambda x: x.type(model.dtype))
|
Scipy LBFGS with Lightning
|
[
"feature",
"help wanted",
"won't fix"
] |
π Feature
Nico Pinto (p.c.) says that the vanilla Scipy LBFGS optimizer has produced really good results for him, better than PyTorch's LBFGS implementation. I am trying to use this wrapper, but similar to this issue I get the error
TypeError: step() missing 1 required positional argument: 'closure'
How can I get the closure to work with Scipy's LBFGS implementation?
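As far as I understand, the closure Lightning builds is handed to optimizer_step, so overriding that hook and forwarding it might be enough (a sketch, assuming the wrapper exposes a torch.optim-style step(closure) and that Lightning 0.8.x passes the closure as second_order_closure):
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   second_order_closure=None):
    # the wrapper, like torch.optim.LBFGS, needs the closure that re-evaluates
    # the loss and recomputes gradients
    optimizer.step(second_order_closure)
    optimizer.zero_grad()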
|
Validation turns on midway during training instead of epoch end - Progress Bar
|
[
"bug",
"help wanted"
] |
Validation is starting before epoch end. Validation starts during the first training epoch.
I've attached a snippet of the command prompt
EDIT: Epoch progress bar includes validation progress. Please close the issue
|
DDP Bug with Model Checkpoint parsing
|
[
"bug",
"help wanted"
] |
π Bug
My script works with CPU, single-GPU and dp.
I need ddp to do 16 bit training. Also even on a single machine ddp is faster.
Here is my ModelCheckpoint code:
def setup_model_checkpoint(config):
kwargs = config["model_checkpoint_kwargs"]
metrics = kwargs.pop("metrics", ["val_loss"])
if isinstance(metrics, str):
metrics = [metrics]
fp = "checkpoints/{epoch}"
for metric in metrics:
fp += "-{"
fp += str(metric)
fp += ":.2f}"
return ModelCheckpoint(filepath=fp, **kwargs)
In my case it would generate the checkpoint: checkpoints/epoch=4_val_loss=0.6_auc=0.85 for example.
Although I even tried it with just checkpoints and it's the same issue.
The issue is the following:
2020-06-20T14:50:19.704+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 891, in fit
2020-06-20T14:50:19.704+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 891, in fit
2020-06-20T14:50:19.704+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 891, in fit
2020-06-20T14:50:19.705+01:00
self.ddp_train(task, model)
2020-06-20T14:50:19.705+01:00
self.ddp_train(task, model)
2020-06-20T14:50:19.705+01:00
self.ddp_train(task, model)
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 530, in ddp_train
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 530, in ddp_train
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 530, in ddp_train
2020-06-20T14:50:19.705+01:00
self.run_pretrain_routine(model)
2020-06-20T14:50:19.705+01:00
self.run_pretrain_routine(model)
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1046, in run_pretrain_routine
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1046, in run_pretrain_routine
2020-06-20T14:50:19.705+01:00
self.run_pretrain_routine(model)
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1046, in run_pretrain_routine
2020-06-20T14:50:19.705+01:00
self.configure_checkpoint_callback()
2020-06-20T14:50:19.705+01:00
self.configure_checkpoint_callback()
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_config.py", line 60, in configure_checkpoint_callback
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_config.py", line 60, in configure_checkpoint_callback
2020-06-20T14:50:19.705+01:00
self.configure_checkpoint_callback()
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_config.py", line 60, in configure_checkpoint_callback
2020-06-20T14:50:19.705+01:00
"checkpoints"
2020-06-20T14:50:19.705+01:00
"checkpoints"
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/posixpath.py", line 94, in join
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/posixpath.py", line 94, in join
2020-06-20T14:50:19.705+01:00
"checkpoints"
2020-06-20T14:50:19.705+01:00
genericpath._check_arg_types('join', a, *p)
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/genericpath.py", line 149, in _check_arg_types
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/posixpath.py", line 94, in join
2020-06-20T14:50:19.705+01:00
genericpath._check_arg_types('join', a, *p)
2020-06-20T14:50:19.705+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/genericpath.py", line 149, in _check_arg_types
2020-06-20T14:50:19.705+01:00
(funcname, s.__class__.__name__)) from None
2020-06-20T14:50:19.705+01:00
genericpath._check_arg_types('join', a, *p)
2020-06-20T14:50:19.705+01:00
TypeError: join() argument must be str or bytes, not 'NoneType'
2020-06-20T14:50:19.706+01:00
File "/home/user/miniconda/envs/py36/lib/python3.6/genericpath.py", line 149, in _check_arg_types
2020-06-20T14:50:19.706+01:00
(funcname, s.__class__.__name__)) from None
2020-06-20T14:50:19.706+01:00
TypeError: join() argument must be str or bytes, not 'NoneType'
2020-06-20T14:50:19.706+01:00
(funcname, s.__class__.__name__)) from None
2020-06-20T14:50:19.706+01:00
TypeError: join() argument must be str or bytes, not 'NoneType'
Environment
PyTorch Version (e.g., 1.0): 1.4
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): Conda
Build command you used (if compiling from source):
Python version: 3.6.5
CUDA/cuDNN version: 10.1
GPU models and configuration: 4 x V100
Any other relevant information: Pytorch lightning 0.8.0
Additional context
|
Support HTCondor in addition to SLURM
|
[
"feature",
"help wanted",
"won't fix"
] |
π Bug
New DDP implementation is still not working for me. I am in a situation where I need to submit a single job, i.e. run a single executable, due to the batch system we have on our university GPU cluster. Therefore, rather than calling the training script multiple times with different env variables as suggested in the documentation, my solution is to create a super script (see below) which calls the main training script multiple times, each time forking a new subprocess. However, the training never seems to commence in the end.
Code sample
My super script looks as follows
import sys
import subprocess
import time
import os
import random
def main():
# fetch arguments
args = sys.argv[1:]
# Extract the number of gpus from argument list
gpus = 0
for idx in range(len(args)):
if args[idx] == "--gpus":
gpus = int(args[idx+1])
# random master port 0
master_port_0 = random.randint(10000, 19000)
# create a subprocess for each gpu
for gpu in range(gpus):
my_env = os.environ.copy()
my_env["MASTER_ADDR"] = "localhost"
my_env["MASTER_PORT"] = str(master_port_0+gpu)
my_env["WORLD_SIZE"] = str(gpus)
my_env["NODE_RANK"] = str(gpu)
my_env["LOCAL_RANK"] = "0"
subprocess.Popen(["/usr/bin/python3", "train.py"] + args, env=my_env)
print("Sleep for 10 seconds")
time.sleep(10)
print("Sleep for another 60 seconds")
time.sleep(60)
The output that I get when I run for 2 GPUs is:
Allocating cuda
Model signature
Creating trainer
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2
Allocating cuda
Model signature
Creating trainer
initializing ddp: GLOBAL_RANK: 2, MEMBER: 3/2
Expected behavior
I would expect some output: "All DDP processes registered.." but I get nothing, it seems that my subprocesses get stuck somewhere.
Environment
PyTorch Version (e.g., 1.0): 1.5.1+cu101
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip3
Python version: 3.7
CUDA/cuDNN version: 10.1
|
Undefined variable on DDP2 trainer
|
[
"bug",
"help wanted"
] |
π Bug
I am trying to run an experiment on multiple GPUs on a single machine. If I understood right, using ddp2 would be faster than dp and would allow me to use the aggregated results in training_step_end and validation_step_end, which ddp doesn't provide. For some reason I am getting this error:
Traceback (most recent call last):
File "main.py", line 50, in <module>
trainer.fit(model)
File "/home/darley.barreto/.local/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 882, in fit
self.ddp_train(task, model)
UnboundLocalError: local variable 'task' referenced before assignment
Which is this line.
To Reproduce
I can't paste my code due to size and complexity, so I tweaked this code a bit and tried to run with ddp2, which gives the same error, but it works for ddp.
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
import pytorch_lightning as ptl
from pytorch_lightning import Trainer
class CoolModel(ptl.LightningModule):
def __init__(self):
super(CoolModel, self).__init__()
# not the best model...
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def my_loss(self, y_hat, y):
return F.cross_entropy(y_hat, y)
def training_step(self, batch, batch_nb):
x, y = batch
y_hat = self.forward(x)
return {'loss': self.my_loss(y_hat, y)}
def validation_step(self, batch, batch_nb):
x, y = batch
y_hat = self.forward(x)
return {'val_loss': self.my_loss(y_hat, y)}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
return {'avg_val_loss': avg_loss}
def configure_optimizers(self):
return [torch.optim.Adam(self.parameters(), lr=0.02)]
def train_dataloader(self):
return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
def val_dataloader(self):
return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
model = CoolModel()
trainer = Trainer(max_epochs=1, gpus=2, distributed_backend="ddp2", train_percent_check=0.1)
trainer.fit(model)
Environment
* CUDA:
- GPU:
- Quadro RTX 5000
- Quadro RTX 5000
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.5.1
- pytorch-lightning: 0.8.2-dev
- tensorboard: 2.2.2
- tqdm: 4.46.1
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.7
- version: #31-Ubuntu SMP Tue Jul 17 15:39:52 UTC 2018
|
How to make inference right
|
[
"question"
] |
Hello everyone. I'm new to pytorch-lightning, but already excited about this framework. It's very convenient to train my models using Lightning. Now my use case is: I have trained my model and want to run inference on my test data and save the results (for example, in CSV format). I'd like to do the inference the pytorch-lightning way. What is the best practice for this?
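What I have so far is a plain PyTorch loop, roughly like the sketch below (MyModule, the checkpoint path, the test_dataloader and the assumption of scalar class predictions are all placeholders from my own project); I'm wondering whether trainer.test or something else is the intended way instead:
import csv
import torch

model = MyModule.load_from_checkpoint('path/to/best.ckpt')
model.eval()

predictions = []
with torch.no_grad():
    for batch in test_dataloader:  # a plain DataLoader over my test set
        x, _ = batch
        predictions.extend(model(x).argmax(dim=1).cpu().tolist())

with open('predictions.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['prediction'])
    writer.writerows([[p] for p in predictions])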
Before asking:
search the issues.
search the docs.
What's your environment?
OS: [Linux]
Packaging [pip]
Version [0.8.1]
|
how to train a network that doesn't require any training data
|
[
"question"
] |
The Wake-Sleep algorithm doesn't require any data during the sleep phase (effectively it generates its own data). pytorch-lightning, however, appears to require a train_dataloader() method.
The only way I have found to make pytorch-lightning run at all (for this admittedly unusual case) is to specify some dummy dataset in train_dataloader, and then to ignore the data that gets passed to training_step. But then I don't like that cycles are spent iterating through irrelevant data. Is there a more elegant way?
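Concretely, the dummy-data workaround I have in mind looks something like this (a sketch; updates_per_epoch is a placeholder for how many sleep-phase updates I want per "epoch"):
import torch
from torch.utils.data import DataLoader, Dataset

class SleepPhaseDataset(Dataset):
    """Yields meaningless placeholders; training_step ignores the batch entirely."""
    def __init__(self, updates_per_epoch: int):
        self.updates_per_epoch = updates_per_epoch

    def __len__(self):
        return self.updates_per_epoch

    def __getitem__(self, idx):
        return torch.zeros(1)

# inside the LightningModule
def train_dataloader(self):
    return DataLoader(SleepPhaseDataset(updates_per_epoch=1000), batch_size=1)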
I considered defining my own custom DataLoader that returns the simulated data that the sleep phase uses, but this started seeming like even more of a hack than the previous solution. After all, my "dataloader" doesn't load any data; it effectively generates new data every "epoch". It's seems unnatural to split the sleep phase updates in this way.
Is there a more straightforward way in lightning to train a network that doesn't require any data? Thanks!
|
overfit_batches doesn't work
|
[
"bug",
"help wanted",
"priority: 0"
] |
When I try to use overfit_batches:
https://pytorch-lightning.readthedocs.io/en/latest/debugging.html#make-model-overfit-on-subset-of-data
trainer = Trainer(gpus=num_gpus, max_epochs=config.epochs, overfit_batches=0.01, logger=logger)
my code fails with:
trainer.fit(module)
File "/home/andriy/miniconda3/envs/patchy_discs_model/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in fit
self.single_gpu_train(model)
File "/home/andriy/miniconda3/envs/patchy_discs_model/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 176, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/andriy/miniconda3/envs/patchy_discs_model/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1065, in run_pretrain_routine
self.reset_val_dataloader(ref_model)
File "/home/andriy/miniconda3/envs/patchy_discs_model/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 331, in reset_val_dataloader
self._reset_eval_dataloader(model, 'val')
File "/home/andriy/miniconda3/envs/patchy_discs_model/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py", line 314, in _reset_eval_dataloader
f'you requested to check {limit_eval_batches} of the {mode} dataloader but'
pytorch_lightning.utilities.exceptions.MisconfigurationException: you requested to check 0.01 of the val dataloader but 0.01*0 = 0. Please increase the limit_val_batches. Try at least limit_val_batches=0.09090909090909091
P.S.: I also tried setting limit_val_batches=0.09090909090909091. Same error.
|
GPU out of memory error after few batches
|
[
"question",
"won't fix"
] |
I am trying to train a complex model involving multiple convolutions. Since my own implementation was very slow (taking ~2 hours per epoch, which increased further after a few epochs), I tried changing my code to use a LightningModule. With Lightning, I'm getting a CUDA OOM after about 1/3rd of the total number of batches has been processed. Since the library handles .to and device calls internally, I'm not sure what the reason can be.
Code sample
class Network(pl.LightningModule):
def __init__(self, in_ch=3, out_ch=1):
super(Network, self).__init__()
n1 = 64
filters = [n1, n1 * 2, n1 * 4, n1 * 8, n1 * 16]
self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
self.Up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
self.conv0_0 = conv_block_nested(in_ch, filters[0], filters[0])
self.conv1_0 = conv_block_nested(filters[0], filters[1], filters[1])
self.conv2_0 = conv_block_nested(filters[1], filters[2], filters[2])
self.conv3_0 = conv_block_nested(filters[2], filters[3], filters[3])
self.conv4_0 = conv_block_nested(filters[3], filters[4], filters[4])
self.conv0_1 = conv_block_nested(filters[0] + filters[1], filters[0], filters[0])
self.conv1_1 = conv_block_nested(filters[1] + filters[2], filters[1], filters[1])
self.conv2_1 = conv_block_nested(filters[2] + filters[3], filters[2], filters[2])
self.conv3_1 = conv_block_nested(filters[3] + filters[4], filters[3], filters[3])
self.conv0_2 = conv_block_nested(filters[0]*2 + filters[1], filters[0], filters[0])
self.conv1_2 = conv_block_nested(filters[1]*2 + filters[2], filters[1], filters[1])
self.conv2_2 = conv_block_nested(filters[2]*2 + filters[3], filters[2], filters[2])
self.conv0_3 = conv_block_nested(filters[0]*3 + filters[1], filters[0], filters[0])
self.conv1_3 = conv_block_nested(filters[1]*3 + filters[2], filters[1], filters[1])
self.conv0_4 = conv_block_nested(filters[0]*4 + filters[1], filters[0], filters[0])
self.final = nn.Conv2d(filters[0], out_ch, kernel_size=1)
def forward(self, x):
#pdb.set_trace()
x0_0 = self.conv0_0(x)
x1_0 = self.conv1_0(self.pool(x0_0))
x0_1 = self.conv0_1(torch.cat([x0_0, self.Up(x1_0)], 1))
x2_0 = self.conv2_0(self.pool(x1_0))
x1_1 = self.conv1_1(torch.cat([x1_0, self.Up(x2_0)], 1))
x0_2 = self.conv0_2(torch.cat([x0_0, x0_1, self.Up(x1_1)], 1))
x3_0 = self.conv3_0(self.pool(x2_0))
x2_1 = self.conv2_1(torch.cat([x2_0, self.Up(x3_0)], 1))
x1_2 = self.conv1_2(torch.cat([x1_0, x1_1, self.Up(x2_1)], 1))
x0_3 = self.conv0_3(torch.cat([x0_0, x0_1, x0_2, self.Up(x1_2)], 1))
##adding for padding issues
x4_0 = self.conv4_0(self.pool(x3_0))
x4_0_up = self.Up(x4_0)
diffY = x3_0.size()[2] - x4_0_up.size()[2]
diffX = x3_0.size()[3] - x4_0_up.size()[3]
x4_0_up = F.pad(x4_0_up, (diffX // 2, diffX - diffX//2,
diffY // 2, diffY - diffY//2))
x3_1 = self.conv3_1(torch.cat([x3_0, x4_0_up], 1))
x2_2 = self.conv2_2(torch.cat([x2_0, x2_1, self.Up(x3_1)], 1))
x1_3 = self.conv1_3(torch.cat([x1_0, x1_1, x1_2, self.Up(x2_2)], 1))
x0_4 = self.conv0_4(torch.cat([x0_0, x0_1, x0_2, x0_3, self.Up(x1_3)], 1))
output = self.final(x0_4)
output= F.sigmoid(output).squeeze(1)
return output
val_paths = [line.rstrip('\n') for line in open(config.val_data_path)]
train_loader = get_loader(config.train_data_path, 'Train')
val_loader = get_loader(val_paths, 'Val')
#model = torch.nn.DataParallel(Network())
model = Network()
checkpoint_callback = pl.callbacks.ModelCheckpoint( monitor="val_loss",mode="min",save_last=True, save_top_k=-1,verbose=False,)
trainer = pl.Trainer(gpus=[6,7], profiler=True, distributed_backend='ddp', num_sanity_val_steps=0,log_save_interval=1, checkpoint_callback=checkpoint_callback)
trainer.fit(model, train_dataloader=train_loader, val_dataloaders= val_loader)
Expected behavior
I'm using the same hardware as before, so I'm unable to understand the issue.
Environment
PyTorch Version (e.g., 1.0):
OS (e.g., Linux): linux
How you installed PyTorch (conda, pip, source):conda
Build command you used (if compiling from source):
Python version: 3
CUDA/cuDNN version: 10.1
GPU models and configuration: TITAN RTX GPUs
Any other relevant information:
|
A simple logger for notebooks or repl
|
[
"feature",
"help wanted"
] |
π Feature
A basic logger that can display results in a table in the repl or in a notebook as the network trains and saves the results to a csv file.
Motivation
A lightweight logger is useful for environments where you might not have port access or for interactive experimentation.
Pitch
I created a logger at this gist that does this: https://gist.github.com/ttumiel/32e6a73d206f4df55aa73d6d4ecdf0c4
Alternatives
In #1803, a CSV logger is proposed which would perform similarly except it would not display interactively. If the csv logger is implemented, a subclass could display the csv in the repl.
Furthermore, lightning doesn't depend on pandas which my implementation above uses. I could remove pandas, though, and build the csv manually.
|
Breaking compatibility with custom datatypes implementing `.to`
|
[
"bug"
] |
π Feature
Bring back compatibility for custom datatypes in collections implementing .to for transferring data.
Motivation
I am using Pytorch Lightning together with Pytorch Geometric. Pytorch Geometric implements several custom datatypes and dataloaders which is really useful for geometric deep learning. Everything worked well with pytorch lightning 0.7.6, as the custom datatypes implement a .to method for transferring the data to different devices.
However, with the recent 0.8.1 update, this is no longer possible and I had to scour the documentation to be able to implement a fix using transfer_batch_to_device(batch, device). This is in my opinion not very pretty, as my batch looks like this
{"data": pytorch geometric batch object, "id": tensor, ...}
i.e. it is just a dictionary of types that all implement the .to method.
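For reference, my workaround currently looks roughly like this (a sketch; it assumes every value in the batch dict implements .to):
def transfer_batch_to_device(self, batch, device):
    # every value (PyTorch Geometric batch object, plain tensors, ...) implements .to
    return {key: value.to(device) for key, value in batch.items()}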
Pitch
Make it possible for classes implementing the .to method to be transferred automatically
If part of the batch could not be transferred automatically output a warning letting the user know, that a custom transfer function for the batch might be required, or to implement the .to method for custom datatypes in the batch
Add a note to the introduction guide about custom datatypes and handling for custom datatypes
Alternatives
If this change was intentional and the behavior of trying to call the .to method is not desired, I think there should definitely be some more documentation about this, in a more obvious place.
Additional context
|
Logging on slurm stopped working
|
[
"bug",
"help wanted"
] |
π Bug
Logging and checkpoint saving stopped working for me when I run experiments via slurm system.
I am using log keys in return functions: training_epoch_end/validation_epoch_end.
Version 0.7.6 works.
To Reproduce
Steps to reproduce the behaviour:
Define Tensorboard logger
Run training using slurm system sbatch ...
No logs.
Code sample
Expected behaviour
Environment
PyTorch 1.4.0:
PyTorch-lightning 0.8.1,
Linux,
Python 3.7.6,
CUDA/cuDNN 10.1, 7.6.5,
|
Using s3 backend for Mlflow logger fails
|
[
"feature",
"help wanted",
"logger"
] |
π Bug
MLflow logger support for s3 URIs. S3 storage is already supported by MLflow 1.9.0, https://www.mlflow.org/docs/latest/tracking.html#amazon-s3, but passing an s3 URI to the logger's tracking_uri parameter fails.
To Reproduce
Steps to reproduce the behavior:
init a lighting_module and a trainer.
init a mlflow_logger with an s3 URI set for 'tracking_uri'.
run the trainer
mlflow.tracking.registry.UnsupportedModelRegistryStoreURIException: Model registry functionality is unavailable; got unsupported URI 's3://somebucket/' for model registry data storage. Supported URI schemes are: ['', 'file', 'databricks', 'http', 'https', 'postgresql', 'mysql', 'sqlite', 'mssql']. See https://www.mlflow.org/docs/latest/tracking.html#storage for how to run an MLflow server against one of the supported backend storage locations.
Code sample
model = somedummymodel()
mlflow_logger = MLFlowLogger(
    experiment_name='some_name',
    tracking_uri="s3://somebucket/"
)
trainer = LT(logger=mlflow_logger)
trainer.fit(model, somedata, somedata_val)
Expected behavior
It should post logs into the bucket using the MLflow tracking server feature.
Environment
CUDA:
- GPU:
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- available: True
- version: 10.0.130
Packages:
- numpy: 1.17.0
- pyTorch_debug: False
- pyTorch_version: 1.3.1
- pytorch-lightning: 0.8.1
- tensorboard: 2.2.2
- tqdm: 4.46.1
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.6
- version: #53~18.04.1-Ubuntu SMP Thu Jun 4 14:58:26 UTC 2020
|
Using `overfit_batches` with multiple expected validation dataloaders can cause problems
|
[
"feature",
"help wanted",
"won't fix"
] |
As the title says, my validation_epoch_end expects outputs from 2 dataloaders, but when using overfit_batches it only receives 1, which causes my code to crash.
The simplest solution I can think of is to pass the number of validation dataloaders to the validation_epoch_end method, to handle this situation more easily.
|
Missing training_step outputs in training_epoch_end
|
[
"bug",
"help wanted"
] |
bugfix of this issue: #2320
|
AttributeError: 'LightningDataParallel' object has no attribute 'teardown'
|
[
"bug",
"help wanted"
] |
π Bug
To Reproduce
Steps to reproduce the behavior:
trainer = pytorch_lightning.Trainer(
gpus=2,
distributed_backend='dp'
)
model = BaseModel.load_from_checkpoint(...)
trainer.test(model)
Traceback (most recent call last):
File "run_kitti.py", line 351, in
trainer.test(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1198, in test
self.model.teardown('test')
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 594, in getattr
type(self).name, name))
AttributeError: 'LightningDataParallel' object has no attribute 'teardown'
Code sample
Expected behavior
Environment
CUDA:
GPU:
GeForce GTX 1080 Ti
GeForce GTX 1080 Ti
available: True
version: 10.1
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.5.1
pytorch-lightning: 0.8.1
tensorboard: 2.2.2
tqdm: 4.46.0
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.7
version: #53~18.04.1-Ubuntu SMP Thu Jun 4 14:58:26 UTC 2020
Additional context
If I'm not missing something, this AttributeError is a bug on your side.
|
Multi GPU Training: No kernel image is available for execution on the device
|
[
"bug",
"help wanted"
] |
I started using PL thinking about the ease of training my model using multiple GPUs. I'm basically using some Transformers from Huggingface.
LightningModule
import hydra
import torch
from pytorch_lightning.core.lightning import LightningModule
from torchtext import data
from transformers import AutoTokenizer
from source.loss.NPairsLoss import NPairsLoss
class JointEncoder(LightningModule):
"""Encodes the code and docstring into an same space of embeddings."""
def __init__(self,
config,
a_encoder,
b_encoder
):
super(JointEncoder, self).__init__()
self.config = config
self.a_encoder = a_encoder
self.b_encoder = b_encoder
self.tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
self.pad_index = self.tokenizer.convert_tokens_to_ids(self.tokenizer.pad_token)
self.loss_fn = NPairsLoss()
def forward(self, x1, x2):
x1 = self.a_encoder(x1)
x2 = self.b_encoder(x2)
return x1, x2
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=1e-6, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=True)
def training_step(self, batch, batch_idx):
x1, x2 = batch.sentence1, batch.sentence2
predict = self(x1, x2)
target = torch.arange(x1.size()[0])
loss = self.loss_fn(predict, target)
return {'loss': loss}
def test_step(self, batch, batch_idx):
x1, x2 = batch.sentence1, batch.sentence2
predict = self(x1, x2)
target = torch.arange(x1.size()[0])
loss = self.loss_fn(predict, target)
return {'test_loss': loss}
def validation_step(self, batch, batch_idx):
x1, x2 = batch.sentence1, batch.sentence2
predict = self(x1, x2)
# print(x1.shape)
target = torch.arange(x1.size()[0])
loss = self.loss_fn(predict, target)
return {'val_loss': loss}
def train_dataloader(self):
train_dataset = data.TabularDataset(
path=hydra.utils.to_absolute_path(self.config.dataset.train_path),
format="json",
fields=self.get_fields())
return data.BucketIterator(
dataset=train_dataset,
batch_size=self.config.train.batch_size
)
def test_dataloader(self):
test_dataset = data.TabularDataset(
path=hydra.utils.to_absolute_path(self.config.dataset.test_path),
format="json",
fields=self.get_fields()
)
return data.BucketIterator(
dataset=test_dataset,
batch_size=self.config.test.batch_size
)
def val_dataloader(self):
val_dataset = data.TabularDataset(
path=hydra.utils.to_absolute_path(self.config.dataset.val_path),
format="json",
fields=self.get_fields()
)
return data.BucketIterator(
dataset=val_dataset,
batch_size=self.config.val.batch_size
)
def get_fields(self):
text_field = data.Field(
use_vocab=False,
tokenize=self.tokenizer.encode,
batch_first=True,
fix_length=self.config.preprocessing.max_length,
pad_token=self.pad_index
)
return {'sentence1': ('sentence1', text_field),
'sentence2': ('sentence2', text_field)}
entry point:
@hydra.main(config_path="configs/config.yaml")
def dev_run(cfg):
a_encoder = Encoder(encoder=BertModel.from_pretrained('bert-base-uncased'))
b_encoder = Encoder(encoder=BertModel.from_pretrained('bert-base-uncased'))
print(cfg.pretty())
model = JointEncoder(config=cfg, a_encoder=a_encoder, b_encoder=b_encoder)
trainer = Trainer(max_epochs=3, gpus=[0, 1])
trainer.fit(model)
if __name__ == "__main__":
dev_run()
nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX TIT... Off | 00000000:02:00.0 Off | N/A |
| 31% 51C P0 84W / 250W | 0MiB / 6083MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX TIT... Off | 00000000:03:00.0 Off | N/A |
| 31% 50C P0 85W / 250W | 0MiB / 6083MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX TIT... Off | 00000000:83:00.0 Off | N/A |
| 29% 46C P0 81W / 250W | 0MiB / 6083MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 GeForce GTX TIT... Off | 00000000:84:00.0 Off | N/A |
| 0% 50C P0 68W / 250W | 0MiB / 6083MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
But I'm getting the following error:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/celso/projects/semantic_code_search/venv/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/home/celso/projects/semantic_code_search/venv/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 527, in ddp_train
model = model.configure_ddp(model, device_ids)
File "/home/celso/projects/semantic_code_search/venv/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 900, in configure_ddp
find_unused_parameters=True
File "/home/celso/projects/semantic_code_search/venv/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 285, in __init__
self.broadcast_bucket_size)
File "/home/celso/projects/semantic_code_search/venv/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 496, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
RuntimeError: CUDA error: no kernel image is available for execution on the device
I consider myself a beginner in Deep Learning, so it would be of great help if someone could give me some directions on how to overcome these errors.
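A small diagnostic sketch that may help narrow this down (an assumption, not part of the original report: this error usually means the installed PyTorch binaries were not built for the GPU's compute capability):
import torch

print(torch.__version__, torch.version.cuda)
for i in range(torch.cuda.device_count()):
    # Kepler-era cards report a low compute capability (e.g. (3, 5)),
    # which recent prebuilt wheels no longer ship kernels for.
    print(i, torch.cuda.get_device_name(i), torch.cuda.get_device_capability(i))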
|
max_steps does not work if resume_from_checkpoint is specified
|
[
"bug",
"help wanted"
] |
π Bug
max_steps does not work if resume_from_checkpoint is specified
To Reproduce
Steps to reproduce the behavior:
Specify max_steps
Specify resume_from_checkpoint
Code sample
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
class MNISTModel(pl.LightningModule):
def __init__(self):
super(MNISTModel, self).__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
train_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
mnist_model = MNISTModel()
trainer = pl.Trainer(gpus=1, max_steps=123, resume_from_checkpoint="./lightning_logs/version_0/checkpoints/epoch=2.ckpt")
trainer.fit(mnist_model, train_loader)
Expected behavior
Training stops at max_steps
Environment
CUDA:
- GPU:
- Tesla T4
- available: True
- version: 10.2
Packages:
- numpy: 1.18.5
- pyTorch_debug: False
- pyTorch_version: 1.6.0.dev20200622
- pytorch-lightning: 0.8.0
- tensorboard: 1.15.0
- tqdm: 4.46.1
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.7
- version: #28~18.04.1-Ubuntu SMP Sat Jun 6 00:0
Additional context
|
save_hyperparameters incorrect documentation
|
[
"docs"
] |
π Documentation
The documentation has examples
self.save_hyperparameters(['layer_1_dim', 'learning_rate'])
This fails because type list is not supported as a hyperparameter. However, looking at the source for save_hyperparameters, it appears that the correct usage is
self.save_hyperparameters('layer_1_dim', 'learning_rate')
As an aside, I do have a list as a hyperparameter for one model (a temporal convolutional network with stacked temporal convolutions); I was originally passing in the number of filters per layer as a list.
|
Error when importing metrics on Windows without DDP support
|
[
"feature",
"help wanted"
] |
π Bug
When loading the new metrics the following AttributeError is raised:
AttributeError: module 'torch.distributed' has no attribute 'ReduceOp'.
The problem is the use of torch.distributed.ReduceOp in the type hints and some functions of pytorch_lightning/metrics/converters.py.
A similar (identical?) issue was discussed on the NVIDIA/apex GitHub. The issue seems to be that Windows does not support certain distributed features, so the ReduceOp class is completely missing from the Windows binaries. Their approach was to guard those imports with if torch.distributed.is_available(); a sketch of that guard is shown below.
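A minimal sketch of that guard pattern (an assumption about how it could look, not the actual pytorch-lightning code):
import torch

if torch.distributed.is_available():
    from torch.distributed import ReduceOp
else:
    class ReduceOp:
        # Placeholder so that type hints referencing ReduceOp still resolve
        # on Windows builds without distributed support.
        SUM = None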
To Reproduce
import pytorch_lightning.metrics
Expected behavior
Metrics should only raise an error if I try to use DDP on a machine without the necessary support.
Environment
CUDA:
GPU:
GeForce GTX 1050 Ti with Max-Q Design
available: True
version: 10.2
Packages:
numpy: 1.18.4
pyTorch_debug: False
pyTorch_version: 1.5.1
pytorch-lightning: 0.8.1
tensorboard: 2.2.1
tqdm: 4.46.0
System:
OS: Windows
architecture:
64bit
WindowsPE
processor: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
python: 3.7.4
version: 10.0.17763
|
Model validation code is not called
|
[
"bug",
"help wanted"
] |
π Bug
My defined methods for validation_step as well as validation_epoch_end do not seem to get called.
To Reproduce
Just run the provided code sample. Python should raise the NotImplementedError; instead, the model completes training 'successfully'.
Code sample
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
class Dataset(torch.utils.data.IterableDataset):
def __init__(self):
super().__init__()
def __iter__(self):
def get_sample():
for _ in range(5):
yield torch.randn(20)
return get_sample()
def __len__(self):
return 5
class Model(pl.LightningModule):
def __init__(self):
super().__init__()
self.enc = nn.Linear(20, 10)
self.dec = nn.Linear(10, 20)
def forward(self, x):
x = self.enc(x)
x = F.relu(x)
x = self.dec(x)
return x
def training_step(self, batch, batchIdx):
x = self.forward(batch)
return {'loss': torch.mean(x)}
def validation_step(self, batch, batchIdx):
raise NotImplementedError()
x = self.forward(batch)
return {'val_loss': torch.mean(x)}
def validation_epoch_end(self, outputs):
return {'val_loss': torch.mean(torch.stack([x['val_loss'] for x in outputs]))}
def configure_optimizers(self):
return torch.optim.AdamW(self.parameters())
if __name__ == '__main__':
trainer = pl.Trainer(num_sanity_val_steps=0)
net = Model()
dataset = Dataset()
trainer.fit(net, train_dataloader=DataLoader(dataset, batch_size=8, num_workers=0), val_dataloaders=DataLoader(dataset, batch_size=8, num_workers=0))
Expected behavior
Instead, I'd expect the code sample above to fail.
Environment
Collecting environment information...
PyTorch version: 1.5.1+cu101
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Gentoo Base System release 2.7
GCC version: (Gentoo 9.3.0 p1) 9.3.0
CMake version: version 3.17.3
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce GT 730
Nvidia driver version: 440.82
cuDNN version: /opt/cuda/targets/x86_64-linux/lib/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip3] numpy==1.19.0
[pip3] pytorch-lightning==0.8.1
[pip3] torch==1.5.1+cu101
[pip3] torchvision==0.6.1+cu101
[conda] Could not collect
|
Reproducibility issue.
|
[
"question"
] |
I am a newbie with PyTorch and PyTorch Lightning. My code classifies "Sequential" MNIST images, where 4 pixels are fed into the LSTM cell at each time step, so one image is processed in 28*28/4 time steps. Since I set a seed via pytorch_lightning.utilities.seed.seed_everything, I expect the same result for every training run, but I get a different result every time, as shown below. Could you please help me revise my code so that it behaves correctly?
My environment: Linux, Python 3.7, PyTorch 1.5.1, pytorch_lightning 0.8.1
import os
import torch
import pytorch_lightning
import torch.nn as nn
from torch.nn import functional as F
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import MNIST
from torchvision import transforms
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning.core.memory import ModelSummary
from pytorch_lightning.metrics.functional import accuracy
pytorch_lightning.utilities.seed.seed_everything(2)
INPUT_SIZE = 4
TIME_STEP = int(28*28/INPUT_SIZE)
HIDDEN_SIZE = 100
NUM_LAYERS = 1
DROPOUT = 0.1 if NUM_LAYERS > 1 else 0
BATCH_SIZE = 128
NUM_WORKERS = 8
LEARNING_RATE = 1e-3
MAX_EPOCHS = 30
class Model(LightningModule):
def __init__(self, hparams):
super().__init__()
self.hparams = hparams
self.rnn = nn.LSTM(input_size=self.hparams.input_size,
hidden_size=self.hparams.hidden_size,
num_layers=self.hparams.num_layers,
dropout=self.hparams.dropout,
batch_first=True)
self.fc = nn.Linear(in_features=self.hparams.hidden_size, out_features=10)
def forward(self, x):
r_out, _ = self.rnn(x, None)
return self.fc(r_out[:, -1, :])
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
acc = accuracy(torch.argmax(y_hat, dim=1), batch[1])
return {'val_loss': loss, 'val_acc': acc}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
avg_acc = torch.stack([x['val_acc'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss, 'val_acc': avg_acc}
return {'val_loss': avg_loss, 'log': tensorboard_logs}
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
acc = accuracy(torch.argmax(y_hat, dim=1), batch[1])
return {'test_loss': loss, 'test_acc': acc}
def test_epoch_end(self, outputs):
avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
avg_acc = torch.stack([x['test_acc'] for x in outputs]).mean()
tensorboard_logs = {'test_loss': avg_loss, 'test_acc': avg_acc}
return {'test_loss': avg_loss, 'log': tensorboard_logs}
def prepare_data(self):
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)),
transforms.Lambda(lambda x: x.view(-1, self.hparams.input_size))])
mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transform)
mnist_test = MNIST(os.getcwd(), train=False, download=True, transform=transform)
self.mnist_train, self.mnist_val = random_split(mnist_train, [55000, 5000])
self.mnist_test = mnist_test
def train_dataloader(self):
return DataLoader(self.mnist_train, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)
def val_dataloader(self):
return DataLoader(self.mnist_val, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)
def test_dataloader(self):
return DataLoader(self.mnist_test, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers)
def configure_optimizers(self):
optim = torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
sched = torch.optim.lr_scheduler.StepLR(optim, step_size=10, gamma=0.1)
return [optim], [sched]
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping
from pytorch_lightning import loggers as pl_loggers
from pytorch_lightning.callbacks import LearningRateLogger
from pytorch_lightning.utilities.parsing import AttributeDict
trainer = Trainer(max_epochs=MAX_EPOCHS,
gpus=1,
num_nodes=1,
callbacks=[LearningRateLogger()],
early_stop_callback=EarlyStopping(patience=3, monitor='val_loss', mode='min', verbose=True))
model = Model(hparams = AttributeDict({'batch_size': BATCH_SIZE,
'num_workers': NUM_WORKERS,
'learning_rate': LEARNING_RATE,
'input_size': INPUT_SIZE,
'time_step': TIME_STEP,
'hidden_size': HIDDEN_SIZE,
'num_layers': NUM_LAYERS,
'dropout': DROPOUT}))
trainer.fit(model)
trainer.test()
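Not an official answer, just a hedged note: on GPU, a global seed alone is often not enough, because cuDNN selects nondeterministic kernels by default. Settings like the following (at some speed cost) are commonly added on top of seed_everything:
import torch

torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False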
|
Problem with loading checkpoint of a model with embeddings
|
[
"bug",
"help wanted"
] |
π Bug
Unable to load from checkpoint for model with embeddings
Code sample
model arch
class Model(pl.LightningModule):
def __init__(self, emb_szs):
super().__init__()
m = get_base()
self.enc = nn.Sequential(*list(m.children())[:-1], nn.Flatten())
nc = list(m.children())[-1].in_features
self.head = nn.Sequential(nn.Linear(2*nc+25,512),Mish(),
nn.BatchNorm1d(512), nn.Dropout(0.5),nn.Linear(512,2))
self.embs = nn.ModuleList([nn.Embedding(c, s) for c,s in emb_szs])
def forward(self, xb, x_cat, x_cont):
x1 = [e(x_cat[:,i]-1) for i,e in enumerate(self.embs)]
x1 = torch.cat(x1, 1)
x_img = self.enc(xb)
x = torch.cat([x1, x_cont.unsqueeze(1)], 1)
x = torch.cat([x, x_img], 1)
return self.head(x)
checkpoint_callback = ModelCheckpoint(
filepath=os.path.join(os.getcwd(), 'model_dir'),
# save_top_k=True,
verbose=True,
monitor='val_loss',
mode='min',
prefix=''
)
trainer = Trainer(max_epochs=15,
early_stop_callback = early_stopping,
gpus=1,
gradient_clip_val=1.0,
weights_save_path=os.getcwd(),
checkpoint_callback = checkpoint_callback,
num_sanity_val_steps=0
)
The training loop has no problem, but when I call trainer.test() a runtime error arises:
RuntimeError: Error(s) in loading state_dict for Model:
Unexpected key(s) in state_dict: "embs.0.weight", "embs.1.weight", "embs.2.weight", "embs.3.weight".
Expected behavior
As per the documentation, it should have used the best checkpoint for testing, but loading the checkpoint fails.
Environment
CUDA:
GPU:
Tesla P100-PCIE-16GB
available: True
version: 10.1
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.5.1
pytorch-lightning: 0.8.1
tensorboard: 2.2.2
tqdm: 4.45.0
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.6
version: #1 SMP Sat Jun 13 11:04:33 PDT 2020
|
Trainer.test() returns type error while loading model after upgrading from PL 0.7.6 to 0.8 and 0.8.2dev
|
[
"bug",
"help wanted"
] |
π Bug
I am running a Transformer model with a custom dataset. Everything worked fine with v0.7.6, except for early stopping. However, after upgrading to v0.8+, Trainer.test() returns a type error while loading the model (expected int but got dict). I have tried two different datasets with the same model and got the same results. I have downgraded back to v0.7.6 and Trainer.test() was able to run, but early stopping would not work properly.
Code sample
trainer.test()
Expected behavior
Trainer.test() should run model.test_step without error
|
How To: Specify input_shape when it's not known in advance
|
[
"question",
"won't fix"
] |
As far as I can see, all of the examples assume that the input shape is known in advance, e.g. MNIST images, which have fixed C, H and W. But I'm working with multivariate time-series data and investigating transforms which alter the number of input series.
In the example below, a transform is passed to the constructor and used to enhance the dataset loaded in prepare_data. However, these transforms may alter the shape of the data, and (unless I'm mistaken) the input and output dimensions have to be known before constructing the LightningModule.
Is there a canonical way of handling this pattern in Lightning?
class TCN(pl.LightningModule):
def __init__(self, input_shape, output_shape, transform, hparams):
super().__init__()
self.hparams = hparams
self.transform = transform
self.tcn = TemporalConvNet(input_shape, [self.hparams.filters] * self.hparams.layers, self.hparams.kernel_size, dropout=self.hparams.dropout)
self.decoder = nn.Linear(self.hparams.filters, output_shape)
def prepare_data(self):
self.train_dataset, self.val_dataset = load_dataset()
self.train_dataset, self.val_dataset = self.transform(self.train_dataset), self.transform(self.val_dataset)
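One workaround sketch I can imagine (an assumption, not a canonical Lightning pattern; the sample layout used for indexing is hypothetical): probe the transform on the dataset once, outside the module, and pass the resulting shapes into the constructor.
train_dataset, val_dataset = load_dataset()
sample_x, sample_y = transform(train_dataset)[0]  # hypothetical sample layout
model = TCN(input_shape=sample_x.shape[0],
            output_shape=sample_y.shape[0],
            transform=transform,
            hparams=hparams)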
|
Access the logging directory through LightningModule or Trainer
|
[
"question"
] |
Is there a way to access the current logging directory (e.g., lightning_logs/version_x)? I've searched the documentation and the source code but haven't found a solution yet.
I want to save some intermediate raw tensors to that directory.
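A hedged sketch of what I have in mind, assuming the default TensorBoardLogger is used (which exposes a log_dir property pointing at lightning_logs/version_x); the tensor name is a placeholder:
import os
import torch
import pytorch_lightning as pl

class MyModule(pl.LightningModule):
    def on_epoch_end(self):
        out_dir = self.logger.log_dir  # e.g. lightning_logs/version_0
        # self.intermediate_tensor is a placeholder for whatever needs saving
        torch.save(self.intermediate_tensor, os.path.join(out_dir, 'intermediate.pt'))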
Thanks,
David
|
Incorrect docs in metrics
|
[
"docs"
] |
π Documentation
Here
pytorch-lightning/pytorch_lightning/metrics/functional/classification.py
Line 137
in
a5f4578
num_classes: Optional[int] = None,
the argument is num_classes but in docs it is class_index
pytorch-lightning/pytorch_lightning/metrics/functional/classification.py
Line 148
in
a5f4578
class_index: class to calculate over
Also, it returns 5 values but only 4 are mentioned.
pytorch-lightning/pytorch_lightning/metrics/functional/classification.py
Lines 152 to 153
in
a5f4578
Return:
True Positive, False Positive, True Negative, False Negative
Also, some more typos and docs need to be fixed in the metrics module.
|
An Extra argument passed to the class, loaded from load_from_checkpoint.
|
[
"bug",
"help wanted"
] |
π Bug
Hello,
I was facing a few issues while using the trainer.test() function. On debugging, I found that the problem is in the _load_model_state class method, which is called by load_from_checkpoint.
Code For reference
@classmethod
def _load_model_state(cls, checkpoint: Dict[str, Any], *args, **kwargs):
# pass in the values we saved automatically
if cls.CHECKPOINT_HYPER_PARAMS_KEY in checkpoint:
model_args = {}
# add some back compatibility, the actual one shall be last
for hparam_key in CHECKPOINT_PAST_HPARAMS_KEYS + (cls.CHECKPOINT_HYPER_PARAMS_KEY,):
if hparam_key in checkpoint:
model_args.update(checkpoint[hparam_key])
if cls.CHECKPOINT_HYPER_PARAMS_TYPE in checkpoint:
model_args = checkpoint[cls.CHECKPOINT_HYPER_PARAMS_TYPE](model_args)
args_name = checkpoint.get(cls.CHECKPOINT_HYPER_PARAMS_NAME)
init_args_name = inspect.signature(cls).parameters.keys()
if args_name == 'kwargs':
cls_kwargs = {k: v for k, v in model_args.items() if k in init_args_name}
kwargs.update(**cls_kwargs)
elif args_name:
if args_name in init_args_name:
kwargs.update({args_name: model_args})
else:
args = (model_args, ) + args
# load the state_dict on the model automatically
model = cls(*args, **kwargs)
model.load_state_dict(checkpoint['state_dict'])
# give model a chance to load something
model.on_load_checkpoint(checkpoint)
return model
Consider the case where the model has no arguments, which corresponds to LightModel.load_from_checkpoint('path'). Here, the else clause of the if-elif is executed, and the args variable is updated from an empty tuple to a tuple containing an empty dictionary via args = (model_args, ) + args (since model_args == {}). Therefore, while unpacking the args and kwargs (model = cls(*args, **kwargs)), an extra argument is passed, which raises a TypeError: __init__() takes 1 positional argument but 2 were given. #2364
In other cases, if the model has an argument and the user has forgotten to pass it to load_from_checkpoint, an empty dictionary is passed instead, which raises other errors depending on the code. For example, in issue #2359 an empty dict is passed while loading the model, which raises RuntimeError: Error(s) in loading state_dict for Model:.
I do not fully understand everything that is happening in this function. It would be great if someone could suggest changes in the comments so that I can start working on them after updating my forked repo.
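To make the failure mode concrete, a toy illustration (not the library code; the class name is made up):
args, kwargs = (), {}
model_args = {}              # nothing was stored under the hparams key
args = (model_args,) + args  # args == ({},)

class NoArgModel:
    def __init__(self):
        pass

NoArgModel(*args, **kwargs)  # TypeError: __init__() takes 1 positional argument but 2 were given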
Steps to reproduce
!pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
class MNISTModel(pl.LightningModule):
def __init__(self):
super(MNISTModel, self).__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def test_step(self, batch, batch_nb):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
train_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
mnist_model = MNISTModel()
trainer = pl.Trainer(gpus=1,max_epochs=3)
trainer.fit(mnist_model, train_loader)
test_loader = DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), batch_size=32)
trainer.test(test_dataloaders=test_loader)
Which returns:
TypeError Traceback (most recent call last)
<ipython-input-5-50449ee4f6cc> in <module>()
1 test_loader = DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), batch_size=32)
----> 2 trainer.test(test_dataloaders=test_loader)
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py in test(self, model, test_dataloaders, ckpt_path)
1168 if ckpt_path == 'best':
1169 ckpt_path = self.checkpoint_callback.best_model_path
-> 1170 model = self.get_model().load_from_checkpoint(ckpt_path)
1171
1172 self.testing = True
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/saving.py in load_from_checkpoint(cls, checkpoint_path, map_location, hparams_file, tags_csv, *args, **kwargs)
167 checkpoint[cls.CHECKPOINT_HYPER_PARAMS_KEY].update(kwargs)
168
--> 169 model = cls._load_model_state(checkpoint, *args, **kwargs)
170 return model
171
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/saving.py in _load_model_state(cls, checkpoint, *cls_args, **cls_kwargs)
201
202 # load the state_dict on the model automatically
--> 203 model = cls(*cls_args, **cls_kwargs)
204 model.load_state_dict(checkpoint['state_dict'])
205
TypeError: __init__() takes 1 positional argument but 2 were given
Expected behavior
Start testing
|
How to make LR scheduler verbose?
|
[
"question",
"won't fix"
] |
β Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
Hi, I am currently using the ReduceLROnPlateau scheduler and returning it as a [dictionary] in the configure_optimizers method. All other options seem to work, but I cannot seem to make the scheduler verbose.
Am I missing something?
Code
def configure_optimizers(self):
optim = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
sched = {
"scheduler": torch.optim.lr_scheduler.ReduceLROnPlateau(optim),
"verbose": True,
"monitor": 'val_loss',
"mode": "min",
"patience": self.hparams.scheduler_patience,
"interval": 'epoch',
"frequency": 1,
"eps": 1E-6
}
return [optim], [sched]
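A hedged guess at what may be going on (an assumption, not an official answer): verbose, mode, patience and eps are constructor arguments of ReduceLROnPlateau itself, while the Lightning scheduler dict likely only consumes keys such as "scheduler", "monitor", "interval" and "frequency". Passing them to the scheduler directly would look roughly like:
def configure_optimizers(self):
    optim = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optim, mode="min", patience=self.hparams.scheduler_patience,
        eps=1e-6, verbose=True)
    sched = {"scheduler": scheduler, "monitor": "val_loss",
             "interval": "epoch", "frequency": 1}
    return [optim], [sched]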
What have you tried?
What's your environment?
OS: Win10
Packaging pip
Version pl 0.8.1 torch 1.6
|
cannot unpack non-iterable NoneType object when predicting with test function/dataloader
|
[
"bug",
"help wanted"
] |
Hi,
When trying to evaluate the autoencoder with the test dataset, we got the error:
cannot unpack non-iterable NoneType object
def training_step(self, train_batch, batch_idx):
x, _ = train_batch
decoded, encoded = self.forward(x)
loss = self.mse(decoded,x)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def test_step(self,test_batch, batch_idx):
x, _ = test_batch
decoded, encoded = self.forward(x)
return decoded, encoded
def test_dataloader(self):
return DataLoader(dataset=Train_x_t_e, batch_size=self.hparams.batch_size)
# save model
me = Autoencoder.load_from_checkpoint(checkpoint_path="testmodel.ckpt")
# predict autoencoder result
trainer = pl.Trainer()
y_hat, z = trainer.test(me)
Got this error:
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:23: RuntimeWarning: You passed in a `test_dataloader` and have defined a `test_step()`, you may also want to define `test_epoch_end()` for accumulating stats.
warnings.warn(*args, **kwargs)
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:23: UserWarning: The dataloader, test dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
HBox(children=(FloatProgress(value=1.0, bar_style='info', description='Testing', layout=Layout(flex='2'), max=β¦
--------------------------------------------------------------------------------
TEST RESULTS
{}
--------------------------------------------------------------------------------
TypeErrorTraceback (most recent call last)
<ipython-input-111-f27519f8b5a7> in <module>
----> 1 y_hat, z = trainer.test(me)
TypeError: cannot unpack non-iterable NoneType object
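A hedged workaround sketch (an assumption: trainer.test() does not return the per-batch outputs, so they have to be accumulated explicitly, as the warning in the log above also hints):
def test_step(self, test_batch, batch_idx):
    x, _ = test_batch
    decoded, encoded = self.forward(x)
    return {'decoded': decoded, 'encoded': encoded}

def test_epoch_end(self, outputs):
    # Stash the concatenated results on the module so they can be read
    # back after trainer.test(me) returns.
    self.decoded = torch.cat([o['decoded'] for o in outputs])
    self.encoded = torch.cat([o['encoded'] for o in outputs])
    return {}
After trainer.test(me), the results could then be read from me.decoded and me.encoded instead of unpacking the return value.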
|
Will load_from_checkpoint load Huggingface models as well?
|
[
"question"
] |
What is your question?
Just wanted to know: will using load_from_checkpoint on a LightningModule load the state_dict for the HuggingFace models as well?
E.g., for the given example in the docs, will the state_dict be loaded for the BertModel.from_pretrained submodule as well?
Ideally, load_from_checkpoint should load the state_dict for Bert as well, just like BertModel.from_pretrained(same_checkpoint) would do.
Code
class BertMNLIFinetuner(LightningModule):
def __init__(self):
super().__init__()
self.bert = BertModel.from_pretrained('bert-base-cased', output_attentions=True)
self.W = nn.Linear(self.bert.config.hidden_size, 3)
self.num_classes = 3
def forward(self, input_ids, attention_mask, token_type_ids):
h, _, attn = self.bert(input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids)
h_cls = h[:, 0]
logits = self.W(h_cls)
return logits, attn
|
Logging hyperparams and metrics
|
[
"feature",
"question",
"discussion"
] |
My question is: how do I log both hyperparams and metrics so that TensorBoard works "properly"? I've copied pytorch_lightning.loggers.TensorBoardLogger into a catboost/hyperopt project, and using the code below after each iteration I get the result I'm after: on the TensorBoard HPARAMS page both the hyperparameters and the metrics appear, and I can view the Parallel Coords View etc.
self.logger.log_hyperparams(
params=dict(n_estimators=n_estimators, max_leaves=max_leaves, l2_leaf_reg=l2_leaf_reg, min_data_in_leaf=min_data_in_leaf),
metrics=dict(val_loss=val_loss, train_loss=train_loss))
However, when I follow the Lightning tutorials, only the hyperparams are logged; there are no metrics, so the charts don't display.
Is there something additional I should be doing to ensure that log_hyperparams passes the metrics on_train_end (or wherever is appropriate, since on_train_end also doesn't appear to pass outputs)?
Edit:
I can see in run_pretrain_routine that log_hyperparams is invoked with no metrics
# log hyper-parameters
if self.logger is not None:
# save exp to get started
self.logger.log_hyperparams(ref_model.hparams)
self.logger.save()
This seems to be the wrong place to call log_hyperparams; for TensorBoard at least, it should really be called post-training with the validation loss.
Edit 2:
And my example, which does work, only sort of works. Looking closer, it seems to treat all the hyperparameters as categorical (i.e., they're not ordered). Examples of use with Keras callbacks have the hyperparameters and metrics defined first (i.e. type and range).
|
Cluster job that spawns its own processes for use with DDP
|
[
"feature",
"help wanted",
"won't fix"
] |
π Feature
Not sure if the title is appropriate. This feature would support the use case where
The manager sets MASTER_ADDR and MASTER_PORT
User knows how to set LOCAL_RANK, GLOBAL_RANK, and WORLD_SIZE
Each node has N_g GPUs
N_j jobs are spawned (in my case, MPI on SLURM) for each gpu, i.e., world_size= N_j * N_g
Each job can see both GPUs on each node, i.e., local_rank = global_rank % N_g and torch.cuda.set_device(local_rank)
Motivation
I'm able to write a class that overrides pl.Trainer to support this, but thought that 1) this might be a use case for others and 2) I'd prefer to override as little of your code as possible. Here is the sbatch file header:
#!/bin/bash
#SBATCH --job-name job
#SBATCH -o jobs/%j.log
#SBATCH -N 4
#SBATCH --tasks-per-node=2
#SBATCH --partition=gaia
#SBATCH --gres=gpu:volta:2
export MASTER_ADDR=$(hostname -s)
export MASTER_PORT=$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
mpirun <options> <command>
Each job sees 2 GPUs (and the device ids are not integers, which is another issue). To set up my run I set the following environment variables:
global_rank, world_size, hostname = get_dist_env()
os.environ['WORLD_SIZE'] = f'{world_size}'
os.environ['NODE_RANK'] = f'{global_rank}'
os.environ['LOCAL_RANK'] = f'{global_rank % 2}'
where get_dist_env knows how to get world_size and global_rank from the environment. For mpirun this is
world_size = int(os.getenv('OMPI_COMM_WORLD_SIZE'))
global_rank = int(os.getenv('OMPI_COMM_WORLD_RANK'))
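For completeness, a sketch of what such a helper could look like for OpenMPI (an assumption; only the two OMPI_* variables above come from my actual setup):
import os
import socket

def get_dist_env():
    # WORLD_SIZE and GLOBAL_RANK as exported by the OpenMPI launcher
    world_size = int(os.getenv('OMPI_COMM_WORLD_SIZE', '1'))
    global_rank = int(os.getenv('OMPI_COMM_WORLD_RANK', '0'))
    return global_rank, world_size, socket.gethostname()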
With those variables (which I think are standard in your code) I should be able to run in DDP mode. Yet the issue is that, because each job sees both GPUs, I cannot define a Trainer setting that allows this to execute correctly: either I set num_gpus=1 and the local_rank is not calculated correctly, or I set num_gpus=2 and your code tries to spawn an additional job.
Pitch
I'm not sure what the best API approach is, but if the user sets MASTER_ADDR, MASTER_PORT, WORLD_SIZE, GLOBAL_RANK, and LOCAL_RANK then that should be everything you need to execute a distributed job.
Additional context
I'm clearly not expert in distributed processing so I'm not sure if I'm asking for something that only works on my cluster with my settings and cannot be generalized. In this case, I am able to override Trainer to support my use case without you needing to change anything.
Thanks for a great package!
|
save the model/training source code to model checkpoint or logs directory
|
[
"feature",
"help wanted",
"won't fix"
] |
π Feature
Save the model/training source code to model checkpoint or logs directory
Motivation
Currently, the hparams are saved in a YAML file. Sometimes we change not only the hparams but also the network architecture and the pre-processing flow, so if we save the related source code with the model, we will have all the information needed to restore it, because the source code is stored alongside the model hparams and everything else.
Pitch
Add Trainer argparse params, or Checkpoint callback params: one for whether to save source code, and one for which files to save.
|
Cannot Transfer Batch Data to Device
|
[
"bug",
"help wanted",
"priority: 0"
] |
π Bug
After upgrading from 0.8.1 to 0.8.2, an error occurs during training when the data is being transferred to the device. There is no problem with 0.8.1, only with 0.8.2.
To Reproduce
It might be a little difficult to share the code here, but I suspect this might be due to a mistake in defining the "dtype" variable somewhere in the latest version. See the error message:
Validation sanity check: 0it [00:00, ?it/s]
Traceback (most recent call last):
File "train.py", line 95, in <module>
main()
File "train.py", line 90, in main
trainer.fit(model)
File "/home/username/.local/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 928, in fit
self.single_gpu_train(model)
File "/home/username/.local/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 183, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/username/.local/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1086, in run_pretrain_routine
False)
File "/home/username/.local/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 291, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/home/username/.local/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 450, in evaluation_forward
batch = self.transfer_batch_to_gpu(batch, root_gpu)
File "/home/username/.local/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 156, in transfer_batch_to_gpu
return self.__transfer_batch_to_device(batch, device)
File "/home/username/.local/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 161, in __transfer_batch_to_device
return model.transfer_batch_to_device(batch, device)
File "/home/username/.local/lib/python3.7/site-packages/pytorch_lightning/core/hooks.py", line 243, in transfer_batch_to_device
return move_data_to_device(batch, device)
File "/home/username/.local/lib/python3.7/site-packages/pytorch_lightning/utilities/apply_func.py", line 109, in move_data_to_device
return apply_to_collection(batch, dtype=(TransferableDataType, Batch), function=batch_to)
File "/home/username/.local/lib/python3.7/site-packages/pytorch_lightning/utilities/apply_func.py", line 34, in apply_to_collection
if isinstance(data, dtype):
TypeError: isinstance() arg 2 must be a type or tuple of types
Code sample
trainer.fit(model)
Environment
pytorch-lightning==0.8.2
|
TPU MNIST demo hangs in last batch
|
[
"bug",
"help wanted"
] |
I am afraid this is not working for me.
Remark: There have been various posts about this or very similar issues, but as far as I can see they have all been closed.
Example: #1590
In fact I posted this exact comment in the following issue, when it was already closed.
#1403
I am therefore creating this issue, because I think the closed issue is probably not receiving any attention, which is understandable.
now:
I have tried all the versions given in the notebook (the links are given below in NB1).
Additionally, I have also tried version 20200516. That version is used in the official Colab TPU MNIST example notebook which does not use pytorch-lightning (a reference is below in NB2).
The summary of the results is:
"1.5" : wont run at all
"20200325" hangs in the final epoch (with 10 epochs in the 10th, with 3 epochs in the 3rd)
"nightly" crashes with : Exception: process 0 terminated with signal SIGABRT
"20200516" hangs after one epoch
I have tried this several times over the last few days. With the exception of the nightly all these results have always been the same.
NB1:
Locally I am on a Mac, not sure whether this makes a difference.
My terminal gives this
uname -a
Darwin osx-lhind6957 18.7.0 Darwin Kernel Version 18.7.0: Mon Apr 27 20:09:39 PDT 2020; root:xnu-4903.278.35~1/RELEASE_X86_64 x86_64
NB2:
The links for that official colab TPU MNIST example notebook which does not use pytorch lightning are here:
https://cloud.google.com/tpu/docs/colabs?hl=de
https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/mnist-training.ipynb?authuser=1#scrollTo=sPJVqAKyml5W
(The official notebook which does not use pytorch lightning has no problem and runs through with 20200516)
|
testing gets stuck when num_workers is set to value >0 in tests/base/model_utilities.py
|
[
"bug",
"help wanted"
] |
π Bug
While executing bash .run_local_tests.sh the test hangs frequently (but not always) if parallel data loading is enabled in tests/base/model_utilities.py by setting num_workers to a value larger than 0. If a manual keyboard interrupt (CTRL-c) is done, the test continues with a "PASSED" message. The issue does not always occur, which could be an indicator that the test framework is waiting for the process to finish but does not realize that the process has already completed.
The same issue occurs during CI testing, with the result that the time limit is exceeded.
The issue does not happen for num_workers=0 in tests/base/model_utilities.py, which is the reason for the change in PR #2307 to set this value to zero. This is an unsatisfying solution, since it excludes the case of using multiple workers in dataloaders from testing, and risks that someone might inadvertently set the value to something different.
It is unclear if this is caused by an issue with PyTorch Lightning, PyTorch, or the testing framework.
This issue serves to inform others what to expect when num_workers is set to a value >0, and that this needs to be investigated. It might need someone with a good understanding of the pytest framework and of concurrency issues in that framework.
To Reproduce
Steps to reproduce the behavior:
set num_workers=3 in tests/base/model_utilities.py
Run bash .run_local_tests.sh or pytest -v -x tests/trainer/test_dataloaders.py (might need to run several times)
The pytest -v -x tests/trainer/test_dataloaders.py should finish successfully in 10-15 seconds but often does not
If the test gets stuck, pressing CTRL-c continues the testing and ends it successfully
Code sample
Expected behavior
Environment
This issue also happens in the CI environments.
My environment:
/Users/thomas.schaaf/virtualenv/Python37/pytorch-lightning/lib/python3.7/site-packages/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.'
CUDA:
GPU:
available: False
version: None
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.5.1
pytorch-lightning: 0.8.2-dev
tensorboard: 2.2.2
tqdm: 4.46.1
System:
OS: Darwin
architecture:
64bit
processor: i386
python: 3.7.7
version: Darwin Kernel Version 19.5.0: Tue May 26 20:41:44 PDT 2020; root:xnu-6153.121.2~2/RELEASE_X86_64
Additional context
This issue came up during PR #2307 and was discussed with @Borda and @awaelchli.
My pytest version:
pytest -V
This is pytest version 5.4.3, imported from /Users/thomas.schaaf/virtualenv/Python37/pytorch-lightning/lib/python3.7/site-packages/pytest/__init__.py
setuptools registered plugins:
pytest-flake8-1.0.6 at /Users/thomas.schaaf/virtualenv/Python37/pytorch-lightning/lib/python3.7/site-packages/pytest_flake8.py
pytest-cov-2.10.0 at /Users/thomas.schaaf/virtualenv/Python37/pytorch-lightning/lib/python3.7/site-packages/pytest_cov/plugin.py
|
Imagenet example use num_workers 0
|
[
"question"
] |
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/domain_templates/imagenet.py#L160
Is there a specific reason for that?
|
validation_epoch_end only gets the outputs from one process
|
[
"bug"
] |
Hi,
I need the whole validation set to compute the validation result, but currently validation_epoch_end only gets the outputs from the current process.
Can I gather the outputs from the different GPUs and then run validation_epoch_end? Also, I don't necessarily need it to run on all processes; I only need it to run once.
How can I achieve that?
|
Questions about DDP and slurm
|
[
"won't fix"
] |
Hi,
When running on Slurm with, say, 8 GPUs on a single node and ddp as the backend, each process occupies all GPUs. It only uses a small portion (less than 1 GB) of each non-root GPU, but multiplying that by 7 is still a large chunk of memory unavailable for training. I followed the multi-GPU and Slurm documentation. Maybe I am missing something, but I could not free that memory, and unfortunately I cannot find a similar issue. Could you send me some pointers? Thanks!
version
pytorch 1.5.1 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
pytorch-lightning 0.8.1 pypi_0 pypi
nvidia-smi (similar output for all GPUs)
| 0 163 C python 24409MiB |
| 0 164 C python 933MiB |
| 0 165 C python 933MiB |
| 0 166 C python 933MiB |
| 0 167 C python 933MiB |
| 0 168 C python 933MiB |
| 0 169 C python 933MiB |
| 0 170 C python 933MiB |
| 1 163 C python 933MiB |
| 1 164 C python 24411MiB |
| 1 165 C python 933MiB |
| 1 166 C python 933MiB |
| 1 167 C python 933MiB |
| 1 168 C python 933MiB |
| 1 169 C python 933MiB |
| 1 170 C python 933MiB |
...
log
1: GPU available: True, used: True
1: TPU available: False, using: 0 TPU cores
1: Multi-processing is handled by Slurm.
1: CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
1: Using 16bit precision.
1: initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/8
6: GPU available: True, used: True
6: TPU available: False, using: 0 TPU cores
6: Multi-processing is handled by Slurm.
6: CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
6: Using 16bit precision.
6: initializing ddp: GLOBAL_RANK: 6, MEMBER: 7/8
4: GPU available: True, used: True
4: TPU available: False, using: 0 TPU cores
4: Multi-processing is handled by Slurm.
4: CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
4: Using 16bit precision.
4: initializing ddp: GLOBAL_RANK: 4, MEMBER: 5/8
7: GPU available: True, used: True
7: TPU available: False, using: 0 TPU cores
7: Multi-processing is handled by Slurm.
7: CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
7: Using 16bit precision.
7: initializing ddp: GLOBAL_RANK: 7, MEMBER: 8/8
5: GPU available: True, used: True
5: TPU available: False, using: 0 TPU cores
5: Multi-processing is handled by Slurm.
5: CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
5: Using 16bit precision.
5: initializing ddp: GLOBAL_RANK: 5, MEMBER: 6/8
3: GPU available: True, used: True
3: TPU available: False, using: 0 TPU cores
3: Multi-processing is handled by Slurm.
3: CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
3: Using 16bit precision.
3: initializing ddp: GLOBAL_RANK: 3, MEMBER: 4/8
2: GPU available: True, used: True
2: TPU available: False, using: 0 TPU cores
2: Multi-processing is handled by Slurm.
2: CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
2: Using 16bit precision.
2: initializing ddp: GLOBAL_RANK: 2, MEMBER: 3/8
0: GPU available: True, used: True
0: TPU available: False, using: 0 TPU cores
0: Multi-processing is handled by Slurm.
0: CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
0: Using 16bit precision.
0: initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/8
0: ----------------------------------------------------------------------------------------------------
0: distributed_backend=ddp
0: All DDP processes registered. Starting ddp with 8 processes
0: ----------------------------------------------------------------------------------------------------
1: Set SLURM handle signals.
2: Set SLURM handle signals.
5: Set SLURM handle signals.
6: Set SLURM handle signals.
7: Set SLURM handle signals.
0: Set SLURM handle signals.
3: Set SLURM handle signals.
4: Set SLURM handle signals.
|
element 0 of tensors does not have a grad_fn
|
[
"question"
] |
β Questions and Help
What is your question?
Hi,
I got RuntimeError
element 0 of tensors does not have a grad_fn in TEST phase
Code
def test_step(self, batch, batch_nb):
imgs, labels = batch
self.last_imgs = imgs
# get gradients
self.classifier.eval()
copied_input = imgs.clone()
copied_input.requires_grad_(True)
output = self.classifier(copied_input)
loss = self.criterion_ce(output, labels)
gradients = torch.autograd.grad(loss, copied_input)[0]
self.last_grad = gradients
def criterion_ce(self, y_hat, y):
return F.cross_entropy(y_hat, y)
What have you tried?
Of course, the above code snippet works well in the training phase.
However, the grad_fn of output and loss is None in the test/val phase (in the training phase, they are AddmmBackward and NllLossBackward, respectively).
How can I get the input tensor gradients in the test/val phase?
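One hedged workaround sketch (assumption: the test/val loop runs under torch.no_grad(), so the graph has to be re-enabled locally before calling autograd.grad):
def test_step(self, batch, batch_nb):
    imgs, labels = batch
    self.last_imgs = imgs
    self.classifier.eval()
    with torch.enable_grad():  # undo the no_grad() wrapping the evaluation loop
        copied_input = imgs.clone().requires_grad_(True)
        output = self.classifier(copied_input)
        loss = self.criterion_ce(output, labels)
        gradients = torch.autograd.grad(loss, copied_input)[0]
    self.last_grad = gradients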
What's your environment?
OS: Linux
conda
|