title | labels | bodyText
---|---|---
Error when Implementing training_epoch_end() for GAN example | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Go to GAN example
Install latest version of pytorch-lightning (0.9.0)
pip install pytorch-lightning==0.9.0
Implement training_epoch_end() method for GAN class
def training_epoch_end(self, outputs):
return outputs
Run training code cell
gan_model = GAN(hparams)
trainer = pl.Trainer(gpus=1)
trainer.fit(gan_model)
See error
138 # all keys not progress_bar or log are candidates for callbacks
139 callback_metrics = {}
--> 140 for k, v in output.items():
141 if k not in ['progress_bar', 'log', 'hiddens']:
142 callback_metrics[k] = v
AttributeError: 'list' object has no attribute 'items'
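For illustration only (this is not part of the original report): a minimal sketch of a dict-returning training_epoch_end that avoids the error above, assuming each step output is a dict containing a 'loss' tensor (with the GAN example's two optimizers, the outputs list may be nested).
def training_epoch_end(self, outputs):
    flat = []
    for out in outputs:
        # with multiple optimizers each element can itself be a list of dicts
        flat.extend(out if isinstance(out, (list, tuple)) else [out])
    losses = [o["loss"] for o in flat if isinstance(o, dict) and "loss" in o]
    avg_loss = torch.stack(losses).mean() if losses else torch.tensor(0.0)
    # 0.9.x expects a dict here rather than the raw list of step outputs
    return {"log": {"train_epoch_loss": avg_loss}}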
Expected behavior
No Error |
TPU available: true when there are no TPUs | [
"bug",
"accelerator: tpu"
] | 🐛 Bug
I am using a DGX machine (and so, no TPUs), but on initializing the Trainer, it logs TPU available: True. This ends up raising a Missing XLA configuration error when I run my script.
To Reproduce
Code sample
Simply running the following lines on my machine:
>> trainer = pl.Trainer(gpus=[0])
GPU available: True, used: True
TPU available: True, using: 0 TPU cores
Expected behavior
>> trainer = pl.Trainer(gpus=[0])
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Environment
* CUDA:
- GPU:
- Tesla V100-SXM2-32GB
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.2
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0
- tensorboard: 2.2.0
- tqdm: 4.45.0
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.6.9
- version: #168-Ubuntu SMP Wed Jan 16 21:00:45 UTC 2019 |
[Bug] Batch Size mismatch | [
"help wanted",
"question",
"won't fix",
"waiting on author"
] | 🐛 Bug
A batch size of 16 is used in one of the steps instead of the specified batch size.
To Reproduce
Steps to reproduce the behavior:
View the Colab notebook
Code sample
Minimum code reproduction: Colab notebook
Expected behavior
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
CUDA:
GPU:
Tesla P100-PCIE-16GB
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cu101
pytorch-lightning: 0.9.0
tensorboard: 2.2.0
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context |
Horovod with native 16 precision not working | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
using precision=16 with distributed_backend=horovod
Traceback (most recent call last):
File "/workspace/main_lightning.py", line 500, in <module>
main(hyperparams)
File "/workspace/main_lightning.py", line 492, in main
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1068, in fit
results = self.horovod_train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 213, in horovod_train
model, optimizers = model.configure_apex(amp, model, self.optimizers, self.amp_level)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/lightning.py", line 954, in configure_apex
model, optimizers = amp.initialize(model, optimizers, opt_level=amp_level)
Code sample
trainer = Trainer(
precision=16,
gpus=1,
distributed_backend="horovod")
Environment
PyTorch Version: 1.6.0+cu101
How you installed PyTorch: pip |
Metatags are saved over and over in TensorBoardLogger when logging metrics | [
"feature",
"help wanted",
"good first issue"
] | Is it okay that the metatags are saved every time metrics are logged in TensorBoardLogger? I find that it slows down training a bit.
I found a related discussion.
This is the part I'm talking about:
pytorch-lightning/pytorch_lightning/loggers/tensorboard.py
Lines 201 to 211
in
83ab3ea
def save(self) -> None:
super().save()
dir_path = self.log_dir
if not gfile.isdir(dir_path):
dir_path = self.save_dir
# prepare the file path
hparams_file = os.path.join(dir_path, self.NAME_HPARAMS_FILE)
# save the metatags file
save_hparams_to_yaml(hparams_file, self.hparams)
which is called by:
pytorch-lightning/pytorch_lightning/trainer/logging.py
Lines 88 to 91
in
83ab3ea
# log actual metrics
if self.is_global_zero and self.logger is not None:
self.logger.agg_and_log_metrics(scalar_metrics, step=step)
self.logger.save()
which is in turn called by:
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Lines 749 to 757
in
83ab3ea
def save_train_loop_metrics_to_loggers(self, batch_idx, batch_output):
# when metrics should be logged
should_log_metrics = (batch_idx + 1) % self.row_log_interval == 0 or self.should_stop
if should_log_metrics or self.fast_dev_run:
# logs user requested information to logger
metrics = batch_output.batch_log_metrics
grad_norm_dic = batch_output.grad_norm_dic
if len(metrics) > 0 or len(grad_norm_dic) > 0:
self.log_metrics(metrics, grad_norm_dic) |
Retrieve exact number of training steps in `configure_optimizers` | [
"question"
] | ❓ Questions and Help
What is your question?
Some schedulers may require the total number of training steps. How can I retrieve it?
Code
def configure_optimizers(self) -> (torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR):
steps = self.trainer.total_training_steps
What have you tried?
I tried computing them from the dataset length and the training parameters. But I don't think this is correct in a multi-gpu, multi-node setting.
def get_total_number_of_steps_for_schedulers(hparams, trainer):
if trainer.max_steps is not None:
return trainer.max_steps
else:
training_batches_per_epoch = math.ceil(trainer.num_training_batches / (hparams.batch_size * trainer.world_size))
steps = (hparams.max_epochs * training_batches_per_epoch) // hparams.accumulate_grad_batches
return steps
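For comparison, a hedged sketch (not an official API) that estimates the total number of optimizer steps from trainer attributes present in 0.9.x; note that trainer.num_training_batches is already a per-process count, so no division by batch_size or world_size should be needed, and it is only populated once the train dataloader has been loaded.
import math
def estimate_total_steps(trainer):
    if trainer.max_steps:
        return trainer.max_steps
    # batches this process will run per epoch
    batches_per_epoch = trainer.num_training_batches
    # optimizer steps per epoch after gradient accumulation
    steps_per_epoch = math.ceil(batches_per_epoch / trainer.accumulate_grad_batches)
    return steps_per_epoch * trainer.max_epochs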
What's your environment?
OS Independent
pytorch_lightning: 0.9.0 |
Add logger for Azure Machine Learning | [
"feature",
"help wanted"
] | 🚀 Feature
We already have PyTorch Lightning loggers for Comet, MLFlow, and Neptune. It would be great if we also had one for Azure Machine Learning (AML).
Motivation
A data scientist hoping to use PyTorch Lightning in AML currently has to build their own "logger adapter" to get logs from their training runs to show up in the AML "metrics" UI like this:
It would be great if a user could just "drop in" an AML logger and get those kinds of metrics for free.
Pitch
The AML logging API is very similar to PyTorch Lightning's, with the small caveat that AML uses the terms "experiment" and "run" slightly differently than PyTorch Lightning does. I've even coded up a preliminary implementation with unit tests here: dkmiller/pytorch-lightning.
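For illustration only, a rough skeleton of the shape such a logger could take (this is not the linked dkmiller implementation; it assumes an azureml.core.Run object is passed in):
from pytorch_lightning.loggers import LightningLoggerBase
from pytorch_lightning.utilities import rank_zero_only
class AzureMLLogger(LightningLoggerBase):
    def __init__(self, run):
        super().__init__()
        self._run = run  # an azureml.core.Run
    @property
    def experiment(self):
        return self._run
    @rank_zero_only
    def log_metrics(self, metrics, step=None):
        for name, value in metrics.items():
            self._run.log(name, value)
    @rank_zero_only
    def log_hyperparams(self, params):
        for name, value in self._convert_params(params).items():
            self._run.tag(name, str(value))
    @property
    def name(self):
        return self._run.experiment.name
    @property
    def version(self):
        return self._run.id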
Alternatives
The only alternative I can think of is for individual users to build their own AML :: PyTorch Lightning loggers.
Additional context
Full disclosure: I'm a data scientist in the Azure Machine Learning team. |
Logging train metrics every N steps with TrainResult | [
"question"
] | What is your question?
Is there a way to log metrics every N training steps? Currently, I return a TrainResult object at the end of training_step with on_step=True (see code below). However, this setup still logs data only every 50 steps (see image)
Code
def training_step(self, inputs, batch_idx):
assert self.model.training, (
f"{self._get_name()} model was changed to eval mode!"
)
data_time = torch.tensor(
self.trainer.profiler.recorded_durations["get_train_batch"][-1]
)
outputs = self.model(inputs)
metrics_dict = self._compute_train_metrics(inputs, outputs)
metrics_dict["data_time"] = data_time
metrics_dict["train_loss"] = metrics_dict["loss"].clone()
result = TrainResult(minimize=metrics_dict.pop("loss"))
result.log_dict(
metrics_dict,
on_step=True,
on_epoch=False,
prog_bar=False,
logger=True,
sync_dist=False, # don't sync metrics during training
)
return result
What have you tried?
Specifying trainer's log_save_interval (e.g. Trainer(log_save_interval=20))
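A hedged note, not from the original question: in 0.9.x the spacing of step-level logging is controlled by row_log_interval (default 50, which matches the observed behaviour), while log_save_interval only controls how often the logger flushes to disk, e.g.:
trainer = pl.Trainer(row_log_interval=10, log_save_interval=100)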
What's your environment?
OS: Linux
Packaging: conda
Version: 0.9 |
Unable to load checkpoint; __init__'s missing positional arguments | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Create a very simple module with named arguments (of type inspect.Parameter.POSITIONAL_OR_KEYWORD), train it with a custom model checkpoint, and then try to reload it.
Code sample
import pytorch_lightning as pl
import torch
class Foo(pl.LightningModule):
def __init__(self, m, n):
super().__init__()
self.m = m
self.n = n
self.linear = torch.nn.Linear(m, n)
def configure_optimizers(self):
return torch.optim.Adam(self.parameters())
def forward(self, x):
return self.linear(x)
def training_step(self, batch, batch_idx):
x, y = batch
return {"loss": torch.nn.functional.mse_loss(self(x), y)}
def train_dataloader(self):
return torch.utils.data.DataLoader(
torch.utils.data.TensorDataset(
torch.randn(size=(100, 5)),
torch.randn(size=(100, 2)),
)
)
def validation_step(self, batch, batch_idx):
x, y = batch
return {"val_loss": torch.nn.functional.mse_loss(self(x), y)}
def val_dataloader(self):
return torch.utils.data.DataLoader(
torch.utils.data.TensorDataset(
torch.randn(size=(50, 5)),
torch.randn(size=(50, 2)),
)
)
def validation_epoch_end(self, outputs):
loss = torch.stack([x["val_loss"] for x in outputs]).mean()
return {"val_loss": loss, "log": {"val_loss": loss}}
foo = Foo(m=5, n=2)
trainer = pl.Trainer(
max_epochs=10,
checkpoint_callback=pl.callbacks.ModelCheckpoint(
"models/test_foo/{epoch}")
trainer.fit(foo)
Foo.load_from_checkpoint("models/test_foo/epoch=9.ckpt")
This outputs:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-62-8b8f67e5e924> in <module>
----> 1 Foo.load_from_checkpoint("models/epoch=9_val_loss=9.653e-01.ckpt")
~/miniconda3/envs/dts/lib/python3.8/site-packages/pytorch_lightning/core/saving.py in load_from_checkpoint(cls, checkpoint_path, map_location, hparams_file, tags_csv, *args, **kwargs)
167 checkpoint[cls.CHECKPOINT_HYPER_PARAMS_KEY].update(kwargs)
168
--> 169 model = cls._load_model_state(checkpoint, *args, **kwargs)
170 return model
171
~/miniconda3/envs/dts/lib/python3.8/site-packages/pytorch_lightning/core/saving.py in _load_model_state(cls, checkpoint, *cls_args, **cls_kwargs)
203 if len(cls_spec.args) <= 1 and not cls_spec.kwonlyargs:
204 cls_args, cls_kwargs = [], {}
--> 205 model = cls(*cls_args, **cls_kwargs)
206 # load the state_dict on the model automatically
207 model.load_state_dict(checkpoint['state_dict'])
TypeError: __init__() missing 1 required positional argument: 'n'
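A hedged workaround sketch, not part of the report: in recent 0.8.x/0.9.x versions, load_from_checkpoint can only rebuild the module automatically if the constructor arguments were captured, for example via self.save_hyperparameters().
class Foo(pl.LightningModule):
    def __init__(self, m, n):
        super().__init__()
        # stores m and n in self.hparams and in the checkpoint,
        # so load_from_checkpoint can recreate the module without extra args
        self.save_hyperparameters()
        self.m = m
        self.n = n
        self.linear = torch.nn.Linear(m, n)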
Expected behavior
I should be able to load the model, or at least be told that both named parameters are missing. After all, load_from_checkpoint allows *args and **kwargs to be passed to the initialization. So why, here, am I getting a value for m but not n?
Environment
* CUDA:
- GPU:
- GeForce GT 710
- available: True
- version: 10.2
* Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.8.5
- tensorboard: 2.2.1
- tqdm: 4.48.2
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.5
- version: #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020
Additional context |
continue training | [
"question"
] | ❓ Questions and Help
What is your question?
I have 10 training datasets. I want to train a model sequentially on these datasets i.e train on first training dataset then train the same model on the second dataset and so on.
How to do this?
model = ClassificationModel()
for dataset in training_datasets:
trainer = pl.Trainer(gpus=-1)
trainer.fit(model) |
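One possible pattern for the question above, as a hedged sketch only (it assumes each dataset can be wrapped in its own DataLoader; batch_size and max_epochs are placeholder values, and max_epochs here means epochs per stage):
from torch.utils.data import DataLoader
model = ClassificationModel()
for dataset in training_datasets:
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    trainer = pl.Trainer(gpus=-1, max_epochs=5)
    trainer.fit(model, train_dataloader=loader)  # the same model object keeps its weights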
ONNX model does not save on GPU | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
Attempting to export to ONNX after training a model on GPU throws an error if the input_sample or example_input_array is not a CUDA tensor.
To Reproduce
Steps to reproduce the behavior:
Train a model on GPU
Try to export to ONNX when self.example_input_array = torch.zeros(1, 1, 500, 500) or input_sample = torch.zeros(1, 1, 500, 500)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-32-cd8009a0b6a3> in <module>
1 filepath = 'model.onnx'
----> 2 model.to_onnx(filepath, export_params=True)
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py in to_onnx(self, file_path, input_sample, **kwargs)
1721 if 'example_outputs' not in kwargs:
1722 self.eval()
-> 1723 kwargs['example_outputs'] = self(input_data)
1724
1725 torch.onnx.export(self, input_data, file_path, **kwargs)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
<ipython-input-24-51cae3b5e57f> in forward(self, inputs)
20
21 def forward(self, inputs):
---> 22 return self.model(inputs)
23
24 def training_step(self, batch, batch_idx):
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
98 def forward(self, input):
99 for module in self:
--> 100 input = module(input)
101 return input
102
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
351
352 def forward(self, input):
--> 353 return self._conv_forward(input, self.weight)
354
355 class Conv3d(_ConvNd):
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
348 _pair(0), self.dilation, self.groups)
349 return F.conv2d(input, weight, self.bias, self.stride,
--> 350 self.padding, self.dilation, self.groups)
351
352 def forward(self, input):
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
Code sample
filepath = 'model.onnx'
model.to_onnx(filepath, export_params=True)
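A hedged workaround sketch (not from the report): move the example input onto the model's device before exporting, until the conversion is done automatically.
device = next(model.parameters()).device
input_sample = torch.zeros(1, 1, 500, 500, device=device)
model.to_onnx(filepath, input_sample=input_sample, export_params=True)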
Expected behavior
The export should automatically move example_input_array or input_sample to the model's device and save the model to ONNX. |
Rename train_dataloader or update docs for trainer.fit()? | [
"feature",
"help wanted",
"won't fix",
"discussion"
] | 🚀 Feature
Either rename the train_dataloader arg to something else, like train_data_source.
or
Update the docs to make it clear that in trainer.fit() and other methods (e.g. trainer.lr_find()) a LightningDataModule can be passed instead of a DataLoader.
Motivation
The new datamodule allows users to decouple data and models.
After #2755, trainer.fit(model, dataloader) and trainer.fit(model, datamodule) both work. Basically there's a line that checks whether the 2nd arg is datamodule instance.
It's nice that we can use positional args without needing named arguments. But I think it also causes confusion, since train_dataloader doesn't have to be a DataLoader and can be a LightningDataModule instead.
Could be just me, but I got confused when I found trainer.lr_find() was fine with having a LightningDataModule as its 2nd arg. I had to look into the source code to see that the 2nd arg is just passed on and handled by trainer.fit(). So I think the current state is not that friendly to beginners.
Pitch
With #2755 the train_dataloader arg of trainer.fit() has conceptually become something different: it can be either a DataLoader or a LightningDataModule.
Therefore I think it may be a good idea to rename train_dataloader to something else like train_data_source, or even to introduce a higher-level concept.
However, I'm not sure how appropriate such a change is with regard to backwards compatibility. How can one rename an important arg without breaking backwards compatibility?
Alternatives
Just update the docs for now. I.e. add a note where a LightningDataModule can be passed instead of DataLoader.
The type hints are incorrect (haven't been updated) in some places so fix those as well.
Additional context |
Data DistributedSampler Error when using Multi-GPU setting (with ddp). | [
"bug",
"help wanted"
] | 🐛 Bug
Hi,
I have converted my pure PyTorch code into pytorch-lightning code; however, the PL code crashes when using a multi-GPU setting, while it runs successfully when I set gpus=1.
My task is a binary classification task, and the error happens in AUC-ROC-score computing using sklearn:
File "/usr/local/lib/python3.6/dist-packages/sklearn/metrics/ranking.py", line 256, in _binary_roc_auc_score raise ValueError("Only one class present in y_true. ROC AUC score "
ValueError: Only one class present in y_true. ROC AUC score is not defined in that case.
Code sample
the trainer:
trainer = pl.Trainer(gpus=opt.gpu_num, distributed_backend='ddp')
the core part in model.py: Note that I need to collect the outputs of all validation-steps first, and then to compute the metric in the validation_epoch_end.
def training_step(self, batch, batch_idx):
label_list = batch[0]
data = batch[1:]
score = self(data)
loss = F.cross_entropy(score, label_list.max(1)[1])
result = pl.TrainResult(loss)
result.log('train_loss', loss, sync_dist=True)
return result
def validation_step(self, batch, batch_idx):
label_list = batch[0]
data = batch[1:]
score = self(data)
loss = F.cross_entropy(score, label_list.max(1)[1])
result = pl.EvalResult(loss)
result.label_list = label_list.squeeze(1).tolist()
result.pred_list = score.squeeze(1).tolist()
return result
def validation_epoch_end(self, val_outputs):
labels = [i for k in val_outputs.label_list for i in k]
preds = [i for k in val_outputs.pred_list for i in k]
res = cal_metric(labels, preds)
result = pl.EvalResult(checkpoint_on=torch.tensor(res['group_auc']))
result.log_dict(res, prog_bar=True, sync_dist=True)
return result
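Not part of the report, but for context: under DDP each process only sees its own shard of the validation data, so a single shard may contain only one class. A minimal sketch of one common pattern is to gather labels and predictions from all ranks before computing the metric (assuming every rank holds tensors of the same shape):
import torch.distributed as dist
def gather_across_ranks(tensor):
    if not (dist.is_available() and dist.is_initialized()):
        return tensor
    buffers = [torch.zeros_like(tensor) for _ in range(dist.get_world_size())]
    dist.all_gather(buffers, tensor)  # every rank receives every other rank's shard
    return torch.cat(buffers, dim=0)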
Environment
CUDA:
- GPU:
- GeForce GTX 1080 Ti
- GeForce GTX 1080 Ti
- available: True
- version: 10.2
Packages:
- numpy: 1.17.4
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0
- tensorboard: 2.0.2
- tqdm: 4.48.2
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.6.9
- version: #110-Ubuntu SMP Tue Jun 23 02:39:32 UTC 2020
So what might be the reason for the error in validation_epoch_end in the multi-GPU setting? (It works fine with gpus=1.)
Thanks. |
Have an example of showing explicitly how to calculate metrics in DDP | [
"feature",
"help wanted",
"example"
] | 🚀 Feature
Given the new updates in 0.9.0, it is desirable to have an example showing exactly and explicitly how to calculate metrics in DDP. The metrics of interest are those that require all the labels and predictions for an entire epoch, such as F1 score or average precision.
Motivation
As a big fan of this project and a data scientist who already used Lightning in my work, I am still not sure if I have done metrics in DDP correctly or not. While it is easy to spot some obvious mistakes in F1 if the calculated F1 goes over 1 (by definition, F1 is between 0 and 1 and would never go above 1) with DDP, it is hard to know for sure if this is calculated correctly as there is no official document detailing how to do that exactly. Having the proposed example would greatly boost the adoption of Lightning in the PyTorch community.
Pitch
Have an example of calculating metrics such as F1 in DDP
Alternatives
Additional context
There are many issues related to calculating metrics in DDP and honestly this could be the most challenging part for further adoption. Having the proposed example will greatly help in this regard. |
Validation Step for Epoch Clarified | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
This simple change would extend the num_sanity_val_steps flag to also accept fullEpoch in addition to the synonymous -1 value, to add clarity for the end user.
Motivation
I grew frustrated with trying to find a way to easily run an entire validation epoch before using training loops in pytorch-lightning. After finding this parameter (#2246), I still did not like how the -1 flag works. This also confused a colleague of mine, so I decided to submit this change as a means to add clarity to the end-user.
Pitch
Allow the -1 and fullEpoch values to run the validation sanity check on the entire validation set prior to training
Alternatives
Additional context
Pull Request
#3166 |
RuntimeError: Cannot replicate if number of devices (1) is different from 8 Exception in device=TPU:2: Cannot replicate if number of devices (1) is different from 8 | [
"bug",
"help wanted"
] | I am getting the above (title) error on Kaggle when I am doing
trainer.fit(model, dm)
where trainer = Trainer(tpu_cores = 8, logger = wandblogger, max_epochs = 20)
Following is the trace:
File "/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
_setup_replication()
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
Exception in device=TPU:2: Cannot replicate if number of devices (1) is different from 8
File "/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py", line 287, in xla_replication_devices
format(len(local_devices), len(kind_devices)))
File "/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 316, in _setup_replication
xm.set_replication(device, [device])
File "/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py", line 315, in set_replication
replication_devices = xla_replication_devices(devices)
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
_setup_replication()
I am using Kaggle Kernel. Here is my LightningDataModule.
class ProteinModule(pl.LightningDataModule):
def __init__(self, config, path, invalid_proteins):
super().__init__()
self.path = path
self.bs = config['bs']
self.state = config['seed']
self.invalid = invalid_proteins
def setup(self, stage=None):
all_protein_ids = os.listdir(self.path)
all_protein_ids = [protein for protein in all_protein_ids if protein not in self.invalid]
train_ids, val_ids = train_test_split(all_protein_ids, test_size = 0.2, random_state = self.state)
self.traindataset = ProteinDataset(self.path, train_ids)
self.valdataset = ProteinDataset(self.path, val_ids)
def train_dataloader(self):
return DataLoader(self.traindataset, batch_size = self.bs, shuffle = True)
def val_dataloader(self):
return DataLoader(self.valdataset, batch_size = self.bs, shuffle = False) |
log_softmax doesn't return correct dtype within training_step in 16-bit precision | [
"bug",
"help wanted"
] | 🐛 Bug
Calling log_softmax (either torch.nn.functional.log_softmax or torch.Tensor.log_softmax, as functional calls the Tensor version) from the training_step of a LightningModule returns dtype float32, even when float16 was given.
This only happens with precision=16, precision=32 returns the correct type.
I suspect that Pytorch Lightning overrides something in there (maybe Tensor::empty_like?) as this does not happen within a vanilla torch.Module.
Code sample
I hijacked the quick-start code for this minimal example; it works, but it's just there to call log_softmax within a module:
import torch
from pytorch_lightning import LightningModule, Trainer
from torch.nn import functional as F
from torch.utils.data import DataLoader, IterableDataset
class RandomDataset(IterableDataset):
def __init__(self):
super().__init__()
def __iter__(self):
while True:
yield torch.randn(10), 1
class LitModel(LightningModule):
def __init__(self):
super().__init__()
self.l1 = torch.nn.Linear(10, 5)
# here it works
float16b = torch.randn(5, dtype=torch.float16, device='cuda') # - dtype: float16
float16b_softmax = float16b.log_softmax(0) # - dtype: float16
float32b = torch.randn(5, dtype=torch.float32, device='cuda') # - dtype: float32
float32b_softmax = float32b.log_softmax(0) # - dtype: float32
print(f"Should be float16: {float16b_softmax.dtype}\n"
f"Should be float32: {float32b_softmax.dtype}")
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_idx):
# here it doesn't
float16b = torch.randn(5, dtype=torch.float16, device='cuda') # - dtype: float16
float16b_softmax = float16b.log_softmax(0) # - dtype: *float32*
float32b = torch.randn(5, dtype=torch.float32, device='cuda') # - dtype: float32
float32b_softmax = float32b.log_softmax(0) # - dtype: float32
print(f"Should be float16: {float16b_softmax.dtype}\n"
f"Should be float32: {float32b_softmax.dtype}")
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.001)
def train_dataloader(self):
loader = DataLoader(RandomDataset(), batch_size=6, num_workers=4)
return loader
if __name__ == '__main__':
model = LitModel()
trainer = Trainer(gpus=1, precision=16, fast_dev_run=True)
trainer.fit(model)
Expected behavior
Tensor.log_softmax should return the same dtype as the Tensor it is called from.
Environment
CUDA:
- GPU:
- GeForce GTX 1050
- available: True
- version: 10.2
Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0
- tensorboard: 2.2.0
- tqdm: 4.48.2
System:
- OS: Windows
- architecture:
- 64bit
- WindowsPE
- processor: Intel64 Family 6 Model 158 Stepping 9, GenuineIntel
- python: 3.8.5
- version: 10.0.18362
Additional context
How did I come across this issue? PyTorch's AdaptiveLogSoftmaxWithLoss calls log_softmax within and thus fails when trying to use it in 16-bit precision. My quickfix is specifying the desired dtype when calling log_softmax within adaptive.py. |
Add support for training on IPUs | [
"feature",
"help wanted"
] | 🚀 Feature
Graphcore IPUs are a new breed of processor for training machine learning models.
PyTorch support for training on Graphcore IPUs is available in preview. Add support for training on IPUs.
Reference: https://docs.graphcore.ai/projects/poptorch-user-guide/en/latest/
Motivation
The IPU benchmarks seem pretty impressive, and with Azure supporting them, they might soon grow in popularity. With Lightning's idea of running the same code on CPU, GPU, and TPU, it seems natural that it should also support IPUs.
Pitch
Add support for ipus in the Trainer, just like gpus and tpu_cores.
Additional context
Here are the benchmarks: https://www.graphcore.ai/benchmarks
IPUs are available on Azure cloud: https://www.hpcwire.com/2019/11/15/microsoft-azure-adds-graphcores-ipu |
Early Stopping + result dictionary + no validation not working. | [
"bug",
"help wanted"
] | 🐛 Bug
The case where the user does not use validation and returns a dictionary (instead of a TrainResult) during training does not work in combination with early stopping.
The test case which should check this is here:
pytorch-lightning/tests/callbacks/test_early_stopping.py
Lines 136 to 159
in
bd35c86
def test_early_stopping_no_val_step(tmpdir):
"""Test that early stopping callback falls back to training metrics when no validation defined."""
class CurrentModel(EvalModelTemplate):
def training_step(self, *args, **kwargs):
output = super().training_step(*args, **kwargs)
output.update({'my_train_metric': output['loss']}) # could be anything else
return output
model = CurrentModel()
model.validation_step = None
model.val_dataloader = None
stopping = EarlyStopping(monitor='my_train_metric', min_delta=0.1)
trainer = Trainer(
default_root_dir=tmpdir,
early_stop_callback=stopping,
overfit_batches=0.20,
max_epochs=2,
)
result = trainer.fit(model)
assert result == 1, 'training failed to complete'
assert trainer.current_epoch < trainer.max_epochs
The check in the last line is wrong. It should actually compare:
assert trainer.current_epoch < trainer.max_epochs - 1
To Reproduce
Steps to reproduce the behavior:
Fix the test case
Run tests.
Code sample
I guess using the test case is simpler and easier.
Expected behavior
That is an interesting question indeed. Possibilities are:
Test case should pass with correct comparison
The docs and @williamFalcon in #3193 (comment) suggest that only 'loss' should work.
So before fixing this issue, it should be settled what the expected behavior is.
If you tell me, I'm happy to help.
I could also include it in the pull request where I already tried to bring the docs in line with the test cases.
Environment
CUDA:
GPU:
GeForce GTX 1080 Ti
available: True
version: 10.1
Packages:
numpy: 1.19.1
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 0.9.1dev
tensorboard: 2.2.0
tqdm: 4.48.2
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.3
version: #113~16.04.1-Ubuntu SMP Fri Jul 10 04:37:08 UTC 2020
Additional context |
validation_epoch_end not logging validation_step EvalResult values | [
"bug",
"help wanted",
"design"
] | 🐛 Bug
When overwriting validation_epoch_end the EvalResult values from validation_step are not logged.
For my experiments I keep track of several metrics that I only log to TensorBoard at the end of each validation epoch. For most metrics I can specify EvalResult().log(on_epoch=True), but one of the metrics I can only calculate at the end of the epoch. If I use validation_epoch_end to calculate this metric, the results from validation_step are not logged.
To Reproduce
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
result = pl.EvalResult(checkpoint_on=loss)
result.log('val/loss', loss, on_step=False, on_epoch=True)
result.log('val/y_hat', y_hat, logger=False, on_step=False, on_epoch=False)
return result
def validation_epoch_end(self, outputs):
y_hat = outputs['val/y_hat']
example = y_hat.exp().mean()
result = pl.EvalResult()
result.log('val/epoch_end_metric', example)
return result
The code above writes 'val/epoch_end_metric' to TensorBoard, but does nothing with the result from validation_step.
As a result, the checkpoint metric specified in validation_step is also lost. Warnings are printed to screen, but it still prints a statement at the end of fit claiming to save the checkpoint (which is not happening).
...
| Name | Type | Params
--------------------------------
0 | l1 | Linear | 7 K
Epoch 0: 50%|█████     | 5/10 [00:00<00:00, 549.90it/s, loss=2.454, v_num=2]
Validating: 0it [00:00, ?it/s] ---/pytorch_lightning/utilities/distributed.py:37: RuntimeWarning: The metric you returned None must be a `torch.Tensor` instance, checkpoint not saved HINT: what is the value of loss in validation_epoch_end()?
warnings.warn(*args, **kwargs)
---/pytorch_lightning/utilities/distributed.py:37: RuntimeWarning: Can save best model only with loss available, skipping.
warnings.warn(*args, **kwargs)
...
Epoch 4: 100%|██████████| 10/10 [00:00<00:00, 911.31it/s, loss=1.847, v_num=2]
Saving latest checkpoint..
Epoch 4: 100%|██████████| 10/10 [00:00<00:00, 761.27it/s, loss=1.847, v_num=2]
Expected behavior
Results from validation_step and validation_epoch_end should both be logged according to what is specified. The .log(..., logger, on_step, on_epoch) method provides enough granularity to specify how each metric should be processed for both validation methods.
I have tried using return [result, outputs] in validation_epoch_end to work around this, but the metrics in the EvalResult are assumed to be reduced to scalars if validation_epoch_end is overwritten and are therefore not handled correctly.
Environment
PyTorch-Lightning Version: 0.9.0
PyTorch Version: 1.6.0
OS: linux
How you installed PyTorch: pip
Python version: 3.6.9 |
Logging accuracy with batch accumulation | [
"question"
] | I wanted to ask how PyTorch Lightning handles accuracy (and maybe even loss) logging when we have something like pl.Trainer(accumulate_grad_batches=ACCUMULATIONS).
My training looks like this:
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y, weight=self.weight)
result = pl.TrainResult(loss)
result.log("train_loss", loss, prog_bar=True)
result.log("train_accuracy", self.accuracy(y_hat.argmax(dim=-1), y), prog_bar=True)
return result
where self.accuracy = pl.metrics.classification.Accuracy(). Is there a way to make sure that the loss and accuracy are averaged across the accumulated batches?
If this is not currently the case, I'm happy to do a PR if someone can show me where to look in the source code to make such a change.
Thanks in advance |
Sync metrics between all GPUs before logging when using DDP | [
"bug",
"duplicate",
"help wanted",
"distributed"
] | Issue: When using DDP, tensorboard logger only logs for GPU0, which results in wrong overall metric shown in tensorboard. |
Log epoch as step when on_epoch=True and on_step=False | [
"feature",
"help wanted"
] | 🚀 Feature
When using the new structured Result API, it is no longer possible to force PL to report the epoch as the current step to loggers instead of the global step (or elapsed number of training steps).
Motivation
This leads to confusing plots when viewing results in a tool that uses step count by default (e.g. TensorBoard's scalars view). Intuitively, one would think .log(..., on_epoch=True, on_step=False) would count epochs and not steps.
It is possible to obtain this behaviour by overriding both training_epoch_end and validation_epoch_end and returning an EvalResult with a "step" metric. Unfortunately, this adds back a bunch of boilerplate and loses many of the nice metric aggregation features PL offers when *_epoch_end is not implemented.
Pitch
Either a) allow for overriding the step for a given result, or b) default that step to current_epoch when on_epoch=True and on_step=False.
Alternatives
Use a "step" key with the old dict-based logging system (not documented, but worked as of 0.8.3)
Override train/validation_epoch_end and return an EvalResult with .log('step', self.current_epoch).
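A hedged sketch of the first, dict-based alternative above (assuming the step outputs are dicts with a 'loss' tensor):
def training_epoch_end(self, outputs):
    avg_loss = torch.stack([o["loss"] for o in outputs]).mean()
    # the undocumented 'step' key makes the logger use the epoch as the x-axis
    return {"log": {"train_loss_epoch": avg_loss, "step": self.current_epoch}}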
Additional context
Original discussion: https://forums.pytorchlightning.ai/t/is-there-a-way-to-only-log-on-epoch-end-using-the-new-result-apis/74/3 |
Test all metrics against sklearn (with many input trials) | [
"feature",
"help wanted",
"good first issue",
"ci"
] | Hand-chosen values are not enough; we need to test with a large batch of inputs where possible.
Something in this style, maybe with a fixed seed:
import torch
import sklearn.metrics
from pytorch_lightning.metrics.functional import auroc
def test_auroc_versus_sklearn():
    for i in range(100):
        target = torch.randint(0, 2, size=(10,))
        pred = torch.randint(0, 2, size=(10,))
        score_sk = sklearn.metrics.roc_auc_score(target.numpy(), pred.numpy())  # sklearn
        score_pl = auroc(pred, target)  # PyTorch Lightning
        assert torch.allclose(torch.tensor(score_pl).float(), torch.tensor(score_sk).float()) |
auto_scale_batch_size not working with datamodule | [
"bug",
"help wanted"
] | 🐛 Bug
The Trainer expects the LightningModule to have self.batch_size (see scale_batch_size() in training_tricks.py). However, if one is using the new LightningDataModule, that should be the class with self.batch_size defined.
To Reproduce
assert hasattr(lightning_data_module, "batch_size")
trainer = Trainer(auto_scale_batch_size=True)
trainer.fit(lightning_module, datamodule=lightning_data_module)
pytorch_lightning.utilities.exceptions.MisconfigurationException: Field batch_size not found in both `model` and `model.hparams`
Expected behavior
auto_scale_batch_size should work using LightningDataModule
Environment
* Packages:
- numpy: 1.18.5
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.1rc1
- tensorboard: 2.2.0
- tqdm: 4.48.2 |
Error using Custom DistributedSampler | [
"bug",
"help wanted"
] | 🐛 Bug
When using a custom DistributedSampler, I get the following error while requesting the train dataloader:
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 81, in train
self.tpu_train_in_process(self.trainer.tpu_id, model, self.trainer, self.mp_queue)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 112, in tpu_train_in_process
results = trainer.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1239, in run_pretrain_routine
self.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 355, in train
self.reset_train_dataloader(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/data_loading.py", line 205, in reset_train_dataloader
self.train_dataloader = self.request_dataloader(model.train_dataloader)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/data_loading.py", line 360, in request_dataloader
dataloader = dataloader_fx()
File "/content/lib/coreml/coreml/data/data_module.py", line 41, in train_dataloader
drop_last=False)
File "/content/lib/coreml/coreml/data/dataloader.py", line 127, in get_dataloader
'rank': xm.get_ordinal()
File "/content/lib/coreml/coreml/data/sampler.py", line 139, in __init__
sampler.dataset, num_replicas, rank, shuffle)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/distributed.py", line 58, in __init__
rank = dist.get_rank()
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 598, in get_rank
_check_default_pg()
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
"Default process group is not initialized"
AssertionError: Default process group is not initialized
When no custom sampler is used, the code works fine. The DistributedSampler object is created as a wrapper over my actual sampler - the definition can be found here and the object creation happens here.
A quick Google search led me to this PyTorch issue where this comment highlighted that the call to torch.distributed.init_process_group might be missing.
To Reproduce
Use this colab notebook. |
Monitor metric not found for Learning Schedulers when using Result() | [
"bug",
"duplicate",
"help wanted",
"priority: 0"
] | 🐛 Bug
If you are using Result() (TrainResult() and EvalResult()), you cannot use a learning rate scheduler that monitors a metric, as it will not find the metrics logged/stored by the Result() class. The available metrics listed in the error below are not the ones that exist in my TrainResult() and EvalResult().
Either the update_learning_rates() function in training_loop.py is not looking in the right place for metrics, or the Result's metric aggregation/updating is not updating in the correct places.
Am I right, or am I doing something wrong?
ERROR:
....
~/opt/anaconda3/envs/axlnlp/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_pretrain_routine(self, model)
1237
1238 # CORE TRAINING LOOP
-> 1239 self.train()
1240
1241 def _run_sanity_check(self, ref_model, model):
~/opt/anaconda3/envs/axlnlp/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py in train(self)
399
400 # update LR schedulers
--> 401 self.update_learning_rates(interval='epoch')
402
403 # early stopping
~/opt/anaconda3/envs/axlnlp/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py in update_learning_rates(self, interval, monitor_metrics)
1279 avail_metrics = ','.join(list(self.callback_metrics.keys()))
1280 raise MisconfigurationException(
-> 1281 f'ReduceLROnPlateau conditioned on metric {monitor_key}'
1282 f' which is not available. Available metrics are: {avail_metrics}.'
1283 ' Condition can be set using `monitor` key in lr scheduler dict'
MisconfigurationException: ReduceLROnPlateau conditioned on metric val-seg-f1 which is not available. Available metrics are: val_early_stop_on,val_checkpoint_on,checkpoint_on. Condition can be set using `monitor` key in lr scheduler dict |
Metric Aggregation | [
"feature",
"help wanted"
] | 🚀 Feature
To offer better metric aggregation, I discussed a potential way to go with @SkafteNicki.
We agreed on the following:
add a new aggregated-property that aggregates the metric over batches.
default ddp-sync to broadcasting + aggregation
add a reset function to metrics to reset the internal persistent aggregation state (can be called by user whenever they want; usually on epoch end)
in pseudo code the metric workflow will now be:
interm_outputs = do_some_first_calc(batch)
broadcasted_interm = broadcast(interm_outputs)
persistent_state.extend(broadcasted_interm)
return aggregate(broadcasted_interm)
cc @PyTorchLightning/core-contributors |
Enable passing result from *_step Model Hook to corresponding *_batch_end Callback | [
"feature",
"help wanted",
"won't fix",
"design"
] | 🚀 Feature
Give users the option to pass a result at the end of *_step directly to the corresponding *_batch_end Callback.
Example:
validation_step outputs predictions and probabilities. Pass these to a user-defined validation_batch_end Callback for advanced logging.
Motivation
This will remove the need for some boilerplate in the model code and also remove the need for calling the model's forward() on the batch inside the callback just to do logging. |
Adding input sanitation for distributed backend and related trainer flags | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
An error should be thrown or a warning raised if the passed distributed_backend flag doesn't match one of the expected values (e.g. ddp, ddp_spawn, etc.). This may be applicable to other trainer flags too.
Motivation
This is really minor, but I just spent an embarrassing amount of time trying to figure out why my ddp models were not using GPUs. After stepping through the pytorch-lightning accelerator selection logic, I eventually realized I had a typo in my distributed_backend flag. This mistake could have easily been caught by input sanitation.
For more detail, when you enter an invalid backend, like dpp instead of ddp, and select gpus=4, the logs at the top of the program show the 4 available GPUs and lead the user to believe that nothing is wrong. However, the accelerator selection logic silently chooses CPU in this case, resulting in surprising behavior.
Pitch
For various flags, like distributed_backend, with unrecognized values (e.g. "dpp" instead of "ddp"), PL should raise an error or warn the
user that this isn't a recognized/valid value for the flag. |
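A rough sketch of the kind of check being pitched above, for illustration only (this is not PL code, and the set of backend names is an assumption):
KNOWN_BACKENDS = {None, "dp", "ddp", "ddp_spawn", "ddp2", "ddp_cpu", "horovod"}
def check_distributed_backend(value):
    if value not in KNOWN_BACKENDS:
        raise ValueError(
            f"Unknown distributed_backend={value!r}; "
            f"expected one of {sorted(b for b in KNOWN_BACKENDS if b)}"
        )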
Cap batch size by number of training samples when using auto_scale_batch_size | [
"bug",
"help wanted"
] | 🐛 Bug
The batch size finder sets an unrealistically high batch size if all samples of the training dataset fit into one batch.
...
Batch size 8388608 succeeded, trying batch size 16777216
Batch size 16777216 succeeded, trying batch size 33554432
Batch size 33554432 succeeded, trying batch size 67108864
Finished batch size finder, will continue with full run using batch size 67108864
To Reproduce
Steps to reproduce the behavior:
Run the MNIST example with auto_scale_batch_size=True (one needs to remove the hardcoded batch size and set self.batch_size).
Expected behavior
Batch size search space should not be larger than number of available training samples. |
Issue with multi gpu training | [
"question"
] | I get the following error on training with multi gpus:
File "training_on_each_split.py", line 271, in <module>
trainer.fit(model)
File "/home/nvarshn2/.conda/envs/pytorch_lightning/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 997, in fit
results = self.dp_train(model)
File "/home/nvarshn2/.conda/envs/pytorch_lightning/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 270, in dp_train
result = self.run_pretrain_routine(model)
File "/home/nvarshn2/.conda/envs/pytorch_lightning/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine
self.train()
File "/home/nvarshn2/.conda/envs/pytorch_lightning/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
self.run_training_epoch()
File "/home/nvarshn2/.conda/envs/pytorch_lightning/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx)
File "/home/nvarshn2/.conda/envs/pytorch_lightning/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 632, in run_training_batch
self.hiddens
File "/home/nvarshn2/.conda/envs/pytorch_lightning/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 783, in optimizer_closure
training_step_output = self.process_output(training_step_output, train=True)
File "/home/nvarshn2/.conda/envs/pytorch_lightning/lib/python3.6/site-packages/pytorch_lightning/trainer/logging.py", line 112, in process_output
callback_metrics = self.reduce_distributed_output(callback_metrics, num_gpus)
File "/home/nvarshn2/.conda/envs/pytorch_lightning/lib/python3.6/site-packages/pytorch_lightning/trainer/logging.py", line 193, in reduce_distributed_output
output[k] = self.reduce_distributed_output(output[k], num_gpus)
File "/home/nvarshn2/.conda/envs/pytorch_lightning/lib/python3.6/site-packages/pytorch_lightning/trainer/logging.py", line 201, in reduce_distributed_output
output[k] = torch.mean(output[k])
RuntimeError: Can only calculate the mean of floating types. Got Long instead.
Code
def loss_function(self, logits, labels):
with torch.no_grad():
predicted_label = logits.max(dim=1)[1]
prediction_label_count = {}
for label in range(parameters["num_labels"]):
prediction_label_count[label] = predicted_label.eq(label).sum()
loss_fn = torch.nn.CrossEntropyLoss()
return loss_fn(logits, labels), prediction_label_count
def training_step(self, batch, batch_idx):
logits, softmax_logits = self.forward(**batch)
loss, prediction_label_count = self.loss_function(logits, batch["labels"])
return {"loss": loss,
"prediction_label_count": prediction_label_count,
"log": {"train_loss": loss}}
Need help in debugging the issue. |
Incorrect aggregation when logging predictions into EvalResult with distributed_backend DP | [
"bug",
"help wanted",
"strategy: dp"
] | 🐛 Bug
It is possible to log some data in EvalResult like this:
result = pl.EvalResult(checkpoint_on=val_loss)
result.y_hat = y_hat.float()
Suppose I log some value with dimensions (32, 10) - batch size 32 and 10 classes. And I have N samples in total.
If distributed_backend is None, in validation_epoch_end outputs.y_hat.shape is (N, 10)
If distributed_backend is ddp, in validation_epoch_end outputs.y_hat.shape is (N, 10)
But if distributed_backend is dp, in validation_epoch_end outputs.y_hat.shape is (N) - the values are averaged over samples.
Code sample
https://colab.research.google.com/drive/16NQaMTho5gI0URzgh3CNsZvXmHHyynvh?usp=sharing |
How to free up the CUDA memory | [
"question",
"won't fix"
] | I just wanted to build a model to see how pytorch-lightning works. I am working in a Jupyter notebook and I stopped the cell in the middle of training. I wanted to free up the CUDA memory and couldn't find a proper way to do that without restarting the kernel. Here is what I tried:
del model # model is a pl.LightningModule
del trainer # pl.Trainer
del train_loader # torch DataLoader
torch.cuda.empty_cache()
# this is also stuck
pytorch_lightning.utilities.memory.garbage_collection_cuda()
Deleting model and torch.cuda.empty_cache() works in PyTorch.
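A hedged note, general Python/Jupyter advice rather than a PL API: references held elsewhere in the notebook (for example the interrupted exception's traceback or IPython's output cache) can keep tensors alive, so dropping them and collecting garbage before emptying the cache sometimes helps.
import gc
del model, trainer, train_loader
gc.collect()               # drop Python-side references first
torch.cuda.empty_cache()   # then release cached blocks back to the driver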
Version 0.9.0 |
Logging non-tensor scalar with result breaks subsequent epoch aggregation | [
"bug",
"help wanted",
"good first issue"
] | 🐛 Bug
Logging non-tensor scalar with result breaks subsequent epoch/tbptt aggregation
(on both 0.9 and master)
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_spawn_backend.py", line 165, in ddp_train
results = self.trainer.run_pretrain_routine(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1237, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 396, in train
self.run_training_epoch()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 543, in run_training_epoch
self.run_training_epoch_end(epoch_output, checkpoint_accumulator, early_stopping_accumulator, num_optimizers)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 672, in run_training_epoch_end
epoch_log_metrics, epoch_progress_bar_metrics = self.__auto_reduce_results_on_epoch_end(epoch_output)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 696, in __auto_reduce_results_on_epoch_end
tbptt_outs = tbptt_outs[0].__class__.reduce_across_time(tbptt_outs)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/step_result.py", line 392, in reduce_across_time
result[k] = tbptt_reduce_fx(value)
TypeError: mean(): argument 'input' (position 1) must be Tensor, not list
To Reproduce
def training_step(self, batch, batch_idx):
x, y = batch[0], batch[1]
x = self.forward(x)
loss = self.loss(x, y)
result = pl.TrainResult(loss)
result.log("non tensor scalar", 1.0)
result.log("loss", loss, on_step=False, on_epoch=True)
To Fix
result.log("non tensor scalar", torch.tensor(1.0))
Expected behavior
In log() of Result objects, value should accept non-tensor values (value: Any) and not cause issues with other metrics being logged
Additional context
log() can be changed to only accept tensors, or to have a built-in conversion; will update as I investigate further |
on_fit_start not triggering (master) | [
"bug",
"help wanted",
"good first issue",
"refactor"
] | 🐛 Bug
on_fit_start is not being triggered on master as of f46318e
To Reproduce
Steps to reproduce the behavior:
Install master
Run template with on_fit_start added (included below)
(identical behavior for single gpu, ddp_spawn and ddp)
Code sample
"""
Runs a model on a single node across multiple gpus.
"""
import os
from argparse import ArgumentParser
from pl_examples.models.lightning_template import LightningTemplateModel
from pytorch_lightning import Trainer, seed_everything
seed_everything(234)
class custom_template(LightningTemplateModel):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def on_epoch_start(self):
print("on_epoch_start")
def on_fit_start(self):
print("on_fit_start")
def main(args):
""" Main training routine specific for this project. """
# ------------------------
# 1 INIT LIGHTNING MODEL
# ------------------------
model = custom_template(**vars(args))
# ------------------------
# 2 INIT TRAINER
# ------------------------
trainer = Trainer.from_argparse_args(args)
# ------------------------
# 3 START TRAINING
# ------------------------
trainer.fit(model)
def run_cli():
# ------------------------
# TRAINING ARGUMENTS
# ------------------------
# these are project-wide arguments
root_dir = os.path.dirname(os.path.realpath(__file__))
parent_parser = ArgumentParser(add_help=False)
# each LightningModule defines arguments relevant to it
parser = LightningTemplateModel.add_model_specific_args(parent_parser, root_dir)
parser = Trainer.add_argparse_args(parser)
parser.set_defaults(gpus=2, distributed_backend='ddp')
args = parser.parse_args()
# ---------------------
# RUN TRAINING
# ---------------------
main(args)
if __name__ == '__main__':
run_cli()
Expected behavior
"on_fit_start" should be printed along with "on_epoch_start" but only "on_epoch_start" is printed |
Evaluation metrics for pedestrian detection are not provided | [
"won't fix"
] | 🚀 Feature
Motivation
Pitch
Alternatives
Additional context |
Using multiple dataloaders at training time | [
"question",
"won't fix"
] | I try to train on two dataloaders: one attached to a dataset where each __getitem__ call fetches a predefined batch of varying length (thus the batch_size I pass to the DataLoader object is 1), and one where I sample randomly from a set of sequences, so each __getitem__ call fetches one sample.
I'm looking for something like
loader = DataLoader(
batched_dataset,
batch_size=1,
)
loader_tdm = DataLoader(
random_samples_dataset,
batch_size=8,
)
return [loader, loader_tdm]
I see this option is available for the validation and test dataloaders, while for training you suggested (in previous issues) using ConcatDataset.
Is there a workaround? |
TrainResult erroring with deepcopy on epoch end | [
"bug",
"accelerator: tpu"
] | When detaching the Result object from the graph, the code first makes a deep copy, which throws an error
*** RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
as the Result object still contains the graphs. It seems like it needs to perform some sort of recursive detach (like the dict implementation in the lines below), since calling detach() on its own will modify training_step_output_for_epoch_end and training_step_output in place.
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Line 1038
in
f3c63f7
training_step_output_for_epoch_end = copy(training_step_output)
N.B. This was run on TPU |
On the relationship between Result and Callback monitor | [
"bug",
"help wanted",
"priority: 0",
"discussion",
"design"
] | 💬 Discussion
Since I started using Lightning, I have noticed many users, as well as myself, having trouble with the way metrics are saved for callbacks by the Result class.
The current design of having only {early_stop_on,checkpoint_on} forces different callbacks to use the same monitor. I believe that wanting different monitors for EarlyStopping, ModelCheckpoint, and LearningRateLogger is a very common use case. This is something we will also need to address if we ever want to support multiple ModelCheckpoint callbacks.
This is also unintuitive for new users since most callbacks include a monitor parameter which becomes completely useless when the Result object is used. On the other hand, using a Result object is the new recommended way to structure {training,validation,test}_step().
Additionally, the approach is not future-proof since we would need to update lightning to include, say, batchnorm_on or whatever new technique that becomes widespread.
I believe this is an important issue that needs a solution.
Requirements:
We could use this issue to have a productive discussion about how the final design should look and what are the necessary requirements to fit most use cases:
To start, I would say:
Have it be easy to understand: avoid the conflict between monitor in Callbacks and {early_stop_on,checkpoint_on} in Result
Allow any number of different monitors to be used simultaneously
Allow a callback monitor to be changed on-the-run
Own proposal
Remove {early_stop_on,checkpoint_on}
Add callback to result.log() function (or add result.callback() - both would have the same functionality) for users to save metrics on-the-run
train_result.log(
"train_loss", train_loss,
on_step=False, on_epoch=True,
prog_bar=True, callback=True
)
...
...
# current behaviour would override `train_result`s checkpoint_on
# if it is set also in `eval_result`. This causes some limitations.
# `callback=True` here should not override `train_loss`
# in `train_result` but keep both.
eval_result.log(
"valid_loss", valid_loss,
on_step=False, on_epoch=True,
prog_bar=True, callback=True
)
# note: prog_bar should also be removed from `.log()`
# and the ProgressBar class should instead have a `--monitor` parameter.
Have each callback set its monitor via --monitor
Set --monitor type to be Optional[Union[str, Callable[..., str]]] so the following is possible:
# basic behaviour
pl.callbacks.ModelCheckpoint(
monitor="valid_accuracy",
mode="max"
)
# Can change monitor on-the-run
# Note: maybe trainer should be injected into the callback monitor function
pl.callbacks.EarlyStopping(
monitor=lambda: "train_loss" if trainer.current_epoch <= 20 else "val_loss",
mode="min"
)
# Could even have
complex_monitor = VeryComplexClassWhichDoesSomethingVeryFancyWithState()
assert callable(complex_monitor)
pl.callbacks.LearningRateLogger(monitor=complex_monitor)
# this would cover a big chunk of usecases
I am positive you guys have other great ideas.
cc: @williamFalcon @rohitgr7
#2976 (previous discussion)
#3243 (duplicate)
#2908 (related, would require these improvements)
#3254 (related, could use any callback_metric)
#3291
https://forums.pytorchlightning.ai/t/does-evalresults-also-work-with-early-stopping
Probably missing other related issues. Feel free to tag |
Debugging docs uses the same argument for both cases of overfit_batches | [
"docs"
] | In this example of how to use overfit_batches, both options shown use the fractional argument 0.1. The argument can be either a fraction of the total batches or a fixed number of batches. The argument passed in the second example should be changed to reflect that. |
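For clarity, the two forms the example should distinguish (both are valid Trainer arguments; the second uses a fixed number of batches instead of a fraction):
import pytorch_lightning as pl

# overfit on 10% of the training batches
trainer = pl.Trainer(overfit_batches=0.1)

# overfit on exactly 10 training batches
trainer = pl.Trainer(overfit_batches=10)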
Unexpected key(s) in state_dict Error when calling Trainer.test | [
"bug",
"help wanted",
"waiting on author",
"checkpointing"
] | Dear all,
I have a LightningModule
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau
from pytorch_lightning import LightningModule
from torch.nn import functional as F
from pytorch_lightning.metrics.functional import accuracy, f1_score, auroc
class TraversabilityModule(LightningModule):
def __init__(self, model: torch.nn.Module):
super().__init__()
self.model = model
def forward(self, x):
return self.model(x)
def get_metrics(self, pred, y):
return {
'accuracy': accuracy(pred, y, num_classes=2),
'f1': f1_score(pred, y, num_classes=2),
# 'roc_auc': auroc(pred, y)
}
def step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
pred = torch.argmax(y_hat, dim=1)
metrics = self.get_metrics(pred, y)
loss = F.cross_entropy(y_hat, y)
return loss, metrics
def training_step(self, batch, batch_idx):
loss, metrics = self.step(batch, batch_idx)
return {'loss': loss, 'log': metrics }
def validation_step(self, batch, batch_idx):
loss, metrics = self.step(batch, batch_idx)
return {'val_loss': loss, 'log': metrics }
def validation_epoch_end(self, outputs):
val_loss_mean = torch.stack([x['val_loss'] for x in outputs]).mean()
acc_mean = torch.stack([x['log']['accuracy'] for x in outputs]).mean()
f1_mean = torch.stack([x['log']['f1'] for x in outputs]).mean()
# roc_auc_mean = torch.stack([x['log']['roc_auc'] for x in outputs]).mean()
return {
'val_loss': val_loss_mean,
'val_f1': f1_mean,
'progress_bar': {'f1': f1_mean},
'log': {
'val_accuracy': acc_mean,
'val_f1': f1_mean,
# 'roc_auc': roc_auc_mean
}
}
def test_step(self, batch, batch_idx):
loss, metrics = self.step(batch, batch_idx)
return {'test_loss': loss, 'log': metrics}
def test_epoch_end(self, outputs):
val_loss_mean = torch.stack([x['test_loss'] for x in outputs]).mean()
acc_mean = torch.stack([x['log']['accuracy'] for x in outputs]).mean()
f1_mean = torch.stack([x['log']['f1'] for x in outputs]).mean()
# roc_auc_mean = torch.stack([x['log']['roc_auc'] for x in outputs]).mean()
return {
'test_loss': val_loss_mean,
'test_f1': f1_mean,
'progress_bar': {'f1': f1_mean},
'log': {
'test_accuracy': acc_mean,
'test_f1': f1_mean,
# 'roc_auc': roc_auc_mean
}
}
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=0.001)
scheduler = {'scheduler': ReduceLROnPlateau(optimizer, verbose=True),
'monitor': 'val_f1'}
return [optimizer], [scheduler]
And I have a training loop
module = TraversabilityModule(model)
trainer = pl.Trainer(
gpus=1,
max_epochs=params['epoches'],
logger=comet_logger,
checkpoint_callback=ModelCheckpoint(
monitor='val_f1',
filepath=project.checkpoint_dir / params['model'] / 'model',
),
early_stop_callback=EarlyStopping(monitor='val_f1',
patience=15))
trainer.fit(module, train_dl, val_dl)
trainer.test(test_dataloaders=test_dl)
I get the following error when I try to validate the test set (at trainer.test)
RuntimeError: Error(s) in loading state_dict for TraversabilityModule:
Unexpected key(s) in state_dict: "model.encoder.gate.0.weight", "model.encoder.gate.1.weight", "model.encoder.gate.1.bias", "model.encoder.gate.1.running_mean", "model.encoder.gate.1.running_var", "model.encoder.gate.1.num_batches_tracked", "model.encoder.layers.0.layer.0.shortcut.0.weight", "model.encoder.layers.0.layer.0.shortcut.1.weight", "model.encoder.layers.0.layer.0.shortcut.1.bias", "model.encoder.layers.0.layer.0.shortcut.1.running_mean", "model.encoder.layers.0.layer.0.shortcut.1.running_var", "model.encoder.layers.0.layer.0.shortcut.1.num_batches_tracked", "model.encoder.layers.0.layer.0.convs.0.weight", "model.encoder.layers.0.layer.0.convs.0.bias", "model.encoder.layers.0.layer.0.convs.0.running_mean", "model.encoder.layers.0.layer.0.convs.0.running_var", "model.encoder.layers.0.layer.0.convs.0.num_batches_tracked", "model.encoder.layers.0.layer.0.convs.2.weight", "model.encoder.layers.0.layer.0.convs.3.weight", "model.encoder.layers.0.layer.0.convs.3.bias", "model.encoder.layers.0.layer.0.convs.3.running_mean", "model.encoder.layers.0.layer.0.convs.3.running_var", "model.encoder.layers.0.layer.0.convs.3.num_batches_tracked", "model.encoder.layers.0.layer.0.convs.5.weight", "model.encoder.layers.0.layer.0.se.se.0.weight", "model.encoder.layers.0.layer.0.se.se.0.bias", "model.encoder.layers.0.layer.0.se.se.2.weight", "model.encoder.layers.0.layer.0.se.se.2.bias", "model.encoder.layers.1.layer.0.shortcut.0.weight", "model.encoder.layers.1.layer.0.shortcut.1.weight", "model.encoder.layers.1.layer.0.shortcut.1.bias", "model.encoder.layers.1.layer.0.shortcut.1.running_mean", "model.encoder.layers.1.layer.0.shortcut.1.running_var", "model.encoder.layers.1.layer.0.shortcut.1.num_batches_tracked", "model.encoder.layers.1.layer.0.convs.0.weight", "model.encoder.layers.1.layer.0.convs.0.bias", "model.encoder.layers.1.layer.0.convs.0.running_mean", "model.encoder.layers.1.layer.0.convs.0.running_var", "model.encoder.layers.1.layer.0.convs.0.num_batches_tracked", "model.encoder.layers.1.layer.0.convs.2.weight", "model.encoder.layers.1.layer.0.convs.3.weight", "model.encoder.layers.1.layer.0.convs.3.bias", "model.encoder.layers.1.layer.0.convs.3.running_mean", "model.encoder.layers.1.layer.0.convs.3.running_var", "model.encoder.layers.1.layer.0.convs.3.num_batches_tracked", "model.encoder.layers.1.layer.0.convs.5.weight", "model.encoder.layers.1.layer.0.se.se.0.weight", "model.encoder.layers.1.layer.0.se.se.0.bias", "model.encoder.layers.1.layer.0.se.se.2.weight", "model.encoder.layers.1.layer.0.se.se.2.bias", "model.encoder.layers.2.layer.0.shortcut.0.weight", "model.encoder.layers.2.layer.0.shortcut.1.weight", "model.encoder.layers.2.layer.0.shortcut.1.bias", "model.encoder.layers.2.layer.0.shortcut.1.running_mean", "model.encoder.layers.2.layer.0.shortcut.1.running_var", "model.encoder.layers.2.layer.0.shortcut.1.num_batches_tracked", "model.encoder.layers.2.layer.0.convs.0.weight", "model.encoder.layers.2.layer.0.convs.0.bias", "model.encoder.layers.2.layer.0.convs.0.running_mean", "model.encoder.layers.2.layer.0.convs.0.running_var", "model.encoder.layers.2.layer.0.convs.0.num_batches_tracked", "model.encoder.layers.2.layer.0.convs.2.weight", "model.encoder.layers.2.layer.0.convs.3.weight", "model.encoder.layers.2.layer.0.convs.3.bias", "model.encoder.layers.2.layer.0.convs.3.running_mean", "model.encoder.layers.2.layer.0.convs.3.running_var", "model.encoder.layers.2.layer.0.convs.3.num_batches_tracked", "model.encoder.layers.2.layer.0.convs.5.weight", 
"model.encoder.layers.2.layer.0.se.se.0.weight", "model.encoder.layers.2.layer.0.se.se.0.bias", "model.encoder.layers.2.layer.0.se.se.2.weight", "model.encoder.layers.2.layer.0.se.se.2.bias", "model.decoder.decoder.weight", "model.decoder.decoder.bias".
They are clearly the weights in my model (a variant of ResNet). The model path exists and the model is stored there.
Thank you.
Best Regards,
Francesco |
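For reference, two sketches of how this is commonly worked around (the ckpt_path below is hypothetical): either pass the already-fitted module to test(), or forward the extra __init__ arguments through load_from_checkpoint so the restored module is built with the same inner model.
# evaluate the module that was just fitted
trainer.test(module, test_dataloaders=test_dl)

# or restore a specific checkpoint, forwarding the extra __init__ kwargs
restored = TraversabilityModule.load_from_checkpoint(str(ckpt_path), model=model)
trainer.test(restored, test_dataloaders=test_dl)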
MulticlassAUROC: Implement a multi-class version of the AUROC metric | [
"feature",
"help wanted",
"good first issue"
] | π Feature
Create the metric MulticlassAUROC to allow for the AUROC metric to be used in multi-class problem settings. Or,
Expand the AUROC metric to support multi-class data, which would also directly solve this AUROC bug that instead gives a random value when used in multi-class problems: #3303
Motivation
AUROC is a useful metric for model performance, but the current AUROC metric is not made for multi-class problems.
An out of the box working multi-class AUROC metric would be great.
Pitch
According to this post on the PyTorch Lightning forum, implementation would just require wrapping existing functionality together.
Implementing this metric would lower the barrier to its usage, instead of requiring awareness of the different metrics available that could be combined to effectively get this metric.
Alternatives
Use sklearn's implementation of AUROC: sklearn.metrics.roc_auc_score.
This slows down the training, however, due to the need to transfer prediction and target data to the CPU.
Implement manually by combining multiclass auc and a multiclass roc calculation
Additional context
Relevant discussion: https://forums.pytorchlightning.ai/t/pytorch-lightning-auroc-value-for-multi-class-seems-to-be-completely-off-compared-to-sklearn-using-it-wrong/61/6
Relevant AUROC bug when used for multi-class: #3303 |
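As a rough illustration of the "wrap existing functionality" idea, a one-vs-rest macro average built on the existing binary auroc functional (assuming softmax probabilities of shape (N, C) and integer targets; a sketch, not the proposed implementation):
import torch
from pytorch_lightning.metrics.functional import auroc

def multiclass_auroc(probs: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """One-vs-rest macro average of the binary AUROC over all classes."""
    per_class = [auroc(probs[:, c], (target == c).long()) for c in range(probs.size(1))]
    return torch.stack(per_class).mean()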
Add a callback to support opacus | [
"feature",
"help wanted",
"won't fix",
"callback"
] | Opacus is a library that enables training PyTorch models with differential privacy. It supports training with minimal code changes required on the client, has little impact on training performance and allows the client to online track the privacy budget expended at any given moment.
https://github.com/pytorch/opacus |
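A very rough sketch of what such a callback could look like, assuming the opacus 0.x API (PrivacyEngine(...).attach(optimizer)); the class name and keyword arguments here are illustrative only:
import pytorch_lightning as pl

class DifferentialPrivacyCallback(pl.Callback):
    def __init__(self, **privacy_kwargs):
        # e.g. noise_multiplier, max_grad_norm, batch_size, sample_size, alphas
        self.privacy_kwargs = privacy_kwargs

    def on_train_start(self, trainer, pl_module):
        from opacus import PrivacyEngine  # assumes opacus 0.x
        for optimizer in trainer.optimizers:
            PrivacyEngine(pl_module, **self.privacy_kwargs).attach(optimizer)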
Init tensors using type_as for multi gpu training error | [] | I get the following error when using type_as for multi-device training:
TypeError: type_as() got an unexpected keyword argument 'device'
Code to reproduce:
!pip install torchvision
!pip install pytorch-lightning==0.8.3 --upgrade
class MNISTModel(pl.LightningModule):
def __init__(self):
super(MNISTModel, self).__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
tensorboard_logs = {'train_loss': loss}
#test multi gpu use compatibility
some_var = torch.tensor([1,3])
some_var = some_var.type_as(x, device=self.device)
return {'loss': loss, }
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
train_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
mnist_model = MNISTModel()
trainer = pl.Trainer(gpus=1, progress_bar_refresh_rate=20)
trainer.fit(mnist_model, train_loader)
error:
<ipython-input-9-2eae42e2fcba> in training_step(self, batch, batch_nb)
13 tensorboard_logs = {'train_loss': loss}
14 some_var = torch.tensor([1,3])
---> 15 some_var = some_var.type_as(x, device=self.device)
16 return {'loss': loss, }
17
TypeError: type_as() got an unexpected keyword argument 'device' |
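For reference, type_as already matches both the dtype and the device of its argument, so dropping the device keyword is enough:
some_var = torch.tensor([1, 3])
some_var = some_var.type_as(x)  # casts and moves some_var to x's dtype and device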
Example code does not run | [
"bug",
"help wanted"
] | π Bug
Official example code, only modifying # of GPUs, does not run.
To Reproduce
Steps to reproduce the behavior:
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
class MNISTModel(pl.LightningModule):
def __init__(self):
super(MNISTModel, self).__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
train_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=3000)
mnist_model = MNISTModel()
trainer = pl.Trainer(gpus=2) <-- ONLY LINE I CHANGE
trainer.fit(mnist_model, train_loader)
ExceptionTraceback (most recent call last)
<ipython-input-3-8d3997ec0919> in <module>
3 mnist_model = MNISTModel()
4 trainer = pl.Trainer(gpus=2)
----> 5 trainer.fit(mnist_model, train_loader)
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py in wrapped_fn(self, *args, **kwargs)
46 if entering is not None:
47 self.state = entering
---> 48 result = fn(self, *args, **kwargs)
49
50 # The INTERRUPTED state can be set inside the run function. To indicate that run was interrupted
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule)
1050 self.accelerator_backend = DDPSpawnBackend(self)
1051 self.accelerator_backend.setup()
-> 1052 self.accelerator_backend.train(model, nprocs=self.num_processes)
1053 results = self.accelerator_backend.teardown(model)
1054
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_spawn_backend.py in train(self, model, nprocs)
41
42 def train(self, model, nprocs):
---> 43 mp.spawn(self.ddp_train, nprocs=nprocs, args=(self.mp_queue, model,))
44
45 def teardown(self, model):
/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in spawn(fn, args, nprocs, join, daemon, start_method)
198 ' torch.multiprocessing.start_process(...)' % start_method)
199 warnings.warn(msg)
--> 200 return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)
156
157 # Loop on join until it returns True or raises an exception.
--> 158 while not context.join():
159 pass
160
/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in join(self, timeout)
111 raise Exception(
112 "process %d terminated with exit code %d" %
--> 113 (error_index, exitcode)
114 )
115
Exception: process 0 terminated with exit code 1
If I change the one line to:
trainer = pl.Trainer(gpus=2, distributed_backend="ddp"), I get:
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
CUDA_VISIBLE_DEVICES: [0,1]
Hanging permanently (no progress after > 1 hour). If I interrupt this manually, I see the model is stuck on:
f.write(fp.getbuffer())
Within a multiprocess call.
However, I only got this traceback maybe 3 times out of 30, and didn't record it, and couldn't recreate it again to get the full traceback for this error.
with:
trainer = pl.Trainer(gpus=2, distributed_backend="ddp_spawn"), I get:
ExceptionTraceback (most recent call last)
<ipython-input-3-e6d489f1a457> in <module>
3 mnist_model = MNISTModel()
4 trainer = pl.Trainer(gpus=2, distributed_backend="ddp_spawn")
----> 5 trainer.fit(mnist_model, train_loader)
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py in wrapped_fn(self, *args, **kwargs)
46 if entering is not None:
47 self.state = entering
---> 48 result = fn(self, *args, **kwargs)
49
50 # The INTERRUPTED state can be set inside the run function. To indicate that run was interrupted
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule)
1050 self.accelerator_backend = DDPSpawnBackend(self)
1051 self.accelerator_backend.setup()
-> 1052 self.accelerator_backend.train(model, nprocs=self.num_processes)
1053 results = self.accelerator_backend.teardown(model)
1054
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_spawn_backend.py in train(self, model, nprocs)
41
42 def train(self, model, nprocs):
---> 43 mp.spawn(self.ddp_train, nprocs=nprocs, args=(self.mp_queue, model,))
44
45 def teardown(self, model):
/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in spawn(fn, args, nprocs, join, daemon, start_method)
198 ' torch.multiprocessing.start_process(...)' % start_method)
199 warnings.warn(msg)
--> 200 return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)
156
157 # Loop on join until it returns True or raises an exception.
--> 158 while not context.join():
159 pass
160
/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in join(self, timeout)
111 raise Exception(
112 "process %d terminated with exit code %d" %
--> 113 (error_index, exitcode)
114 )
115
Exception: process 0 terminated with exit code 1
PyTorch Version (e.g., 1.0): 1.4 / 1.6
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Python version: 3.6, 3.7
CUDA/cuDNN version: CUDA: 10.1, 10.2, 11.0
GPU models and configuration: GTX1080ti, GTX1080, GTX1070, P100
I've tried running this on 5 different VMs, and got the same issues on every single one. |
import pytorch_lightning [1] 2770657 illegal hardware instruction (core dumped) python3 | [
"bug",
"help wanted"
] | π Bug
Can't import pytorch_lightning.
I've seen this before with tensorflow on computers with older cpus and had to build the package from source. Is this the case here as well? PyTorch seems to run just fine on this cpu, does PyTorch Lightning not support it?
To Reproduce
Python 3.8.2 (default, Jul 16 2020, 14:00:26)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import pytorch_lightning
[1] 2770763 illegal hardware instruction (core dumped) python3
Code sample
Doesn't seem to occur with the base torch package. I am able to run the torch tutorial model here without issue: https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-custom-nn-modules
Environment
Since this error often seems related to CPU instruction sets:
$ sudo cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU W3690 @ 3.47GHz
stepping : 2
microcode : 0x1f
cpu MHz : 1783.276
cache size : 12288 KB
physical id : 0
siblings : 12
core id : 0
cpu cores : 6
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips : 6929.01
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU W3690 @ 3.47GHz
stepping : 2
microcode : 0x1f
cpu MHz : 1835.384
cache size : 12288 KB
physical id : 0
siblings : 12
core id : 1
cpu cores : 6
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips : 6929.01
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
gcc -march=native -Q --help=target | grep march
-march= westmere
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
PyTorch Version (e.g., 1.0): 0.9.0
OS (e.g., Linux): Linux (Ubuntu 20.04.1 LTS)
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.8.2
CUDA/cuDNN version: CUDA V10.0.130, cuDNN (7605 as returned by torch.backends.cudnn.version())
GPU models and configuration: GeForce GTX 1070
Any other relevant information: |
TypeError: cannot pickle 'socket' object | [
"question"
] | β Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
When I try to pass a socket.socket parameter to the model, trainer.fit(model) raises an error:
File "/disks/disk1/damon/remote_src/YPlatformServer/src/handler.py", line 79, in main
trainer.fit(model)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1064, in fit
results = self.accelerator_backend.train()
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/pytorch_lightning/accelerators/dp_backend.py", line 97, in train
results = self.trainer.run_pretrain_routine(model)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1169, in run_pretrain_routine
self.logger.save()
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py", line 27, in wrapped_fn
return fn(*args, **kwargs)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/pytorch_lightning/loggers/tensorboard.py", line 212, in save
save_hparams_to_yaml(hparams_file, self.hparams)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 368, in save_hparams_to_yaml
yaml.dump(hparams, fp)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/yaml/__init__.py", line 290, in dump
return dump_all([data], stream, Dumper=Dumper, **kwds)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/yaml/__init__.py", line 278, in dump_all
dumper.represent(data)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/yaml/representer.py", line 27, in represent
node = self.represent_data(data)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/yaml/representer.py", line 48, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/yaml/representer.py", line 207, in represent_dict
return self.represent_mapping('tag:yaml.org,2002:map', data)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/yaml/representer.py", line 118, in represent_mapping
node_value = self.represent_data(item_value)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/yaml/representer.py", line 52, in represent_data
node = self.yaml_multi_representers[data_type](self, data)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/site-packages/yaml/representer.py", line 317, in represent_object
reduce = data.__reduce_ex__(2)
File "/disks/disk1/damon/anaconda3/envs/pytorch/lib/python3.8/socket.py", line 272, in __getstate__
raise TypeError(f"cannot pickle {self.__class__.__name__!r} object")
TypeError: cannot pickle 'socket' object
Code
args: dict = update_params(vars(parser_tmp), params)
args['client']: socket.socket = client
pl.seed_everything(args['seed'])
model = p_model(**args) # it's ok here
trainer.fit(model) # occurred an error (described above)
What have you tried?
It seems trainer.fit() will pickle the model, and I know socket cannot be pickled, so I use a global variable. Are there any other methods to solve this problem?
What's your environment?
OS: [e.g. iOS, Linux, Win] Linux
Packaging [e.g. pip, conda] pip
Version [e.g. 0.5.2.1] 0.9.0 |
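The traceback points at the logger dumping hparams to YAML rather than at pickling the model itself, so one workaround (a sketch reusing the names from the snippet above, and only valid as long as the backend does not itself pickle the model, as ddp_spawn would) is to keep the socket out of the hparams dict and attach it as a plain attribute:
client_socket = args.pop('client')   # keep the non-picklable object out of hparams
model = p_model(**args)
model.client = client_socket         # plain attribute, not serialized with hparams
trainer.fit(model)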
multi-gpu training is slow in lightning | [
"help wanted",
"distributed"
] | π Bug
I have migrated my code from PyTorch to Lightning. However, I noticed the iteration time is almost double in Lightning. I have tried with one server and two servers (each with 4 V100 GPUs). It is always slower in Lightning. I am using the ddp distributed backend. However, I have noticed the same with horovod as well. I passed the number of GPUs to my script via the --gpus argument so I can set num_workers > 0.
Expected behavior
I expected that Lightning is just a wrapper over PyTorch. I don't get why I am seeing a reduction in training speed.
Environment
PyTorch: 1.6.0
PyTorch-Lightning: 0.9.0
How you installed PyTorch: pip
GPU models and configuration: (2 nodes with 4 V100 GPUs each). |
Lightning internal refactors | [
"feature",
"priority: 0",
"waiting on author",
"refactor"
] | π Bug
To Reproduce
Steps to reproduce the behavior:
Go to '...'
Run '....'
Scroll down to '....'
See error
Code sample
Expected behavior
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
PyTorch Version (e.g., 1.0):
OS (e.g., Linux):
How you installed PyTorch (conda, pip, source):
Build command you used (if compiling from source):
Python version:
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information:
Additional context |
log_gpu_memory should use a GPUStatsMonitor | [
"feature",
"help wanted",
"good first issue",
"won't fix",
"refactor"
] | π Bug
Trainer.log_gpu_memory should use a GPUStatsMonitor callback
The only extra feature of log_gpu_memory is the ability to print the min and max memory used. We could add that to GPUStatsMonitor if deemed necessary.
cc @rohitgr7
#2932 (comment) |
CPU training is broken for more than one process | [
"bug",
"help wanted",
"working as intended"
] | π Bug
open demo notebook on Colab
change MNIST trainer line to trainer = pl.Trainer(num_processes=1, progress_bar_refresh_rate=20). Observe that this works
change MNIST line to trainer = pl.Trainer(num_processes=2, progress_bar_refresh_rate=20). Observe that training does not commence. |
EarlyStopping not working / wrong keys in log | [
"bug",
"help wanted"
] | π Bug
I'm trying to implement EarlyStopping when the validation loss stops decreasing. I add the callback as follows:
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.l1_loss(y_hat, y)
result = pl.EvalResult(checkpoint_on=loss)
result.log("val_loss", loss, sync_dist=True)
return result
early_stop_callback = EarlyStopping(
monitor="val_loss",
min_delta=0.1,
patience=1,
verbose=True,
mode="min")
trainer = pl.Trainer(
gpus=-1,
max_epochs=50,
distributed_backend="ddp",
early_stop_callback=early_stop_callback,
logger=wandb_logger)
This does not work - it returns False from the _validate_condition_metric function.
When I checked what's in the log dictionary, the values looked like
{'val_early_stop_on': None, 'val_checkpoint_on': tensor(0.5601, device='cuda:0')} - which is slightly confusing. Where does 'val_checkpoint_on' come from, and why is it not called 'val_loss'?
It feels like it might be slightly connected to the result = pl.EvalResult(checkpoint_on=loss) line.
I was reading the documentation, but frankly speaking I found
checkpoint_on (Union[Tensor, bool, None]) - Metric to checkpoint on. to be slightly unintuitive. What does it mean for the metric to be checkpointed on? And is it really connected to the keys in the log being renamed in a strange way?
Code sample
https://github.com/matsuokalab/cosmoflow/blob/ac75fe317f8daf3444c96b837bb109064aa81dab/main.py
Expected behavior
Expecting EarlyStopping to work, log to have val_loss key
Environment
* CUDA:
- GPU:
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- available: True
- version: 10.2
* Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0
- tensorboard: 2.2.0
- tqdm: 4.46.1
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.2
- version: #1 SMP Fri Apr 20 16:44:24 UTC 2018 |
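For context, in 0.9 the built-in early stopping looks at the early_stop_on value of the EvalResult, which is logged under the reserved val_early_stop_on key (and checkpoint_on becomes val_checkpoint_on) - hence the dict above. A minimal sketch of wiring the same loss into both, under that assumption:
def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)
    loss = F.l1_loss(y_hat, y)
    # early_stop_on -> 'val_early_stop_on', checkpoint_on -> 'val_checkpoint_on';
    # the 'val_loss' name below is only used for logging
    result = pl.EvalResult(early_stop_on=loss, checkpoint_on=loss)
    result.log("val_loss", loss, sync_dist=True)
    return result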
How to determine the device from within a `LightningDataModule` | [
"question"
] | β Questions and Help
What is your question?
Is there a recommended way (or is it even at all possible) to determine which device is used from within the new LightningDataModule?
I ask the question because right now I decide how to set pin_memory when initializing the dataloaders based on the device. Prior to 0.9, the _dataloaders() methods were part of the LightningModule and I could simply access self.device to check the device the data would be transferred to.
Code
Example of what I did before 0.9:
def train_dataloader(self) -> DataLoader:
return DataLoader(..., pin_memory=self.device.type == "cuda")
What have you tried?
I've checked the doc and API of the LightningDataModule, but I don't see any indications of how to get the device.
What's your environment?
Version: 0.9.0 |
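One hedged workaround, since the datamodule has no device attribute of its own: decide based on GPU visibility, or pass an explicit flag into the datamodule's constructor. A minimal sketch (the dataset is a placeholder):
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class MyDataModule(pl.LightningDataModule):
    def __init__(self, pin_memory: bool = None):
        super().__init__()
        self.pin_memory = torch.cuda.is_available() if pin_memory is None else pin_memory

    def train_dataloader(self) -> DataLoader:
        dataset = TensorDataset(torch.randn(128, 3))  # placeholder dataset
        return DataLoader(dataset, batch_size=32, pin_memory=self.pin_memory)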
ValueError: All dicts must have the same number of keys on model evaluation output. | [
"bug",
"help wanted",
"strategy: dp"
] | Any ideas to debug this issue?
It is happening to me in many different models, after I refactored the Result logging in the training_step, validation_step and test_step methods, changing the old dictionary-based return to the new Result scheme, while training on two GPUs at the same time.
The error doesn't occur if I use distributed_backend='ddp' instead of 'dp' in the Trainer.
π Bug
When doing evaluation or test routines on Trainer (either with .fit evaluation at the end of an epoch or calling .test directly),
it throws ValueError: All dicts must have the same number of keys.
After seeing the error log I think it has something to do with the metric logging, but I can't figure out what exactly. The error pops up very inconsistently across epochs and runs, so I'm trying to find ideas on how I could get more details to get to the root of the issue.
Stack Trace:
File "model_manager.py", line 263, in <module>
helper.train()
File "model_manager.py", line 97, in train
self.trainer.fit(self.module)
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1064, in fit results = self.accelerator_backend.train()
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/accelerators/dp_backend.py", line 97, in train
results = self.trainer.run_pretrain_routine(model)
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1239, in run_pretrain_routine
self.train()
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 394, in train
self.run_training_epoch()
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 516, in run_training_epoch
self.run_evaluation(test_mode=False)
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 582, in run_evaluation
eval_results = self._evaluate(self.model, dataloaders, max_batches, test_mode)
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 333, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 661, in evaluation_forward
output = model(*args)
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/overrides/data_parallel.py", line 86, in forward
outputs = self.__gather_structured_result(outputs)
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/overrides/data_parallel.py", line 101, in __gather_structured_result
outputs = self.gather(outputs)
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/overrides/data_parallel.py", line 141, in gather
res = gather_map(outputs)
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/pytorch_lightning/overrides/data_parallel.py", line 129, in gather_map
raise ValueError('All dicts must have the same number of keys')
ValueError: All dicts must have the same number of keys
Exception ignored in: <function tqdm.__del__ at 0x7f83fe2ecb80>
Traceback (most recent call last):
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/tqdm/std.py", line 1087, in __del__
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/tqdm/std.py", line 1294, in close
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/tqdm/std.py", line 1472, in display
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/tqdm/std.py", line 1090, in __repr__
File "/data/anaconda3/envs/aidio2/lib/python3.8/site-packages/tqdm/std.py", line 1434, in format_dict
TypeError: cannot unpack non-iterable NoneType object
To Reproduce
Steps to reproduce the behavior:
Get a simple model for classification. For example, I used a torchvision ResNet model (resnet18 in the code below).
Implement training and validation methods returning Results objects. As shown here:
def training_step(self, batch, batch_idx):
"""
Lightning calls this inside the training loop
:param batch:
:return:
"""
# forward pass
x, y = batch['x'], batch['y']
y_pred = self.forward(x)
# calculate loss
loss = self.loss(y_pred, y)
result = ptl.TrainResult(loss)
result.log('train_loss', loss, prog_bar=True)
return result
def validation_step(self, batch, batch_idx):
"""
Lightning calls this inside the validation loop
:param batch:
:return:
"""
x, y = batch['x'], batch['y']
y_pred = self.forward(x)
# calculate loss
loss = self.loss(y_pred, y)
# calculate accurracy
labels_hat = torch.argmax(y_pred, dim=1)
accuracy = torch.sum(y == labels_hat).item() / (len(y) * 1.0)
accuracy = torch.tensor(accuracy)
if self.on_gpu:
accuracy = accuracy.cuda(loss.device.index)
# Checkpoint model based on validation loss
result = ptl.EvalResult(early_stop_on=None, checkpoint_on=loss)
result.log('val_loss', loss, prog_bar=True)
result.log('val_acc', accuracy, prog_bar=True)
return result
Run the Trainer so the training and evaluation steps execute a few times. The error will pop up at some random epoch; for me it usually appears within the first 20 epochs. Also, if I run Trainer.test() after a crashed epoch, it will probably fail with the same error.
Expected behavior
More detailed error. I think it has something to do with the Result objects but I cannot get more detail easily, as I'm running the models on a remote server.
Environment
PyTorch Version (e.g., 1.0): 1.6.0
OS (e.g., Linux): Arch Linux
How you installed PyTorch (conda, pip, source): conda
Python version: 3.8.5
CUDA/cuDNN version: 11.0
GPU models and configuration: 2 x GeForce GTX 1080 12 GB
Any other relevant information: It worked on previous versions of pytorch-lightning. The error doesn't occur if I use 'ddp' |
Distributed training on the Sun Grid Engine (SGE) | [
"won't fix"
] | Has anyone ever managed to get PyTorch Lightning's distributed training to work on the Sun Grid Engine (or also called Oracle Grid Engine)? PyTorch Lightning only supports distributed training on multiple nodes with SLURM.
In SLURM you can also easily set the number of nodes in the submission script, I believe the closest SGE has to that is using array jobs (qsub -t <n>) but then nodes are lacking the necessary communication interface for distributed training.
One would probably have to implement some kind of synchronization mechanism for the individual jobs in the job array to get them to share gradients, which could be placed into a custom pytorch_lightning.core.LightningModule.configure_ddp. Additionally, a custom subclass of torch.nn.parallel.DistributedDataParallel seems to be necessary, similar to PyTorch Lightning's LightningDistributedDataParallel.
Is anyone aware of an implementation for distributed training with PyTorch Lightning on the Sun Grid Engine? |
No error message when distributed_backend = "invalid choice", Trainer runs on CPU | [
"bug",
"help wanted"
] | π Bug
I'm trying to implement and run a new BERT-based model as usual, using the gpus option, but strangely my model is still running on the CPU. I know this from: 1. the training is too slow, 2. print(self.device) -> "cpu", 3. the logs (right below). I never encountered this before, so I'm confused. I'm using pytorch-lightning==0.9.0
GPU available: True, used: True
[2020-09-05 08:54:00,565][lightning][INFO] - GPU available: True, used: True
TPU available: False, using: 0 TPU cores
[2020-09-05 08:54:00,565][lightning][INFO] - TPU available: False, using: 0 TPU cores
CUDA_VISIBLE_DEVICES: [0]
[2020-09-05 08:54:00,566][lightning][INFO] - CUDA_VISIBLE_DEVICES: [0]
[GPU memory used, but GPU utility is zero]
I also attach a strange warning message I see here.
...\pytorch_lightning\utilities\distributed.py:37: UserWarning: Could not log
computational graph since the `model.example_input_array` attribute is not set or `input_array` was not given
The code for the model I initialize inside my LightningModule is here (ColBERT). Below is how I initialize my LightningModule. ColBERT.from_pretrained() initializes the model from the link. I print(self.device) at the end of __init__ and I see "cpu" as a result.
class ColBERTLightning(pl.LightningModule):
def __init__(self, hparams):
super().__init__()
self.hparams = hparams
# BERT-based sub-module initialized here
model_params = hparams.model
self.model = ColBERT.from_pretrained(
model_params.base,
query_maxlen=model_params.query_maxlen,
doc_maxlen=model_params.doc_maxlen,
dim=model_params.projection_dim,
similarity_metric=model_params.similarity_metric,
)
self.labels = torch.zeros(
hparams.train.batch_size, dtype=torch.long, device=self.device
)
print(self.device) # it prints "cpu" even when I use gpus=1
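As an aside (the CPU issue itself comes from the invalid distributed_backend string being silently accepted): self.device is still "cpu" inside __init__ because the Trainer only moves the module later. A buffer registered in __init__ is moved together with the module, so a sketch of the labels tensor above could be:
# registered buffers follow the module when the Trainer moves it to the GPU,
# unlike tensors created here with device=self.device (which is still "cpu")
self.register_buffer("labels", torch.zeros(hparams.train.batch_size, dtype=torch.long))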
This is the code for trainer. I'm using hydra and DataModule. I'm using pandas inside DataModule to load data.
@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
print(OmegaConf.to_yaml(cfg))
hparams = cfg
# if hparams.train.gpus is not None:
# hparams.train.gpus = str(hparams.train.gpus)
# init model
model = ColBERTLightning(hparams)
# init data module
data_dir = hparams.dataset.dir
batch_size = hparams.train.batch_size
dm = TripleTextDataModule(data_dir, batch_size=batch_size)
# dm.setup("fit")
# logger
source_files_path = str(Path(hydra.utils.get_original_cwd()) / "**/*.py")
## TODO: Neptune or wandb?
# # trainer
trainer = Trainer(
accumulate_grad_batches=hparams.train.accumulate_grad_batches,
distributed_backend=hparams.train.distributed_backend,
fast_dev_run=hparams.train.fast_dev_run,
gpus=hparams.train.gpus,
auto_select_gpus=True,
gradient_clip_val=hparams.train.gradient_clip_val,
max_steps=hparams.train.max_steps,
benchmark=True,
profiler=hparams.train.use_profiler,
# profiler=AdvancedProfiler(),
# sync_batchnorm=True,
# log_gpu_memory="min_max",
)
# # fit
trainer.fit(model, dm)
Environment
Two envs I've tested.
* CUDA:
- GPU:
- GeForce GTX 1080 Ti
- GeForce GTX 1080 Ti
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.5.1
- pytorch-lightning: 0.9.0
- tensorboard: 2.2.0
- tqdm: 4.48.2
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.6
- version: #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020
* CUDA:
- GPU:
- GeForce GTX 1070 Ti
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.5
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0
- tensorboard: 2.2.0
- tqdm: 4.46.1
* System:
- OS: Windows
- architecture:
- 64bit
- WindowsPE
- processor: AMD64 Family 23 Model 8 Stepping 2, AuthenticAMD
- python: 3.7.7
- version: 10.0.19041 |
Inconsistent behavior between `validation_epoch_end` and `training_epoch_end` | [
"bug",
"help wanted",
"priority: 0"
] | Inconsistent behavior between validation_epoch_end and training_epoch_end
Assigning a variable to the result in training_step (result.y = y) does not work the same way as in validation_step. Take a look at the code in the section below to make it clearer.
For context, I'm trying to calculate AUC after both the training and validation epochs. Calculating it per batch might not work, since a particular batch may contain labels for only one class, making the metric undefined for that step.
To Reproduce
Include this in a pl module:
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = self.criterion(y_hat, y)
result = pl.TrainResult(loss)
result.log('train_loss', loss)
y_pred = F.softmax(y_hat, dim=0)[:, 1]
result.y = y
result.y_pred = y_pred
return result
def training_epoch_end(self, outputs):
y = outputs.y
y_pred = outputs.y_pred
roc_auc_fn = pl.metrics.classification.AUROC()
roc_auc = roc_auc_fn(y_pred, y)
result = pl.TrainResult()
result.log('train_roc_auc', roc_auc)
return result
Then the stacktrace shows:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-48-01044cd5ab91> in <module>()
7 # train
8 trainer = pl.Trainer(gpus=1, max_epochs=50)
----> 9 trainer.fit(model, dm)
8 frames
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/states.py in wrapped_fn(self, *args, **kwargs)
46 if entering is not None:
47 self._state = entering
---> 48 result = fn(self, *args, **kwargs)
49
50 # The INTERRUPTED state can be set inside the run function. To indicate that run was interrupted
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule)
1071 """
1072 # --------------------------
-> 1073 # Setup??
1074 # --------------------------
1075 ref_model = model
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/gpu_backend.py in train(self, model)
49
50 self.trainer.model = model
---> 51
52 def train(self):
53 model = self.trainer.model
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py in run_pretrain_routine(self, model)
1237 trainer = Trainer()
1238 model = LightningModule()
-> 1239
1240 trainer.fit(model)
1241 trainer.test(test_dataloaders=test, ckpt_path='path/to/checkpoint.ckpt')
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py in train(self)
392
393 # early stopping
--> 394 met_min_epochs = epoch >= self.min_epochs - 1
395 met_min_steps = self.global_step >= self.min_steps if self.min_steps else True
396
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py in run_training_epoch(self)
548 # TODO bake this logic into the checkpoint callback
549 should_activate = not is_overridden('validation_step', self.get_model()) and not should_check_val
--> 550 if should_activate:
551 checkpoint_callbacks = [c for c in self.callbacks if isinstance(c, ModelCheckpoint)]
552 [c.on_validation_end(self, self.get_model()) for c in checkpoint_callbacks]
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py in run_training_epoch_end(self, epoch_output, checkpoint_accumulator, early_stopping_accumulator, num_optimizers)
655 epoch_log_metrics = {}
656 epoch_progress_bar_metrics = {}
--> 657 for opt_outputs in epoch_output:
658 # reduce across time first
659 time_reduced_outputs = []
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py in __gather_result_across_time_and_optimizers(self, epoch_output)
731 is_last_batch_for_infinite_dataset = (is_last_batch and self.val_check_batch == float('inf'))
732 should_check_val = can_check_val and (should_check_val or is_last_batch_for_infinite_dataset)
--> 733
734 return should_check_val
735
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py in padded_gather(cls, outputs)
336
337 if is_reserved:
--> 338 padding_key = default_padding_idx
339 else:
340 padding_key = meta[name]['tbptt_pad_token']
KeyError: 'y'
Code sample
Full module to give full context.
class LitModel(pl.LightningModule):
def set_parameter_requires_grad(self):
if self.feature_extract:
for param in self.model.parameters():
param.requires_grad = False
def __init__(self, num_classes, use_pretrained=True, feature_extract = True):
super().__init__()
self.feature_extract = feature_extract
self.criterion = nn.CrossEntropyLoss()
self.model = models.resnet18(pretrained=use_pretrained)
self.set_parameter_requires_grad()
num_ftrs = self.model.fc.in_features
self.model.fc = nn.Linear(num_ftrs, num_classes)
self.input_size = 224
def forward(self, x):
x = self.model(x)
return x
def configure_optimizers(self):
print("Params to learn:")
if self.feature_extract:
params_to_update = []
for name, param in self.model.named_parameters():
if param.requires_grad == True:
params_to_update.append(param)
print("\t",name)
else:
params_to_update = self.model.parameters()
for name,param in model_ft.named_parameters():
if param.requires_grad == True:
print("\t",name)
optimizer = optim.Adam(params_to_update, lr=0.0001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)
return optimizer
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = self.criterion(y_hat, y)
result = pl.TrainResult(loss)
result.log('train_loss', loss)
y_pred = F.softmax(y_hat, dim=0)[:, 1]
result.y = y
result.y_pred = y_pred
return result
def training_epoch_end(self, outputs):
y = outputs.y
y_pred = outputs.y_pred
roc_auc_fn = pl.metrics.classification.AUROC()
roc_auc = roc_auc_fn(y_pred, y)
result = pl.TrainResult()
result.log('train_roc_auc', roc_auc)
return result
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = self.criterion(y_hat, y)
result = pl.EvalResult(checkpoint_on=loss)
result.log('val_loss', loss)
y_pred = F.softmax(y_hat, dim=0)[:, 1]
result.y = y
result.y_pred = y_pred
return result
def validation_epoch_end(self, outputs):
y = outputs.y
y_pred = outputs.y_pred
roc_auc_fn = pl.metrics.classification.AUROC()
roc_auc = roc_auc_fn(y_pred, y)
result = pl.EvalResult(checkpoint_on=roc_auc)
result.log('val_roc_auc', roc_auc)
return result
Expected behavior
I would expect the behavior of both to be quite similar, in particular since both *_epoch_end methods have the same signature. The current docs do not mention how the two methods differ, so I assumed you could do the same in both.
Environment
PyTorch Lightning on Colab, installed with:
PyTorch Lightning version 0.9.0
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
PyTorch Version (e.g., 1.0):1.6.0+cu101
OS (e.g., Linux): colab |
val_dataloader is called twice in each worker | [
"help wanted",
"won't fix",
"docs"
] | π Bug
I'm trying a LightningDataModule class to manage the data.
Using horovod backend, if that matters.
I've noticed that each rank is calling train_dataloader once, but val_dataloader two times somehow.
To Reproduce
Run Lightning with the DataModule and horovod, and add a debug print showing when val_dataloader is called,
something like:
def train_dataloader(self):
print(f"\n#####worker {hvd.rank()} of {hvd.size()} creating train_loader\n")
return load_ds_from_dir(os.path.join(self.path, "train"), self.batch_size)
def val_dataloader(self):
print(f"\n#####worker {hvd.rank()} of {hvd.size()} creating val\n")
return load_ds_from_dir(os.path.join(self.path, "validation"), self.batch_size)
Expected behavior
expect val loader to be called only once...
Environment
* CUDA:
- GPU:
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- available: True
- version: 10.2
* Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0
- tensorboard: 2.2.0
- tqdm: 4.46.1
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.2
- version: #1 SMP Fri Apr 20 16:44:24 UTC 2018 |
Trainer was signaled to stop but required minimum epochs (100) or minimum steps (None) has not been met. Training will continue... | [
"feature",
"help wanted",
"won't fix"
] | When I initialise the trainer as follows:
trainer = pl.Trainer.from_argparse_args(args, early_stop_callback=True, min_epochs=100, logger=mlflow, gpus=[0])
I cannot halt training early with Ctrl-C, I get the message
Trainer was signaled to stop but required minimum epochs (100) or minimum steps (None) has not been met. Training will continue...
It seems to me that this message makes sense if the early stopping callback tries to stop before 100 epochs but Ctrl-C should really stop training unconditionally in my opinion. |
TypeError: 'generator' object is not callable | [
"help wanted",
"question"
] | I'm getting the exception TypeError: 'generator' object is not callable when I train with multiple GPU's
I'm not sure where it's coming from, my datasets are subclasses of torchtext.data.Dataset and the data loaders are torchtext.data.BucketIterator.
What's the easiest way of identifying what's causing the exception?
# create trainer
mlflow = loggers.MLFlowLogger("Transformer")
trainer = pl.Trainer.from_argparse_args(args, early_stop_callback=True, min_epochs=10, logger=mlflow, gpus=[0,1])
# prepare data
train_dataset, val_dataset = load_datasets(path=args.path, files=args.files)
train_loader, val_loader = create_dataloaders(train_dataset, val_dataset, batch_size=args.batch_size)
# init the model
hparams = vars(args)
transformer = Transformer(src_vocab=train_dataset.vocab, tgt_vocab=val_dataset.vocab, **hparams)
# train
trainer.fit(transformer, train_loader, val_dataloaders=val_loader)
transformer.freeze() |
How to log epoch's accuracy using Result object | [
"question",
"won't fix"
] | What is your question?
Hi,
Before version 0.9.0, I used to log the predictions and targets at each step, and then in the *_epoch_end method I aggregated the predictions and targets and used these aggregations as input to the Accuracy metric to calculate the epoch's accuracy.
Is there a way to use the Result object in order to achieve the same? Specifically, I would like to utilize the Result object's convenience feature that allows me to only define a *_step method without the necessity of defining an *_epoch_end method in the LightningModule |
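A hedged sketch of the closest 0.9 equivalent: log the per-batch accuracy with on_epoch=True so the Result machinery reduces it (mean by default) at epoch end. Note this is a mean of batch accuracies, which only equals the aggregated epoch accuracy when all batches have the same size; for exact aggregation over all predictions, an *_epoch_end method is still needed.
import torch
import pytorch_lightning as pl
from pytorch_lightning.metrics.functional import accuracy

def validation_step(self, batch, batch_idx):
    x, y = batch
    preds = torch.argmax(self(x), dim=1)
    result = pl.EvalResult()
    result.log('val_acc', accuracy(preds, y), on_step=False, on_epoch=True)
    return result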
mlflow training loss not reported until end of run | [
"bug",
"help wanted"
] | I think I'm logging correctly, this is my training_step
result = pl.TrainResult(loss)
result.log('loss/train', loss)
return result
and validation_step
result = pl.EvalResult(loss)
result.log('loss/validation', loss)
return result
The validation loss is updated in mlflow each epoch; however, the training loss isn't displayed until training has finished. Then it's available for every step. This may be an mlflow rather than a pytorch-lightning issue - somewhere along the line it seems to be buffered?
Versions:
pytorch-lightning==0.9.0
mlflow==1.11.0
Edit: logging TrainResult with on_epoch=True results in the metric appearing in mlflow during training, it's only the default train logging which gets delayed. i.e.
result.log('accuracy/train', acc, on_epoch=True)
is fine |
Gradient Flow Summary | [
"question",
"won't fix"
] | β Questions and Help
Before asking:
search the issues. done
search the docs. done
What is your question?
I want to add a summary where I can track the gradient flow in my model.
The out-of-the-box gradient tracking is not sufficient for me because I need a more customized behavior.
I wanted to do it using a callback, but I couldn't find a way to access the gradients from a callback.
I noticed there is a model hook, model.on_before_zero_grad(optimizer), which seems to have access to the gradients before resetting them, but it doesn't quite fit my use case; I would also want access to training-state information when logging.
What is the recommended way to go about it?
thanks,
Alex
Code
What have you tried?
I was aiming for something of this sort:
class TBGradFlowImageSummary(pl.Callback):
def on_train_batch_end_before_zero_grad(self, trainer, pl_module, batch, batch_idx, dataloader_idx):
isLastBatch = batch_idx == len(trainer.train_dataloaders) -1
if isLastBatch:
pl_module.logger.experiment.log_something()
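One possible alternative (a sketch assuming the TensorBoard logger): the LightningModule hook on_after_backward runs while the gradients are still populated and has full access to training state such as self.global_step and self.current_epoch:
def on_after_backward(self):
    if self.global_step % 100 == 0:  # throttle the logging
        for name, p in self.named_parameters():
            if p.grad is not None:
                self.logger.experiment.add_histogram(
                    f'grad/{name}', p.grad, global_step=self.global_step
                )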
What's your environment?
OS: [Linux]
Version [0.9.0]
thanks |
`before_batch_transfer` and `after_batch_transfer` hooks in LightningDataModule | [
"feature",
"data handling",
"design"
] | π Feature
Can we have two additional hooks in LightningDataModule?
Something like before_batch_transfer_to_device, after_batch_transfer_to_device, although these can be renamed to something else.
Motivation
before_batch_transfer_to_device: This can help apply transformations or augmentations to a batch before it is transferred to the device.
after_batch_transfer_to_device: This can help apply transformations or augmentations to a batch after it is transferred to the device. For eg we can perform data augmentation in the GPU for a batch using Kornia.
Alternatives
As of now, it can be done by overriding the transfer_batch_to_device hook but a separate hook makes more sense here I think.
If approved I can send a PR :) |
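For reference, a sketch of the transfer_batch_to_device workaround mentioned under Alternatives - move the batch with the default behaviour, then augment on-device (the augmentation below is a placeholder):
import torch
import pytorch_lightning as pl

class AugmentingModule(pl.LightningModule):
    def transfer_batch_to_device(self, batch, device):
        batch = super().transfer_batch_to_device(batch, device)
        x, y = batch
        x = x + 0.01 * torch.randn_like(x)  # placeholder on-device augmentation
        return x, y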
Specify mlflow<1.11 | [
"feature",
"help wanted"
] | π Feature
Specify mlflow<1.11 in the environment.yml file.
Motivation
As I pointed out in this issue in the mlflow repo, the most recent version of that package creates an mlruns directory at the location of any script that imports it. Importing PL also results in this directory being created. This is problematic since it pollutes development environments, and especially test suites.
Pitch
Specify a maximum mlflow version.
Alternatives
Add mlruns to the gitignore. |
Iterations completing out of order (possibly) in ddp with torchelastic? | [
"bug",
"help wanted",
"waiting on author",
"distributed"
] | This might be a bug or might be expected. I'm running PyTorch Lightning with torchelastic and ddp. I'm noticing the iterations are being dumped out of order (below, iteration 632 precedes iteration 574). This could be due to delays in parallel writing... or perhaps just issues in logging. Is this expected behavior?
Validating: 60it [00:21, 3.61it/s]
Epoch 26: : 632it [08:13, 1.28it/s, loss=0.111, v_num=0]
Validating: 62it [00:22, 4.62it/s]
Validating: 0it [00:00, ?it/s]
Epoch 26: : 572it [07:51, 1.21it/s, loss=0.111, v_num=0]
Validating: 2it [00:00, 18.62it/s]
Epoch 26: : 574it [07:52, 1.22it/s, loss=0.111, v_num=0]
Running with 6 gpus in ddp. |
Models defined by data | [
"docs"
] | Is this correct (https://pytorch-lightning.readthedocs.io/_/downloads/en/latest/pdf/ pg17)
The first code example shows prepare_data() and setup() being called on the dataloader, but then it is not passed to fit()?
The second box has no explanation - I assume it's an alternative to the first code example? |
Loss display during epoch | [
"question",
"won't fix"
] | During training, I'm using a custom loss function to train my model. However, the loss is displayed as 0.000, but when I log the same value as a different variable it shows 4.73e-5 (a value in exponential format).
Epoch 80: 10%|βββ | 100/1013 [01:33<14:11, 1.07it/s, loss=0.000, v_num=None, train_loss=4.73e-5]
Both loss and train_loss hold the same value. Why does one display in exponential format while the other doesn't?
Could this prevent the model from converging? When I use the same parameters to train the model in the normal way it converges, but with PyTorch Lightning the model doesn't converge beyond a certain limit. |
Early stopping does not work with structured result returned in training_epoch_end | [
"bug",
"help wanted"
] | π Bug
Situation: A structured result is returned in training_epoch_end and early_stop_on was set.
Illustrative example:
def training_epoch_end(self, outputs):
avg_loss = outputs['minimize'].mean().squeeze()
result = pl.TrainResult(early_stop_on=avg_loss)
return result
Expected Behaviour: early stopping will be triggered
Actual Behaviour: it is not triggered
To Reproduce
Run the code sample below.
Code sample
Try running my example: https://gist.github.com/Lucas-Steinmann/f900b3ef5636028d5917d4cb0183a291
Expected behavior
Training should stop early after about 20 epochs, since the metric given to early_stop_on is artificially increased.
Environment
CUDA:
- GPU:
- GeForce GTX 1080 Ti
- available: True
- version: 10.1
Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.1rc1
- tensorboard: 2.2.0
- tqdm: 4.48.2
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.3
- version: #116~16.04.1-Ubuntu SMP Wed Aug 26 17:36:48 UTC 2020
Additional context
First discussed here: #3193 (comment) |
turning off GPU usage monitoring | [] | I'm using pl with Neptune logger, and get the following error every few secs:
NVMLError: Not Supported - GPU usage metrics may not be reported
Is there a quick way to turn off the GPU usage monitoring?
Thanks,
Yair |
DDP doesn't work properly with CUDA_VISIBLE_DEVICES | [
"bug",
"help wanted",
"priority: 0",
"distributed"
] | π Bug
DDP trainer doesn't work properly when CUDA_VISIBLE_DEVICES is set.
To Reproduce
Steps to reproduce the behavior:
Set CUDA_VISIBLE_DEVICES=1,2
Run DDP trainer with 2 GPUs
The main process will use available_gpus[self.trainer.local_rank] that is equal to 1
The second process will use GPU process_idx that is again equal to 1
Thus both processes will use the same single GPU, instead of both
Expected behavior
Training should use both GPUs.
Suggestion
Looks like the following if-statement causes the problem:
pytorch-lightning/pytorch_lightning/accelerators/ddp_backend.py
Line 204
in
5b4db52
if is_master:
Why do we use int(available_gpus[self.trainer.local_rank]) instead of simple process_idx?
As far as I understand, the master process should always use GPU 0, which is equal to the first GPU in the CUDA_VISIBLE_DEVICES list. Please correct me if I am wrong. |
How to disable printouts about GPU/TPU | [
"question"
] | How can I disable these printouts? |
Cometml Logger epoch is not set. | [
"feature",
"help wanted",
"logger"
] | π Bug
While logging using comet ml there is an argument to set epoch https://www.comet.ml/docs/python-sdk/Experiment/#experimentlog_metrics
The info is available in the metrics dict, but instead of being passed as an arg, it is passed as a metrics value. I will supply a PR in a moment |
TypeError: can't pickle _thread.lock objects - Error while logging model into mlflow in multi gpu scenario | [
"question",
"won't fix"
] | β Questions and Help
What is your question?
Trying to log the model into MLflow using mlflow.pytorch.log_model at train end. Getting the above error only in the multi-GPU scenario.
Code
mnist script file -
import pytorch_lightning as pl
import torch
from argparse import ArgumentParser
#from mlflow.pytorch.pytorch_autolog import __MLflowPLCallback
from pytorch_lightning.logging import MLFlowLogger
from sklearn.metrics import accuracy_score
from torch.nn import functional as F
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms
class LightningMNISTClassifier(pl.LightningModule):
def __init__(self):
"""
Initializes the network
"""
super(LightningMNISTClassifier, self).__init__()
# mnist images are (1, 28, 28) (channels, width, height)
self.layer_1 = torch.nn.Linear(28 * 28, 128)
self.layer_2 = torch.nn.Linear(128, 256)
self.layer_3 = torch.nn.Linear(256, 10)
# transforms for images
self.transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
@staticmethod
def add_model_specific_args(parent_parser):
parser = ArgumentParser(parents=[parent_parser], add_help=False)
parser.add_argument(
"--batch-size",
type=int,
default=64,
metavar="N",
help="input batch size for training (default: 64)",
)
parser.add_argument(
"--num-workers",
type=int,
default=0,
metavar="N",
help="number of workers (default: 0)",
)
parser.add_argument(
"--lr",
type=float,
default=1e-3,
metavar="LR",
help="learning rate (default: 1e-3)",
)
return parser
def forward(self, x):
"""
Forward Function
"""
batch_size, channels, width, height = x.size()
# (b, 1, 28, 28) -> (b, 1*28*28)
x = x.view(batch_size, -1)
# layer 1 (b, 1*28*28) -> (b, 128)
x = self.layer_1(x)
x = torch.relu(x)
# layer 2 (b, 128) -> (b, 256)
x = self.layer_2(x)
x = torch.relu(x)
# layer 3 (b, 256) -> (b, 10)
x = self.layer_3(x)
# probability distribution over labels
x = torch.log_softmax(x, dim=1)
return x
def cross_entropy_loss(self, logits, labels):
"""
Loss Fn to compute loss
"""
return F.nll_loss(logits, labels)
def training_step(self, train_batch, batch_idx):
"""
training the data as batches and returns training loss on each batch
"""
x, y = train_batch
logits = self.forward(x)
loss = self.cross_entropy_loss(logits, y)
return {"loss": loss}
def validation_step(self, val_batch, batch_idx):
"""
Performs validation of data in batches
"""
x, y = val_batch
logits = self.forward(x)
loss = self.cross_entropy_loss(logits, y)
return {"val_loss": loss}
def validation_epoch_end(self, outputs):
"""
Computes average validation accuracy
"""
avg_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
tensorboard_logs = {"val_loss": avg_loss}
return {"avg_val_loss": avg_loss, "log": tensorboard_logs}
def test_step(self, test_batch, batch_idx):
"""
Performs test and computes test accuracy
"""
x, y = test_batch
output = self.forward(x)
a, y_hat = torch.max(output, dim=1)
test_acc = accuracy_score(y_hat.cpu(), y.cpu())
return {"test_acc": torch.tensor(test_acc)}
def test_epoch_end(self, outputs):
"""
Computes average test accuracy score
"""
avg_test_acc = torch.stack([x["test_acc"] for x in outputs]).mean()
return {"avg_test_acc": avg_test_acc}
def prepare_data(self):
"""
Preprocess the input data.
"""
return {}
def train_dataloader(self):
"""
Loading training data as batches
"""
mnist_train = datasets.MNIST(
"dataset", download=True, train=True, transform=self.transform
)
return DataLoader(
mnist_train,
batch_size=64,
num_workers=1
)
def val_dataloader(self):
"""
Loading validation data as batches
"""
mnist_train = datasets.MNIST(
"dataset", download=True, train=True, transform=self.transform
)
mnist_train, mnist_val = random_split(mnist_train, [55000, 5000])
return DataLoader(
mnist_val,
batch_size=64,
num_workers=1
)
def test_dataloader(self):
"""
Loading test data as batches
"""
mnist_test = datasets.MNIST(
"dataset", download=True, train=False, transform=self.transform
)
return DataLoader(
mnist_test,
batch_size=64,
num_workers=1
)
def configure_optimizers(self):
"""
Creates and returns Optimizer
"""
self.optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
self.scheduler = {
"scheduler": torch.optim.lr_scheduler.ReduceLROnPlateau(
self.optimizer,
mode="min",
factor=0.2,
patience=2,
min_lr=1e-6,
verbose=True,
)
}
return [self.optimizer], [self.scheduler]
def optimizer_step(
self,
epoch,
batch_idx,
optimizer,
optimizer_idx,
second_order_closure=None,
on_tpu=False,
using_lbfgs=False,
using_native_amp=False,
):
self.optimizer.step()
self.optimizer.zero_grad()
if __name__ == "__main__":
from pytorch_autolog import autolog
autolog()
model = LightningMNISTClassifier()
mlflow_logger = MLFlowLogger(
experiment_name="Default", tracking_uri="http://localhost:5000/"
)
trainer = pl.Trainer(
logger=mlflow_logger,
gpus=2,
distributed_backend="ddp",
max_epochs=1
)
trainer.fit(model)
trainer.test()
Sample code from autolog - Callback class.
class __MLflowPLCallback(pl.Callback):
def __init__(self):
super().__init__()
def on_train_end(self, trainer, pl_module):
"""
Logs the model checkpoint into mlflow - models folder on the training end
"""
mlflow.set_tracking_uri(trainer.logger._tracking_uri )
mlflow.set_experiment(trainer.logger._experiment_name)
mlflow.start_run(trainer.logger.run_id)
mlflow.pytorch.log_model(trainer.model, "models")
mlflow.end_run()
Stack Trace
Traceback (most recent call last):
File "mnist.py", line 231, in <module>
trainer.fit(model)
File "/home/ubuntu/mnist/pytorch_autolog.py", line 218, in fit
return _run_and_log_function(self, original, args, kwargs)
File "/home/ubuntu/mnist/pytorch_autolog.py", line 209, in _run_and_log_function
result = original(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 992, in fit
results = self.spawn_ddp_children(model)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 462, in spawn_ddp_children
results = self.ddp_train(local_rank, q=None, model=model, is_master=True)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 560, in ddp_train
results = self.run_pretrain_routine(model)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine
self.train()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 392, in train
self.run_training_teardown()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 872, in run_training_teardown
self.on_train_end()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_hook.py", line 72, in on_train_end
callback.on_train_end(self, self.get_model())
File "/home/ubuntu/mnist/pytorch_autolog.py", line 120, in on_train_end
mlflow.pytorch.log_model(trainer.model, "models")
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/mlflow/pytorch/__init__.py", line 179, in log_model
signature=signature, input_example=input_example, **kwargs)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/mlflow/models/model.py", line 154, in log
**kwargs)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/mlflow/pytorch/__init__.py", line 300, in save_model
torch.save(pytorch_model, model_path, pickle_module=pickle_module, **kwargs)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 370, in save
_legacy_save(obj, opened_file, pickle_module, pickle_protocol)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 443, in _legacy_save
pickler.dump(obj)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/cloudpickle/cloudpickle.py", line 491, in dump
return Pickler.dump(self, obj)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 662, in save_reduce
save(state)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 659, in save_reduce
self._batch_setitems(dictitems)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 890, in _batch_setitems
save(v)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 662, in save_reduce
save(state)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 662, in save_reduce
save(state)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 662, in save_reduce
save(state)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/home/ubuntu/anaconda3/lib/python3.7/pickle.py", line 524, in save
rv = reduce(self.proto)
TypeError: can't pickle _thread.lock objects
What have you tried?
Tried out the possibilities mentioned in the similar thread - #2186
Tried wrapping the code inside trainer.is_global_zero, and also tried trainer.global_rank == 0. Also tried decorating the method with @rank_zero_only. But no luck; I get the same error.
What's your environment?
OS: Ubuntu
Packaging - torch, pytorch-lightning, torchvision, mlflow |
Very slow training on SLURM cluster | [
"bug",
"help wanted",
"priority: 0"
] | π Bug
When switching from my local machine (old and slow 960m laptop GPU) to a SLURM cluster with Titan X GPUs, I see a significant drop in performance from 4.71it/s to 7s/it (!), so basically training becomes unusably slow.
To Reproduce
Take a model and submit it as a SLURM job. Happy to provide further details if requested, not sure what is relevant here.
2020-09-10 22:11:58,979 : lightning : INFO GPU available: True, used: True
2020-09-10 22:11:58,979 : lightning : INFO TPU available: False, using: 0 TPU cores
2020-09-10 22:11:58,979 : lightning : INFO CUDA_VISIBLE_DEVICES: [0,1]
2020-09-10 22:12:01,698 : lightning : INFO Set SLURM handle signals.
Epoch 1: 40%|ββββ | 150/377 [19:11<29:02, 7.68s/it, loss=2.401, v_num=19]
Code sample
The trainer is set up as follows:
logger = TensorBoardLogger(str(DATA_DIR_PATH / 'lightning_logs'), name=Path(__file__).stem)
trainer_kwargs = {'logger': logger,
'default_root_dir': str(DATA_DIR_PATH / 'lightning_logs'),
'val_check_interval': 0.2, # check (1 / value) * times per train epoch
'gpus': 1,
# 'distributed_backend': 'ddp', # checked with ddp enabled, 2 GPUs, multiple workers etc., same slow behaviour
'fast_dev_run': False}
trainer = pl.Trainer(**trainer_kwargs)
Environment
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.4.0
pytorch-lightning: 0.8.5
tensorboard: 1.15.0
tqdm: 4.43.0
PyTorch Version (e.g., 1.0):
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): conda
Python version: 3.7.9
CUDA/cuDNN version: cudatoolkit 10.0
GPU models and configuration: TitanX (on SLURM cluster) |
Model summarize displayed twice before training starts | [
"bug",
"help wanted",
"good first issue"
] | π Bug
Model summarize is called in two spots, which results in duplication in the logs:
It's called once in the training loop: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/training_loop.py#L160
And I'm unsure how it's called again
The first time is with the logger.info(...) since it's formatted. The second time it appears to be logged with print(...) since the table formatting doesn't handle newlines
To Reproduce
Run any training with weights_summary set on the trainer
Expected behavior
We only call summarize once |
Missing details in the document for the random seed | [
"docs"
] | π Documentation
In the documentation, Lightning very thoughtfully reminds readers to set the random seed when using distributed data parallel. However, it does not mention where we should set it. Based on my experience with PyTorch, the seed can be set after calling torch.distributed.init_process_group. For Lightning, this call is handled by the trainer, so where should we set the random seed? One thought is to set the seed in the __init__() of our LightningModule; any other suggestions? What would be the best practice for Lightning?
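One common pattern (though whether it is sufficient under DDP is exactly the open question here) is to call seed_everything at the top of the training script, before the model and Trainer are constructed:
from pytorch_lightning import Trainer, seed_everything
seed_everything(42)  # seeds python's random, numpy and torch before anything else runs
trainer = Trainer(gpus=2, distributed_backend="ddp")
# trainer.fit(model)  # model: your LightningModule, constructed after seeding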
Also, I believe that the seed_everything() function is called in the wrong place on the lightning's official example. |
Enabling batch transforms | [
"feature",
"help wanted",
"won't fix",
"docs"
] | π Feature
Hi ! Well the feature is quite simple, it is to have a simple way to code batch level transforms without having to put them directly in training_step.
Motivation
Motivation comes from the fact that in a project of mine, I want to change the color space of my input images directly on the batch rather than on individual images, so that I gain some time. The problem is I didn't find a way to automate that using Lightning. When going through the code and documentation, I even stumbled upon something that confused me a lot: in the documentation for LightningDataModule, there are numerous examples of places where transforms are passed directly to a PyTorch DataLoader constructor. However, we can't do that; there is no transform argument in the PyTorch DataLoader. I looked for an override somewhere in Lightning's source code, and tried to find somewhere in the code where the batch could get modified based on transforms, but I couldn't find any. So please, if I missed something just tell me and I guess this issue can be closed. Otherwise, I'm pretty sure this feature could come in handy for others besides me.
Pitch
The idea is to allow the user to specify batch transforms in a LightningDataModule or somewhere in the trainer and have them be called automatically after the dataloader is iterated through. The idea for me is to have them be called right after the batch is transferred to the GPU in training_forward. I'm pretty sure further details need to be discussed, as the implementation will depend on the coding philosophy of the library.
Alternatives
For now, the alternative I use is to manually insert the batch transform in training_step. This could also be automated by adding a hook that allows batch modification right before training_step. It would allow for broader use and could be what is used behind the scenes for a higher-level batch transform feature. Again, the details probably need to be discussed.
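To make that workaround concrete, a minimal sketch (the grayscale conversion is a placeholder for any batched transform, e.g. a Kornia color-space conversion):
import torch
import torch.nn.functional as F
import pytorch_lightning as pl
def to_grayscale(batch_images):
    # (B, 3, H, W) -> (B, 1, H, W), applied once per batch on whatever device the batch lives on
    return batch_images.mean(dim=1, keepdim=True)
class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(32 * 32, 10)
    def training_step(self, batch, batch_idx):
        x, y = batch                       # x: (B, 3, 32, 32)
        x = to_grayscale(x)                # batch-level transform, inserted manually here
        logits = self.net(x.flatten(1))
        loss = F.cross_entropy(logits, y)
        return {"loss": loss}
    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)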
Additional context
I'll add a screenshot of the confusing part of the documentation I was referring to earlier.
I'll also add that I am willing to work on this feature myself once details are set. |
Logging Intervals and Evaluation Intervals | [
"feature",
"help wanted"
] | π Feature
Logging and validation intervals should be performed using the global step.
Motivation
Currently, the row_log_interval and val_check_interval do not work as intended. If my batch size is large or my training set size is small, the number of batches per epoch is also small. Row log interval and validation check interval seem to use the steps within the epoch, rather than the global step count.
This means that logging does not occur if row_log_interval > num batches per epoch. For the validation check interval, the trainer throws an exception.
Pitch
Both intervals should rely on the global step count or count steps across epochs to solve the issues mentioned above.
Alternatives
Before initializing the trainer, I've used the dataloader/dataset to calculate the number of batches per epoch, and then used check_val_every_n_epochs. However this is not ideal if the get_dataloader is baked into the lightning module. It also does not allow for fine grain control over when to run validation.
Additional context
It might also be nice to have val_check_interval based on max_steps. For example, if we have max_steps=10000 and want to validate ten times through training, we could set val_check_interval=0.1 or equivalently, val_check_interval=1000 |
How optimizer frequencies work | [
"question"
] | β Questions and Help
What is your question?
I don't understand how optimizer frequencies work.
#1269
Code
When I tried to work with them as a list of optimizers, I got this error
It arises after training_step()
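For reference, a minimal sketch of the dict-based return format that #1269 describes, as far as I can tell (the networks below are just placeholders):
import torch
import torch.nn as nn
import pytorch_lightning as pl
class WGANLike(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.generator = nn.Linear(16, 16)      # placeholder networks
        self.discriminator = nn.Linear(16, 1)
    def configure_optimizers(self):
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=2e-4)
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=2e-4)
        # the critic is used for 5 consecutive batches, then the generator for 1;
        # training_step still receives optimizer_idx to tell the two apart
        return (
            {"optimizer": opt_d, "frequency": 5},
            {"optimizer": opt_g, "frequency": 1},
        )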
What's your environment?
OS: [Win]
Packaging [conda]
Version [0.9.0]
P.S.
If you have a PyTorch Lightning WGAN implementation or something with n_critics, I would appreciate it if you could share :) |
Turn off torch profilers for faster training | [
"feature",
"help wanted",
"won't fix"
] | π Feature
Motivation
By default, PyTorch does not disable the autograd and other profilers while training; there are a lot of them listed in one of the talks here.
To enhance training speed, it is recommended to turn them off manually.
Pitch
Since this feature should be on by default (it gives a good stack trace for errors), can we provide a utility function, as we did for seeding everything?
from utils.profiler import profiler
profiler(state="off")  # to turn off
profiler(state="on")   # to turn on
Something like this would make it easy to toggle the profiler.
Alternatives
Already users can use a context manager using pytorch (but nobody does sadly)
with torch.autograd.profiler.profile(enabled=False) as prof:
PyTorch code
Alternatively, it can be an argument in trainer itself. But I guess this is not good.
Trainer(profiler=off)
Additional context
Do verify whether we are following the other recommended guidelines by NVIDIA.
szymon_migacz-pytorch-performance-tuning-guide.pdf |
wandb logger.experiment does not seem to be the same as run object in wandb API | [
"question"
] | β Questions and Help
What is your question?
I use PyTorch Lightning + the wandb logger. I do not know how to extract the training history (training losses, validation losses, ...) from PyTorch Lightning or from the logger.
What have you tried?
In the docs, the logger has a property (experiment) returning a wandb run object
https://pytorch-lightning.readthedocs.io/en/latest/loggers.html?highlight=wandb#weights-and-biases
this Run object looks like the run object described here
https://docs.wandb.com/library/reference/wandb_api#run
but it is missing some elements (no member called state) and some functionality. For example, the member history is not callable, so I cannot obtain information about the training history.
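One workaround might be to query the run through wandb's public API object after training, instead of logger.experiment (the entity/project/run-id path below is a placeholder):
import wandb
api = wandb.Api()
run = api.run("my-entity/my-project/1abc234d")  # placeholder run path
history_df = run.history()                      # pandas DataFrame with one row per logged step
print(history_df.columns)
print(history_df.head())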
What's your environment?
OS: Linux
Packaging: pip
Version latest versions in pip (wandb 0.10, pytorch-lightning 0.9.1) |
Add an example of how to subclass base DDP to docs | [
"docs",
"priority: 0"
In some cases you want to customize DDP. We need to add an example of that to our docs. |
Add missing callback hook for optimizer dicts | [
"feature",
"won't fix",
"priority: 0",
"design"
] | Need a callback hook to consolidate the state of optimizer dicts before saving the final state_dict during model checkpointing. |
Looking for an example with custom metric for early_stop | [
"question"
] | β Questions and Help
What is your question?
Hi!
I was checking the docs for early_stop_callback and the only example I could find was https://pytorch-lightning.readthedocs.io/en/0.4.9/examples/Examples/#trainer-example. Is there any sample code where this is used? I'm looking for a Lightning model that returns a custom metric, where early stopping is based on that metric.
In my case, I'm working on NLP and the metric WER (word error rate) is my main focus during training. I would like to configure training so that it stops when WER on the test set doesn't decrease for 2 epochs. Is this possible?
Code
This is my test step, if it helps; right now I'm logging WER but my training loop doesn't use it for early stopping.
def test_step(self, batch, batch_idx):
spectrograms, labels, input_lengths, label_lengths = batch
y_hat = self(spectrograms) # (batch, time, n_class)
output = F.log_softmax(y_hat, dim=2)
output = output.transpose(0, 1) # (time, batch, n_class)
loss = self.criterion(output, labels, input_lengths, label_lengths)
decoded_preds, decoded_targets = GreedyDecoder(output.transpose(0, 1), labels, label_lengths)
n_correct_pred = sum([int(a == b) for a, b in zip(decoded_preds, decoded_targets)])
test_cer, test_wer = [], []
for j in range(len(decoded_preds)):
test_cer.append(cer(decoded_targets[j], decoded_preds[j]))
test_wer.append(wer(decoded_targets[j], decoded_preds[j]))
avg_cer = sum(test_cer) / len(test_cer)
avg_wer = sum(test_wer) / len(test_wer)
logs = {
"cer": avg_cer,
"wer": avg_wer,
}
return {
"val_loss": loss,
"n_correct_pred": n_correct_pred,
"n_pred": len(spectrograms),
"log": logs,
}
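A minimal sketch of wiring early stopping to such a metric (this assumes wer is also computed and logged from the validation loop, since early stopping is driven by validation-time metrics, not test-time ones):
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping
# stop when `wer` has not improved (decreased) for 2 consecutive validation runs
early_stop = EarlyStopping(monitor="wer", mode="min", patience=2, verbose=True)
trainer = pl.Trainer(early_stop_callback=early_stop, max_epochs=50)
# trainer.fit(model)  # model: the LightningModule shown above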
What's your environment?
OS: [Linux, ]
Packaging [pip, ]
Version [0.9.0] |
DDP TensorMetric and NumpyMetric exception | [
"bug",
"help wanted",
"priority: 0"
] | π Bug
When trying to train with metrics inherited from TensorMetric or NumpyMetric, an exception occurs.
pytorch 1.6.0
pytorch-lightning == 0.9.0
CUDA_VISIBLE_DEVICES: [0,1,2]
Traceback (most recent call last):
File "/test.py", line 38, in <module>
main()
File "/test.py", line 34, in main
trainer.test(pl_model)
File "/lib/python3.8/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1355, in test
results = self.__test_given_model(model, test_dataloaders)
File "/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1418, in __test_given_model
results = self.fit(model)
File "/lib/python3.8/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1052, in fit
self.accelerator_backend.train(model, nprocs=self.num_processes)
File "/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_spawn_backend.py", line 43, in train
mp.spawn(self.ddp_train, nprocs=nprocs, args=(self.mp_queue, model,))
File "/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 149, in start_processes
process.start()
File "/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object '_apply_to_outputs.<locals>.decorator_fn.<locals>.new_func' |
RPC support | [
"feature",
"help wanted",
"won't fix"
] | π Feature
PyTorch 1.6 adds RPC. When will that be added to pytorch-lightning? |
Allow auto-reduce of Result even if _epoch_end is implemented | [
"duplicate",
"feature",
"help wanted",
"design"
] | π Feature
Currently, if Result is used and *_epoch_end is implemented, auto-reduce of result object is skipped.
The code for the validation loop is linked below. The same logic applies to the train and test loops.
pytorch-lightning/pytorch_lightning/trainer/evaluation_loop.py
Lines 202 to 210
in
3281586
if is_overridden('validation_epoch_end', model=model):
if using_eval_result:
eval_results = self.__gather_epoch_end_eval_results(outputs)
eval_results = model.validation_epoch_end(eval_results)
user_reduced = True
if using_eval_result and not user_reduced:
eval_results = self.__auto_reduce_result_objs(outputs)
So "doing something special with the outputs" together with auto-reduce is not possible. Reduce of all metrics must be done manually by the user in _epoch_end. I think a mechanism to use auto-reduce while adding something special on top of that would be beneficial. |
Logging to progress bar doesn't work when calling trainer.test() | [
"bug",
"help wanted",
"working as intended"
] | π Bug
The metric logged by EvalResult.log() doesn't show up in the progress bar when setting prog_bar=True.
Code sample
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader
import pytorch_lightning as pl
class MyModel(pl.LightningModule):
def __init__(self):
super(MyModel, self).__init__()
self.layer = nn.Linear(64, 1)
def forward(self, x):
return self.layer(x)
def test_step(self, batch, batch_idx):
X, y = batch
loss = F.mse_loss(self.forward(X), y)
result = pl.EvalResult()
result.log("test_loss", loss, prog_bar=True)
time.sleep(1)
return result
test_ds = TensorDataset(torch.randn(64 * 5, 64), torch.randn(64 * 5, 1))
test_loader = DataLoader(test_ds, batch_size=64)
model = MyModel()
trainer = pl.Trainer(logger=False)
trainer.test(model, test_dataloaders=test_loader)
Expected behavior
test_loss shows up in the progress bar when running the above script.
Environment
PyTorch Version: 1.6
OS: Linux
How you installed PyTorch: pip
Python version: 3.8.5 |
Enable training purely based on number of iterations instead of epochs | [
"feature",
"help wanted",
"good first issue",
"won't fix",
"design"
] | π Feature
Enable training purely based on number of iterations instead of epochs
Motivation
This can be useful for certain training runs. Without this feature, the user must set an unreachably high value for max_epochs and set max_steps to the desired iteration count. With this setup, the trainer will break from the training loop based on max_steps since we'd never reach max_epochs. For example, Detectron2 uses iteration-based training in their train loop.
Pitch
The solution could be pretty simple. We can make min_epochs and max_epochs as Optional. If all of min/max_epochs and max_steps are unset, use the defaults we have today (min_epochs=1, max_epochs=1000). If only max_steps is set, use that (and keep min/max_epochs as None). If all are set, stop based on whichever condition is hit first.
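Under that proposal, purely iteration-based training would reduce to something like the following (hypothetical behavior, since today max_epochs is always in play with its default of 1000):
import pytorch_lightning as pl
# stop after exactly 90k optimizer steps, with no epoch-based limit at all
trainer = pl.Trainer(max_steps=90000)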
Specifically, we'd need to:
Initialize things correctly here:
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Line 46
in
c64520e
def on_trainer_init(self, max_epochs, min_epochs, max_steps, min_steps, num_sanity_val_steps):
Update the training loop here to not depend on max_epochs: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/trainer.py#L340-L375 .
Are there other spots I'm missing?
Alternatives
Without touching the Trainer code, a super hacky solution would be setting max_epochs to an outrageously large value and setting max_steps to my desired iteration count. Then we'll break from the train loop according to max_steps, since we'd meet this condition first instead of max_epochs. However, this could be handled better. |
in `child_modules` the forward pass example should use the encoder subnet | [
"docs"
] | π Documentation
In my understanding, the forward pass of the example autoencoder should return the latent representation extracted by the encoder.
If my assumption is correct, then the forward, training_step, and _shared_eval functions should be corrected to reflect that. In this case, I'd be happy to create and submit a PR.
Otherwise, I am deeply confused and could use an explanation.
Thanks for the amazing library! |
Retire legacy code in torchtext | [
"feature",
"let's do it!",
"design",
"priority: 1"
] | β Questions and Help
As the maintainer of the torchtext library, we plan to retire torchtext.data.batch.Batch and torchtext.data.example.Example in the next release (by the end of October). Those two components will still be available in torchtext.legacy. To handle the compatibility issue, should we send a PR and fix this on GitHub first, or could we just make some changes in fbcode?
Here are the proposed changes:
# old API
from torchtext.data.batch import Batch
from torchtext.data.example import Example
# new API
from torchtext.legacy.data.batch import Batch
from torchtext.legacy.data.example import Example
Relevant issue: pytorch/text#985 |
add support for 1.7 nightly | [
"feature",
"let's do it!"
] | π Feature
@ydcjeff created a CI job with Conda to test against nightly 1.7, but in #3074 some tests were failing for 1.7, so let's move this out as separate work.
Motivation
be continuously compatible even with new PT versions when they come
ToDo
uncomment CI and enable 1.7 testing
fix test so 1.7 passes |
Support automatic torchscript checkpointing. | [
"feature",
"help wanted",
"won't fix",
"let's do it!"
] | π Feature
Support exporting to TorchScript natively in the ModelCheckpoint callback. This would be a natural follow-up to #3080.
Motivation
It's common for users to rely on checkpoints (e.g., usually the last one, but sometimes other intermediate models, especially if using model ensembles or picking the best model from the last k models). Frequently these models need to be served in a non-Python environment, meaning users have to convert their checkpoints to TorchScript.
With the implementation of #3080 it seems natural to add more support for TorchScript throughout the Lightning framework. In particular, we should allow users to export TorchScript models easily in checkpoints.
Pitch
We can rely on the to_torchscript method in LightningModule's to access a TorchScript version of the model when checkpointing. We would then provide an parameter in the ModelCheckpoint callback
class ModelCheckpoint(Callback):
r"""
Save the model after every epoch if it improves.
After training finishes, use :attr:`best_model_path` to retrieve the path to the
best checkpoint file and :attr:`best_model_score` to retrieve its score.
Args:
...
save_torchscript: When set, each exported model includes a Torchscript
version. Default: ``False``.
...
"""
Additional context
See #3080. |
Generator API for train_step with multiple optimizers | [
"feature",
"help wanted",
"won't fix"
] | π Feature
Add a generator API version of LightningModule.train_step() when dealing with multiple losses/optimizers.
Motivation
When dealing with multiple optimizers, the current API passes an additional optimizer_idx argument to train_step(). This is cumbersome with optional steps and shared variables. For example, this code section
self.q1_network_optimizer.zero_grad()
q1_value = self.q1_network(state, action)
q1_loss = self.q_network_loss(q1_value, target_q_value)
q1_loss.backward()
self.q1_network_optimizer.step()
if self.q2_network:
self.q2_network_optimizer.zero_grad()
q2_value = self.q2_network(state, action)
q2_loss = self.q_network_loss(q2_value, target_q_value)
q2_loss.backward()
self.q2_network_optimizer.step()
would have to stash target_q_value into a member variable. It would also need to have a member variable to track the optimizer index of q2_network_optimizer, e.g.,
def __init__(self, ...):
self.q2_optimizer_index = 1 if self.q2_network else None
self.actor_network_optimizer = 2 if self.q2_network else 1
self.target_q_value = None
def train_step(self, batch, batch_idx, optimizer_idx):
if self.target_q_value is None:
self.target_q_value = self.compute_target_q_value(...)
if optimizer_idx == 1:
q1_loss = self.q1_loss(self.target_q_value, ...)
return q1_loss
if optimizer_idx == self.q2_optimizer_index:
q2_loss = self.q2_loss(self.target_q_value, ...)
return q2_loss
# reset self.target_q_value to None if this is the last optimizer
if optimizer_idx == self.actor_network_optimizer:
...
Pitch
With a generator API, it would be very natural
q1_value = self.q1_network(state, action)
q1_loss = self.q_network_loss(q1_value, target_q_value)
yield q1_loss
if self.q2_network:
q2_value = self.q2_network(state, action)
q2_loss = self.q_network_loss(q2_value, target_q_value)
yield q2_loss
This is much more readable and would be a lot less error-prone
Additional context |
GPUs requested but none are available | [
"question",
"won't fix",
"waiting on author"
My server has 8 GPUs, but when I use the Trainer class and set gpus=-1, I get the runtime error "GPUs requested but none are available". Using torch to check the GPUs, I get that the number of GPUs is 8 and that cuda.is_available() is True. Can anyone tell me what's wrong? |
on_save_checkpoint callbacks runs in rank zero only | [
"bug",
"help wanted",
"discussion",
"design"
] | π Bug
If any callback implements on_save_checkpoint, then that function runs only in the rank zero worker. I think this is suboptimal as you might want to do some communication across workers before saving state.
The lineage of calls here is:
model checkpoint callback's on_validation_end is decorated with rank_zero_only
this calls the checkpoint callback's _save_model
which calls it's save_function
the save_function is trainer.save_checkpoint
save_checkpoint in checkpoint_connector calls dump_checkpoint
dump_checkpoint calls trainer.on_save_checkpoint()
trainer on_save_checkpoint() calls any callback implementing on_save_checkpoint
I think this could be avoided with more judicious usage of rank_zero_only. The main benefit of rank_zero_only in the model checkpoint callback is to avoid redundant file I/O. For saving the checkpoint, that is taken care of by this check.
Other file I/O in the model checkpoint callback could be similarly guarded, and we should remove the decorator from on_validation_end |
WandbLogger does not seem to support model and parameter tracking | [] | I am unsure whether this issue can be fixed from PyTorch Lightning's side or whether the issue lies with Wandb. The problem is that the WandbLogger.watch() function
pytorch-lightning/pytorch_lightning/loggers/wandb.py
Lines 136 to 137
in
c46de8a
def watch(self, model: nn.Module, log: str = 'gradients', log_freq: int = 100):
self.experiment.watch(model, log=log, log_freq=log_freq)
calls the watch() function in wandb/sdk/watch_run.py (on Wandb's side), which is basically commented out.
def watch(self, models, criterion=None, log="gradients", log_freq=100, idx=None):
logger.info("Watching")
# wandb.run.watch(watch)
If the last line of that function is changed to wandb.watch( models, criterion, log, log_freq, idx) (this function is in wandb/sdk/watch_watch.py) then everything seems to work like it should. Alternatively, I think it should be possible for WandbLogger.watch() to call wandb.watch( models, criterion, log, log_freq, idx) directly.
It seems @borisdayma previously worked a lot on Wandblogger, so I'll mention him. |
Why training with lightning is 1.5x slower than vanilla pytorch | [
"question"
] | I am assuming this is due to user error but cannot figure it out
Running pytorch-lightning==0.9.0 on Linux
Converted pytorch code for training bert transformer model to pytorch lightning.
Vanilla code trains ~1.5x faster than lightning code. (4.8 it/s vs 2.7 it/s)
Running both code bases on 1x GPU, both with 32 bit precision.
Tried various configs on lightning code
Tried distributed_backend: dp/None both same it/sec; ddp ~10% slower than dp/None
lightning code
pytorch code
Unclear as to what to try next to resolve this |