title | labels | bodyText
---|---|---|
Train in run_pretrain_routine?
|
[
"feature",
"help wanted"
] |
I see that train() is run inside run_pretrain_routine; doesn't that look weird? At the very least, it contradicts the function name.
pytorch-lightning/pytorch_lightning/trainer/trainer.py
Line 1003
in
d4a02e3
def run_pretrain_routine(self, model: LightningModule):
|
Fix horovod tests that try to access filepath on global rank > 0
|
[
"bug",
"help wanted",
"priority: 0",
"ci"
] |
🐛 Bug
We had to skip two tests in #2425, namely
test_horovod_cpu
test_horovod_cpu_implicit
The problem is that they run in ddp, and when the test tries to access the trainer's internal checkpoint-path variable on ranks other than zero, it gets a NoneType error from calling os.path.join() on None paths (a guard sketch follows the repro steps below).
To Reproduce
Steps to reproduce the behavior:
Run these two tests:
test_horovod_cpu
test_horovod_cpu_implicit
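A minimal sketch of the kind of guard the fix probably needs (hypothetical, not taken from the actual tests; the attribute name is an assumption): only touch the checkpoint path on the process that owns it.
import os

def checkpoint_file_for(trainer, filename="last.ckpt"):
    # Hypothetical guard sketch: ModelCheckpoint only populates its path on the
    # global-zero process, so skip the os.path.join() on every other rank.
    ckpt_dir = getattr(trainer.checkpoint_callback, "filepath", None)
    if not trainer.is_global_zero or ckpt_dir is None:
        return None
    return os.path.join(ckpt_dir, filename)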
|
`validation_epoch_end` and `test_epoch_end` can't return nothing
|
[
"bug",
"help wanted",
"good first issue"
] |
🐛 Bug
If validation_epoch_end or test_epoch_end returns nothing (as presented as an option in the documentation), an error occurs.
(Happy to work on a PR to fix this)
To Reproduce
Steps to reproduce the behavior:
Override test_epoch_end and remove the return statement (same for validation_epoch_end).
File "/.conda/envs/PPI-env/lib/python3.7/site-packages/pytorch_lightning/trainer/logging.py", line 106, in process_output
for k, v in output.items():
AttributeError: 'NoneType' object has no attribute 'items'
Code sample
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning import Trainer, seed_everything
class LitModel(LightningModule):
def __init__(self):
super().__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.001)
def train_dataloader(self):
dataset = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=32, num_workers=4, shuffle=True)
return loader
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
return {'val_loss': F.cross_entropy(y_hat, y)}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
# return {'val_loss': avg_loss, 'log': tensorboard_logs}
def val_dataloader(self):
# TODO: do a real train/val split
dataset = MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=32, num_workers=4)
return loader
def main():
seed_everything(42)
model = LitModel()
# most basic trainer, uses good defaults
trainer = Trainer(fast_dev_run=True)
trainer.fit(model)
if __name__ == '__main__':
main()
Expected behavior
It should check if there is nothing returned and carry on.
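A minimal, hypothetical sketch of the guard being asked for (the helper name is made up; this is not the actual Lightning code path):
def process_epoch_end_output(output):
    # Treat a missing return value as "nothing to log" instead of iterating over None.
    if output is None:
        return {}
    return dict(output.items())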
Environment
CUDA:
GPU:
available: False
version: None
Packages:
numpy: 1.17.4
pyTorch_debug: False
pyTorch_version: 1.5.1
pytorch-lightning: 0.8.3
tensorboard: 2.2.2
tqdm: 4.41.1
System:
OS: Darwin
architecture:
64bit
processor: i386
python: 3.7.6
version: Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64
Additional context
|
validation_epoch_end needs to return CUDA tensors
|
[
"bug",
"help wanted"
] |
🐛 Bug
I'm not sure if this is expected behaviour or not, but upgrading to the latest version (from 0.8.1) caused my validation_epoch_end to break. It appears that a CUDA tensor is expected for the metric where before the tensor was device agnostic.
This was using sklearn's roc_auc_score. I haven't yet got around to testing PL's new metrics.
Feel free to close if this is expected behaviour.
To Reproduce
This is my validation_epoch_end. Uncommenting .to(avg_loss.device) allows this to run with the dev version of PL.
This was run with ddp, precision=16 using Apex AMP.
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
y_pred = torch.cat([x["y_pred"].squeeze() for x in outputs]).cpu().numpy()
y_true = torch.cat([x["y_true"].squeeze() for x in outputs]).cpu().numpy()
metric = torch.tensor(roc_auc_score(y_true, y_pred)) # .to(avg_loss.device)
tensorboard_logs = {
"loss/validation": avg_loss,
"auc": metric,
}
return {"val_loss": avg_loss, "log": tensorboard_logs, "auc": metric}
The error message can be seen here: #2411 (comment)
Code sample
Expected behavior
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
PyTorch Version (e.g., 1.0): 1.5
OS (e.g., Linux): Ubuntu 20.04
How you installed PyTorch (conda, pip, source):
Build command you used (if compiling from source):
Python version: 3.8.2
CUDA/cuDNN version: 10.2
GPU models and configuration: Dual GPU (ddp)
Any other relevant information: Apex AMP
Additional context
|
Test function
|
[
"question"
] |
Hello,
My model worked well on version 0.7.3. Then I tried updating pytorch-lightning to 0.8.3.
I re-trained and tested my model. Training went fine, but in the test() method I hit an error:
Traceback (most recent call last):
File "src/main_cv.py", line 283, in <module>
main(hyperparams)
File "src/main_cv.py", line 66, in main
trainer.test()
File "/home/vle/miniconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1174, in test
model = self.get_model().load_from_checkpoint(ckpt_path)
File "/home/vle/miniconda3/lib/python3.7/site-packages/pytorch_lightning/core/saving.py", line 169, in load_from_checkpoint
model = cls._load_model_state(checkpoint, *args, **kwargs)
File "/home/vle/miniconda3/lib/python3.7/site-packages/pytorch_lightning/core/saving.py", line 205, in _load_model_state
model = cls(*cls_args, **cls_kwargs)
**TypeError: __init__() missing 3 required positional arguments: 'train_loader', 'valid_loader', and 'test_loader'**
This happens even though I have defined a test_dataloader function in my model. Could you check whether this is an issue with the trainer class?
This is my program:
train_loader, valid_loader, test_loader = create_dataloader(...) # I write this function to create the loaders
model = MyModel(hparams, train_loader, valid_loader, test_loader) # define the model and parse data
trainer = pl.Trainer(max_epochs =1 )
trainer.fit(model)
trainer.test()
Many thanks!
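One commonly suggested workaround (an assumption on my side, not something stated in this report) is to test the model instance that already holds the dataloaders, so the trainer never tries to rebuild it from the checkpoint:
# Hypothetical workaround sketch, reusing the names from the program above.
trainer.test(model, test_dataloaders=test_loader)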
|
training_epoch_end log output gets combined with next epoch training
|
[
"bug",
"help wanted",
"priority: 0"
] |
🐛 Bug
So, I put 'training_epoch_end' function in my LightningModule. I have it return this dictionary
{'log':{'train_loss': tensor(0.3616, device='cuda:0'), 'epoch': 0}
I checked the run_training_epoch_end function in the PyTorch Lightning library, and it looks like it is working normally, as log_epoch_metrics shows the 'log' part of the dictionary produced by the 'training_epoch_end' function:
{'train_loss': tensor(0.3616, device='cuda:0'), 'epoch': 0}
So they send it off to the logger. But there is a problem: it tries to combine the dictionary above with the results from the training step of the next epoch. When I check the variable self._metrics_to_agg, I get the following result. Of course, it is impossible to combine these dictionaries as they have different keys. I guess the main problem is that the code is combining the log results of the run_training_epoch_end function with the results of the next training batch.
[{'train_loss': tensor(0.3616, device='cuda:0'), 'epoch': 0}, {'loss': 0.48756, 'input': ..., 'output': ...}]
Any ideas to solve this problem? I will appreciate your help! Here is the whole error stack:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-a4569c0ab5cd> in <module>
3 logging.getLogger('allennlp').setLevel(logging.INFO)
4
----> 5 train("/shared/labs/workflow/configs/ner_span_path_2.jsonnet", "/Dropbox/EvidLabs/atee/lab_data_enrichment_project/train/DataEnrichmentSpanPath/DataEnrichmentSpanPath__INV_M_span_path_42_43", AllenNlpLightningModule, tags=[])
6
7 # train("/shared/labs/workflow/configs/candidate_span_grouper.jsonnet", "./DataEnrichmentBunchedCrfTaggerTest", AllenNlpLightningModule, tags=["Frozen"])
/code/evid-research/evid2/pytorch_lightning/trainer/config_file_trainer.py in train(config_file, project_dir, lightning_module, run_id, run_name_override, tags, dry_run)
171 # 3 START TRAINING
172 # ------------------------
--> 173 trainer.fit(model)
174
175
/code/pytorch-lightning/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders)
971 # easier to avoid NCCL issues
972 elif self.use_dp:
--> 973 self.dp_train(model)
974
975 elif self.use_horovod:
/code/pytorch-lightning/pytorch_lightning/trainer/distrib_parts.py in dp_train(self, model)
265 model = LightningDataParallel(model, device_ids=device_ids)
266
--> 267 self.run_pretrain_routine(model)
268
269 model.forward = model_autocast_original_forward
/code/pytorch-lightning/pytorch_lightning/trainer/trainer.py in run_pretrain_routine(self, model)
1154
1155 # CORE TRAINING LOOP
-> 1156 self.train()
1157
1158 def test(
/code/pytorch-lightning/pytorch_lightning/trainer/training_loop.py in train(self)
368 # RUN TNG EPOCH
369 # -----------------
--> 370 self.run_training_epoch()
371
372 if self.max_steps and self.max_steps <= self.global_step:
/code/pytorch-lightning/pytorch_lightning/trainer/training_loop.py in run_training_epoch(self)
480 # SAVE METRICS TO LOGGERS
481 # -----------------------------------------
--> 482 self.save_train_loop_metrics_to_loggers(batch_idx, batch_output)
483
484 # progress global step according to grads progress
/code/pytorch-lightning/pytorch_lightning/trainer/training_loop.py in save_train_loop_metrics_to_loggers(self, batch_idx, batch_output)
564 # logs user requested information to logger
565 print("batch_output.batch_log_metrics", batch_output.batch_log_metrics.keys())
--> 566 self.log_metrics(batch_output.batch_log_metrics, batch_output.grad_norm_dic)
567
568 def save_loggers_in_training_loop(self, batch_idx):
/code/pytorch-lightning/pytorch_lightning/trainer/logging.py in log_metrics(self, metrics, grad_norm_dic, step)
72 if self.is_global_zero and self.logger is not None:
73 print("scalar_metrics", scalar_metrics.keys())
---> 74 self.logger.agg_and_log_metrics(scalar_metrics, step=step)
75 self.logger.save()
76
/code/pytorch-lightning/pytorch_lightning/loggers/base.py in agg_and_log_metrics(self, metrics, step)
131 """
132 print("metrics", metrics.keys())
--> 133 agg_step, metrics_to_log = self._aggregate_metrics(metrics=metrics, step=step)
134
135 if metrics_to_log:
/code/pytorch-lightning/pytorch_lightning/loggers/base.py in _aggregate_metrics(self, metrics, step)
88
89 # compute the metrics
---> 90 agg_step, agg_mets = self._reduce_agg_metrics()
91
92 # as new step received reset accumulator
/code/pytorch-lightning/pytorch_lightning/loggers/base.py in _reduce_agg_metrics(self)
109 print("self._agg_key_funcs", self._agg_key_funcs)
110 print("self._agg_default_func", self._agg_default_func)
--> 111 agg_mets = merge_dicts(self._metrics_to_agg, self._agg_key_funcs, self._agg_default_func)
112 return self._prev_step, agg_mets
113
/code/pytorch-lightning/pytorch_lightning/loggers/base.py in merge_dicts(dicts, agg_key_funcs, default_func)
386 print("fn or default_func", fn or default_func)
387 print("values_to_agg", values_to_agg)
--> 388 d_out[k] = (fn or default_func)(values_to_agg)
389
390 return d_out
<__array_function__ internals> in mean(*args, **kwargs)
/venvs/dev3.7/lib/python3.7/site-packages/numpy/core/fromnumeric.py in mean(a, axis, dtype, out, keepdims)
3333
3334 return _methods._mean(a, axis=axis, dtype=dtype,
-> 3335 out=out, **kwargs)
3336
3337
/venvs/dev3.7/lib/python3.7/site-packages/numpy/core/_methods.py in _mean(a, axis, dtype, out, keepdims)
149 is_float16_result = True
150
--> 151 ret = umr_sum(arr, axis, dtype, out, keepdims)
152 if isinstance(ret, mu.ndarray):
153 ret = um.true_divide(
TypeError: unsupported operand type(s) for +: 'dict' and 'dict'
To Reproduce
Steps to reproduce the behavior:
Add a training_epoch_end function to your LightningModule and run it. You can use mine from the code sample section if you want. The key is that the "log" section of the training_epoch_end dictionary output must have a different format than the dictionary that contains the results of your training step during training (as in the example I provided: {'train_loss': tensor(0.3616, device='cuda:0'), 'epoch': 0} and {'loss': 0.48756, 'input': ..., 'output': ...} are different formats because they don't share the same keys).
Code sample
def training_epoch_end(self, outputs):
print("training_epoch_end")
print(len(outputs))
prefix = "train_"
metric_modules = self.training_metric_modules
# Handle loss:
avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
result = {'log': {}, 'progress_bar': {}}
result[prefix + 'loss'] = avg_loss
result['log'][prefix + 'loss'] = avg_loss
result['progress_bar'][prefix + 'loss'] = avg_loss
# Add tracking variables
result['log']['epoch'] = self.current_epoch
return result
Expected behavior
I expected to be able to run the training_epoch_end function without its output being combined with the training samples of the next epoch, and to see no error, just like when running validation_epoch_end.
Environment
CUDA:
- GPU:
- Tesla V100-SXM2-16GB
- available: True
- version: 10.2
Packages:
- numpy: 1.18.5
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.8.4
- tensorboard: 2.2.2
- tqdm: 4.46.1
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.7
- version: #113-Ubuntu SMP Wed Jan 29 14:54:54 UTC 2020
|
Wandb Flatten Dict
|
[
"bug",
"help wanted",
"logger"
] |
The Wandb logger should flatten the dictionary of parameters before logging. Every other logger has the below pattern of code:
params = self._convert_params(params)
params = self._flatten_dict(params)
🐛 Bug
The Wandb logger does not flatten parameters, resulting in nested dictionaries being logged to Wandb; these are not searchable, which causes some loss of functionality in wandb.
To Reproduce
Run the cpu_template with wandb logger, and log a nested dictionary.
Expected behavior
Solution: just call params = self._flatten_dict(params) in the wandb logger.
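A sketch of what the patched logger could look like (the surrounding method body is an assumption modeled on the pattern quoted above, not the actual WandbLogger source):
from pytorch_lightning.loggers import WandbLogger

class FlatWandbLogger(WandbLogger):
    # Hypothetical subclass sketch: flatten nested params before handing them
    # to wandb, mirroring what the other loggers already do.
    def log_hyperparams(self, params):
        params = self._convert_params(params)
        params = self._flatten_dict(params)
        self.experiment.config.update(params)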
Environment
CUDA:
GPU:
available: False
version: None
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.5.0
pytorch-lightning: 0.8.4
tensorboard: 2.2.2
tqdm: 4.46.1
System:
OS: Darwin
architecture:
64bit
processor: i386
python: 3.7.7
version: Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64
|
Trainer auto_lr_find flag cannot be set to boolean through argparse
|
[
"bug",
"help wanted"
] |
🐛 Bug
It seems that the Trainer auto_lr_find flag can only be set to a str type via argparse. If the value is not a boolean, the Trainer will look for an attribute with the name passed via --auto_lr_find instead of using the default lr_finder. It's not a big issue, as we can work around it with some processing after parsing the arguments. However, I hope the auto_lr_find flag can be set in a better way, since this may cause trouble for people who don't know about it. Thanks!
To Reproduce
This works:
python train.py
trainer = pl.Trainer(gpus=args.gpus,
auto_lr_find=True)
This will error because of the string type:
python train.py --auto_lr_find True
parser = pl.Trainer.add_argparse_args(parser)
args = parser.parse_args()
trainer = pl.Trainer.from_argparse_args(args)
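A minimal sketch of the post-parse workaround mentioned above, continuing the snippet (hypothetical, not part of the report):
import pytorch_lightning as pl

# argparse hands us "True"/"False" as strings, so coerce them back to booleans
# before constructing the Trainer.
args = parser.parse_args()
if isinstance(args.auto_lr_find, str) and args.auto_lr_find.lower() in ("true", "false"):
    args.auto_lr_find = args.auto_lr_find.lower() == "true"
trainer = pl.Trainer.from_argparse_args(args)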
Environment
0.8.4
|
WandB Logger always resumes even when `Trainer(resume_from_checkpoint=None)`
|
[
"bug",
"help wanted",
"logger"
] |
🐛 Bug
@awaelchli As requested, bug above in the title.
To Reproduce
Steps to reproduce the behavior:
model = CoolSystem.load_from_checkpoint(checkpoint)
logger = WandbLogger()
trainer = Trainer(resume_from_checkpoint=None)
trainer.fit(model)
The above resumes the wandb run.
PL 0.8.4
PyTorch Version (e.g., 1.0): 1.6 nightly
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): Conda
Build command you used (if compiling from source):
Python version: 3.7
CUDA/cuDNN version:
GPU models and configuration: V100
Any other relevant information:
Additional context
|
Model parallelism in Multi-GPUs
|
[
"question",
"won't fix"
] |
Hi everyone!
Does Lightning have any way to split very large networks between GPUs, as in the example at this link?
https://discuss.pytorch.org/t/model-parallelism-in-multi-gpus-forward-backward-graph/27679/4
Thanks!
|
How can I perform only validation without training
|
[
"question",
"won't fix"
] |
❓ Questions and Help
It seems the metric 0.8737 in the checkpoint 'm10-f1_1=0.8737.ckpt' cannot be found in the progress_bar.
I want to load the .ckpt and perform validation without training.
How should I configure the trainer?
|
Trainer.scale_batch_size requires model.batch_size instead of model.hparams.batch_size
|
[
"bug",
"help wanted",
"good first issue"
] |
🐛 Bug
Trainer.scale_batch_size only works if a model has the batch_size property and does not work with model.hparams.batch_size even though all documentation points to the reverse.
To Reproduce
All of my hyperparameters are available as model.hparams, as suggested in the documentation (hyperparameters, option 3).
This means that my batch_size is available as model.hparams.batch_size.
This should be fully compatible with the documented example code of Trainer.scale_batch_size() since that code also uses model.hparams.batch_size instead of model.batch_size.
However, when I put my model in Trainer.scale_batch_size, I get the following error:
pytorch_lightning.utilities.exceptions.MisconfigurationException: Field batch_size not found in `model.hparams`
Example code
class LitModel(pl.LightningModule):
def __init__(self, hparams):
super().__init__()
self.hparams = hparams
model = LitModel(args)
trainer = Trainer()
trainer.scale_batch_size(model)
Expected behavior
Either Trainer.scale_batch_size should work with model.hparams or the error message, linked documentation examples and docstrings should all change (i.e. here, here and here).
(I would prefer the second option. I think that it should work with both model.batch_size and model.hparams.batch_size.)
Environment
pytorch-lightning 0.8.4
|
Initialising model in setup not compatible with auto_scale_batch_size / auto_lr_find
|
[
"bug",
"help wanted"
] |
🐛 Bug
To Reproduce
Define your model in setup() as per the introduction guide's recommendation for when the model depends on the dataset https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#models-defined-by-data
Try to use either auto_scale_batch_size / auto_lr_find
File "/home/frankier/.cache/pypoetry/virtualenvs/skelshop-NSMg5dGi-py3.8/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "train.py", line 76, in train
trainer.fit(model)
File "/home/frankier/.cache/pypoetry/virtualenvs/skelshop-NSMg5dGi-py3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 879, in fit
self._run_lr_finder_internally(model)
File "/home/frankier/.cache/pypoetry/virtualenvs/skelshop-NSMg5dGi-py3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/lr_finder.py", line 50, in _run_lr_finder_internally
lr_finder = self.lr_find(model)
File "/home/frankier/.cache/pypoetry/virtualenvs/skelshop-NSMg5dGi-py3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/lr_finder.py", line 175, in lr_find
optimizers, _, _ = self.init_optimizers(model)
File "/home/frankier/.cache/pypoetry/virtualenvs/skelshop-NSMg5dGi-py3.8/lib/python3.8/site-packages/pytorch_lightning/trainer/optimizers.py", line 18, in init_optimizers
optim_conf = model.configure_optimizers()
File "litmod.py", line 184, in configure_optimizers
return torch.optim.SGD(
File "/home/frankier/.cache/pypoetry/virtualenvs/skelshop-NSMg5dGi-py3.8/lib/python3.8/site-packages/torch/optim/sgd.py", line 68, in __init__
super(SGD, self).__init__(params, defaults)
File "/home/frankier/.cache/pypoetry/virtualenvs/skelshop-NSMg5dGi-py3.8/lib/python3.8/site-packages/torch/optim/optimizer.py", line 46, in __init__
raise ValueError("optimizer got an empty parameter list")
ValueError: optimizer got an empty parameter list
Additional context
auto_scale_batch_size / auto_lr_find rely on the model being initialised, but if it's initialised in setup(), it won't be ready yet.
I'm not sure whether this is a documentation bug or a behaviour bug.
|
Dynamic Data Loaders
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 A dataloader with changeable sampling behavior
A DataLoader that takes in a confusion matrix or class-specific recall at the end of the epoch. It then oversamples the classes that are least known to the network in the next epoch and thereby helps the network train more efficiently.
Motivation
I often work with highly imbalanced datasets, which I have previously been handling by artificially oversampling the classes that are less prevalent (label-balancing). I have also been experimenting with extending this stratification from the plain labels to other confounders as well (confounder-balancing) (* example provided at the end).
It would be great to make this process dynamic, since the data stays the same, duplicating images seems like a waste of memory, and only the distribution over which samples are drawn from the data changes.
The idea is the following:
At the end of the epoch, I check how well the model is doing on the validation set, see whether there are any subgroups it struggles with (either on the label or the confounder level), and pass that information on to my train dataloader (or its torch.utils.data.Sampler). Sticking to the example in the footnotes, I'd get the following validation errors:
Dog pics taken on the inside 0.4 MSE
Dog pics taken on the outside 0.3 MSE
Cat pics taken on the inside 0.5 MSE
Cat pics taken on the outside 0.2 MSE
Then I would call something like:
self.train_dataloader.sampler.distribution = [0.4, 0.3, 0.5, 0.2]
and self.train_dataloader.sampler would yield indices from its data_source inversely proportional to that distribution, e.g. via torch.nn.functional.softmax(-torch.Tensor([0.4, 0.3, 0.5, 0.2])).
My concern
While PyTorch offers many strategies to tackle this, e.g. subclassing torch.utils.data.DataLoader or torch.utils.data.Sampler, I am not sure how PyTorch Lightning handles the dataloaders behind the scenes, where they are stored, and what the implications would be for the awesome functionality PyTorch Lightning offers (especially data-parallel methods etc.).
As this would be my first PyTorch Lightning project, I am still unfamiliar with the internal workings, so any ideas from experts with the API are highly appreciated. I'll keep you posted on my progress! Best wishes and thanks for your input!
Example stratification (label-/confounder-balancing):
If I tried to classify cats vs. dogs from images and I had 20% dogs in my dataset, I would duplicate each dog picture 4 times and have a balanced dataset (label-balancing). If the pictures in my dataset had one of two backgrounds (outside/inside) and I had an imbalance like 25% dogs-inside, 25% dogs-outside, 40% cats-inside, 10% cats-outside, then I would have 50/50 labels, but my network would be more likely to classify pictures taken inside as cats, since that label is more prevalent in that subclass. Knowing that this irregularity exists, I would oversample cat pictures taken outside and a few more dog pics from both classes so I'd have a balanced dataset (confounder-balancing). Of course, these imbalances are not exclusive, and for me it just boils down to finding out as much as I can about my data and trying my best to reduce all the imbalances I find, unless the groups are super small, or the client doesn't care and thinks classifying images based on the background of the image is something he wants to pay me for (happens more often than you think).
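A rough sketch of the kind of sampler described above (one possible shape I have in mind; the class and attribute names are placeholders, not an existing Lightning API):
import torch
from torch.utils.data import Sampler

class DynamicGroupSampler(Sampler):
    # Hypothetical sampler sketch: per-group weights that the LightningModule
    # can overwrite from validation metrics at the end of each epoch.
    def __init__(self, group_ids, num_samples):
        self.group_ids = torch.as_tensor(group_ids)  # group index for every sample
        self.num_samples = num_samples
        self.distribution = torch.ones(int(self.group_ids.max()) + 1)  # per-group weight

    def __iter__(self):
        weights = self.distribution[self.group_ids]  # weight per individual sample
        idx = torch.multinomial(weights, self.num_samples, replacement=True)
        return iter(idx.tolist())

    def __len__(self):
        return self.num_samples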
|
Create base TPU image
|
[
"feature",
"help wanted",
"good first issue"
] |
🚀 Feature
Create a base Docker image with TPU requirements and all PL dependencies - base + extra.
Build this Docker image on a weekly cron and push it to the PL Docker Hub...
Motivation
Speed up TPU tests, because the repetitive build takes about 7 min.
Pitch
It could also be used by other developers.
Additional context
See Docker build workflow - https://github.com/PyTorchLightning/pytorch-lightning/blob/master/.github/workflows/docker-builds.yml
|
CI: proper Conda caching
|
[
"feature",
"help wanted",
"good first issue"
] |
🚀 Feature
Fix the GitHub Actions test for Conda; it seems that the environment is always created from scratch, which takes about 8 min, even though according to the action's readme we should cache the environment...
Motivation
Significantly speed up CI tests using Conda
https://github.com/goanpeca/setup-miniconda#caching
|
Training slows down with long epoch
|
[
"question"
] |
❓ Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
I'm doing BERT transfer-learning on a single GPU (the same happens with 2 or 4 GPUs...) and on a large dataset.
Each epoch has about 1.7M steps and training speed linearly slows down such that at some point, the
estimated remaining time starts to increase.
Is it possibly related to the fact that pytorch_lightning is concatenating the outputs of each step to give them to training_epoch_end?
Is it possible to disable this behaviour such that losses and logs are stored just for a few seconds to be written to disk and then discarded?
Code
What have you tried?
What's your environment?
Ubuntu 20.04 Server 4.15.0-108-generic
CUDA 10.2, CuDNN 7.6.5
Pytorch Lightning 0.8.4
PyTorch 1.5.1
|
Training with DataParallel (DP) is broken
|
[
"bug",
"help wanted"
] |
🐛 Bug
Currently, the distributed training backend DataParallel (DP) seems to be broken. Using DP will result in the error
TypeError: zip argument #1 must support iteration. Below are the last few lines of the call stack:
File "/home/ubuntu/anaconda3/envs/trfm/lib/python3.7/site-packages/pytorch_lightning/overrides/data_parallel.py", line 66, in forward
return self.gather(outputs, self.output_device)
File "/home/ubuntu/anaconda3/envs/trfm/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather
return gather(outputs, output_device, dim=self.dim)
File "/home/ubuntu/anaconda3/envs/trfm/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
res = gather_map(outputs)
File "/home/ubuntu/anaconda3/envs/trfm/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_map
for k in out))
File "/home/ubuntu/anaconda3/envs/trfm/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in <genexpr>
for k in out))
File "/home/ubuntu/anaconda3/envs/trfm/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
TypeError: zip argument #1 must support iteration
To Reproduce
Run the following code will always result in this issue.
from pytorch_lightning import Trainer, seed_everything
from pl_examples.models.lightning_template import LightningTemplateModel
seed_everything(234)
def main():
# model = LightningTemplateModel(**vars(args))
model = LightningTemplateModel()
trainer = Trainer(
gpus=2,
num_nodes=1,
distributed_backend='dp',
)
trainer.fit(model)
if __name__ == '__main__':
main()
Expected behavior
The trainer should initialize correctly and start training.
Environment
CUDA:
- GPU:
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- Tesla V100-SXM2-16GB
- available: True
- version: 10.2
Packages:
- numpy: 1.18.5
- pyTorch_debug: False
- pyTorch_version: 1.7.0.dev20200702
- pytorch-lightning: 0.8.5-dev
- tensorboard: 2.2.2
- tqdm: 4.47.0
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.7
- version: #30~18.04.1-Ubuntu SMP Mon Jun 22 15:48:21 UTC 2020
|
Pass parameters on train_step/validation_step/test_step
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
As the title says.
Motivation
I'm currently working with a seq-to-seq architecture, which requires a variable called max_length when decoding outputs. While training, it could be fixed as a model hyperparameter; however, during testing we may want to vary its value to make predictions longer or shorter as needed. Therefore, I think there should be a way to pass extra arguments in the validation/testing phase to make it more flexible, especially with argparse. This would also help when we have different strategies during evaluation, such as choosing between greedy and beam search algorithms.
For example:
I could run
# training model with max_length = 15 and using Greedy (as default) to save training time
python train.py --max_length 15
# eval model with a longer length and use Beamsearch to increase performance
python eval.py epoch=0.ckpt --max_length 20 --using_beamsearch
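A workaround sketch along those lines (hypothetical; MyModel, args and trainer are placeholder names): stash the knob on the loaded module before calling test, so test_step can read it from self.hparams.
# Hypothetical eval-time sketch: override the hyperparameters on the loaded checkpoint.
model = MyModel.load_from_checkpoint("epoch=0.ckpt")
model.hparams.max_length = args.max_length
model.hparams.using_beamsearch = args.using_beamsearch
trainer.test(model)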
|
AttributeError on using multi gpu even on using ddp
|
[
"bug",
"help wanted"
] |
🐛 Bug
I am getting an attribute-not-found error when using multiple GPUs. The code works fine on a single GPU. I am also using ddp as suggested. Here's the traceback.
Traceback (most recent call last):
File "/home/sohigre/STL/stl_bert_trial_lightning.py", line 245, in <module>
trainer.fit(model)
File "/home/sohigre/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 952, in fit
self.ddp_train(task, model)
File "/home/sohigre/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 500, in ddp_train
self.optimizers, self.lr_schedulers, self.optimizer_frequencies = self.init_optimizers(model)
File "/home/sohigre/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/optimizers.py", line 18, in init_optimizers
optim_conf = model.configure_optimizers()
File "/home/sohigre/STL/stl_bert_trial_lightning.py", line 178, in configure_optimizers
total_steps = len(self.train_dataloader()) * self.max_epochs
File "/home/sohigre/STL/stl_bert_trial_lightning.py", line 109, in train_dataloader
return DataLoader(self.ds_train, batch_size=self.batch_size,num_workers=4)
File "/home/sohigre/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'Abuse_lightning' object has no attribute 'ds_train'
Code sample
def prepare_data(self):
# download only (not called on every GPU, just the root GPU per node)
df = pd.read_csv(self.filename)
self.df_train, self.df_test = train_test_split(df, test_size=0.2, random_state=self.RANDOM_SEED)
self.df_val, self.df_test = train_test_split(self.df_test, test_size=0.5, random_state=self.RANDOM_SEED)
self.tokenizer = BertTokenizer.from_pretrained(self.PRE_TRAINED_MODEL_NAME)
self.ds_train = AbuseDataset(reviews=self.df_train.comment.to_numpy(), targets=self.df_train.Score.to_numpy(),
tokenizer=self.tokenizer,max_len=self.max_len)
self.ds_val = AbuseDataset(reviews=self.df_val.comment.to_numpy(), targets=self.df_val.Score.to_numpy(),
tokenizer=self.tokenizer,max_len=self.max_len)
self.ds_test = AbuseDataset(reviews=self.df_test.comment.to_numpy(), targets=self.df_test.Score.to_numpy(),
tokenizer=self.tokenizer,max_len=self.max_len)
@pl.data_loader
def train_dataloader(self):
return DataLoader(self.ds_train, batch_size=self.batch_size,num_workers=4)
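A sketch of the split that is usually suggested for this situation (an assumption on my part, reusing the names from the snippet above): under DDP, prepare_data() runs only on the root process of each node, so state assigned there is missing on the other processes; building the datasets in setup() makes them available everywhere.
def prepare_data(self):
    # one-time download / caching work only; avoid assigning state here under DDP
    pd.read_csv(self.filename)

def setup(self, stage=None):
    # runs on every process, so the attributes exist on all DDP workers
    df = pd.read_csv(self.filename)
    self.df_train, self.df_test = train_test_split(df, test_size=0.2, random_state=self.RANDOM_SEED)
    self.df_val, self.df_test = train_test_split(self.df_test, test_size=0.5, random_state=self.RANDOM_SEED)
    self.tokenizer = BertTokenizer.from_pretrained(self.PRE_TRAINED_MODEL_NAME)
    self.ds_train = AbuseDataset(reviews=self.df_train.comment.to_numpy(), targets=self.df_train.Score.to_numpy(),
                                 tokenizer=self.tokenizer, max_len=self.max_len)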
Environment
CUDA:
GPU:
GeForce GTX 1080 Ti
GeForce GTX 1080 Ti
GeForce GTX 1080 Ti
GeForce GTX 1080 Ti
available: True
version: 10.1
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.5.1
pytorch-lightning: 0.8.4
tensorboard: 2.2.2
tqdm: 4.42.1
System:
OS: Linux
architecture:
64bit
processor:
python: 3.7.6
version: #1 SMP Debian 4.19.118-2+deb10u1 (2020-06-07)
|
How to prepare a list of files for the dataloader while avoiding duplicated work?
|
[
"question",
"won't fix"
] |
I do
self.image_paths = sorted(Path(self.hparams["image_path"]).rglob("*.jpg"))
in setup
and use self.image_paths to initialize the data loader in train_dataloader
I have 10m+ files and rglob takes some time.
My model is trained on 4 GPUs and as I understand I do rglob 4 times.
What is the best way to do it, so that it is done once and not 4 times?
__init__
setup ?
train_dataloader
somewhere else?
|
How to disable Detected KeyboardInterrupt
|
[
"bug",
"help wanted"
] |
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
I use pycharm
Enter F5 key or click pycharm debug
Epoch 1: 77%|████████ | 27/35 [00:12<00:03, 2.08it/s, loss=6.617, v_num=16, train_loss=5.91]/home/blake/anaconda3/envs/torch/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:25: UserWarning: Detected KeyboardInterrupt, attempting graceful shutdown...
warnings.warn(*args, **kwargs)
Epoch 1: 77%|████████ | 27/35 [00:13<00:03, 2.06it/s, loss=6.617, v_num=16, train_loss=5.91]
See you again!
Code sample
trainer = Trainer(logger=tb_logger,
gpus=cfg.NUM_GPUS,
max_epochs=cfg.SOLVER.MAX_STEPS,
checkpoint_callback=checkpoint_callback,
distributed_backend = 'dp',
resume_from_checkpoint= None if cfg.MODEL.LANDMARK_DET.PRE_TRAIN_DIR == 'None' else cfg.MODEL.LANDMARK_DET.PRE_TRAIN_DIR,
num_sanity_val_steps=0)
#start training
trainer.fit(train_model)
Expected behavior
Environment
CUDA:
- GPU:
- GeForce GTX 1080
- GeForce GTX 1080
- available: True
- version: 10.0
Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.4.0+cu100
- pytorch-lightning: 0.8.4-dev
- tensorboard: 2.2.2
- tqdm: 4.46.1
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.7
- version: #41~16.04.1-Ubuntu SMP Wed Oct 10 20:16:04 UTC 2018
|
checkpoint save dir is not correctly set when _save_dir is given by wandb logger
|
[
"bug",
"help wanted",
"logger"
] |
🐛 Bug
When using ModelCheckpoint with default parameters and the Wandb Logger with save_dir set to some directory,
the checkpoint is still dumped to os.getcwd().
To Reproduce
........
logger = WandbLogger(save_dir='/path/to/experiment')
trainer = Trainer.from_argparse_args(other_args, logger = logger)
Expected behavior
The checkpoint should be saved under /path/to/experiment defined by Wandb logger's save_dir argument.
Additional context
The PL version I am using is from pip install, i.e. 0.8.4.
I think the problem is related to the logic in the on_train_start() function in model_checkpoint.py:
if trainer.logger is not None:
# weights_save_path overrides anything
if getattr(trainer, 'weights_save_path', None) is not None:
save_dir = trainer.weights_save_path
else:
save_dir = (getattr(trainer.logger, 'save_dir', None)
or getattr(trainer.logger, '_save_dir', None)
or trainer.default_root_dir)
Unfortunately, the default of "weights_save_path" is not None; it is set to default_root_dir which is os.getcwd() (See pytorch_lightning/trainer/callback_config.py, line 57):
# if weights_save_path is still none here, set to current working dir
if self.weights_save_path is None:
self.weights_save_path = self.default_root_dir
Thus, the ckpt_path is always set to weights_save_path instead of save_dir from logger.
Fix
A quick patch for this might be as follows:
if trainer.logger is not None:
# weights_save_path overrides anything
# unless if it is os.getcwd() and we have a logger set its save_dir to other folder
weights_save_path = getattr(trainer, 'weights_save_path', None)
loggers_save_path = (getattr(trainer.logger, 'save_dir', None)
or getattr(trainer.logger, '_save_dir', None)
or trainer.default_root_dir)
avoid_weights_save_path = (weights_save_path == trainer.default_root_dir and loggers_save_path != trainer.default_root_dir)
if (weights_save_path is not None and not avoid_weights_save_path):
save_dir = weights_save_path
else:
save_dir = loggers_save_path
I would be happy to fork the code and submit a PR, btw.
|
TypeError with multiple validation loaders and overfit_batches
|
[
"bug",
"help wanted"
] |
🐛 Bug
A TypeError when using multiple validation datasets and overfit_batches != 0
To Reproduce
Steps to reproduce the behavior:
Use multiple val_dataloaders
Use overfit_batches != 0, e.g. overfit_batches=0.5
Code sample
https://colab.research.google.com/drive/1BtQBCoP5fK-aZm_2uLMOUbf2c9cu-yFb?usp=sharing
Traceback
TypeError Traceback (most recent call last)
<ipython-input-5-c33b987ae54f> in <module>()
1 trainer = pl.Trainer(overfit_batches=0.5)
----> 2 trainer.fit(model)
3 frames
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders)
1018 self.optimizers, self.lr_schedulers, self.optimizer_frequencies = self.init_optimizers(model)
1019
-> 1020 self.run_pretrain_routine(model)
1021
1022 # callbacks
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py in run_pretrain_routine(self, model)
1137 self.val_dataloaders,
1138 max_batches,
-> 1139 False)
1140
1141 # allow no returns from eval
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py in _evaluate(self, model, dataloaders, max_batches, test_mode)
291 output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
292 else:
--> 293 output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
294
295 # on dp / ddp2 might still want to do something with the batch parts
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py in evaluation_forward(self, model, batch, batch_idx, dataloader_idx, test_mode)
483 output = model.test_step(*args)
484 else:
--> 485 output = model.validation_step(*args)
486
487 return output
TypeError: validation_step() missing 1 required positional argument: 'dataloader_idx'
Expected behavior
If the codebase is working with multiple validation loaders, it should continue to work even when using overfit_batches != 0
Possible solution
Check whether there were multiple val dataloaders; in that case, call validation_step with dataloader_idx=0
Repeat the train loader to match the number of val dataloaders
Add the possibility to overfit on train but validate and test normally. It is already possible with limit_train_batches, so it would be only a doc change "If there are multiple val_dataloaders, use limit_train_batches instead of overfit_batches"
Reason
When using multiple validation loaders, validation_step takes a dataloader_idx.
However, if later on we set overfit_batches to something that is not 0, line 268 is executed to use the train loader instead of the validation loaders:
pytorch-lightning/pytorch_lightning/trainer/data_loading.py
Lines 266 to 270
in
a91b06e
# use the training loader as val and test when overfitting
if self.overfit_batches > 0:
dataloaders = self.request_dataloader(getattr(model, 'train_dataloader'))
else:
dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))
Now there is only one validation loader, thus the validation_step function that had a dataloader_idx parameter breaks.
Environment
CUDA:
GPU:
available: False
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.5.1+cu101
pytorch-lightning: 0.8.4
tensorboard: 2.2.2
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: 1 SMP Wed Feb 19 05:26:34 PST 2020
|
format_checkpoint_name takes global step
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
Since there is val_check_interval, the model checkpointer could also be enhanced to save checkpoints in the form of, for example, checkpoint_iter2000.ckpt, by defining
checkpoint_callback = ModelCheckpoint(filepath='checkpoint_iter{global_step}')
https://github.com/PyTorchLightning/PyTorch-Lightning/blob/master/pytorch_lightning/callbacks/model_checkpoint.py#L191
|
Save checkpoint and validate every n steps
|
[
"feature",
"help wanted"
] |
❓ Questions and Help
How to save checkpoint and validate every n steps.
I saw there is a val_check_interval, but it seems it's not for that purpose.
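For what it's worth, a sketch of how this is often approximated (an assumption, not an authoritative answer): an integer val_check_interval runs validation every N training batches, and the checkpoint callback fires after each validation run.
from pytorch_lightning import Trainer

# Hypothetical sketch: validate (and therefore checkpoint) every 1000 training batches.
trainer = Trainer(val_check_interval=1000)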
|
TPU fp16 requires apex installed
|
[
"bug",
"help wanted"
] |
When I tried to use precision=16 on TPU, pytorch-lightning tried to find apex amp, which is unnecessary.
The backtrace is
GPU available: False, used: False
TPU available: True, using: 8 TPU cores
Traceback (most recent call last):
File "bert_ner/light/fp16_debug.py", line 16, in <module>
trainer = pl.Trainer(tpu_cores=8, precision=16)
File "/anaconda3/envs/torch-xla-1.5/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 607, in __init__
self.init_amp()
File "/anaconda3/envs/torch-xla-1.5/lib/python3.6/site-packages/pytorch_lightning/trainer/auto_mix_precision.py", line 27, in init_amp
"You set `use_amp=True` but do not have apex installed."
ModuleNotFoundError: You set `use_amp=True` but do not have apex installed.Install apex first using this guide and rerun with use_amp=True:https://github.com/NVIDIA/apex#linux his run will NOT use 16 bit precision
To Reproduce
Steps to reproduce the behavior:
Build any Trainer on TPU and use fp16.
Code sample
import pytorch_lightning as pl
trainer = pl.Trainer(tpu_cores=8, precision=16)
Expected behavior
There should be no error.
Environment
PyTorch Version (e.g., 1.5.0):
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): conda
Build command you used (if compiling from source):
Python version:
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information: actually I directly use pytorch-xla-1.5 docker on Google Cloud
Additional context
|
Possible Bug for multiple optimizers, require_grads=False
|
[
"bug",
"help wanted"
] |
🐛 Bug
Hey first of all, I really like your repository (great work).
I am using your framework to train two models at the same time, i.e. I followed the GAN example. This means that I also have 2 optimizers. The problem I am facing is the error:
line 99, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I investigated a little and have a pretty good idea of what the problem is. Namely, in the file https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/training_loop.py,
on line 610 we loop over our optimizers (I already saw in previous issues that there was some discussion about how to do this). When we use one optimizer, we set the parameters of the other model to requires_grad=False. However, if I look at the way this was implemented, and assume we are currently in the second iteration, i.e. optimizing with our second optimizer, the following happens:
I set all my parameters to requires_grad=False. I then go to my second optimizer and set all the parameters that are being optimized by it to requires_grad=True. I can then optimize these parameters.
However, when I do the next gradient update on my next batch, the parameters of my first model are still set to requires_grad=False.
One can easily look at this with the following code
for param in self.get_model().parameters():
print(param.requires_grad)
I am not sure, and I could be completely wrong, but I feel like after the last optimizer backpropagates, you forget to reset requires_grad=True on all the parameters. The reason I could be wrong is that in your code example with the GANs it apparently all worked fine (I didn't test that). But in my case it yields this error.
Another way I was able to fix it is by setting my loss to loss.requires_grad_(True).
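A minimal sketch of the reset described above (hypothetical, not the actual Lightning fix; model stands for the LightningModule): after the optimizer loop for a batch finishes, re-enable gradients on every parameter.
# Undo the temporary requires_grad=False toggling once all optimizers have run.
for param in model.parameters():
    param.requires_grad = True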
Environment
Mac
torch==1.3.0
pytorch-lightning==0.8.3
|
Support easily reading and writing checkpoints from different systems instead of just to disk
|
[
"duplicate",
"feature",
"help wanted"
] |
🚀 Feature
I want to be able to easily extend model_checkpoint so that it can be used with blob stores instead of always writing to disk. One way to do this is to abstract away the calls to os behind an interface.
Motivation
I want to read and write my files from a blob store, not to disk.
Pitch
I want to easily be able to swap out reads and writes to disk with writes to a blob store.
Alternatives
Write a nearly identical class to ModelCheckpoint
Additional context
|
apex amp state dict
|
[
"bug",
"help wanted",
"good first issue"
] |
🐛 Bug
pytorch-lightning/pytorch_lightning/trainer/training_io.py
Line 310
in
25ee51b
if self.use_amp and NATIVE_AMP_AVALAIBLE and 'native_amp_scaling_state' in checkpoint:
It seems that for native amp support the scaler state dict is saved. But for non-native (apex) amp, the amp state dict is not saved?
|
trainer.test(model) on 0.8.4 with a checkpoint saved in 0.6.0 expects attribute 'checkpoint_callback_best_model_path'
|
[
"bug",
"help wanted",
"priority: 0",
"waiting on author"
] |
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Save a checkpoint in 0.6.0
Load the model in 0.8.4 (no problem)
model = My_model.load_from_checkpoint(checkpoint_path)
Run trainer.test(model)
See error
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-3-eb476187792a> in <module>
84
85
---> 86 trainer.test(model)
87 print("Test finished!\n")
~/.conda/envs/qwe3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in test(self, model, test_dataloaders, ckpt_path)
1240 if model is not None:
1241 self.model = model
-> 1242 self.fit(model)
1243
1244 # on tpu, .spawn means we don't have a trained model
~/.conda/envs/qwe3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders)
977
978 elif self.single_gpu:
--> 979 self.single_gpu_train(model)
980
981 elif self.use_tpu: # pragma: no-cover
~/.conda/envs/qwe3/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py in single_gpu_train(self, model)
183 self.reinit_scheduler_properties(self.optimizers, self.lr_schedulers)
184
--> 185 self.run_pretrain_routine(model)
186
187 def tpu_train(self, tpu_core_idx, model):
~/.conda/envs/qwe3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in run_pretrain_routine(self, model)
1110
1111 # restore training and model before hpc call
-> 1112 self.restore_weights(model)
1113
1114 # when testing requested only run test and return
~/.conda/envs/qwe3/lib/python3.6/site-packages/pytorch_lightning/trainer/training_io.py in restore_weights(self, model)
180 if not did_restore_hpc_weights:
181 if self.resume_from_checkpoint is not None:
--> 182 self.restore(self.resume_from_checkpoint, on_gpu=self.on_gpu)
183
184 # wait for all models to restore weights
~/.conda/envs/qwe3/lib/python3.6/site-packages/pytorch_lightning/trainer/training_io.py in restore(self, checkpoint_path, on_gpu)
312
313 # load training state (affects trainer only)
--> 314 self.restore_training_state(checkpoint)
315
316 def dump_checkpoint(self, weights_only: bool = False) -> dict:
~/.conda/envs/qwe3/lib/python3.6/site-packages/pytorch_lightning/trainer/training_io.py in restore_training_state(self, checkpoint)
427 )
428 checkpoint_callbacks[-1].best_model_score = checkpoint['checkpoint_callback_best']
--> 429 checkpoint_callbacks[-1].best_model_path = checkpoint['checkpoint_callback_best_model_path']
430
431 if early_stopping_callbacks:
KeyError: 'checkpoint_callback_best_model_path'
But the checkpoint key 'checkpoint_callback_best_model_path' doesn't exist in my old PTL version...
Temporary Hack
Load the model
checkpoint_path = 'lightning_logs/version_149/checkpoints/_ckpt_epoch_3.ckpt'
checkpoint = torch.load(checkpoint_path)
Add a dummy path to the required argument
checkpoint['checkpoint_callback_best_model_path'] = ''
Save this checkpoint
torch.save(checkpoint, checkpoint_path)
Load from the new checkpoint in 0.8.4 and run trainer.test(model) without any problem
Environment
PyTorch Version: 1.5.1
OS: Linux
How you installed PyTorch: conda
Python version: 3.6.10
CUDA/cuDNN version: 10.2
GPU models and configuration: Tesla V100-SXM3-32GB
Probably I am misunderstanding something, but this is my quick fix, thanks for your help!
|
Simultaneously run multiple optimizers
|
[
"feature",
"help wanted",
"won't fix"
] |
Currently, optimizers are run sequentially.
However, there are cases where two optimizers are used to optimize different parts of the network, for example a CNN+LSTM: one would want the CNN to be updated with SGD and the LSTM to be updated with Adam.
|
Is it possible to plot val and train losses on the same figure?
|
[
"question",
"logger"
] |
I could not find the way to do it, maybe there is something I've been missing.
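One way this is sometimes done with the TensorBoard logger (an assumption about the setup, not an official answer): the underlying SummaryWriter exposes add_scalars, which draws several series on one chart. Inside a LightningModule hook, with train_loss and val_loss computed in that hook, it could look like:
# Hypothetical sketch, assuming self.logger is a TensorBoardLogger.
self.logger.experiment.add_scalars(
    "loss", {"train": train_loss, "val": val_loss}, global_step=self.global_step
)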
|
Logging loss is extremely large when training with fp16
|
[
"bug",
"help wanted"
] |
🐛 Bug
The logged loss is extremely large (e.g. 10000) when training with fp16, but I find that the loss in TensorBoard is quite normal (e.g. 1.0). I guess the logged loss does not discard unstable training steps?
Expected behavior
The logged loss should be approximately the same with fp16 turned on or off. Also, the logged loss should be consistent with the TensorBoard loss.
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
PyTorch Version (e.g., 1.0): nightly
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.6
CUDA/cuDNN version: 10.2
GPU models and configuration:
Any other relevant information:
Additional context
|
Flush Events for Tensorboard Logger
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
Add an option to automatically flush events every epoch for TensorBoardLogger
Motivation
TensorboardX and Tensorboard only refresh once their "buffer" is filled. Often, the user would like to see an update before the buffer is filled (say, after every epoch). In tensorboardX/tensorflow, this is accomplished by calling "flush_events."
Alternatives
If you could share how to manually call flush_events so I can insert it into my training loop, that would be great. This feature is not currently described anywhere in the documentation.
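A sketch of one way to force a write each epoch (an assumption: TensorBoardLogger.experiment is a SummaryWriter, which has a flush() method):
import pytorch_lightning as pl

class FlushEveryEpoch(pl.Callback):
    # Hypothetical callback sketch: flush the TensorBoard writer after each epoch.
    def on_epoch_end(self, trainer, pl_module):
        logger = trainer.logger
        if logger is not None and hasattr(logger.experiment, "flush"):
            logger.experiment.flush()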
|
Resume and load optimizer from ckpt
|
[
"question"
] |
What is your question?
I'm trying to load a checkpoint following https://pytorch-lightning.readthedocs.io/en/latest/weights_loading.html#checkpoint-loading. It seems to me that the state_dict is loaded into the model without problems; however, the optimizer_states do not seem to be loaded.
Code
import os
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST
class LightningMNISTClassifier(pl.LightningModule):
def __init__(self):
super(LightningMNISTClassifier, self).__init__()
# mnist images are (1, 28, 28) (channels, width, height)
self.layer_1 = torch.nn.Linear(28 * 28, 128)
self.layer_2 = torch.nn.Linear(128, 256)
self.layer_3 = torch.nn.Linear(256, 10)
def forward(self, x):
batch_size, channels, width, height = x.size()
# (b, 1, 28, 28) -> (b, 1*28*28)
x = x.view(batch_size, -1)
# layer 1 (b, 1*28*28) -> (b, 128)
x = self.layer_1(x)
x = torch.relu(x)
# layer 2 (b, 128) -> (b, 256)
x = self.layer_2(x)
x = torch.relu(x)
# layer 3 (b, 256) -> (b, 10)
x = self.layer_3(x)
# probability distribution over labels
x = torch.log_softmax(x, dim=1)
return x
def cross_entropy_loss(self, logits, labels):
return F.nll_loss(logits, labels)
def training_step(self, train_batch, batch_idx):
x, y = train_batch
logits = self.forward(x)
loss = self.cross_entropy_loss(logits, y)
logs = {'train_loss': loss}
return {'loss': loss, 'log': logs}
def validation_step(self, val_batch, batch_idx):
x, y = val_batch
logits = self.forward(x)
loss = self.cross_entropy_loss(logits, y)
return {'val_loss': loss}
def validation_epoch_end(self, outputs):
# called at the end of the validation epoch
# outputs is an array with what you returned in validation_step for each batch
# outputs = [{'loss': batch_0_loss}, {'loss': batch_1_loss}, ..., {'loss': batch_n_loss}]
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def prepare_data(self):
# transforms for images
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
# prepare transforms standard to MNIST
mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transform)
self.mnist_train, self.mnist_val = random_split(mnist_train, [55000, 5000])
def train_dataloader(self):
return DataLoader(self.mnist_train, batch_size=64)
def val_dataloader(self):
return DataLoader(self.mnist_val, batch_size=64)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
return optimizer
model = LightningMNISTClassifier()
checkpoint_callback = ModelCheckpoint(monitor='avg_val_loss', mode='min', save_last=True)
trainer = pl.Trainer(
checkpoint_callback=checkpoint_callback,
max_epochs=2,
)
trainer.fit(model)
model = LightningMNISTClassifier.load_from_checkpoint("./lightning_logs/version_0/checkpoints/last.ckpt")
print(model.learning_rate)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-6-1cc355e36cfa> in <module>()
1 model = LightningMNISTClassifier.load_from_checkpoint(
2 "./lightning_logs/version_0/checkpoints/last.ckpt")
----> 3 print(model.learning_rate)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
592 return modules[name]
593 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 594 type(self).__name__, name))
595
596 def __setattr__(self, name, value):
AttributeError: 'LightningMNISTClassifier' object has no attribute 'learning_rate'
What have you tried?
You can replicate the issue here: https://colab.research.google.com/drive/17Ira4hpvwea5um7O7T316qhx8IryoNOc?usp=sharing
What's your environment?
OS: Linux
Packaging: pip
Version: 0.8.4
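For reference, a sketch of the distinction as I understand it (an assumption, not stated in the linked docs): load_from_checkpoint restores only the model weights, while resuming through the Trainer also restores optimizer_states.
# Hypothetical sketch: resume training so the optimizer state is restored too.
model = LightningMNISTClassifier()
trainer = pl.Trainer(
    resume_from_checkpoint="./lightning_logs/version_0/checkpoints/last.ckpt",
    max_epochs=4,
)
trainer.fit(model)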
|
Unable to launch multiple gpus nodes
|
[
"bug",
"help wanted",
"priority: 0"
] |
🐛 Bug
I'm having trouble launching multiple GPU nodes with pytorch-lightning-0.8.5-dev. I'm getting the following error
Traceback (most recent call last):
File "/home/jmorton/miniconda3/envs/alignment/bin/deepblast-train", line 7, in <module>
exec(compile(f.read(), __file__, 'exec'))
File "/home/jmorton/research/gert/deepblast/scripts/deepblast-train", line 67, in <module>
main(hparams)
File "/home/jmorton/research/gert/deepblast/scripts/deepblast-train", line 47, in main
trainer.fit(model)
File "/home/jmorton/miniconda3/envs/alignment/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 964, in fit
self.set_random_port()
File "/home/jmorton/miniconda3/envs/alignment/lib/python3.8/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 392, in set_random_port
assert self.num_nodes == 1, 'random port can only be called from single node training'
AssertionError: random port can only be called from single node training
To Reproduce
I've set up my model similar to the following
parser = LightningAligner.add_model_specific_args(parser)
args = parser.parse_args()
model = LightningAligner(args)
trainer = Trainer(
max_epochs=10,
gpus=4,
num_nodes=2,
accumulate_grad_batches=10,
distributed_backend='ddp',
precision=32,
check_val_every_n_epoch=1,
fast_dev_run=False
)
Environment
Output of python collect_env_details.py
* CUDA:
- GPU:
- Tesla V100-PCIE-32GB
- Tesla V100-PCIE-32GB
- Tesla V100-PCIE-32GB
- Tesla V100-PCIE-32GB
- available: True
- version: 10.1
* Packages:
- numpy: 1.17.5
- pyTorch_debug: False
- pyTorch_version: 1.5.1
- pytorch-lightning: 0.8.5-dev
- tensorboard: 2.2.2
- tqdm: 4.47.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.1
- version: #1 SMP Wed Jun 3 14:28:03 UTC 2020
Additional context
Only 4 out of 8 GPUs are recognized.
I'm curious why the assert statement is there.
|
Log debug information to a file
|
[
"feature",
"help wanted",
"good first issue"
] |
🚀 Feature
Create a log file for each run with debug information.
Motivation
Users can read the logs and understand in which order things were executed. For example, the logs could contain this information:
prepare_data called
setup called
started train loop
training step called
backward called
zero grad called
etc...
Pitch
Use Python logging and redirect all messages with level "debug" to a file in e.g. the logger folder.
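A minimal sketch of the idea (the directory, file name and logger name below are placeholders, not an existing API):
import logging
import os

def setup_debug_log(log_dir: str) -> None:
    """Send all DEBUG-level messages from pytorch_lightning loggers to a file."""
    os.makedirs(log_dir, exist_ok=True)
    handler = logging.FileHandler(os.path.join(log_dir, "debug.log"))
    handler.setLevel(logging.DEBUG)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
    pl_logger = logging.getLogger("pytorch_lightning")
    pl_logger.setLevel(logging.DEBUG)
    pl_logger.addHandler(handler)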
Additional context
Suggested by @williamFalcon in a slack discussion.
|
ddp no_sync
|
[
"feature",
"help wanted"
] |
π Feature
DDP has this no_sync context manager.
https://pytorch.org/docs/stable/nn.html?highlight=no_sync#torch.nn.parallel.DistributedDataParallel.no_sync
Could be good to add it when doing gradient accumulation.
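A rough sketch of how it could be combined with gradient accumulation (purely illustrative; the helper function and its arguments are made up for this example):
from torch.nn.parallel import DistributedDataParallel

def accumulate_with_no_sync(model: DistributedDataParallel, optimizer, loss_fn,
                            batches, accumulate: int = 4):
    """Accumulate gradients, syncing across ranks only on the last micro-batch."""
    optimizer.zero_grad()
    for i, (x, y) in enumerate(batches):
        if (i + 1) % accumulate == 0:
            loss_fn(model(x), y).backward()   # gradient all-reduce happens here
            optimizer.step()
            optimizer.zero_grad()
        else:
            with model.no_sync():             # skip the all-reduce on this step
                loss_fn(model(x), y).backward()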
|
The documents about override backward()
|
[
"won't fix",
"docs"
] |
I'm trying to set retain_graph=True in loss.backward().
The documentation says the backward function can be overridden like this:
https://pytorch-lightning.readthedocs.io/en/0.7.6/introduction_guide.html#extensibility
class LitMNIST(LightningModule):
def backward(self, use_amp, loss, optimizer):
# do a custom way of backward
loss.backward(retain_graph=True)
But the error occurs:
An exception has occurred: TypeError backward() takes 4 positional arguments but 5 were given
I checked the code; it seems the actual arguments passed to backward are:
model_ref.backward(self, closure_loss, optimizer, opt_idx)
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Line 820
in
1d565e1
model_ref.backward(self, closure_loss, optimizer, opt_idx)
I think this part of the document needs to be updated.
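Given that call, a corrected override would look roughly like this (a sketch based only on the call site shown above, not on verified docs):
class LitMNIST(LightningModule):
    def backward(self, trainer, loss, optimizer, optimizer_idx):
        # do a custom backward, keeping the graph for a second pass
        loss.backward(retain_graph=True)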
|
batch sampler set_epoch is not called
|
[
"won't fix"
] |
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Line 353
in
1d565e1
self.train_dataloader.sampler.set_epoch(epoch)
In the line above, set_epoch is called on the sampler, but when a batch_sampler is used to generate batches instead, set_epoch is never called for it.
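Until the training loop handles this, one possible workaround is a small callback that calls set_epoch on the batch sampler itself (a sketch; it assumes your batch sampler exposes a set_epoch method):
import pytorch_lightning as pl

class BatchSamplerSetEpoch(pl.Callback):
    def on_epoch_start(self, trainer, pl_module):
        batch_sampler = getattr(trainer.train_dataloader, 'batch_sampler', None)
        if batch_sampler is not None and hasattr(batch_sampler, 'set_epoch'):
            batch_sampler.set_epoch(trainer.current_epoch)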
|
Unable to run one of the domain_templates
|
[
"bug",
"help wanted"
] |
π Bug
Hi.
I've tried to run computer_vision_fine_tuning.py from domain_templates on Kaggle.
After the first epoch (5) in the second stage, there was an error.
Error
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_hook.py in on_epoch_start(self)
55 """Called when the epoch begins."""
56 for callback in self.callbacks:
---> 57 callback.on_epoch_start(self, self.get_model())
58
59 def on_epoch_end(self):
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/callbacks/lr_logger.py in on_epoch_start(self, trainer, pl_module)
71
72 def on_epoch_start(self, trainer, pl_module):
---> 73 latest_stat = self._extract_lr(trainer, 'epoch')
74 if trainer.logger and latest_stat:
75 trainer.logger.log_metrics(latest_stat, step=trainer.global_step)
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/callbacks/lr_logger.py in _extract_lr(self, trainer, interval)
85 for i, pg in enumerate(param_groups):
86 lr, key = pg['lr'], f'{name}/pg{i + 1}'
---> 87 self.lrs[key].append(lr)
88 latest_stat[key] = lr
89 else:
KeyError: 'learning_rate/pg1'
PyTorch Version torch-1.7.0.dev20200701+cu101 torchvision-0.8.0.dev20200701+cu101:
Pytorch-Lightning 0.8.5
|
Trainer flag overfit_batches does not overwrite train dataloaders shuffle flag
|
[
"bug",
"help wanted",
"won't fix"
] |
π Bug
Setting the trainer flag overfit_batches (e.g. =10) does not overwrite the shuffle flag set in the training dataloader, even though the warning reads:
UserWarning: You requested to overfit but enabled training dataloader shuffling. We are turning it off for you.
To Reproduce
Steps to reproduce the behavior:
Create lightning module with method train_dataloader with flag shuffle=True:
def train_dataloader(self) -> loading.DataLoader:
dataset = ProstateX(train=True)
batch_transforms, gpu_transforms, sample_transforms = self.get_transformations()
dataloader = loading.DataLoader(dataset,
batch_size=self.hparams.tr_batch_size,
batch_transforms=batch_transforms,
shuffle=True,
sample_transforms= sample_transforms,
gpu_transforms=gpu_transforms,
pseudo_batch_dim=True,
num_workers=self.hparams.num_workers)
return dataloader
(I use a rising dataloader; the bug should also occur with PyTorch dataloaders, though.)
Create main.py with:
mymodel = model.Model3D(cfg)
trainer = pl.Trainer(gpus=1, precision=16, overfit_batches=10)
trainer.fit(mymodel)
Run main.py
Find out that your model does not converge.
set shuffle=False when creating Dataloader in train_dataloader
See that your model converges after some epochs.
(Or log the samples loaded by the dataloader and check if they are the same each epoch.)
Code sample
Expected behavior
Either the model should also converge with shuffle=True, since the warning says shuffling was turned off (assuming the model converges with shuffle=False), or the warning should at least say that the user has to set shuffle=False themselves.
Environment
CUDA:
- GPU:
- GeForce GTX 1080 Ti
- available: True
- version: 10.1
Packages:
- numpy: 1.19.0
- pyTorch_debug: False
- pyTorch_version: 1.7.0.dev20200705+cu101
- pytorch-lightning: 0.8.5
- tensorboard: 2.2.2
- tqdm: 4.47.0
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.7
- version: #109-Ubuntu SMP Fri Jun 19 11:33:10 UTC 2020
Additional context
|
How to get the dictionary returned by the validation_epoch_end() method?
|
[
"question",
"won't fix"
] |
β Questions and Help
What is your question?
How do I fetch the dictionary returned by the validation_epoch_end() method? I don't want to add this to the TensorBoard logs.
Code
return {"label_freq": label_freq, log: {"loss": loss}}
I want to get the value of label_freq
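One simple approach is to stash the value on the module itself and read it back after fitting; a minimal sketch (compute_label_freq is a hypothetical helper standing in for your own computation):
import torch

def validation_epoch_end(self, outputs):
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    label_freq = self.compute_label_freq(outputs)  # hypothetical helper
    self.last_label_freq = label_freq              # keep it on the module
    return {'val_loss': avg_loss, 'log': {'val_loss': avg_loss}}

# after trainer.fit(model) you can read it back:
# print(model.last_label_freq)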
What's your environment?
OS: [e.g. iOS, Linux, Win]
Packaging [e.g. pip, conda]
Version [e.g. 0.5.2.1]
|
Why is `accumulate_grad_batches` set to 1 for first epoch when the argument is provided as int?
|
[
"question"
] |
π Feature
Is there any reason why it is set to 1 for the first epoch?
I think it should be set to the number users specify, because the current behaviour causes a lot of confusion.
Alternatives
Change the key of schedule dict to 0 in training_tricks.py:
def configure_accumulated_gradients(self, accumulate_grad_batches):
if isinstance(accumulate_grad_batches, dict):
self.accumulation_scheduler = GradientAccumulationScheduler(accumulate_grad_batches)
elif isinstance(accumulate_grad_batches, int):
schedule = {1: accumulate_grad_batches} # => schedule = {0: accumulate_grad_batches}
self.accumulation_scheduler = GradientAccumulationScheduler(schedule)
else:
raise TypeError("Gradient accumulation supports only int and dict types")
|
Minimal example with custom RNNs cells and sliding window support
|
[
"good first issue",
"question",
"won't fix",
"example"
] |
β Questions and Help
Hello,
I would like to ask you about the support of pytorch-lightning regarding custom RNN cells and sliding-window predictions over a sequence (i.e. a video).
I am implementing a kind of conv-LSTM variant for video prediction, where each frame is estimated from the previous frames in a sliding window manner.
Before asking:
I have searched the issues and the docs and I have found this:
# backprop every 5 steps in a batch
trainer = Trainer(truncated_bptt_steps=5)
.....
# Truncated back-propagation through time
def training_step(self, batch, batch_idx, hiddens):
# hiddens are the hiddens from the previous truncated backprop step
out, hiddens = self.lstm(data, hiddens)
return {
"loss": ...,
"hiddens": hiddens # remember to detach() this
}
I have stored all the information I need (across timesteps) in the class (as members) so this is not an issue I think.
What is your question?
How can I provide my sequence as a sliding window and train the model for each step of the sequence (grads+backprop)?
Does the built-in support for truncated back-propagation through time cover this?
What have you tried?
I have tried to overwrite:
def tbptt_split_batch(self, batch, split_size):
....
but it seems it is not called during the validation sanity check.
What's your environment?
OS: Linux
Packaging: pip
It would be nice to create a minimal example of how the library could support this kind of recurrent model prediction with sliding windows. If I can make this work, I can provide a minimal Colab notebook.
Any ideas?
Thanks a lot,
N.A.
|
How to log custom metrics during training, validation, and testing?
|
[
"question"
] |
Following the Implement a metric documentation I implemented the following metric:
class MRRMetric(TensorMetric):
def forward(self, x1, x2):
"""
Return the mean reciprocal rank using x1 as query and x2 as retrieved results.
:param x1: batch of queries embeddings.
:param x2: batch of results embeddings.
:return: mean reciprocal rank.
"""
return mrr(x1,x2)
However, it was not clear to me how to integrate and display the metric value during the training, validation, and testing process.
For instance, given the following model:
class MyModel(LightningModule):
"""Encodes the x1 and x2 into an same space of embeddings."""
def __init__(self, config):
super(MyModel, self).__init__()
self.config = config
self.x1_encoder = Encoder(config)
self.x2_encoder = Encoder(config)
self.tokenizer = Tokenizer(config)
self.loss_fn = NPairLoss()
self.mrr = MRRMetric(name="MRR")
def forward(self, x1, x2):
x1 = self.x1_encoder(x1)
x2 = self.x2_encoder(x2)
return x1, x2
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=1e-6, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=True)
def training_step(self, batch, batch_idx):
x1, x2 = batch["x1"], batch["x2"]
predict = self(x1, x2)
loss = self.loss_fn(predict, target)
return {'loss': loss}
def test_step(self, batch, batch_idx):
x1, x2 = batch["x1"], batch["x2"]
predict = self(x1, x2)
loss = self.loss_fn(predict, target)
return {'test_loss': loss}
def test_epoch_end(self, outputs):
avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
return {'avg_test_loss': avg_loss}
def validation_step(self, batch, batch_idx):
x1, x2 = batch["x1"], batch["x2"]
x1, x2 = self(x1, x2)
mrr = self.mrr(x1, x2)
return {'val_loss': mrr}
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
return {'val_loss': avg_loss}
def train_dataloader(self):
train_dataset = CodeSearchDataset(
path=self.config.dataset.train_path,
tokenizer=self.tokenizer,
max_length=self.config.preprocessing.max_length)
return DataLoader(
train_dataset,
batch_size=self.config.train.batch_size,
drop_last=True,
num_workers=self.config.preprocessing.num_workers
)
def test_dataloader(self):
test_dataset = CodeSearchDataset(
path=self.config.dataset.train_path,
tokenizer=self.tokenizer,
max_length=self.config.preprocessing.max_length)
return DataLoader(
test_dataset,
batch_size=self.config.train.batch_size,
drop_last=True,
num_workers=self.config.preprocessing.num_workers
)
def val_dataloader(self):
val_dataset = CodeSearchDataset(
path=self.config.dataset.train_path,
tokenizer=self.tokenizer,
max_length=self.config.preprocessing.max_length)
return DataLoader(
val_dataset,
batch_size=self.config.train.batch_size,
drop_last=True,
num_workers=self.config.preprocessing.num_workers
)
what is the best approach to logging something like this:
Epoch 1: 17%|ββ | 43/252 [00:11<00:55, 3.78it/s, loss=4.528, mrr=0.725, v_num=0]
What have you tried?
Include mrr metric value along side the returned dictionary:
def training_step(self, batch, batch_idx):
x1, x2 = batch["x1"], batch["x2"]
predict = self(x1, x2)
loss = self.loss_fn(predict, self.train_target)
b_mrr = self.mrr(x1,x2)
return {'loss': loss, 'mrr': b_mrr}
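A minimal sketch of one way to surface the metric next to the loss, using the dict-return API of this version ('progress_bar' and 'log' keys); the loss call mirrors the snippet above and is an assumption:
def training_step(self, batch, batch_idx):
    x1, x2 = batch["x1"], batch["x2"]
    predict = self(x1, x2)
    loss = self.loss_fn(predict, self.train_target)
    b_mrr = self.mrr(*predict)
    return {
        'loss': loss,
        'progress_bar': {'mrr': b_mrr},  # shown in the progress bar next to the loss
        'log': {'train_mrr': b_mrr},     # also sent to the logger
    }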
What's your environment?
CUDA:
GPU:
GeForce RTX 2080
available: True
version: 10.2
Packages:
numpy: 1.19.0
pyTorch_debug: False
pyTorch_version: 1.5.1
pytorch-lightning: 0.8.5
tensorboard: 2.2.2
tqdm: 4.46.1
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.7.3
version: #41-Ubuntu SMP Tue Dec 3 00:27:35 UTC 2019
|
Trainer(resume_from_checkpoint=...) does not load optimizer state
|
[
"bug",
"help wanted"
] |
π Bug
The optimizer state is not loaded from the checkpoint. This is important if you want to correctly continue training. In practice, I had serious convergence issues if the optimizer state wasn't loaded.
To Reproduce
See code sample
Code sample
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
from pytorch_lightning.core.lightning import LightningModule
class LitModel(LightningModule):
def __init__(self):
super().__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.001)
def train_dataloader(self):
dataset = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=32, num_workers=4, shuffle=True)
return loader
model = LitModel()
trainer = pl.Trainer(fast_dev_run=True)
trainer.fit(model)
old_state = trainer.optimizers[0].state
trainer.save_checkpoint('checkpoint.pl')
new_trainer = pl.Trainer(resume_from_checkpoint='checkpoint.pl')
# currently new_trainer does not have optimizers, but inside it uses .init_optimizers to load them
new_optimizers, _, _ = new_trainer.init_optimizers(model)
new_state = new_optimizers[0].state
# new state is empty
old_state == new_state
Expected behavior
Initializing Trainer from checkpoint loads optimizer state.
Environment
PyTorch Version (e.g., 1.0): 1.5.0
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.7.6
CUDA/cuDNN version: N/A
GPU models and configuration: CPU
Any other relevant information:
Additional context
What workaround can you use to load the optimizer state before the fix? The issue is that .init_optimizers is called inside .fit, and even if you manually set trainer.optimizers it will be overwritten.
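One possible interim workaround is a callback that restores the optimizer state right before training starts (a sketch; it assumes the checkpoint contains the 'optimizer_states' list that Lightning writes):
import torch
import pytorch_lightning as pl

class RestoreOptimizerStates(pl.Callback):
    def __init__(self, ckpt_path: str):
        self.ckpt_path = ckpt_path

    def on_train_start(self, trainer, pl_module):
        checkpoint = torch.load(self.ckpt_path, map_location='cpu')
        # by now trainer.optimizers has been created, so overwrite its state
        for optimizer, state in zip(trainer.optimizers,
                                    checkpoint['optimizer_states']):
            optimizer.load_state_dict(state)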
|
torch.save(model) in dump_checkpoint
|
[
"feature",
"help wanted",
"won't fix"
] |
π Feature
Currently PL saves only the weights of the network, but architecture-searched networks change their architecture flexibly while training.
Trainer(..., save_entire_model=True) could be an option.
Motivation
After training an architecture-searched network with PL and closing the session, I realized I could not load the weights! :(
Saving only the state_dict is reasonable in most cases, but giving users the option to save the whole network would be nice in some cases.
Alternatives
Manually save by users
Additional context
Always thx for cool dev project!
|
set_epoch isn't called for TPU training
|
[
"bug",
"help wanted",
"accelerator: tpu"
] |
π Bug
This line https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/training_loop.py#L350 doesn't call set_epoch when training on TPUs: self.use_ddp is False there, so the call only happens for DDP or when self.use_horovod is set.
|
Data is on GPU, some of the nn.Module still on CPU
|
[
"bug",
"help wanted"
] |
π Bug
A pytorch module consisting of a Python list containing multiple nn.Conv1d objects and 3 Fully-Connected layers are raising the following error:
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
When trying to do a forward pass on the first nn.Conv1d contained in the Python List.
To Reproduce
Pytorch Module
class Conv1DClassifier(nn.Module):
def __init__(self, num_classes):
super(Conv1DClassifier, self).__init__()
encoder_out_channels = 256
encoder_kernel_size = 64
encoder_stride = 64
n_layers = int(math.log2(encoder_out_channels))
self.conv_layers = []
for layer_idx in range(n_layers):
self.conv_layers.append(
nn.Conv1d(
in_channels=2 ** layer_idx,
out_channels=2 ** (layer_idx + 1),
kernel_size=encoder_kernel_size,
stride=encoder_stride
)
)
self.fc1 = nn.Linear(encoder_out_channels, 128)
self.fc2 = nn.Linear(128, 64)
self.fc3 = nn.Linear(64, num_classes)
def forward(self, x):
for conv_layer in self.conv_layers:
# on the first pass, the error is raised here!
x = conv_layer(x)
x, _ = x.max(2) # max pooling over the sequence dim; drop sequence axis
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
The Lightning Module i used it's just the Lit Module presented on the tutorial.
Steps to reproduce the behavior:
Try to do a forward pass on the presented pytorch module with GPU.
Watch the console log to notice that the data is in GPU but the (some of) model parameters are still on CPU.
I think it has something to do with the Python list that contains the convolution layers: it should be transferred to the GPU, but it isn't. As the CPU-to-GPU (and vice versa) data transfers are handled by Lightning, I think I can get help here.
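For reference, the usual fix for layers kept in a plain Python list is to register them with nn.ModuleList so that .to(device)/.cuda() can see them; a self-contained sketch of just that part:
import math
import torch.nn as nn

class Conv1DEncoder(nn.Module):
    def __init__(self, out_channels=256, kernel_size=64, stride=64):
        super().__init__()
        n_layers = int(math.log2(out_channels))
        # nn.ModuleList registers every layer as a submodule, so moving the
        # model to a device also moves these convolutions
        self.conv_layers = nn.ModuleList(
            nn.Conv1d(2 ** i, 2 ** (i + 1), kernel_size, stride)
            for i in range(n_layers)
        )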
|
What's the true meaning of gpus argument in Trainer
|
[
"good first issue",
"docs"
] |
Hello, I have a question about the "gpus" argument of the Trainer() class. I don't think the documentation is clear. From the docs, I only know that "gpus: Which GPUs to train on." However, from the link here: https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html, I get a different answer.
1.
DEFAULT (int) specifies how many GPUs to use
Trainer(gpus=k)
2.
train on 32 GPUs (4 nodes)
trainer = Trainer(gpus=8, distributed_backend='ddp', num_nodes=4)
Does gpus mean the number of GPUs per node or the total number of GPUs in use?
|
Expose load_state_dict strict=False
|
[
"feature",
"help wanted"
] |
π Feature
In contrastive learning, we normally train a representation learning backbone then adding the classifier. Sometimes, I wish to play with different classifiers. It would be best to have strict=False exposed to make the model still load with a user's permission.
model = TransferLearningModel.load_from_checkpoint(ckpt_path, strict=False)
The implementation is intuitive but an extremely useful feature in my case. Thank you!
|
Custom callbacks are lost after resume_from_checkpoint
|
[
"bug",
"help wanted"
] |
π Bug
A checkpoint should state everything needed to restore a training session including the state of all callbacks. However, custom callbacks are lost after resuming the trainer from Trainer(resume_from_checkpoint="...ckpt").
To Reproduce
Steps to reproduce the behaviour:
Define a trainer with a custom callback.
trainer = pl.Trainer(gpus=1, max_epochs=3, progress_bar_refresh_rate=20, callbacks=[MyPrintingCallback()])
(Probably not necessary) run trainer.fit()
Save trainer trainer.save_checkpoint('checkpoint.ckpt')
Load trainer loaded_trainer = pl.Trainer(resume_from_checkpoint='checkpoint.ckpt')
Problem trainer.callbacks is not the same as loaded_trainer.callbacks
Code sample
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
class MyPrintingCallback(pl.Callback):
def on_test_end(self, trainer, pl_module):
print('Custom Callback!')
class MNISTModel(pl.LightningModule):
def __init__(self):
super(MNISTModel, self).__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
return {'loss': loss}
def test_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
return {'loss': loss}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
data_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
model = MNISTModel()
trainer = pl.Trainer(gpus=1, callbacks=[MyPrintingCallback()])
trainer.test(model, data_loader)
trainer.save_checkpoint('checkpoint.ckpt')
loaded_model = MNISTModel.load_from_checkpoint(checkpoint_path="checkpoint.ckpt")
loaded_trainer = pl.Trainer(resume_from_checkpoint='checkpoint.ckpt')
loaded_trainer.test(loaded_model, data_loader) # using test because cpu
# MyPrintingCallback has disappeared from loaded_trainer
print('trainer callbacks', *trainer.callbacks)
print('loaded_trainer callbacks', *loaded_trainer.callbacks)
Colab with code and comments:
https://colab.research.google.com/drive/1ksECPaUWMPN76fuUWEa443ZVnvTwvbZB?usp=sharing
Expected behavior
The original trainer and loaded trainer should have the same callbacks.
trainer.callbacks == loaded_trainer.callbacks # Thought this would be True
Environment
Just google colab with gpu.
CUDA:
GPU:
Tesla K80
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.5.1+cu101
pytorch-lightning: 0.8.3
tensorboard: 2.2.2
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Wed Feb 19 05:26:34 PST 2020
Additional context
This may be intentional and just haven't realized it. However, the docs do specify that the state of all callbacks are restored.
Also on a related note, a trainer resumed from a checkpoint is not sent to the gpu even if the original is using a gpu. This was also a little confusing since it felt unintuitive. Code sample below.
# Also I realized that the loaded_trainer is on cpu. Is this intentional?
print('model device', model.device)
print('loaded_model device', loaded_model.device)
# This means that using a loaded_trainer with the orignal model won't work (feels unintuitive)
try:
loaded_trainer.test(model, test_loader)
except RuntimeError as e:
print(e)
|
to() got an unexpected keyword argument 'non_blocking' for DGLGraph
|
[
"bug",
"help wanted"
] |
π Bug
To Reproduce
I use the DGL library to build a GNN and batch the DGLGraph.
There is no problem during training, but in test I get a TypeError: to() got an unexpected keyword argument 'non_blocking'.
<class 'dgl.graph.DGLGraph'> .to() function has no keyword argument 'non_blocking'
Code sample
Expected behavior
Environment
OS: Linux
CUDA: 10.1
Python Version: 3.7
PyTorch Version: 1.5.1
DGL Version: 0.4.3post2
PyTorch-Lightning Version: 0.8.5
Additional context
File "../src/main.py", line 131, in <module>
run(params)
File "../src/main.py", line 92, in run
trainer.test(model)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1279, in test
results = self.__test_given_model(model, test_dataloaders)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1346, in __test_given_model
results = self.fit(model)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1003, in fit
results = self.single_gpu_train(model)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 186, in single_gpu_train
results = self.run_pretrain_routine(model)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1166, in run_pretrain_routine
results = self.run_evaluation(test_mode=True)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 391, in run_evaluation
eval_results = self._evaluate(self.model, dataloaders, max_batches, test_mode)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 293, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 458, in evaluation_forward
batch = self.transfer_batch_to_gpu(batch, root_gpu)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 159, in transfer_batch_to_gpu
return self.__transfer_batch_to_device(batch, device)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 164, in __transfer_batch_to_device
return model.transfer_batch_to_device(batch, device)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/core/hooks.py", line 242, in transfer_batch_to_device
return move_data_to_device(batch, device)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/utilities/apply_func.py", line 109, in move_data_to_device
return apply_to_collection(batch, dtype=(TransferableDataType, Batch), function=batch_to)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/utilities/apply_func.py", line 40, in apply_to_collection
for k, v in data.items()})
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/utilities/apply_func.py", line 40, in <dictcomp>
for k, v in data.items()})
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/utilities/apply_func.py", line 35, in apply_to_collection
return function(data, *args, **kwargs)
File "/home/jiangyize/miniconda3/envs/galixir/lib/python3.7/site-packages/pytorch_lightning/utilities/apply_func.py", line 107, in batch_to
return data.to(device, non_blocking=True)
TypeError: to() got an unexpected keyword argument 'non_blocking'
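One possible workaround (a sketch, assuming DGL 0.4.x where DGLGraph.to() only accepts a device): override the transfer_batch_to_device hook that appears in the traceback and move the graph explicitly.
import dgl
from pytorch_lightning import LightningModule

class MyGraphModel(LightningModule):
    def transfer_batch_to_device(self, batch, device):
        if isinstance(batch, dgl.DGLGraph):
            # DGLGraph.to() has no non_blocking kwarg, so call it directly
            moved = batch.to(device)
            return moved if moved is not None else batch
        return super().transfer_batch_to_device(batch, device)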
|
Where to use `to(self.device)` ?
|
[
"question"
] |
What is your question?
So currently in my lightning module I create the multibox module inside training_step in order to send the prior boxes to the right device. But when I define the multibox loss in the __init__ method, the priors don't seem to be on the right device.
Code
for eg. below works:-
class SSD_simple(pl.LightningModule):
def __init__(self,config : dict):
super().__init__()
self.config = config
self.model = SSD300()
def forward(self, x):
return self.model(x)
def training_step(self, batch, batch_nb):
images, bboxes, labels = batch
locs, confs = self(images)
priors = PriorBox(self.config)
priors = priors.create_priors().to(self.device)
multibox_loss = MultiBoxLoss(num_classes=self.config['num_classes'],priors=priors,cfg=self.config,device=self.device)
loc_loss, conf_loss = multibox_loss(locs, confs, bboxes, labels)
loss = conf_loss + loc_loss
logs = {'train_loss' : loss, 'conf_loss':conf_loss, 'loc_loss' : loc_loss}
return {'loss' : loss, 'log' : logs}
But the below doesn't work :-
class SSD_simple(pl.LightningModule):
def __init__(self,config : dict):
super().__init__()
self.config = config
self.model = SSD300()
priors = PriorBox(self.config)
priors = priors.create_priors().to(self.device)
self.multibox_loss = MultiBoxLoss(num_classes=self.config['num_classes'],priors=priors,cfg=self.config,device=self.device)
def forward(self, x):
return self.model(x)
def training_step(self, batch, batch_nb):
images, bboxes, labels = batch
locs, confs = self(images)
loc_loss, conf_loss = self.multibox_loss(locs, confs, bboxes, labels)
loss = conf_loss + loc_loss
logs = {'train_loss' : loss, 'conf_loss':conf_loss, 'loc_loss' : loc_loss}
return {'loss' : loss, 'log' : logs}
It throws the following error:
RuntimeError: expected device cuda:0 but got device cpu
I want to get the latter approach to work since it is less computationally expensive, as I am not creating the priors again and again.
Any help? I was wondering whether there is a pre-training hook.
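One common pattern for this (a sketch reusing the classes from the snippets above, so treat names and arguments as assumptions): register the priors as a buffer so Lightning moves them with the module, and build the loss lazily once the device is known.
class SSD_simple(pl.LightningModule):
    def __init__(self, config: dict):
        super().__init__()
        self.config = config
        self.model = SSD300()
        # buffers are moved by .to(device)/.cuda() together with the parameters
        self.register_buffer('priors', PriorBox(self.config).create_priors())
        self.multibox_loss = None

    def training_step(self, batch, batch_nb):
        if self.multibox_loss is None:
            # built on first use, when self.priors is already on the right device
            self.multibox_loss = MultiBoxLoss(
                num_classes=self.config['num_classes'], priors=self.priors,
                cfg=self.config, device=self.device)
        images, bboxes, labels = batch
        locs, confs = self(images)
        loc_loss, conf_loss = self.multibox_loss(locs, confs, bboxes, labels)
        loss = conf_loss + loc_loss
        return {'loss': loss, 'log': {'train_loss': loss}}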
What's your environment?
OS: [e.g. Linux]
Packaging [pip]
|
Apex with auto lr finder fails
|
[
"bug",
"help wanted"
] |
Hi! I get this error when using the flags precision=16 and auto_lr_find=True together in Trainer. The error occurs after the learning rate finder finishes and before training starts. I guess it happens because amp.initialize is called on an already-initialized model, i.e. it is called once for auto_lr_find and then a second time on the same model for training initialization.
Traceback (most recent call last):
File "train.py", line 136, in <module>
trainer.fit(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1003, in fit
results = self.single_gpu_train(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 182, in single_gpu_train
model, optimizers = model.configure_apex(amp, model, self.optimizers, self.amp_level)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1006, in configure_apex
model, optimizers = amp.initialize(model, optimizers, opt_level=amp_level)
File "/opt/conda/lib/python3.6/site-packages/apex/amp/frontend.py", line 358, in initialize
return _initialize(models, optimizers, _amp_state.opt_properties, num_losses, cast_model_outputs)
File "/opt/conda/lib/python3.6/site-packages/apex/amp/_initialize.py", line 171, in _initialize
check_params_fp32(models)
File "/opt/conda/lib/python3.6/site-packages/apex/amp/_initialize.py", line 87, in check_params_fp32
name, param.type()))
File "/opt/conda/lib/python3.6/site-packages/apex/amp/_amp_state.py", line 32, in warn_or_err
raise RuntimeError(msg)
RuntimeError: Found param unet.input_block.conv1.0.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor.
When using amp.initialize, you do not need to call .half() on your model
before passing it, no matter what optimization level you choose.
|
Trouble tracing why convergence is slower in Lightning
|
[
"bug",
"help wanted"
] |
I recently refactored some code from [this tutorial](https://www.assemblyai.com/blog/end-to-end-speech-recognition-pytorch) (which trains speech-to-text on LibriSpeech 100 h) into Lightning and found it converging slower and never reaching the same level of loss. I made a lot of changes when I refactored into PyTorch Lightning, and I slowly undid them until I was left with the original code and the Lightning version. I even set Comet as the logger so that both had the same logging, and the results are the same. I can't figure out what's going on. As far as I can tell, the code is identical (except for the training loop) and I don't know how to proceed. (Edit: I've now added the training loop code from both notebooks at the bottom of this post for ease of access.)
Here are the notebooks: The final few cells (train/test/main functions) are the relevant/distinct portion.
Non-lightning Notebook - converges faster
Lightning Notebook - converges slower
I am using torch 1.4.0 and torchaudio 0.4.0.
Train loss vs step in both versions, other metrics (char error rate and word error rate) are also worse.
The most recent comparison (visualized below) didn't use a seed, but I have run many variations, including ones where torch/np use the same seed, and the results were similar.
Non-Lightning Version:
def train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment):
model.train()
data_len = len(train_loader.dataset)
with experiment.train():
for batch_idx, _data in enumerate(train_loader):
spectrograms, labels, input_lengths, label_lengths = _data
spectrograms, labels = spectrograms.to(device), labels.to(device)
optimizer.zero_grad()
output = model(spectrograms) # (batch, time, n_class)
output = F.log_softmax(output, dim=2)
output = output.transpose(0, 1) # (time, batch, n_class)
loss = criterion(output, labels, input_lengths, label_lengths)
loss.backward()
experiment.log_metric('loss', loss.item(), step=iter_meter.get())
experiment.log_metric('learning_rate', scheduler.get_lr(), step=iter_meter.get())
optimizer.step()
scheduler.step()
iter_meter.step()
if batch_idx % 100 == 0 or batch_idx == data_len:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(spectrograms), data_len,
100. * batch_idx / len(train_loader), loss.item()))
# if(batch_idx >= 100):
# print("Exiting Early")
# break
def test(model, device, test_loader, criterion, epoch, iter_meter, experiment):
print('\nevaluating...')
model.eval()
test_loss = 0
test_cer, test_wer = [], []
with experiment.test():
with torch.no_grad():
for i, _data in enumerate(test_loader):
spectrograms, labels, input_lengths, label_lengths = _data
spectrograms, labels = spectrograms.to(device), labels.to(device)
output = model(spectrograms) # (batch, time, n_class)
output = F.log_softmax(output, dim=2)
output = output.transpose(0, 1) # (time, batch, n_class)
loss = criterion(output, labels, input_lengths, label_lengths)
test_loss += loss.item() / len(test_loader)
decoded_preds, decoded_targets = GreedyDecoder(output.transpose(0, 1), labels, label_lengths)
for j in range(len(decoded_preds)):
print("\nTarget:", decoded_targets[j])
print("\nPredicted:", decoded_preds[j])
test_cer.append(cer(decoded_targets[j], decoded_preds[j]))
test_wer.append(wer(decoded_targets[j], decoded_preds[j]))
avg_cer = sum(test_cer)/len(test_cer)
avg_wer = sum(test_wer)/len(test_wer)
experiment.log_metric('test_loss', test_loss, step=iter_meter.get())
experiment.log_metric('cer', avg_cer, step=iter_meter.get())
experiment.log_metric('wer', avg_wer, step=iter_meter.get())
print('Test set: Average loss: {:.4f}, Average CER: {:4f} Average WER: {:.4f}\n'.format(test_loss, avg_cer, avg_wer))
def main(learning_rate=5e-4, batch_size=20, epochs=10,
train_url="train-clean-100", test_url="test-clean",
experiment=Experiment(api_key='dummy_key', disabled=True)):
hparams = {
"n_cnn_layers": 3,
"n_rnn_layers": 5,
"rnn_dim": 512,
"n_class": 29,
"n_feats": 128,
"stride":2,
"dropout": 0.1,
"learning_rate": learning_rate,
"batch_size": batch_size,
"epochs": epochs
}
experiment.log_parameters(hparams)
use_cuda = torch.cuda.is_available()
torch.manual_seed(7)
device = torch.device("cuda" if use_cuda else "cpu")
if not os.path.isdir("./data"):
os.makedirs("./data")
train_dataset = torchaudio.datasets.LIBRISPEECH("./data", url=train_url, download=True)
test_dataset = torchaudio.datasets.LIBRISPEECH("./data", url=test_url, download=True)
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
train_loader = data.DataLoader(dataset=train_dataset,
batch_size=hparams['batch_size'],
shuffle=True,
collate_fn=lambda x: data_processing(x, 'train'),
**kwargs)
test_loader = data.DataLoader(dataset=test_dataset,
batch_size=hparams['batch_size'],
shuffle=False,
collate_fn=lambda x: data_processing(x, 'valid'),
**kwargs)
model = SpeechRecognitionModel(
hparams['n_cnn_layers'], hparams['n_rnn_layers'], hparams['rnn_dim'],
hparams['n_class'], hparams['n_feats'], hparams['stride'], hparams['dropout']
).to(device)
print(model)
print('Num Model Parameters', sum([param.nelement() for param in model.parameters()]))
optimizer = optim.AdamW(model.parameters(), hparams['learning_rate'])
criterion = nn.CTCLoss(blank=28).to(device)
scheduler = optim.lr_scheduler.OneCycleLR(optimizer, max_lr=hparams['learning_rate'],
steps_per_epoch=int(len(train_loader)),
epochs=hparams['epochs'],
anneal_strategy='linear')
iter_meter = IterMeter()
for epoch in range(1, epochs + 1):
train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment)
test(model, device, test_loader, criterion, epoch, iter_meter, experiment)
learning_rate = 5e-4
batch_size = 20
epochs = 10
libri_train_set = "train-clean-100"
libri_test_set = "test-clean"
main(learning_rate, batch_size, epochs, libri_train_set, libri_test_set, experiment)
Lightning Version:
class SpeechTrainModel(pl.LightningModule):
def __init__(self, hparams):
super().__init__()
self.hparams = hparams
self.model = SpeechRecognitionModel(
hparams['n_cnn_layers'], hparams['n_rnn_layers'], hparams['rnn_dim'],
hparams['n_class'], hparams['n_feats'], hparams['stride'], hparams['dropout']
)
self.criterion = nn.CTCLoss(blank=28)
def forward(self, x):
return self.model(x)
def training_step(self, batch, batch_nb):
spectrograms, labels, input_lengths, label_lengths = batch
output = self(spectrograms) # (batch, time, n_class)
output = F.log_softmax(output, dim=2)
output = output.transpose(0, 1) # (time, batch, n_class)
loss = self.criterion(output, labels, input_lengths, label_lengths)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_nb):
spectrograms, labels, input_lengths, label_lengths = batch
output = self(spectrograms) # (batch, time, n_class)
output = F.log_softmax(output, dim=2)
output = output.transpose(0, 1) # (time, batch, n_class)
loss = self.criterion(output, labels, input_lengths, label_lengths)
# decoded_preds, decoded_targets = GreedyDecoder(output.transpose(0, 1), labels, label_lengths)
# for j in range(len(decoded_preds)):
# test_cer.append(cer(decoded_targets[j], decoded_preds[j]))
# test_wer.append(wer(decoded_targets[j], decoded_preds[j]))
return {'val_loss': loss}
def validation_epoch_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'val_loss': avg_loss, 'log': tensorboard_logs}
def prepare_data(self):
if not os.path.isdir("./data"):
os.makedirs("./data")
self.train_dataset = torchaudio.datasets.LIBRISPEECH("./data", url="train-clean-100", download=True)
self.test_dataset = torchaudio.datasets.LIBRISPEECH("./data", url="test-clean", download=True)
def configure_optimizers(self):
hparams = self.hparams
optimizer = optim.AdamW(self.parameters(), hparams['learning_rate'])
scheduler = optim.lr_scheduler.OneCycleLR(optimizer, max_lr=hparams['learning_rate'],
steps_per_epoch=int(len(self.train_dataset)),
epochs=hparams['epochs'],
anneal_strategy='linear')
scheduler_dict = {'scheduler': scheduler, 'interval': 'step'}
return [optimizer], [scheduler_dict]
def train_dataloader(self):
self.train_loader = data.DataLoader(dataset=self.train_dataset,
batch_size=self.hparams['batch_size'],
shuffle=True,
collate_fn=lambda x: data_processing(x, 'train'),
)
return self.train_loader
def val_dataloader(self):
return data.DataLoader(dataset=self.test_dataset,
batch_size=self.hparams['batch_size'],
shuffle=False,
collate_fn=lambda x: data_processing(x, 'valid'),
)
learning_rate=5e-4
batch_size=20
epochs=10
hparams = {
"n_cnn_layers": 3,
"n_rnn_layers": 5,
"rnn_dim": 512,
"n_class": 29,
"n_feats": 128,
"stride":2,
"dropout": 0.1,
"learning_rate": learning_rate,
"batch_size": batch_size,
"epochs": epochs
}
mdl = SpeechTrainModel(hparams)
trainer = pl.Trainer(gpus=1, max_epochs=epochs, logger=comet_logger)
trainer.fit(mdl)
|
Adding a `warmup` period to EarlyStopping and ModelCheckpoint
|
[
"feature",
"help wanted",
"won't fix"
] |
π Feature
Add an optional warmup period for EarlyStopping and ModelCheckpoint callbacks.
Motivation
Sometimes the metric you want to monitor can take a number of epochs to stabilize and become meaningful.
For example: with GANs, you might want to monitor and minimize G's loss, but usually it starts out unreasonably low because it's based on the output of D, which hasn't learned anything about discriminating yet.
Pitch
I'd like to have this result in the callbacks having no effect for the first 10 epochs:
early_stop = EarlyStopping(..., warmup=10)
model_checkpoint = ModelCheckpoint(..., warmup=10)
Alternatives
I added this option through inheritance:
from pytorch_lightning.callbacks import EarlyStopping
class EarlyStoppingWithWarmup(EarlyStopping):
"""
EarlyStopping, except don't watch the first `warmup` epochs.
"""
def __init__(self, warmup=10, **kwargs):
super().__init__(**kwargs)
self.warmup = warmup
def on_validation_end(self, trainer, pl_module):
pass
def on_epoch_end(self, trainer, pl_module):
if trainer.current_epoch < self.warmup:
return
else:
super()._run_early_stopping_check(trainer, pl_module)
Additional context
Edit: couldn't get this working at first, but got it figured out after upgrading my pl version. I would be happy to PR something like this if anyone can provide guidance on where to add it.
|
How to perform inference on multiple GPUs?
|
[
"question",
"won't fix"
] |
I have 4 GPUs, 1m images.
I would like to use all 4 of them to run the method test.
Is there any tutorial that shows how to do this?
|
Checkpoints cannot be loaded in non-pl env
|
[
"bug",
"help wanted"
] |
π Feature
Add an option to save only state_dict for ModelCheckpoint callbacks
π Bug
PL checkpoints cannot be loaded in non-pl envs
Motivation
To be able to move trained models and weights into pytorch only environments
Additional context
Currently when you do torch.load() on a pl generated checkpoint in an environment without pl, there is a pickling error. For my current use case I have to load the checkpoints in my training environment and save them again with only state_dict for the weights.
See reply below for more info
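For reference, the manual round-trip looks roughly like this (paths are placeholders; it assumes the Lightning checkpoint stores the weights under the 'state_dict' key):
import torch

# in the training environment, where the checkpoint can be unpickled
ckpt = torch.load('lightning_checkpoint.ckpt', map_location='cpu')
torch.save(ckpt['state_dict'], 'weights_only.pt')

# in the PyTorch-only environment
state_dict = torch.load('weights_only.pt', map_location='cpu')
# plain_model.load_state_dict(state_dict)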
|
How to reload partial weights from the trained checkpoint?
|
[
"question"
] |
What is your question?
I want to reload partial weights from a trained checkpoint and let the remaining parameters be trained from scratch, but I didn't find an API that allows me to reload partial parameters.
Code
My code looks like this; I found that PL only supports resume_from_checkpoint, which reloads all weights from the checkpoint.
trainer = pl.Trainer(gpus=1, early_stop_callback=None, resume_from_checkpoint=resume_checkpoint, val_check_interval=1000)
trainer.fit(model)
What should I do to reload partial parameters? Thank you!
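One manual approach (a sketch; the checkpoint path, the MyLightningModule class and the 'encoder.' prefix are placeholders): load the checkpoint's state_dict yourself, keep only the keys you want, and load them with strict=False.
import torch
import pytorch_lightning as pl

model = MyLightningModule(hparams)          # fresh model, new parts start from scratch
ckpt = torch.load('checkpoint.ckpt', map_location='cpu')

# keep only the weights you want to reuse, e.g. the encoder
partial_state = {k: v for k, v in ckpt['state_dict'].items()
                 if k.startswith('encoder.')}

result = model.load_state_dict(partial_state, strict=False)
print('trained from scratch:', result.missing_keys)

trainer = pl.Trainer(gpus=1)
trainer.fit(model)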
|
Pytorch lightning switched to cpu in the middle of training. How can I debug this?
|
[
"bug",
"help wanted"
] |
π Bug
So I am training a model and suddenly PyTorch Lightning started using the CPU. I checked the GPU status and it is working normally.
I have never seen this before. Can PyTorch start using the CPU in the middle of training (at the next epoch)?
The weird thing is that the following epoch was using the GPU again.
EDIT: Some asked how I noticed it is not using GPU.
I noticed the difference by looking at the it/s displayed in the verbose. It dropped to 13 it/s from 130 in one epoch and then rebounded to 150 it/s. I immediately took a look at the CUDA usage and It was very low.
To Reproduce
This is the training loop (pretty standard):
model = MatrixFactorizationAttention(att_mf_config)
train_loader = sample_generator.instance_a_train_loader_attention(att_mf_config['batch_size'])
val_loader = sample_generator.instance_val_loader_attention
# wandb_logger.log_hyperparams(att_mf_config)
early_stop_callback = EarlyStopping(
monitor='HR',
min_delta=0.01,
patience=10,
verbose=True,
mode='max'
)
trainer = pl.Trainer(gpus=1, max_epochs=150, precision=16,checkpoint_callback=False,logger = False,early_stop_callback=early_stop_callback)
trainer.fit(model, train_dataloader=train_loader, val_dataloaders=val_loader)
EDIT: apart from this training loop, I am using a standard PyTorch model with a regular classification pipeline.
Environment
Win 10 with pytorch lightning running on pycharm
Cuda 10.2
GPU 2060 super
pytorch: nightly version
|
Logging gradients and a possible bug in the docs?
|
[
"question",
"won't fix"
] |
TL;DR Question: How do we correctly log gradients to tensorboard?
I am trying to log gradients to tensorboard to track NaN loss in a speech application. The docs suggest using on_after_backward along with this code that appears to be incorrect:
# example to inspect gradient information in tensorboard
if self.trainer.global_step % 25 == 0: # don't make the tf file huge
params = self.state_dict()
for k, v in params.items():
grads = v
name = k
self.logger.experiment.add_histogram(tag=name, values=grads,
global_step=self.trainer.global_step)
This code seems to log the weights instead of gradients (assuming lightning state_dict is the same structure as pytorch). I'm happy to fix it and submit a PR as long as I'm not mistaken. I would log the weights like this...
# example to inspect gradient information in tensorboard
if self.trainer.global_step % 25 == 0: # don't make the tf file huge
params = self.state_dict()
for name, weight in params.items():
self.logger.experiment.add_histogram(tag=name, values=weight,
global_step=self.trainer.global_step)
but for gradients, if I try to log weights.grad, I get None. How do we correctly log gradients to tensorboard?
# example to inspect gradient information in tensorboard
if self.trainer.global_step % 25 == 0: # don't make the tf file huge
params = self.state_dict()
for name, weight in params.items():
self.logger.experiment.add_histogram(tag=name, values=weight.grad,
global_step=self.trainer.global_step)
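For gradients specifically, one approach is to iterate named_parameters() instead of state_dict(), since the parameter objects still carry .grad inside on_after_backward (a sketch):
def on_after_backward(self):
    # log gradient histograms every 25 steps to keep the event file small
    if self.trainer.global_step % 25 == 0:
        for name, param in self.named_parameters():
            if param.grad is not None:
                self.logger.experiment.add_histogram(
                    tag='grad/' + name, values=param.grad,
                    global_step=self.trainer.global_step)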
|
Document ddp_cpu
|
[
"feature",
"good first issue",
"won't fix",
"docs"
] |
π Documentation
For typos and doc fixes, please go ahead and:
Create an issue.
Fix the typo.
Submit a PR.
Thanks!
|
Plotting learning rate from a lr_scheduler via a Callback
|
[
"feature",
"good first issue",
"question"
] |
I think the title explains a lot, but let me elaborate: I have a LightningModule whose configure_optimizers method returns an optimizer and a scheduler. Later, in a Callback, I have an on_batch_end function in which I try to log the learning rate.
Of course, if the scheduler were accessible as a class member, we could call self.scheduler.get_lr() on it and use the value to plot. Since this is not how it has been implemented, I am wondering how to do this.
Would appreciate any pointers.
PytorchLightning - 0.8.5
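One way to do this without touching the scheduler object itself (a sketch): read the learning rate currently set on each optimizer's param groups from inside the callback.
import pytorch_lightning as pl

class LRLogger(pl.Callback):
    def on_batch_end(self, trainer, pl_module):
        # the param groups hold the lr that the scheduler last set
        for i, optimizer in enumerate(trainer.optimizers):
            for j, pg in enumerate(optimizer.param_groups):
                pl_module.logger.log_metrics(
                    {'lr/opt{}_pg{}'.format(i, j): pg['lr']},
                    step=trainer.global_step)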
|
bug in pytorch_lightning.metrics.functional.auroc
|
[
"bug",
"help wanted"
] |
the code:
def validation_epoch_end(self, outputs):
.........
print(total_y_hat.device)
print(total_y_true.device)
print(total_y_hat)
print(total_y_true)
print(total_y_hat.shape)
print(total_y_true.shape)
auc_score = auroc(total_y_hat, total_y_true)
the output is:
Get data done!
Validation sanity check: 50%|βββββ | 1/2 [00:00<00:00, 1.06it/s]
cuda:0
cuda:0
tensor([0.5084, 0.5084, 0.5084, ..., 0.5084, 0.5084, 0.5084], device='cuda:0')
tensor([0., 0., 0., ..., 0., 0., 0.], device='cuda:0')
torch.Size([16384])
torch.Size([16384])
Traceback (most recent call last):
File "lighting_sales.py", line 443, in <module>
main(hparams)
File "lighting_sales.py", line 392, in main
trainer.fit(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in fit
self.single_gpu_train(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 176, in single_gpu_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1076, in run_pretrain_routine
False)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 330, in _evaluate
eval_results = model.validation_epoch_end(outputs)
File "lighting_sales.py", line 252, in validation_epoch_end
auc_score = auroc(total_y_hat, total_y_true)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/metrics/functional/classification.py", line 817, in auroc
return _auroc(pred=pred, target=target, sample_weight=sample_weight, pos_label=pos_label)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/metrics/functional/classification.py", line 766, in new_func
x, y = func_to_decorate(*args, **kwargs)[:2]
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/metrics/functional/classification.py", line 815, in _auroc
return roc(pred, target, sample_weight, pos_label)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/metrics/functional/classification.py", line 553, in roc
pos_label=pos_label)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/metrics/functional/classification.py", line 504, in _binary_clf_curve
torch.tensor([target.size(0) - 1])])
RuntimeError: All input tensors must be on the same device. Received cuda:0 and cpu
|
trainer.test after fp16 training with apex
|
[
"bug",
"help wanted",
"priority: 0"
] |
Summary
In the current huggingface examples/seq2seq/finetune.py,
trainer.test fails in fp16 mode with torch 1.5.1.
Nowhere in the huggingface code is model.half called.
Models may be saved to disk in either fp16 or fp32 format, but since we are resuming from a PL checkpoint, I think PL is controlling the saving and loading here.
Bonus: this test will hang if you pass gpus=2 by switching this line.
To reproduce
You must be on a machine with cuda installed
git clone git@github.com:huggingface/transformers.git
cd transformers
pip install -e .
pip install -e .[examples] # installs pytorch-lightning==0.8.5
git checkout broken-fp16-test
RUN_SLOW=1 USE_CUDA=1 pytest examples/seq2seq/test_bash_script.py
Small Traceback
RuntimeError: Found param model.model.shared.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor.
When using amp.initialize, you do not need to call .half() on your model
before passing it, no matter what optimization level you choose.
Full Traceback
(failure from this line)
============================= test session starts ==============================
platform linux -- Python 3.7.4, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /home/shleifer/transformers_fork, inifile: pytest.ini
plugins: forked-1.1.3, xdist-1.31.0, requests-mock-1.8.0
collected 1 item
examples/seq2seq/test_bash_script.py Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.
Defaults for this optimization level are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
Validation sanity check: 100%|ββββββββββ| 2/2 [00:00<00:00, 2.76it/s]
Training: 0it [00:00, ?it/s]
Training: 0%| | 0/125 [00:00<?, ?it/s]
Epoch 1: 0%| | 0/125 [00:00<?, ?it/s] Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
Epoch 1: 1%| | 1/125 [00:00<00:10, 11.29it/s, loss=12.438, v_num=219]
...
Epoch 2: 74%|ββββββββ | 93/125 [00:14<00:04, 6.55it/s, loss=12.427, v_num=219]
# DONE TRAINING
Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.
Defaults for this optimization level are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
F
=================================== FAILURES ===================================
______________________ test_train_mbart_cc25_enro_script _______________________
@slow
@pytest.mark.skipif(not CUDA_AVAILABLE, reason="too slow to run on CPU")
def test_train_mbart_cc25_enro_script():
data_dir = "examples/seq2seq/test_data/wmt_en_ro"
env_vars_to_replace = {
"$MAX_LEN": 200,
"$BS": 4,
"$GAS": 1,
"$ENRO_DIR": data_dir,
"facebook/mbart-large-cc25": MBART_TINY,
}
# Clean up bash script
bash_script = Path("examples/seq2seq/train_mbart_cc25_enro.sh").open().read().split("finetune.py")[1].strip()
bash_script = bash_script.replace("\\\n", "").strip().replace("$@", "")
for k, v in env_vars_to_replace.items():
bash_script = bash_script.replace(k, str(v))
output_dir = tempfile.mkdtemp(prefix="output")
if CUDA_AVAILABLE:
gpus = 1 # torch.cuda.device_count()
else:
bash_script = bash_script.replace("--fp16", "")
gpus = 0
testargs = (
["finetune.py"]
+ bash_script.split()
+ [f"--output_dir={output_dir}", f"--gpus={gpus}", "--learning_rate=3e-1"]
)
with patch.object(sys, "argv", testargs):
parser = argparse.ArgumentParser()
parser = pl.Trainer.add_argparse_args(parser)
parser = SummarizationModule.add_model_specific_args(parser, os.getcwd())
args = parser.parse_args()
# assert args.gpus == gpus THIS BREAKS
# args.gpus = gpus
> model = main(args)
examples/seq2seq/test_bash_script.py:71:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
examples/seq2seq/finetune.py:392: in main
trainer.test()
../miniconda3/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:1281: in test
results = self.__test_using_best_weights(ckpt_path, test_dataloaders)
../miniconda3/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:1321: in __test_using_best_weights
results = self.fit(model)
../miniconda3/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:1003: in fit
results = self.single_gpu_train(model)
../miniconda3/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py:182: in single_gpu_train
model, optimizers = model.configure_apex(amp, model, self.optimizers, self.amp_level)
../miniconda3/envs/nb/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py:1006: in configure_apex
model, optimizers = amp.initialize(model, optimizers, opt_level=amp_level)
../miniconda3/envs/nb/lib/python3.7/site-packages/apex/amp/frontend.py:358: in initialize
return _initialize(models, optimizers, _amp_state.opt_properties, num_losses, cast_model_outputs)
../miniconda3/envs/nb/lib/python3.7/site-packages/apex/amp/_initialize.py:171: in _initialize
check_params_fp32(models)
../miniconda3/envs/nb/lib/python3.7/site-packages/apex/amp/_initialize.py:87: in check_params_fp32
name, param.type()))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
msg = 'Found param model.model.shared.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor.\nWhen using a...alize, you do not need to call .half() on your model\nbefore passing it, no matter what optimization level you choose.'
def warn_or_err(msg):
if _amp_state.hard_override:
print("Warning: " + msg)
else:
> raise RuntimeError(msg)
E RuntimeError: Found param model.model.shared.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor.
E When using amp.initialize, you do not need to call .half() on your model
E before passing it, no matter what optimization level you choose.
../miniconda3/envs/nb/lib/python3.7/site-packages/apex/amp/_amp_state.py:32: RuntimeError
=========================== short test summary info ============================
FAILED examples/seq2seq/test_bash_script.py::test_train_mbart_cc25_enro_script
============================== 1 failed in 38.75s ==============================
|
Default checkpoint location problematic when using docker
|
[
"bug",
"help wanted"
] |
The default behavior of ModelCheckpoint is to use os.getcwd(). Outside my docker container, this ended up being the same directory where my tensorboard logs were saved (e.g. /my/dir/tb_logs/default/version_0/checkpoints/). But inside the docker container, it saved to the internal working directory (e.g. /home/default/version_0/checkpoints/). Since this location disappeared along with the container, the checkpoint was gone, and there was no warning raised to explain why.
Requiring a checkpoint directory isn't desirable, but I'd like to help others avoid this grief in the future. Is there a better way to infer a default location than os.getcwd()? Something as simple as a print statement with the checkpoint location would have saved me a lot of time troubleshooting.
|
trainer.test not working in ddp
|
[
"bug",
"help wanted",
"distributed"
] |
π Bug
Testing in ddp is not working in the latest master.
To Reproduce
I am using the gpu_template example from basic_examples in the repo : "python gpu_template.py --gpus 2 --distributed_backend ddp", where, instead of trainer.fit(model), I am using trainer.test(model).
I am getting "RuntimeError: connect() timed out".
Environment
PyTorch Version 1.3.1:
Ubuntu 18.04
|
training_epoch_end() only gives outputs from one optimizer when multiple optimizers are being used
|
[
"feature",
"won't fix"
] |
I'm making an abstract GAN class, where I have the following code:
def configure_optimizers(self):
return self.g_optimizer, self.d_optimizer
def training_step(self, batch: Tuple[Tensor, Tensor], batch_idx, optimizer_idx) -> Dict:
X, _ = batch
batch_size = X.shape[0]
z = self.Z.sample((batch_size,))
if self.on_gpu:
z = z.cuda(X.device)
g_X = self.G(z)
if optimizer_idx == 0:
# Train generator
d_z = self.D(g_X)
g_loss = self.generator_loss(d_z)
step_dict = {'loss': g_loss, 'g_loss': g_loss}
return step_dict
if optimizer_idx == 1:
# Train discriminator
d_x = self.D(X)
d_z = self.D(g_X)
d_loss = self.discriminator_loss(d_x, d_z)
step_dict = {'loss': d_loss, 'd_loss': d_loss}
return step_dict
In training_epoch_end(), outputs only contains the discriminator losses. Is this a bug, or am I doing something wrong?
|
Subprocess launched in ddp have the wrong cwd when using hydra.
|
[
"bug",
"help wanted"
] |
π Bug
Details: #2639 (comment). I've talked to @omry about the issue and I will send out a fix soon.
To Reproduce
Please see the comment I posted above.
Expected behavior
The CWD for subprocesses should be the same as that of the parent, and relative paths should work.
|
Can't find PyTorch 1.6
|
[
"question"
] |
β Can't find pytorch 1.6
Before asking:
search the issues.
search the docs.
I tried to use native AMP, and the docs say I need PyTorch 1.6. On PyTorch's website I can only find 1.5.1, and when I tried to install the Preview (Nightly) build this error popped up:
torchvision 0.8.0 has requirement torch ==1.7.0 but you'll have torch 1.5.1 which is incompatible.
Can anyone tell me why? Thanks.
The error messages are:
ERROR: torchvision 0.8.0.dev20200724+cu101 has requirement torch==1.7.0.dev20200724+cu101, but you'll have torch 1.5.1 which is incompatible.
Traceback (most recent call last):
File "pl_fuse_train_apex_1node_8gpu_16bit_amp_O2.py", line 7, in
from data.load_fuse_data import LoadFuseData
File "/newDisk/users/huzhenhua/fuse_net/data/load_fuse_data.py", line 9, in
from data import my_transforms
File "/newDisk/users/huzhenhua/fuse_net/data/my_transforms.py", line 11, in
from torchvision.transforms import transforms
File "/usr/local/lib/python3.6/dist-packages/torchvision/init.py", line 5, in
from torchvision import models
File "/usr/local/lib/python3.6/dist-packages/torchvision/models/init.py", line 12, in
from . import detection
File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/init.py", line 1, in
from .faster_rcnn import *
File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/faster_rcnn.py", line 7, in
from torchvision.ops import misc as misc_nn_ops
File "/usr/local/lib/python3.6/dist-packages/torchvision/ops/init.py", line 1, in
from .boxes import nms, box_iou
File "/usr/local/lib/python3.6/dist-packages/torchvision/ops/boxes.py", line 42, in
@torch.jit._script_if_tracing
AttributeError: module 'torch.jit' has no attribute '_script_if_tracing'
Code
What have you tried?
What's your environment?
OS: [e.g. iOS, Linux, Win]:Linux
Packaging [e.g. pip, conda]: pip
Version [e.g. 0.5.2.1]
|
All TPU cores create tensorboard logs
|
[
"bug",
"help wanted",
"accelerator: tpu",
"waiting on author"
] |
π Bug
With TPUs, TestTubeLogger writes many empty tensorboard logs, one log per TPU core except one. This confuses tensorboard and prevents it from updating. This is happening because the logger is created before spawning processes then the logger is replicated in each process.
To Reproduce
Train any model with ptl.Trainer(logger=TestTubeLogger(), num_tpu_cores=8) then check the tf directory, you will find 1 file with real log and 7 empty files.
Expected behavior
Only the main process creates a tensorboard log.
Environment
pytorch-lightning==0.8.5
|
lightning and apex amp performance not improved
|
[
"question"
] |
β lightning and apex amp performance not improved
Before asking:
search the issues.
search the docs.
I'm trying to use Lightning and Apex AMP to speed up ddp training. I tried amp_level O0, O1, O2, and O3, and they all take almost the same time (around 45 minutes).
train_loader = DataLoader(dataset=train_dataset, batch_size=2, shuffle=True, num_workers=4)
val_loader = DataLoader(dataset=val_dataset, batch_size=1, shuffle=False, num_workers=4)
trainer = pl.Trainer(gpus= 8, num_nodes = 1, distributed_backend='ddp', precision = 16, amp_level = 'O1')
trainer.fit(model, train_dataloader=train_loader, val_dataloaders=val_loader)
I didn't change the batch size to be a multiple of 8 because I saw this post, and my cuDNN version is 7.6.5.
Thanks!
What have you tried?
I also tried torch.backends.cudnn.benchmark = True but got no improvement.
What's your environment?
OS: [e.g. iOS, Linux, Win]: Linux
Packaging [e.g. pip, conda] pip
Version [e.g. 0.5.2.1]; latest
|
`replace_sampler_ddp` doesn't create a shuffled sampler
|
[
"bug",
"help wanted",
"distributed"
] |
π Bug
The DistributedSampler created using replace_sampler_ddp is not shuffled. Check the kwargs here
Expected behavior
If the dataloader is the training dataloader, a shuffled DistributedSampler should be created; otherwise a non-shuffled one. Even though the train flag is passed to the function here, it is ignored.
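In other words, the expected construction would be roughly the following (a sketch; the argument names are illustrative, not the trainer's internals):
from torch.utils.data.distributed import DistributedSampler

def build_ddp_sampler(dataset, world_size, rank, train):
    # shuffle only when replacing the sampler of the train dataloader
    return DistributedSampler(dataset, num_replicas=world_size, rank=rank, shuffle=train)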
Environment
pytorch-lightning master
|
Weight tying is broken on TPUs leading to silent errors
|
[
"bug",
"feature",
"help wanted",
"accelerator: tpu"
] |
π Bug
PyTorch/XLA documentation mentions here that weight tying should happen after moving tensors to XLA, otherwise the tensors are copied. This is a silent error that can easily go undetected (thanks to @matt-peters for pointing it out), and it would be good if PL guarded the user against it. Notice that weight tying is pretty common in today's models, not a corner case.
Code sample
The following code snippet shows how to detect that this issue is happening and how to guard against it.
import pytorch_lightning as pl

class MyPLModel(pl.LightningModule):
    def to(self, *args, **kwargs):
        param_count_before_moving_to_device = len(list(self.parameters()))
        super().to(*args, **kwargs)
        if self.trainer.use_tpu:
            # need to re-tie the weights after moving to XLA!
            self.tie_weights()  # a new function that the user needs to implement
        param_count_after_moving_to_device = len(list(self.parameters()))
        assert param_count_before_moving_to_device == param_count_after_moving_to_device
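For context, tie_weights itself is usually just a re-assignment of the shared parameter, something like the sketch below (hypothetical layer names, not from the original report):
def tie_weights(self):
    # re-point the output projection at the embedding weight;
    # must be called again after the module has been moved to the XLA device
    self.decoder.weight = self.embedding.weight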
|
Improve Design of DataModule
|
[
"feature",
"help wanted"
] |
π Feature
Motivation
Since the introduction of datamodules we have duplicated code. For example, the train/val/test_dataloader methods in LightningModule and DataModule have the same name, same docs, same signature but live in two files.
Also the argparse methods are copy pasted from Trainer, it will become impossible to maintain this in the future.
Pitch
Factor out the hook definitions:
class DataHooks:
def prepare_data(...):
""" docs... """
def setup(...):
""" docs... """
def train_dataloader(...):
""" docs... """
etc.
and then in the other classes, inherit from it:
class LightningModule(..., ModelHooks, DataHooks, ...)
...
class DataModule(..., DataHooks, ...)
...
Note also:
we even have duplicated tests
Alternatives
I see no alternatives. This is the best way I know.
@PyTorchLightning/core-contributors makes sense?
|
Live long and prosper
|
[
"question"
] |
This issue is just to thank the PyTorch Lightning team.
Despite my beginner-level knowledge of PyTorch and other deep learning frameworks, I was able to easily implement a fairly complex model that is clearly written and completely reproducible.
I hope to contribute in some way in the near future.
|
TensorBoard logging in validation_step and test_step
|
[
"question"
] |
Even defining the log in all steps of the PL model:
def training_step(self, batch, batch_idx):
...
# TensorBoard logging
log = {"train_loss": train_loss, "train_mrr": train_mrr}
return {'loss': train_loss, "log":log}
def test_step(self, batch, batch_idx):
...
# TensorBoard logging
log = {"test_loss": test_loss, "test_mrr": test_mrr}
return {'test_loss': test_loss, 'log': log}
def validation_step(self, batch, batch_idx):
...
# TensorBoard logging
log = {"val_loss": val_loss, "val_mrr": val_mrr}
return {'val_loss': val_loss, 'log': log}
I can only visualize train_loss and train_mrr on TensorBoard; test_loss and test_mrr do not appear in TensorBoard, and the same happens with val_loss and val_mrr.
|
Horovod and native AMP do not work
|
[
"bug",
"help wanted"
] |
π Bug
I'm not sure if this is a bug, but when I was trying to use Horovod as the backend with native AMP in PyTorch 1.6, Lightning always dispatched to the function that uses Apex AMP instead.
To Reproduce
Steps to reproduce the behavior:
Go to 'trainer/distrib_parts.py'
Run '....'
Scroll down to '....'
See error
Code sample
trainer = pl.Trainer(gpus=1, num_nodes = 1, distributed_backend='horovod', precision = 16)
trainer.fit(generator, train_dataloader=train_loader, val_dataloaders=val_loader)
I tried to run it like: horovodrun --verbose -np 32 -H server1-0:8,server2-0:8,server3-0:8,server4-0:8 python file.py
Expected behavior
Use native amp when trying to launch Horovod in PyTorch 1.6.
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
CUDA:
- GPU:
- Tesla V100-SXM2-32GB
- Tesla V100-SXM2-32GB
- Tesla V100-SXM2-32GB
- Tesla V100-SXM2-32GB
- Tesla V100-SXM2-32GB
- Tesla V100-SXM2-32GB
- Tesla V100-SXM2-32GB
- Tesla V100-SXM2-32GB
- available: True
- version: 10.1
Packages:
numpy: 1.19.1
pyTorch_debug: False
pyTorch_version: 1.6.0.dev20200623+cu101
pytorch-lightning: 0.8.5
tensorboard: 2.3.0
tqdm: 4.48.0
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #83~16.04.1-Ubuntu SMP Wed Dec 18 04:56:23 UTC 2019
How you installed PyTorch (conda, pip, source): pip
cuDNN version: 7.6.5
Additional context
|
AttributeError: module 'pytorch_lightning' has no attribute 'LightningDataModule'
|
[
"bug",
"help wanted"
] |
Name: pytorch-lightning
Version: 0.8.5
following latest doc: https://pytorch-lightning.readthedocs.io/_/downloads/en/latest/pdf/
|
how to resume in the same folder
|
[
"question",
"won't fix"
] |
When I launch the PyTorch Lightning trainer fit(), it seems that to resume you can pass an old checkpoint, but I am not sure whether it is possible (or perhaps deliberately not supported) to simply resume the current training in the same "version" sub-folder?
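What I have been trying so far is to pin the logger version and pass resume_from_checkpoint, which I think should reuse the same sub-folder (not sure this is the intended way; the paths below are placeholders):
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

logger = TensorBoardLogger(save_dir='lightning_logs', name='my_exp', version=3)  # reuse version_3
trainer = Trainer(
    logger=logger,
    resume_from_checkpoint='lightning_logs/my_exp/version_3/checkpoints/epoch=9.ckpt',
)
trainer.fit(model)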
|
Use pytorch-like `ignore_index` instead of `remove_bg` for IoU
|
[
"feature",
"help wanted"
] |
π Feature
PL's implementation of the IoU metric has a remove_bg flag that allows to ignore the background class.
I propose changing it to be the same as PyTorch's loss functions,
that is an ignore_index argument that takes the index of the class to be ignored.
Motivation for the proposal
Currently when remove_bg is set to True, the background is expected to be associated to index 0.
This is not a constant among datasets, for example Cityscapes uses index 255 for its ignored semantic segmentation class.
Pytorch's loss functions for classification that allow to ignore a class do it with an ignore_index argument, like CrossEntropyLoss.
That means that when using those as intended, you have to manually move the ignored class's channel to the beginning of the tensor between the loss and metric computation, and I don't like having that in the middle of my training step.
Pitch
I'd like to have the same interface as PyTorch's loss functions with the ignore_index argument.
On a side note, I also think that the documentation should display the expected shape of inputs, like PyTorch' loss functions, and not keep quiet about the automatic adaptation to cases when the prediction is given in one-hot encoding or as an index array.
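A rough sketch of what such an interface could look like (not the existing PL implementation, just an illustration of the proposed ignore_index semantics; assumes pred and target are label-index tensors):
import torch

def iou_with_ignore_index(pred, target, num_classes, ignore_index=None):
    # drop positions whose target carries the ignored label (e.g. 255 for Cityscapes)
    if ignore_index is not None:
        keep = target != ignore_index
        pred, target = pred[keep], target[keep]
    ious = []
    for c in range(num_classes):
        if c == ignore_index:
            continue
        p, t = pred == c, target == c
        union = (p | t).sum().float()
        if union > 0:
            ious.append((p & t).sum().float() / union)
    if not ious:
        return torch.tensor(float('nan'))
    return torch.stack(ious).mean()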
Alternatives
Apart from modifying the data between the loss and metric computation in the training step, it should be possible to do it at dataset-creation time, with custom transformations, and then use ignore_index=0 for the loss function.
For the first method, this is what I add:
# set ignored class at pos 0
y_pred = torch.roll(y_pred, shifts=1, dims=1)
y = torch.add(y, 1)
y[y == 256] = 0
|
[DataModule] Datamodule setup in docs shows non-existent stage arg
|
[
"help wanted",
"docs"
] |
π Bug
To Reproduce
Colab Minimal code
Expected behavior
DataModule should call prepare_data and setup
Environment
Colab
Additional context
PL Version: 0.9.0rc3
|
How to make stages?
|
[
"feature",
"question",
"discussion",
"working as intended"
] |
Hi! Thank you for a great framework! I've tried to write down stages for training. E.g. in my config:
<<: *default
stage2:
<<: *default
datasets:
<<: *datasets
# 240k
root: ['path1',
'path2',
'path3']
per_folder_ratio: [1.0, 1.0, 1.0]
transform:
<<: *transform
augs_lvl: "light"
optimizer:
class_name: RAdam
lr: 0.0001
and in train.py:
for stage_name, stage in selected_stages.items():
print(f"running stage {stage_name}")
model = build_model(stage) # BUILD LIGHTNING MODULE from func
trainer = build_trainer(stage_config=stage,
module=model,
...) # MAKE pytorch_lightning Trainer using this model
trainer.fit(model)
I'm using early stopping, which ends each iteration of the loop above (the other approach I also tried is to simply wait for max_epochs epochs).
The problem is that the second call to trainer.fit initializes DDP one more time, and the program crashes because the address/port is not freed from the previous DDP init.
RuntimeError: Address already in use
I've tried the master version of pytorch-lightning but the problem did not disappear.
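One workaround I'm considering is to force a fresh rendezvous port before building each stage's trainer (just an idea, not sure it is the intended fix):
import os

for i, (stage_name, stage) in enumerate(selected_stages.items()):
    os.environ['MASTER_PORT'] = str(29500 + i)  # new port per stage so DDP can re-init
    model = build_model(stage)
    trainer = build_trainer(stage_config=stage, module=model)
    trainer.fit(model)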
|
0.8.1 keeps writing into "version_0" folder instead of creating new version_1/2/3...
|
[] |
0.7.6 (I believe) would properly create a new "version_X" folder per run, but since upgrading to 0.8.1, it no longer does this.
Here's my logging-related code in my train script, which are then passed onto Trainer:
# custom logging directory
logger = pl.loggers.TestTubeLogger(
save_dir=logging_dir,
name=args.name
)
log_ckpt = pl.callbacks.ModelCheckpoint(save_top_k=-1, verbose=True)
|
Docs : Introduction Guide, test_dataloader wrong sequence length in random_split
|
[
"docs"
] |
Docs : Introduction Guide, test_dataloader
For the MNIST dataset, the training set contains 60000 examples and the test set 10000 examples.
while creating the test_dataloader, in the code
mnist_train = MNIST(os.getcwd(), train=False, download=False, transform=transform)
the train flag is set to False, so it reads from the test set, which only has 10000 examples; hence the next step fails because we try to split it into 55k and 5k examples,
_, mnist_val = random_split(mnist_train, [55000, 5000])
with error
ValueError: Sum of input lengths does not equal the length of the input dataset!
The best way to resolve this would be to remove the random_split call and return the complete test set from test_dataloader.
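Concretely, the suggested test_dataloader would look something like this (a sketch based on the surrounding guide code, reusing its transform):
def test_dataloader(self):
    # no random_split here: return the full 10k-example test set
    mnist_test = MNIST(os.getcwd(), train=False, download=False, transform=transform)
    return DataLoader(mnist_test, batch_size=64)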
|
Add a test case for running trainer.test without trainer.fit on DDP
|
[
"bug",
"help wanted",
"ci",
"distributed"
] |
π Bug
Running trainer.test(model) using DDP without running trainer.fit hangs.
To Reproduce
import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader, Dataset
class RandomDataset(Dataset):
def __init__(self, num_samples=100, dim=5):
self.num_samples = num_samples
self.dim = dim
def __len__(self):
return self.num_samples
def __getitem__(self, item):
x = torch.rand(self.dim)
y = x.sum()
return x, y
class Model(pl.LightningModule):
def __init__(self):
super(Model, self).__init__()
self.layer = torch.nn.Linear(5, 1)
def forward(self, x):
y = self.layer(x)
return y
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=0.1)
return optimizer
def train_dataloader(self):
return DataLoader(
dataset=RandomDataset(num_samples=100, dim=5),
batch_size=32,
)
def test_dataloader(self):
return DataLoader(
dataset=RandomDataset(num_samples=64, dim=5),
batch_size=8
)
def training_step(self, batch, batch_idx, optimizer_idx=0):
x, y = batch
x = x.view(-1, 5)
y = y.view(-1, 1)
y_dash = self(x)
loss = ((y - y_dash) ** 2).sum()
return {'loss': loss, 'log': {'train_loss': loss / x.size(0)}}
def test_step(self, batch, batch_idx, dataloader_idx=0):
return self.training_step(batch, batch_idx)
def test_epoch_end(self, outputs):
loss = torch.stack([log['loss'] for log in outputs]).mean()
return {'test_loss': loss}
if __name__ == '__main__':
model = Model()
trainer = pl.Trainer(
max_steps=20,
amp_level='O1',
gpus=2,
precision=16,
distributed_backend='ddp'
)
# comment below / remove comment below
# trainer.fit(model)
trainer.test(model)
Expected behavior
Should be able to run test with DDP.
Environment
PyTorch Version (e.g., 1.0): 1.5
PL: 0.8.5
Additional context
|
to_categorical should go before get_num_classes in metrics/functional/classification.py
|
[
"bug"
] |
pytorch-lightning/pytorch_lightning/metrics/functional/classification.py
Lines 174 to 178
in
d18b9ef
num_classes = get_num_classes(pred=pred, target=target,
num_classes=num_classes)
if pred.ndim == target.ndim + 1:
pred = to_categorical(pred, argmax_dim=argmax_dim)
to_categorical should go before get_num_classes, since get_num_classes assumes pred has already been turned into categories (it is doing int(pred.max().detach().item() + 1)).
With the current code, warnings are raised now and then when logits are passed in directly to the metrics, for example:
UserWarning: You have set 10 number of classes if different from predicted (xx) and target (10) number of classes
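In other words, the suggested order of the two calls would simply be the quoted lines swapped:
if pred.ndim == target.ndim + 1:
    pred = to_categorical(pred, argmax_dim=argmax_dim)
num_classes = get_num_classes(pred=pred, target=target,
                              num_classes=num_classes)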
|
Speed Drastically Decreases Under Horovod
|
[
"bug",
"help wanted",
"won't fix",
"3rd party"
] |
π Bug
If the backend is set to horovod, training speed will drop drastically.
To Reproduce
Steps to reproduce the behavior:
I run the training job in docker, and the Dockerfile is shown below:
FROM nvidia/cuda:10.1-devel-ubuntu18.04
# TensorFlow version is tightly coupled to CUDA and cuDNN so it should be selected carefully
ENV TENSORFLOW_VERSION=2.1.0
ENV PYTORCH_VERSION=1.5.1
ENV TORCHVISION_VERSION=0.5.0
ENV CUDNN_VERSION=7.6.5.32-1+cuda10.1
ENV NCCL_VERSION=2.4.8-1+cuda10.1
ENV MXNET_VERSION=1.6.0
# Python 3.6 is supported by Ubuntu Bionic out of the box
ARG python=3.6
ENV PYTHON_VERSION=${python}
# Set default shell to /bin/bash
SHELL ["/bin/bash", "-cu"]
RUN apt-get update && apt-get install -y --allow-downgrades --allow-change-held-packages --no-install-recommends \
build-essential \
cmake \
g++-4.8 \
git \
curl \
vim \
wget \
ca-certificates \
libcudnn7=${CUDNN_VERSION} \
libnccl2=${NCCL_VERSION} \
libnccl-dev=${NCCL_VERSION} \
libjpeg-dev \
libpng-dev \
python${PYTHON_VERSION} \
python${PYTHON_VERSION}-dev \
python${PYTHON_VERSION}-distutils \
librdmacm1 \
libibverbs1 \
ibverbs-providers
RUN ln -s /usr/bin/python${PYTHON_VERSION} /usr/bin/python
RUN curl -O https://bootstrap.pypa.io/get-pip.py && \
python get-pip.py && \
rm get-pip.py
# Install TensorFlow, Keras, PyTorch and MXNet
RUN pip install future typing
RUN pip install numpy \
tensorflow-gpu==${TENSORFLOW_VERSION} \
h5py
# keras \
# RUN pip install https://download.pytorch.org/whl/cu101/torch-${PYTORCH_VERSION}-$(python -c "import wheel.pep425tags as w; print('-'.join(w.get_supported(None)[0][:-1]))")-linux_x86_64.whl \
# https://download.pytorch.org/whl/cu101/torchvision-${TORCHVISION_VERSION}-$(python -c "import wheel.pep425tags as w; print('-'.join(w.get_supported(None)[0][:-1]))")-linux_x86_64.whl
RUN pip install mxnet-cu101==${MXNET_VERSION}
COPY backbones backbones
RUN cd backbones && pip install -r requirements.txt
# Install Open MPI
RUN mkdir /tmp/openmpi && \
cd /tmp/openmpi && \
wget https://www.open-mpi.org/software/ompi/v4.0/downloads/openmpi-4.0.0.tar.gz && \
tar zxf openmpi-4.0.0.tar.gz && \
cd openmpi-4.0.0 && \
./configure --enable-orterun-prefix-by-default && \
make -j $(nproc) all && \
make install && \
ldconfig && \
rm -rf /tmp/openmpi
# Install Horovod, temporarily using CUDA stubs
RUN ldconfig /usr/local/cuda/targets/x86_64-linux/lib/stubs && \
HOROVOD_GPU_OPERATIONS=NCCL HOROVOD_WITH_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITH_MXNET=1 \
pip install --no-cache-dir horovod && \
ldconfig
# Install OpenSSH for MPI to communicate between containers
RUN apt-get install -y --no-install-recommends openssh-client openssh-server && \
mkdir -p /var/run/sshd
# Allow OpenSSH to talk to containers without asking for confirmation
RUN cat /etc/ssh/ssh_config | grep -v StrictHostKeyChecking > /etc/ssh/ssh_config.new && \
echo " StrictHostKeyChecking no" >> /etc/ssh/ssh_config.new && \
mv /etc/ssh/ssh_config.new /etc/ssh/ssh_config
# Download examples
RUN apt-get install -y --no-install-recommends subversion && \
svn checkout https://github.com/horovod/horovod/trunk/examples && \
rm -rf /examples/.svn
WORKDIR "/examples"
My GPUs include 4 NVIDIA P100.
My training script is in /backbones, and if I run the script below, it trains smoothly:
export CUDA_VISIBLE_DEVICES=0,1,2,3
python backbones/tasks/mask_lm/trainer.py \
--data_dirs $DATA_DIR \
--bert_config_file $CONFIG_FILE \
--vocab_file $VOCAB_FILE \
--max_epochs=30 \
--gpus=1 \
--progress_bar_refresh_rate 1 \
--checkpoint_callback=True \
--val_check_interval 1.0 \
--default_root_dir=$OUTPUT_DIR \
--gradient_clip_val=5.0 \
--batch_size=32 \
--distributed_backend=ddp \
--accumulate_grad_batches 16 \
--lr 1e-4 \
--weight_decay 0.01 \
--workers 1 \
--max_length 128
If I instead run the script below, which uses horovod, the speed is about 1/3 of the ddp run:
export CUDA_VISIBLE_DEVICES=0,1,2,3
horovodrun -np 4 python backbones/tasks/mask_lm/trainer.py \
--data_dirs $DATA_DIR \
--bert_config_file $CONFIG_FILE \
--vocab_file $VOCAB_FILE \
--max_epochs=30 \
--progress_bar_refresh_rate 1 \
--checkpoint_callback=True \
--val_check_interval 1.0 \
--default_root_dir=$OUTPUT_DIR \
--gradient_clip_val=5.0 \
--batch_size=32 \
--accumulate_grad_batches 16 \
--lr 1e-4 \
--weight_decay 0.01 \
--workers 1 \
--max_length 128 \
--distributed_backend horovod
Code sample
Expected behavior
The training speed under horovod should be at least close to the speed under ddp, not 1/3 or 1/4 of it.
|
Multiple Model Multiple Loss Unsupervised Training
|
[
"question",
"won't fix"
] |
Hi, recently I have been trying to standardize one of our research models, which led me to Lightning.
I have a multi-model, multi-loss training setup, which is described in the post below:
https://discuss.pytorch.org/t/multiple-networks-multiple-losses/91130?u=pavanmv
Please let me know if this can be achieved using Lightning.
|
Pass second-order closure to all optimizers (not just LBFGS)
|
[
"feature",
"help wanted"
] |
π Feature
I could be wrong, but I noticed the following in the code of lightning module's optimizer_step
if on_tpu:
xm.optimizer_step(optimizer)
elif using_native_amp:
self.trainer.scaler.step(optimizer)
elif using_lbfgs:
optimizer.step(second_order_closure)
else:
optimizer.step()
If someone uses a custom optimizer whose step needs to call the closure (re-evaluating the loss) multiple times, it won't work.
Pitch
Since all classes that inherit from torch.optim.Optimizer have the step method accept a closure (even if they don't need it), we could just do
if on_tpu:
xm.optimizer_step(optimizer)
elif using_native_amp:
self.trainer.scaler.step(optimizer)
# elif using_lbfgs:
# optimizer.step(second_order_closure)
else:
optimizer.step(second_order_closure)
and drop the "using_lbfgs" argument?
Alternatives
The user has to override optimizer_step themselves.
|
Different loss functions for training and validation
|
[
"question",
"won't fix",
"waiting on author"
] |
I am having a strange problem where I am unable to use different loss functions for training and validation in PyTorch Lightning. So, if I use something like this, it is completely fine:
class SegmentationModel(pl.LightningModule):
def __init__(self, hparams: dict):
self.lossfn = GeneralizedDiceLoss()
def training_step(self, batch, batch_idx):
inputs, targets = self.prepare_batch(batch, output_folders=False)
logits = self.model(inputs)
loss_val = self.lossfn(logits, targets)
return {'loss': loss_val}
def validation_step(self, batch, batch_idx):
inputs, targets = self.prepare_batch(batch)
logits = self.model(inputs)
loss_val = self.lossfn(logits, targets)
return {'val_loss': loss_val}
Now, as soon as I want to use a different function (even if it is the same function!), things do not work. In particular, the validation loss does not move. So, if I do something like this in my class definition:
class SegmentationModel(pl.LightningModule):
def __init__(self, hparams: dict):
self.train_lossfn = GeneralizedDiceLoss()
self.val_lossfn = GeneralizedDiceLoss() # same loss!
def training_step(self, batch, batch_idx):
inputs, targets = self.prepare_batch(batch, output_folders=False)
logits = self.model(inputs)
loss_val = self.train_lossfn(logits, targets) # use train loss
return {'loss': loss_val}
def validation_step(self, batch, batch_idx):
inputs, targets = self.prepare_batch(batch)
logits = self.model(inputs)
loss_val = self.val_lossfn(logits, targets) # use validation loss
return {'val_loss': loss_val}
This makes the model validation completely stuck and it seems the model does not generalise at all. I have a feeling that both the loss functions get called in sequence because the output shows something like:
| Name | Type | Params
-----------------------------------------------------
0 | model | UNet2D | 2 M
1 | train_lossfn | GeneralizedDiceLoss | 0
2 | val_lossfn | GeneralizedDiceLoss | 0
Perhaps I am using this wrong and I probably need to detach the validation loss function, but is there a way to achieve this?
|
EvalResult doesn't do mean_of_gpus if using TensorMetric
|
[
"bug",
"help wanted"
] |
I want to use the new Accuracy metric. It can automatically sync in DDP, but it doesn't divide by world_size. In manual mode, I can divide by world_size by hand in validation_epoch_end, but if I use EvalResult, how do I do this? It only takes the mean across batches, not across GPUs.
This is original code:
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
avg_acc = torch.stack([x['acc'] for x in outputs]).mean()
avg_acc = avg_acc / self.trainer.world_size
|
how to run m validation batches after running every n training batches?
|
[
"help wanted",
"question"
] |
π Feature
For example, I'm running a model on a big dataset. After every 10000 training batches, I'd like to run 1000 validation batches to check the avg_training_loss and avg_val_loss.
I tried val_check_interval, but it runs over the whole validation dataset, which is too big and time consuming. How can I validate using only part of the validation data?
This is similar to #2534 with something different.
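For what it's worth, the combination below is what I think should express this, assuming limit_val_batches behaves the way I read the docs (not verified):
import pytorch_lightning as pl

trainer = pl.Trainer(
    val_check_interval=10000,   # run a validation pass every 10k training batches
    limit_val_batches=1000,     # and only use 1000 batches of the val set each time
)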
Thanks a lot!
Ben
|
Issue with running multiple models in PyTorch Lightning
|
[
"bug",
"help wanted"
] |
π Bug
I am developing a system which needs to train dozens of individual models (>50) using Lightning, each with their own TensorBoard plots and logs. My current implementation has one Trainer object per model and it seems like I'm running into an error when I go over ~90 Trainer objects. Interestingly, the error only appears when I run the .test() method, not during .fit().
As I just started with Lightning, I am not sure if having one Trainer/model is the best approach. However, I require individual plots from each model, and it seems that if I use a single trainer for multiple models the results get overridden.
To Reproduce
Steps to reproduce the behaviour:
1. Define more than 90 Trainer objects, each with their own model.
2. Run training for each model.
3. Run testing for each model.
4. See error
Traceback (most recent call last):
File "lightning/main_2.py", line 193, in <module>
main()
File "lightning/main_2.py", line 174, in main
new_trainer.test(model=new_model, test_dataloaders=te_loader)
File "\Anaconda3\envs\pysyft\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1279, in test
results = self.__test_given_model(model, test_dataloaders)
File "\Anaconda3\envs\pysyft\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1343, in __test_given_model
self.set_random_port(force=True)
File "\Anaconda3\envs\pysyft\lib\site-packages\pytorch_lightning\trainer\distrib_data_parallel.py", line 398, in set_random_port
default_port = RANDOM_PORTS[-1]
IndexError: index -1 is out of bounds for axis 0 with size 0
Code sample
Defining the Trainer objects:
for i in range(args["num_users"]):
trainer_list_0.append(Trainer(max_epochs=args["epochs"], gpus=1, default_root_dir=args["save_path"],
fast_dev_run=args["fast_dev_run"], weights_summary=None))
trainer_list_1.append(Trainer(max_epochs=args["epochs"], gpus=1, default_root_dir=args["save_path"],
fast_dev_run=args["fast_dev_run"], weights_summary=None))
trainer_list_2.append(Trainer(max_epochs=args["epochs"], gpus=1, default_root_dir=args["save_path"],
fast_dev_run=args["fast_dev_run"], weights_summary=None))
Training:
for i in range(args["num_users"]):
trainer_list_0[i].fit(model_list_0[i], train_dataloader=dataloader_list[i],
val_dataloaders=val_loader)
trainer_list_1[i].fit(model_list_1[i], train_dataloader=dataloader_list[i],
val_dataloaders=val_loader)
trainer_list_2[i].fit(model_list_2[i], train_dataloader=dataloader_list[i],
val_dataloaders=val_loader)
Testing:
for i in range(args["num_users"]):
trainer_list_0[i].test(test_dataloaders=te_loader)
trainer_list_1[i].test(test_dataloaders=te_loader)
trainer_list_2[i].test(test_dataloaders=te_loader)
Expected behaviour
I expected the code to work without crashing.
Environment
PyTorch Version (e.g., 1.0): 1.4
OS (e.g., Linux): Windows 10 Pro 2004
How you installed PyTorch (conda, pip, source): conda
Python version: 3.7.6
CUDA/cuDNN version: CUDA 10.1/cuDNN 7.0
GPU models and configuration: RTX 2060 Super
|
NumpyMetric not mapping back to GPU in multi-GPU training
|
[
"bug",
"help wanted"
] |
π Bug
I created a NumpyMetric class for an involved metric that requires numpy operations; however, the metric fails when training on multiple GPUs. After some debugging, this appears to be due to the resulting tensor not being mapped back to the appropriate GPU (or any GPU for that matter).
To Reproduce
Steps to reproduce the behavior:
Define a NumpyMetric class
class MyNumpyMetric(NumpyMetric):
def forward(self, y_hat, y):
# complicated numpy stuff (no calls to .cpu() or .cuda() or .to() or anything like that)
return metric
Instantiate it in the __init__ and validation_step of my PyTorchLightning module, e.g.,
class MyNetwork(pl.LightningModule):
def __init__(self, args):
# other init stuff
self.my_metric = MyNumpyMetric('my_metric')
def validation_step(self, batch, batch_idx):
# other validation stuff
my_metric = self.my_metric(y_hat, y) # where y_hat and y are tensors, no .cpu(), .cuda(), .to() called on either
out_dict = dict(val_my_metric=my_metric)
return out_dict
Run:
model = MyNetwork(args)
trainer = Trainer(
benchmark=True,
check_val_every_n_epoch=1,
accumulate_grad_batches=1,
min_epochs=n_epochs,
max_epochs=n_epochs,
fast_dev_run=False,
gpus=2,
distributed_backend='dp'
)
trainer.fit(model)
See:
Traceback (most recent call last):
File "./tiramisu3d.py", line 574, in <module>
trainer.fit(model)
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 997, in fit
results = self.dp_train(model)
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 270, in dp_train
result = self.run_pretrain_routine(model)
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1193, in run_pretrain_routine
eval_results = self._evaluate(model,
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 293, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 444, in evaluation_forward
output = model(*args)
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/pytorch_lightning/overrides/data_parallel.py", line 66, in forward
return self.gather(outputs, self.output_device)
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather
return gather(outputs, output_device, dim=self.dim)
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
res = gather_map(outputs)
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 61, in gather_map
return type(out)(((k, gather_map([d[k] for d in outputs]))
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 61, in <genexpr>
return type(out)(((k, gather_map([d[k] for d in outputs]))
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map
return Gather.apply(target_device, dim, *outputs)
File "/iacl/pg20/jacobr/miniconda3/envs/msseg-9.2/lib/python3.8/site-packages/torch/nn/parallel/_functions.py", line 54, in forward
assert all(map(lambda i: i.is_cuda, inputs))
Code sample
I will try to do this soon.
Expected behavior
I expected no error to occur. The documentation states: "[NumpyMetric] already handles DDP sync and input/output conversions." However, this doesn't appear to be the case in my implementation.
Environment
CUDA:
GPU:
Tesla M40 24GB
Tesla M40 24GB
available: True
version: 9.2
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.5.1
pytorch-lightning: 0.8.5
tensorboard: 2.3.0
tqdm: 4.48.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.3
version: #1 SMP Wed Sep 26 11:06:22 UTC 2018
Additional context
PyTorch and PyTorch Lightning were installed with conda (along with all of the other packages).
I was able to work around this error by adding the following .to() call to the validation step:
def validation_step(self, batch, batch_idx):
# other validation stuff
my_metric = self.my_metric(y_hat, y)
my_metric = my_metric.to(y_hat.device)
out_dict = dict(val_my_metric=my_metric)
return out_dict
I presume, however, that this is not the intended way to use the NumpyMetric class.
FWIW, I briefly looked at the code to see if I could just submit a PR with the fix (if this isn't user error), but it wasn't clear to me where the best places to look were. If you point me in the right direction, I might be able to submit a PR with the fix.
|
Metric on all test data
|
[
"question"
] |
Is there an approach to handle scenarios in which the metric calculated during test_step depends on the entire test set and not just the existing data in the batch?
|
LR finder broken 2: not sure why (and other tiny bugs)
|
[
"bug",
"help wanted",
"won't fix",
"trainer: tune",
"priority: 2"
] |
π Bug
LR finder doesn't seem to work. The model doesn't train when trainer.lr_find(model) is running (the loss metric oscillates around its initial value). When looking at the figure from lr_finder.plot(), I suspected the learning rate wasn't being changed somehow, but internally it does. So I rebuilt a custom LR finder to check if it wasn't the rest of my code. It seems lr_find is broken, but I'm not sure why, since the implementation is kinda complex for me to debug. People might get wrong results if they don't check lr_finder.plot() before using lr_find.suggestion().
To Reproduce
Steps to reproduce the behavior:
Clone this test repository
Run the corresponding script (run.bat or run.sh)
Compare plot results for LR finder and a custom LR finder (lr_finder.png and custom_lr_finder.png)
Edit: I made a new branch called lr_bug, so please refer to that code instead
Code sample
The sample code is available on this repo. It trains ResNet-s on CIFAR10 with 10% of the train/val set for 10 epochs with initial learning_rate=1e-5 and end_lr=1.
Following is a stripped-down version of it:
# -----------------------------
# 3 FIND INITIAL LEARNING RATE
# -----------------------------
# Run learning rate finder
lr_finder = trainer.lr_find(
model,
num_training=hparams.epochs*model.batches_per_epoch,
min_lr=hparams.learning_rate,
mode='exponential')
# Plot
import matplotlib.pyplot as plt
fig = lr_finder.plot(suggest=True)
fig.tight_layout()
fig.savefig('lr_finder.png', dpi=300, format='png')
# Pick point based on plot, or get suggestion
new_lr = lr_finder.suggestion()
# -------------------------------------
# 4 FIND INITIAL LEARNING RATE (CUSTOM)
# -------------------------------------
# the scheduler is already configured as a LR sweeper
trainer.fit(model)
# get metrics from a custom CSV logger callback
metrics = trainer.callbacks[1].batch_metrics
loss = metrics['loss'].values
# Same as lr_finder.suggestion(), but with a moving average filter
index, lrs, loss = lr_suggestion(metrics, model.batches_per_epoch)
custom_lr = lrs[index]
# Plot
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(metrics['lr'], metrics['loss'], ':', label='Per Batch')
ax.plot(lrs, loss, label='Filtered ("Per Epoch")')
ax.plot(lrs[index], loss[index], 'ro', label='Suggestion')
ax.set_xscale('log')
ax.set_xlabel('Learning Rate')
ax.set_ylabel('Loss')
ax.legend()
fig.tight_layout()
fig.savefig('custom_lr_finder.png', dpi=300, format='png')
The "custom" learning rate finder is supposed to replicate lr_finder, it's just the same scheduler (lr_finder._ExponentialLR) and a custom CSV logger callback which logs lr collected from inside the training loop:
def training_step(self, batch, batch_idx):
# forward pass
x, y = batch
y_hat = self.forward(x)
# calculate loss
loss_val = self.loss(y_hat, y)
# acc
acc = ...
# lr
lr = self.trainer.lr_schedulers[0]['scheduler']._last_lr[0]
tqdm_dict = {'acc': acc, 'lr': lr}
output = OrderedDict({
'loss': loss_val,
'progress_bar': tqdm_dict,
'log': tqdm_dict
})
# can also return just a scalar instead of a dict (return loss_val)
return output
def configure_optimizers(self):
optimizer = optim.SGD(self.parameters(),
self.hparams.learning_rate,
momentum=self.hparams.momentum,
weight_decay=self.hparams.weight_decay)
customlr = _ExponentialLR
# customlr = _LinearLR
clr = customlr(
optimizer,
end_lr=1,
num_iter=self.hparams.epochs*self.batches_per_epoch,
last_epoch=-1
)
scheduler = dict(scheduler=clr,
interval='step')
return [optimizer], [scheduler]
When calculating the learning rate suggestion, a moving average filter was applied (with size batches_per_epoch). This prevents amplifying the noise with np.gradient() and getting wrong results from a "lucky batch". scipy.signal.filtfilt is necessary to avoid a delay in the loss array. I removed the line with loss = loss[np.isfinite(loss)] for simplicity (and because of a potential bug when loss contains NaNs).
def lr_suggestion(metrics, filter_size=100, skip_begin=10, skip_end=1):
loss = metrics['loss'].values
lrs = metrics['lr'].values
# if loss has any NaN values, lrs.size != loss.size,
# which would result in the wrong index for lrs
# this code assumes there's no NaNs in loss
# loss = loss[np.isfinite(loss)]
# Moving average before calculating the "gradient"
from scipy import signal
coef = np.ones(filter_size) / filter_size
loss = signal.filtfilt(coef, 1, loss)
index = np.gradient(loss[skip_begin:-skip_end]).argmin() + skip_begin
return index, lrs, loss
Expected behavior
LR finder plot results (not expected):
Custom LR finder (blue line is the expected behavior):
Environment
CUDA:
- GPU:
- GeForce GTX 950M
- available: True
- version: 10.2
Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.8.5
- tensorboard: 2.2.1
- tqdm: 4.47.0
System:
- OS: Windows
- architecture:
- 64bit
- WindowsPE
- processor: Intel64 Family 6 Model 142 Stepping 9, GenuineIntel
- python: 3.7.7
- version: 10.0.19041
Additional context
PS: When debugging this problem, I noticed that LearningRateLogger only supports 'steps' and 'epoch' as an interval, not logging the lr when interval == 'batch'. The sample code has a simple fix which changes 2 lines of code (L68 and L82) to latest_stat = self._extract_lr(trainer, ['step', 'batch']) and if scheduler['interval'] in interval:, respectively.
|