title | labels | bodyText
---|---|---|
fix failing examples | [
"bug",
"help wanted",
"example",
"ci",
"strategy: dp",
"priority: 1"
] | 🐛 Bug
We have uncommented the PL examples in #4551, but two of them turned out to be failing and need to be fixed:
FAILED pl_examples/test_examples.py::test_examples_dp_image_classifier[\n--max_epochs 1 --batch_size 32 --limit_train_batches 2 --limit_val_batches 2 --gpus 2 --distributed_backend dp --precision 16 ]
FAILED pl_examples/test_examples.py::test_examples_dp_autoencoder[\n--max_epochs 1 --batch_size 32 --limit_train_batches 2 --limit_val_batches 2 --gpus 2 --distributed_backend dp --precision 16 ]
To Reproduce
uncomment two remaining examples with TODO comment
Environment
Drone Multi-GPU testing
Additional context
get back full example testing... |
PL shouldn't override PYTHONWARNINGS | [
"bug",
"help wanted",
"let's do it!",
"3rd party"
] | 🐛 Bug
This is bad:
pytorch-lightning/pytorch_lightning/trainer/trainer.py
Line 71
in
c208ac6
os.environ['PYTHONWARNINGS'] = 'ignore:semaphore_tracker:UserWarning'
What is a user to do if they need to set PYTHONWARNINGS themselves?
Appending to the existing setting is the way to go; overriding it takes control away from the user.
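For reference, a minimal sketch of that suggestion (the filter string is the one from the line quoted above; the rest is plain os.environ handling):

```python
import os

# Append the new filter to whatever the user already set, instead of overriding it.
existing = os.environ.get("PYTHONWARNINGS", "")
new_filter = "ignore:semaphore_tracker:UserWarning"
os.environ["PYTHONWARNINGS"] = f"{existing},{new_filter}" if existing else new_filter
```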
And after removing this override I still can't figure out why I'm getting these:
PYTHONWARNINGS=ignore python -c "import pytorch_lightning"
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/util.py:36: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import namedtuple, Mapping, Sequence
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/vendor/graphql-core-1.1/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.'
PYTHONWARNINGS=ignore / -W ignore doesn't get rid of those.
There must be yet another override somewhere else.
I reported the deprecations in wandb/client#1503, but clearly something in PL or its dependencies is still taking control away from the user.
Thanks. |
Move dev debugger directly in logging | [
"ci",
"refactor"
] | @edenlightning commented on Mon Nov 16 2020 |
Where are the parameters passed to? | [
"question",
"won't fix"
] | ❓ Questions and Help
What is your question?
I am curious where these parameters are passed to. In the official implementation of SimCLR, I cannot find any operation that uses the initialized parameters:
batch_size: int,
num_samples: int,
warmup_epochs: int = 10,
lr: float = 1e-4,
opt_weight_decay: float = 1e-6,
loss_temperature: float = 0.5,
**kwargs):
The reason I want to know is that when I copy the function and mimic it to write my own, I cannot find where these parameters are passed to...
Code
https://github.com/PyTorchLightning/pytorch-lightning-bolts/blob/b746be06a7ef59c52fb7911b985a1a73821d1a74/pl_bolts/models/self_supervised/simclr/simclr_module.py#L56 |
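A hedged sketch of the usual Lightning pattern that makes such __init__ arguments reachable elsewhere (this shows the general mechanism, not necessarily the exact code path of the linked bolts module): save_hyperparameters() stores them on self.hparams.

```python
import torch
import pytorch_lightning as pl

class MyModule(pl.LightningModule):
    def __init__(self, batch_size: int, lr: float = 1e-4, **kwargs):
        super().__init__()
        # Captures batch_size, lr, ... so they can be read back later via self.hparams
        self.save_hyperparameters()
        self.layer = torch.nn.Linear(8, 1)

    def configure_optimizers(self):
        # The seemingly "unused" __init__ parameter is consumed here through self.hparams
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)
```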
How to monitor more than one quantity? | [
"question"
What do I do if I want to monitor more than one quantity? |
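One common approach, sketched below with hedging (whether several ModelCheckpoint callbacks can be combined may depend on the PL version; the metric names are illustrative): log every quantity with self.log() and attach one callback per quantity you want to act on.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping

# One callback per monitored quantity (assumes "val_loss" and "val_acc" are logged via self.log()).
ckpt_loss = ModelCheckpoint(monitor="val_loss", mode="min")
ckpt_acc = ModelCheckpoint(monitor="val_acc", mode="max")
early_stop = EarlyStopping(monitor="val_loss", patience=5)

trainer = pl.Trainer(callbacks=[ckpt_loss, ckpt_acc, early_stop])
```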
Accuracy calculation issue related to "+=" | [] | pytorch-lightning/pytorch_lightning/metrics/classification/accuracy.py
Line 98
in
c208ac6
self.correct += torch.sum(preds == target)
I encountered RuntimeError: Trying to pass too many CPU scalars to CUDA kernel! error when using the accuracy metric.
This might be related to:
https://discuss.pytorch.org/t/trying-to-pass-too-many-cpu-scalars-to-cuda-kernel/87757/5
pytorch/pytorch#40986 |
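A hedged illustration of the failure mode being discussed, assuming the error comes from an in-place add that mixes a CPU-resident metric state with a CUDA result (the tensor names are illustrative, not the metric's internals):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
correct = torch.tensor(0)                                   # metric state left on CPU
preds = torch.tensor([1, 0, 1], device=device)              # results computed on the GPU
target = torch.tensor([1, 1, 1], device=device)

# correct += torch.sum(preds == target)                     # this in-place mix can raise the reported RuntimeError
correct = correct.to(device) + torch.sum(preds == target)   # same device + out-of-place add works
```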
GAN Domain Template: Typo in the description of Adam's beta 2 | [
"docs"
] | Hi,
I think Adam's beta 2 parameter was mistakenly referred to as the first order momentum of the gradient, whereas I think it should be the second order momentum?
In the current domain template:
parser.add_argument("--b2", type=float, default=0.999, help="Adam: decay of first order momentum of gradient")
Whereas it should be:
parser.add_argument("--b2", type=float, default=0.999, help="Adam: decay of second order momentum of gradient")
This has no effect on the correct working of the example however.
All the best,
Ludger |
Give users more control on paths used by ModelCheckpoint callback | [
"duplicate"
] | When initializing a ModelCheckpoint callback like this:
model_checkpoint = ModelCheckpoint(monitor="val_loss", filename="{epoch}-{val_loss:.3f}")
we get checkpoint files with names that look like this:
epoch=1-val_loss=0.436.ckpt
This looks really nice but the use of this name=value pattern is not necessarily what the end user would want.
In particular, tools like hydra will need the end user to escape the = sign:
$ python train.py trainer.resume_from_checkpoint="epoch\=1-val_loss\=0.436.ckpt"
which is not very practical.
Here comes my question: would it be possible to allow for more flexibility on the naming scheme of model checkpoints in ModelCheckpoint callback?
The culprit is this line:
pytorch-lightning/pytorch_lightning/callbacks/model_checkpoint.py
Line 377
in
c208ac6
filename = filename.replace(group, name + "={" + name) |
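A small, hypothetical illustration of the flexibility being requested: let the user choose the separator instead of the hard-coded name=value pattern (the helper below is not part of the PL API):

```python
def format_ckpt_name(metrics: dict, sep: str = "_") -> str:
    """Hypothetical helper: join metric names and values with a user-chosen separator."""
    parts = []
    for name, value in metrics.items():
        value = f"{value:.3f}" if isinstance(value, float) else value
        parts.append(f"{name}{sep}{value}")
    return "-".join(parts)

print(format_ckpt_name({"epoch": 1, "val_loss": 0.436}))  # epoch_1-val_loss_0.436 -- no "=" to escape in hydra
```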
Fix Lightning examples | [
"duplicate",
"example"
] | DP/DDP is failing for all https://github.com/PyTorchLightning/pytorch-lightning/tree/master/pl_examples |
Gather function in Lightning module with gradients | [
"feature",
"help wanted"
] | 🚀 Feature
Add a gather function with gradients in LightningModule which can be implemented with the help of:
https://github.com/PyTorchLightning/pytorch-lightning-bolts/blob/master/pl_bolts/models/self_supervised/simclr/simclr_module.py#L25
Motivation
The default all_gather in pytorch breaks the gradient graph as it does not propagate any gradient backwards.
Pitch
Allow users to use self.gather in LightningModule.
x = self.gather(x)
As suggested by @williamFalcon, this can be implemented like:
# LightningModule
def gather(self, x):
    return self.trainer.gather(x)

# Trainer
def gather(self, x):
    return self.accelerator.gather(x)
All accelerators except CPU should implement this.
The gradients can be synced with:
x = self.gather(x, sync_grads=True)
@edenlightning @tchaton @teddykoker |
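For reference, a minimal sketch of an all_gather that keeps gradients, following the SyncFunction pattern linked above (it assumes torch.distributed is already initialized and is not the proposed self.gather API itself):

```python
import torch
import torch.distributed as dist

class AllGatherWithGrad(torch.autograd.Function):
    """Differentiable all_gather: concatenates tensors from all ranks and routes gradients back."""

    @staticmethod
    def forward(ctx, tensor):
        ctx.batch_size = tensor.shape[0]
        gathered = [torch.zeros_like(tensor) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, tensor)
        return torch.cat(gathered, dim=0)

    @staticmethod
    def backward(ctx, grad_output):
        grad_input = grad_output.clone()
        dist.all_reduce(grad_input, op=dist.ReduceOp.SUM)
        rank = dist.get_rank()
        return grad_input[rank * ctx.batch_size:(rank + 1) * ctx.batch_size]

# usage inside a step (illustrative): x = AllGatherWithGrad.apply(x)
```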
I'm new to this | [] | ❓ Questions and Help
Before asking:
Try to find answers to your questions in the Lightning Forum!
Search for similar issues.
Search the docs.
What is your question?
Code
What have you tried?
What's your environment?
OS: [e.g. iOS, Linux, Win]
Packaging [e.g. pip, conda]
Version [e.g. 0.5.2.1] |
wrong test acc because redundant data in ddp mode | [
"duplicate",
"feature",
"help wanted"
If I have 499 videos in the test set, in DDP mode it will load 512 videos for testing, presumably by duplicating videos to match the batch size. But this causes a wrong test accuracy. Right now I need to save each video's predictions and calculate the accuracy on my own. Is there any way to solve this problem? |
Python malloc error with iPython | [
"bug",
"help wanted",
"3rd party"
] | 🐛 Bug
When in iPython, doing:
from pytorch_lightning import Trainer
or
import pytorch_lightning
gives:
In [1]: from pytorch_lightning import Trainer
python(82101,0x10e7f1dc0) malloc: can't allocate region
:*** mach_vm_map(size=18446744071734263808, flags: 100) failed (error code=3)
python(82101,0x10e7f1dc0) malloc: *** set a breakpoint in malloc_error_break to debug
init_dgelsd failed init
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-9a5a68534ecc> in <module>
----> 1 from pytorch_lightning import Trainer
~/opt/anaconda3/lib/python3.7/site-packages/pytorch_lightning/__init__.py in <module>
54 # We are not importing the rest of the lightning during the build process, as it may not be compiled yet
55 else:
---> 56 from pytorch_lightning import metrics
57 from pytorch_lightning.callbacks import Callback
58 from pytorch_lightning.core import LightningDataModule, LightningModule
~/opt/anaconda3/lib/python3.7/site-packages/pytorch_lightning/metrics/__init__.py in <module>
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from pytorch_lightning.metrics.metric import Metric
15
16 from pytorch_lightning.metrics.classification import (
~/opt/anaconda3/lib/python3.7/site-packages/pytorch_lightning/metrics/metric.py in <module>
21
22 import os
---> 23 import torch
24 from torch import nn
25
~/opt/anaconda3/lib/python3.7/site-packages/torch/__init__.py in <module>
188 if USE_GLOBAL_DEPS:
189 _load_global_deps()
--> 190 from torch._C import *
191
192 # Appease the type checker; ordinarily this binding is inserted by the
ImportError: numpy.core.multiarray failed to import
However, when just using Python in the terminal, I do not get the same result and the import happens as it should.
This is a problem because other tools (e.g. the Vimspector debugging tool) also cannot import Trainer and hit the exact same error.
To Reproduce
Install pytorch-lightning, open iPython, import Trainer from pytorch_lightning.
Expected behavior
The importing should happen normally, without errors.
Environment
* CUDA:
- GPU:
- available: False
- version: None
* Packages:
- numpy: 1.19.2
- pyTorch_debug: True
- pyTorch_version: 1.7.0
- pytorch-lightning: 1.0.7
- tqdm: 4.51.0
* System:
- OS: Darwin
- architecture:
- 64bit
-
- processor: i386
- python: 3.7.9
- version: Darwin Kernel Version 19.6.0: Thu Oct 29 22:56:45 PDT 2020; root:xnu-6153.141.2.2~1/RELEASE_X86_64 |
Crawler bug | [] | 🐛 Bug
Please reproduce using the BoringModel and post here
To Reproduce
Expected behavior
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
PyTorch Version (e.g., 1.0):
OS (e.g., Linux):
How you installed PyTorch (conda, pip, source):
Build command you used (if compiling from source):
Python version:
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information:
Additional context |
Add model sharding/gradient checkpointing from FairScale | [] | |
FairScale integration | [
"feature"
] | |
Getting an error when trying to use torch.load on model trained using DDP | [
"question",
"checkpointing"
] | ❓ Questions and Help
What is your question?
I'm trying to save my model with torch.save() with the trainer and logger detached and then load it with torch.load(). If I train the model using DDP on 2 P4 gpus, I get the following error when I try to load it:
RuntimeError: [enforce fail at inline_container.cc:222] . file not found: archive/data/94881266361728
Reverting back to pytorch-lightning version 0.7.5 (big version gap.. I know) seems to fix the issue as does training on a single GPU. I'm wondering if there's something that got attached to the model somewhere in the version gap that I need to detach before pickling.
Code
I can't share code, unfortunately.
What have you tried?
I've tried running on a single GPU or reverting lightning versions.
What's your environment?
OS: [e.g. iOS, Linux, Win] Ubuntu 20.04
Packaging [e.g. pip, conda] conda
Version [e.g. 0.5.2.1] 1.0.7 (w/ pytorch 1.7 and cuda 11) |
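A hedged sketch of a common workaround for this kind of pickling error: save and load the state_dict instead of the whole module object (nn.Linear stands in for the trained LightningModule; the path is illustrative):

```python
import torch
from torch import nn

net = nn.Linear(4, 2)                                  # stand-in for the trained model
torch.save(net.state_dict(), "model_weights.pt")       # save only the weights

net2 = nn.Linear(4, 2)                                 # rebuild the architecture, then load the weights
net2.load_state_dict(torch.load("model_weights.pt", map_location="cpu"))
```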
[Metrics] Add Image Gradient | [
"feature",
"help wanted"
] | 🚀 Feature
Implement Image-Gradients for PT Lightning.
Motivation
Recently I was working on a vanilla PT implementation of the DenseDepth paper. They happen to use a DepthLoss as one of their loss functions. Incidentally, DepthLoss is based on calculating image gradients between two images, a native implementation of which doesn't exist in PT! Including image gradients as a metric would make life easier for devs working on similar papers/projects. Also, metrics like SSIM and image gradients are often used in conjunction (as in this paper).
Pitch
Make Image-gradients available as a metric out-of-box in PT Lightning
Alternatives
Alternative solutions often include custom implementation of these metrics, thus increasing the development overhead. |
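A hedged sketch of what such a metric could compute: simple finite-difference gradients along the height and width dimensions, zero-padded back to the input size (mirroring the behaviour of similar implementations in other frameworks):

```python
import torch
import torch.nn.functional as F

def image_gradients(img: torch.Tensor):
    """Finite-difference gradients of a [N, C, H, W] batch; returns (dy, dx) with the input shape."""
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    dy = F.pad(dy, (0, 0, 0, 1))   # pad one row at the bottom
    dx = F.pad(dx, (0, 1, 0, 0))   # pad one column on the right
    return dy, dx

dy, dx = image_gradients(torch.rand(2, 3, 32, 32))
```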
Close issues where author did not respond | [
"feature",
"help wanted",
"won't fix"
] | Can we automate with GH actions?
https://github.com/probot/no-response
@Borda what do you think about this? |
pl not respecting total_val_batches | [
"bug",
"help wanted"
] | It took me a while to hunt it down,
If I provide:
total_train_batches = 10
total_val_batches = 3
val_check_batch = 1
how do we get to total_batches 40 in tqdm:
Eventually I found the culprit to be this:
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/callbacks/progress.py#L326-L330
I have a hard time following this logic.
We should have a total of 13 items here, not 40.
Obviously the real numbers are much bigger - so we end with an incredibly huge numbers.
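A rough sketch of how the linked progress-bar code arrives at 40 with these numbers (the variable names are illustrative, not the actual attribute names):

```python
total_train_batches = 10
total_val_batches = 3
val_check_batch = 1                                             # validation runs after every train batch

val_checks_per_epoch = total_train_batches // val_check_batch   # 10 validation runs per epoch
total = total_train_batches + val_checks_per_epoch * total_val_batches
print(total)   # 40, versus the 13 (= 10 + 3) the reporter expects
```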
Thanks. |
Edit Profile · User Settings · GitLab | [] | I shared "Edit Profile · User Settings · GitLab", come take a look! @Xiaomi Browser | https://gitlab.com/-/profile |
Add useful links to our metrics docs | [
"docs"
] | Add wikipedia/equations links to metrics docs, or any other resources on how metrics are calculated. |
How to 101 | [
"question"
] | ❓ Questions and Help
Before asking:
Try to find answers to your questions in the Lightning Forum!
Search for similar issues.
Search the docs.
What is your question?
Code
What have you tried?
What's your environment?
OS: [e.g. iOS, Linux, Win]
Packaging [e.g. pip, conda]
Version [e.g. 0.5.2.1] |
validation_epoch_end with DDP | [
"question",
"won't fix",
"waiting on author"
] | What is your question?
I am trying to implement a metric which needs access to whole data. So instead of updating the metric in *_step() methods, I am trying to collect the outputs in the *_epoch_end() methods. However, the outputs contain only the output of the partition of the data each device gets. Basically if there are n devices, then each device is getting 1/n of the total outputs.
Stackoverflow Post
What's your environment?
OS: ubuntu
Packaging: conda
Version [1.0.4
Pytorch: 1.6.0 |
Potential bug in metric when updated with a slice of tensor in DDP | [
"bug",
"help wanted",
"distributed",
"priority: 1"
] | 🐛 Bug
When a metric is updated with a slice of a tensor as one of its inputs (either pred or target) on multiple GPUs with DDP, it throws an error:
RuntimeError: Tensors must be non-overlapping and dense
Once the slice of the tensor is cloned and detached, it works.
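A hedged sketch of that workaround (object names are illustrative); cloning is what the report confirms, and calling .contiguous() on the slice should address the same underlying storage issue:

```python
import torch
from pytorch_lightning.metrics import Accuracy

metric = Accuracy()
preds = torch.randint(0, 2, (8, 3))
target = torch.randint(0, 2, (8,))

# metric.update(preds[:, 0], target)          # a non-contiguous slice can fail during the DDP sync
metric.update(preds[:, 0].clone(), target)    # workaround reported above
```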
Please reproduce using the BoringModel and post here
The issue can be reproduced below:
https://colab.research.google.com/drive/1yuqwb8BHhmDATp2IUNRNjSXOg3kV73Wp?usp=sharing
To Reproduce
Expected behavior
the metric update should work with a slice of tensor
Environment
CUDA:
- GPU:
- Tesla P100-PCIE-16GB
- available: True
- version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: True
pyTorch_version: 1.7.0+cu101
pytorch-lightning: 1.0.7
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context |
Wrong AUCROC when running training_step in DDP mode | [
"bug",
"help wanted",
"working as intended"
] | 🐛 Bug
I run a binary classification model and compute the AUC as a performance indicator. The first time I ran the code on a single GPU it worked well, but the second time I tried using 4 GPUs with the DDP backend, the AUC became very weird; it seemed to just sum the AUCs of the 4 GPUs. I use pl.metrics.AUROC() to compute the AUC and my PL version is 0.9.0.
Please reproduce using the BoringModel and post here
Here is an example of my code
https://colab.research.google.com/drive/1d-3JTypoQdbPWQFFW_vqkBxDprqIVnFD?usp=sharing
I define a random dataset
class RandomDataset(Dataset):
def __init__(self):
self.len = 8
self.data = np.array([1,5,2,6,3,7,4,8],dtype=np.float32).reshape([-1,1])
self.label = np.array([1,1,0,0,0,1,0,0], dtype=np.float32)
def __getitem__(self, index):
return self.data[index], self.label[index]
def __len__(self):
return self.len
and use seed_everything(42)
In the example the first time I use a single GPU and set batch size to 8, epoch to 1, and got auc 0.5
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
CUDA_VISIBLE_DEVICES: [0]
/opt/conda/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: Could not log computational graph since the `model.example_input_array` attribute is not set or `input_array` was not given
warnings.warn(*args, **kwargs)
| Name | Type | Params
--------------------------------------------
0 | model | Linear | 4
1 | auc | AUROC | 0
2 | loss | BCEWithLogitsLoss | 0
/opt/conda/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 64 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
Validation sanity check: 0it [00:00, ?it/s]
Val part rank cuda:0 batch: tensor([1., 5., 2., 6., 3., 7., 4., 8.], device='cuda:0') batch_idx: 0
loss tensor(24.4544, device='cuda:0')
auc tensor(0.5000, device='cuda:0')
/opt/conda/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 64 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
Epoch 0: 0%| | 0/2 [00:00<?, ?it/s]
Rank cuda:0 batch: tensor([1., 5., 2., 6., 3., 7., 4., 8.], device='cuda:0') batch_idx: 0
loss tensor(24.4544, device='cuda:0',
grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
auc tensor(0.5000, device='cuda:0')
Epoch 0: 50%|█████████████████████████                          | 1/2 [00:00<00:00, 130.40it/s, loss=24.454, v_num=20, train_loss=24.5, auc=0.5]
Val part rank cuda:0 batch: tensor([1., 5., 2., 6., 3., 7., 4., 8.], device='cuda:0') batch_idx: 0
loss tensor(24.4479, device='cuda:0')
auc tensor(0.5000, device='cuda:0')
Epoch 0: 100%|████████████████Saving latest checkpoint..████████████████████████████| 2/2 [00:00<00:00, 85.70it/s, loss=24.454, v_num=20, train_loss=24.5, auc=0.5, vla_loss=24.4, val_auc=0.5]
Epoch 0: 100%|██████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 82.69it/s, loss=24.454, v_num=20, train_loss=24.5, auc=0.5, vla_loss=24.4, val_auc=0.5]
end
then I use 2 GPUs with batch size 4, epoch 1, and got auc 1.33
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
CUDA_VISIBLE_DEVICES: [1,2]
2 GPU with DDP
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/2
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2
----------------------------------------------------------------------------------------------------
distributed_backend=ddp
All DDP processes registered. Starting ddp with 2 processes
----------------------------------------------------------------------------------------------------
/opt/conda/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: Could not log computational graph since the `model.example_input_array` attribute is not set or `input_array` was not given
warnings.warn(*args, **kwargs)
| Name | Type | Params
--------------------------------------------
0 | model | Linear | 4
1 | auc | AUROC | 0
2 | loss | BCEWithLogitsLoss | 0
/opt/conda/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 64 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
/opt/conda/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 64 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
Epoch 0: 0%| | 0/2 [00:00<?, ?it/s]
Rank cuda:1 batch: tensor([1., 6., 7., 4.], device='cuda:1') batch_idx: 0
Rank cuda:1 batch: tensor([3., 8., 2., 5.], device='cuda:1') batch_idx: 0
loss tensor(28.9794, device='cuda:1',
grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
auc tensor(1.3333, device='cuda:1')
loss tensor(19.9294, device='cuda:1',
grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
auc tensor(1.3333, device='cuda:1')
Epoch 0: 50%|██████████████████████████████████████████████████                    | 1/2 [00:00<00:00, 81.04it/s, loss=28.979, v_num=21, train_loss=29, auc=1.33]
Val part rank cuda:1 batch: tensor([5., 6., 7., 8.], device='cuda:1') batch_idx: 0
Val part rank cuda:1 batch: tensor([1., 2., 3., 4.], device='cuda:1') batch_idx: 0
loss tensor(12.5368, device='cuda:1')
auc tensor(0.6667, device='cuda:1')
loss tensor(36.3589, device='cuda:1')
auc tensor(0.6667, device='cuda:1')
Saving latest checkpoint..
end
Epoch 0: 100%|████████████████Saving latest checkpoint..███████████████████████████| 2/2 [00:00<00:00, 72.50it/s, loss=28.979, v_num=21, train_loss=29, auc=1.33, vla_loss=12.5, val_auc=0.667]
Epoch 0: 100%|█████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 51.66it/s, loss=28.979, v_num=21, train_loss=29, auc=1.33, vla_loss=12.5, val_auc=0.667]
end
Expected behavior
Compute the real auc
Environment
PyTorch Version: 1.6.0
OS: Ubuntu 18.04
How you installed PyTorch: pip
Python version: 3.6
CUDA/cuDNN version: 10.2
GPU models and configuration: 2080Ti |
`lr_finder` fails when model defines some layers in `setup`. | [
"bug",
"help wanted",
"won't fix",
"trainer: tune",
"priority: 1"
] | 🐛 Bug
In lr_find, trainer.save_checkpoint(...) is called before the trainer has a chance to call model.setup(stage='fit') and datamodule.setup(stage='fit').
Therefore, weight restoration will fail later if extra layers are defined in setup.
Please reproduce using the BoringModel and post here
https://colab.research.google.com/gist/hbredin/7a2defe2a2b8760fb903429113092951/the-boringmodel.ipynb
Expected behavior
Both model.setup(stage='fit') and datamodule.setup(stage='fit') should be called before saving checkpoint.
Environment
* CUDA:
- GPU:
- Tesla P100-PCIE-16GB
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.5
- pyTorch_debug: True
- pyTorch_version: 1.7.0+cu101
- pytorch-lightning: 1.0.6
- tqdm: 4.41.1
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.6.9
- version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context
Link to discussion in Slack: https://pytorch-lightning.slack.com/archives/CQXV8BRH9/p1605557610495400 |
Lower parity tests | [
"feature",
"help wanted",
"good first issue",
"ci"
] | 🐛 Bug
We have observed that the trainer has reached the initial threshold for parity.
Note that we are still almost as fast, but we may find some weak spots.
Please reproduce using
see GPU tests in the benchmark folder
Additional context
the current version 1.0.7 has an avg diff of 0.84s |
Precision/Recall/Fbeta error out for floating tensor input of preds but OK for long tensor of preds | [
"help wanted",
"won't fix",
"working as intended",
"priority: 1"
] | 🐛 Bug
The current implementation of Precision/Recall/FBeta uses the _input_format function to format input shapes and types. There appears to be a bug in how this function deals with essentially the same preds input given in different data types (long vs float).
Please reproduce using the BoringModel and post here
To Reproduce
Use the following notebook to replicate:
https://colab.research.google.com/drive/1GOzy9urgRmAud-Sadtva3c_GwO1nk5BD?usp=sharing
Expected behavior
The same output should be expected regardless of the input data types (long vs float) for preds
Environment
CUDA:
GPU:
available: False
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: True
pyTorch_version: 1.7.0+cu101
pytorch-lightning: 1.0.7
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 |
transform module with explicit transformations coupled with torch.transforms | [
"feature",
"help wanted"
] | 🚀 Feature
I'm thinking of filing a PR for a transform module consisting of in-house transformations that are not present in torch.transforms.
Motivation
Gridmask, mosaic, mixup, cutmix, etc
The above augmentations are missing from most frameworks, yet they are de facto standard for Kaggle and general computer vision solutions.
Reference
My library, which contains all of these transforms -> github.com/kartik4949/TensorPipe |
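A minimal, hedged sketch of one of the requested augmentations (mixup), to illustrate what such a transform module could contain:

```python
import torch

def mixup(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.4):
    """Blend a batch with a shuffled copy of itself; returns mixed inputs, both label sets, and the mix factor."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(x.size(0))
    mixed_x = lam * x + (1.0 - lam) * x[index]
    return mixed_x, y, y[index], lam

mixed_x, y_a, y_b, lam = mixup(torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,)))
```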
Apply import formatter `isort` | [
"feature",
"help wanted",
"good first issue",
"ci",
"refactor",
"priority: 2"
] | 🔧 Refactoring
As isort has been added to ci in #4242, we now need to apply the formatter step by step i.e. a submodule per PR (recommended in #4242 (comment) by @Borda)
Steps
For each PR:
choose one submodule from below list and apply isort to it
remove the corresponding line in pyproject.toml
make sure isort --check passes
Progress
The rest of the submodules should be done by @arnaudgelas.
The followings are PL submodules.
pytorch_lightning/*.py >> #4717
pytorch_lightning/accelerators/
pytorch_lightning/callbacks/
pytorch_lightning/cluster_environments/
pytorch_lightning/core/
pytorch_lightning/distributed/
pytorch_lightning/loggers/
pytorch_lightning/metrics/
pytorch_lightning/overrides/
pytorch_lightning/plugins/
pytorch_lightning/profiler/
pytorch_lightning/trainer/
pytorch_lightning/tuner/
pytorch_lightning/utilities/
The followings are tests.
tests/*.py >> #4717
tests/backends/ >> #5430
tests/base/ >> #5429
tests/callbacks/ >> #5428
tests/checkpointing/ >> #5427
tests/core/ >> #5426
tests/loggers/ >> #5425
tests/plugins/ >> #5422
tests/metrics/ >> #5424
tests/models/ >> #5423
tests/trainer/ >> #5421
tests/tuner/
tests/utilities/ >> #5420
The followings are other python files.
*.py (#4242)
docs/ (#4242)
pl_examples/ (#5291) |
Metrics are not reset when using self.log() | [
"help wanted",
"ci"
] | 🐛 Bug
Metrics are not reset when using self.log() unless the user explicitly calls self.metric.compute().
See the MSE metric example in the colab notebook linked below.
Printing internal states on epoch end shows that the metric states are not reset. Calling self.metric.compute() explicitly resolves the issue (uncomment the line in epoch_end in the linked colab).
Note: I didn't test whether the reset occurs when logging on_step.
https://colab.research.google.com/drive/10cTIhVkxdgKZ23WwAHiCSAlU7K1a3NRr?usp=sharing
Expected behavior
According to the documentation, metric should be reset even when using self.log() only. |
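A hedged sketch of the workaround described above, as a method-level fragment to add to the reporter's LightningModule:

```python
def training_epoch_end(self, outputs):
    # Explicitly compute the epoch metric; per the report above, this also resets the internal state.
    epoch_mse = self.mse.compute()
    self.log("mse_epoch", epoch_mse)
```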
Lightning Model module , get_model_size method | [
"feature",
"help wanted",
"won't fix",
"discussion",
"design"
] | 🚀 Feature
A get_model_size method in the model module which returns the model size in megabytes based on precision.
Motivation
Just thinking out loud..
Additional context
We can add this to the model summary too
# ------------
# model
# ------------
model = LitClassifier(args.hidden_dim, args.learning_rate)
>> print(model.size)
>> 104.14281 Mb with fp32. |
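A hedged sketch of what such a method could compute: parameter count times bytes per element for the given precision, reported in megabytes (the helper name is illustrative):

```python
import torch

def model_size_mb(model: torch.nn.Module, precision: int = 32) -> float:
    """Hypothetical helper: total parameter bytes at the given precision, in megabytes."""
    n_params = sum(p.numel() for p in model.parameters())
    return n_params * (precision / 8) / 1e6

print(model_size_mb(torch.nn.Linear(1000, 1000)))   # ~4.0 MB at fp32
```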
Repeated .fit() calls ignore max_steps iteration bound | [
"bug",
"help wanted",
"good first issue",
"priority: 1"
] | 🐛 Bug
Hello!
While trying to convert my code to PL (I'm starting to become a big fan!) I came across some unexpected behavior: In an iteration-based training setup repeated calls of trainer.fit() result in ignoring the iteration bound set by the max_steps argument. The trainer will finish the entire epoch, even though in my opinion it shouldn't (forgive me if I missed something obvious, which is easily possible since I'm new to PL).
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1gKLNoYXjW7s3ifSJJ00SVZi4b08GDy5F?usp=sharing
To Reproduce
In the BoringModel, I only changed the test cell to something like this:
def test_x(tmpdir):
# init model
model = BoringModel()
iterations = 80
print('fit() should only do {} iterations'.format(iterations))
# Initialize a trainer
trainer = pl.Trainer(
max_steps=iterations,
progress_bar_refresh_rate=20
)
# Train the model ⚡
trainer.fit(model, train, val) # here the trainer correctly does 80 iterations
print('global_step: ', trainer.global_step)
print('first fit() done')
trainer.fit(model, train, val) # in this fit(), the trainer will complete the epoch
# trainer.test(test_dataloaders=test)
Expected behavior
I expect repeated trainer.fit() calls to result in either
No action, because trainer.global_step == trainer.max_steps
or
Again fitting of the model for another max_steps iterations
I think most people would expect the former.
Environment
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: True
pyTorch_version: 1.7.0+cu101
pytorch-lightning: 1.0.7
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context
The same problem arises if a fully trained model is loaded, and then trainer.fit() is called. This is especially troubling when a trainer.test() follows. |
Learning Rate schedulers do not follow the optimizer frequencies. | [
"bug",
"help wanted",
"good first issue",
"priority: 1"
] | 🐛 Bug
I want to use two optimizers sequentially with different LR schedulers. For the first one, I want to use OneCycleLR and for the second optimizer I do not want to use a LR scheduler. The problem with OneCycleLR is that you need to specify the exact number of steps. I have the setup like this
def configure_optimizers(self):
optimizer_one = torch.optim.SGD(self.model.parameters(), lr=self.hparams.lr, momentum=0.9, weight_decay=5e-4)
optimizer_two = torch.optim.SGD(self.model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
return [
{
'optimizer': optimizer_one,
'lr_scheduler': {
'scheduler': torch.optim.lr_scheduler.OneCycleLR(optimizer, 0.1, epochs=self.hparams.first_opt_epochs, steps_per_epoch=math.ceil(45000/batch_size)),
'interval': 'step',
},
'frequency': self.hparams.first_opt_epochs * math.ceil(45000/batch_size)
},
{
'optimizer': optimizer_two,
'frequency': (self.trainer.max_epochs - self.hparams.first_opt_epochs) * math.ceil(45000/batch_size)
}]
The problem with this is that after the first_opt_epochs, I get a ValueError: Tried to step 28142 times. The specified number of total steps is 28140. This is because update_learning_rates is getting called from self.update_train_loop_lr_schedulers even after the frequency of the first optimizer has passed
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1A0p04A05ltiFkVEYPE-267LDopE3m8ju?usp=sharing
Expected behavior
The LR scheduler OneCycleLR of the first optimizer should only be updated when using the first optimizer.
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
CUDA:
GPU:
Tesla P100-PCIE-16GB
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: True
pyTorch_version: 1.7.0+cu101
pytorch-lightning: 1.0.7
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020 |
How to get default checkpoint dir | [
"question"
In test_epoch_end, I want to save some files to the default checkpoint dir. How can I access this path? |
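A hedged, method-level sketch of where these paths can typically be read from inside a LightningModule (attribute names per PL ~1.0; treat them as illustrative rather than a guaranteed API):

```python
def test_epoch_end(self, outputs):
    ckpt_dir = self.trainer.checkpoint_callback.dirpath   # directory used by the ModelCheckpoint callback
    root_dir = self.trainer.default_root_dir              # Trainer's default root directory
    print(ckpt_dir, root_dir)
```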
"MisconfigurationException: PyTorch XLA not installed." Even though PyTorch XLA is installed | [
"bug",
"help wanted",
"accelerator: tpu",
"waiting on author",
"3rd party"
] | 🐛 Bug
I'm using exactly the same command to install pytorch xla as shown in the doc:
curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
python pytorch-xla-env-setup.py --version nightly --apt-packages libomp5 libopenblas-dev
However, I still got this error:
GPU available: False, used: False
TPU available: True, using: 1 TPU cores
training on 1 TPU cores
---------------------------------------------------------------------------
MisconfigurationException Traceback (most recent call last)
<ipython-input-11-1f9f6fbe4f6c> in <module>()
----> 1 test_x(tmpdir)
2 frames
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_accelerator.py in setup(self, model)
61 # TODO: Move this check to Trainer __init__ or device parser
62 if not TPU_AVAILABLE:
---> 63 raise MisconfigurationException('PyTorch XLA not installed.')
64
65 # see: https://discuss.pytorch.org/t/segfault-with-multiprocessing-queue/81292/2
MisconfigurationException: PyTorch XLA not installed.
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1dz4NY233VoBv9SIb22JnrWiqYBHFpxCm?usp=sharing
To Reproduce
run through the colab notebook in the link above
Expected behavior
no error
Environment
Note: Bugs with code are solved faster ! Colab Notebook should be made public !
IDE: Please, use our python bug_report_model.py template.
Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
CUDA:
GPU:
available: False
version: None
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.8.0a0+4ed7f36
pytorch-lightning: 1.0.7
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context |
How to use pytorch-lightning to train a detectron2 model? | [
"question"
] | The trainers in pytorch-lightning are really cool, but how can I use them to train a detectron2 model? Is there any example that I can follow? |
Questions about GPUStatsMonitor callback and GPU utilization | [
"question",
"won't fix"
] | I'm logging the GPU activity using the default values. I'm getting the following plots on my tensorboard:
A couple of questions:
Is the utilization really zero at specific time points (the jigsaw pattern) or is it just plotting zeros in-between epochs?
If my utilization is really zero at specific points, how can I make sure this doesn't happen? What about memory utilization?
As you can see in the first two graphs, the memory allocation is above 75%, but the memory utilization shows 16-18%. Why don't these two numbers correspond?
Is there a similar callback for CPU? E.g., if I want to check CPU utilization in-between training cycles (e.g. to check data loader preparation)
Any recommendations on how to set num_workers on an 8-GPU system?
Thanks! |
Use custom batch dataloader with DDP | [
"question",
"won't fix"
] | Hello,
In our project, we use a custom data loader like so:
class BatchDataLoader(torch.utils.data.dataloader.DataLoader):
def __init__(self, ds, batch_size, shuffle, num_workers=0):
inner_sampler = RandomSampler(ds) if shuffle else SequentialSampler(ds)
sampler = BatchSampler(inner_sampler, batch_size, False)
super().__init__(ds, batch_size=None, sampler=sampler, num_workers=num_workers)
We use this custom dataloader so that the __getitem__ method is directly called with the list of batch indices and returns a full batch instead of single items. We need this because for some technical reasons, the __getitem__ method is way way faster when accessing a batch of data items rather than single ones. This has been working perfectly fine so far until we started looking into using ddp for training our model.
By default Lightning setup its own sampler for distributed training. We can override this behaviour with replace_sampler_ddp=False but then our custom sampler would be used, whereas it is not made for distributing.
If we wish to make our own custom distributed version of it, then how can we retrieve the distributed context from Lightning? Is there a way to know if we are in a distributed context, and if so, how many nodes, etc.? Can this be done from a Lightning data module, as this is where we instantiate the data loaders? I tried using torch.distributed.is_available() as a first step. While this passes on Windows, this call fails with AssertionError: Default process group is not initialized on Linux if we are not in a distributed context (for instance training on a single GPU). For info, I'm using Lightning 1.0.5 with PyTorch 1.5.1. We wish to set up our sampler automatically, without the user having to specify extra parameters.
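A hedged sketch of the guard this question is circling around: checking both is_available() and is_initialized() avoids the "Default process group is not initialized" assertion when no DDP process group exists.

```python
import torch.distributed as dist

def distributed_info():
    """Return (world_size, rank) if a process group is set up, else single-process defaults."""
    if dist.is_available() and dist.is_initialized():
        return dist.get_world_size(), dist.get_rank()
    return 1, 0

world_size, rank = distributed_info()
```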
Or do you see a way to achieve the same behaviour (getting batches of indices in the __getitem__ method of the dataset) while still benefitting from the default DistributedSampler that Lightning setup? I thought of wrapping my original dataset in another one that would create "buckets" of data, but then how can I make sure the dataset is reshuffled each epoch so that the buckets don't always have the same content?
Thanks for your help :) |
DDP accelerator should have 'trainer=None' as default | [
"bug",
"help wanted",
"design"
] | Add to DDP accelerator 'trainer=None' as a first argument (otherwise I cannot pass it to the Trainer instantiation)
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/accelerators/ddp_accelerator.py#L49
class DDPAccelerator(Accelerator):
def __init__(self, trainer, cluster_environment=None, ddp_plugin=None) |
`lr_finder` fails when called after training for 1 or more epochs | [
"bug",
"help wanted",
"trainer: tune",
"priority: 1"
] | 🐛 Bug
Calling lr_finder on the model after trainer.fit() has been called will fail with:
LR finder stopped early due to diverging loss.
Failed to compute suggesting for `lr`. There might not be enough points.
, even when the default value of min_lr=1e-08 has been changed to 1e-30.
Please reproduce using the BoringModel and post here
Reproduced using a callback: https://colab.research.google.com/drive/1sbOPs8edyFi_idJNnd6gr3etyv7V57YU?usp=sharing
Reproduced with calling Trainer twice: https://colab.research.google.com/drive/1WxUvayBBg_163nu8fjv-jsvPk6pUrSrK?usp=sharing
To Reproduce
Add the following callback (as demonstrated with the BoringModel):
# Call Learning Rate finder after X epochs
class LRFinderXEpoch(Callback):
def __init__(self, epoch=1):
super().__init__()
self.epoch = epoch
def on_train_epoch_start(self, trainer, pl_module):
if trainer.current_epoch == self.epoch:
print("Calling learning rate finder!")
trainer.tune(pl_module)
# trainer.tuner.lr_find(pl_module, min_lr=1e-30)
Expected behavior
Find the best learning rate after a few epochs of training (e.g. when doing Transfer Learning).
Environment
* CUDA:
- GPU:
- Tesla T4
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.5
- pyTorch_debug: True
- pyTorch_version: 1.7.0+cu101
- pytorch-lightning: 1.0.8
- tqdm: 4.41.1
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.6.9
- version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context
Issue came from the following discussion: https://forums.pytorchlightning.ai/t/train-2-epochs-head-unfreeze-learning-rate-finder-continue-training-fit-one-cycle/366/4
Potentially related issues:
#4784
#4616 |
Metric names when using multiple dataloader for validation | [
"question"
] | Hi guys,
I am using multiple dataloaders for validation. This works great so far, but I have some questions regarding the logged metrics:
As far as I understand, Lightning will automatically assign a metric suffix (/dataloader_idx_X). Is there any way for me to control that behaviour? Wandb creates groups based on that slash, which means it groups by metric name and not by validation set. However, there are scenarios where it makes much more sense to group by validation set, as they might look at completely different metrics. At the bottom, there is a screenshot of how it looks without any further wandb configuration.
Apart from my question, this is more of a feature proposal: couldn't we supply an (ordered) dictionary instead of a list via val_dataloader() in DataModules? So that it looks like this:
DataModule:
def val_dataloader(self) -> Union[Dict[str, DataLoader], List[Dataloader], DataLoader]:
return {
'scenario1': DataLoader('...'),
'scenario2': DataLoader('...'),
}
LightningModule
def validation_step(self, batch, batch_idx: int, loader: Union[str, int]):
# do something
self.log('my_metric', x)
Which then logs scenarioX/my_metric in case of the example or dataloader_idx_X/my_metric when a list is provided. |
Training loss does not converge | [
"bug",
"help wanted"
] | 🐛 Bug
Running the example code below on my PC, the loss does not change:
Epoch 29: 43%|█████████████████████████                          | 800/1875 [00:04<00:05, 189.48it/s, loss=1.483, v_num=87]
As you can see, even at epoch 29 the loss is around 1.5.
code:
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
class MNISTModel(pl.LightningModule):
def __init__(self):
super(MNISTModel, self).__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
train_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
mnist_model = MNISTModel()
trainer = pl.Trainer(gpus=1, progress_bar_refresh_rate=20)
trainer.fit(mnist_model, train_loader)
Please reproduce using the BoringModel and post here
Here comes the strange thing: running the same code on Colab,
(https://colab.research.google.com/drive/1rPmzscbE4tCo7k8pj7FQJxck3JcM-Zx8?usp=sharing)
Got this
Epoch 1: 77% 1440/1875 [00:06<00:01, 230.34it/s, loss=0.669, v_num=0]
Even at epoch 1, I got loss = 0.669!
To Reproduce
Colab:https://colab.research.google.com/drive/1rPmzscbE4tCo7k8pj7FQJxck3JcM-Zx8?usp=sharing
My local PC you probably cannot reach.
Expected behavior
Colab and my PC should get the same behavior; in other words, the loss on my PC should also converge.
Environment
PyTorch Version (e.g., 1.0): 1.6.0
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Python version: 3.7.4
CUDA/cuDNN version: 10.1
GPU models and configuration: Gforce1060 (1080ti also get loss around 1.5 all the time.)
Any other relevant information: |
DDP Logdir Multiple Runs Bug | [
"bug",
"help wanted",
"good first issue",
"priority: 0"
] | 🐛 Bug
When using accelerator="ddp" and running a PL experiment multiple times, each time creating a new Trainer, the log version number is handled in the wrong way.
Instead of creating the folder "version_2" after running "version_1", a folder named "2" is created.
To fix this, after the training, os.environ.pop("PL_EXP_VERSION") has to be executed.
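A small, hedged sketch of that workaround (grounded in the sentence above; the env-var name is taken from the report):

```python
import os

# Clear the cached experiment version between runs so the next Trainer
# starts a fresh "version_X" folder instead of a bare numeric one.
os.environ.pop("PL_EXP_VERSION", None)
```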
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1GFQ6doCWl_aTdBaBVOgTf1dTs_qZV0sg?usp=sharing
Expected behavior
Increase "version_1" to "version_2". |
Auto-scale batch size triggers "on_train_end" | [
"bug",
"help wanted",
"trainer: tune",
"priority: 1"
] | When running Trainer with auto_scale_batch_size set to true, the on_train_end function is called at the start, but never at the end, because there is a check to not run teardown multiple times. |
How to write log info to txt file | [
"question",
"logging"
How can I save each epoch's self.log info to a txt file while keeping the TensorBoard logs?
I want to log some metric values for each epoch as well as my own debug text, such as self.log('This is a text info'). |
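A hedged sketch of one way to do this: keep self.log()/TensorBoard for metrics and use a plain Python logger that writes free-form debug messages to a text file (the file name is illustrative).

```python
import logging

text_log = logging.getLogger("train_debug")
text_log.setLevel(logging.INFO)
text_log.addHandler(logging.FileHandler("train_debug.txt"))

# e.g. called from an epoch-end hook alongside self.log(...)
text_log.info("epoch %d finished, val_loss=%.4f", 3, 0.1234)
```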
Tensorboard log_graph does not seem to do anything | [
"bug",
"help wanted"
] | 🐛 Bug
While exploring Tensorboard's logging features I experimented with the log_graph function, that was added after #2915 (issue) and #3003 (PR).
According to the docs the function has the following signature log_graph(model, input_array=None)
So I tried to call it doing, inside my PL module:
...
def __init__(self, ..):
...
self.example_input_array = torch.rand((10, self.input_dim))
...
def test_step(self, batch, batch_idx):
...
self.logger.log_graph(self)
...
without any success. No error message is displayed and Tensorboard does not recognise any logged graph.
Nevertheless if I change my code to:
...
def __init__(self, ..):
...
self.example_input_array = torch.rand((10, self.input_dim))
...
def test_step(self, batch, batch_idx):
...
self.logger.experiment.add_graph(self, self.example_input_array)
...
Which is what the log_graph function is supposed to call, then I do see my graph appearing in Tensorboard.
Please reproduce using the BoringModel and post here
To Reproduce
Execute the following gist
https://gist.github.com/pierresegonne/091a1d0e163f01a6f1c944c1bc4e9cf6
And change lines 123/124 to see the graph being logged or not
Environment
* CUDA:
- GPU:
- available: False
- version: None
* Packages:
- numpy: 1.19.2
- pyTorch_debug: True
- pyTorch_version: 1.7.0
- pytorch-lightning: 1.0.2
- tqdm: 4.50.2
* System:
- OS: Darwin
- architecture:
- 64bit
-
- processor: i386
- python: 3.8.6
- version: Darwin Kernel Version 19.6.0: Sun Jul 5 00:43:10 PDT 2020; root:xnu-6153.141.1~9/RELEASE_X86_64 |
[Doc] Callback example raising an exception | [
"bug",
"help wanted"
With pytorch-lightning 1.0.7, I follow the docs example as follows:
import os
import torch
from torch import nn
import torch.nn.functional as F
from torchvision.datasets import MNIST
from torchvision import transforms
from torch.utils.data import DataLoader
import pytorch_lightning as pl
from torch.utils.data import random_split
class LitAutoEncoder(pl.LightningModule):
def __init__(self):
super().__init__()
self.encoder = nn.Sequential(
nn.Linear(28*28, 64),
nn.ReLU(),
nn.Linear(64, 3)
)
self.decoder = nn.Sequential(
nn.Linear(3, 64),
nn.ReLU(),
nn.Linear(64, 28*28)
)
def forward(self, x):
# in lightning, forward defines the prediction/inference actions
embedding = self.encoder(x)
return embedding
def training_step(self, batch, batch_idx):
# training_step defined the train loop.
# It is independent of forward
x, y = batch
x = x.view(x.size(0), -1)
z = self.encoder(x)
x_hat = self.decoder(z)
loss = F.mse_loss(x_hat, x)
# Logging to TensorBoard by default
self.log('train_loss', loss)
return loss
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
return optimizer
def on_train_start(self, trainer, pl_module):
# track the initial learning rates
for opt_idx, optimizer in enumerate(trainer.optimizers):
group = []
for param_group in optimizer.param_groups:
group.append(param_group['lr'])
self.old_lrs.append(group)
class DecayLearningRate(pl.Callback):
def __init__(self):
self.old_lrs = []
def on_train_start(self, trainer, pl_module):
# track the initial learning rates
for opt_idx, optimizer in enumerate(trainer.optimizers):
group = []
for param_group in optimizer.param_groups:
group.append(param_group['lr'])
self.old_lrs.append(group)
def on_train_epoch_end(self, trainer, pl_module, outputs):
for opt_idx, optimizer in enumerate(trainer.optimizers):
old_lr_group = self.old_lrs[opt_idx]
new_lr_group = []
for p_idx, param_group in enumerate(optimizer.param_groups):
old_lr = old_lr_group[p_idx]
new_lr = old_lr * 0.98
new_lr_group.append(new_lr)
param_group['lr'] = new_lr
self.old_lrs[opt_idx] = new_lr_group
dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
train_loader = DataLoader(dataset)
# init model
autoencoder = LitAutoEncoder()
# most basic trainer, uses good defaults (auto-tensorboard, checkpoints, logs, and more)
# trainer = pl.Trainer(gpus=8) (if you have GPUs)
decay_callback = DecayLearningRate()
trainer = pl.Trainer(callbacks=[decay_callback])
trainer.fit(autoencoder, train_loader)
The code raises an exception:
TypeError: on_train_start() missing 2 required positional arguments: 'trainer' and 'pl_module' |
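A hedged sketch of the likely cause and fix: the LightningModule hook of the same name is called without trainer/pl_module arguments (unlike the Callback hook), so the extra on_train_start copied into LitAutoEncoder triggers this TypeError. Inside the module it would look like this instead:

```python
class LitAutoEncoderFixed(LitAutoEncoder):
    # The LightningModule hook takes no extra arguments; the trainer is reachable via self.trainer.
    def on_train_start(self):
        self.old_lrs = []
        for opt_idx, optimizer in enumerate(self.trainer.optimizers):
            self.old_lrs.append([pg["lr"] for pg in optimizer.param_groups])
```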
metrics.classification.ConfusionMatrix() not available | [
"help wanted",
"question"
] | 🐛 Bug
The ConfusionMatrix class called from metrics.classification.ConfusionMatrix is not working. When trying to import the class, it fails.
To fix the issue, the __init__.py file of the metrics module needs to include a comma after the ConfusionMatrix line, like so:
from pytorch_lightning.metrics.classification import (
Accuracy,
Precision,
Recall,
FBeta,
F1,
ConfusionMatrix,
)
This issue occurs on the latest versions of pytorch_lightning. |
What is the module attribute in the training_loop.py? | [
"question",
"waiting on author"
] | I want to train my model using pytorch_lightning but, when I pass my model into the trainer.fit(mymodel) function an error gets thrown. It says "my model" has no attribute 'module'. I searched through the pytorch_lightning files and found it in pytorch_lightning/trainer/training_loop.py on line 123 - 125. There is actually a class attribute that's called module, but my model class doesn't have it.
#Code
```
if args.load_model_from:
speech_module = SpeechModule.load_from_checkpoint(args.load_model_from, model=model, args=args)
else:
speech_module = SpeechModule(model, args)
logger = TensorBoardLogger(args.logdir, name='speech_recognition')
trainer = Trainer(logger=logger)
trainer = Trainer(
max_epochs=args.epochs, gpus=args.gpus,
num_nodes=args.nodes, distributed_backend=None,
logger=logger, gradient_clip_val=1.0,
val_check_interval=args.valid_every,
checkpoint_callback=checkpoint_callback(args),
resume_from_checkpoint=args.resume_from_checkpoint
)
trainer.fit(speech_module)
```
My environment:
- OS: macOS 11.0.1
- Packaging: pip
- Version: python 3.7.9
- CPU/GPU: M1 (16GB shared cache and memory) |
[LoggerCollection.log_hyperparams] [no optional argument metrics] | [
"bug",
"help wanted",
"priority: 0",
"logger"
] | 🐛 Bug
Hello ! :)
The following method TensorBoardLogger.log_hyperparams has an optional argument metrics yet LoggerCollection.log_hyperparams doesn't, yielding an error when relying on several loggers including TensorBoardLogger and using this optional argument.
Is that behaviour expected? Shouldn't all loggers have this optional argument?
To Reproduce
logger1 = pl.loggers.TensorBoardLogger(save_dir="logs")
logger2 = pl.loggers.CSVLogger(save_dir="logs")
pl.Trainer(logger=[logger1, logger2], etc)
Yields TypeError: log_hyperparams() got an unexpected keyword argument 'metrics'.
Environment
CUDA:
GPU:
available: False
version: 10.2
Packages:
numpy: 1.19.4
pyTorch_debug: True
pyTorch_version: 1.7.0
pytorch-lightning: 1.0.6
tqdm: 4.53.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.6
version: #1 SMP Tue Aug 11 16:36:14 UTC 2020 |
test loading legacy checkpoints | [
"feature",
"help wanted",
"ci",
"checkpointing",
"priority: 1"
] | 🚀 Feature
Create persistent storage with checkpoints from all legacy versions and try to load them and continue training. |
conflicts of warm-up and lr scheduler | [
"feature",
"help wanted",
"won't fix"
] | In docs, warm-up is done as below:
# learning rate warm-up
def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_idx, second_order_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False):
# warm up lr
if self.trainer.global_step < 500:
lr_scale = min(1., float(self.trainer.global_step + 1) / 500.)
for pg in optimizer.param_groups:
pg['lr'] = lr_scale * self.hparams.learning_rate
# update params
optimizer.step()
optimizer.zero_grad()
This changes the lr during steps [0, 500], but the lr_scheduler also changes the lr during steps [0, 500], so this warm-up method overrides the lr_scheduler's changes. How can I use warm-up during steps [0, 500] and then start the lr_scheduler from step 500 onwards? Or use warm-up during epochs [0, 3] and then start the lr_scheduler from epoch 3 onwards? |
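A hedged sketch of one way to avoid the conflict: fold both warm-up and the later decay into a single LambdaLR factor, so only one mechanism ever touches the learning rate (the post-warm-up decay below is purely illustrative).

```python
import torch

def lr_factor(step: int, warmup_steps: int = 500) -> float:
    if step < warmup_steps:
        return float(step + 1) / warmup_steps          # linear warm-up over the first 500 steps
    return 0.95 ** ((step - warmup_steps) // 1000)     # example decay afterwards

params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.SGD(params, lr=0.1)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)
```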
WandbLogger does not mark uploaded model as 'artifact' | [
"bug",
"help wanted",
"won't fix",
"logger",
"3rd party",
"priority: 1"
] | 🐛 Bug
I'm using WandbLogger with the latest pytorch-lightning==1.0.8. It seems like the trained checkpoint is treated as a mere file, not a model artifact, even though I turned on log_model=True. It's much more convenient to use a model artifact from another script, so I hope pytorch-lightning does that automatically. |
Environment
* CUDA:
- GPU:
- GeForce RTX 3090
- available: True
- version: 11.0
* Packages:
- numpy: 1.19.2
- pyTorch_debug: True
- pyTorch_version: 1.7.0
- pytorch-lightning: 1.0.8
- tensorboard: 2.3.0
- tqdm: 4.51.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.5
- version: #60-Ubuntu SMP Fri Nov 6 10:37:59 UTC 2020 |
How to resume training | [
"question"
] | β Questions and Help
What is your question?
How to resume training
What have you tried?
The following:
trainer = pl.Trainer(gpus=1, default_root_dir=save_dir)
saves the checkpoints but does not resume from the last checkpoint (it starts a new version)
The following code starts the training from scratch (but I read that it should resume):
logger = TestTubeLogger(save_dir=save_dir, name="default", version=0)
trainer = pl.Trainer(gpus=1, default_root_dir=save_dir, logger = logger)
It always starts over, and it also always saves to a different checkpoint if one already exists, e.g. epoch=0.ckpt and then epoch=0-v0.ckpt (instead of overwriting).
What's your environment?
OS: Linux
Packaging pip
Version: pytorch_lightning-1.0.8 |
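A hedged sketch for this PL version: resuming is requested explicitly via resume_from_checkpoint rather than inferred from default_root_dir (the paths below are illustrative).

```python
import pytorch_lightning as pl

save_dir = "./checkpoints"                               # illustrative; stands in for the save_dir above
trainer = pl.Trainer(
    gpus=1,
    default_root_dir=save_dir,
    resume_from_checkpoint=save_dir + "/last.ckpt",      # path to the checkpoint to resume from
)
```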
DDP+ manual optimization support | [
"bug",
"duplicate",
"priority: 0",
"distributed"
] | Gradients aren't being synchronised if using manual_backward |
Implementing a metric is getting complicated | [
"bug",
"help wanted"
In PL 0.9.0 I had the Metric implementation below:
class MRRMetric(TensorMetric):
def similarities(self, x1, x2):
"""
Calculates the cosine similarity matrix for every pair (i, j),
where i is an embedding from x1 and j is another embedding from x2.
:param x1: a tensors with shape [batch_size, hidden_size].
:param x2: a tensors with shape [batch_size, hidden_size].
:return: the cosine similarity matrix with shape [batch_size, batch_size].
"""
x1 = x1 / torch.norm(x1, dim=1, p=2, keepdim=True, dtype=torch.float32)
x2 = x2 / torch.norm(x2, dim=1, p=2, keepdim=True, dtype=torch.float32)
return torch.matmul(x1, x2.t())
def forward(self, r1, r2):
distances = 1 - self.similarities(r1, r2)
correct_elements = torch.unsqueeze(torch.diag(distances), dim=-1)
batch_ranks = torch.sum(distances < correct_elements, dim=-1) + 1.0
return torch.mean(1.0 / batch_ranks)
In PL 1.0.8 the same metric is implemented with the same logic as:
class MRRMetric(Metric):
def __init__(self):
super().__init__()
self.add_state("mrrs", default=[])
def similarities(self, x1, x2):
"""
Calculates the cosine similarity matrix for every pair (i, j),
where i is an embedding from x1 and j is another embedding from x2.
:param x1: a tensors with shape [batch_size, hidden_size].
:param x2: a tensors with shape [batch_size, hidden_size].
:return: the cosine similarity matrix with shape [batch_size, batch_size].
"""
x1 = x1 / torch.norm(x1, dim=1, p=2, keepdim=True, dtype=torch.float32)
x2 = x2 / torch.norm(x2, dim=1, p=2, keepdim=True, dtype=torch.float32)
return torch.matmul(x1, x2.t())
def update(self, r1, r2):
distances = 1 - self.similarities(r1, r2)
correct_elements = torch.unsqueeze(torch.diag(distances), dim=-1)
batch_ranks = torch.sum(distances < correct_elements, dim=-1) + 1.0
mrr = torch.mean(1.0 / batch_ranks)
self.mrrs.append(mrr )
def compute(self):
return torch.mean(torch.tensor(self.mrrs))
However, I'm getting different outputs when calling these metrics:
PL 0.9.0
s=torch.tensor([[0.0023, 0.8380, 0.2920, 0.2263, 0.3362, 0.7074, 0.5290, 0.5974, 0.8176,
0.5816],
[0.6414, 0.9122, 0.0162, 0.3203, 0.5081, 0.9035, 0.3201, 0.2870, 0.0693,
0.3736],
[0.4570, 0.2787, 0.0140, 0.5623, 0.7226, 0.3223, 0.3566, 0.6783, 0.8384,
0.1610],
[0.1334, 0.7529, 0.5574, 0.1949, 0.9526, 0.9542, 0.2212, 0.1759, 0.1354,
0.8739],
[0.3224, 0.2001, 0.2776, 0.4203, 0.0720, 0.2702, 0.0114, 0.1538, 0.0090,
0.4954]])
r=torch.tensor([[1.7271e-01, 6.2802e-01, 1.9287e-01, 3.7181e-01, 9.8512e-01, 3.3500e-01,
7.8245e-01, 8.2903e-01, 3.8158e-01, 9.0951e-01],
[1.6962e-01, 1.8394e-01, 8.1776e-01, 1.5577e-01, 2.6253e-01, 8.9279e-02,
4.3836e-01, 9.3631e-01, 9.9012e-01, 7.0277e-01],
[6.0912e-01, 3.9780e-04, 2.9773e-02, 3.0968e-01, 9.5504e-01, 5.9606e-01,
7.1733e-01, 5.7158e-01, 7.0587e-01, 2.0964e-01],
[5.1019e-01, 3.9333e-01, 8.7822e-02, 8.1885e-01, 3.9182e-01, 6.6586e-02,
1.6968e-01, 1.6206e-01, 6.1676e-01, 1.8388e-01],
[8.3266e-01, 5.5918e-01, 6.3568e-01, 5.4674e-01, 3.4784e-01, 6.0922e-01,
1.3884e-01, 2.3742e-01, 9.0644e-01, 2.3266e-01]])
mrr = MRRMetric()
mrr(s, r)
#tensor(0.5300)
PL 1.0.8
s=torch.tensor([[0.0023, 0.8380, 0.2920, 0.2263, 0.3362, 0.7074, 0.5290, 0.5974, 0.8176,
0.5816],
[0.6414, 0.9122, 0.0162, 0.3203, 0.5081, 0.9035, 0.3201, 0.2870, 0.0693,
0.3736],
[0.4570, 0.2787, 0.0140, 0.5623, 0.7226, 0.3223, 0.3566, 0.6783, 0.8384,
0.1610],
[0.1334, 0.7529, 0.5574, 0.1949, 0.9526, 0.9542, 0.2212, 0.1759, 0.1354,
0.8739],
[0.3224, 0.2001, 0.2776, 0.4203, 0.0720, 0.2702, 0.0114, 0.1538, 0.0090,
0.4954]])
r=torch.tensor([[1.7271e-01, 6.2802e-01, 1.9287e-01, 3.7181e-01, 9.8512e-01, 3.3500e-01,
7.8245e-01, 8.2903e-01, 3.8158e-01, 9.0951e-01],
[1.6962e-01, 1.8394e-01, 8.1776e-01, 1.5577e-01, 2.6253e-01, 8.9279e-02,
4.3836e-01, 9.3631e-01, 9.9012e-01, 7.0277e-01],
[6.0912e-01, 3.9780e-04, 2.9773e-02, 3.0968e-01, 9.5504e-01, 5.9606e-01,
7.1733e-01, 5.7158e-01, 7.0587e-01, 2.0964e-01],
[5.1019e-01, 3.9333e-01, 8.7822e-02, 8.1885e-01, 3.9182e-01, 6.6586e-02,
1.6968e-01, 1.6206e-01, 6.1676e-01, 1.8388e-01],
[8.3266e-01, 5.5918e-01, 6.3568e-01, 5.4674e-01, 3.4784e-01, 6.0922e-01,
1.3884e-01, 2.3742e-01, 9.0644e-01, 2.3266e-01]])
mrr = MRRMetric()
mrr(r, s)
#tensor(0.6800) |
default_root_path vs default_root_dir | [
"docs"
] | I noticed that the argument to Trainer is default_root_dir, but the docs sometimes show default_root_path. I did a grep on the codebase for default_root_path, and here are the results:
./notebooks/05-trainer-flags-overview.ipynb:2155: "trainer = pl.Trainer(default_root_path=os.getcwd())\n",
./pytorch_lightning/trainer/properties.py:51: default_root_path: str
./docs/source/trainer.rst:618: trainer = Trainer(default_root_path=os.getcwd())
I'm fairly convinced it's simply a documentation issue, but it requires some code changes (the 2nd line), and I'm not sure whether that will affect anything. |
Learning rate scheduling interval on LambdaLR scheduler cannot be set to "step". | [
"bug",
"help wanted",
"priority: 0"
] | π Bug
Hello. I am trying to use LambdaLR learning rate scheduler with lr updates at every step with my custom function.
However, I have found that even with scheduler={'interval': 'step'}, the update occurs at each epoch, not each training step.
def configure_optimizers(self):
def func(step: int, max_steps=max_steps):
return (1 - (step / max_steps)) ** 0.9
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=func)
return {'optimizer': optimizer, 'lr_scheduler': scheduler, 'interval': 'step'}
To Reproduce
Apologies for the small snippet but the project is very complex and I think this is the relevant part.
Expected behavior
Learning rate should decrease at every training step after backpropagation.
Environment
Ubuntu16.04 LTS on Docker. PyTorch Lightning v.1.0.6 and 1.0.8. Pytorch version 1.6.0
Additional context
I think the usage of step-wise iteration could be better documented as to how the arguments are parsed and utilized. |
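For comparison, this is the nested-dict form I understand the docs to describe (a sketch only — the optimizer and max_steps below are placeholders), where 'interval' sits inside the lr_scheduler dict rather than at the top level; it is unclear to me whether that nesting is required:
def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    max_steps = 10000  # placeholder
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda step: (1 - step / max_steps) ** 0.9
    )
    return {
        'optimizer': optimizer,
        'lr_scheduler': {'scheduler': scheduler, 'interval': 'step', 'frequency': 1},
    }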
The correct way to resume experiment with logger | [
"question"
From what I understand, checkpoints don't save the id of the logger run, right?
Then what is the correct way to resume training from a checkpoint and resume the correct logger run at the same time?
For example, I'm using the Weights & Biases logger, where you can pass an id argument which resumes the run:
WandbLogger(id=run_id)
How can I know what is the correct run id without checking it manually?
Is it possible to somehow store it in the checkpoint?
Also, why isn't the checkpoint storing things like that automatically? It seems like a useful feature.
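The workaround I'm considering (just a sketch, not verified): persist the run id myself — e.g. read it from wandb_logger.version after the first run and write it to a small file next to the checkpoints — and pass it back on resume:
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

run_id = open("wandb_run_id.txt").read().strip()  # stored manually after the first run
wandb_logger = WandbLogger(project="my-project", id=run_id)  # resumes the same W&B run
trainer = Trainer(
    logger=wandb_logger,
    resume_from_checkpoint="checkpoints/last.ckpt",  # hypothetical path
)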
How to log more than one metric in the logger? | [
"question"
] | I want to log two metrics. What should I do?
self.log('my_loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
This only logs one metric.
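For reference, a sketch of what I assume should work — calling self.log once per metric, or self.log_dict for several metrics at once:
self.log('my_loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
self.log('my_acc', acc, on_step=True, on_epoch=True, prog_bar=True, logger=True)
# or, several metrics in a single call:
self.log_dict({'my_loss': loss, 'my_acc': acc}, on_step=True, on_epoch=True, prog_bar=True)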
manual_optimization does not work with ddp | [
"bug",
"help wanted",
"priority: 0"
] | π Bug
Can't run ddp with manual optimization. Fails on the second batch with an error:
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes 2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet.
To Reproduce
Change optimization to manual in basic gan bolt.
Expected behavior
Do not fail when n_gpus > 1
Environment
CUDA:
GPU:
Tesla V100-SXM2-16GB
Tesla V100-SXM2-16GB
Tesla V100-SXM2-16GB
Tesla V100-SXM2-16GB
available: True
version: 10.2
Packages:
numpy: 1.19.4
pyTorch_debug: True
pyTorch_version: 1.7.0
pytorch-lightning: 1.0.8
tqdm: 4.54.0
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.9
version: #1 SMP Tue Sep 10 10:50:19 EDT 2019
Additional context
Having manual optimization work with GANs in a multi-GPU regime is a very useful application.
how to properly skip samples that cause inf/nan gradients/loss | [
"feature",
"question",
"won't fix"
] | tl;dr
Does the approach in the code snippet below look OK, or is there a better alternative for automatically skipping a few "bad" samples in the data that cause inf/nan gradients/loss? (Is it a good practice altogether?)
details
Sometimes there is a small percentage (but annoyingly large in absolute terms) of "dirty" samples in the data that cause the loss to be nan, even though the neural-network architecture itself is fine and numerically stable.
One approach is to automatically stop training (use terminate_on_nan) and then somehow isolate all these samples and remove them from the data permanently. But..
Sometimes we simply want to automatically skip these samples as if they never existed (perhaps with a warning), and continue training.
I couldn't find any documentation about how to do that, nor anyone who asked this question, so I decided to ask and offer the solution I found, for others who might need it as well.
in the end, i came up with the following approach - override on_after_backwards method in my lightning-module with the following code:
code
def on_after_backward(self) -> None:
valid_gradients = True
for name, param in self.named_parameters():
if param.grad is not None:
valid_gradients = not (torch.isnan(param.grad).any() or torch.isinf(param.grad).any())
if not valid_gradients:
break
if not valid_gradients:
log.warning(f'detected inf or nan values in gradients. not updating model parameters')
self.zero_grad()
pros
This code successfully identifies nan/inf gradients, and skips the parameter update by zeroing the gradients for the specific batch.
Supports multi-GPU (at least ddp, which I tested). When done this way, detecting inf/nan gradients (instead of an inf/nan loss), we avoid potential cases of losing synchronization between different processes, because typically one of the processes would generate an inf loss while the others won't. If we stop only one process from doing a backwards pass, we lose synchronization and stumble into never-ending processes that wait for nothing; training stalls. When checking gradients, the check happens after the gradients in all processes have been affected by the bad inf loss, so synchronization is preserved.
cons
Can't catch the bad samples that way; that needs more work.
Might not be future-proof.
Clutters the lightning-module code (it is essentially architecture-agnostic, boilerplate code).
Perhaps there is a better way.
final question
Is it worth having such functionality integrated into Lightning as a simple command-line switch/parameter?
`validation_epoch_end` might need current epoch | [
"feature",
"help wanted"
] | Hi,
I would like to save some files on validation_epoch_end with the current epoch as a filename suffix. However, the current epoch is not a parameter of validation_epoch_end, and it seems I can only get it from pl.Trainer. Is there another way, instead of adding an additional parameter to validation_epoch_end?
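For context, the workaround I'm using now — a sketch assuming self.current_epoch is available on the LightningModule during validation:
def validation_epoch_end(self, outputs):
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    torch.save(outputs, f"val_outputs_epoch_{self.current_epoch}.pt")  # epoch-suffixed file
    self.log('val_loss', avg_loss)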
Thanks! |
manual_optimization does not work with dp | [
"bug",
"help wanted",
"waiting on author",
"priority: 1"
] | π Bug
We can run the dp backend with manual optimization, but the gradients seem to be messed up, hence the model can't learn anything.
To Reproduce
Change optimization to manual in basic gan bolt, then change the backend to dp.
Set batch_size = 2, compare experiments on 1 GPU vs 2 GPUs
When using 1 GPU everything is fine, but using 2 GPUs will fail the training.
I haven't really tested it yet, but since I've done many experiments on my own implementations (which are too heavy to paste here and hard to extract), I think it should be reproducible.
Expected behavior
Performance under 2 GPUs with dp backend should be identical to the 1 GPU one.
Environment
(Should be ) Any.
Additional context
This bug comes from my experiments on GANs but should be affecting other models as long as the manual optimization is utilized. |
Custom Checkpoint file extension | [
"feature",
"good first issue",
"checkpointing"
] | π Feature
Atm, we have hardcoded .ckpt as a file extension for any checkpoint.
pytorch-lightning/pytorch_lightning/callbacks/model_checkpoint.py
Line 429
in
db69d16
ckpt_name = f"{filename}.ckpt"
Proposed solution 1:
class ModelCheckpoint(Callback):
FILE_EXTENSION = ".ckpt"
Proposed solution 2:
ModelCheckpoint(ext='.pt')
Precision and Recall over validation step | [
"question",
"working as intended"
] | When Precision and Recall are directly computed, I get the following result:
import torch
from pytorch_lightning.metrics import Precision
from pytorch_lightning.metrics import Recall
y = torch.tensor([0, 0, 2, 2, 1, 1, 1, 2, 0, 0])
y_hat = torch.tensor([1, 1, 2, 1, 1, 1, 1, 1, 2, 1])
precision = Precision(num_classes=3)
recall = Recall(num_classes=3)
precision(y_hat, y)
#>>>tensor(0.2917)
recall(y_hat, y)
#>>>tensor(0.4444)
However, when the same metrics are computed inside validation_step, I get the following strange result:
def validation_step(self, batch, batch_idx):
x, y = batch["x"], batch["y"] # y = tensor([0, 0, 2, 2, 1, 1, 1, 2, 0, 0], device='cuda:0')
y_hat = self(x) # y_hat = tensor([1, 1, 2, 1, 1, 1, 1, 1, 2, 1], device='cuda:0')
precision = self.precision_score(y_hat, y) # precision = tensor(0.4000, device='cuda:0')
recall = self.recall_score(y_hat, y) # recall = tensor(0.4000, device='cuda:0')
What am I missing?
Loss format from .3f to .3g in the training loop | [
"feature",
"help wanted",
"design"
] | π Feature
I propose to change the default format of loss during training (in the tqdm bar) from .3f to .3g
Motivation
When using pytorch-lightning with losses that are quite close to zero, the tqdm information during training becomes non-informative, because the loss is always 0.000. For example:
Epoch 41: 76%|ββββββ | 37/49 [00:00<00:00, 65.74it/s, loss=0.000, v_num=92]
The proposed change takes it to
Epoch 41: 76%|ββββββ | 37/49 [00:00<00:00, 65.74it/s, loss=2.2e-08, v_num=92]
This change from .3f to .3g has also an advantage when loss is a large number
>>> print('loss: %.3f' % 80808423243)
loss: 80808423243.000
>>> print('loss: %.3g' % 80808423243)
loss: 8.08e+10
In other situations, the output of .3f and .3g does not change:
>>> print('loss: %.3f' % .884)
loss: 0.884
>>> print('loss: %.3g' % .884)
loss: 0.884
Abstract matplotlib figure logging from individual logger API | [
"feature",
"help wanted",
"won't fix",
"logger"
] | π Feature
Implement default log_figure for all implemented loggers
Motivation
Currently, logging figures relies on calling the API of the individual logger directly. This is not really convenient for a variety of reasons:
It is cumbersome to change loggers
It is cumbersome to disable logging (e.g. for debugging) --> if self.logger is not None: \n[...]
It is not really nice if you have multiple loggers
Pitch
I propose something like
logger.log_figure(figure_name, plt.figure, step, close_figure, kwargs)
where the kwargs are passed on to the respective logger implementation (i.e. if one wants something specific).
Additional context
Should a log_image method also be considered? Should it rather be log_figures (plural, i.e. passing multiple figures)? |
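As a rough sketch of what the default implementation could look like (assuming the TensorBoard logger, where logger.experiment exposes a SummaryWriter with add_figure; other loggers would override this):
import matplotlib.pyplot as plt

def log_figure(self, tag, figure, step=None, close_figure=True, **kwargs):
    # TensorBoard-backed default; kwargs are forwarded to the underlying writer
    self.experiment.add_figure(tag, figure, global_step=step, **kwargs)
    if close_figure:
        plt.close(figure)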
wandb logger problem with on_step log on validation | [
"bug",
"help wanted",
"question",
"3rd party",
"priority: 1"
] | π Bug
When logging on the validation_step with on_step=True and on_epoch=False the following happens:
wandb warnings are generated to alert about a step numbering problem (probably confusing the validation step number, which is cyclical, with the overall step, which is always increasing)
wandb charts for training (by step) is shrunk on the x dimension (like the number of steps for the whole training were less). We tested 2 training runs: the first (blue in the image below) with on_step=False and on_epoch=True on validation_step, the second with on_step=True and on_epoch=False (red in the image below). As you can see the training chart is affected by this:
an error is issued at the end of the second training run:
two new (unrequested) panels appear at the top of the wandb project (this is the weirdest of the lot :-))
Please reproduce using the colab link at the top of this article
To Reproduce
Just change the validation_step logging like this:
def validation_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
# validation metrics
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
self.log('val_loss', loss, on_step=True, on_epoch=False, prog_bar=True)
self.log('val_acc', acc, on_step=True, on_epoch=False, prog_bar=True)
return loss |
Where are the NeptuneLogger logs stored when I use offline mode? | [
"question",
"won't fix",
"logger"
] | I use the Neptune logger with offline mode in my pipeline, but I can't find where the log files are located, and I can't find any NeptuneLogger parameters to set the storage dir and so on for offline mode. Can someone help with this? Thanks in advance!
Metrics for object detection with bounding boxes | [
"feature",
"help wanted"
] | π Feature
We could add a whole bunch of metrics for object detection (bounding boxes) by just writing a plugin that transforms the bounding boxes to multi-class label tensor representation.
Motivation & Pitch
Metrics for bounding box detection are basically just classification metrics (see here for example) - the only difference is that the inputs come as bounding boxes which then have to be interpreted as class predictions.
If we write a plugin for this "interpretation" part - and I don't even think this would be that hard - we can just plug it into the classification metrics that we already have, and voila, we suddenly cover all object detection metrics!
I think it would still be good idea to create separate classes for these metrics (ObjectDetectionAveragePrecision or something?), but on the inside they would be a simple call to the transformation plugin + usual classification metrics.
If people think this is a good idea, I am willing to create this transformation plug-in and then slowly add these metrics :) |
Add multi-task support to EarlyStopping | [
"feature",
"help wanted",
"won't fix"
] | π Feature
Make EarlyStopping watch multiple values and only stop when all of them no longer improve.
Motivation
I have been training a multi-task model with multiple outputs. Each output is validated and logged by the model.
As of today, early stopping can only watch one of them. Hence, the user has to choose which task is the main one.
Pitch
Make EarlyStopping watch multiple values and only stop when all of them no longer improve.
Alternatives
One could add multiple EarlyStopping callbacks (one for each task) but this would stop as soon as one of the task no longer improves (even though another one might still be improving).
Additional context
Note that ModelCheckpoint does not have this problem: one can add multiple instances of ModelCheckpoint (one for each task) and you get what you expect: checkpointing based on each task's validation metric. |
Use lightning with dgl | [
"feature",
"help wanted",
"won't fix",
"3rd party"
] | def train_dataloader(self):
return DataLoader(self.train_dataset,
batch_size=self.hparams.batch_size,
shuffle=True,
num_workers=4,
collate_fn=self.batcher)
I use collate_fn to batch dgl graphs, but when I train the model, it triggers this warning:
DGLWarning: DGLGraph.__len__ is deprecated. Please directly call DGLGraph.number_of_nodes.
How to fix it?
`LightningModule.log(..., on_epoch=True)` logs with `global_step` instead of `current_epoch` | [
"feature",
"help wanted",
"logging"
] | π Bug
When logging using:
self.log(f"some_metric", value, on_step=False, on_epoch=True)
in ie. training_step, the data is logged to the tensorboard with X axis in steps instead of epochs:
Expected behavior is for the x axis to be in epochs:
To Reproduce
(I'll try to work on reproduction example once I find some free cycles this week)
Environment
Pytorch 1.7 and Lightning 1.1rc
Additional context
@tchaton |
Duplicate epochs when calling .fit() twice | [
"bug",
"help wanted",
"priority: 0",
"breaking change"
] | π Bug
To Reproduce
def test_bug(tmpdir):
epochs = []
class TestModel(BoringModel):
def on_epoch_end(self):
epochs.append(self.current_epoch)
trainer = Trainer(
max_epochs=2,
limit_train_batches=1,
limit_val_batches=1,
default_root_dir=tmpdir,
checkpoint_callback=False,
logger=False,
weights_summary=None,
progress_bar_refresh_rate=0,
)
trainer.fit(TestModel())
trainer.max_epochs=4
trainer.fit(TestModel())
assert epochs == list(range(4))
# AssertionError [0, 1, 1, 2, 3] != [0, 1, 2, 3]
Expected behavior
Assertion does not fail
Environment
Current master
cc @tchaton @Borda |
Accuracy metric for preds at half precision is zero with pl=1.0.8 | [
"bug",
"help wanted",
"priority: 0"
] | π Bug
The accuracy metric is wrong if preds are given with half precision. See example.
To Reproduce
import torch
from pytorch_lightning.metrics import Accuracy
acc = Accuracy(threshold=0.5)
target = torch.Tensor([1, 1, 0, 0])
preds = torch.Tensor([0.7, 0.4, 0.8, 0.4])
print(acc(preds, target))  # -> 0.5
print(acc(preds.half(), target))  # -> 0.0
Expected behavior
The accuracy metric should not fail silently. Either an Error needs to be raised when preds are half precision or it should work correctly.
Environment
PyTorch Version (e.g., 1.0): 1.7.0
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): conda
Build command you used (if compiling from source):
Python version: 3.8
CUDA/cuDNN version: 10.2
GPU models and configuration: ...
Any other relevant information:
Additional context
This might already be fixed in master. I filed the issue regardless because I don't have time to check. |
fix typing in PL codebase - multiple PRs | [
"feature",
"help wanted",
"good first issue",
"refactor",
"priority: 1"
] | π Feature
add/fix typing across the whole PL codebase; this will yield multiple PRs as we prefer to split the work into smaller pieces
Additional context
in #5021 we introduced a new ignore list, so after it is merged, take out a single ignored item and fix the typing for that part
each item + its fix = one PR
Sections
pytorch_lightning.callbacks.* #7035
pytorch_lightning.core.* #7035
pytorch_lightning.accelerators.* #7035
pytorch_lightning.loggers.* #7035
pytorch_lightning.logging.* #7035
pytorch_lightning.metrics.* <---| @hassiahk
pytorch_lightning.overrides.*
pytorch_lightning.profiler.*
pytorch_lightning.pt_overrides.*
pytorch_lightning.plugins.* #7022
pytorch_lightning.root_module.*
pytorch_lightning.trainer.*
pytorch_lightning.distributed.*
pytorch_lightning.tuner.*
pytorch_lightning.utilities.*
pl_examples.* by <---| @janhenriklambrechts
benchmarks.*
tests.*
These sections can be split even smaller; most likely they will still stay per package/folder
Behaviour when limit_train_batches is float is inconsistent | [
"bug",
"help wanted",
"priority: 1"
] | π Bug
When passing limit_train_batches as a float to the trainer, the total number of steps during training is inconsistent and is dependent on accumulate_grad_batches, and independent of drop_last. Please see the tests in the notebooks for more examples.
Please reproduce using the BoringModel and post here
To Reproduce
See notebook.
https://colab.research.google.com/drive/1RmIwW97NM0bOmF2GUUilPyzCQ2MYYG1g?usp=sharing
Expected behavior
I don't have an expected behaviour other than to request that the total number of steps remain consistent. Either always drop an incomplete batch, or don't.
Environment
CUDA:
GPU:
Tesla T4
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: True
pyTorch_version: 1.7.0+cu101
pytorch-lightning: 1.0.8
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context |
The right place for an "essential" callback | [
"question"
] | β Questions and Help
What is your question?
I am currently using an ordinal loss formulation that cuts up a real-valued output space into regions using cutpoints. Each region is linked to a discrete ordered label. After the backward pass (optimizer step), the model requires the cutpoints to be re-arranged in ascending order. I've provided a brief snippet from the original author on how to achieve this using vanilla PyTorch (library: spacecutter)
def ascension_callback(margin=0.0, min_val=-1.0e6):
def _clip(module):
if isinstance(module, LogisticCumulativeLink):
cutpoints = module.cutpoints.data
for i in range(cutpoints.shape[0] - 1):
cutpoints[i].clamp_(
min_val, cutpoints[i + 1] - margin
)
return _clip
callback = ascension_callback()
# In your training loop, do the following:
for data in data_iterator:
# Calculate loss
# Step optimizer
model.apply(callback)
What have you tried?
To achieve this in pytorch-lightning, I converted the original callback code to the form below :
class AscensionCallback(Callback):
"""
Ensure that each cutpoint is ordered in ascending value.
e.g.
.. < cutpoint[i - 1] < cutpoint[i] < cutpoint[i + 1] < ...
This is done by clipping the cutpoint values at the end of a batch gradient
update. By no means is this an efficient way to do things, but it works out
of the box with stochastic gradient descent.
Parameters
----------
margin : float, (default=0.0)
The minimum value between any two adjacent cutpoints.
e.g. enforce that cutpoint[i - 1] + margin < cutpoint[i]
min_val : float, (default=-1e6)
Minimum value that the smallest cutpoint may take.
"""
def __init__(self, margin: float = 0.0, min_val: float = -1.0e6) -> None:
super().__init__()
self.margin = margin
self.min_val = min_val
def clip(self, module: Module) -> None:
# NOTE: Only works for LogisticCumulativeLink right now
# We assume the cutpoints parameters are called `cutpoints`.
if isinstance(module, LogisticCumulativeLink):
cutpoints = module.cutpoints.data
for i in range(cutpoints.shape[0] - 1):
cutpoints[i].clamp_(self.min_val, cutpoints[i + 1] - self.margin)
def on_batch_end(self, trainer, pl_module):
pl_module.model.apply(self.clip)
I then call it using the trainer like so:
trainer = Trainer.from_argparse_args(args, callbacks=[AscensionCallback()])
Questions?
The documentation suggests using callbacks for non-essential code. In this case, the callback is essential and directly modifies the cutpoints. Are callbacks the right way to do something like this?
If not, how can I incorporate this into the main LightningModule such that the clip function is called after the optimizer step during training? In which function do I place a call to this "AscensionCallback()"?
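For reference, the LightningModule-only alternative I'm considering looks roughly like this (a sketch — I'm assuming the module's on_batch_end hook runs after the optimizer step, which I haven't verified for 0.8.5):
def on_batch_end(self) -> None:
    # re-order the cutpoints right after each parameter update
    def _clip(module):
        if isinstance(module, LogisticCumulativeLink):
            cutpoints = module.cutpoints.data
            for i in range(cutpoints.shape[0] - 1):
                cutpoints[i].clamp_(-1.0e6, cutpoints[i + 1])
    self.model.apply(_clip)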
What's your environment?
OS: Ubuntu 20.04.1 LTS
Packaging : pip
Version : 0.8.5 |
Memory allocated on gpu:0 when using torch.cuda.empty_cache() | [
"bug",
"help wanted"
] | π Bug
Pytorch lightning calls torch.cuda.empty_cache() at times, e.g. at the end of the training loop. When the trainer is set to run on GPUs other than gpu:0, it still allocates memory on gpu:0 when running torch.cuda.empty_cache(). Apparently this is the initial device context, but it can be avoided. For example,
with torch.cuda.device('cuda:1'):
torch.cuda.empty_cache()
If the cache is emptied in this way, it will not allocate memory on any GPU other than the one specified.
This seems to be the same issue as in #458, but was never resolved and is still an issue.
To Reproduce
Steps to reproduce the behavior:
Create a pl.Trainer with gpus=[1]
Fit a model on gpu:1
torch.cuda.empty_cache() runs in run_training_teardown at the end of the training loop
nvidia-smi shows memory usage on gpu:0
If gpu:0 already had high memory allocation because of another job, then it will throw a CUDA out of memory error
.../pytorch_lightning/trainer/training_loop.py in run_training_teardown(self)
1153 model = self.get_model()
1154 model.cpu()
-> 1155 torch.cuda.empty_cache()
1156
1157 def training_forward(self, batch, batch_idx, opt_idx, hiddens):
.../torch/cuda/memory.py in empty_cache()
84 """
85 if is_initialized():
---> 86 torch._C._cuda_emptyCache()
87
88
RuntimeError: CUDA error: out of memory
Code sample
trainer = Trainer(gpus=[1])
trainer.fit(task, train_dataloader, val_dataloader)
Expected behavior
Only gpu:1 should be used when training this model.
Environment
CUDA:
GPU:
GeForce RTX 2080 Ti
GeForce GTX 1080 Ti
available: True
version: 10.1
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.5.0
pytorch-lightning: 0.9.0rc12
tensorboard: 2.2.1
tqdm: 4.46.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.3
version: 18.04.1-Ubuntu |
Make all grad_norm logs in a single section | [
"feature",
"help wanted",
"won't fix"
] | π Feature
Make all grad_norm logs in a single section
Motivation
current tensorboard logs of gradient norm looks like:
e.g. for each parameter, there is a separate section that contains a single plot. It takes too much space, especially for really deep networks or when you're experimenting with different architectures. In both cases, it may take several "screens" and scrolling becomes annoying. Even if you logged the grad norm in a single experiment and then turned it off, these empty plots still exist.
Pitch
instead of logging metrics with the tag grad_norm_parameter_name, make a single section grad_norm by changing the tag to grad_norm/parameter_name
or, change
pytorch-lightning/pytorch_lightning/trainer/logging.py
Line 60
in
ca18e11
metrics.update(grad_norm_dic)
to
metrics.update({f'grad_norm/{k}': v for k, v in grad_norm_dic.items()}) |
Get correct metrics for an epoch with DDP | [
"question"
] | β Questions and Help
What is your question?
Before #2528 is fixed to allow calculating metrics for an entire epoch (for example, average precision) with DDP, I am curious if there is a workaround at the moment, given that the current metric is calculated on each GPU before syncing data across GPUs.
Take average precision as an example with 4 GPUs. First, on each GPU, an average precision is calculated based on all batches of the epoch. Then the 4 average precisions from the 4 GPUs are averaged again to get the final average precision. This can sometimes cause quite a big discrepancy compared to the true average precision of the epoch.
Is it possible to sync all data for the entire epoch before calculating the metric, given what we have at the moment, while waiting for fixes in the near future? Thanks.
BTW: really incredible work on this! I already became a fan and am in the process of migrating code from PyTorch to Lightning.
Code
What have you tried?
I have googled/read all the official documents and most of the GitHub issues
What's your environment?
OS: [e.g. iOS, Linux, Win]. Linux
Packaging [e.g. pip, conda]. Pip
Version [e.g. 0.5.2.1]. 0.8.5 |
Auto-scaling batch-size not compatible with half precision training | [
"bug",
"help wanted",
"priority: 0"
] | Using precision=16 and auto_scale_batch_size=True yields 'NoneType' object has no attribute 'state_dict' error
Expected behavior
Largest batch size should be found when using 16 bit precision
Environment
PyTorch Version (e.g., 1.0): 1.6
OS (e.g., Linux): Windows 10
How you installed PyTorch (conda, pip, source): conda
Python version: 3.8
CUDA/cuDNN version: 10/7
GPU models and configuration: 2080ti |
Incorrect default cuda device when using single gpu other than cuda:0 | [
"bug",
"help wanted",
"priority: 0"
] | π Bug
The default cuda device is not set properly to trainer.root_gpu in single-GPU mode. Tensors created with device='cuda' will be placed on the incorrect GPU, and the dataloader will acquire memory on the incorrect GPU when pin_memory=True.
Maybe we'll need to add
torch.cuda.set_device(self.trainer.root_gpu) to
pytorch-lightning/pytorch_lightning/accelerators/gpu_backend.py
Line 24
in
5dfc7b1
class GPUBackend(object):
as DDPBackend did:
pytorch-lightning/pytorch_lightning/accelerators/ddp_backend.py
Line 195
in
5dfc7b1
torch.cuda.set_device(self.trainer.root_gpu)
To Reproduce
Running the following code will get
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
Code sample
import pytorch_lightning as pl
import torch
from torch import nn
from torch.utils import data
class Dataset(data.Dataset):
def __getitem__(self, item):
return torch.zeros(1)
def __len__(self):
return 5
class Model(pl.LightningModule):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.x = nn.Parameter(torch.zeros(1))
def forward(self, *args, **kwargs):
return self.x
def training_step(self, *args, **kwargs):
return self.x + torch.zeros(1, device='cuda') # RuntimeError.
def train_dataloader(self):
return data.DataLoader(Dataset(), num_workers=1, pin_memory=True)
def configure_optimizers(self):
return torch.optim.SGD(self.parameters(), 1.0)
if __name__ == '__main__':
trainer = pl.Trainer(gpus=[1], num_sanity_val_steps=0, max_epochs=1)
model = Model()
trainer.fit(model)
Expected behavior
No RuntimeError occurs.
Environment
CUDA:
GPU:
available:
version:
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 0.9.0rc16
tensorboard: 2.3.0
tqdm: 4.48.2
System:
OS: Windows
architecture:
64bit
WindowsPE
processor:
python: 3.7.3
version: 10.0.18362
Additional context |
Epoch counting is one-off in multiple instances | [
"bug",
"help wanted",
"priority: 0"
] | π Bug
Two issues occur:
The final epoch does not save a checkpoint during training.
Resuming from a checkpoint N will start the epochs at N+2.
Expected behavior
Final checkpoint should save a .ckpt file, as usual.
Should resume from epoch N+1.
Environment
* CUDA:
- GPU:
- Tesla V100-DGXS-16GB
- Tesla V100-DGXS-16GB
- Tesla V100-DGXS-16GB
- Tesla V100-DGXS-16GB
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0rc12
- tensorboard: 2.2.1
- tqdm: 4.46.1
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.7
- version: #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 |
Incorrect Precision/Recall/F1 score compared to sklearn | [
"bug",
"help wanted",
"priority: 0"
] | π Bug
To Reproduce
Steps to reproduce the behavior:
Copy the code
Run the code from top to bottom
Compare print results
See Difference between sklearn and Lightning
Code
import torch
import numpy as np
import pytorch_lightning as pl
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print(pl.__version__)
#### Generate binary data
pl.seed_everything(2020)
n = 10000 # number of samples
y = np.random.choice([0, 1], n)
y_pred = np.random.choice([0, 1], n, p=[0.1, 0.9])
y_tensor = torch.tensor(y)
y_pred_tensor = torch.tensor(y_pred)
# Accuracy appears alright
print('accuracy from sklearn', accuracy_score(y, y_pred))
print('accuracy from lightning functional', pl.metrics.functional.accuracy(y_pred_tensor, y_tensor, num_classes=2))
print('accuracy from lightning tensor', pl.metrics.Accuracy(num_classes=2)(y_pred_tensor, y_tensor))
## results
## accuracy from sklearn 0.4986
## accuracy from lightning functional tensor(0.4986)
## accuracy from lightning tensor tensor(0.4986)
# Precision appears to be off, compared to sklearn
print('precision from sklearn', precision_score(y, y_pred))
print('precision from lightning functional', pl.metrics.functional.precision(y_pred_tensor, y_tensor, num_classes=2))
print('precision from lightning tensor', pl.metrics.Precision(num_classes=2)(y_pred_tensor, y_tensor))
## precision from sklearn 0.5005544466622311
## precision from lightning functional tensor(0.4906)
## precision from lightning tensor tensor(0.4906)
#Recall appears to be off, compared to sklearn
print('recall from sklearn', recall_score(y, y_pred))
print('recall from lightning functional', pl.metrics.functional.recall(y_pred_tensor, y_tensor, num_classes=2))
print('recall from lightning tensor', pl.metrics.Recall(num_classes=2)(y_pred_tensor, y_tensor))
## recall from sklearn 0.8984872611464968
## recall from lightning functional tensor(0.4967)
## recall from lightning tensor tensor(0.4967)
#F1 appears to be off, compared to sklearn
print('F1 from sklearn', f1_score(y, y_pred))
print('F1 from lightning functional', pl.metrics.functional.f1_score(y_pred_tensor, y_tensor, num_classes=2))
print('F1 from lightning tensor', pl.metrics.F1(num_classes=2)(y_pred_tensor, y_tensor))
## F1 from sklearn 0.6429283577837915
## F1 from lightning functional tensor(0.4007)
## F1 from lightning tensor tensor(0.4007)
Expected behavior
Precision/Recall/F1 results are expected to be consistent with those from sklearn.
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
PyTorch Version : 1.5.1
OS (e.g., Linux): MacOS
How you installed PyTorch (conda, pip, source): Pip
Build command you used (if compiling from source):
Python version: 3.7
CUDA/cuDNN version: None
GPU models and configuration: None
Any other relevant information:
Additional context |
RuntimeError: No `loss` value in the dictionary returned from `model.training_step()` with pytorch lightning | [
"question"
] | β Questions and Help
What is your question?
I am trying to run a custom dataset using PyTorch Lightning but am not able to do so due to the following error.
The input is an array with shape (n, m). Can anyone tell me what I am doing wrong?
Traceback (most recent call last):
  File "TestNet.py", line 285, in <module>
    trainer.fit(model)
  File "/path/to/pytorch/lib64/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1003, in fit
    results = self.single_gpu_train(model)
  File "/path/to/pytorch/lib64/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 186, in single_gpu_train
    results = self.run_pretrain_routine(model)
  File "/path/to/pytorch/lib64/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine
    self.train()
  File "/path/to/pytorch/lib64/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
    self.run_training_epoch()
  File "/path/to/pytorch/lib64/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
    batch_output = self.run_training_batch(batch, batch_idx)
  File "/path/to/pytorch/lib64/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 632, in run_training_batch
    self.hiddens
  File "/path/to/pytorch/lib64/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 783, in optimizer_closure
    training_step_output = self.process_output(training_step_output, train=True)
  File "/path/to/pytorch/lib64/python3.6/site-packages/pytorch_lightning/trainer/logging.py", line 159, in process_output
    'No `loss` value in the dictionary returned from `model.training_step()`.'
RuntimeError: No `loss` value in the dictionary returned from `model.training_step()`.
Exception ignored in: <object repr() failed>
Traceback (most recent call last):
  File "/path/to/pytorch/lib64/python3.6/site-packages/tqdm/std.py", line 1086, in __del__
  File "/path/to/pytorch/lib64/python3.6/site-packages/tqdm/std.py", line 1293, in close
  File "/path/to/pytorch/lib64/python3.6/site-packages/tqdm/std.py", line 1471, in display
  File "/path/to/pytorch/lib64/python3.6/site-packages/tqdm/std.py", line 1089, in __repr__
  File "/path/to/pytorch/lib64/python3.6/site-packages/tqdm/std.py", line 1433, in format_dict
TypeError: 'NoneType' object is not iterable
Code
I have the training_step and the other functions as below
def training_step(self, train_batch, batch_idx):
x, y = train_batch
logits = self.forward(x)
loss = F.l1_loss(logits, y)
tensorboard_logs = {'train_loss': loss}
return {'train_loss': loss, 'log': tensorboard_logs}
def validation_step(self, val_batch, batch_idx):
x, y = val_batch
logits = self.forward(x)
loss = F.l1_loss(logits, y)
return {'val_loss': loss}
def validation_epoch_end(self, outputs):
# called at the end of the validation epoch
# outputs is an array with what you returned in validation_step for each batch
# outputs = [{'loss': batch_0_loss}, {'loss': batch_1_loss}, ..., {'loss': batch_n_loss}]
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def test_step(self, test_batch, batch_nb):
x, y = test_batch
logits = self.forward(x)
loss = F.l1_loss(logits, y)
correct = torch.sum(logits == y.data)
# I want to visualize my predictions vs my actuals so here I'm going to
# add these lines to extract the data for plotting later on
predictions_pred.append(logits)
predictions_actual.append(y.data)
return {'test_loss': loss, 'test_correct': correct, 'logits': logits}
def test_epoch_end(self, outputs):
# called at the end of the test epoch
# outputs is an array with what you returned in test_step for each batch
# outputs = [{'loss': batch_0_loss}, {'loss': batch_1_loss}, ..., {'loss': batch_n_loss}]
avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
logs = {'test_loss': avg_loss}
return {'avg_test_loss': avg_loss, 'log': logs, 'progress_bar': logs }
def train_dataloader(self):
train_dataset = TensorDataset(torch.tensor(new_x_train).float(), torch.tensor(new_y_train).float())
train_loader = DataLoader(dataset = train_dataset, batch_size = 32)
return train_loader
def val_dataloader(self):
val_dataset = TensorDataset(torch.tensor(new_x_val).float(), torch.tensor(new_y_val).float())
val_loader = DataLoader(dataset = val_dataset, batch_size = 32)
return val_loader
def test_dataloader(self):
test_dataset = TensorDataset(torch.tensor(new_x_test).float(), torch.tensor(new_y_test).float())
test_loader = DataLoader(dataset = test_dataset, batch_size = 32)
return test_loader
def configure_optimizers(self):
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
return optimizer
I have defined the custom dataset outside of the PyTorch Lightning module, with new_x_train and new_y_train as the input and labels for the training set. The naming is similar for the validation and test sets.
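Re-reading the error message, it seems the dict returned from training_step must contain a 'loss' key (I return 'train_loss' instead), so presumably the fix is just:
def training_step(self, train_batch, batch_idx):
    x, y = train_batch
    logits = self.forward(x)
    loss = F.l1_loss(logits, y)
    tensorboard_logs = {'train_loss': loss}
    return {'loss': loss, 'log': tensorboard_logs}  # the 'loss' key is required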
What's your environment?
OS: Linux
Packaging pip |
MLFlowLogger throws a JSONDecodeError | [
"bug",
"help wanted"
] | π Bug
To Reproduce
Steps to reproduce the behavior:
Code sample
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import MLFlowLogger
mlflow_logger = MLFlowLogger(experiment_name="test-experiment", tracking_uri="URI_HERE")
t = Trainer(logger=mlflow_logger)
t.logger.experiment_id
throws a JSONDecodeError exception.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/envs/pl_env/lib/python3.7/site-packages/pytorch_lightning/loggers/mlflow.py", line 120, in experiment_id
_ = self.experiment
File "/envs/pl_env/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 421, in experiment
return get_experiment() or DummyExperiment()
File "/envs/pl_env/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 13, in wrapped_fn
return fn(*args, **kwargs)
File "/envs/pl_env/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 420, in get_experiment
return fn(self)
File "/envs/pl_env/lib/python3.7/site-packages/pytorch_lightning/loggers/mlflow.py", line 98, in experiment
expt = self._mlflow_client.get_experiment_by_name(self._experiment_name)
File "/envs/pl_env/lib/python3.7/site-packages/mlflow/tracking/client.py", line 154, in get_experiment_by_name
return self._tracking_client.get_experiment_by_name(name)
File "/envs/pl_env/lib/python3.7/site-packages/mlflow/tracking/_tracking_service/client.py", line 114, in get_experiment_by_name
return self.store.get_experiment_by_name(name)
File "/envs/pl_env/lib/python3.7/site-packages/mlflow/store/tracking/rest_store.py", line 219, in get_experiment_by_name
response_proto = self._call_endpoint(GetExperimentByName, req_body)
File "/envs/pl_env/lib/python3.7/site-packages/mlflow/store/tracking/rest_store.py", line 32, in _call_endpoint
return call_endpoint(self.get_host_creds(), endpoint, method, json_body, response_proto)
File "/envs/pl_env/lib/python3.7/site-packages/mlflow/utils/rest_utils.py", line 145, in call_endpoint
js_dict = json.loads(response.text)
File "/envs/pl_env/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/envs/pl_env/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/envs/pl_env/lib/python3.7/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Expected behavior
Environment
Environment details
- PyTorch Version (e.g., 1.0): 1.6.0
- PyTorch Lightning Version: 0.9.0rc12
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): conda
- Build command you used (if compiling from source):
- Python version: 3.7.7
- CUDA/cuDNN version: Not relevant
- GPU models and configuration: Not relevant
- Any other relevant information: Not relevant
Additional context
How to scheduler.step() after every batch | [
"question"
] | I want to make use of schedulers like torch.optim.lr_scheduler.CyclicLR & torch.optim.lr_scheduler.OneCycleLR
https://pytorch.org/docs/stable/optim.html
These schedulers require scheduler.step() to be called after each batch.
How can I achieve this in PyTorch Lightning?
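A sketch of what I think should work, based on configure_optimizers accepting a scheduler dict with 'interval': 'step' (the optimizer/scheduler hyperparameters below are placeholders):
def configure_optimizers(self):
    optimizer = torch.optim.SGD(self.parameters(), lr=0.1, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=0.1, total_steps=1000
    )
    # 'interval': 'step' -> scheduler.step() is called after every batch
    return [optimizer], [{'scheduler': scheduler, 'interval': 'step'}]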
load_from_checkpoint() doesn't work when a LightningModule inherits from typing.Generic | [
"bug",
"help wanted"
] | π Bug
When a LightningModule with saved hyperparameters inherits from typing.Generic, hyperparameters saved in the checkpoint file are not loaded automatically, causing an error. When load_from_checkpoint() calls inspect.signature() to gather the list of arguments of the LightningModule that inherits from typing.Generic, inspect.signature() returns ['args', 'kwds'] instead of the actual arguments, because typing.Generic implements an empty __new__() (the execution path ends up here: https://github.com/python/cpython/blob/3.8/Lib/inspect.py#L2324). As a result, PL filters out all the saved hyperparameters from the checkpoint, which results in an error when trying to instantiate the LightningModule. I'd assume this would happen when a LightningModule inherits from any class that implements __new__() such as abc.ABC.
To Reproduce
Create a LightningModule that inherits from typing.Generic with some hyperparameters, fit it, then try to load it from a checkpoint.
Code sample
import torch
import torch.nn.functional as F
import pytorch_lightning as pl
from typing import Generic, TypeVar
from torch.utils.data import DataLoader
T = TypeVar("T")
class GenericLitClassifier(Generic[T], pl.LightningModule):
def __init__(self, dim):
super().__init__()
self.l1 = torch.nn.Linear(28 * 28, dim)
self.save_hyperparameters()
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
tensorboard_logs = {"train_loss": loss}
return {"loss": loss, "log": tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
class LitClassifier(GenericLitClassifier[str]):
pass
class Dataset:
def __getitem__(self, idx):
return torch.ones(1, 784), 1
def __len__(self):
return 5
train_loader = DataLoader(Dataset(), batch_size=2)
model = LitClassifier(10)
trainer = pl.Trainer(max_epochs=5)
trainer.fit(model, train_loader)
for path, _ in trainer.checkpoint_callback.best_k_models.items():
lm = LitClassifier.load_from_checkpoint(path)
Expected behavior
Even when a LightningModule inherits from any class that implements __new__() (e.g. typing.Generic), its hyperparameters should be loaded automatically from a checkpoint.
Environment
* CUDA:
- GPU:
- available: False
- version: None
* Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.5.1
- pytorch-lightning: 0.8.5
- tensorboard: 2.3.0
- tqdm: 4.48.2
* System:
- OS: Darwin
- architecture:
- 64bit
-
- processor: i386
- python: 3.7.7
- version: Darwin Kernel Version 18.7.0: Mon Apr 27 20:09:39 PDT 2020; root:xnu-4903.278.35~1/RELEASE_X86_64
Additional context |
Rename class metrics PrecisionRecall to PrecisionRecallCurve and update examples | [
"feature",
"help wanted"
] | π Feature
Rename the class metric PrecisionRecall to PrecisionRecallCurve
Motivation
the class metric PrecisionRecall is based on the functional metric precision_recall_curve and therefore should have PrecisionRecallCurve as its proper name instead of the current name PrecisionRecall.
In addition, there is a need to update the examples for both the functional and class metrics, as in this case pred is a probability and the original example is misleading:
>>> pred = torch.tensor([0, 1, 2, 3])
>>> target = torch.tensor([0, 1, 2, 2])
>>> metric = PrecisionRecall()
>>> prec, recall, thr = metric(pred, target)
>>> prec
tensor([0.3333, 0.0000, 0.0000, 1.0000])
>>> recall
tensor([1., 0., 0., 0.])
>>> thr
tensor([1., 2., 3.])
The below is proposed example for the functional metric:
>>> pred = torch.tensor([0.1, 0.4, 0.35, 0.8])
>>> target = torch.tensor([0, 0, 1, 1])
>>> precision, recall, thresholds = precision_recall_curve(pred, target)
>>> precision
tensor([0.6667, 0.5000, 1.0000, 1.0000])
>>> recall
tensor([1.0000, 0.5000, 0.5000, 0.0000])
>>> thresholds
tensor([0.3500, 0.4000, 0.8000])
The below is proposed example for the class metric:
>>> pred = torch.tensor([0.1, 0.4, 0.35, 0.8])
>>> target = torch.tensor([0, 0, 1, 1])
>>> metric = PrecisionRecallCurve()
>>> prec, recall, thr = metric(pred, target)
>>> prec
tensor([0.6667, 0.5000, 1.0000, 1.0000])
>>> recall
tensor([1.0000, 0.5000, 0.5000, 0.0000])
>>> thr
tensor([0.3500, 0.4000, 0.8000])
Pitch
Please see above
Alternatives
Additional context |
Supporting TPU pods | [
"feature",
"help wanted"
] | More context: https://cloud.google.com/tpu/docs/training-on-tpu-pods |
Callbacks are not being called, gradient norm not getting logged, how to debug these scenarios? | [
"question",
"won't fix"
] | β Questions and Help
I have defined a callback for the on_epoch_end event. Also, I have defined track_grad_norm = 1, which I think is also kind of a callback.
When I used --fast_dev_run or --limit_train_batches and --limit_val_batches, the callback (including the gradient norm plot) got executed properly.
When I run without the above flags, on_epoch_end sometimes does not execute (or silently fails?), and gradient norms are not getting plotted at all.
The checkpoint saving callback always executes successfully.
Now I am unsure how to debug this scenario. There are no error messages logged.
What's your environment?
OS: Linux
Packaging:- pip
Version:- 0.9.0rc15 |
Support a to_torchscript function on the LightningModule | [
"feature",
"help wanted",
"let's do it!",
"design"
] | π Feature
Support a conversion function to PyTorch JIT similar to what's available for ONNX.
Motivation
TorchScript is a way to create serializable and optimizable models from PyTorch code. Any TorchScript program can be saved from a Python process and loaded in a process where there is no Python dependency. TorchScript is a method by which users can serve PyTorch models efficiently.
Pitch
By default, we can use TorchScript to script the LightningModule. Users can override this in their own lightning modules to use tracing, or to script specific nn.Modules inside their LightningModule. This can then be extended to other Lightning utilities like model checkpointing, so we can save TorchScript or ONNX converted models alongside the best model checkpoints to make going to serving even easier to do
def to_torchscript(self):
"""Saves the model as a JIT module.
This can be overridden to support custom TorchScript module export
Example:
>>> class SimpleModel(LightningModule):
... def __init__(self):
... super().__init__()
... self.l1 = torch.nn.Linear(in_features=64, out_features=4)
...
... def forward(self, x):
... return torch.relu(self.l1(x.view(x.size(0), -1)))
>>> with tempfile.NamedTemporaryFile(suffix='.pt', delete=False) as tmpfile:
... model = SimpleModel()
... torch.jit.save(model.to_torchscript(), tmpfile.name)
... os.path.isfile(tmpfile.name)
True
"""
return torch.jit.script(self.eval()) |
Checkpoint/saving a model in the middle of an epoch | [
"question",
"won't fix"
] | β Questions and Help
Hi all,
I have a network that trains slowly on a large dataset (something like 1 week per epoch). In my previous pure-PyTorch version, I saved a checkpoint of the model every hour along the way without doing any sort of additional validation. I just want to make sure I don't lose my progress. Is there a way to do something similar in Lightning? It doesn't necessarily need to be time-based -- I just don't want to wait a week for the model to save. I like to periodically download the latest model from the server and see how it's doing along the way.
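The kind of thing I have in mind is a small time-based callback — a sketch, assuming trainer.save_checkpoint(path) can be called from inside a callback:
import os
import time
from pytorch_lightning.callbacks import Callback

class PeriodicCheckpoint(Callback):
    def __init__(self, every_n_seconds=3600, dirpath="periodic_ckpts"):
        super().__init__()
        self.every_n_seconds = every_n_seconds
        self.dirpath = dirpath
        self.last_save = time.monotonic()
        os.makedirs(dirpath, exist_ok=True)

    def on_batch_end(self, trainer, pl_module):
        # save a checkpoint roughly once an hour, without running validation
        if time.monotonic() - self.last_save > self.every_n_seconds:
            trainer.save_checkpoint(os.path.join(self.dirpath, f"step_{trainer.global_step}.ckpt"))
            self.last_save = time.monotonic()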
Thanks! |
No log is recorded in debug mode, but manual operation works | [
"bug",
"help wanted"
] | π Bug
Just as the title says.
Code sample
import torch
from torch.nn import functional as F
from torch import nn
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning.core.step_result import TrainResult
from pytorch_lightning import loggers
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import MNIST
import os
from torchvision import datasets, transforms
from pytorch_lightning import Trainer
from torch.optim import Adam
class LitMNIST(LightningModule):
def __init__(self):
super().__init__()
self.layer_1 = torch.nn.Linear(28 * 28, 128)
self.layer_2 = torch.nn.Linear(128, 256)
self.layer_3 = torch.nn.Linear(256, 10)
def forward(self, x):
batch_size, channels, width, height = x.size()
x = x.view(batch_size, -1)
x = self.layer_1(x)
x = torch.relu(x)
x = self.layer_2(x)
x = torch.relu(x)
x = self.layer_3(x)
x = torch.log_softmax(x, dim=1)
return x
def prepare_data(self):
# download only (not called on every GPU, just the root GPU per node)
MNIST(os.getcwd(), train=True, download=True)
def train_dataloader(self):
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
mnist_train = MNIST(os.getcwd(), train=True, download=False, transform=transform)
return DataLoader(mnist_train, batch_size=64)
def configure_optimizers(self):
return Adam(self.parameters(), lr=1e-3)
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y) # a breakpoint here
# self.logger.log_metrics({'loss':loss},step=self.global_step)
result = TrainResult(loss)
result.log('train_loss', loss)
# add logging
return result
model = LitMNIST()
trainer = Trainer(gpus=1)
trainer.fit(model)
Expected behavior
tensorboard should show my loss curve
Environment
* CUDA:
- GPU:
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- available: True
- version: 10.2
* Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0
- tensorboard: 2.2.0
- tqdm: 4.48.2
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.7
- version: #168-Ubuntu SMP Wed Jan 16 21:00:45 UTC 2019
Additional context
When I use self.logger.log_metrics({'loss': loss}, step=self.global_step) it works for me.
TrainResult wrong parameters on docs | [
"help wanted",
"docs"
] | While checking the code against the docs I noticed that the docs mention 3 positional parameters on pl.TrainResult class, but the code signature has 4.
Also, the getting-started tutorial instantiates this class with the first positional parameter without explaining what it is used for. This last inconsistency is key because I can't figure out whether the tutorial intends to track the training loss for the early_stop_on param (based on the docs) or the minimize param (based on the source code).
PS: there is also a typo in the tutorial part I referenced last (https://pytorch-lightning.readthedocs.io/en/latest/new-project.html#wrap-loss-in-a-trainresult-evalresult)
IoU metric returns 0 score for classes not present in prediction or target | [
"bug",
"help wanted"
] | π Bug
The iou metric implementation always returns a score of 0 for a class that is not present in either the prediction or the target. This can lead to a deflated score even for perfectly-predicted examples.
Case 1: one example of an affected case is multi-class semantic segmentation of an image that does not contain one of the classes. This can be outlined as follows:
We have 3 possible classes in this dataset (0, 1, and 2, where 0 can optionally be the background class).
Ground-truth target for an image consists only of classes 0 and 2.
Model perfectly predicts the target.
The IoU score should be 1.0 (perfect), but the actual score will be deflated (0.67) since there will be an unnecessary penalty for class 1.
Case 2: another example that is a bit more implementation-dependent to explain:
Target contains only 1's.
Prediction perfectly assigns all 1's.
The IoU score should be 1.0 (perfect), but the actual score will be deflated (0.5) since there will be an unnecessary penalty for class 0.
This only applies when a higher-numbered class is present, and lower-numbered classes are not present.
Case 3: All the above are also affected by any num_classes parameter passed to the functional iou implementation - if num_classes=N is given, then all classes with ids <N that did not appear in the target or prediction will always be assigned 0 IoU score. For example, if N=10, and only classes 0 and 1 are present and correct in target and prediction, then classes 2-9 will all have IoU score 0.0.
Especially in aggregate for a dataset with substantial neutral ground-truth values (i.e., semantic segmentation dataset with lots of images where not all classes are present), this can significantly deflate the (m)IoU score(s). This can also undesirably interact with checkpointing that looks at IoU-based metrics.
To Reproduce / Code sample
Case 1 above:
import torch
from pytorch_lightning.metrics.functional.classification import iou
target = torch.tensor([0, 2])
pred = torch.tensor([0, 2])
iou(pred, target) # Returns tensor(0.6667)
# Same computation, but with 'none' reduction to illustrate what score each class gets:
iou(pred, target, reduction='none') # Returns tensor([1., 0., 1.])
Case 2 above:
target = torch.tensor([1])
pred = torch.tensor([1])
iou(pred, target) # Returns tensor(0.5)
iou(pred, target, reduction='none') # Returns tensor([0., 1.])
Case 3 above:
target = torch.tensor([0, 1])
pred = torch.tensor([0, 1])
iou(pred, target, num_classes=10) # Returns tensor(0.2), or 2/10
iou(pred, target, num_classes=10, reduction='none') # Returns tensor([1., 1., 0., 0., 0., 0., 0., 0., 0., 0.])
Expected behavior
The fallback IoU score to use for classes not in the target and correctly not in the prediction should be configurable. This should probably default to 1.0, which seems more expected behavior to me.
Case 1:
target = torch.tensor([0, 2])
pred = torch.tensor([0, 2])
iou(pred, target) # Should return tensor(1.)
iou(pred, target, reduction='none') # Should return tensor([1., 1., 1.])
Case 2:
target = torch.tensor([1])
pred = torch.tensor([1])
iou(pred, target) # Should return tensor(1.)
iou(pred, target, reduction='none') # Should return tensor([1., 1.])
Case 3:
target = torch.tensor([0, 1])
pred = torch.tensor([0, 1])
iou(pred, target, num_classes=10) # Should return tensor(1.)
iou(pred, target, num_classes=10, reduction='none') # Should return tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
Environment
* CUDA:
- GPU:
- GeForce RTX 2070 with Max-Q Design
- available: True
- version: 10.2
* Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.5.1
- pytorch-lightning: 0.9.0rc18
- tensorboard: 2.2.0
- tqdm: 4.48.0
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.8
- version: #38~1596560323~20.04~7719dbd-Ubuntu SMP Tue Aug 4 19:12:34 UTC 2
Additional context
I have a draft PR open at #3098 that attempts to implement the expected behavior described above, and adds some tests for this. Any feedback welcome!
Somewhat-related issues:
#2736
#2753 |