title | labels | bodyText
---|---|---|
Unable to import trainer | [] | Hey, I am able to import pytorch_lightning but not the Trainer. I am new to Python and have no idea how to deal with it. It throws the following error:
File "", line 1, in
ImportError: cannot import name Trainer
Thanks |
Nitpick: `ptl` may be better as `pl` | [] | (Feel free to ignore.)
All the usage examples use import pytorch_lightning as ptl. Instead of ptl, pl may be better as it doesn't clash with any library I know of, is 2 characters like NumPy's np, and is harder to mistype as plt, which many researchers probably also have imported. Since the library is in its early days, I don't think it would be that dramatic a change, and it is a little easier to read for people like me who often mix up letters like that.
On the other hand, it's pretty clear that it's not matplotlib from context, is yet another change, and is an aesthetic choice at its root, so it may not be worth it. |
Add Gradient Checkpointing | [
"feature",
"help wanted"
] | Super useful to minimize RAM (trading off some speed).
Anyone interested in implementing?
(Instructions are here).
Probably needs to be generalized though. A rough plain-PyTorch sketch is included below for reference. |
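For reference, this is roughly what activation checkpointing looks like in plain PyTorch with torch.utils.checkpoint (a minimal sketch, independent of any Lightning integration; the model size and segment count are illustrative):

import torch
from torch import nn
from torch.utils.checkpoint import checkpoint_sequential

# a deep feed-forward stack; activations inside each checkpointed segment are
# recomputed during backward instead of being stored, trading compute for memory
model = nn.Sequential(*[nn.Sequential(nn.Linear(512, 512), nn.ReLU()) for _ in range(16)])

x = torch.randn(8, 512, requires_grad=True)
out = checkpoint_sequential(model, 4, x)  # split into 4 segments, checkpoint each
out.sum().backward()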
self-balancing architecture | [
"feature",
"help wanted"
] | This is a really awesome feature we're looking to add. Super hard problem also if any ninjas want to try to tackle it :) (you'll be legendary haha).
Problem:
Some models are too big to fit in memory. Thus they can't use any of the distributed training currently available (even in PyTorch).
But... we can break up the model and put parts on each GPU. The trick though is to do it automatically, because doing this manually is a PITA (trust me, I spent weeks dealing with this haha).
Proposed solution:
A user hook in LightningModule where the user returns the modules they want balanced.
class MyModule(LightningModule):
    def __init__(self, ...):
        self.model_a = SomeModel()
        self.layer_1 = Linear(...)
        self.layer_2 = Linear(...)

    def forward(self, x):
        # in each of these module calls, auto place the input x on the gpu of the module
        x = self.model_a(x)
        # in each of these module calls, auto place the input x on the gpu of the module
        x = self.layer_1(x)
        # in each of these module calls, auto place the input x on the gpu of the module
        x = self.layer_2(x)
        return x

    def self_balance(self):
        return [self.model_a, self.layer_1, self.layer_2]
So the above does two cool things:
The user says how they want to break up the model.
In the forward, we auto put the input on that module's GPU.
That's the easy part... the hard part is deciding how to balance: optimizing for speed so you minimize data transfer across GPUs while not blowing up RAM and while using it efficiently. A rough sketch of the input-placement half is included below.
Anyone want to give this a shot? |
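As an illustration of the input-placement half only (a sketch; the round-robin placement and the device list are placeholders, and the real problem of choosing a good split is untouched):

import torch
from torch import nn

def place_on_module_device(module, inputs):
    # forward pre-hook: move incoming tensors onto whatever device this module
    # lives on (assumes the module has parameters)
    device = next(module.parameters()).device
    return tuple(x.to(device) if isinstance(x, torch.Tensor) else x for x in inputs)

def naive_balance(modules, devices):
    # place the user-returned modules round-robin across devices; a real
    # implementation would pick the split to minimize transfer cost and memory use
    for i, module in enumerate(modules):
        module.to(devices[i % len(devices)])
        module.register_forward_pre_hook(place_on_module_device)

# usage sketch ('cpu' stands in for e.g. ['cuda:0', 'cuda:1'] on a multi-GPU box)
layers = [nn.Linear(8, 8), nn.Linear(8, 8), nn.Linear(8, 4)]
naive_balance(layers, ['cpu', 'cpu'])
out = layers[2](layers[1](layers[0](torch.randn(2, 8))))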
Image logging to tensorboard | [] | Hi @williamFalcon ,
Thanks for your nice work.
I am just wondering, is it possible to log image tensors to TensorBoard while training something like a U-Net? (A rough sketch of one way is included below.)
Bests, |
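The test-tube Experiment exposed as self.experiment wraps the usual TensorBoard SummaryWriter methods, so a fragment like the following works (a sketch only; the surrounding LightningModule, the loss and the logging interval are placeholders):

# inside your LightningModule (rest of the class elided)
def training_step(self, batch, batch_nb):
    x, y = batch
    y_hat = self.forward(x)
    if self.global_step % 100 == 0:
        # add_image expects a CHW tensor; log the first predicted mask of the batch
        self.experiment.add_image('predicted_mask', y_hat[0], self.global_step)
    return {'loss': self.loss(y_hat, y)}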
Training accuracy | [] | I was wondering whether there is something like validation_end but for training (e.g., training_end). I want to compute the training accuracy at the end of each epoch. Thanks! |
revert to absolute imports | [] | Recent relative imports are causing issues. In addition, PEP 8 recommends absolute imports for clarity as well.
Let's go back to absolute imports. |
codecov not updating | [] | Awesome improvements to coverage and tests! Thanks @Borda
Wondering what has to be done to update the badge now. I pushed a report from GPU coverage but see no updates.
@Borda |
Trainer.fit() crashes if no checkpoint callback is provided | [
"bug"
] | I hope it's okay that I keep posting issues...
Now that I can circumvent the GitHub installation issues, I pulled in the latest master and let my simple CoolModel demo code run. But now calling trainer.fit() crashes with:
AttributeError Traceback (most recent call last)
in
21 )
22
---> 23 trainer.fit(model)
24 # exp.close()
/opt/miniconda3/envs/dev_pytorch_lightning36/lib/python3.6/site-packages/pytorch_lightning/models/trainer.py in fit(self, model)
494 self.optimizers, self.lr_schedulers = self.optimizers
495
--> 496 self.__run_pretrain_routine(model)
497
498 # return 1 when finished
/opt/miniconda3/envs/dev_pytorch_lightning36/lib/python3.6/site-packages/pytorch_lightning/models/trainer.py in __run_pretrain_routine(self, model)
680
681 # restore training and model before hpc call
--> 682 self.restore_state_if_existing_checkpoint()
683
684 # enable cluster checkpointing
/opt/miniconda3/envs/dev_pytorch_lightning36/lib/python3.6/site-packages/pytorch_lightning/models/trainer.py in restore_state_if_existing_checkpoint(self)
261
262 # find last epoch
--> 263 checkpoints = os.listdir(self.checkpoint_callback.filepath)
264 for name in checkpoints:
265 # ignore hpc ckpts
AttributeError: 'NoneType' object has no attribute 'filepath'
Looking at the code, it appears to happen because I did not provide a checkpoint callback and it tries to access it in restore_state_if_existing_checkpoint. A minimal guard that would avoid this is sketched below. |
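A guard along these lines would avoid the crash (a sketch against the trainer code quoted in the traceback above, not the actual patch; the remainder of the method is elided):

def restore_state_if_existing_checkpoint(self):
    # nothing to restore if no checkpoint callback was configured
    if self.checkpoint_callback is None:
        return
    checkpoints = os.listdir(self.checkpoint_callback.filepath)
    # ... original epoch-scanning logic continues here ...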
Quantisation and Pruning Support | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Nowadays, there is a need to take floating point models that have been trained and deploy them to edge devices. One popular way is to quantise the weights and activations of a neural network to a lower bit width (e.g. 8 bits or even 4 bits). The benefits of this are two-fold:
Some accelerators perform computation at lower bit widths much faster than fp16 or fp32 computation.
The model takes less space, and the savings increase by a substantial factor every time we reduce a bit from the tensor data type.
People have tried other means to compress a model, one of them is pruning.
Pruning basically means that some of the weights of a neural network are zero, hence we seek to introduce sparsity in the network.
The benefits of this are that you potentially do not have to perform the useless multiplications with zeros, hence providing a potential computation saving. Research has shown that even after pruning ~80% of weights (this is fine-grained pruning), the network preserves its accuracy. This is a very surprising result. Coarse-grained pruning (setting all weights of a channel to zero) also works to an extent but results in significantly more accuracy loss. This is an active research area.
Describe the solution you'd like
Generally, quantisation works through the use of a scale value and a zero point value, so each quantised tensor needs to carry the quantised data, its scale and its zero point. The scale and zero point are needed to convert between quantised and dequantised tensors.
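Concretely, the affine scale/zero-point scheme described here maps a float tensor to integers and back roughly like this (a sketch of the standard formulation, not the notebook's exact code; assumes the tensor is not constant):

import torch

def quantise(x, num_bits=8):
    # affine/asymmetric quantisation: map [x.min(), x.max()] onto the integer range
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int((qmin - x.min() / scale).round().clamp(qmin, qmax))
    q = (x / scale + zero_point).round().clamp(qmin, qmax)
    return q, scale, zero_point

def dequantise(q, scale, zero_point):
    # recover an approximation of the original float tensor
    return scale * (q - zero_point)

x = torch.randn(4, 4)
q, scale, zp = quantise(x)
x_hat = dequantise(q, scale, zp)  # equal to x up to quantisation error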
There are 2 ways to quantize a model:
Post training quantisation: Quantises a trained model, no retraining required (works well for down to 8 bits).
Quantisation Aware Training: A way to train a model to induce robustness to quantisation. (It works well for aggressive quantizations schemes (down to 4 bits))
I have successfully implemented the post training quantisation algorithms and was able to get a quantised MNIST model down to 8 bits with next to no accuracy loss. Going down to 4 bits resulted in the model diverging. I am currently working on quant aware training. If you want to see how post train quantisation works, please check out this Google Colab notebook.
Now, let's come to pruning:
Pruning is a very general thing and there could be a lot of ways to perform it. As far as I know, there is generally a "pruning schedule". The researcher decides when to prune and what percentage of weights to prune (aka the degree of sparsity of the layer). They could prune some layers and leave some as is, or slowly increase the sparsity degree of the pruned layers over time during training. There are also different types of pruning: a structured way to prune weights (e.g. take off full channels of a conv kernel or reduce a dimension of a fully connected layer by 1) or an unstructured way to prune (randomly zero out weights).
Lightning could potentially offer a structured and unstructured way to prune to help out researchers. If you would like to see pruning in action, I have tried pruning an MNIST model using the algorithm from the Google paper "To Prune or not to Prune". It is unstructured pruning with 90% sparsity and I was able to get roughly the same accuracy as the un-pruned model. This is the Google Colab link for it.
Describe alternatives you've considered
Right now PyTorch doesn't have quantization and pruning support; however, that is in the works. We could either wait for them to complete their work or we could implement a small library ourselves.
The use case I was trying to target is that Lightning could become a playground where researchers could test out quantisation and pruning on their models and potentially implement novel algorithms through its base support.
Additional context
If any of you want to learn more about quantization, I have listed the resources I learnt from below. They were indeed invaluable.
Jacob Benoit et al’s Quantisation Paper (Google)
Raghuraman’s Paper on Quantisation (Google, he’s now at Facebook)
Distiller Docs on Quantisation
Gemmlowp’s Quantisation Tutorial |
Relax requirement for DistributedSampler with ddp | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I have an application where I'm using a custom BatchSampler to construct batches for the N-Pairs metric learning loss. I need all of the data to be available on all processes when using DistributedDataParallel, so I wouldn't want to use DistributedSampler, even if it was compatible with a custom BatchSampler. Right now, I've hit a wall because lightning throws this exception:
pytorch_lightning.utilities.debugging.MisconfigurationException:
when using multiple gpus and multiple nodes you must pass
a DistributedSampler to DataLoader(sampler).
ie: this:
dataset = myDataset()
dataloader = Dataloader(dataset)
becomes:
dataset = myDataset()
dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
dataloader = Dataloader(dataset, sampler=dist_sampler)
Describe the solution you'd like
Could this exception be turned into a warning? I'm all for letting the user know when they're violating best practices, but throwing an exception removes flexibility for advanced users.
Describe alternatives you've considered
I looked at using the dp backend, but that's not going to work because the n-pairs loss needs the entire batch to compute the loss. Splitting it into chunks breaks things.
If I'm understanding correctly, this is actually another limitation introduced by Lightning. In a usual DataParallel setting, the batch would be merged back together before computing the loss and everything would be fine. |
Is it possible to make `validation_step` and `val_dataloader` no-ops? | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Sometimes I don't have a separate validation split, only a train/test split. I'm trying out pytorch-lightning to prototype / experiment, and trying to see what the best way of doing this is.
I could make the train dataset and then do torch.utils.data.random_split or use torch.utils.data.SubsetRandomSampler to build a validation set as well, but if I don't have enough data (or just don't want to do a separate validation step) this isn't ideal.
Describe the solution you'd like
I'd like to be able to implement only the training_step, train_dataloader, and test_dataloader methods and then have the validation step and validation metrics be omitted (maybe explicit no-ops). Right now, I'm experimenting with having an empty DataLoader for the validation data.
Describe alternatives you've considered
Implement val_dataloader with an empty (dummy) DataLoader
Not sure if this will work yet (if lightning will still call validation_step and validation_end). |
Predict method for test set | [
"feature",
"help wanted"
] | The main Lightning module requires you to define the test_dataloader function. But I'm not able to find any method that actually uses the test loader as input. Is there a model.predict() method to call on the test set? |
When my X is a tuple input, the training_step method does not transfer them to cuda. | [
"bug"
] | It has been solved. |
Old test_tube version | [
"bug"
] | https://github.com/williamFalcon/pytorch-lightning/blob/46e27e38aae21227d0d0f1cab97ec10b8b8766c2/setup.py#L34
test_tube library version should be updated to >=0.6.8. This old version causes Tensorboard logging directory issues. |
Schedulers step before optimizers | [
"priority: 2"
] | According to pytorch docs,
Learning rate scheduling should be applied after optimizer’s update
Which is also apparent in the warning
UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule.See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
And since pytorch>1.1.0 is required, we should either call the scheduler step after __run_tng_batch() is called in __train(), or in __run_tng_batch() itself. The expected ordering is sketched below. |
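For reference, the ordering PyTorch expects since 1.1.0, shown in a generic training loop (a runnable sketch; the model, data and schedule here are dummies, not the trainer's actual loop):

import torch
from torch import nn, optim

model = nn.Linear(10, 1)
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)

for epoch in range(3):
    for _ in range(5):  # dummy batches
        optimizer.zero_grad()
        loss = model(torch.randn(4, 10)).pow(2).mean()
        loss.backward()
        optimizer.step()  # optimizer steps first...
    scheduler.step()      # ...then the scheduler, once per epoch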
Fit Error: validation_step() takes 3 positional arguments but 4 were given | [
"bug"
] | Common bugs:
Hello, I installed pytorch lightning like
pip install git+https://github.com/williamFalcon/pytorch-lightning.git@master --upgrade
(btw, I couldn't install via pip install pytorch_lightning: I got an error related to setuptools even though I have setuptools)
then ran the scripts described in 'How do I use it'
I got an error at section 2 (Fit with a trainer)
Describe the bug
trainer.fit(model)
validation_step() takes 3 positional arguments but 4 were given
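This kind of mismatch is usually a signature issue: in some versions the trainer passes an extra dataloader index to validation_step. A signature like the following accepts both call patterns (a hedged fragment; the argument name dataloader_idx is illustrative and the rest of the module is elided):

# inside the LightningModule
def validation_step(self, batch, batch_nb, dataloader_idx=None):
    x, y = batch
    y_hat = self.forward(x)
    return {'val_loss': F.cross_entropy(y_hat, y)}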
Environment:
OS: Ubuntu 18.04
Python: 3.6.8
Anaconda: 1.7.2
torch: 1.2.0
torchvision: 0.4.0
pytorch_lightning: 0.4.4
I'm sorry that I'm not a fluent English speaker, and I hope we'll figure it out. |
Support of different batch types | [
"feature",
"help wanted"
] | Hi!
As I describe below, I have two problems:
In my research I have to pass three tensors to the model, like
{'tokens': torch.Tensor, 'head': torch.Tensor, 'tail': torch.Tensor}, but when I tried to use a GPU to train it, I got the error RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'.
And it's easy to fix (for a single GPU; I do not know how to work with DDP and DP), just add two conditions.
The second problem is about GPU too. In my dataloader I have a batch as a dict with tokens. I use pretrained embeddings, so I can vectorize my batch only in training_step or validation_step, but I cannot push it into GPU memory, so I get the same error RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'. |
Recursive device conversion of model inputs and targets | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I want to conduct fine-tuning of the faster R-CNN model that is implemented in torchvision by following this tutorial (https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html).
Inputs and targets of the faster R-CNN are a list of tensors and a list of dict of tensors, respectively (See example in https://pytorch.org/docs/stable/torchvision/models.html#faster-r-cnn ).
I found that the above data structures cause a type error because the condition if isinstance(x, torch.Tensor) (see below) cannot catch such cases.
https://github.com/williamFalcon/pytorch-lightning/blob/590282f2b04a9ca2470296cb5f4a4351f46d5496/pytorch_lightning/models/trainer.py#L390-L395
Describe the solution you'd like
I want to move a list (or dict, etc.) of tensors into GPUs recursively.
Describe alternatives you've considered
Temporarily, I'm addressing the above problem by using the following code (a fully recursive variant is sketched after it):
  for i, x in enumerate(data_batch):
      if isinstance(x, torch.Tensor):
          data_batch[i] = x.cuda(gpu_id)
+     elif isinstance(x, list):
+         for j, element in enumerate(x):
+             if isinstance(element, torch.Tensor):
+                 x[j] = element.cuda(gpu_id)
+             elif isinstance(element, dict):
+                 x[j] = {k: v.cuda(gpu_id)
+                         for k, v in element.items()} |
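A fully recursive version of the same idea (a sketch, not the project's eventual implementation) handles arbitrarily nested lists, tuples and dicts:

import torch

def transfer_batch_to_gpu(batch, gpu_id):
    # recursively move every tensor inside nested lists/tuples/dicts onto the given GPU
    if isinstance(batch, torch.Tensor):
        return batch.cuda(gpu_id)
    if isinstance(batch, (list, tuple)):
        return type(batch)(transfer_batch_to_gpu(x, gpu_id) for x in batch)
    if isinstance(batch, dict):
        return {k: transfer_batch_to_gpu(v, gpu_id) for k, v in batch.items()}
    return batch  # leave ints, strings, None, etc. untouched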
Fix appveyor build | [
"bug"
] | Windows support is not a priority. If the badge can be fixed today we'll keep AppVeyor; otherwise we'll drop it from the project.
@Borda |
AttributeError: 'xxx' object has no attribute 'tng_dataloader' | [
"bug"
] | Describe the bug
I modified the template to implement my model and task. In the process, I ran it several times and it worked correctly. However, when I revised the evaluation metrics, the program suddenly started to report errors: AttributeError: 'DSANet' object has no attribute 'tng_dataloader'. However, I've defined the def tng_dataloader(self) method. Although I deleted what I had written and made sure everything was right, the error still exists.
Code
The error report:
Traceback (most recent call last):
File "single_cpu_trainer.py", line 112, in <module>
main(hyperparams)
File "single_cpu_trainer.py", line 83, in main
trainer.fit(model)
File "E:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pytorch_lightning\models\trainer.py", line 567, in fit
self.__run_pretrain_routine(model)
File "E:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pytorch_lightning\models\trainer.py", line 742, in __run_pretrain_routine
self.get_dataloaders(ref_model)
File "E:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\pytorch_lightning\models\trainer.py", line 469, in get_dataloaders
self.tng_dataloader = model.tng_dataloader
File "E:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\torch\nn\modules\module.py", line 591, in __getattr__
type(self).__name__, name))
AttributeError: 'DSANet' object has no attribute 'tng_dataloader'
The code:
class DSANet(LightningModule):
    # ...
    def __dataloader(self, train):
        # init data generators
        set_type = 'train' if train else 'test'
        dataset = MTSFDataset(window=self.hparams.window, horizon=self.hparams.horizon, data_name=self.hparams.data_name, set_type=set_type, data_dir=self.hparams.data_dir)
        # when using multi-node we need to add the datasampler
        train_sampler = None
        batch_size = self.hparams.batch_size
        try:
            if self.on_gpu:
                train_sampler = DistributedSampler(dataset, rank=self.trainer.proc_rank)
                batch_size = batch_size // self.trainer.world_size  # scale batch size
        except Exception as e:
            pass
        should_shuffle = train_sampler is None
        loader = DataLoader(
            dataset=dataset,
            batch_size=batch_size,
            shuffle=should_shuffle,
            sampler=train_sampler
        )
        return loader

    @pl.data_loader
    def tng_dataloader(self):
        print('tng data loader called')
        return self.__dataloader(train=True)

    @pl.data_loader
    def val_dataloader(self):
        print('val data loader called')
        return self.__dataloader(train=False)

    @pl.data_loader
    def test_dataloader(self):
        print('test data loader called')
        return self.__dataloader(train=False)
I can make a new repo and upload my code if it is needed.
Expected behavior
Run correctly.
Screenshots
command line:
Environment:
OS: windows 10.0, 17134
conda version (no venv): conda 4.7.11
PyTorch version: 1.2.0
Lightning version: 0.4.6
Test-tube version: 0.6.9
Additional context
What have you tried: I used print and found that something seemed to be wrong around loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=should_shuffle, sampler=train_sampler). And I found that val_dataloader and test_dataloader are also None when printing info about my class. However, I still don't know why. |
early stopping or checkpointing without val step | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
When the val step isn't defined, neither early stopping nor checkpointing works.
Describe the solution you'd like
Early stopping should look at training_step metrics instead. |
val_dataloader is not optional in distributed_backend='ddp' | [
"bug"
] | Describe the bug
The val_dataloader function is optional, but a line in the code does not check 'if self.val_dataloader is not None', which leads to the following error:
File "/misc/vlgscratch4/FergusGroup/ananya/pyenv/py3.6/lib/python3.6/site-packages/pytorch_lightning/models/trainer.py", line 500, in get_dataloaders
for dataloader in self.val_dataloader):
TypeError: 'NoneType' object is not iterable
File: models/trainer.py
line 500
To Reproduce
Steps to reproduce the behavior:
Do not write the optional function val_dataloader.
Use the following trainer configuration.
trainer = Trainer(
    experiment=exp,
    checkpoint_callback=checkpoint_callback,
    distributed_backend='ddp',
    gpus=args.gpu_id,
    amp_level='O2',
    use_amp=True,
    max_nb_epochs=args.epochs,
    progress_bar=True
)
Expected behavior
The code should skip the "all(isinstance(dataloader, DistributedSampler) for dataloader in self.val_dataloader)" check if self.val_dataloader is None.
Environment:
PyTorch 1.1.0
Cuda 10.0
test-tube 0.6.7.6
pytorch-lightning 0.4.6 |
Error if dataset size = 1 batch. | [
"bug"
] | Describe the bug
If the dataset size is just one batch, then line 372:
self.val_check_batch = int(self.nb_tng_batches * self.val_check_interval)
in trainer.py evaluates to 0 as nb_tng_batches is 1 and val_check_interval=0.98 by default.
When the trainer then gets to the validation step on line 852:
is_val_check_batch = (batch_nb + 1) % self.val_check_batch == 0
The error ZeroDivisionError: integer division or modulo by zero is raised.
To Reproduce
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
import pytorch_lightning as pl

class CoolModel(pl.LightningModule):
    def __init__(self):
        super(CoolModel, self).__init__()
        # not the best model...
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def training_step(self, batch, batch_nb):
        # REQUIRED
        x, y = batch
        y_hat = self.forward(x)
        return {'loss': F.cross_entropy(y_hat, y)}

    def configure_optimizers(self):
        # REQUIRED
        return [torch.optim.Adam(self.parameters(), lr=0.02)]

    @pl.data_loader
    def tng_dataloader(self):
        dataset = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
        subset = torch.utils.data.Subset(dataset, range(32))  # Dataset size = 1 batch.
        return DataLoader(subset, batch_size=32)

from pytorch_lightning import Trainer

model = CoolModel()
trainer = Trainer()
trainer.fit(model)
Possible solutions
A fix could be to do like on line 364 by adding
self.val_check_batch = max(1, self.val_check_batch) after line 372.
Additional context
It may be an error that val_check_interval=0.98 by default.
A default value of 1.0 makes more sense, since it is more common to go through all training data before validating. |
Progress bar code simplification. | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I think some of the code for the progress bar could be simplified.
The main concern is that the code is a little harder to read and maintain, so it is not a huge issue.
We have self.prog_bar, which is the tqdm bar, and self.progress_bar, which is the boolean flag that tells whether the bar should be enabled. Some examples of use in the code:
Line 444:
if self.progress_bar and self.prog_bar is not None:
    self.prog_bar.update(1)
If self.progress_bar is False, then why would self.prog_bar not be None?
Line 649:
# show progbar only on prog_rank 0
self.prog_bar = self.prog_bar and self.node_rank == 0 and gpu_nb == 0
In this line self.prog_bar takes on a boolean value. But that is what self.progress_bar was meant to be.
Describe the solution you'd like
I think a solution would be to adopt the style that self.prog_bar should be None in case it is not enabled. It should then also be renamed to self.progress_bar (Like in #119), such that we have only one variable instead of two.
I am not sure if this may mess with some of the distributed training code though.
Additional context
I think maybe the intention is that you should be able to toggle the progress bar between calls to trainer.fit. If this is a functionality we would like to retain, maybe progress_bar should be an argument to the trainer.fit call instead. So instead of having a trainer object with the self.progress_bar state, it may be more fitting to have it as part of the model fitting, which is an "atomic" function that blocks the program for a while.
The added benefit is that the number of arguments in Trainer.__init__ would be reduced. |
Gradient Accumulation Scheduler | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Often during training, the loss changes really rapidly in the first few epochs, so at that stage we don't really need gradient accumulation. And sometimes we want to schedule changes to the accumulation factor, for example increasing it every 10 epochs.
Describe the solution you'd like
Let's define a scheduler that will control the changing of the accumulation factor:
schedule = {6:2, 11:4}
accumulator = GradientAccumulationScheduler(schedule)
According to this schedule, we will fit our model for the first 5 epochs with factor 1, the next 5 with factor 2, and after that the factor will be 4.
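A minimal sketch of such a scheduler as an epoch hook (the trainer attribute accumulate_grad_batches used here is hypothetical, not a final API):

class GradientAccumulationScheduler:
    def __init__(self, scheduling):
        # scheduling maps 'from this epoch onwards' -> accumulation factor, e.g. {6: 2, 11: 4}
        self.scheduling = scheduling

    def on_epoch_begin(self, epoch, trainer):
        # pick the factor of the latest schedule entry that has already started
        for start_epoch in sorted(self.scheduling):
            if epoch >= start_epoch:
                trainer.accumulate_grad_batches = self.scheduling[start_epoch]

accumulator = GradientAccumulationScheduler({6: 2, 11: 4})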
Describe alternatives you've considered
We can interrupt model fitting and manually change the factor, but that's really not user-friendly.
We can override on_epoch_begin in the pl model and change the factor that way.
Additional context
I have heard about this technique at ML Training from one of the competition winners, so it could be a useful feature. |
transfer_to_batch_gpu returns null when input has primitives | [
"bug"
] | Describe the bug
when passing a batch such as:
batch = [tensor, tensor, [0, 1, 2]]
the list of ints won't be returned correctly
Additional context
The fix should add a return of the item if no condition matches. |
[FEATURE] Logging frequency for TensorBoard | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
TensorBoard is a great tool for visualization but the size of event files rapidly grows bigger when we log images.
To prevent this problem, we need to set logging frequencies.
To the best of my knowledge, in pytorch-lightning we can manually set them like this:
class CoolModule(pl.LightningModule):
    def __init__(self, args):
        self.log_freq = args.log_freq
        self.model = ...

    def training_step(self, data_batch, batch_nb):
        input = data_batch['input']
        output = self.forward(input)
        if self.global_step % self.log_freq == 0:
            self.experiment.add_image('output_image', output, self.global_step)
This is an easy way, however, I think it is clearer to control frequencies by Trainer.
Describe the solution you'd like
Add an option for controlling the frequencies to Trainer.
trainer = Trainer(model, tb_log_freq=foo)
To enable this functionality, training_step should return image tensors like:
def training_step(self, data_batch, batch_nb):
    input = data_batch['input']
    output = self.forward(input)
    loss = ...
    return {'loss': loss, 'prog': {'loss': loss}, 'image': {'output': output}} |
CUDA error: initialization error (and some other problems) when searching hyperparameters on GPUs | [
"bug"
] | Describe the bug
CUDA error: initialization error (setDevice at /pytorch/c10/cuda/impl/CUDAGuardImpl.h:40) when searching hyperparameters on GPUs
To Reproduce
Following the Examples chapter in the documentation, use the files in examples\new_project_templates as an example.
Set hyperparameters in lightning_module_template.py as tunable, e.g. learning_rate.
In single_gpu_node_dp_template.py, we delete main(hyperparams) and, following the documentation, add hyperparams.optimize_parallel_gpu( main, nb_trials=4, nb_workers=1, gpus=[0,1,2,3] ).
If we now run the script python single_gpu_node_dp_template.py on GPU, there is a TypeError: optimize_parallel_gpu() got an unexpected keyword argument 'nb_trials'.
Looking into argparse_hopt.py at HyperOptArgumentParser, we find that the function optimize_parallel_gpu only has max_nb_trials, not nb_trials, as a parameter. Also, there is no nb_workers, and no gpus, only gpu_ids.
Changing the code in single_gpu_node_dp_template.py, it becomes hyperparams.optimize_parallel_gpu( main, max_nb_trials=4, gpu_ids=[0,1,2,3] ). This time, there is TypeError: str expected, not int, to remind me that [0,1,2,3] should be changed to ['0,1,2,3'].
Change it, and this time there is TypeError: main() takes 1 positional argument but 2 were given. The reason is that in the function optimize_parallel_gpu_private in argparse_hopt.py, there is results = train_function(trial_params, gpu_id_set). However, our main function in single_gpu_node_dp_template.py has only one parameter, hparams. Considering that updating argparse_hopt.py is more difficult on the GPU server, I added gpu_id_set as a new parameter to the main function (although it isn't used).
This time, it will first show gpu available: False, used: False, then
terminate called after throwing an instance of 'c10::Error'
terminate called recursively
what(): CUDA error: initialization error (setDevice at /pytorch/c10/cuda/impl/CUDAGuardImpl.h:40)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fcbbb895273 in /home/huangsiteng/.local/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xc3ca (0x7fcbbbac83ca in /home/huangsiteng/.local/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: torch::autograd::Engine::set_device(int) + 0x159 (0x7fcb261e8179 in /home/huangsiteng/.local/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #3: torch::autograd::Engine::thread_init(int) + 0x1a (0x7fcb261e81aa in /home/huangsiteng/.local/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #4: torch::autograd::python::PythonEngine::thread_init(int) + 0x2a (0x7fcbb6ea892a in /home/huangsiteng/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0xa75f (0x7fcbbc4b475f in /home/huangsiteng/.local/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so)
frame #6: <unknown function> + 0x76ba (0x7fcbc04f06ba in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x6d (0x7fcbc022641d in /lib/x86_64-linux-gnu/libc.so.6)
However, there are no problems when using IPython to test whether CUDA is available:
In [1]: import torch
In [2]: torch.cuda.is_available()
Out[2]: True
In [3]: torch.cuda.get_device_name(0)
Out[3]: 'GeForce RTX 2080 Ti'
In [4]: torch.cuda.device_count()
Out[4]: 4
Expected behavior
Run correctly after updating the code.
Environment
CUDA Version 10.0.130
PyTorch version: 1.2.0
Lightning version: 0.4.6
Test-tube version: 0.6.9
Additional context
The documentation on hyperparameter search is too brief and not consistent with the current version.
Maybe adding some explanation of what nb_workers is would be better.
Update the parts that are not consistent with the current version, like main_local, nb_trials and gpus.
Are there any solutions to search over all the hyperparameter combinations set in the model without calculating by hand how many combinations there are and filling that into the code? |
Adding Support for Torchtext iterators | [
"feature",
"help wanted"
] | I recently came across pytorch lightning and I am absolutely loving it so far. Not having to worry about my training cycle and making it super efficient and fast has increased the number of experiments I can pull off, and good results have come out of it.
Right now, I have been using torchtext with its dataset classes and its custom iterators. But when I tried to use the iterator options from torchtext, such as Iterator or BucketIterator, instead of DataLoader, I got the following error:
TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not NoneType
The problem is that instead of getting a Tensor I'm getting a NoneType, and I don't know why that is.
Then I tried to load the Dataset classes from torchtext with the DataLoader itself and I got the next error:
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'torchtext.data.example.Example'>
So, ideally I would really like to have the torchtext iterators supported with pytorch-lightning. But I don't know if there is a way around this issue that I haven't found, still using the torchtext Dataset classes. Could anybody help me out with this? |
empty meta_tags.csv | [
"question"
] | Before asking:
search the issues.
search the docs.
If you still can't find what you need:
What is your question?
For loading a model, I should write down the path to 'meta_tags.csv'.
However, in the 'meta_tags.csv', there is nothing.
Code
Please paste a code snippet if your question requires it!
Same get_started code with checkpoint callbacks
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
import pytorch_lightning as pl

class CoolSystem(pl.LightningModule):
    def __init__(self):
        super(CoolSystem, self).__init__()
        # not the best model...
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def training_step(self, batch, batch_nb):
        # REQUIRED
        x, y = batch
        y_hat = self.forward(x)
        return {'loss': F.cross_entropy(y_hat, y)}

    def validation_step(self, batch, batch_nb):
        # OPTIONAL
        x, y = batch
        y_hat = self.forward(x)
        return {'val_loss': F.cross_entropy(y_hat, y)}

    def validation_end(self, outputs):
        # OPTIONAL
        avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        return {'avg_val_loss': avg_loss}

    def configure_optimizers(self):
        # REQUIRED
        # can return multiple optimizers and learning_rate schedulers
        return torch.optim.Adam(self.parameters(), lr=0.02)

    @pl.data_loader
    def tng_dataloader(self):
        # REQUIRED
        return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)

    @pl.data_loader
    def val_dataloader(self):
        # OPTIONAL
        return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)

    @pl.data_loader
    def test_dataloader(self):
        # OPTIONAL
        return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping
from test_tube import Experiment

model = CoolSystem()

checkpoint_callback = ModelCheckpoint(
    filepath='./model_ckpt/weights6.ckpt',
    save_best_only=True,
    verbose=True,
    monitor='val_loss',
    mode='auto'
)

early_stopping = EarlyStopping(
    monitor='val_loss',
    patience=5,
    verbose=True,
    mode='auto'
)

exp = Experiment(save_dir=os.getcwd(), version=24)
trainer = Trainer(experiment=exp, max_nb_epochs=1000, train_percent_check=0.1, gpus=[1], checkpoint_callback=checkpoint_callback, early_stop_callback=early_stopping)
What have you tried?
I tried this on my model and on this base tutorial code, but both of them failed.
I just started using this API today, but I've faced a lot of difficulties.
Sometimes, I need to import exit from sys.
Sometimes, the loss doesn't decrease. When I restart the Jupyter notebook multiple times, suddenly the loss starts decreasing.
Sometimes, memory can't be allocated during training (at about 81 epochs).
And I can't tag something like 'Losses/train_loss', 'Losses/valid_loss' for TensorBoard by using test-tube... The error said 'loss' is referenced before assignment.
I can tolerate these difficulties because I can still progress my training.
But I don't have any solution for loading models from an empty file.
What's your environment?
conda version (no venv)
no conda
PyTorch version
torch==1.2.0+cu92
torchvision==0.4.0
Lightning version
pytorch-lightning==0.4.7
Test-tube version
test-tube==0.6.9 |
Expand on model loading docs, make tags_csv optional | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
The UX for loading a model isn't very clear atm.
Describe the solution you'd like
Clarify docs and make tags_csv optional for models without hparams object.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Brought up by an issue |
Pass output from all val dataloaders to `validation_end` | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I'm working on a retrieval problem. To compute validation scores, I'd like to compute feature vectors for a query set and a catalog set, then measure information retrieval metrics (precision, recall, etc) of the query set against the catalog set.
To do this, I'd like to return the two datasets from val_dataloader (already possible), then use validation_step to accumulate all the feature vectors for each dataset (already possible). Once I have all the feature vectors, I need some way to bring the vectors from the two datasets together so that I can compute retrieval metrics. Right now, validation_end processes the outputs of multiple validation dataloaders separately.
Describe the solution you'd like
Change the behavior of validation_end in cases where there are multiple validation dataloaders. I propose that it only be called once, but that it be passed a list of outputs dicts, one from each dataloader. Using it would look something like this:
def validation_end(self, outputs):
    query_output, catalog_output = outputs
    query_vectors = torch.cat([o['vectors'] for o in query_output])
    query_labels = torch.cat([o['labels'] for o in query_output])
    catalog_vectors = torch.cat([o['vectors'] for o in catalog_output])
    catalog_labels = torch.cat([o['labels'] for o in catalog_output])
    return retrieval_metrics(
        query_vectors,
        query_labels,
        catalog_vectors,
        catalog_labels
    )
Describe alternatives you've considered
Breaking outside of lightning entirely to do validation
Concatenating query and catalog datasets, and somehow adding a flag to indicate which dataset a given example came from |
Add an easy way to manipulate progressbar metrics | [
"feature",
"help wanted"
] | Until now, the progress bar only shows a limited number of parameters. It'd be very helpful if we could directly manipulate internal tqdm metrics. |
issue about version number | [
"bug"
] | Hey, it seems that the release/tag numbering is messed up. The latest one should be 0.4.8, but you previously used 0.11, 0.111, 0.112, 0.113, 0.12, 0.121, 0.122, 0.13, 0.13.0, 0.21, and since 0.122 > 0.4.8 under version ordering, 0.122 is regarded as the latest release/tag. Consider bumping the version to 1.0? |
Logging of GPU memory utilization can significantly slow down training | [
"bug"
] | When training using GPUs pytorch_lightning automatically logs the GPU memory utilization during training. This is a useful feature, but can severely impact performance dependent on the speed of the nvidia-smi call.
On our particular cluster (a University-scale HPC cluster based on IBM LSF), this leads to a performance decrease of almost 10-fold when training on GPU vs. CPU.
Describe the bug
Logging of GPU memory can have a severe impact on training performance.
To Reproduce
Remove gpu memory logging by commenting out the lines 937 to 939 in pytorch_lightning/models/trainer.py
see here
Expected behavior
Logging of GPU memory utilization should not impede performance (by running the nvidia-smi call in the background) or it should at least be possible to deactivate it in case performance issues arise.
Desktop (please complete the following information):
OS: Ubuntu Linux 16.04, NVIDIA Geforce GTX 1080 |
Should the dependency, test_tube, be explicity stated in the readme at the top? | [
"question"
] | Right after the first example using CoolSystem, the readme starts using test_tube to introduce tensorboard logging. After looking at the setup.py file, it becomes clear that test_tube is a hard dependency.
To reduce cognitive load on readers, it may be good to link to https://github.com/williamFalcon/test-tube right before the code that imports test_tube. |
AttributeError: 'tqdm' object has no attribute 'reset' | [
"bug"
] | I was just trying to train on ubuntu (over ssh) and got this error:
Traceback (most recent call last):
File "vggish_trainer.py", line 88, in <module>
main(hyperparams)
File "vggish_trainer.py", line 58, in main
trainer.fit(model)
File "/home/hlt/.conda/lib/python3.7/site-packages/pytorch_lightning/models/trainer.py", line 554, in fit
self.__single_gpu_train(model)
File "/home/hlt/.conda/lib/python3.7/site-packages/pytorch_lightning/models/trainer.py", line 602, in __single_gpu_train
self.__run_pretrain_routine(model)
File "/home/hlt/.conda/lib/python3.7/site-packages/pytorch_lightning/models/trainer.py", line 795, in __run_pretrain_routine
self.__train()
File "/home/hlt/.conda/lib/python3.7/site-packages/pytorch_lightning/models/trainer.py", line 811, in __train
self.progress_bar.reset(self.total_batches)
AttributeError: 'tqdm' object has no attribute 'reset'
The code worked fine on CPU on MacOS, any ideas about what might be going wrong here? |
Calculating gradient during validation | [
"feature",
"help wanted"
] | Hello,
I am using pytorch-lightning in a physics-related project, where I need to calculate gradients during validation. At the moment, gradient calculation is disabled in the validate function of the trainer. Of course, commenting out the line solves the problem; however, it took some time to figure out where everything went wrong.
Solution:
I would suggest adding another parameter to trainer (e.g. enable_grad_during_validation) to allow the enabling of gradient calculation during validation. Of course, this parameter should be set to False by default so that nothing changes for users that do not require the feature. The changes that are required would be to add the parameter and change the line where the gradient is disabled.
# disable gradients to save memory
torch.set_grad_enabled(enable_grad_during_validation)
This might be a niche issue, therefore if no one else needs this change, I would suggest adding an extra note in the documentation of the validation loop, that informs users that gradient calculation is disabled during validation.
Ps: thank you for the great library |
ERROR: No matching distribution found for pytorch-lightning | [
"bug"
] | pip install pytorch-lightning
ERROR: No matching distribution found for pytorch-lightning |
Integration with knockknock | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Training usually takes time. It would be more efficient to let it run, do something else, and then come back when it's done.
Describe the solution you'd like
Preferably, we would like to have a process that automatically informs us when training finishes. Fortunately, there is a package called Knockknock that we can integrate into pytorch-lightning, and it will take care of sending notifications to us through our favourite channels, e.g. Slack.
If you're interested, I can take a look at how to implement the feature. |
Tensorboard logging in multi-gpu setting not working properly? | [
"question"
] | Hi there :)
I have a question (that may be an issue with the code or just my ignorance).
(b.t.w. I am using the latest version, pytorch-lightning==0.4.9)
If I set the trainer
trainer = Trainer(experiment=exp, gpus=[0])
I can see the corresponding logging (scalars and hyperparameters) in Tensorboard.
If I change it to distributed training (keeping the rest of the code unchanged) :
trainer = Trainer(experiment=exp, gpus=[0,1], distributed_backend='ddp')
The TensorBoard logging stops working, at least for scalars and hyperparameters; I see nothing except the experiment name.
In both cases 'exp' is a Experiment instantiated like this:
exp = Experiment(save_dir=/SOME/PATH, name=NAME, version=VERSION, description=DESCRIPTION)
This picture illustrates the problem.
In the picture, the red arrows point to the "distributed" experiment, with nothing drawn in the chart. The other two (the ones that appear in the chart) are the very same, except that they were run on a single GPU.
Am I missing something or do I need to add extra configuration to make the logging work in multi-gpu with the ddp setting? Or is it a bug?
Thank you! :) |
Tqdm bar has unexpected behavior when val_check_interval < 1 | [
"bug"
] | Describe the bug
If we set the trainer to have val_check_interval < 1, then here's what happens:
From batch_nb = 0 up to batch_nb = val_check_batch + 1: the tqdm bar correctly shows the progress on the training set.
Onwards: the validation step begins. Every batch gathered from the validation set is counted and added to the progress bar. So the progress bar will continue from val_check_batch + 1 up until val_check_batch + 1 + len(val_dataloader).
This is very counter-intuitive and I suspect we can classify this as a bug! A more expected behavior would be to have a second tqdm bar spawning while the original training tqdm bar remains visible.
To Reproduce
Steps to reproduce the behavior:
Use show_progress_bar=True
Set val_check_interval to a value strictly below 1. The closer to 0, the easier it will be to observe the bug.
Thanks! If I have a bit of time over the weekend, I'd be happy to try and make a PR. |
set MASTER_PORT automatically | [
"feature",
"help wanted"
] | When using DDP, the user must set a manual MASTER_PORT. However, we should set one automatically.
The problem is that we can't choose a random one, as each process might generate a different solution. Instead, I propose to use the SLURM job ID as the seed for k possible ports. Then every process can deterministically generate the same sequence of ports. With that list, the root node can init the NCCL connection, making its way down the list until a port is open.
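A sketch of the deterministic derivation (assuming the SLURM_JOB_ID environment variable is set; the port range and candidate count are illustrative only):

import os
import random

def candidate_master_ports(k=10, low=10000, high=19999):
    # every process seeds the RNG with the same SLURM job id, so they all derive
    # the identical candidate list without communicating
    rng = random.Random(int(os.environ['SLURM_JOB_ID']))
    return [rng.randint(low, high) for _ in range(k)]

# the root node would then try ports in order until one is free, e.g.
# os.environ['MASTER_PORT'] = str(candidate_master_ports()[0])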
However, if we choose the length of the job id to use correctly we may potentially not run into collisions and won't need to iterate a list of potential ports. |
In Multi GPU DDP, pytorch-lightning creates several tfevents files | [
"bug"
] | Describe the bug
Right now pytorch-lightning seems to create several tfevents files in multi-GPU DDP mode:
e.g. for 2 GPUs:
-rw-rw-r--. 1 sam sam 40 Sep 19 08:11 events.out.tfevents.1568880714.google2-compute82.3156.0
-rw-rw-r--. 1 sam sam 165K Sep 19 08:22 events.out.tfevents.1568880716.google2-compute82.3186.0
-rw-rw-r--. 1 sam sam 40 Sep 19 08:11 events.out.tfevents.1568880718.google2-compute82.3199.0
I suppose the first one is created by the main process and the next 2 are created by the 2 DDP processes (one per GPU). Unfortunately, the actual events are not logged in the last created one, and that confuses tensorboard, cf tensorflow/tensorboard#1011
I have to restart tensorboard if I want to see the new data.
To Reproduce
Launch any training on multi GPU DDP.
Expected behavior
Only one tfevent file is created, from the master GPU. |
Aggregate output of validation_end across all ddp processes | [
"feature",
"help wanted"
] | In validation_end I want to gather all outputs from validation_step, then use another file to calculate scores. But when using multi-GPU DDP mode, I find it launches multiple processes, each calling validation_end to calculate scores. How can I make it be called only once? |
Add documentation for EarlyStop | [] | I couldn't find any mention of EarlyStop Callback in the documentation.
Maybe it's needed? |
Is 'update_tng_log_metrics' working? | [
"question"
] | Hi, all!
Does Lightning call the update_tng_log_metrics function right before it logs metrics for the batch, as described in the docs?
I wanted loss (in training steps) to be displayed as loss/train in tensorboard.
To do so, I planned to use update_tng_log_metrics and copy logs['loss'] to logs['loss/train'], and there I noticed that the function may not be called.
Code
Here is my code.
class myLightningModule(pl.LightningModule):
    # REQUIRED functions
    def training_step(self, batch, batch_nb):
        inputs, targets = batch
        outputs = self.forward(inputs)
        return {'loss': self.loss_func(outputs, targets)}

    def update_tng_log_metrics(self, logs):
        logs['loss/train'] = logs['loss']
        return logs
What have you tried?
To look into logs, I tried dumping logs inside the update_tng_log_metrics function by printing them. I didn't get any output, not even a None. |
UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars | [
"bug"
] | Describe the bug
Not sure if this is a bug. It shows me this warning at the beginning of training:
/home/adrian/research/envs/research/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
To Reproduce
The minimal MNIST example from the docs has this problem when trained on multiple GPUs. Attached is the Python script:
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
import pytorch_lightning as pl

class CoolModel(pl.LightningModule):
    def __init__(self):
        super(CoolModel, self).__init__()
        # not the best model...
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def training_step(self, batch, batch_nb):
        # REQUIRED
        x, y = batch
        y_hat = self.forward(x)
        return {'loss': F.cross_entropy(y_hat, y)}

    def validation_step(self, batch, batch_nb):
        # OPTIONAL
        x, y = batch
        y_hat = self.forward(x)
        return {'val_loss': F.cross_entropy(y_hat, y)}

    def validation_end(self, outputs):
        # OPTIONAL
        avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        return {'avg_val_loss': avg_loss}

    def test_step(self, batch, batch_nb):
        # OPTIONAL
        x, y = batch
        y_hat = self.forward(x)
        return {'test_loss': F.cross_entropy(y_hat, y)}

    def test_end(self, outputs):
        # OPTIONAL
        avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
        return {'avg_test_loss': avg_loss}

    def configure_optimizers(self):
        # REQUIRED
        return [torch.optim.Adam(self.parameters(), lr=0.02)]

    @pl.data_loader
    def tng_dataloader(self):
        return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)

    @pl.data_loader
    def val_dataloader(self):
        # OPTIONAL
        # can also return a list of val dataloaders
        return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)

    @pl.data_loader
    def test_dataloader(self):
        # OPTIONAL
        # can also return a list of test dataloaders
        return DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), batch_size=32)

if __name__ == '__main__':
    from pytorch_lightning import Trainer

    trainer = Trainer(
        gpus=[2, 3],
        distributed_backend='dp',
    )
    model = CoolModel()
    trainer.fit(model)
Expected behavior
There are no scalars involved in the forward pass, so the warning does not make sense and should not be shown.
Desktop (please complete the following information):
OS: Ubuntu 18.04
Version: the latest pip install |
progressbar dict can't be used with DP/DDP | [] | It took me a very long time to figure this one out, but I found that the "prog" dict in the output of training_step cannot be used together with multi-GPU training, otherwise an error occurs (attached).
pl_zip_error.txt
I suggest adding a note to the docs or warning the user directly through PL. |
Profiling | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I realize my testing loop is not efficient at all. I need to understand where the bottleneck is and how I can make it faster.
Describe the solution you'd like
An option similar to fast_dev_run where the training OR validation OR testing loop is profiled, in order to see where the bottleneck is. (A quick cProfile-based stopgap is sketched below.)
Describe alternatives you've considered
No alternative so far. |
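Until such an option exists, a quick way to get similar information is the standard library's cProfile (a sketch; run_test_loop is a placeholder for whatever call drives the loop being investigated, e.g. trainer.test(model) or iterating the test dataloader):

import cProfile
import pstats

def run_test_loop():
    # placeholder: put the slow loop here
    pass

cProfile.run('run_test_loop()', 'test_loop.prof')
stats = pstats.Stats('test_loop.prof')
stats.sort_stats('cumulative').print_stats(20)  # top 20 functions by cumulative time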
Add tensorboard logger | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
Describe the solution you'd like
A clear and concise description of what you want to happen.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here. |
'LightningTemplateModel' object has no attribute '_lazy_train_dataloader' | [
"bug"
] | Describe the bug
Tried to run the file single_gpu_node_ddp_template.py to test the code, but got an error:
To Reproduce
Steps to reproduce the behavior:
Download folder pytorch-lightning/examples/new_project_templates/
run the file: python single_gpu_node_ddp_template.py
See error
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
OS: ubuntu18
PyTorch 1.2, cudatoolkit 9.2
pytorch_lightning 0.5 |
How to use EarlyStopping and ModelCheckpoint | [
"question"
] | The logs passed into the on_epoch_end method of both ModelCheckpoint and EarlyStopping seem to have the loss field set to a string, which leads to an exception when comparing with the last best result. See the following code in trainer.py:
407    @property
408    def __training_tqdm_dict(self):
409        tqdm_dict = {
410            'loss': '{0:.3f}'.format(self.avg_loss),
411            'epoch': '{}'.format(self.current_epoch),
412            'batch_nb': '{}'.format(self.batch_nb),
413        }
How should I use ModelCheckpoint and EarlyStopping? Is adding my own field to the progress dict when returning from training_step the way to go? And what is the purpose of converting to a string?
What's your environment?
conda version (no venv) 4.7.12
PyTorch version 1.2.0
Lightning version 0.5.0
Test-tube version 0.7.1 |
JIT support | [
"feature",
"help wanted"
] | Initial issue description below
JIT support requires several changes in the Trainer and LightningModule:
No use of python properties like the current dataloader implementation
Possible solution: Use getters like implemented in #275
Other possibility: Handle dataloading completely in trainer. The user is able to modify the dataloaders e.g. every epoch using hooks/callbacks
The trainer cannot set PLModule's class members afterwards like self.model.trainer = self
This is because after converting my_pl_module = torch.jit.script(my_pl_module), the module has the class ScriptModule. Adding members only adds them to the ScriptModule, not to the underlying LightningModule.
A solution could be: implement setters in LightningModule. These methods will be transferred to the ScriptModule.
Saving and restoring might need some changes, too. One could conditionally check the class of the provided module in the trainer for use of torch.jit.save in trainer_io
JIT is currently not compatible with distributed training (see pytorch issue #15421)
Is your feature request related to a problem? Please describe.
The current implementation of pl.LightningModule does not support pytorch's JIT. This is due to the use of python properties for the dataloaders, which is currently not supported in JIT (see here).
example trace:
$ python train.py
VISIBLE GPUS:
Traceback (most recent call last):
File "train.py", line 69, in <module>
train(config, data_dir)
File "train.py", line 37, in train
trainer.fit(pl_module)
File "/home/schroeter/.virtualenvs/pytorch-1.2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 694, in fit
self.__run_pretrain_routine(model)
File "/home/schroeter/.virtualenvs/pytorch-1.2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 879, in __run_pretrain_routine
self.get_dataloaders(ref_model)
File "/home/schroeter/.virtualenvs/pytorch-1.2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 574, in get_dataloaders
self.train_dataloader = model.train_dataloader
File "/home/schroeter/.virtualenvs/pytorch-1.2/lib/python3.7/site-packages/torch/jit/__init__.py", line 1563, in __getattr__
return super(ScriptModule, self).__getattr__(attr)
File "/home/schroeter/.virtualenvs/pytorch-1.2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 589, in __getattr__
type(self).__name__, name))
AttributeError: 'ScriptModule' object has no attribute 'train_dataloader'
where pl_module has:
class PlModule(pl.LightningModule):
    # ....
    @pl.data_loader
    def train_dataloader(self):
        return self.__dataloader("train")
Describe the solution you'd like
torch.jit.script(pl_module)
Describe alternatives you've considered
A workaround might be defining a separate Module handling all nn.Module stuff and transforming only this part into a jit script module.
A solution might be to define explicit getters like get_train_dataloader(self) instead of using properties. |
New logger abstraction breaks self.experiment | [
"bug"
] | with the new abstraction we can't do:
self.experiment.log_histogram(...)
instead we have to do:
self.experiment.experiment.log_histogram(...)
@neggert any suggestions on how to fix? Ideally the user can just access the self.experiment object directly and operate on it. |
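One possible direction for the issue above, sketched as a small mixin (the class name is hypothetical): forward unknown attribute lookups from the logger to the wrapped experiment, so user code such as self.experiment.log_histogram(...) keeps working.
class ExperimentForwardingMixin:
    # hedged sketch: mix into a logger that already exposes `.experiment`
    # (the issue shows it does); __getattr__ is only called when normal lookup
    # fails, so the logger's own methods still take precedence
    def __getattr__(self, name):
        return getattr(self.experiment, name)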
Modify param printing to show top level modules only | [
"feature",
"help wanted"
] | Add a model summary option that just shows the number of parameters for the top level modules.
Set pd.options.display.float_format = '{:,}'.format so it uses comma separators.
Also summarize the number of parameters like 1,000,234 to 1M.
use 1k, 1M, 1B, 1T, etc...
Suggested by @adefazio
Specifically, change:
Trainer(print_weights_summary=True)
To:
# default
Trainer(weights_summary='full')
# second option
Trainer(weights_summary='top') |
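A rough sketch of the suggested abbreviation for the issue above (helper name and cut-offs are assumptions):
def humanize_count(n):
    # 1_000_234 -> '1.0 M', 12_345 -> '12.3 k', 42 -> '42'
    for divisor, suffix in [(1e12, 'T'), (1e9, 'B'), (1e6, 'M'), (1e3, 'k')]:
        if n >= divisor:
            return '{:.1f} {}'.format(n / divisor, suffix)
    return str(n)

print(humanize_count(1_000_234))  # 1.0 M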
Decouple training_step return from logging | [
"feature",
"help wanted"
] | We should adopt a standard for all validation and training functions.
Maybe something like:
return {'loss': loss,
        'log': {},
        'prog': {}
        }
Where log goes to self.experiment.
prog goes to the progress bar?
@neggert @adefazio |
HyperparameterArgumentParser should not print bound functions | [
"feature"
] | When printing a HyperparameterArgumentParser, do not print bound functions |
Default logger and checkpoint | [
"feature",
"help wanted"
] | We should make the checkpoint and logger default without the user having to pass those in. However, they can still pass them in to modify the default behavior even further.
@adefazio |
The saved epoch number seems to be wrong? | [] | The saved epoch number seems to be wrong. I don't know whether it is my fault.
Specifically, I first train my model for 2 epochs, with the following code:
exp = Experiment(save_dir='.')
trainer = Trainer(experiment=exp, max_nb_epochs=2, gpus=[0], checkpoint_callback=checkpoint_callback)
trainer.fit(model)
During the first epoch, epoch=0. After the training of the first epoch, it shows:
Epoch 00001: avg_val_loss improved from inf to 1.42368, saving model to checkpoints//_ckpt_epoch_1.ckpt
During the second epoch, epoch=1. After the training of the second epoch, it shows:
Epoch 00002: avg_val_loss improved from 1.42368 to 1.23873, saving model to checkpoints//_ckpt_epoch_2.ckpt
At this moment, I save exp with the code:
exp.save()
and it gives:
100%|████| 15000/15000 [04:31<00:00, 454.06it/s, avg_val_loss=1.24, batch_nb=12499, epoch=1, gpu=0, loss=1.283, v_nb=0]
And then, I want to continue my training with the following code:
new_exp = Experiment(save_dir='.', version=0)
new_trainer = Trainer(experiment=new_exp, max_nb_epochs=3, gpus=[0], checkpoint_callback=checkpoint_callback)
new_model = Net()
new_trainer.fit(new_model)
It starts with epoch=1 instead of epoch=2. Therefore, to reach new_trainer's max_nb_epochs=3, another 2 epochs will be run.
Obviously, the epoch number in the saved exp is wrong. After the first two epochs, the saved epoch number should be 2. But it saved epoch=1, which causes the continued training to start from epoch=1.
It really confused me. Looking forward to your help. Thanks. |
Pass experiment tags to MLFlowLogger | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
When using MLFlowLogger, I'm unable to easily set experiment tags, like username or run name.
Describe the solution you'd like
Add parameter tags=None which is passed to MLFlowLogger. Tags will be passed to create_run method
Describe alternatives you've considered
Manually hack logger, get experiment from it and set tag there
If you don't see any drawbacks, I can make a PR |
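A hedged sketch of what the change requested above could look like (the tags parameter is the proposal, not the current signature; mlflow's MlflowClient.create_run does accept a tags dict):
from mlflow.tracking import MlflowClient

class TaggedMLFlowLogger:
    # stripped-down sketch of the proposed constructor change
    def __init__(self, experiment_name, tracking_uri=None, tags=None):
        self._client = MlflowClient(tracking_uri)
        self._experiment_name = experiment_name
        self._tags = tags

    def _start_run(self):
        experiment = self._client.get_experiment_by_name(self._experiment_name)
        # pass the user-supplied tags straight through to mlflow
        run = self._client.create_run(experiment.experiment_id, tags=self._tags)
        return run.info.run_id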
Add TPU support | [
"feature",
"help wanted"
] | Although still experimental, we should add the according support for it. Likely as a distributed backend flag?
I'm not familiar with the TPU APIs but I assume there is a DP and DDP version? So, something like
Trainer(distributed_backend='tpu_dp')
Trainer(distributed_backend='tpu_ddp') |
Add DP DDP benchmarks | [
"feature",
"help wanted",
"good first issue",
"won't fix",
"example"
] | Benchmark DP, and all the DDP implementations in Lightning on 1 epoch through cifar-10 across 4 GPUs |
Val_loss not available | [
"bug"
] | Describe the bug
When I train my network, which has validation steps defined similar to the doc example
def validation_step(self, batch, batch_nb):
x = torch.squeeze(batch['x'], dim=0).float()
y = torch.squeeze(batch['y'], dim=0).long()
output = self.forward(x)
return {'batch_val_loss': self.loss(output, y),
'batch_val_acc': accuracy(output, y)}
def validation_end(self, outputs):
avg_loss = torch.stack([x['batch_val_loss'] for x in outputs]).mean()
avg_acc = torch.stack([x['batch_val_acc'] for x in outputs]).mean()
return {'val_loss': avg_loss, 'val_acc': avg_acc}
with my custom EarlyStopping callback
early_stop_callback = EarlyStopping(monitor='val_loss', patience=5)
tt_logger = TestTubeLogger(
save_dir=log_dir,
name="default",
debug=False,
create_git_tag=False
)
trainer = Trainer(logger=tt_logger,
row_log_interval=10,
checkpoint_callback=checkpoint_callback,
early_stop_callback=early_stop_callback,
gradient_clip_val=0.5,
gpus=gpus,
check_val_every_n_epoch=1,
max_nb_epochs=99999,
train_percent_check=train_frac,
log_save_interval=100,
)
the program cannot see my validation metrics:
Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,epoch,batch_nb,v_nb <class 'RuntimeWarning'>
In a previous release running on Windows (now I am on macOS), this behaviour was not happening. But in the previous version, TestTubeLogger was not present
Desktop (please complete the following information):
OS: macOS
Version: latest |
Add IterableDataset support | [
"feature",
"help wanted"
] | Looks like currently there is no way to use an IterableDataset instance for training. Trying to do so results in a crash with this exception:
Traceback (most recent call last):
File "main.py", line 12, in <module>
trainer.fit(model)
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 677, in fit
self.__single_gpu_train(model)
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 725, in __single_gpu_train
self.__run_pretrain_routine(model)
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 882, in __run_pretrain_routine
self.__layout_bookeeping()
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 436, in __layout_bookeeping
self.nb_training_batches = len(self.train_dataloader)
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 297, in __len__
return len(self._index_sampler) # with iterable-style dataset, this will error
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 212, in __len__
return (len(self.sampler) + self.batch_size - 1) // self.batch_size
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 57, in __len__
raise TypeError('Cannot determine the DataLoader length of a IterableDataset')
TypeError: Cannot determine the DataLoader length of a IterableDataset |
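A hedged sketch of the kind of guard the trainer could use instead of calling len() unconditionally (the helper name is hypothetical):
from torch.utils.data import IterableDataset

def infer_nb_batches(dataloader, percent_check=1.0):
    # iterable-style datasets have no length; callers then have to rely on
    # step-based limits (e.g. a validation interval given as a number of batches)
    if isinstance(dataloader.dataset, IterableDataset):
        return float('inf')
    return int(len(dataloader) * percent_check)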
learning rate warmup | [
"question"
What is the most appropriate way to add learning rate warmup?
I am thinking about using the hooks, e.g. def on_batch_end(self):, but I am not sure where this logic should go. Thank you. |
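One option for the question above (a sketch only; the exact hook signature has changed between versions) is to override optimizer_step in the LightningModule and scale the learning rate for the first steps:
def optimizer_step(self, epoch_nb, batch_nb, optimizer, optimizer_i, second_order_closure=None):
    # linear warmup over the first 500 optimizer steps (500 and self.hparams.lr are assumptions)
    if self.trainer.global_step < 500:
        lr_scale = min(1.0, float(self.trainer.global_step + 1) / 500.0)
        for pg in optimizer.param_groups:
            pg['lr'] = lr_scale * self.hparams.lr
    optimizer.step()
    optimizer.zero_grad()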
install error 0.122 | [
"bug"
] | pip install https://github.com/williamFalcon/pytorch-lightning/archive/0.122.zip --upgrade
ERROR: Could not find a version that satisfies the requirement test-tube>=0.652 |
Validation loss in progress bar printed line by line | [
"bug",
"feature",
"help wanted"
] | Common bugs:
checked.
Describe the bug
When adding a "progress_bar" key to the validation_end output, the progress bar doesn't behave as expected and prints one line per iteration, eg:
80%|8| 3014/3750 [00:23<00:01, 516.63it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
82%|8| 3066/3750 [00:23<00:01, 517.40it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
83%|8| 3118/3750 [00:23<00:01, 516.65it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_ 85%|8| 3170/3750 [00:23<00:01, 517.42it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
86%|8| 3222/3750 [00:23<00:01, 517.59it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
87%|8| 3274/3750 [00:23<00:00, 518.00it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
89%|8| 3326/3750 [00:23<00:00, 518.16it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
90%|9| 3378/3750 [00:23<00:00, 518.45it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
91%|9| 3430/3750 [00:23<00:00, 518.36it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
93%|9| 3482/3750 [00:23<00:00, 518.02it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
94%|9| 3534/3750 [00:24<00:00, 517.26it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
96%|9| 3586/3750 [00:24<00:00, 517.68it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
97%|9| 3638/3750 [00:24<00:00, 518.08it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
98%|9| 3690/3750 [00:24<00:00, 518.18it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
100%|9| 3742/3750 [00:24<00:00, 518.23it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
100%|#| 3750/3750 [00:24<00:00, 518.23it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_loss=1.16]
save callback...
100%|#| 3750/3750 [00:24<00:00, 152.16it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_loss=1.16]
To Reproduce
Steps to reproduce the behavior:
Take MNIST script minimal example (https://williamfalcon.github.io/pytorch-lightning/LightningModule/RequiredTrainerInterface/)
with some code to run it
if __name__ == "__main__":
model = CoolModel()
# most basic trainer, uses good defaults
default_save_path = '/tmp/checkpoints/'
trainer = pl.Trainer(default_save_path=default_save_path,
show_progress_bar=True)
trainer.fit(model)
Change validation_end method to:
def validation_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tqdm_dict = {'val_loss': avg_loss}
return {
'progress_bar': tqdm_dict,
'log': {'val_loss': avg_loss},
}
Change training_step to:
def training_step(self, batch, batch_nb):
x, y = batch
y_hat = self.forward(x)
loss = F.cross_entropy(y_hat, y)
output = {
'loss': loss, # required
'progress_bar': {'training_loss': loss}, # optional (MUST ALL BE TENSORS)
}
return output
Run the script, see error at validation time.
Note that both steps 2 and 3 are necessary to reproduce the issue, each separately would run as expected.
Expected behavior
A progress bar on a single line.
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
OS: Linux
Version
pytorch-lightning==0.5.1.3
torch==1.2.0
Additional context
Actually I ran into this issue after trying to add EarlyStopping, which asked for val_loss, which I found out was to be added via the progress_bar metrics... which was quite unexpected for me (I would have had it in "log" or direct key?) |
Earlystopping should use a key not in progress_bar or log | [
"feature",
"help wanted"
] | Make early stopping use any of the keys not used in progress_bar or log
(see #330) |
ModelCheckpoint with monitor='loss' and save_best_only=True crashes | [
"bug"
] | checkpoint_callback = ModelCheckpoint(checkpoint_path, monitor='loss', save_best_only=True)
trainer = Trainer(gpus=[0], checkpoint_callback=checkpoint_callback)
trainer.fit(model)
The above code crashes with the following output:
gpu available: True, used: True
VISIBLE GPUS: 0
Name Type Params
0 encoder_layers ModuleList 29461600
1 encoder_layers.0 Linear 26910000
2 encoder_layers.1 Linear 2001000
3 encoder_layers.2 Linear 500500
4 encoder_layers.3 Linear 50100
5 decoder_layers ModuleList 29474954
6 decoder_layers.0 Linear 50500
7 decoder_layers.1 Linear 501000
8 decoder_layers.2 Linear 2002000
9 decoder_layers.3 Linear 26921454
10 loss L1Loss 0
100%|█| 100/100 [00:19<00:00, 5.50it/s, batch_nb=99, epoch=0, gpu=0, loss=201.344, v_nb=3]save callback...
Traceback (most recent call last):
File "/home/akonstantinov/.pycharm_helpers/pydev/pydevd.py", line 1758, in <module>
main()
File "/home/akonstantinov/.pycharm_helpers/pydev/pydevd.py", line 1752, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/akonstantinov/.pycharm_helpers/pydev/pydevd.py", line 1147, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/akonstantinov/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/akonstantinov/cross_platform_norm/dae/main.py", line 15, in <module>
trainer.fit(model)
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 742, in fit
self.__single_gpu_train(model)
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 790, in __single_gpu_train
self.__run_pretrain_routine(model)
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1025, in __run_pretrain_routine
self.__train()
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1053, in __train
self.run_training_epoch()
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1105, in run_training_epoch
self.__run_evaluation(test=self.testing)
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1467, in __run_evaluation
logs=self.__training_tqdm_dict)
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pytorch_lightning/callbacks/pt_callbacks.py", line 238, in on_epoch_end
if self.monitor_op(current, self.best):
TypeError: ufunc 'less' did not contain a loop with signature matching types dtype('<U32') dtype('<U32') dtype('bool')
Quick debugging shows that the crash happens because the current variable contains the loss as a string instead of a float. |
No way to disable checkpoint callback | [
"bug"
] | With the recent change to provide a checkpoint callback by default, there's no way to disable checkpointing. Passing checkpoint_callback=None results in the default checkpointer being created.
In Trainer.__Init__:
self.checkpoint_callback = checkpoint_callback
if self.checkpoint_callback is None:
if isinstance(logger, TestTubeLogger):
ckpt_path = '{}/{}/{}'.format(self.default_save_path, self.logger.name,
self.logger.version)
else:
ckpt_path = self.default_save_path
self.checkpoint_callback = ModelCheckpoint(
filepath=ckpt_path
) |
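One possible fix for the issue above, sketched as a drop-in variant of the snippet (treating False as an explicit "disable" sentinel is an assumption, not current behaviour):
if checkpoint_callback is False:
    self.checkpoint_callback = None               # explicitly disabled by the user
elif checkpoint_callback is None:
    self.checkpoint_callback = ModelCheckpoint(filepath=ckpt_path)  # build the default
else:
    self.checkpoint_callback = checkpoint_callback  # user-supplied callback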
Support for retain_graph=True | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Some models require retain_graph=True, but it's not possible to set it in the .backward() call inside of Trainer.__run_training_batch(...)
Describe the solution you'd like
Add a retain_graph member/flag to the LightningModule, have the trainer read this option and then pass it into the .backward() call.
Describe alternatives you've considered
Deriving a version of Trainer to support retain_graph=True is tough because __run_training_batch and other functions are name-mangled. |
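A hedged sketch of the kind of hook this request describes (the hook name and signature are assumptions, not an existing API):
import pytorch_lightning as pl

class MyModule(pl.LightningModule):
    def backward(self, loss, optimizer):
        # called by the trainer in place of a hard-coded loss.backward(),
        # so the user can opt into retaining the graph
        loss.backward(retain_graph=True)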
Support for add_scalars in log | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Sometimes we want to use add_scalars instead of add_scalar in the log results. For example, our loss function is a multi-task loss, and we want to display each component loss in the same graph. |
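In the meantime, a possible workaround for the request above (hedged: it assumes the attached logger's .experiment exposes a SummaryWriter-style add_scalars, and that the trainer makes the logger and global step reachable from the module; compute_task_losses is a hypothetical helper):
def training_step(self, batch, batch_nb):
    loss_a, loss_b = self.compute_task_losses(batch)  # hypothetical helper
    # log both scalars into the same chart directly on the underlying writer
    self.logger.experiment.add_scalars(
        'loss', {'task_a': loss_a, 'task_b': loss_b}, self.trainer.global_step)
    return {'loss': loss_a + loss_b}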
on_epoch_* callbacks should only be run for 1 process | [
"bug"
] | Bug Description:
Anything done in the on_epoch_* callbacks (like on_epoch_end or on_epoch_start) happens n times when doing distributed training, i.e. ddp. A process rank check should be added.
To Reproduce
Use ddp for training and create the on_epoch_end callback method with a print statement like: print(self.current_epoch)
Expected behavior
Each epoch is supposed to be printed only once. |
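Until that lands, a possible workaround for the bug above (hedged; the proc_rank attribute name may differ between versions) is to guard the callback body yourself:
def on_epoch_end(self):
    # only act in the rank-0 process when running ddp
    if self.trainer.proc_rank == 0:
        print(self.current_epoch)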
setting gpus=-1 and gpus='-1' in Trainer give different behaviours | [
"bug"
] | I discovered this while looking through the code. Trainer constructor does not mention that
gpus can be -1 or '-1'. However if such values are passed they are accepted and result in
different behaviour: -1 will result in no gpus used, '-1' will use all available gpus.
To Reproduce
Run any model first setting trainer gpus parameter to -1. No gpus will be used.
Run same model setting gpus to '-1'. All available gpus will be used.
Being able to set -1 to indicate that all gpus should be used is, I believe, useful behaviour.
The issue is in function self.__parse_gpu_ids(gpus) where the handling of -1 when passed as int is not implemented.
Solution would be to implement equivalent logic for -1 as for '-1'.
Happy to submit a PR. |
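A hedged sketch of the missing int branch for the issue above (written as a standalone function mirroring the private helper mentioned):
import torch

def parse_gpu_ids(gpus):
    if isinstance(gpus, int):
        if gpus == -1:
            return list(range(torch.cuda.device_count()))  # all available GPUs
        return list(range(gpus))
    if isinstance(gpus, str):
        if gpus.strip() == '-1':
            return list(range(torch.cuda.device_count()))
        return [int(x) for x in gpus.split(',') if x.strip()]
    return gpus  # assume an explicit list of ids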
Number of batches is off-by-one during training | [
"bug"
] | Describe the bug
The number of mini batches iterated over during an epoch seems to be off-by-one during training when passing a train_percent_check < 1.0. More specifically, it trains for one additional iteration.
The operator in this line should probably be changed from > to >=.
https://github.com/williamFalcon/pytorch-lightning/blob/master/pytorch_lightning/trainer/trainer.py#L1130
Evaluation by the way seems correct as seen in the following code.
https://github.com/williamFalcon/pytorch-lightning/blob/master/pytorch_lightning/trainer/trainer.py#L614
This leads to unexpected behaviors in e.g. EarlyStopping since the log that it's given is the callback log from the training and not the evaluation (the additional iteration overwrites the evaluation log with the training log).
To Reproduce
Specify some train_percent_check < 1.0 e.g 0.5.
Fit.
Notice that it trains for an additional iteration.
Expected behavior
It should not train for that additional iteration. |
GPU usage of model decreasing each epoch | [
"bug"
] | Describe the bug
Hi, I'm trying to train a model on my GPU (Tesla K80) but I have an issue: the training starts well and ~45% of the GPU is used, but each epoch uses less and less of the GPU, slowing the training process (each epoch lasts longer than the previous one).
This is my Module :
class TestModule(pl.LightningModule):
def __init__(self,train_dataset,dev_dataset,hparams):
super(TestModule,self).__init__()
self.linear1=torch.nn.Linear(2048,768)
self.droupout=torch.nn.Dropout(p=0.1)
self.rrelu=torch.nn.RReLU()
self.linear2=torch.nn.Linear(768,1)
self.hparams=hparams
self.train_dataset=train_dataset
self.dev_dataset=dev_dataset
self.loss=torch.nn.BCEWithLogitsLoss()
def forward(self,input_embedding_1,input_embedding_2):
pooled_output_1 = input_embedding_1
pooled_output_2 = input_embedding_2
concatened=torch.cat((pooled_output_1,pooled_output_2),dim=1)
logits=self.linear2(self.rrelu(self.droupout(self.linear1(concatened))))
#print(act)
return logits.view(-1)
def training_step(self, batch, batch_nb):
# REQUIRED
questions_embeddings, responses_embeddings, y = batch
y_hat = self.forward(questions_embeddings,responses_embeddings)
loss = self.loss(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_nb):
# OPTIONAL
questions_embeddings, responses_embeddings, y = batch
with torch.no_grad():
y_hat = self.forward(questions_embeddings,responses_embeddings)
predicted_labels=(y_hat>=0).float()
val_acc = torch.sum(y == predicted_labels).item() / (len(y) * 1.0)
val_acc = torch.tensor(val_acc)
if self.on_gpu:
val_acc = val_acc.cuda(val_acc.device.index)
return {'val_loss': self.loss(y_hat, y), 'val_acc': val_acc,"predicted_labels":predicted_labels,"true_labels":y}
def validation_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
avg_acc = torch.stack([x['val_acc'] for x in outputs]).mean()
predictions = torch.stack([x["predicted_labels"] for x in outputs]).view(-1)
true_labels = torch.stack([x["true_labels"] for x in outputs]).view(-1)
precision = torch.tensor(((predictions[true_labels==1]==1).sum().item())/((predictions==1).sum().item()))
recall= torch.tensor(((predictions[true_labels==1]==1).sum().item())/((true_labels==1).sum().item()))
f1_score=2*((precision*recall)/(precision+recall))
if self.on_gpu:
f1_score = f1_score.cuda(f1_score.device.index)
tensorboard_logs = {'val_loss': avg_loss,"val_acc":avg_acc,"f1_score":f1_score}
return {'progress_bar': tensorboard_logs, 'log': tensorboard_logs}
def configure_optimizers(self):
# REQUIRED
# can return multiple optimizers and learning_rate schedulers
# (LBFGS it is automatically supported, no need for closure function)
return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)
@pl.data_loader
def train_dataloader(self):
# REQUIRED
data_train_sampler = RandomSampler(self.train_dataset)
return DataLoader(self.train_dataset, sampler=data_train_sampler, batch_size=self.hparams.batch_size,drop_last=True,pin_memory=True)
@pl.data_loader
def val_dataloader(self):
data_dev_sampler = SequentialSampler(self.dev_dataset)
return DataLoader(self.dev_dataset, sampler=data_dev_sampler, batch_size=self.hparams.batch_size,drop_last=True,pin_memory=True)
trainer = Trainer(gpus=1,default_save_path="output_non_linear_without_duplicated/",check_val_every_n_epoch=1,accumulate_grad_batches=2)
model = TestModule(train_dataset=train_ds,dev_dataset=dev_ds,hparams=config)
trainer.fit(model)
I've tried the same model but training with a standard PyTorch training loop and all goes well (the training uses ~60% of the GPU and the usage remains stable).
Have I made an error building my PyTorch Lightning module?
Expected behavior
The model uses the same amount of GPU as the same module in standard PyTorch, and the GPU usage remains stable across training epochs.
Desktop (please complete the following information):
OS: Ubuntu 18.04
Python 3.7, PyTorch 1.2 |
Early stopping with ddp bug | [
"bug"
] | Describe the bug
Early stopping with ddp stalls:
When using distributed mode ddp with early stopping, if the stop condition is met in one or more subprocesses but not in all of them, the corresponding subprocesses stop but the remaining ones keep running and the training hangs.
With dp mode everything works fine.
To Reproduce
I use the example code, with a forked early stopping callback which stops if val_acc > threshold.
I also fixed bug #371 in trainer.py line 1131, in met_batch_limit = batch_nb > self.nb_training_batches (i.e. changed > to >=).
In validation_end() i return :
result = {'progress_bar': tqdm_dict, 'log': tqdm_dict,'val_acc':val_acc_mean}
return result
I use pytorch 1.2
The code :
from pytorch_lightning.callbacks import EarlyStopping
class ThresholdStopping(EarlyStopping):
def __init__(self, monitor="val_acc",thresh=0.0,mode="max",verbose=0):
super(ThresholdStopping, self).__init__()
self.monitor = monitor
self.stopped_epoch = 0
self.thresh=thresh
self.verbose = verbose
if mode not in ['auto', 'min', 'max']:
print('EarlyStopping mode %s is unknown, fallback to auto mode.' % mode)
mode = 'auto'
if mode == 'min':
self.monitor_op = np.less
elif mode == 'max':
self.monitor_op = np.greater
else:
if 'acc' in self.monitor:
self.monitor_op = np.greater
else:
self.monitor_op = np.less
self.on_train_begin()
def on_train_begin(self, logs=None):
self.stopped_epoch = 0
def on_epoch_end(self, epoch, logs=None):
current = logs.get(self.monitor)
stop_training = False
print("current",current,self.thresh)
if current is None:
print('Early stopping conditioned on metric `%s` '
'which is not available. Available metrics are: %s' %
(self.monitor, ','.join(list(logs.keys()))), RuntimeWarning)
stop_training = True
return stop_training
if self.monitor_op(current ,self.thresh):
self.stopped_epoch = epoch
stop_training = True
self.on_train_end()
return stop_training
def on_train_end(self, logs=None):
if self.stopped_epoch >= 0:
print('Epoch %05d: early stopping' % (self.stopped_epoch + 1))
def main(hparams):
"""
Main training routine specific for this project
:param hparams:
"""
# ------------------------
# 1 INIT LIGHTNING MODEL
# ------------------------
model = LightningTemplateModel(hparams)
# ------------------------
# 2 INIT TRAINER
# ------------------------
# trainer = Trainer(max_nb_epochs=1, gpus=[0, 1, 3, 7], distributed_backend='ddp')
gpu_param=hparams.gpus
early_stop_callback=ThresholdStopping(monitor='val_acc',thresh=0.85)
if hparams.distributed_backend=="ddp":
gpu_param=[0,1,2,3,4, 5, 6, 7]
trainer = Trainer(
gpus=gpu_param,
distributed_backend=hparams.distributed_backend,
use_amp=hparams.use_16bit,
early_stop_callback=early_stop_callback,
min_nb_epochs=-1,
max_nb_epochs=20,
)
# ------------------------
# 3 START TRAINING
# ------------------------
trainer.fit(model)
if __name__ == '__main__':
# ------------------------
# TRAINING ARGUMENTS
# ------------------------
# these are project-wide arguments
root_dir = os.path.dirname(os.path.realpath(__file__))
parent_parser = ArgumentParser(add_help=False)
# gpu args
parent_parser.add_argument(
'--gpus',
type=int,
default=2,
help='how many gpus'
)
parent_parser.add_argument(
'--distributed_backend',
type=str,
default='dp',
help='supports three options dp, ddp, ddp2'
)
parent_parser.add_argument(
'--use_16bit',
dest='use_16bit',
action='store_true',
help='if true uses 16 bit precision'
)
# each LightningModule defines arguments relevant to it
parser = LightningTemplateModel.add_model_specific_args(parent_parser, root_dir)
hyperparams = parser.parse_args()
# ---------------------
# RUN TRAINING
# ---------------------
main(hyperparams)
Expected behavior
All the subprocess should stop
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
OS: Ubuntu 16.04.4 LTS
Browser [e.g. chrome, safari]
Version : pip version
Additional context |
Change ModelCheckpoint.save_best_only without changing any other behaviour | [
"question"
How do I change the default behaviour ModelCheckpoint.save_best_only=False to True without changing any other behaviour, like where the checkpoints are stored or the logging dir? Creating my own ModelCheckpoint as described in the docs will delete the os.getcwd() directory and put the checkpoint there:
# DEFAULTS used by the Trainer
checkpoint_callback = ModelCheckpoint(
filepath=os.getcwd(),
save_best_only=True,
verbose=True,
monitor='val_loss',
mode='min',
prefix=''
)
trainer = Trainer(checkpoint_callback=checkpoint_callback)
I have no idea how to get the actual logging dir from the trainer, as I have to instantiate the Trainer with the callback. I couldn't find any clean solution without doing some nasty stuff. Can you help me here?
This does not seem to work either:
log_dir = "/log"
checkpoint_callback = ModelCheckpoint(
filepath=log_dir + "/checkpoints",
save_best_only=True,
verbose=True,
monitor="val_acc",
mode="min",
prefix="",
)
trainer = Trainer(
gpus=1,
checkpoint_callback=checkpoint_callback,
default_save_path=log_dir,
train_percent_check=0.1,
)
In my opinion that's a really bad default behaviour, as the drive will fill up when saving a checkpoint every epoch.
What's your environment?
python 3.7.4
torch 1.3.0
Lightning 0.5.2.1
Test-tube 0.7.1 |
MLFlow Logger Docs: MLFlowLogger has no attribute experiment | [
"logger"
] | |
"RuntimeError: Address already in use" when running multiple multi-gpu training (DDP). | [
"bug"
] | Describe the bug
I see "RuntimeError: Address already in use" error message if I try to run two multi-gpu training session (using ddp) at the same time.
To Reproduce
Run two multi-gpu training session at the same time.
Expected behavior
Able to run two multi-gpu training session at the same time.
Screenshots
Desktop (please complete the following information):
OS: Ubuntu 18.04
Pytorch 1.3.0
CUDA 10.1 |
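A possible workaround for the issue above until this is handled automatically (hedged; it assumes the ddp rendezvous reads MASTER_PORT from the environment):
import os

# give each concurrent run its own rendezvous port before building the Trainer
# (29501 is arbitrary; any free port that differs between the two runs works)
os.environ['MASTER_PORT'] = '29501'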
Error when model checkpoint and no early stop | [
"bug"
] | Describe the bug
When creating a Trainer, if we set a checkpoint_callback=ModelCheckpoint(...) and early_stop_callback = False we get an error at this line here.
To Reproduce
Steps to reproduce the behavior:
Create a ckpt = ModelCheckpoint(...)
Create a Trainer
Setting checkpoint_callback = ckpt and early_stop_callback=False
See the error AttributeError: 'NoneType' object has no attribute 'wait'
Expected behavior
It should be possible to save the model without setting an EarlyStopping condition. Of course one could set an EarlyStopping with the max integer, but changing the condition from an or to an and solves the problem.
Desktop
OS: Ubuntu 19.04
Browser: Firefox Quantum
Version: 69.0.2 |
`validation_step` progress bar output should be used | [
"bug"
] | When you're running validation steps your progress bar iter counter will increment once per validation_step. However, AFAICT there's no way to actually customize the progress_bar text during these validation_steps. For large validation sets, this makes it hard to know how far into validation you are, since the iter counter is the total steps.
I am returning a dict from validation_step like:
def validation_step(self, batch, batch_nb):
output = {'progress_bar': {'batch_nb': batch_nb}}
return output |
Add contributors section | [
"feature",
"help wanted",
"good first issue",
"docs"
] | Would be great to have a nice display of the contributors in the README.md.
Here's an example:
https://github.com/tqdm/tqdm#contributions |
Separate out API of LightningModule into a few classes? | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I think LightningModule has identified a great set of APIs to separate research logic from training/evaluation infrastructure. This is an absolutely crucial first step.
However, IMHO the current LightningModule is too many things. It is at least:
A nn.Module with bells and whistles, such as GradInformation, ModelIO, etc.
An "algorithm" implementation, in terms of training/validation steps.
A statistics aggregator, because most likely you'd save metrics on self in validation_step and do something to it in validation_end.
A dataset provider, including possible data augmentation, etc.
I can appreciate the "this class has everything for this paper" sentiment, but IMHO, especially given the research work pattern of trying many things, it seems likely this class would grow quite large with lots of flags that toggle functionality as a research project grows.
Describe the solution you'd like
I think some of this functionality can be separated into a few classes, and still aggregated in a top-level Lightning class to give the "everything is here" feel.
A LightningModule class that serves the same purpose as an nn.Module (forward pass, parameter management) with the current bells and whistles that are added.
A Algorithm class that implements the training/validation steps and aggregation.
Perhaps we can abstract some aggregation using the Metrics style in ignite.
A DataProvider class that returns DataLoaders for training/validation.
Finally, a Lightning class that gathers the above, and can still be provided to the trainer.
The separated classes provide better code reusability, eg:
Data augmentation you might want to do can be abstracted in the DataProvider classes. For example, one class that provides the vanilla CIFAR dataset, a mixin for some data augmentation, and any research projects that use CIFAR can reuse these implementations.
Swap out training algorithms without having to change the rest.
Some of the monitoring and metrics code can likely be reused more.
This is basically a refactoring suggestion. All of the above can be done currently by subclassing LighteningModule, or otherwise decomposing it, this is basically just a suggestion for the library to choose a best-practice.
Describe alternatives you've considered
Keeping the class in the library as-is, users can implement the methods in any way they wish, including my suggestions above, using a mixin style to aggregate them into a LightningModule subclass.
But IMHO the separation is better design.
Additional context
Happy to help with implementation if there's any interest in this, not just suggesting someone else make the change. |
Documentation for "Restoring training session" | [] | At https://williamfalcon.github.io/pytorch-lightning/Trainer/Checkpointing/#restoring-training-session (docs/Trainer/Checkpointing.md) it is suggested that the way to restore a training session is to use a test_tube Experiment:
from test_tube import Experiment
exp = Experiment(version=a_previous_version_with_a_saved_checkpoint)
trainer = Trainer(experiment=exp)
But running this code gives a type error, as Trainer no longer takes the named parameter experiment: TypeError: __init__() got an unexpected keyword argument 'experiment'
What's the current method for restoring training sessions? |
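A possible current equivalent for the question above (hedged; the exact restore behaviour differs between versions) is to point a logger at the previous version and reuse the same save path so the trainer can locate the last checkpoint:
from pytorch_lightning import Trainer
from pytorch_lightning.logging import TestTubeLogger

# a_previous_version_with_a_saved_checkpoint comes from the original snippet
logger = TestTubeLogger(save_dir='.', version=a_previous_version_with_a_saved_checkpoint)
trainer = Trainer(logger=logger, default_save_path='.')
trainer.fit(model)  # should resume from that version's checkpoint, if one exists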
Lightning/Transformers tutorial contribution? | [
"feature",
"help wanted"
] | First off, thank you for your hard work on this package! It's proven to be very helpful for fully leveraging multi-GPU training of large-scale language models.
Is your feature request related to a problem? Please describe.
I'm using the (also incredible) transformers package by the folks at HuggingFace to do a number of tasks, including text classification. I realized that training these models can be quite computationally expensive, even on decently-sized single-GPU instances. I found out about this repo and was (with relative ease) able to train the same model over 4 GPUs and 16-bit precision much, much more easily than training with vanilla PyTorch.
I was wondering if there was any interest in a contributed tutorial that specifically goes through the process of integrating pytorch-lightning with transformers, which seems to have quite a bit of staying power and success within the realm of applied NLP. Regardless, thanks again!
Describe the solution you'd like
N/A?
Describe alternatives you've considered
N/A?
Additional context
N/A? |
Issue with multi gpu dp backend | [
"bug"
] | Describe the bug
I tried to run gpu_template.py from basic_examples but I get an error that says Fatal Python error: Aborted (screenshot below). I am able to use ddp but not dp. Did anyone else face this issue?
Screenshots |
PackedSequence inputs gets converted to tuples of tensors | [
"bug"
] | Describe the bug
The dataloader returns 4 tensors (the attributes of the PackedSequence) when collate_fn returns packed sequences.
To Reproduce
def collate_seq(batch):
x = rnn.pack_sequence([item[0] for item in batch], enforce_sorted=False)
y = rnn.pack_sequence([item[1] for item in batch], enforce_sorted=False)
return x, y
Expected behavior
the x packed sequence gets passed to forward() |
Add a way to operate on all outputs from training_step | [
"feature",
"help wanted"
I realized the ddp2 implementation I put together doesn't allow the user to operate on the outputs of all DP processes.
For instance, you calculate some logits on each process and the current ddp2 forces the loss to be calculated in each process individually. However, if you wanted to, say, normalize across all examples in the batch, you'd need to somehow share the output of each process.
Currently:
total_loss = []
for process:
# training step
out = model(x)
loss = loss(out)
total_loss.append(loss)
loss = total_loss.mean()
Proposed:
outs = []
for process:
# training step
out = model(x)
outs.append(out)
# allow training_end to (softmax for instance) using ALL outputs
loss = model.training_end(outs)
loss.backward()
The implication is adding an optional:
def training_end(...):
To model which when defines gives you all the outputs of training_step.
If you don't need anything advanced like this, you return a loss from training_step and don't implement training_end. If you need more advanced control, you implement training_end.
@tullie @neggert Any thoughts on this? |
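For concreteness, a hedged illustration of how the proposed pair of hooks above could be used (this mirrors the proposal, not a released API):
import torch
import torch.nn.functional as F

def training_step(self, batch, batch_nb):
    x, y = batch
    return {'out': self.forward(x), 'y': y}

def training_end(self, outputs):
    # receives the outputs gathered from every process, so losses that need the
    # whole batch (e.g. a softmax normalized across all examples) become possible
    out = torch.cat([o['out'] for o in outputs], dim=0)
    y = torch.cat([o['y'] for o in outputs], dim=0)
    return {'loss': F.cross_entropy(out, y)}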
Unfreezing layers during training? | [
"question"
Freezing layers at the beginning of training works; however, unfreezing them in on_epoch_start() during training causes the gradient to explode. Without the unfreezing part (or without freezing at all), the model trains fine with no gradient issues.
I'm using DDP + Apex O2 and the loss scaling will keep going down to 0 where it would encounter 0 division and crash.
Is unfreezing during training not possible in pytorch/lightning, or am I missing a snippet? |
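For reference on the question above, unfreezing itself is usually just flipping requires_grad in a hook (a hedged sketch; the epoch threshold and the backbone attribute are assumptions), but note that those parameters must already be registered with the optimizer (and with amp) at initialization for their updates to take effect:
def on_epoch_start(self):
    # unfreeze the backbone after two epochs
    if self.current_epoch == 2:
        for p in self.backbone.parameters():
            p.requires_grad = True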
Using custom ModelCheckPoint results in RunTimeError | [
"bug"
] | Describe the bug
Modifying the ModelCheckpoint per https://williamfalcon.github.io/pytorch-lightning/Trainer/Checkpointing/ results in:
100%|██████████| 10/10 [00:02<00:00, 4.04it/s, batch_nb=8, epoch=0, gpu=0, loss=2.707, train_acc=0.12, v_nb=6, val_acc=0.12, val_loss=2.88]Can save best model only with val_loss available, skipping. <class 'RuntimeWarning'>
To Reproduce
def main(hparams):
# init module
model = BnnOnCIFAR10(hparams)
# DEFAULTS used by the Trainer
checkpoint_callback = ModelCheckpoint(
filepath="../../results",
save_best_only=True,
verbose=True,
monitor="val_loss",
mode="min",
prefix="",
)
# most basic trainer, uses good defaults
trainer = Trainer(
max_nb_epochs=hparams.max_nb_epochs,
gpus=hparams.gpus,
nb_gpu_nodes=hparams.nodes,
overfit_pct=0.01,
checkpoint_callback=checkpoint_callback,
default_save_path="../../results",
check_val_every_n_epoch=1,
show_progress_bar=True,
)
trainer.fit(model)
def validation_step(self, batch, batch_idx):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
# validation metrics for monitoring
labels_hat = torch.argmax(y_hat, dim=1)
val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)
val_loss = F.cross_entropy(y_hat, y)
return OrderedDict(
{
"val_loss": val_loss.clone().detach(),
"val_acc": torch.tensor(val_acc),
}
)
def validation_end(self, outputs):
"""
outputs -- list of outputs from each validation step
"""
# The outputs here are strictly for progress bar
avg_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
avg_acc = torch.stack([x["val_acc"] for x in outputs]).mean()
logger_logs = {"val_acc": avg_acc, "val_loss": avg_loss}
output = OrderedDict({"progress_bar": logger_logs, "log": logger_logs})
return output
Expected behavior
Expected result is that my custom ModelCheckpoint is given the val_loss value and can function correctly.
Screenshots
Desktop (please complete the following information):
OS: xubuntu 18.04
Browser: not applicable
pip:
(venv) nik@nik-hmm:~/kth/y2/adl/Rethinking-Binarized-Neural-Network-Optimization$ pip list
Package Version Location
-------------------------------- -------------------- ---------------------------------------------------------------------
absl-py 0.8.1
astor 0.8.0
cachetools 3.1.1
certifi 2019.9.11
chardet 3.0.4
cycler 0.10.0
dataclasses 0.7
future 0.18.0
gast 0.2.2
google-auth 1.6.3
google-auth-oauthlib 0.4.1
google-pasta 0.1.7
grpcio 1.24.3
h5py 2.10.0
idna 2.8
imageio 2.6.1
Keras 2.3.1
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
kiwisolver 1.1.0
larq 0.7.4
Markdown 3.1.1
matplotlib 3.1.1
numpy 1.17.3
oauthlib 3.1.0
opt-einsum 3.1.0
pandas 0.25.1
Pillow 6.2.0
pip 19.0.3
protobuf 3.10.0
pyasn1 0.4.7
pyasn1-modules 0.2.7
pyparsing 2.4.2
python-dateutil 2.8.0
pytorch-lightning 0.5.2.1
pytz 2019.3
PyYAML 5.1.2
requests 2.22.0
requests-oauthlib 1.2.0
research-seed 0.0.1 /home/nik/kth/y2/adl/Rethinking-Binarized-Neural-Network-Optimization
rsa 4.0
scipy 1.3.1
setuptools 41.6.0
six 1.12.0
tb-nightly 2.1.0a20191103
tensorflow 2.0.0rc2
tensorflow-estimator 2.0.1
tensorflow-estimator-2.0-preview 2.0.0
tensorflow-gpu 2.0.0rc2
termcolor 1.1.0
terminaltables 3.1.0
test-tube 0.7.1
tf-estimator-nightly 1.14.0.dev2019080601
tf-nightly-2.0-preview 2.0.0.dev20191002
torch 1.3.0
torchsummary 1.5.1
torchvision 0.4.1
tqdm 4.36.1
urllib3 1.25.6
Werkzeug 0.16.0
wheel 0.33.6
wrapt 1.11.2
NVIDIA:
(venv) nik@nik-hmm:~/kth/y2/adl/Rethinking-Binarized-Neural-Network-Optimization$ nvidia-smi
Mon Nov 4 12:12:21 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.50 Driver Version: 430.50 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 105... Off | 00000000:01:00.0 Off | N/A |
| N/A 66C P0 N/A / N/A | 298MiB / 4042MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
Additional context
Add any other context about the problem here. |
error when constructing own model module | [
"bug"
] | Common bugs:
Describe the bug
When constructing the module "SummarizerModule" myself, I get the following error:
Traceback (most recent call last):
File "/remote-home/yrchen/anaconda3/envs/pytorch1.2_p37/lib/python3.7/site-packages/pytorch_lightning/root_module/decorators.py", line 15, in _get_data_loader
value = getattr(self, attr_name)
File "/remote-home/yrchen/anaconda3/envs/pytorch1.2_p37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 591, in getattr
type(self).name, name))
AttributeError: 'SummarizerModule' object has no attribute '_lazy_val_dataloader'
the 'SummarizerModule' is defined as :
class SummarizerModule(pl.LightningModule):
"""PyTorch Lightning system.
"""
def __init__(self, summarizer, token_indexer,
loaders_params: dict, optimizer_params: dict, args):
super().__init__()
self.summarizer = summarizer
self._optimizer_params = optimizer_params
self._loaders_params = loaders_params
self._token_indexer = token_indexer
self.args = args
def forward(self, article, article_length,
prev_input, prev_input_length):
return self.summarizer(article, article_length,
prev_input, prev_input_length)
def training_step(self, batch, batch_nb):
article_data, prev_input_data, gold_summary = batch
article, article_length = article_data
prev_input, prev_input_length = prev_input_data
infered_summary = self.forward(article, article_length,
prev_input, prev_input_length)
return {'loss': sequence_nll(infered_summary, gold_summary)}
def validation_step(self, batch, batch_nb):
return self.training_step(batch, batch_nb)
def validation_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
return {'avg_val_loss': avg_loss}
def configure_optimizers(self):
return torch.optim.Adam(self.summarizer.parameters(),
**self._optimizer_params)
@pl.data_loader
def train_dataloader(self):
train_set = DCADataset(split="train", n_agents=3,
token_indexer=self._token_indexer, args=self.args)
return DataLoader(train_set, **self._loaders_params, shuffle=True)
@pl.data_loader
def val_dataloader(self):
val_set = DCADataset(split="val", n_agents=3,
token_indexer=self._token_indexer, args=self.args)
return DataLoader(val_set, **self._loaders_params, shuffle=False) |
Make coverage great again | [
"feature",
"help wanted"
] | I think it's time for some coverage clean up. We're sitting at around 93%. Let's bring it back up to 99% or perhaps *gasp 100%?
@Borda @neggert |
Automatic docs | [
"feature",
"help wanted"
] | I know it was proposed before and I tabled it but I think it's time to reconsider so docs can scale easier.
Let's use whatever PyTorch uses and do documentation in the code (I assume that's how it's done?).
Anyone want to take a stab at this? @Borda |
Cannot import pytorch-lightning-v0.5.3 | [
"bug"
] | Describe the bug
After updating to v0.5.3, import pytorch_lightning fails due to the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/pytorch_lightning/__init__.py", line 28, in <module>
from .trainer.trainer import Trainer
File "/usr/local/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 16, in <module>
from pytorch_lightning.trainer.callback_config_mixin import TrainerCallbackConfigMixin
File "/usr/local/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_config_mixin.py", line 4, in <module>
from pytorch_lightning.logging import TestTubeLogger
ImportError: cannot import name 'TestTubeLogger' from 'pytorch_lightning.logging' (/usr/local/lib/python3.7/site-packages/pytorch_lightning/logging/__init__.py)
To Reproduce
Steps to reproduce the behavior:
$ docker run --rm -it python:3.7 /bin/bash
//
// Install `pytorch_lightning` (succeeded)
//
root@93bfd08b5db7:/# pip install pytorch_lightning
Collecting pytorch_lightning
Downloading https://files.pythonhosted.org/packages/2d/2e/ef5bedf1bb7f2f786d42f5af71ad5d7383416efec86098372d8016f5305d/pytorch-lightning-0.5.3.tar.gz (55kB)
...
Building wheels for collected packages: pytorch-lightning
Building wheel for pytorch-lightning (setup.py) ... done
Created wheel for pytorch-lightning: filename=pytorch_lightning-0.5.3-cp37-none-any.whl size=67557 sha256=80287a76e8fa15b4a64568fcd0c033688479f1bf7cb69dde8ba1d77da25453a6
Stored in directory: /root/.cache/pip/wheels/0f/4e/df/486c6c64d8d2f4706c70255493e434bacbf3497c7f5d0ab040
Successfully built pytorch-lightning
Installing collected packages: numpy, scipy, scikit-learn, tqdm, chardet, certifi, urllib3, idna, requests, docutils, webencodings, six, bleach, Pygments, readme-renderer, pkginfo, requests-toolbelt, twine, torch, pillow, torchvision, pytz, python-dateutil, pandas, pytorch-lightning
Successfully installed Pygments-2.4.2 bleach-3.1.0 certifi-2019.9.11 chardet-3.0.4 docutils-0.15.2 idna-2.8 numpy-1.16.4 pandas-0.25.3 pillow-6.2.1 pkginfo-1.5.0.1 python-dateutil-2.8.1 pytorch-lightning-0.5.3 pytz-2019.3 readme-renderer-24.0 requests-2.22.0 requests-toolbelt-0.9.1 scikit-learn-0.20.2 scipy-1.3.1 six-1.13.0 torch-1.3.0 torchvision-0.4.1 tqdm-4.35.0 twine-1.13.0 urllib3-1.25.6 webencodings-0.5.1
//
// Import `pytorch_lightning` (failed)
//
root@93bfd08b5db7:/# python
Python 3.7.5 (default, Oct 19 2019, 00:03:48)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pytorch_lightning
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/pytorch_lightning/__init__.py", line 28, in <module>
from .trainer.trainer import Trainer
File "/usr/local/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 16, in <module>
from pytorch_lightning.trainer.callback_config_mixin import TrainerCallbackConfigMixin
File "/usr/local/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_config_mixin.py", line 4, in <module>
from pytorch_lightning.logging import TestTubeLogger
ImportError: cannot import name 'TestTubeLogger' from 'pytorch_lightning.logging' (/usr/local/lib/python3.7/site-packages/pytorch_lightning/logging/__init__.py) |
Simple way to get the best scores at the end of a training? | [
"question"
] | What is your question?
After a training, is there an easy way to get the best scores returned by the validation_end function? In order to use a hyperparameters optimizer like Tune.
Code example
model = CoolSystem()
trainer = Trainer()
trainer.fit(model)
best_scores = ???
print(best_scores)
What's your environment?
conda 4.7.10
PyTorch 1.3.0
Lightning 0.5.3.1 |
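One possible approach for the question above (hedged; the best attribute name may differ between versions) is to keep a handle on the checkpoint callback and read its best monitored value after fitting:
import os
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint(filepath=os.getcwd(), monitor='val_loss', mode='min')
model = CoolSystem()
trainer = Trainer(checkpoint_callback=checkpoint)
trainer.fit(model)
print(checkpoint.best)  # best monitored value seen during training (attribute name assumed)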
During checkpoint saving: TypeError: not all arguments converted during string formatting | [
"bug"
] | Describe the bug
A recent commit introduced an extra , in a logging call, causing its failure:
Epoch 1: 100%|████████████████| 760/760 [07:29<00:00, 4.63batch/s, batch_nb=719, loss=0.347, v_nb=1--- Logging error ---
Traceback (most recent call last):
File "/home/mog/opt/miniconda/lib/python3.7/logging/__init__.py", line 1034, in emit
msg = self.format(record)
File "/home/mog/opt/miniconda/lib/python3.7/logging/__init__.py", line 880, in format
return fmt.format(record)
File "/home/mog/.virtualenvs/mloncode/lib/python3.7/site-packages/coloredlogs/__init__.py", line 1116, in format
return logging.Formatter.format(self, record)
File "/home/mog/opt/miniconda/lib/python3.7/logging/__init__.py", line 619, in format
record.message = record.getMessage()
File "/home/mog/opt/miniconda/lib/python3.7/logging/__init__.py", line 380, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "/home/mog/.virtualenvs/mloncode/bin/mloncode", line 11, in <module>
load_entry_point('mloncode', 'console_scripts', 'mloncode')()
File "/home/mog/work/mloncode/mloncode/__main__.py", line 9, in main
handler(**kw_args)
File "/home/mog/work/mloncode/mloncode/pipelines/codrep/train.py", line 212, in train
trainer.fit(model)
File "/home/mog/.virtualenvs/mloncode/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 364, in fit
self.run_pretrain_routine(model)
File "/home/mog/.virtualenvs/mloncode/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 471, in run_pretrain_routine
self.train()
File "/home/mog/.virtualenvs/mloncode/lib/python3.7/site-packages/pytorch_lightning/trainer/train_loop_mixin.py", line 60, in train
self.run_training_epoch()
File "/home/mog/.virtualenvs/mloncode/lib/python3.7/site-packages/pytorch_lightning/trainer/train_loop_mixin.py", line 114, in run_training_epoch
self.run_evaluation(test=self.testing)
File "/home/mog/.virtualenvs/mloncode/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop_mixin.py", line 160, in run_evaluation
logs=self.callback_metrics)
File "/home/mog/.virtualenvs/mloncode/lib/python3.7/site-packages/pytorch_lightning/callbacks/pt_callbacks.py", line 257, in on_epoch_end
f' saving model to {filepath}')
Message: '\nEpoch 00001: eval_mrr improved from -inf to 0.29670,'
Arguments: (' saving model to out/train/_ckpt_epoch_1.ckpt',) |
Random Master Port Number | [
"feature",
"help wanted"
I am trying to run two experiments on an 8-GPU machine. Each of them uses 4 GPUs. When I start the second experiment, I get the following error:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/pytorch_lightning/trainer/ddp_mixin.py", line 146, in ddp_train
model.init_ddp_connection(self.proc_rank, self.world_size)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/pytorch_lightning/root_module/root_module.py", line 153, in init_ddp_connection
dist.init_process_group('nccl', rank=proc_rank, world_size=world_size)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 400, in init_process_group
store, rank, world_size = next(rendezvous(url))
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/distributed/rendezvous.py", line 143, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon)
RuntimeError: Address already in use
https://github.com/williamFalcon/pytorch-lightning/blob/master/pytorch_lightning/root_module/root_module.py#L134
I saw that I can set the master port manually. But I guess it would be better to set a default port number randomly in a range? |
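Until something like that exists, a hedged sketch of picking a random free port per experiment before the Trainer spawns its processes:
import os
import socket

def pick_free_port():
    # let the OS hand out any currently free port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(('', 0))
        return s.getsockname()[1]

os.environ['MASTER_PORT'] = str(pick_free_port())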