title | labels | bodyText
---|---|---|
Running Average of my accuracy, losses etc. | [
"question"
] | What is your question?
I want my tqdm logger to show me a history of my training on the terminal. Right now, when an epoch ends, all of its data is cleared from the command line and the new epoch's data is shown.
Also, I want to see the running accuracy of my network and a running average of my loss on the tqdm bar. How should I go about doing that?
What have you tried?
I have looked at the docs and at the logging section, but I am unable to figure out how to modify the tqdm logger, let alone maintain a running average.
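A minimal sketch of one way to do this (an assumption, not from this issue): keep running sums as attributes on the LightningModule (initialised to zero in __init__; the attribute names below are illustrative) and expose the averages through the 'progress_bar' dict returned from training_step, so tqdm displays them.
    from torch.nn import functional as F

    def training_step(self, batch, batch_nb):
        x, y = batch
        y_hat = self.forward(x)
        loss = F.cross_entropy(y_hat, y)
        acc = (y_hat.argmax(dim=1) == y).float().mean()
        # running statistics; running_loss, running_acc and seen_batches are
        # assumed attributes set to 0 in __init__
        self.running_loss += loss.item()
        self.running_acc += acc.item()
        self.seen_batches += 1
        tqdm_dict = {
            'avg_loss': self.running_loss / self.seen_batches,
            'avg_acc': self.running_acc / self.seen_batches,
        }
        return {'loss': loss, 'progress_bar': tqdm_dict}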
What's your environment?
conda version: latest
PyTorch version : 1.3
Lightning version : pip install pytorch-lightning at the date of this issue
Test-tube version: came bootstrapped with Lightning.
I installed everything on the date of this issue. |
dataloader NotImplementedError | [] | I want to use a changed GAN structure from the sample here with random samples of shape (3200, 5). When run, a NotImplementedError occurs. I don't see where the problem is. Thanks for the help.
from argparse import ArgumentParser
from collections import OrderedDict
import torch
import torch.nn.functional as F
import pytorch_lightning as pl
from torch.utils.data import DataLoader
import torch.nn as nn
lnum=torch.randn(1).item()*0.0001
print(lnum)
class ComeModel(nn.Module):
def __init__(self,m_in,m_hid,m_out):
super(ComeModel,self).__init__()
self.model=nn.Sequential(
nn.Linear(m_in, m_hid),
nn.LeakyReLU(lnum,inplace=True),
nn.Linear(m_hid, m_out),
nn.LeakyReLU(lnum, inplace=True),
)
def forward(self,x):
out = self.model(x)
return out
class GoModel(nn.Module):
def __init__(self,m_in,m_hid,m_out):
super(GoModel,self).__init__()
self.model=nn.Sequential(
nn.Linear(m_in, m_hid),
nn.LeakyReLU(lnum,inplace=True),
nn.Linear(m_hid, m_out),
nn.LeakyReLU(lnum, inplace=True),
)
def forward(self,x):
out = self.model(x)
return out
class TranscationModel(pl.LightningModule):
def __init__(self,hparams):
super(TranscationModel,self).__init__()
self.hparams=hparams
self.comer=ComeModel(self.hparams.in_fea,self.hparams.hid_dim,self.hparams.out_fea)
self.goer=GoModel(self.hparams.in_fea,self.hparams.hid_dim,self.hparams.out_fea)
def forward(self,x):
chart_x = x[:3]
fund_x = x[:]
out =torch.log(1+self.comer(chart_x)+self.comer(fund_x))-torch.log(1+self.goer(chart_x)+self.goer(fund_x))
return out
def peak_loss(self, out, y):
return F.binary_cross_entropy(out,y)
def training_step(self, batch, batch_nb):
x,y=batch
y_pred=self.forward(x)
ploss=self.peak_loss(y_pred,y)
tqdm_dict={'ploss':ploss}
output=OrderedDict(
{
'loss':ploss,
'progress_bar':tqdm_dict,
'log':tqdm_dict
}
)
return output
def configure_optimizers(self):
lr =self.hparams.lr
b1 = self.hparams.b1
b2 = self.hparams.b2
opt_come=torch.optim.Adam(self.comer.parameters(),lr=lr,betas=(b1,b2))
opt_goer = torch.optim.Adam(self.goer.parameters(), lr=lr, betas=(b1, b2))
return [opt_come,opt_goer],[]
@pl.data_loader
def train_dataloader(self):
dataset = torch.randn(3200,5)
print(dataset.size())
return DataLoader(dataset,batch_size=self.hparams.batch_size)
def main(hparams):
# ------------------------
# 1 INIT LIGHTNING MODEL
# ------------------------
model = TranscationModel(hparams)
# ------------------------
# 2 INIT TRAINER
# ------------------------
trainer = pl.Trainer()
# ------------------------
# 3 START TRAINING
# ------------------------
trainer.fit(model)
if __name__ == '__main__':
parser = ArgumentParser()
parser.add_argument("--batch_size", type=int, default=64, help="size of the batches")
parser.add_argument("--lr", type=float, default=0.0002, help="adam: learning rate")
parser.add_argument("--b1", type=float, default=0.5,
help="adam: decay of first order momentum of gradient")
parser.add_argument("--b2", type=float, default=0.999,
help="adam: decay of first order momentum of gradient")
parser.add_argument("--in_fea", type=int, default=4,
help="dimensionality of the input dimension")
parser.add_argument("--hid_dim", type=int, default=10,
help="dimensionality of the latent space")
parser.add_argument("--out_fea", type=int, default=1,
help="dimensionality of the output dimension")
hparams = parser.parse_args()
main(hparams) |
Early stopping conditioned on metric `val_loss` isn't recognised when setting the val_check_interval | [
"bug"
] | Describe the bug
Training stops when setting val_check_interval<1.0 in the Trainer class as it doesn't recognise val_loss. I get the following warning at the end of the 3rd epoch:
Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,train_loss
To Reproduce
Steps to reproduce the behavior:
Run the CoolModel example but change the trainer line to
trainer = Trainer(val_check_interval=0.5,default_save_path="test")
Training will stop at the end of the third epoch and the above warning will show.
Expected behavior
Training shouldn't stop and val_loss should be recognised.
Desktop (please complete the following information):
VM: Google Colab
Version 0.5.3.2
Additional context
This doesn't happen with 0.5.2.1, although it looks like something has changed in the model saving mechanism, since it only seems to save the best model in 0.5.3.2.
EDIT: Also seems to happen when setting train_percent_check<1.0 |
IterableDataset breaks 1.1 compatibility | [
"bug"
] | A recently introduced feature unfortunately breaks compatibility with PyTorch 1.1.0.
Describe the bug
IterableDataset support, introduced in issue 323, requires PyTorch 1.2.0+.
To Reproduce
In a Python environment with PyTorch 1.1.0, do:
import pytorch_lightning
Expected behavior
Compatibility with PyTorch 1.1.0. I'm filing it as a bug report rather than a docs fix since the dependency on 1.2.0+ introduced by issue 323 doesn't seem to be intentional. |
Error message for multiple optimizers | [
"feature",
"help wanted"
] | When a user uses multiple optimizers and doesn't add optimizer_idx to training_step, the error is super cryptic and not obvious.
Let's add this error:
You passed in {len(self.optimizers)} optimizers but didn't add optimizer_idx to the training_step arguments. |
Escaping % in add_default_args | [
"bug"
] | Describe the bug
In utilities/arg_parse.py, a percentage symbol is not escaped and would cause an error when printing help information.
parser.add_argument('--overfit', default=-1, type=float,
help='% of dataset to use with this option. float, or -1 for none')
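A sketch of the fix: argparse %-formats its help strings, so a literal percent sign has to be doubled.
    parser.add_argument('--overfit', default=-1, type=float,
                        help='%% of dataset to use with this option. float, or -1 for none')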
To Reproduce
Steps to reproduce the behavior:
import os
import random
import sys
from pytorch_lightning.utilities.arg_parse import add_default_args
from test_tube import HyperOptArgumentParser, Experiment
if __name__ == "__main__":
root_dir = os.path.split(os.path.dirname(sys.modules['__main__'].__file__))[0]
parent_parser = HyperOptArgumentParser(strategy='random_search', add_help=True)
add_default_args(parent_parser, root_dir)
hyperparams = parent_parser.parse_args()
Execute the file with --help
python temp.py --help
Throws an error:
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
Traceback (most recent call last):
File "/Users/chenghaomou/Code/ai2/temp.py", line 11, in <module>
hyperparams = parent_parser.parse_args()
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/site-packages/test_tube/argparse_hopt.py", line 238, in parse_args
results = self.__parse_args(args, namespace)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/site-packages/test_tube/argparse_hopt.py", line 157, in __parse_args
args, argv = self.parse_known_args(args, namespace)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 1782, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 1988, in _parse_known_args
start_index = consume_optional(start_index)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 1928, in consume_optional
take_action(action, args, option_string)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 1856, in take_action
action(self, namespace, argument_values, option_string)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 1038, in __call__
parser.print_help()
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 2475, in print_help
self._print_message(self.format_help(), file)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 2459, in format_help
return formatter.format_help()
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 284, in format_help
help = self._root_section.format_help()
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 215, in format_help
item_help = join([func(*args) for func, args in self.items])
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 215, in <listcomp>
item_help = join([func(*args) for func, args in self.items])
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 215, in format_help
item_help = join([func(*args) for func, args in self.items])
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 215, in <listcomp>
item_help = join([func(*args) for func, args in self.items])
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 525, in _format_action
help_text = self._expand_help(action)
File "/Users/chenghaomou/Anaconda/envs/Elisa/lib/python3.7/argparse.py", line 615, in _expand_help
return self._get_help_string(action) % params
TypeError: %o format: an integer is required, not dict
Expected behavior
Escape the percentage sign so that the help can be printed.
Desktop (please complete the following information):
OS: macOS 10.15
Browser Chrome
Version 78.0.3904.87 |
Minimalize tests | [
"feature",
"help wanted"
] | The tests take very long to run; we should adjust them to be tests with minimal running time... See #504 (comment) |
Tensorboard Epoch Weird Chart | [
"bug"
] | Describe the bug
I am getting these weird graphs in my TensorBoard. It worked fine when I was doing model.cuda() manually, but the problem appeared when I shifted to the automated setup using gpus=1 and distributed_backend=None.
I have posted this graph below:
My code of trainer and lightning module is as follows:
Trainer:
"""
This file runs the main training/val loop, etc... using Lightning Trainer
"""
from pytorch_lightning import Trainer
from argparse import ArgumentParser
from research_seed.baselines.kd_baseline.kd_baseline import KD_Cifar
from pytorch_lightning.logging import TestTubeLogger
def main(hparams):
# init module
model = KD_Cifar(hparams)
logger = TestTubeLogger(
save_dir=hparams.save_dir,
version=hparams.version # An existing version with a saved checkpoint
)
# most basic trainer, uses good defaults
if hparams.gpus > 1:
dist = 'ddp'
else:
dist = None
# most basic trainer, uses good defaults
trainer = Trainer(
max_nb_epochs=hparams.epochs,
gpus=hparams.gpus,
nb_gpu_nodes=hparams.nodes,
early_stop_callback=None,
logger=logger,
default_save_path=hparams.save_dir,
distributed_backend=dist,
)
trainer.fit(model)
if __name__ == '__main__':
parser = ArgumentParser(add_help=False)
parser.add_argument('--epochs', default=100, type=int, help='number of total epochs to run')
parser.add_argument('--gpus', type=int, default=1)
parser.add_argument('--nodes', type=int, default=1)
parser.add_argument('--save-dir', type=str, default='./lightning_logs')
parser.add_argument('--version', type=int, required=True, help= "version number for experiment")
# give the module a chance to add own params
# good practice to define LightningModule speficic params in the module
parser = KD_Cifar.add_model_specific_args(parser)
# parse params
hparams = parser.parse_args()
main(hparams)
Lightning Module
"""
This file defines the core research contribution
"""
import os
import torch
from torch.nn import functional as F
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
from argparse import ArgumentParser
from research_seed.baselines.model.model_factory import create_cnn_model, is_resnet
import torch.optim as optim
import pytorch_lightning as pl
import numpy as np
from collections import OrderedDict
def str2bool(v):
if v.lower() in ('yes', 'true', 't', 'y', '1'):
return True
else:
return False
def load_model_chk(model, path):
chkp = torch.load(path)
new_state_dict = OrderedDict()
for k, v in chkp['state_dict'].items():
name = k[6:] # remove `model.`
new_state_dict[name] = v
model.load_state_dict(new_state_dict)
return model
class KD_Cifar(pl.LightningModule):
def __init__(self, hparams):
super(KD_Cifar, self).__init__()
# not the best model...
self.hparams = hparams
self.student = create_cnn_model(hparams.student_model, dataset=hparams.dataset)
self.teacher = create_cnn_model(hparams.teacher_model, dataset=hparams.dataset)
# Loading from checkpoint
self.teacher = load_model_chk(self.teacher, hparams.path_to_teacher)
self.teacher.eval()
self.student.train()
self.criterion = nn.CrossEntropyLoss()
self.train_step = 0
self.train_num_correct = 0
self.val_step = 0
self.val_num_correct = 0
def loss_fn_kd(self, outputs, labels, teacher_outputs):
"""
Credits: https://github.com/peterliht/knowledge-distillation-pytorch/blob/e4c40132fed5a45e39a6ef7a77b15e5d389186f8/model/net.py#L100
Compute the knowledge-distillation (KD) loss given outputs, labels.
"Hyperparameters": temperature and alpha
NOTE: the KL Divergence for PyTorch comparing the softmaxs of teacher
and student expects the input tensor to be log probabilities! See Issue #2
"""
alpha = self.hparams.alpha
T = self.hparams.temperature
loss = nn.KLDivLoss()(F.log_softmax(outputs/T, dim=1),
F.softmax(teacher_outputs/T, dim=1)) * (alpha * T * T) + \
F.cross_entropy(outputs, labels) * (1. - alpha)
return loss
def forward(self, x, mode):
if mode == 'student':
return self.student(x)
elif mode == 'teacher':
return self.teacher(x)
else:
raise ValueError("mode should be teacher or student")
def training_step(self, batch, batch_idx):
x, y = batch
y_teacher = self.forward(x, 'teacher')
y_student = self.forward(x, 'student')
loss = self.loss_fn_kd(y_student, y, y_teacher)
pred = y_student.data.max(1, keepdim=True)[1]
self.train_step += x.size(0)
self.train_num_correct += pred.eq(y.data.view_as(pred)).cpu().sum()
return {
'loss': loss,
'log' : {
'train_loss' : loss.item(),
'train_accuracy': float(self.train_num_correct*100/self.train_step),
}
}
def validation_step(self, batch, batch_idx):
self.student.eval()
x, y = batch
y_hat = self.forward(x, 'student')
val_loss = self.criterion(y_hat, y)
pred = y_hat.data.max(1, keepdim=True)[1]
self.val_step += x.size(0)
self.val_num_correct += pred.eq(y.data.view_as(pred)).cpu().sum()
return {
'val_loss': val_loss
}
def validation_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
log_metrics = {
'val_avg_loss': avg_loss.item(),
'val_accuracy': float(self.val_num_correct*100/self.val_step)
}
self.scheduler.step(np.around(avg_loss.item(),2))
# reset logging stuff
self.train_step = 0
self.train_num_correct = 0
self.val_step = 0
self.val_num_correct = 0
# back to training
self.student.train()
return {'val_loss': avg_loss, 'log': log_metrics}
def configure_optimizers(self):
# REQUIRED
# can return multiple optimizers and learning_rate schedulers
if self.hparams.optim == 'adam':
optimizer = torch.optim.Adam(self.student.parameters(), lr=self.hparams.learning_rate)
elif self.hparams.optim == 'sgd':
optimizer = torch.optim.SGD(self.student.parameters(), nesterov=True, momentum=self.hparams.momentum,
weight_decay=self.hparams.weight_decay, lr=self.hparams.learning_rate)
else:
raise ValueError('No such optimizer, please use adam or sgd')
self.scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min',patience=5,factor=0.5,verbose=True)
return optimizer
@pl.data_loader
def train_dataloader(self):
if self.hparams.dataset == 'cifar10' or self.hparams.dataset == 'cifar100':
transform_train = transforms.Compose([
transforms.Pad(4, padding_mode="reflect"),
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
else:
raise ValueError('Dataset not supported !')
trainset = torchvision.datasets.CIFAR10(root=self.hparams.dataset_dir, train=True,
download=True, transform=transform_train)
if self.hparams.gpus > 1:
dist_sampler = torch.utils.data.distributed.DistributedSampler(trainset)
else:
dist_sampler = None
return DataLoader(trainset, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers, sampler=dist_sampler)
@pl.data_loader
def val_dataloader(self):
if self.hparams.dataset == 'cifar10' or self.hparams.dataset == 'cifar100':
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
else:
raise ValueError('Dataset not supported !')
valset = torchvision.datasets.CIFAR10(root=self.hparams.dataset_dir, train=False,
download=True, transform=transform_test)
if self.hparams.gpus > 1:
dist_sampler = torch.utils.data.distributed.DistributedSampler(valset)
else:
dist_sampler = None
return DataLoader(valset, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers, sampler=dist_sampler)
@pl.data_loader
def test_dataloader(self):
if self.hparams.dataset == 'cifar10' or self.hparams.dataset == 'cifar100':
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
else:
raise ValueError('Dataset not supported !')
testset = torchvision.datasets.CIFAR10(root=self.hparams.dataset_dir, train=False,
download=True, transform=transform_test)
if self.hparams.gpus > 1:
dist_sampler = torch.utils.data.distributed.DistributedSampler(testset)
else:
dist_sampler = None
return DataLoader(testset, batch_size=self.hparams.batch_size, num_workers=self.hparams.num_workers, sampler=dist_sampler)
@staticmethod
def add_model_specific_args(parent_parser):
"""
Specify the hyperparams for this LightningModule
"""
# MODEL specific
parser = ArgumentParser(parents=[parent_parser])
parser.add_argument('--dataset', default='cifar10', type=str, help='dataset. can be either cifar10 or cifar100')
parser.add_argument('--batch-size', default=128, type=int, help='batch_size')
parser.add_argument('--learning-rate', default=0.001, type=float, help='initial learning rate')
parser.add_argument('--momentum', default=0.9, type=float, help='SGD momentum')
parser.add_argument('--weight-decay', default=1e-4, type=float, help='SGD weight decay (default: 1e-4)')
parser.add_argument('--dataset-dir', default='./data', type=str, help='dataset directory')
parser.add_argument('--optim', default='adam', type=str, help='Optimizer')
parser.add_argument('--num-workers', default=4, type=float, help='Num workers for data loader')
parser.add_argument('--student-model', default='resnet8', type=str, help='teacher student name')
parser.add_argument('--teacher-model', default='resnet110', type=str, help='teacher student name')
parser.add_argument('--path-to-teacher', default='', type=str, help='teacher chkp path')
parser.add_argument('--temperature', default=10, type=float, help='Temperature for knowledge distillation')
parser.add_argument('--alpha', default=0.7, type=float, help='Alpha for knowledge distillation')
return parser
I would be very grateful if someone could tell me what I'm doing wrong. |
Nvidia DALI integration | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Lightning handles a lot of parallelization and best practices for speed, and thus image processing and augmentations often become a bottleneck.
Describe the solution you'd like
Support or even integration for DALI
For reference
https://devblogs.nvidia.com/fast-ai-data-preprocessing-with-nvidia-dali/
I am willing to help implement, but this is a new API for me as well. |
Add resuming from specific checkpoint | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
In the current version, there is no way to resume training from a specific checkpoint (not the last checkpoint).
Sometimes (very often in my case), one needs to experiment with training under different hyperparameters (e.g. dropout rate, augmentation) starting from a specific checkpoint.
Describe the solution you'd like
Add a resume_from_checkpoint argument to the Trainer class.
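A usage sketch of the proposed argument (the argument name is the one proposed above; the checkpoint path is purely illustrative):
    from pytorch_lightning import Trainer

    trainer = Trainer(resume_from_checkpoint='checkpoints/_ckpt_epoch_42.ckpt')
    trainer.fit(model)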
Describe alternatives you've considered
I tried to use restore from Trainer class, but was not successful because it is meant to be used only when .fit() is called.
Additional context
FYI, I made a PR to demonstrate my idea: #516 |
calling trainer.test() between epochs has side effects in 0.5.3.2 | [
"bug"
] | New user of Lightning. First downloaded on Oct 15, and updated today, Nov 15, to 0.5.3.2. Working on Ubuntu 18.04.3 LTS, PyTorch 1.3, Python 3.6.8. No virtual environment.
I call trainer.test() in on_epoch_end() at intervals during training - this speeds up comparisons to other model architectures.
This worked perfectly in the prior version. The test sequence worked as expected, calling test_step() and test_end() per the spec. Summary reporting to Tensorboard, model status, etc - all as expected. After the call to test_end(), training continued at the next epoch. Behavior repeated with no problems each time trainer.test() called throughout training.
This does not happen as expected in the new version.
#1 If early stopping remains set to default, the training loop exits after the first call to trainer.test() completes. The exit appears normal - as if the call to trainer.test() was an early stopping condition.
#2 if early stopping is turned off by setting 'early_stop_callback=None', the first call to trainer.test() executes as expected, and training continues as expected. However, trainer.test() is now called after EVERY epoch. These extra calls are not originating in my code.
Here is the code making the call:
def on_epoch_end(self):
#graph errors: avg training and validation loss
if self.epochReport:
self.logger.experiment.add_scalars('Stats_epoch/loss', {'avg_trn_loss': mean([x.item() for x in self.trn_loss]), 'avg_val_loss': mean([x.item() for x in self.val_loss])}, self.global_step)
self.logger.experiment.add_scalars('Stats_epoch/acc', {'avg_trn_acc': mean([x for x in self.trn_acc]), 'avg_val_acc': mean([x for x in self.val_acc])}, self.global_step)
if (((self.current_epoch+1)%self.test_prd)==0) or ((self.current_epoch+1)==self.max_nb_epochs):
msg("on_epoch_end") # for debugging
self.trainer.test()
Here is the test related code:
def test_step(self, batch, batch_nb):
imgs, labels = batch
out = self.forward(imgs)
loss = self.loss(out, labels)
# stats: calc accuracy, save loss, acc
# accuracy by category
out_idx = torch.argmax(out, 1)
c = (out_idx==labels)
for i in range(len(c)):
self.cls_cor[labels[i]] += c[i].item()
self.cls_tot[labels[i]] += 1
# acc overall
acc = ((labels==out.argmax(1)).sum()).item()/labels.shape[0]
return {'test_loss': loss}
def test_end(self, outputs):
avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
# graph test loss, accuracy
if self.tstReport:
# get overall accuracy, add text; get accuracy per category, add text; log, reset
tst_acc = self.cls_cor.sum() / self.cls_tot.sum()
text = f'Overall accuracy of the network on {int(self.cls_tot[0]):d} test images: {100.0 * tst_acc:4.1f}% \n'
for i in range(len(self.cls_cor)):
text += f'Accuracy of {self.labels[i]} : {100.0 * (self.cls_cor[i] / self.cls_tot[i]):4.1f}% \n'
self.logger.experiment.add_text( 'Test: accuracy', text, self.global_step)
self.logger.experiment.add_scalars('Stats_epoch/tst_loss', {'avg tst_loss': avg_loss}, self.global_step)
self.logger.experiment.add_scalars('Stats_epoch/tst_acc', {'avg tst_acc' : tst_acc*100.0}, self.global_step)
# clear data if reporting or not
for i in range(len(self.cls_cor)):
self.cls_cor[i] = self.cls_tot[i] = 0
return {'avg_test_loss': avg_loss}
Any input appreciated.
seth |
Is requirement numpy==1.16.4 really needed ? | [] | Hi,
While installing Lightning via pip I saw that the numpy requirement was pinned to version 1.16.4:
ERROR: pytorch-lightning 0.5.3.2 has requirement numpy==1.16.4, but you'll have numpy 1.17.4 which is incompatible.
After a quick scroll through the source code I'm wondering: is there a reason why this requirement is so specific?
Also, wouldn't it make more sense to use PyTorch tensors to try and drop the numpy requirement? Or at least, drop the == 1.16.4 |
when validation_step/end not defined, val_loss still gets logged | [
"bug"
] | Describe the bug
If validation_step/end is not implemented by the user, Lightning still shows a mysterious validation loss in tqdm. Where does this come from?
To Reproduce
Steps to reproduce the behavior:
Take MNIST example in "basic_examples" folder (current master branch)
Comment out validation_step and validation_end and run the gpu_template
Wait until epoch 2 starts
You will see:
Epoch 2: 14%|█▍ | 150/1095 [00:10<01:05, 14.35batch/s, batch_nb=149, gpu=0, loss=0.177, train_loss=0.261, v_nb=1, val_acc=0.939, val_loss=0.19]
Expected behavior
val_acc and val_loss should not be there. Where do these numbers come from?
Desktop (please complete the following information):
OS: Ubuntu 18.04
Version: 0.5.3.2 (via pip install) |
gan.py template fails to run with GPU | [
"bug"
] | Describe the bug
When running the gan.py script, with the only change in the script being
trainer = pl.Trainer(max_nb_epochs=10, gpus=1, distributed_backend='dp')
or with gpus=[0],
the script fails with an error because the target passed to the loss is not on the GPU:
File "/home/marko/anaconda2/envs/py36_torch13/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/home/marko/anaconda2/envs/py36_torch13/lib/python3.6/site-packages/pytorch_lightning/pt_overrides/override_data_parallel.py", line 58, in forward
return self.module.training_step(*inputs[0], **kwargs[0])
File "gan.py", line 116, in training_step
g_loss = self.adversarial_loss(self.discriminator(self.generated_imgs), valid)
File "gan.py", line 89, in adversarial_loss
return F.binary_cross_entropy(y_hat, y)
File "/home/marko/anaconda2/envs/py36_torch13/lib/python3.6/site-packages/torch/nn/functional.py", line 2065, in binary_cross_entropy
input, target, weight, reduction_enum)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'target' in call to _thnn_binary_cross_entropy_forward
Epoch 1: 0%| | 0/938 [00:00<?, ?batch/s]
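One common workaround (a hedged sketch, not taken from this issue) is to create the BCE targets with type_as so they follow the device and dtype of the images they are compared against:
    import torch

    def adversarial_targets(imgs, real=True):
        # illustrative helper, not part of the template: the targets end up on the
        # same device as imgs, so binary_cross_entropy no longer mixes CPU/GPU tensors
        value = 1.0 if real else 0.0
        return torch.full((imgs.size(0), 1), value).type_as(imgs)
Inside training_step one would then use valid = adversarial_targets(imgs) before computing self.adversarial_loss(self.discriminator(self.generated_imgs), valid).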
To Reproduce
Steps to reproduce the behavior:
Go to gan.py
Change the trainer= pl.Trainer() to above on a GPU system
Desktop (please complete the following information):
OS: Ubuntu 16.04 |
Checkpoint gives error | [
"bug"
] | Hi,
I wonder if we can save only the best model with the lowest validation error, and not save the other checkpoints.
I took a look at checkpoint_callback's save_best_only (below); it seems that this saves the best model at every epoch (because the file name changes at every epoch). So I wonder if we can save only the best over the whole training process. Thanks!
checkpoint_callback = ModelCheckpoint( filepath=os.getcwd(), save_best_only=True, verbose=True, monitor='val_loss', mode='min', prefix='' ) |
Checkpoint saving period=10 off by one error | [
"bug"
] | Describe the bug
Checkpoint saving period=10 sometimes is off by one:
./checkpoints/_ckpt_epoch_10.ckpt
./checkpoints/_ckpt_epoch_20.ckpt
./checkpoints/_ckpt_epoch_30.ckpt
./checkpoints/_ckpt_epoch_40.ckpt
./checkpoints/_ckpt_epoch_50.ckpt
./checkpoints/_ckpt_epoch_60.ckpt
./checkpoints/_ckpt_epoch_70.ckpt
./checkpoints/_ckpt_epoch_80.ckpt
./checkpoints/_ckpt_epoch_90.ckpt
./checkpoints/_ckpt_epoch_100.ckpt
./checkpoints/_ckpt_epoch_110.ckpt
./checkpoints/_ckpt_epoch_120.ckpt
./checkpoints/_ckpt_epoch_130.ckpt
./checkpoints/_ckpt_epoch_140.ckpt
./checkpoints/_ckpt_epoch_150.ckpt
./checkpoints/_ckpt_epoch_160.ckpt
./checkpoints/_ckpt_epoch_170.ckpt
./checkpoints/_ckpt_epoch_180.ckpt
./checkpoints/_ckpt_epoch_191.ckpt
Another time, it incremented at 61 and then again at 192.
To Reproduce
No idea why this is happening. It's very random!
Expected behavior
The expected behavior is that it saves every 10 epochs exactly.
Screenshots
None.
Desktop (please complete the following information):
OS: Ubuntu 18.04.3 LTS
Additional context
Saving a checkpoint every 10 epochs.
checkpoint = ModelCheckpoint(
filepath=os.path.join(os.getcwd(), 'checkpoints'),
verbose=True,
save_best_only=False,
save_weights_only=False,
period=10
)
if setup.cuda_is_available():
trainer = Trainer(
distributed_backend='dp',
gpus=setup.cuda_device_count(),
checkpoint_callback=checkpoint,
early_stop_callback=None,
max_nb_epochs=params.epochs
)
else:
trainer = Trainer(
checkpoint_callback=checkpoint,
early_stop_callback=None,
max_nb_epochs=params.epochs
) |
ValueError: bad value(s) in fds_to_keep, when attempting DDP | [
"bug"
] | I can't get DDP working without getting the following error:
Traceback (most recent call last):
File "train.py", line 86, in <module>
main(config)
File "train.py", line 41, in main
trainer.fit(model)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 343, in fit
mp.spawn(self.ddp_train, nprocs=self.num_gpus, args=(model,))
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 162, in spawn
process.start()
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 59, in _launch
cmd, self._fds)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/util.py", line 417, in spawnv_passfds
False, False, None)
ValueError: bad value(s) in fds_to_keep
What I have tried that didn't work:
Python 3.7
Python 3.6
pytorch 1.1.0
pytorch 1.2.0
Downgrading "scikit-learn" which had helped in unrelated projects according to results on Google
Lightning 0.5.3.2
Lightning master version
Lightning 0.5.2.1
CUDA 9.0
CUDA 9.2
CUDA 10.0
Tried removing visdom from the project
The error occurs on two servers I tried on, one with 4 Titan X cards and one with 8 Tesla V100 running Ubuntu 18.04.3 LTS.
I suspect that something in my model is triggering it and would appreciate ideas. I can not share the source code though. The model works in dp and single gpu mode. |
Refactoring? | [
"feature",
"help wanted"
] | Thanks for lightning, it's a nice tool.
Trainer already has around 30 arguments and the PRs want to make it grow (see #516 and #539 for example). The more features, the more arguments and that's not really scalable IMHO.
There are things which could be grouped into objects instead, are you considering refactoring Trainer?
That would help the community see where the project is going and in which case to use it. |
object has no attribute 'add_scalar' | [
"question"
] | I'm trying to use the default logger to record scalars for tensorboard using add_scalar but I get:
self.logger.add_scalar('loss/train_loss', 42, 42)
AttributeError: 'TestTubeLogger' object has no attribute 'add_scalar'
The docs say TestTubeLogger inherits from SummaryWriter so add_scalar should be ok.
Can anyone help?
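A hedged suggestion, consistent with other snippets in this dump (e.g. self.logger.experiment.add_scalars in the TensorBoard issue above): the SummaryWriter-style methods live on the logger's underlying experiment object, not on the logger itself.
    from torch.nn import functional as F

    def training_step(self, batch, batch_nb):
        x, y = batch
        loss = F.cross_entropy(self.forward(x), y)
        # go through the wrapped experiment for SummaryWriter-style calls
        self.logger.experiment.add_scalar('loss/train_loss', loss.item(), self.global_step)
        return {'loss': loss}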
What's your environment?
conda version (no venv) : 4.7.12
PyTorch version: 1.3.1
Lightning version: 0.5.3.2
Test-tube version: 0.7.3 |
0.5.3 broke DDP vs 0.5.2.1 | [
"bug"
] | Using ddp on 0.5.3.2 would cause a gpu worker to crash at an epoch transition (start of epoch 7 for me) and hang the whole training process. I rolled back to 0.5.2.1 (used for my last project) and the issue was gone. Single gpu training works fine on both versions and toggling amp makes no difference. I'm using pytorch 1.3 in NGC 19.10-py3.
Any ideas on how to debug this, since it just hangs without an error code?
Edit:
Should add that I don't have any special hooks/functions running at the start of epochs |
Full callback handling | [] | Is your feature request related to a problem? Please describe.
I started deep learning using fastAI, and despite all its drawbacks there is one thing I found very handy when I wanted to tweak the training loop: callbacks. This is far less important here since we have control over training_step, but I was, for instance, wondering how I could do something when training begins (like log my model's graph). Maybe I'm missing something, but it seems to me that there is no simple way to do that.
The fact that there is a Callback class defined (used for early stopping and checkpointing) makes me think that it was somewhat planned at some point, but was never fully integrated (or I may be missing something).
Describe the solution you'd like
Implement callback handling with a few methods (the one defined in Callback) that can be called at specific points during the training loop. Early stopping and checkpointing can be added as default callbacks and integrated within this new framework. The idea is to have something simple to interact with the training loop at specific points.
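A hedged sketch of the kind of interface described above (method and argument names are illustrative, not the library's API at the time):
    class Callback:
        def on_train_begin(self, trainer, module): ...
        def on_train_end(self, trainer, module): ...
        def on_epoch_begin(self, trainer, module): ...
        def on_epoch_end(self, trainer, module): ...

    class PrintOnTrainBegin(Callback):
        # example user callback: run something once when training starts
        def on_train_begin(self, trainer, module):
            print(f'Starting training of {module.__class__.__name__}')
The trainer would keep a list of such callbacks and invoke the matching hook at each point in the loop; early stopping and checkpointing could then be reimplemented as default callbacks.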
Describe alternatives you've considered
Let things as they are and just add the callback directly in pl.LightningModule as optional methods to be implemented. They then just need to be called at the right time during the training loop. It doesn't change much compared to traditional callbacks but it may be closer to the design of pytorch-lightning.
If you find that there is any value in this idea I could try and write a PR in the following weeks. |
The gan template does not seem to set properly the gradients of the discriminator to zero | [
"question"
] | In each iteration at the beginning of the discriminator training, the gradient is not set to zero. To investigate, just print the gradients of the discriminator after the line if optimizer_i == 1:.
The optimizer.zero_grad() for discriminator is then only called once all the gradients are accumulated, including the ones coming from the update of the generator.
By the way, it may be helpful to indicate somewhere that the script is performing simultaneous gradient descent instead of alternating updates.
I am wondering if someone succeeds in training a GAN with this script because this does not seem to be possible to me. Thanks in advance for the clarification. |
Update docs to be clear on --gpus behaviour. | [
"feature",
"help wanted"
] | Final resolution:
The resolution then should be alternative 1, since we agree that we don't want to get rid of the 'number of gpus' functionality (which was the originally proposed aggressive solution).
If we detect --gpus 0 with int, a warning should suffice alongside updated docs.
Is your feature request related to a problem? Please describe.
Trainer.gpus can currently be used to specify a number of GPUs or specific GPUs to run on. This makes values like
0 (run on CPU), "0" (Run on GPU 0), [0] (run on GPU 0)
confusing for newcomers.
Describe the solution you'd like
As an aggressive solution to this issue, we move to have gpus always specify specific GPUs as that is the more encompassing case. Going forward, we can put a deprecation notice up when a single int is passed in:
"In the future, gpus to specify specific GPUs the model will run on. If you would like to run on CPU, pass in None or an empty list."
Then, in the next breaking version, we can simplify the behaviour.
Describe alternatives you've considered
Keep as is: This is a viable solution. We could just document more carefully. However, anecdotally, this is confusing for our team and most likely other users.
Have gpus mean number of GPUs: There are many cases where researchers need to run multiple experiments at the same time on a multi-gpu machine. Being able to specify which GPU easily would be useful. As an argument for this, one could use 'CUDA_VISIBLE_DEVICES' to do this.
Create a new num_gpus argument: This could make it self-documenting and allow for both workflows. However, it will be an additional argument to maintain. |
Using print_nan_grads in the Trainer results in an error | [
"bug"
] | Describe the bug
When using
print_nan_grads=True
in the Trainer, I am getting the error below.
trainer.fit(lstm_model)
File "/Users/anaconda3/envs/snorkel/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 364, in fit
self.run_pretrain_routine(model)
File "/Users/anaconda3/envs/snorkel/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 471, in run_pretrain_routine
self.train()
File "/Users/anaconda3/envs/snorkel/lib/python3.6/site-packages/pytorch_lightning/trainer/train_loop_mixin.py", line 60, in train
self.run_training_epoch()
File "/Users/anaconda3/envs/snorkel/lib/python3.6/site-packages/pytorch_lightning/trainer/train_loop_mixin.py", line 99, in run_training_epoch
output = self.run_training_batch(batch, batch_nb)
File "/Users/anaconda3/envs/snorkel/lib/python3.6/site-packages/pytorch_lightning/trainer/train_loop_mixin.py", line 219, in run_training_batch
self.print_nan_gradients()
File "/Users/anaconda3/envs/snorkel/lib/python3.6/site-packages/pytorch_lightning/trainer/training_tricks_mixin.py", line 16, in print_nan_gradients
if torch.isnan(param.grad.float()).any():
AttributeError: 'NoneType' object has no attribute 'float'
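A hedged sketch of the guard being suggested: skip parameters whose .grad is None before checking for NaNs.
    import torch

    def print_nan_gradients(model):
        # frozen or unused parameters have grad == None; only inspect the rest
        for name, param in model.named_parameters():
            if param.grad is not None and torch.isnan(param.grad.float()).any():
                print(name, param.grad)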
To Reproduce
Steps to reproduce the behavior:
If a param object does not have .grad, then that object should not be checked for NaNs. |
failing tests for Windows (Appveyor) | [
"bug"
] | Common bugs
There are several issues related to the Windows path structure
To Reproduce
Steps to reproduce the behavior:
see our CI results - https://ci.appveyor.com/project/Borda/pytorch-lightning/builds/29252138/job/ip4j5poawhphfd7g
Expected behavior
Fix bugs and enable Appveyor CI for PR check |
Support custom weighted loss in gradient accumulation | [
"feature",
"help wanted",
"won't fix"
] | Is your feature request related to a problem? Please describe.
When using triplet loss, there could be some "easy triplets" contributing 0.0 to the batch loss.
Currently, pytorch-lightning assumes all samples in a mini-batch contribute a non-zero value to the closure_loss. This is an issue for online triplet mining since an arbitrary portion of a batch could contribute 0.0 to the batch loss. Simply re-weighting the batch loss by 1 / self.accumulate_grad_batches could lead to gradient oscillation.
https://github.com/williamFalcon/pytorch-lightning/blob/62f6f92fdf6343e17336f0a28247ff95d35bac3a/pytorch_lightning/trainer/train_loop_mixin.py#L352-L354
Describe the solution you'd like
Support returning an optional callback metric batch_loss_efficiency_ratio and re-weight the loss based on this factor. I have implemented a demo for this idea and could submit a PR if you think this is worth following.
huntzhan@9d8b78c
And the usage:
In __init__
self.loss = nn.TripletMarginLoss(margin=self.hparams.triplet_loss_margin, reduction='none')
In training_step
sample_losses = self.meta_step(batch, batch_idx)
non_zero_mask = (sample_losses != 0)
batch_loss_efficiency_ratio = float(non_zero_mask.sum()) / sample_losses.shape[0]
if batch_loss_efficiency_ratio > 0:
loss = sample_losses[non_zero_mask].mean()
else:
loss = torch.tensor(0.0, requires_grad=True)
tqdm_dict = {
'train_loss': loss,
'batch_loss_efficiency_ratio': batch_loss_efficiency_ratio,
}
return {
'loss': loss,
'progress_bar': tqdm_dict,
'log': tqdm_dict,
}
Describe alternatives you've considered
N/A
Additional context
N/A |
GAN training with PyTorch Lightning is broken. | [
"bug"
] | I was trying to train a DCGAN on my dataset, but it wouldn't work by any means until I detached the training logic from Lightning and ran the code without it. It was not working while my training logic was in the Lightning module. I checked the GAN examples in the docs and also the multiple-optimizer machinery. After 2 days of headaches, source code inspections and numerous print statements in the Lightning source code, I found the culprit.
GAN training with PyTorch Lightning is simply broken. The culprit is calling only optimizer.zero_grad() after optimizer.step(), since that clears the gradients of the generator or the discriminator only. Before the other network's weights are updated (say, the generator's), loss.backward() is called and it updates the gradients of all parameters, but when optimizer.zero_grad() is called after the parameters are updated, only the generator's gradients are cleared. So when the discriminator's loss.backward() comes around, leftover gradients have accumulated in the discriminator's parameters and it just messes everything up. Any kind of GAN training is impossible with these settings. That's why you cannot find any GAN implementations with PyTorch Lightning on the internet.
Possible solutions
Putting a warning in the docs or on the console after detecting that multiple optimizers are defined, like "if you train a GAN, don't forget to zero all gradients by overriding the optimizer_step method or just reset the gradients in your training loop before returning the loss dictionary" (would be weird, honestly).
Calling zero_grad before calling backward in optimizer_closure (but it would mess with gradient accumulation, I suppose).
Calling self.zero_grad() instead of optimizer.zero_grad() by default in optimizer_step (see the sketch after this list).
Calling zero_grad for all optimizers of the LightningModule after the gradient step by default, so that the hook on_before_zero_grad is called whenever appropriate.
(By the way, on_before_zero_grad does not seem to be called anywhere right now. Maybe the issue can be fixed with a default behavior of that method, alternatively.)
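A hedged sketch of the last two options (the hook signature follows the 0.5.x-era examples, but the exact arguments may differ):
    def optimizer_step(self, epoch_nb, batch_nb, optimizer, optimizer_i, second_order_closure=None):
        optimizer.step()
        # clear the gradients of every parameter of the module, not only those
        # owned by the optimizer that just stepped
        self.zero_grad()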
I just sent a pull request which implements the last option and updates the docs. |
GAN example: Only one backward() call? | [
"feature",
"question",
"let's do it!"
] | In the PyTorch GAN tutorial https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html there are two backward() calls for the discriminator. How do you ensure this with your structure, where backward() gets called after the training step?
Best,
Alain |
Custom callbacks | [
"feature",
"help wanted"
] | It would be great if the Trainer could support any custom callbacks that follow the Callback structure. |
Why are hparams mandatory in the LightningModule definition? | [
"question"
] | When I don't pass hparams in the LightningModule, it doesn't allow me to load a previously saved model using a checkpoint. In particular, hparams can't be passed in Jupyter Notebook/Lab, so how do I use it in such a use case (for testing, etc.)?
I am able to train a model, checkpoint it, but I can't load it when I restart the kernel. |
Early Stopping kicks in at min_epochs + 2 instead of min_epochs | [
"bug"
] | Describe the bug
I was working on a fix for #524 and found that early stopping starts to kick in at epoch 3 despite min_epochs = 1.
To Reproduce
run basic_examples/gpu_template.py and log the callback calls every epoch.
Expected behavior
When setting min_epochs=n (counting from 1), we should evaluate early stopping at the end of epoch n.
Proposed fix:
I propose to change this line in the training loop:
met_min_epochs = epoch > self.min_epochs
to
met_min_epochs = epoch >= self.min_epochs - 1
Why the "-1"? The epoch variable in the training loop starts at 0, but the Trainer argument min_epochs starts counting at 1.
Why the ">="? The early stop check is done at the end of each epoch, hence the epoch counter will be = to min_epochs after min_epochs have passed.
Desktop (please complete the following information):
OS: Linux
Version: master |
empty_cache calls in training occupy memory on gpu #0 | [
"bug"
] | Training on a GPU other than GPU #0 allocates a ~500 MB chunk of memory on GPU #0; that memory is totally unused and should not be allocated at all. Debugging shows that the initial allocation happens at this line: https://github.com/williamFalcon/pytorch-lightning/blob/2f01c03b38fc16618aa9839d39e0ae5a142c0559/pytorch_lightning/trainer/trainer.py#L517
A bit of research led me to this issue in the PyTorch repo: pytorch/pytorch#25752. This is not the only place where empty_cache is called, by the way. I did not check, but other calls probably work the same way.
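A possible workaround (an assumption on my part, not verified against the linked issue): scope the cache clearing to the GPU actually in use so that no CUDA context gets created on GPU #0.
    import torch

    def empty_cache_on(device_idx):
        # torch.cuda.device temporarily switches the current device, so the call
        # below is issued against device_idx rather than the default GPU 0
        if torch.cuda.is_available():
            with torch.cuda.device(device_idx):
                torch.cuda.empty_cache()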
For now I duct-tape-fixed it for myself by running my script with CUDA_VISIBLE_DEVICES=2 and setting gpus=[0]. Not sure how to fix it properly, though. Would be glad if someone would take a look. |
Comet PAPI Deprecated | [
"bug"
] | Use of the Comet API logger reports an unnecessary deprecation warning relating to the use of comet_ml.papi, rather than the newer comet_ml.api.
Example:
COMET WARNING: You have imported comet_ml.papi; this interface is deprecated. Please use comet_ml.api instead. For more information, see: https://www.comet.ml/docs/python-sdk/releases/#release-300 |
How to share y_hat on a batch with multi optimizers? | [
"question",
"won't fix"
] | I tried to implement EdgeConnect with pytorch-lightning, and my implementation is this.
But this code is very slow in training because the very heavy generator is called twice.
I want to share y_hat (the outputs) across the optimizers within a batch, but I have no idea how.
A fast training implementation, done forcibly, is here (this is not elegant code...).
How should I use pytorch-lightning's functions to do this?
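A hedged sketch of one way to do this (borrowing the pattern the GAN template uses with self.generated_imgs): run the heavy forward pass once per batch, cache it on self, and reuse it for the second optimizer. The loss helpers are assumed names, the extra optimizer-index argument name may differ between versions, and detach() is only right if the second loss should not backpropagate through the generator.
    def training_step(self, batch, batch_nb, optimizer_idx):
        x, y = batch
        if optimizer_idx == 0:
            self.y_hat = self.forward(x)  # heavy generator runs once per batch
            return {'loss': self.generator_loss(self.y_hat, y)}                # assumed helper
        if optimizer_idx == 1:
            return {'loss': self.discriminator_loss(self.y_hat.detach(), y)}   # assumed helper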
Thank you |
Typo in README | [] | In the section https://github.com/williamFalcon/pytorch-lightning#what-does-lightning-control-for-me there is a large jpg. In the section "DATA SETUP" it says "Augmets" instead of "Augments".
If that image was generated from LaTeX, or if it was an svg, anybody in the community could edit it. |
Cyclic learning rate finder as a part of Trainer | [
"feature",
"help wanted"
] | 🚀 Feature
Learning rate finder to plot lr vs loss relationship for Trainer and find a good starting learning rate.
Motivation
Cyclical Learning Rates for Training Neural Networks by Leslie N. Smith documents how to find a good learning rate for training with CyclicLR scheduler.
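A minimal sketch of the LR range test the paper describes (an illustration, not the proposed Trainer API): sweep the learning rate exponentially over a number of steps, record the loss at each step, and pick a starting value from the loss-vs-lr plot.
    import itertools

    def lr_range_test(model, optimizer, loss_fn, loader, min_lr=1e-7, max_lr=1.0, num_steps=100):
        gamma = (max_lr / min_lr) ** (1.0 / num_steps)
        lr, history = min_lr, []
        batches = itertools.cycle(loader)
        for _ in range(num_steps):
            for group in optimizer.param_groups:
                group['lr'] = lr
            x, y = next(batches)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
            history.append((lr, loss.item()))
            lr *= gamma
        # plot loss vs. lr and pick a value just before the loss starts to blow up
        return history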
Pitch
Adding a method to the Trainer class:
find(): runs the CLR finder and plots the graph in the Logger. |
num_training_batches rounds down, causing 0 batches count | [
"bug"
] | 🐛 Bug
self.num_training_batches is defined using int here, which rounds it down to 0 when a small training_percent_check or overfit_pct is used, even though at least 1 batch is still processed.
This does not cause any errors in "vanilla" lightning, but crashes any user code that uses the number of batches in a division (for example to get an average of some quantity over batches).
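A tiny illustration of the problem and the suggested guards (the numbers are made up):
    import math

    num_training_batches = int(0.001 * 500)          # == 0, later divisions by it crash
    num_training_batches = max(1, int(0.001 * 500))  # suggested guard -> 1
    num_training_batches = math.ceil(0.001 * 500)    # alternative suggestion -> 1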
To Reproduce
Steps to reproduce the behavior:
Set the training percentage to a small enough percentage that the number of examples is smaller than the batch size for a given dataset.
This would require a very simple fix, either to use math.ceil() or max(1, self.num_training_batches), depending on how the quantity is expected to behave in the rest of the code.
Trainers' .fit() mimics .test() after first call to .test() + .test() doesn't print metrics | [
"bug",
"help wanted"
] | 🐛 Bug
After the first call to Trainer.test(), all subsequent calls to Trainer.fit() exhibit the output behavior of Trainer.test().
Trainer.test() doesn't print the metrics returned by LightningModule.test_end() (and returns None).
To Reproduce
Run the following code in a Python 3.6.8 env with torch=1.3.1 and pytorch_lightning=0.5.3.2 installed:
Code sample
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
import pytorch_lightning as pl
print(torch.__version__, pl.__version__)
class TestModule(pl.LightningModule):
def __init__(self, bs):
super(TestModule, self).__init__()
self.fc = nn.Linear(2, 2)
self.bs = bs
self.criterion = nn.MSELoss()
def forward(self, x):
x = self.fc(x)
return x
def training_step(self, batch, batch_nb):
x, y = batch
y_hat = self.forward(x)
return {'loss': self.criterion(y_hat, y)}
def test_step(self, batch, batch_nb):
x, y = batch
y_hat = self.forward(x)
return {'test_loss': self.criterion(y_hat, y)}
def test_end(self, outputs):
test_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
test_metrics = {'test_loss': test_loss}
return {'progress_bar': test_metrics, 'log': test_metrics}
def configure_optimizers(self):
self.optimizer = optim.Adam(self.parameters())
return self.optimizer
@pl.data_loader
def train_dataloader(self):
x = torch.rand(1000, 2) - 0.5
y = torch.sign(x)
ds = TensorDataset(x, y)
dl = DataLoader(ds, batch_size=self.bs, shuffle=True)
return dl
@pl.data_loader
def test_dataloader(self):
x = torch.rand(100, 2) - 0.5
y = torch.sign(x)
ds = TensorDataset(x, y)
dl = DataLoader(ds, batch_size=self.bs * 2)
return dl
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = TestModule(bs=32).to(device)
epochs = 10
trainer = pl.Trainer(gpus=-1, max_nb_epochs=epochs, min_nb_epochs=epochs)
trainer.fit(net)
trainer.test()
trainer.fit(net)
trainer.fit(net)
Code output
Output of the sequence of calls fit -> test -> fit -> fit
1.3.1 0.5.3.2
Epoch 10: 100%|██████████| 32/32 [00:00<00:00, 272.26batch/s,
batch_nb=31, gpu=0, loss=1.009, v_nb=65]
Testing: 100%|██████████| 2/2 [00:00<00:00, 222.23batch/s]
Testing: 100%|██████████| 2/2 [00:00<00:00, 357.40batch/s]
Testing: 100%|██████████| 2/2 [00:00<00:00, 380.52batch/s]
Expected behavior
Trainer.fit() should always run model training, even after it was already tested once via Trainer.test().
Trainer.test() should return metrics produced by LightningModule.test_end(). What is processing the whole test dataset through the model good for if not collecting performance metrics?
Environment
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.1.243
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: TITAN V
Nvidia driver version: 418.87.00
cuDNN version: Probably one of the following:
/usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.7
Versions of relevant libraries:
[pip3] botorch==0.1.4
[pip3] gpytorch==0.3.6
[pip3] numpy==1.17.4
[pip3] pytorch-lightning==0.5.3.2
[pip3] torch==1.3.1
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.4.2
[conda] Could not collect |
Semantic Segmentation example | [
"feature"
] | 🚀 Feature
Semantic Segmentation example code with dataloading and training implemented.
Motivation
There are not many examples available for PyTorch Lightning (PL) as of now. A reproducible example illustrating semantic segmentation will be helpful for users to understand how everything works. I had to look around a lot while using PL, i.e. the documentation is not sufficient yet. So, this will improve the situation.
Pitch
I have already implemented semantic segmentation using PL in my repository here using the KITTI dataset and a few models (like ResNet50 and 101 FCN, DeepLabv3 - these are available directly in torchvision). So, the example will be similar i.e. it will illustrate the dataloading, training step and optimizer configuration steps clearly.
I can open a pull request implementing this if the idea is acceptable.
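A hedged skeleton of what such an example could look like (model and loss details are illustrative, though torchvision does provide fcn_resnet50; hparams fields like num_classes and lr are assumed):
    import torch
    import torch.nn.functional as F
    import pytorch_lightning as pl
    from torchvision.models.segmentation import fcn_resnet50

    class SegModel(pl.LightningModule):
        def __init__(self, hparams):
            super(SegModel, self).__init__()
            self.hparams = hparams
            self.net = fcn_resnet50(num_classes=hparams.num_classes)

        def forward(self, x):
            return self.net(x)['out']

        def training_step(self, batch, batch_nb):
            img, mask = batch
            out = self.forward(img)
            loss = F.cross_entropy(out, mask, ignore_index=250)  # 250 = 'void' label, illustrative
            return {'loss': loss}

        def configure_optimizers(self):
            return torch.optim.Adam(self.net.parameters(), lr=self.hparams.lr)
The dataloaders would be added via the @pl.data_loader-decorated train_dataloader/val_dataloader methods shown in the other issues above.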
Alternatives
The existing examples are too complicated (even the basic ones have way too much functionality mentioned for someone starting out with PL). The basic one is based on MNIST classification while the domain one is on GAN, so this can be an addition to the domain-specific examples.
Additional context
Check my repo. It took me a lot of time to implement this simple thing, and it wouldn't have been possible using only the pl-examples in the repo (I had to look around a lot). |
tensorflow version | [
"question"
] | Hi,
I am getting the following error and I was wondering what TensorFlow version is currently supported, since I am using 1.11.0.
module 'tensorflow.io' has no attribute 'gfile'
Thanks! |
Step-wise processing, better support for `IterableDataset`, and others | [
"feature",
"help wanted"
] | I have been using PTL for a month. It is nice and saves a lot of time, and I intend to use it in future projects. That said, I have a list of feature requests and improvements that would be very helpful to have in order to support a wider set of use cases. I am not sure what the best format for this list is, so I will just write it here.
Better support for IterableDataset
In addition to val_check_interval, we also need num_val_steps and num_train_steps. num_val_steps is needed because the validation set is also using an IterableDataset. num_train_steps is needed because you usually need to carefully pick number of gradient updates which has some interaction with the learning rate scheduler (num_train_steps=inf is not sufficient)
For validation, keep the same DataLoader object instead of instantiating a new one on each validation cycle, because it is costly to construct new workers each time.
Some of the debugging features that run on a small percentage of the training/validation don't work because they are assuming a Dataset not IterableDataset
Step-wise processing
Thinking of the "gradient update" as the unit of training instead of (or in addition to) an epoch. A typical use case is pretraining a language model, where you want to control for number of gradient updates, not epochs (e.g. check the RoBERTa/BERT papers).
Add an option to do scheduler.step() after every gradient update
Have self.trainer.num_train_steps be available for the LR scheduler. The scheduler is usually a function of number of steps
Checkpointing the current step, and resume from that step. Again, this is important to get the right scheduler, and also important for the tensorboard logging. It will be nice to resume from the same training example, but this is less important.
Misc. These are smaller points, but nice to have.
Having the default tensorboard logging include LR, time per steps, allgradnorm (check fairseq)
Trainer(gpus=2) ignores CUDA_VISIBLE_DEVICES and always picks the first two gpus.
with ddp, sync validation stats across processes. This is a common mistake, and it will be nice to guard users against it. It is something like having the following line at the end of validation_end (a fuller sketch follows after this list):
val_loss = torch.distributed.all_reduce(val_loss, op=torch.distributed.ReduceOp.SUM)/self.trainer.world_size
various logs refer to "batches", and it is not clear if it is a "batch" or a "step". They are usually the same except with gradient accumulation. Personally, I prefer the word step because it eliminates that confusion.
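A slightly fuller sketch of the validation-stat sync mentioned in the ddp item above (all_reduce modifies the tensor in place, so it is cloned first; the mean is taken over world_size):
    import torch.distributed as dist

    def sync_mean(value, world_size):
        t = value.detach().clone()
        dist.all_reduce(t, op=dist.ReduceOp.SUM)
        return t / world_size
At the end of validation_end one would then do val_loss = sync_mean(val_loss, self.trainer.world_size).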
Thanks for the helpful library and sorry for the long list. |
powersgd | [
"feature",
"help wanted",
"discussion",
"waiting on author"
] | The PowerSGD paper shows promising scaling for distributed training.
https://arxiv.org/abs/1905.13727
I'm interested in porting this to pytorch-lightning; do you think it's a good idea?
Thanks |
add "no logging" option | [
"feature",
"help wanted"
] | I may be wrong, but I see no way to entirely avoid logging during training, which sometimes may be convenient for quick exploratory experiments.
I suggest to have
trainer = Trainer(logger=None)
construct a trainer that does no logging at all |
Turn off validation if val_percent_check=0 | [
"bug"
] | As was suggested by @williamFalcon in #536 (comment), val_percent_check=0 should turn off the validation loop. But currently it will not work because of
self.num_val_batches = max(1, self.num_val_batches)
So I suggest fixing it. Moreover, I suggest more thorough processing of train_percent_check and val_check_interval:
We should require all *_percent_check and val_check_interval to be in the range [0.0; 1.0].
Final num_val_batches can be equal to 0 that will effectively disable validation.
Final num_train_batches and num_test_batches should be at least 1. (See also #631)
Final val_check_interval should be at least 1.
The user can try to turn off validation by setting val_check_interval to a big value. Maybe in that case we should print a helpful message that validation can be turned off by setting val_percent_check=0.
Any thoughts? |
What is hparams exactly? | [
"question"
] | Hi, thanks for the nice product again.
From #525 and #599, I could guess that hparams is required to load a saved model (which I think should be mentioned somewhere in the docs, by the way). And from the examples, it seems hparams may be an argparse.Namespace. Unfortunately, it was not so easy to understand the concept.
What is hparams exactly? What kind of information should it, can it, or must it not include to work properly? Is it recommended to use a hyperparameter argument parser? Say, if I'm not into hyperparameter search at the moment and just want to be able to load the checkpointed model, what are the requirements on hparams?
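For illustration, a minimal sketch of the common pattern (the attribute names here are made up): hparams is usually just an argparse.Namespace whose fields are read in __init__, and Lightning stores it alongside the checkpoint so the model can be rebuilt on load.
from argparse import Namespace
import torch
import pytorch_lightning as pl

# no hyperparameter search needed; a plain Namespace with the required fields is enough
hparams = Namespace(in_dim=28 * 28, out_dim=10, learning_rate=0.02)

class LitModel(pl.LightningModule):
    def __init__(self, hparams):
        super(LitModel, self).__init__()
        self.hparams = hparams                                  # saved with the checkpoint
        self.l1 = torch.nn.Linear(hparams.in_dim, hparams.out_dim)
    # training_step, configure_optimizers, dataloaders, etc. omitted from this sketch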
How to save checkpoint when turning off the validation? | [
"feature",
"help wanted"
] | Help
In some cases, like fine-tuning BERT, we don't need the validation step but still have to save model checkpoints. I can't make that work. If anyone knows how, please tell me. Thank you!
ValueError: bad value(s) in fds_to_keep when using DDP | [
"bug",
"help wanted"
] | 🐛 Bug
I see the following error when I try to use ddp for distributed training. I see #538 is a similar issue, but I couldn't figure out how to apply the solution to my problem.
The following is the error log.
2019-12-27 22:17:31,780 - __main__ - INFO - Loaded dictionary
2019-12-27 22:17:31,816 - model.dictionary - INFO - Error parsing line 703
2019-12-27 22:17:31,832 - gensim.models.utils_any2vec - INFO - loading projection weights from data_in/GoogleNews-vectors-negative300.bin.gz
2019-12-27 22:19:41,750 - gensim.models.utils_any2vec - INFO - loaded (3000000, 300) matrix from data_in/GoogleNews-vectors-negative300.bin.gz
2019-12-27 22:19:45,292 - __main__ - INFO - Loaded and imported w2v weights (50161) words
2019-12-27 22:19:45,696 - root - INFO - gpu available: True, used: True
2019-12-27 22:19:45,696 - root - INFO - VISIBLE GPUS: 0,1
Traceback (most recent call last):
File "snrm_trainer.py", line 128, in <module>
main(hparams)
File "snrm_trainer.py", line 94, in main
trainer.fit(model)
File "/home/kyoungrok/anaconda3/envs/sigir/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 343, in fit
mp.spawn(self.ddp_train, nprocs=self.num_gpus, args=(model,))
File "/home/kyoungrok/anaconda3/envs/sigir/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 162, in spawn
process.start()
File "/home/kyoungrok/anaconda3/envs/sigir/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/kyoungrok/anaconda3/envs/sigir/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/kyoungrok/anaconda3/envs/sigir/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/kyoungrok/anaconda3/envs/sigir/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/kyoungrok/anaconda3/envs/sigir/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 59, in _launch
cmd, self._fds)
File "/home/kyoungrok/anaconda3/envs/sigir/lib/python3.7/multiprocessing/util.py", line 432, in spawnv_passfds
False, False, None)
ValueError: bad value(s) in fds_to_keep
To Reproduce
Code sample
I attach two source files:
snrm.py (model)
snrm_trainer.py (trainer)
Expected behavior
The model runs on multi-gpus
Environment
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.1.243
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.13.4
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
Nvidia driver version: 430.64
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] pytorch-lightning==0.5.3.2
[pip] torch==1.3.1
[pip] torchvision==0.4.2
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.15 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] pytorch 1.3.1 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] pytorch-lightning 0.5.3.2 pypi_0 pypi
[conda] torchvision 0.4.2 py37_cu101 pytorch
Additional context
None |
Extract dataset definition out of the LightningModule | [
"feature",
"help wanted"
] | 🚀 Feature
Extract dataset definition out of the LightningModule
Motivation
Separation of data from the model.
Pitch
The data loaders could easily be passed to the fit method directly instead of having to define them inside the LightningModule; this avoids having a single class that contains data, data pipeline params, the model, and the model hyperparameters.
The basic example could look like this:
from pytorch_lightning import Trainer
train_dataloader = DataLoader(...)
val_dataloader = DataLoader(...)
model = CoolSystem()
# most basic trainer, uses good defaults
trainer = Trainer()
trainer.fit(model, train_dataloader=train_dataloader, val_dataloader=val_dataloader)
It's much closer to how you usually structure your code in scikit-learn or Keras.
Alternatives
They could also be passed to the Trainer's constructor.
`max_nb_epochs` not effective in master branch | [
"bug"
] | Hi,
I think this might be an easy one to fix. I'm using the bleeding edge version from master with pytorch 1.3.
Trainer(max_nb_epochs=...) does not limit the max epochs in training at the moment. See doc
I think this is due to the following code and default setting:
https://github.com/williamFalcon/pytorch-lightning/blob/c32f2b91164abe309dd8456a16c01e9c35748bff/pytorch_lightning/trainer/trainer.py#L160-L164
But max_epochs has default set to 1000:
https://github.com/williamFalcon/pytorch-lightning/blob/c32f2b91164abe309dd8456a16c01e9c35748bff/pytorch_lightning/trainer/trainer.py#L75
Therefore, by default, if you only set max_nb_epochs=, the following is never called, so max_epochs will continue to be 1000.
https://github.com/williamFalcon/pytorch-lightning/blob/c32f2b91164abe309dd8456a16c01e9c35748bff/pytorch_lightning/trainer/trainer.py#L163-L164
This logic probably needs some enhancement, or we should just let one of these max-epoch parameters drive training. Open to suggestions and happy to get a PR going based on what's decided.
Thanks. |
How to log train and validation loss in the same figure ? | [
"question"
] | ❓ Questions and Help
What is your question?
How can we log train and validation loss in the same plot and preview them in tensorboard?
Having both in the same plot is useful to identify overfitting visually.
Code
def training_step(self, batch, batch_idx):
images, labels = batch
output = self.forward(images)
loss = F.nll_loss(output, labels)
return {"loss": loss, 'log': {'train_loss': loss}}
def validation_step(self, batch, batch_idx):
images, labels = batch
output = self.forward(images)
loss = F.nll_loss(output, labels)
return {"loss": loss}
def validation_end(self, outputs):
avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
return {'val_loss': avg_loss, 'log': {'val_loss': avg_loss}}
What have you tried?
Using Loss/train and Loss/valid contains them in the same section, but still in separate plot.
def training_step(self, batch, batch_idx):
images, labels = batch
output = self.forward(images)
loss = F.nll_loss(output, labels)
return {"loss": loss, 'log': {'Loss/train': loss}}
def validation_step(self, batch, batch_idx):
images, labels = batch
output = self.forward(images)
loss = F.nll_loss(output, labels)
return {"loss": loss}
def validation_end(self, outputs):
avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
return {'val_loss': avg_loss, 'log': {'Loss/valid': avg_loss}}
I tried to use self.logger.experiment.add_scalars(), but I was confused about how to access the train loss in the validation loop.
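For reference, one possible sketch of that approach: cache the most recent train loss on the module and write both values under a single main tag from validation_end. The cached attribute name and the use of self.global_step are assumptions for illustration, not part of the original code.
def training_step(self, batch, batch_idx):
    images, labels = batch
    loss = F.nll_loss(self.forward(images), labels)
    self.last_train_loss = loss.detach()          # cache so the validation loop can read it
    return {"loss": loss}

def validation_end(self, outputs):
    avg_loss = torch.stack([x["loss"] for x in outputs]).mean()
    # add_scalars writes both curves into one "loss" plot in TensorBoard
    self.logger.experiment.add_scalars(
        "loss",
        {"train": self.last_train_loss, "valid": avg_loss},
        self.global_step,
    )
    return {"val_loss": avg_loss}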
What's your environment?
OS: MAC OSX
Packaging: conda
Version: 0.5.3.2 |
Removing particular defaults from progress bar | [
"question"
] | Related to issue #629, which proposes to remove some default entries from the progress bar. Is there an existing way to remove entries from the tqdm_dict once the trainer is initialized?
Correctly using `ReduceLROnPlateau` | [
"question"
] | Hello all, I'm trying to use the learning rate scheduler ReduceLROnPlateau, though I'm not sure I'm implementing this correctly. The scheduler doesn't seem to be working properly.
I am essentially using the same code as the Colab MNIST tutorial (I ran this in colab)
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
class MNISTModel(pl.LightningModule):
def __init__(self):
super(MNISTModel, self).__init__()
# not the best model...
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
# REQUIRED
x, y = batch
y_hat = self.forward(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_nb):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
return {'val_loss': F.cross_entropy(y_hat, y)}
def validation_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
print(avg_loss)
return {'val_loss': avg_loss, 'log': tensorboard_logs}
def test_step(self, batch, batch_nb):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
return {'test_loss': F.cross_entropy(y_hat, y)}
def test_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
logs = {'test_loss': avg_loss}
return {'avg_test_loss': avg_loss, 'log': logs, 'progress_bar': logs}
def configure_optimizers(self):
# REQUIRED
# can return multiple optimizers and learning_rate schedulers
# (LBFGS it is automatically supported, no need for closure function)
optimizer = torch.optim.Adam(self.parameters(), lr=0.02)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer,
mode='min',
factor=0.2,
patience=2,
min_lr=1e-6,
verbose=True)
return [optimizer], [scheduler]
@pl.data_loader
def train_dataloader(self):
# REQUIRED
return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=512)
@pl.data_loader
def val_dataloader(self):
# OPTIONAL
return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=512)
@pl.data_loader
def test_dataloader(self):
# OPTIONAL
return DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), batch_size=512)
The only differences besides the configure_optimizers() method are the batch size (512 vs. 32 originally) and printing (though I don't see how either of these would affect the scheduler behavior).
Question: does the scheduler here automatically receive val_loss computed in the validation_end() step? I've tried running the above code using both avg_val_loss and val_loss as keys in the dictionary returned by validation_end(), and it does not seem to make a difference.
Despite the average validation loss seeming to decrease monotonically, the lr scheduler keeps on reducing the learning rate.
tensor(2.3137, device='cuda:0')
tensor(0.6615, device='cuda:0')
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer_io.py:210: UserWarning: Did not find hyperparameters at model.hparams. Saving checkpoint without hyperparameters
"Did not find hyperparameters at model.hparams. Saving checkpoint without"
tensor(0.6424, device='cuda:0')
tensor(0.6337, device='cuda:0')
tensor(0.6291, device='cuda:0')
Epoch 3: reducing learning rate of group 0 to 4.0000e-03.
tensor(0.6162, device='cuda:0')
tensor(0.6147, device='cuda:0')
tensor(0.6137, device='cuda:0')
Epoch 6: reducing learning rate of group 0 to 8.0000e-04.
tensor(0.6121, device='cuda:0')
tensor(0.6118, device='cuda:0')
tensor(0.6115, device='cuda:0')
Epoch 9: reducing learning rate of group 0 to 1.6000e-04.
tensor(0.6114, device='cuda:0')
tensor(0.6113, device='cuda:0')
tensor(0.6113, device='cuda:0')
Epoch 12: reducing learning rate of group 0 to 3.2000e-05.
Could anyone kindly advise as to how to correctly implement the scheduler? Thank you.
Edit: I forgot to attach the code for the Trainer portion, but it is also essentially the same as in the example.
mnist_model = MNISTModel()
# most basic trainer, uses good defaults (1 gpu)
trainer = pl.Trainer(gpus=1, show_progress_bar=False)
trainer.fit(mnist_model) |
Requires_grad automatically set to false during training | [] | Hi,
First of all I'd like to thank you for the find package, it removes a lot of overhead.
I'm currently working on a project using the pytorch lightning module. However during the training procedure, the flag 'self.requires_grad' is set to false even though the 'self.unfreeze()' method has been used before in the forward pass.
I've been trying to find the location in the source code where this behavior is dictated, but cannot locate this.
Greatly appreciative in any hint or potential work around.
BR |
Mismatch of displayed 'epoch' | [
"bug",
"good first issue"
] | 🐛 Bug
The displayed epoch number differs between the progress bar and the checkpoint indicator. I worry this mismatch could confuse users.
progress bar: The number of epochs starts from 1.
checkpoint indicator: The number of epochs starts from 0.
metrics.csv also starts from 0.
I think that changing the checkpoint and metrics.csv numbering would cause serious problems.
So the progress bar should be changed, in my opinion.
What do you think about it?
Epoch 32: 100%|██████████| 331/331 [00:05<00:00, 88.73batch/s, batch_idx=17, loss=1.148, train_batch_loss=1.02, v_num=0, val_loss=1.05]
INFO:root:
Epoch 00031: val_loss reached 1.04545 (best 1.04545), saving model to /dummy/version_0/checkpoints/_ckpt_epoch_31.ckpt as top 1
{'loss': 1.022357702255249, 'train_batch_loss': 1.022357702255249, 'val_loss': 1.0454469919204712}
Epoch 33: 5%|▌ | 18/331 [00:00<00:05, 61.06batch/s, batch_idx=17, loss=1.073, train_batch_loss=1.31, v_num=0, val_loss=1.05]
Environment
PyTorch Version : 1.3.1
OS : macOS 10.14.6
How you installed PyTorch : pip install git+https://github.com/williamFalcon/pytorch-lightning.git@master --upgrade
Python version : 3.7.3
use CPU |
Multi-GPU on AWS p2.8xlarge instance (ddp2 and ddp) | [
"bug"
] | As information, AWS p2.8xlarge has 8 K80s all on the same node.
I have tried my model gpus=1 and distributed_backend=None on an AWS p2.xlarge instance (1 K80) and it works.
When I try gpus=8 and distributed_backend='ddp2' on an AWS p2.8xlarge, I get the following error:
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 335, in fit
task = int(os.environ['SLURM_LOCALID'])
File "/usr/lib/python3.6/os.py", line 669, in __getitem__
raise KeyError(key) from None
KeyError: 'SLURM_LOCALID' |
Multi-GPU (dp) on AWS p2.8xlarge instance | [
"bug"
] | I don't think the AWS instance is the problem, since the model dies on the first forward pass. Here is the error:
Traceback (most recent call last):
File "/Siamese_BERT_blogpost/train.py", line 107, in <module>
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 348, in fit
self.dp_train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/dp_mixin.py", line 104, in dp_train
self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 455, in run_pretrain_routine
self.evaluate(model, self.get_val_dataloaders(), self.nb_sanity_val_steps, self.testing)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop_mixin.py", line 50, in evaluate
test)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop_mixin.py", line 174, in evaluation_forward
output = model(*args)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/pt_overrides/override_data_parallel.py", line 65, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/pt_overrides/override_data_parallel.py", line 69, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/pt_overrides/override_data_parallel.py", line 199, in parallel_apply
raise output
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/pt_overrides/override_data_parallel.py", line 165, in _worker
output = module.validation_step(*input, **kwargs)
File "/Siamese_BERT_blogpost/wrapper.py", line 42, in validation_step
out = self.forward(batch)
File "/Siamese_BERT_blogpost/wrapper.py", line 35, in forward
return self.siamese(batch)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/Siamese_BERT_blogpost/models.py", line 46, in forward
premise = self.language_model(premise)[0]
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 735, in forward
embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 186, in forward
inputs_embeds = self.word_embeddings(input_ids)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorIndex.cu:400
wandb: Waiting for W&B process to finish, PID 162
wandb: Program failed with code 1. Press ctrl-c to abort syncing.
wandb: Syncing 6 W&B file(s) and 0 media file(s)
wandb: 0.01MB of 0.01MB uploaded
wandb: Synced vague-bush-37: https://app.wandb.ai/laksh/Siamese_SNLI/runs/7uabsn91 |
Checkpoint saving isn't atomic | [
"bug"
] | 🐛 Bug
Saving checkpoints happens non-atomically. In some cases, this causes an incomplete write of a checkpoint (for example when receiving a SIGKILL during writing), causing any subsequent loading to fail with
RuntimeError: unexpected EOF, expected 8 more bytes. The file might be corrupted.
To Reproduce
This is difficult to reproduce, since it relies on timing outside of code. For me, it happens with fast-running models that run at ~1-4 seconds per epoch.
Expected behavior
Checkpointing should be resistant to such issues, and instead simply continue as-is. |
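One possible way to make the write resistant to interruption (a sketch, not the current Lightning implementation): write the checkpoint to a temporary file in the same directory and then atomically replace the target, so readers only ever see a complete file.
import os
import tempfile
import torch

def atomic_save(checkpoint, filepath):
    # write to a temp file next to the target, then atomically swap it into place
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(filepath) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            torch.save(checkpoint, f)
        os.replace(tmp_path, filepath)  # atomic rename on POSIX filesystems
    except BaseException:
        os.remove(tmp_path)
        raise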
How to make test_end() return metrics | [
"question"
] | I have searched through the docs / Google as well as looked through the source code.
It seems like test_end() returns nothing (it has no return in the function). I was wondering if I was missing something really obvious.
I would simply like to return the metrics of the test end. |
drop Pandas dependency | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
replace the few Pandas usages with the native csv package
Motivation
#687 (comment) |
can i run multiple ddp jobs on single node | [
"bug",
"feature"
] | I am running on a 14 core, 7 gpu machine. Ubuntu 18.04.2LTS, python 3.6.8, lightning 0.5.3.2, no virtual environment, no SLURM.
I have moved a tried and true model to ddp. It works great in all scenarios, including ddp as a single invocation.
I cannot successfully start a second one, unfortunately. I get the following failure:
File "/home/seth/.local/lib/python3.6/site-packages/torch/distributed/rendezvous.py", line 143, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon)
RuntimeError: Address already in use
This second job is running on different gpus and has a different log path.
After a brief investigation, it seems to me that the second job is trying to use the same master address as the first. I did not see any way to alter this with pytorch-lightning, though it seems straightforward in pytorch.
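As a possible workaround (a sketch only — it assumes Lightning's ddp initialization respects the MASTER_PORT environment variable used by torch.distributed's env:// rendezvous, which should be checked for your version), each job could be given its own port before the Trainer is created:
import os

# give every concurrently running job a distinct rendezvous port,
# e.g. job 1 uses 29500 and job 2 uses 29501
os.environ['MASTER_PORT'] = '29501'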
My questions are:
Can I run multiple simultaneous ddp jobs on the same node with different GPUs?
If so, how?
Thanks |
Documentation has disappeared | [] | The documentation on github.io has been unavailable for a few days.
Has it been moved somewhere else? I really need it to continue my work.
Thanks a lot |
Fitting with log_gpu_memory=True fails in python3.6. | [
"bug"
] | Bug
Fitting with log_gpu_memory=True in the Trainer fails on Python 3.6.
To Reproduce
Use python3.6 version
Create any trainer with log_gpu_memory=True option.
Then fit it.
See error:
/a/pytorch-lightning/pytorch_lightning/core/memory.py in get_gpu_memory_map()
237 encoding='utf-8',
238 capture_output=True,
--> 239 check=True)
240 # Convert lines into a dictionary
241 gpu_memory = [int(x) for x in result.stdout.strip().split(os.linesep)]
/usr/lib/python3.6/subprocess.py in run(input, timeout, check, *popenargs, **kwargs)
421 kwargs['stdin'] = PIPE
422
--> 423 with Popen(*popenargs, **kwargs) as process:
424 try:
425 stdout, stderr = process.communicate(input, timeout=timeout)
TypeError: __init__() got an unexpected keyword argument 'capture_output'
Code sample
trainer = Trainer(
log_gpu_memory=True,
# ....
)
trainer.fit()
Expected behavior
For the same code there is no errors for python3.7
Environment
pytorch: 1.2.0
Ubuntu 18.04
pytorch-lightning:
- installed to pip environment
- commit 7a1df80
- python setup.py develop
- version 0.6.0
python: 3.6.8
cuda: 10.0, V10.0.130
cudnn: 7.6.2
GPU: RTX 2080 TI
Additional context
In the setup.py
python_requires='>=3.6',
But capture_output is used when calling subprocess.run, and it is only valid on Python 3.7+.
See also this workaround to maintain Python 3.6 support:
https://stackoverflow.com/questions/53209127/ |
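For reference, a sketch of that workaround applied to the call above — on Python 3.6, capture_output=True can be replaced by explicit stdout/stderr pipes (the nvidia-smi arguments shown here are assumed to match the original get_gpu_memory_map call):
import os
import subprocess

result = subprocess.run(
    ['nvidia-smi', '--query-gpu=memory.used', '--format=csv,nounits,noheader'],
    encoding='utf-8',
    stdout=subprocess.PIPE,   # Python 3.6 equivalent of capture_output=True
    stderr=subprocess.PIPE,
    check=True)
gpu_memory = [int(x) for x in result.stdout.strip().split(os.linesep)]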
LR Schedulers shouldn't get `epoch` argument in `step` function | [
"bug",
"good first issue"
] | 🐛 Bug
PyTorch LR schedulers should no longer receive any arguments in the step function, see here and here.
It looks like the calls in PyTorch Lightning are not in line with the new interface, see here.
This results in unexpected LR changes. Removing the epoch argument from step call solves the issue for me.
Environment
PyTorch 1.4
PyTorchLightning 0.5.3.2 |
logging basic configuration level: INFO vs. WARNING (usability with W&B) | [
"good first issue",
"logger"
] | Thanks for the amazing package! I am having a great time using it.
Issue
Recently, I have been playing around with the weights and biases (W&B) logging functionality and I noticed that I was getting a lot of logging messages in my jupyter notebook while training my model (every epoch I got new messages).
When I looked into logging.__init__.py, the logging basic configuration was set with:
logging.basicConfig(level=logging.INFO)
However, that means a lot of INFO messages are being printed from W&B's use of the logger (e.g. when the RunManager modifies files).
Potential Solution
To disable this, I switched the logging basic configuration in logging.__init__.py to:
logging.basicConfig(level=logging.WARNING)
Would this be a useful addition in general to the package or should I just keep this change local? |
W&B: Allow for passing experiment into the WandbLogger (and logging semantics) | [
"logger"
] | Currently, the WandbLogger will automatically create a new internal experiment (run) whenever you create a new WandbLogger.
Issue
If I instantiate a wandb experiment outside of the logger, then I will have two experiments when I train my model since there is no way to set the internal experiment of the WandbLogger to my current external experiment.
Potential Solution
Allow for passing an experiment into the WandbLogger:
class WandbLogger(LightningLoggerBase):
def __init__(self, name=None, save_dir=None, offline=False, id=None, anonymous=False,
version=None, project=None, tags=None, experiment=None):
.
.
.
self._experiment = experiment
Then I can do this:
experiment = wandb.init(.......)
wandb_logger = WandbLogger(experiment=experiment)
I made this change locally, however, I wasn't sure if this was something you also wanted to implement as well. It works for me.
Another small note
In the WandbLogger.log_metrics function, I would change:
self.experiment.history.add(metrics) --> self.experiment.log(metrics) |
Trainer is setting parameters with requires_grad=False to requires_grad=True (bug) | [
"bug"
] | 🐛 Bug
When training a model that has some parameters where requires_grad=False, the Trainer is actually setting requires_grad=True for these parameters and changing them. The bug appears to originate in the TrainerTrainLoopMixin code.
To Reproduce
Steps to reproduce the behavior:
Create a model with some parameters which have requires_grad=False
Fit the model using the Trainer
Check to see if the parameters which were set with requires_grad=False have changed.
Code sample (to reproduce the bug)
import torch
import numpy as np
import os
from torch.nn import functional as F
from torch.utils.data import DataLoader
import pytorch_lightning as pl
# Make toy dataset
features = torch.from_numpy(np.asarray([[0],[0],[0],[1],[1],[1]])).float()
targets = torch.from_numpy(np.asarray([0,0,0,1,1,1]))
train = torch.utils.data.TensorDataset(features, targets)
train_loader = torch.utils.data.DataLoader(train, batch_size=2, shuffle=True)
#Define lightning model
class CoolSystem(pl.LightningModule):
def __init__(self):
super(CoolSystem, self).__init__()
self.l1 = torch.nn.Linear(1, 10)
self.l2 = torch.nn.Linear(10, 2)
for param in self.l2.parameters():
param.requires_grad = False
self.loss_func = torch.nn.CrossEntropyLoss()
def forward(self, x):
return self.l2(torch.relu(self.l1(x)))
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.loss_func(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
@pl.data_loader
def train_dataloader(self):
return train_loader
# Run the lightning model (check parameter before and after training)
coolsystem = CoolSystem()
print(list(coolsystem.parameters())[3])
trainer = pl.Trainer(min_epochs=10, max_epochs=10, logger=False)
trainer.fit(coolsystem)
list(coolsystem.parameters())[3]
Expected behavior
Expected
The parameters with requires_grad == False should not change during training.
Actual
The printed out parameter before training has requires_grad == False, but after training with the Trainer, the parameter now has requires_grad == True and has changed values.
Environment
PyTorch Version 1.3.1
Linux
PyTorch installed with pip
Python 3.7.1
pytorch-lightning 0.6.0
Where I think the issue is!
Here is the code snippet from training_loop.py that I think is causing the issue:
class TrainerTrainLoopMixin(ABC):
.
.
.
def run_training_batch(self, batch, batch_idx):
.
.
.
# call training_step once per optimizer
for opt_idx, optimizer in enumerate(self.optimizers):
# make sure only the gradients of the current optimizer's paramaters are calculated
# in the training step to prevent dangling gradients in multiple-optimizer setup.
for param in self.get_model().parameters():
param.requires_grad = False
for group in optimizer.param_groups:
for param in group['params']:
param.requires_grad = True
As you can see, the params in the model are all set to param.requires_grad = True during each training batch! |
convert examples to doctests | [
"feature",
"help wanted",
"good first issue",
"docs"
] | 🚀 Feature
Converting examples to doctests...
Motivation
The examples are currently static, so there is no guarantee that they are still valid...
Advantages of converting to doctests would be:
increase reproducibility
each example can run as a stand-alone
make testing on smaller units
smaller test units simplify debugging
Additional context
https://docs.python.org/3/library/doctest.html
https://thomas-cokelaer.info/tutorials/sphinx/doctest.html
https://stackoverflow.com/questions/361675/python-doctest-vs-unittest |
Tqdm progress bar error | [
"bug",
"duplicate",
"help wanted"
When running one epoch with a train and val dataloader, as soon as validation starts the progress bar creates a new line for each iteration. I have this bug in PyCharm as well as Kaggle kernels. Below is a typical example: 80% runs smoothly, then as soon as validation starts, a new line is printed for each tqdm iteration.
Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.
Defaults for this optimization level are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Epoch 1: 80%|████████ | 1216/1520 [09:01<02:08, 2.36batch/s, batch_nb=1215, gpu=0, loss=0.649, train_loss=0.616, v_nb=0]
Validating: 0%| | 0/304 [00:00<?, ?batch/s]
Epoch 1: 80%|████████ | 1217/1520 [09:01<01:44, 2.90batch/s, batch_nb=1215, gpu=0, loss=0.649, train_loss=0.616, v_nb=0]
Epoch 1: 80%|████████ | 1218/1520 [09:02<01:26, 3.48batch/s, batch_nb=1215, gpu=0, loss=0.649, train_loss=0.616, v_nb=0]
Epoch 1: 80%|████████ | 1219/1520 [09:02<01:14, 4.05batch/s, batch_nb=1215, gpu=0, loss=0.649, train_loss=0.616, v_nb=0]
Epoch 1: 80%|████████ | 1220/1520 [09:02<01:05, 4.58batch/s, batch_nb=1215, gpu=0, loss=0.649, train_loss=0.616, v_nb=0]
Epoch 1: 80%|████████ | 1221/1520 [09:02<00:59, 5.04batch/s, batch_nb=1215, gpu=0, loss=0.649, train_loss=0.616, v_nb=0]
Epoch 1: 80%|████████ | 1222/1520 [09:02<00:54, 5.42batch/s, batch_nb=1215, gpu=0, loss=0.649, train_loss=0.616, v_nb=0]
Epoch 1: 80%|████████ | 1223/1520 [09:02<00:51, 5.72batch/s, batch_nb=1215, gpu=0, loss=0.649, train_loss=0.616, v_nb=0]
Environment
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
Nvidia driver version: 418.87.00
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] pytorch-lightning==0.5.3.2
[pip] pytorchcv==0.0.50
[pip] torch==1.2.0
[pip] torchaudio==0.3.0
[pip] torched==0.11
[pip] torchfile==0.1.0
[pip] torchvision==0.4.0
[conda] pytorch-lightning 0.5.3.2 pypi_0 pypi
[conda] pytorchcv 0.0.50 pypi_0 pypi
[conda] torch 1.2.0 pypi_0 pypi
[conda] torchaudio 0.3.0 pypi_0 pypi
[conda] torched 0.11 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchvision 0.4.0 pypi_0 pypi
Additional context |
Modify hook on_batch_start() API to support other iterable dataloaders | [
"feature",
"discussion"
] | Motivation
By default, LightningModule.train_dataloader() and friends return a PyTorch DataLoader, but this could easily be extended to other iterable objects by converting to a vanilla batch in on_batch_start().
Pitch
def on_batch_start(self, batch):
# do something
# before
return response
# after
return batch, response # batch is (Tensor, Tensor)
# example of using torchtext for NMT
@pl.data_loader
def train_dataloader(self):
return torchtext.data.BucketIterator(dataset, batch_size)
def on_batch_start(self, batch):
x, y = batch.src, batch.trg
return (x, y), None
Alternatives
Normally I can do this in forward(), but when using multiple gpus, it's better to let Lightning handle batch transfer.
Additional context
Implementation in my fork https://github.com/chutaklee/pytorch-lightning/commit/b81c0ba1fcc01530f8e66a66f8e522c10826f797
Since it is not backward compatible (for those who've used on_batch_start() in their models), and I haven't added tests yet, I'm not so sure, though it seems like a nice feature.
Better way to set retain_graph | [
"question"
] | Is there a better way to set retain_graph, especially when using two optimizers?
I have read the issue #356 and the corresponding fix, to set it by overriding the backward function.
However, this becomes messy, especially when more than one optimizer is used, as the function doesn't receive optimizer_idx as an argument.
There are workarounds like naming the optimizer object and using that to set/reset the retain graph, but passing optimizer_idx would be great.
This shouldn't be difficult as the opt_idx variable is available inside the loop in which the backward function is called
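For context, a sketch of the kind of workaround currently needed — overriding backward and recovering the optimizer index by object identity, since the hook only receives the optimizer itself. It assumes the module's self.trainer reference is set by Lightning and skips the AMP branch.
def backward(self, use_amp, loss, optimizer):
    # infer which optimizer this call belongs to; optimizer_idx is not passed in
    opt_idx = self.trainer.optimizers.index(optimizer)
    # keep the graph alive only for the first optimizer's backward pass
    loss.backward(retain_graph=(opt_idx == 0))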
Version [0.6.0] |
ModelCheckpoint Filepath Doesn't Use Logger Save Dir | [
"bug",
"help wanted",
"good first issue"
] | 🐛 Bug
Not sure if this is intended, but the model checkpoint isn't using the same directory as the logger, even if the logger exists. I would have expected this line here to be self.logger.save_dir instead of self.default_save_path.
Thank you,
-Collin |
Doc broken / Link broken | [
"good first issue"
] | The readme link "Lightning module" is broken: https://pytorch-lightning.readthedocs.io/en/latest/LightningModule/RequiredTrainerInterface/
The source link in documentation goes nowhere: https://pytorch-lightning.readthedocs.io/en/latest/logging.html click on source goes to https://github.com/PyTorchLightning/PyTorch-Lightning/blob/pytorch_lightning/logging.py
It is really confusing and worrying for a newcomer like me who expects a simple, easy-to-plug-in API that is well documented and maintained.
Incompatible torch and torchvision version numbers in requirements file | [
"bug"
] | 🐛 Bug
requirements.txt lists torch and torchvision requirements as follows
torch>=1.1
torchvision>=0.4.0
which leads to pip installing torchvision 0.4.2 and torch 1.4.0. These are incompatible with each other. Running pip install -r requirements.txt throws the following error
ERROR: torchvision 0.4.2 has requirement torch==1.3.1, but you'll have torch 1.4.0 which is incompatible.
To Reproduce
Steps to reproduce the behavior:
Go to lightning root directory
Run 'pip install -r requirements.txt'
See error
Expected behavior
Pip should be able to install all requirements successfully.
Environment
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.1.243
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce RTX 2080 SUPER
Nvidia driver version: 430.64
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] pytorch-lightning==0.6.0
[pip] torch==1.3.1
[pip] torchfile==0.1.0
[pip] torchvision==0.4.2
[conda] pytorch-lightning 0.6.0 dev_0
[conda] torch 1.3.1 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchvision 0.4.2 pypi_0 pypi |
TensorBoardLogger creates another tfevents file. | [
"bug",
"help wanted"
] | 🐛 Bug
TensorBoardLogger creates another tfevents file when fit() is running.
It seems that no metrics are logged in the redundant file, but it will be shown in TensorBoard as a run.
I don't do anything about loggers in my LightningModules.
Expected file structure:
|
|- save_dir
| |- name
| |- version_0
| |- events.out.tfevents.1579833025.ip-xxx-xxx-xxx-xxx.17584.0
| |- meta_tags.csv
|- train.py
Observed file structure:
|
|- save_dir
| |- name
| |- version_0
| |- 1579833032
| |- events.out.tfevents.1579833032.ip-xxx-xxx-xxx-xxx.17584.1
| |- events.out.tfevents.1579833025.ip-xxx-xxx-xxx-xxx.17584.0
| |- meta_tags.csv
|- train.py
To Reproduce / Code sample
Basic training step of PyTorch Lightning:
# define a logger
logger = TensorBoardLogger(
save_dir='runs',
name=args.model
)
# define callbacks
ckpt_path = Path(logger.experiment.log_dir) / 'ckpts'
checkpoint_callback = ModelCheckpoint(filepath=ckpt_path)
# instantiate trainer
trainer = Trainer(
logger=logger,
checkpoint_callback=checkpoint_callback,
gpus=args.gpus
)
# define a model
model = CoolModel(args)
# start training!
trainer.fit(model)
Environment
PyTorch Lightning Version (e.g., 1.0): 0.6.0
PyTorch Version (e.g., 1.0): 1.3.1
OS (e.g., Linux): Ubuntu 16.04 LTS
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.7.4
CUDA/cuDNN version: 10.1 |
How to save model weights to mlflow tracking server while using MLFLogger to save metrics. | [
"question"
] | ❓ Questions and Help
Does anybody know a simple way to accomplish this?
Before asking:
search the issues.
search the docs.
What is your question?
I'm searching for a way to save model weights to the MLflow tracking server while using MLFlowLogger to save metrics.
My problem is that I cannot find a way to save the model weights to the same run that was created inside MLFlowLogger.
When I run mlflow.pytorch.log_model() after trainer.fit(), the metrics and the model weights are saved to different runs.
Code
mlf_logger = MLFlowLogger(
experiment_name="WatchNetExperiment",
)
trainer = pl.Trainer(gpus=hparams.gpus, distributed_backend='dp', min_nb_epochs=hparams.min_epochs, max_nb_epochs=hparams.max_epochs, logger=mlf_logger)
trainer.fit(model)
mlflow.pytorch.log_model(model.model, "my_model")
What have you tried?
To work around the problem, I stopped using MLFlowLogger and modified my training code to save metrics in training_step() and validation_end().
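Another possible direction (a sketch only — it assumes the MLFlowLogger exposes the run id it created, e.g. as mlf_logger.run_id, which should be verified for your version): reattach to that run before logging the model so the weights land in the same run as the metrics.
import mlflow
import mlflow.pytorch

with mlflow.start_run(run_id=mlf_logger.run_id):
    # logs the weights as an artifact of the run that already holds the metrics
    mlflow.pytorch.log_model(model.model, "my_model")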
What's your environment?
OS: Linux
Packaging conda
Version 0.5.3.2 |
trainer.test() fails when using ddp | [
"bug"
] | Calling trainer.test() (with no model argument) following ddp training fails, giving this error:
File "/home/seth/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/model_hooks.py", line 19, in is_overriden
is_overriden = getattr(model, f_name).__code__ is not getattr(super_object, f_name).__code__
AttributeError: 'NoneType' object has no attribute 'test_step'
The model variable is set to None. Note that trainer.test(model) works fine. I had originally reported some other issues with this, but those turned out to be self-inflicted.
Working on Ubuntu 18.04.2LTS, python 3.6.8, both pytorch 1.3, and 1.4, and both pytorch-lightning .5.3.2 and .6. 14 core, 7 gpus. No virtual environment.
Simply call trainer.test() following ddp training. |
Fit Error: raised training_step() takes 3 positional arguments but 4 were given when I use truncated_bptt_steps | [
"bug"
] | 🐛 Bug
Everything works fine when truncated_bptt_steps is None, but when I set it to 5, the error mentioned in the title is thrown (see below for the traceback):
Traceback (most recent call last):
File "Trainer.py", line 103, in <module>
trainer.fit(gen)
File "xxxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 695, in fit
self.single_gpu_train(model)
File "xxxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 441, in single_gpu_train
self.run_pretrain_routine(model)
File "xxxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 829, in run_pretrain_routine
self.train()
File "xxxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 332, in train
self.run_training_epoch()
File "xxxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 386, in run_training_epoch
output = self.run_training_batch(batch, batch_idx)
File "xxxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 506, in run_training_batch
loss = optimizer_closure()
File "xxxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 475, in optimizer_closure
split_batch, batch_idx, opt_idx, self.hiddens)
File "xxxx/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 593, in training_forward
output = self.model.training_step(*args)
TypeError: training_step() takes 3 positional arguments but 4 were given
Code sample
My DataSet class
class DataSet(data.Dataset):
def __init__(self, data, token_id2idx):
'''
Data and label has to have the same time step size for truncated batch split to work
'label is the data offset by 1 (because dont need to predict the first CLS token)
:param data: input ids (B, max_seq)
:type data: Numpy array
:param token_id2idx: mapping to convert bert token to idx (which goes from 0 to V-1)
'''
self.data = data[:,:-1].copy() # (B, max_seq -1)
self.labels = data[:, 1:].copy() # (B, max_seq -1)
assert self.data.shape == self.labels.shape, "data shape and label shape is different, this will cause error during TBPTT"
for r in range(self.labels.shape[0]):
for t in range(self.labels.shape[1]):
self.labels[r, t] = token_id2idx[self.labels[r, t]]
self.max_seq = self.data.shape[1]
def __len__(self):
'Denotes the total number of samples'
return self.data.shape[0]
def __getitem__(self, index):
'Generates one sample of data'
# Select sample
X = self.data[index] # (max_seq)
y = self.labels[index] # (max_seq - 1)
# y = torch.ones(1).fill_(self.label)
return X, y
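For context: when truncated_bptt_steps is set, the trainer passes the hidden state as an extra positional argument, so training_step needs to accept it and return the updated hiddens. A rough sketch (the recurrent module self.rnn, the loss self.loss_func, and the exact return key are illustrative assumptions, not the code above):
def training_step(self, batch, batch_idx, hiddens):
    x, y = batch
    y_hat, hiddens = self.rnn(x, hiddens)        # carry state across the truncated splits
    loss = self.loss_func(y_hat.view(-1, y_hat.size(-1)), y.view(-1))
    return {'loss': loss, 'hiddens': hiddens}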
Environment
PyTorch version: 1.3.1+cu92
Is debug build: No
CUDA used to build PyTorch: 9.2.148
OS: Debian GNU/Linux 9.6 (stretch)
GCC version: (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: Tesla K80
Nvidia driver version: 410.72
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] numpydoc==0.9.1
[pip] pytorch-ignite==0.1.0
[pip] pytorch-lightning==0.6.0
[pip] pytorch-transformers==1.1.0
[pip] torch==1.3.1+cu92
[pip] torchfile==0.1.0
[pip] torchvision==0.4.2+cu92
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.0.2 py37h7b6447c_0
[conda] mkl_fft 1.0.12 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch-ignite 0.1.0 pypi_0 pypi
[conda] pytorch-lightning 0.6.0 pypi_0 pypi
[conda] pytorch-transformers 1.1.0 pypi_0 pypi
[conda] torch 1.3.1+cu92 pypi_0 pypi
[conda] torchfile 0.1.0 pypi_0 pypi
[conda] torchvision 0.4.2+cu92 pypi_0 pypi |
Allow a flag into trainer to save checkpoints at partial epochs | [
"feature",
"help wanted"
] | 🚀 Feature
Allow a flag into trainer to save checkpoints at partial epochs
Motivation
When you have a large dataset that takes tens of hours per epoch, it's important to have checkpoints along the way. Right now we only get a checkpoint on_epoch_end.
Workaround
Also interested to see if there is a good workaround. I guess I can set a reference to trainer inside my model and manually call on_epoch_end, but that feels like a hack and won't work without changing lightning code because of
if self.epochs_since_last_check >= self.period:
inside on_epoch_end.
Other ideas?
I'm also interested in better ways to solve this. I also thought about taking samples out of the dataset to make 'mini epochs', but this breaks the epoch naming convention (e.g. an epoch implies all data has been run through once).
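One interim possibility, sketched under assumptions (the attributes save_every_n and ckpt_dir plus the running counter are made up, and this saves plain weights rather than a full Lightning checkpoint): use the on_batch_end hook to snapshot mid-epoch.
import torch

def on_batch_end(self):
    self.batches_seen = getattr(self, 'batches_seen', 0) + 1
    if self.batches_seen % self.save_every_n == 0:
        # weights-only snapshot; resuming optimizer/trainer state would need more work
        torch.save(self.state_dict(), f"{self.ckpt_dir}/step_{self.batches_seen}.pt")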
Thank you! |
Why isn't EarlyStopping called after every validation_end unlike ModelCheckpoint? | [
"question"
] | ❓ Questions and Help
Hi,
I was wondering why the EarlyStopping Callback is not called after every validation_end unlike the ModelCheckpoint Callback? I use val_check_interval < 1 and my model overfits quite fast on the train data, so it would be handy to stop even during an epoch.
What have you tried?
I tried to replace on_epoch_end with on_batch_end of the EarlyStopping Callback, but that did not work. My next step would be to integrate the functionality of EarlyStopping into ModelCheckpoint, but maybe there is an easier solution.
What's your environment?
OS: Linux 16.04 LTS
Packaging: pip
Version: 0.6.0 |
Test metrics not logging to Comet after training | [
"bug"
] | 🐛 Bug
When testing a model with Trainer.test, metrics are not logged to Comet if the model was previously trained using Trainer.fit. During training, metrics are logged correctly.
Code sample
comet_logger = CometLogger()
trainer = Trainer(logger=comet_logger)
model = get_model()
trainer.fit(model) # Metrics are logged to Comet
trainer.test(model) # No metrics are logged to Comet
Expected behavior
Test metrics should also be logged in to Comet.
Environment
- PyTorch version: 1.3.0
Is debug build: No
CUDA used to build PyTorch: 10.1.243
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.168
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080 Ti
GPU 4: GeForce GTX 1080 Ti
GPU 5: GeForce GTX 1080 Ti
GPU 6: GeForce GTX 1080 Ti
GPU 7: GeForce GTX 1080 Ti
Nvidia driver version: 418.67
cuDNN version: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.1
Versions of relevant libraries:
[pip3] numpy==1.16.4
[pip3] pytorch-lightning==0.6.0
[pip3] torch==1.3.0
[pip3] torchvision==0.4.1
[conda] Could not collect
Additional context
I believe the issue is caused because at the end of the training routine, logger.finalize("success") is called. This in turn calls experiment.end() inside the logger and the Experiment object doesn't expect to send more information after this.
An alternative is to create another Trainer object with another logger, but this means that the metrics will be logged into a different Comet experiment from the original. This issue could be solved using the ExistingExperiment object from the Comet SDK, but the solution seems a little hacky and the CometLogger currently doesn't support this kind of experiment.
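For reference, a rough sketch of the ExistingExperiment idea (the attribute used to fetch the experiment key from the logger, comet_logger.experiment.id, is an assumption and may differ between versions; the metrics dict is a placeholder):
from comet_ml import ExistingExperiment

# reattach to the run that CometLogger created during training
experiment = ExistingExperiment(api_key="<api-key>",
                                previous_experiment=comet_logger.experiment.id)
experiment.log_metrics({"test_loss": 0.0})  # placeholder: the dict produced by test_end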
run_evaluation() does not work. | [
"bug",
"good first issue"
] | 🐛 Bug
run_evaluation() does not work. It is suspected that the model is not loaded into the trainer at any point.
To Reproduce
Steps to reproduce the behavior:
Run ImageNet example with --evaluate argument. python imagenet_example.py --evaluate
Expected behavior
The model is supposed to be loaded from the checkpoint directory and evaluation on the validation set carried out.
wandb logging does not work - log method called on the wrong object (?) | [
"bug",
"good first issue"
] | 🐛 Bug
When using the WandbLogger and not providing an experiment, I get an AttributeError: 'Run' object has no attribute 'log' in line 84 of WandbLogger. Instead of on experiment, I think log should be called on wandb.
Code sample
wandb_logger = WandbLogger(name="name", save_dir="/path/to/folder", offline=False, project="project")
trainer = Trainer(logger=wandb_logger)
trainer.fit(model)
Environment
PyTorch Version (e.g., 1.0): 1.2
OS (e.g., Linux): Ubuntu 18.04
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.7
CUDA/cuDNN version:9.0
GPU models and configuration:
Any other relevant information: |
Improve tqdm progress bar | [
"feature",
"help wanted",
"good first issue"
] | At the moment the progress bar is initialized with the arg leave=False:
pytorch-lightning/pytorch_lightning/trainer/trainer.py, line 861 (commit deffbab):
eval_results = self.evaluate(model, self.get_val_dataloaders(),
Sometimes, it's nice to be able to see the previous progress bar to look at the evolution of the loss and metrics.
Would it be possible to add an argument to the trainer to override the default tqdm parameters?
Also, another point: tqdm progress bars can be nested (https://github.com/tqdm/tqdm#nested-progress-bars). Could we imagine having a global progress bar and then a nested progress bar for each epoch loop? |
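For illustration, a minimal nested-bars sketch with plain tqdm, independent of Lightning's internal bar (the loop sizes are placeholders):
from tqdm.auto import tqdm

num_epochs, num_batches = 3, 100          # placeholder sizes
for epoch in tqdm(range(num_epochs), desc="epochs", position=0, leave=True):
    for batch_idx in tqdm(range(num_batches), desc=f"epoch {epoch}", position=1, leave=False):
        pass                              # training step would go here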
logging module collision | [
"bug"
] | Logging module collides with the Python one:
import pytorch_lightning as pl
dir(pl.logging)
It gives you the python logging module attributes instead of the pytorch_lightning ones.
This is probably due to this
pytorch-lightning/pytorch_lightning/__init__.py, line 31 (commit deffbab):
import logging
Maybe you should rename the logging module to something else such as logger? |
Lightning DDP seems to be breaking autograd | [
"bug"
] | 🐛 Bug
I am attempting to make a lightning script for the example code in https://github.com/lucidrains/reformer-pytorch
I now have 2 scripts: one is a Lightning implementation and one is a regular Apex DDP implementation. The regular Apex DDP implementation trains completely fine, but Lightning throws an autograd error complaining that parameters are unused (see the error message section).
I enabled find_unused_parameters in the non-Lightning example and do not get any errors, and disabling find_unused_parameters in the Lightning example causes an autograd crash.
To Reproduce
Steps to reproduce the behavior:
I personally use the nvidia pytorch container because it has most things installed
git clone https://github.com/lucidrains/reformer-pytorch.git
pip install revtorch
I've attached a zip file containing two training scripts, unzip them to the examples folder of reformer-pytorch
python -m torch.distributed.launch --nproc_per_node=1 example/train_apex_ddp.py -b 4 and note there is training and no crashing
python example/train_lightning.py --distributed to see the crash
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 331, in ddp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 829, in run_pretrain_routine
self.train()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 332, in train
self.run_training_epoch()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 386, in run_training_epoch
output = self.run_training_batch(batch, batch_idx)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 506, in run_training_batch
loss = optimizer_closure()
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 489, in optimizer_closure
model_ref.backward(self.use_amp, closure_loss, optimizer)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/hooks.py", line 154, in backward
loss.backward()
File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
File "/opt/conda/lib/python3.6/site-packages/torch/autograd/function.py", line 77, in apply
return self._forward_cls.backward(self, *args)
File "/opt/conda/lib/python3.6/site-packages/revtorch/revtorch.py", line 161, in backward
y, dy = ctx.reversible_blocks[i].backward_pass(y, dy)
File "/opt/conda/lib/python3.6/site-packages/revtorch/revtorch.py", line 89, in backward_pass
gy1.backward(dy2)
File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Expected to mark a variable ready only once. This error is caused by use of a module parameter outside the `forward` function. The return value of the `forward` function is inspected by the distributed data parallel wrapper to figure out if any of the module's parameters went unused. If this is the case, it knows they won't receive gradients in a backward pass. If any of those parameters are then used outside `forward`, this error condition is triggered. You can disable unused parameter detection by passing the keyword argument `find_unused_parameters=False` to `torch.nn.parallel.DistributedDataParallel`.
Code sample
training_scripts.zip
I'm sorry I was unable to make the code samples any smaller
Environment
Collecting environment information...
PyTorch version: 1.4.0a0+a5b4d78
Is debug build: No
CUDA used to build PyTorch: 10.2
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.14.0
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla V100-SXM3-32GB
GPU 1: Tesla V100-SXM3-32GB
GPU 2: Tesla V100-SXM3-32GB
GPU 3: Tesla V100-SXM3-32GB
Nvidia driver version: 418.67
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip] msgpack-numpy==0.4.3.2
[pip] numpy==1.17.4
[pip] pytorch-lightning==0.6.0
[pip] pytorch-transformers==1.1.0
[pip] revtorch==0.2.3
[pip] torch==1.4.0a0+a5b4d78
[pip] torchtext==0.4.0
[pip] torchvision==0.4.2
[conda] magma-cuda101 2.5.1 1 local
[conda] mkl 2019.1 144
[conda] mkl-include 2019.1 144
[conda] nomkl 3.0 0
[conda] pytorch-lightning 0.6.0 pypi_0 pypi
[conda] pytorch-transformers 1.1.0 pypi_0 pypi
[conda] revtorch 0.2.3 pypi_0 pypi
[conda] torch 1.4.0a0+a5b4d78 pypi_0 pypi
[conda] torchtext 0.4.0 pypi_0 pypi
[conda] torchvision 0.4.2 pypi_0 pypi
Additional context
RevTorch has custom autograd, but I have tested RevTorch and am able to train a small RevTorch example using Lightning, so I don't think the issue is RevTorch.
Model Parallel | [
"question"
Hi - I'm interested in starting to use your project, and I'm wondering if you support model parallelism, so that I can train models that do not fit on a single card?
If this is already supported, could you please point me to an example?
Or do you have any ideas on how to set this up manually, if it is not explicitly supported as part of the framework yet?
Thanks very much! |
Checkpoint naming broken | [
"bug",
"help wanted"
] | 🐛 Bug
I would like to be able to save checkpoints with custom names that include the value of my val_loss, i.e. path/epoch_2-val_loss_0.2.hdf5. The documentation for ModelCheckpoint suggests that this is possible using the filepath argument. This does not appear to be the case, since the source code calls os.makedirs(filepath). I have also tried using the prefix argument, but it doesn't seem to be possible to pass it a format string containing a variable.
Expected behavior
The documentation claims that filepath='{epoch:02d}-{val_loss:.2f}.hdf5' will save a checkpoint at /path/epoch_2-val_loss_0.2.hdf5. Instead, it saves a checkpoint at {epoch:02d}-{val_loss:.2f}.hdf5/_ckpt_epoch_1.ckpt.
The issues in the documentation are two-fold:
-- It suggests that filepath can contain the directory + name of the checkpoint, when it seems like it should only contain the directory specifying where to save.
-- It suggests that it can 'contain named formatting options to be auto-filled', which also doesn't seem to be the case.
Is it possible to achieve this functionality with the prefix argument instead? If so, how? |
Expand badges for tests | [
"feature",
"help wanted"
] | We do a lot of tests already. Let's have a badge for each thing we test.
We can use a 2d matrix.
On the left, the PyTorch versions; on top, the Python versions? |
TypeError: validation_step() takes 3 positional arguments but 4 were given | [
"bug"
] | Hi, I just started using lightning and I've been running into this bug lately.
Here's my code for the validation_step and validation_end:
def validation_step(self, batch, batch_idx):
imgs = batch
(z1, z2) = torch.split(imgs['X'], 1, 1)
ct = imgs['CT']
y_hat = self.discriminator(self(z1, z2).detach())
y = self.discriminator(ct.detach())
print('Validation Step: \n')
print('y_hat = ', y_hat)
print('y_hat.shape = ', y_hat.shape)
print('y = ', y)
print('y.shape = ', y.shape)
return {'val_loss': F.cross_entropy(y_hat, y)}
def validation_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
return {'val_loss': avg_loss,
'progress_bar': {'val_loss': avg_loss}}
Stacktrace:
Traceback (most recent call last):
File "/home/students/lan/Schreibtisch/Code/x2ct_architecture_ff/gan.py", line 333, in <module>
main(hparams)
File "/home/students/lan/Schreibtisch/Code/x2ct_architecture_ff/gan.py", line 308, in main
trainer.fit(model)
File "/work/scratch/lan/conda/envs/pytorch_thesis/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 692, in fit
self.dp_train(model)
File "/work/scratch/lan/conda/envs/pytorch_thesis/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 471, in dp_train
self.run_pretrain_routine(model)
File "/work/scratch/lan/conda/envs/pytorch_thesis/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 812, in run_pretrain_routine
self.evaluate(model, self.get_val_dataloaders(), self.num_sanity_val_steps, self.testing)
File "/work/scratch/lan/conda/envs/pytorch_thesis/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 234, in evaluate
test)
File "/work/scratch/lan/conda/envs/pytorch_thesis/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 349, in evaluation_forward
output = model(*args)
File "/work/scratch/lan/conda/envs/pytorch_thesis/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/work/scratch/lan/conda/envs/pytorch_thesis/lib/python3.7/site-packages/pytorch_lightning/overrides/data_parallel.py", line 62, in forward
return self.module.validation_step(*inputs[0], **kwargs[0])
TypeError: validation_step() takes 3 positional arguments but 4 were given
There has been a similar issue in the past, namely #105, but that bug was resolved in an earlier version and its discussion couldn't help me much.
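One workaround sketch that may apply here: accept an extra positional argument, since the dp wrapper appears to pass one more than the two declared above (the name dataloader_idx is my guess at its meaning, not confirmed).
# Workaround sketch: same body as above, but with room for the extra positional argument
# that the dp wrapper passes (dataloader_idx is an assumed name for it).
def validation_step(self, batch, batch_idx, dataloader_idx=None):
    imgs = batch
    z1, z2 = torch.split(imgs['X'], 1, 1)
    ct = imgs['CT']
    y_hat = self.discriminator(self(z1, z2).detach())
    y = self.discriminator(ct.detach())
    return {'val_loss': F.cross_entropy(y_hat, y)}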
Environment:
Ubuntu: 18.04.3 LTS
Python: 3.7.5
Pytorch: 1.3.1
Pytorch-Lightning: 0.6.0
Anaconda: 4.7.12
Thanks for the help! |
Trainer got an unexpected keyword argument 'save_best_only' | [] | When I tried to run the following code:
import os
import argparse
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
import pytorch_lightning as pl
from pytorch_lightning import Trainer
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--input', dest='input_dim', type=int, default=28*28)
parser.add_argument('--output', dest='output_dim', type=int, default=10)
return parser.parse_args()
class Net(pl.LightningModule):
def __init__(self, input_dim, output_dim):
super(Net, self).__init__()
self.__dict__.update(locals())
self.layers = nn.Sequential(
nn.Linear(input_dim, input_dim),
nn.ReLU(inplace=True),
nn.Linear(input_dim, output_dim),
nn.ReLU(inplace=True)
)
self.criterion = nn.CrossEntropyLoss()
def forward(self, x):
out = self.layers(x.view(x.size(0), -1))
return out
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.criterion(y_hat, y)
return {'loss': loss}
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.criterion(y_hat, y)
return {'val_loss': loss}
def validation_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.criterion(y_hat, y)
return {'test_loss': loss}
def test_end(self, outputs):
avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
tensorboard_logs = {'test_loss': avg_loss}
return {'avg_test_loss': avg_loss, 'log': tensorboard_logs}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
@pl.data_loader
def train_dataloader(self):
dataset = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
sampler = torch.utils.data.distributed.DistributedSampler(dataset)
loader = DataLoader(dataset, sampler=sampler, batch_size=32)
return loader
@pl.data_loader
def val_dataloader(self):
dataset = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
sampler = torch.utils.data.distributed.DistributedSampler(dataset)
loader = DataLoader(dataset, sampler=sampler, batch_size=32)
return loader
@pl.data_loader
def test_dataloader(self):
dataset = MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())
sampler = torch.utils.data.distributed.DistributedSampler(dataset)
loader = DataLoader(dataset, sampler=sampler, batch_size=32)
return loader
if __name__ == '__main__':
args = parse_args()
model = Net(input_dim=args.input_dim, output_dim=args.output_dim)
trainer = Trainer(gpus=[2,3,4,5], distributed_backend='ddp')
trainer.fit(model)
trainer.test()
I got an error:
Traceback (most recent call last):
File "example.py", line 68, in <module>
mode='min'
TypeError: __init__() got an unexpected keyword argument 'save_best_only'
I tried setting checkpoint_callback to False and also passing a custom checkpoint callback, but the problem still exists.
from pytorch_lightning.callbacks import ModelCheckpoint
checkpoint_callback = ModelCheckpoint(
filepath='./cache/weights.ckpt',
verbose=True,
monitor='val_loss',
mode='min'
)
trainer = Trainer(gpus=[2,3,4,5], checkpoint_callback=checkpoint_callback, distributed_backend='ddp')
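A hedged diagnostic that may help narrow this down: check which constructor signature the installed ModelCheckpoint actually exposes, since the traceback suggests the Trainer's default checkpoint callback is built with save_best_only while the installed callback class no longer accepts it, which would point to mismatched installs rather than a usage error.
import inspect
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

# If save_best_only is missing from this signature while the Trainer still passes it,
# the Trainer and the callback come from different installs; a clean reinstall of a
# single pytorch-lightning version should make them consistent again.
print(pl.__version__)
print(inspect.signature(ModelCheckpoint.__init__))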
Here are versions of my packages:
torch 1.3.1
torchvision 0.4.2
scikit-learn 0.20.2
scipy 1.2.1
numpy 1.16.4
pytorch-ignite 0.4.0
pytorch-lightning 0.6.0
Thank you! |
Ability to specify step for logged metrics | [
"feature",
"help wanted"
] | 🚀 Feature
Add option to specify step for logged metrics
Motivation
After calculating some metric on the n-th epoch, I'd like to put the corresponding mark (n) on the x-axis. As far as I can see in the source code, there's no obvious way to do this; instead, something like n * num_batches is shown, as in the attached figure where the loss is calculated after each epoch.
Pitch
I'd love to see 0, 1, 2, ... ticks on the x-axis.
My proposal is to allow the user to add a step key to the log dict; if this key is present, the corresponding value would be used on the x-axis. I can make a PR if you agree with this idea.
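From the user side, the proposed usage would look roughly like this (a sketch of the proposal, not current behaviour):
import torch

# Sketch of the proposed API: attach an explicit 'step' to the logged metrics so the
# logger uses it as the x-axis value instead of the global batch count.
def validation_end(self, outputs):
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    return {
        'val_loss': avg_loss,
        'log': {'val_loss': avg_loss, 'step': self.current_epoch},  # 'step' is the proposed key
    }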
Alternatives
It would be interesting to know how this could be achieved in another way. |
Best way to use mixup in lightning? | [
"question"
Just wondering what the best way to implement mixup in lightning is, possibly in the dataset?
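One option that keeps the Dataset untouched is to mix inside training_step; a minimal sketch (the alpha value of 0.4 and the cross-entropy loss are assumptions about the setup, not requirements):
import torch
import torch.nn.functional as F

# Minimal mixup sketch inside training_step (rather than in the Dataset), so batches are
# mixed on the device they already live on.
def training_step(self, batch, batch_idx):
    x, y = batch
    lam = torch.distributions.Beta(0.4, 0.4).sample().to(x.device)
    perm = torch.randperm(x.size(0), device=x.device)
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_hat = self.forward(x_mixed)
    # mix the two losses instead of mixing the hard labels
    loss = lam * F.cross_entropy(y_hat, y) + (1 - lam) * F.cross_entropy(y_hat, y[perm])
    return {'loss': loss}
Doing it in the Dataset also works, but then the mixing happens per sample on the CPU rather than per batch on the device. |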
Release Pytorch Lightning as a conda package | [
"feature",
"help wanted"
] | 🚀 Feature
Please make the pytorch-lightning package available from the conda package manager. This would probably be done through conda-forge: conda install pytorch-lightning -c conda-forge
Motivation
The default way of installing Pytorch is via their conda channel, so (probably) most users of Lightning already use the conda package manager.
Conda packages provide automatic updates (conda update --all) and better package dependency management, which could make the user's life a lot easier.
Pitch
I'd like to install pytorch-lightning and its dependencies via this command:
conda install pytorch-lightning -c conda-forge
Ideally, this wouldn't force reinstall the pytorch package from its channel to conda-forge.
Alternatives
My personal way of installing pytorch-lightning is running:
pip install --no-deps pytorch-lightning test-tube
And then conda install the missing dependencies that show up on pip check.
On version 0.5.3.x this was mandatory, since the required pytorch-lightning packages would mess up numpy and other conda packages. |
new profiler has failing tests | [
"bug"
] | @jeremyjordan
Tests fail on OSX
tests/test_profiler.py::test_advanced_profiler FAILED |
Customizable TensorBoard Graphics at Test Run | [
"feature",
"help wanted"
] | 🚀 Feature
Customizable TensorBoard graphics may be helpful, especially for the test run.
Motivation
One may need to add customized graphics like pictures or gradients to TensorBoard. Especially for the test run, adding one scalar is just not enough. Exposing the TensorBoard writer would be helpful (or, even better, ready-to-use functions to show images, plot graphs, etc.).
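For reference, a hedged sketch of what already seems possible when the Trainer is built with a TensorBoardLogger: its experiment attribute is, in the versions I have looked at, the underlying torch.utils.tensorboard.SummaryWriter, so its methods can be called directly from the module.
import torch

# Hedged sketch, assuming a TensorBoardLogger: self.logger.experiment is then the underlying
# SummaryWriter, so add_image / add_figure / add_histogram can be used for test output.
def test_end(self, outputs):
    avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
    writer = self.logger.experiment
    # self.sample_images is a hypothetical (N, C, H, W) tensor kept on the module
    writer.add_image('test/sample_input', self.sample_images[0], self.current_epoch)
    return {'test_loss': avg_loss, 'log': {'test_loss': avg_loss}}
Ready-to-use helpers would of course still be nicer than reaching into the writer by hand. |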
Enable stepwise processing flag for schedulers | [
"feature",
"help wanted"
] | 🚀 Feature
Asking whether it makes sense to add a flag to the Trainer class for calling scheduler.step() after every update (per #640).
Motivation
This makes sense for training NLP models such as BERT/XLNet, or any other model that updates the lr based on the current step (with training defined in terms of steps instead of epochs), as opposed to the current behaviour where scheduler.step() is only called after an epoch ends.
I'm aware that users can override optimizer_step on the model; however, it's quite a common training pattern when training such NLP models.
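For context, the override mentioned above looks roughly like this (the exact optimizer_step signature differs between Lightning versions, and self.scheduler is assumed to have been created and stored by the module itself):
# Current workaround sketch (not the proposed flag): step the scheduler after every update.
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None):
    optimizer.step()
    optimizer.zero_grad()
    self.scheduler.step()  # per-step LR schedule, e.g. linear warmup for BERT-style training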
I think this feature is worthwhile and I will contribute my changes. Let me know if not.
Pitch
Add scheduler.step() call after every step (modify the optimizer step accordingly) |
Using other libraries with pytorch-lightning. | [
"question"
I'm just wondering: would it be possible to use the AllenNLP/Texar-PyTorch models and data processing submodules as part of the PyTorch-Lightning Trainer? Do you think the class structure and the GPU training setup of PyTorch Lightning would be adaptable to AllenNLP modules?
I saw that torchtext data handling approaches can be used with Lightning. If I were to build Seq2Seq models and their data input with AllenNLP, and then use all of PyTorch Lightning's approach for GPU training and everything else, would that port to AllenNLP?
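For concreteness, the kind of wrapping I have in mind looks roughly like this (purely a sketch; external_model stands in for any third-party nn.Module, and nothing AllenNLP- or Texar-specific is assumed):
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

# Sketch of the wrapping pattern: external_model is any third-party nn.Module.
class WrappedModel(pl.LightningModule):
    def __init__(self, external_model: nn.Module):
        super().__init__()
        self.model = external_model

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self.forward(x), y)
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    @pl.data_loader
    def train_dataloader(self):
        # toy data; in practice this would come from the external library's readers/iterators
        x, y = torch.randn(128, 16), torch.randint(0, 4, (128,))
        return DataLoader(TensorDataset(x, y), batch_size=32)

# usage example: pl.Trainer().fit(WrappedModel(nn.Linear(16, 4)))
Nothing in that sketch is library-specific, which is why I'm hoping the AllenNLP/Texar pieces would slot in the same way. |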
EarlyStopping when using an IterableDataset | [
"feature",
"help wanted",
"good first issue"
] | I am trying to use an IterableDataset while also including early stopping. Using version 0.6.1, it looks like the early stopping callback is only checked after an epoch. When using an IterableDataset, I don't see how this is ever called.
I have implemented a quick solution on my local machine after noticing a metric check after the validation loop is run:
pytorch-lightning/pytorch_lightning/trainer/training_loop.py, line 415 (commit 4c6c3d0):
self.early_stop_callback.check_metrics(self.callback_metrics)
My quick fix was to add a call to EarlyStopping.on_epoch_end like so:
# ---------------
# RUN VAL STEP
# ---------------
is_val_check_batch = (batch_idx + 1) % self.val_check_batch == 0
can_check_epoch = (self.current_epoch + 1) % self.check_val_every_n_epoch == 0
should_check_val = (not self.disable_validation and can_check_epoch and
(is_val_check_batch or early_stop_epoch))
# fast_dev_run always forces val checking after train batch
if self.fast_dev_run or should_check_val:
self.run_evaluation(test=self.testing)
if self.enable_early_stop:
self.early_stop_callback.check_metrics(self.callback_metrics)
# CHECK EARLY STOP CALLBACK HERE
early_stop_epoch = self.early_stop_callback.on_epoch_end(self.current_epoch, self.callback_metrics)
Now that early_stop_epoch is set after each validation run, it is possible to stop early when using an IterableDataset.
# end epoch early
# stop when the flag is changed or we've gone past the amount
# requested in the batches
if early_stop_epoch or self.fast_dev_run:
break
If there is a native way to trigger the early stopping callback already, then my question is how? If not, then I could submit a PR.
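For reference, this is roughly the configuration I would expect to work natively; whether the EarlyStopping callback is actually checked in this setup is exactly the question (val_check_interval as an integer batch count avoids relying on epoch boundaries with an IterableDataset):
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

# Expected (but, as described above, apparently not working) configuration: validate every
# N training batches and have early stopping evaluated after each validation run.
trainer = Trainer(
    val_check_interval=1000,  # run validation every 1000 training batches
    early_stop_callback=EarlyStopping(monitor='val_loss', patience=3),
)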
What's your environment?
OS: Ubuntu 18
Packaging: pip
Version: 0.6.1 |
Add deepspeed support | [
"feature",
"help wanted"
] | Let's support this!
https://github.com/microsoft/DeepSpeed |
Enable anchor links in docs | [
"docs"
] | Our docs don't have these links. We need to enable them. |